Unnamed: 0 | text_prompt | code_prompt
---|---|---|
0 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Tweedie Density Estimation
This package follows two papers to estimate the density of a presumed Tweedie distribution
Step1: And to prove this is a typical Tweedie distribution, we'll plot it. Notice that there's a big point of mass at exactly 0, and then some positive values.
Step2: A typical problem would be to try to improve the GLM by choosing an appropriate value for the var_power parameter. That's often difficult, but it can be made easier by using scipy's minimize_scalar function, which will seek the best value of p. To show this works, let's initially set p to some ludicrous value and see if minimize_scalar moves us in the right direction.
Step3: Sure enough, we move in the right direction.
Now, let's run the GLM using the updated value of p.
Step4: Still moving in the right direction. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import print_function
import numpy as np
import scipy as sp
from tweedie import tweedie
import seaborn as sns
import statsmodels.api as sm
# Number of parameters for model
p = 20
# Number of simulated observations
n = 100000
np.random.seed(43)
exog = np.random.rand(n, p - 1)
exog = np.hstack((np.ones((n, 1)), exog))
beta = np.concatenate(([500], np.random.randint(-100, 100, p - 1))) / 100
eta = np.dot(exog, beta)
mu = np.exp(eta)
endog = tweedie(mu=mu, p=1.5, phi=20).rvs(n)
Explanation: Tweedie Density Estimation
This package follows two papers to estimate the density of a presumed Tweedie distribution:
Dunn, Peter K. and Smyth, Gordon K. 2001, Tweedie Family Densities: Methods
of Evaluation
Dunn, Peter K. and Smyth, Gordon K. 2005, Series evaluation of Tweedie
exponential dispersion model densities
Below, I'll demonstrate a relatively simple approach to estimating Tweedie density from a given distribution. First, I'm going to generate some data and try to fit a GLM on it.
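As a quick, illustrative aside (not part of the original notebook): one way to convince yourself that a sample really behaves like a Tweedie draw is its mean-variance relation, Var(Y) = phi * mu ** p, together with the point mass at zero. A minimal sketch, assuming the tweedie object imported above accepts scalar parameters:
# illustrative check of the Tweedie mean-variance relation Var(Y) = phi * mu ** p
check_sample = tweedie(mu=10, p=1.5, phi=20).rvs(200000)
print(check_sample.var(), 20 * 10 ** 1.5)  # the two numbers should be close
print((check_sample == 0).mean())          # and a visible share of exact zeros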
End of explanation
sns.distplot(endog)
sns.distplot(endog[endog > 0])
Explanation: And to prove this is a typical Tweedie distribution, we'll plot it. Notice that there's a big point of mass at exactly 0, and then some positive values.
End of explanation
res = sm.GLM(endog, exog, family=sm.families.Tweedie(link=sm.families.links.log, var_power=1.1)).fit()
print(res.summary())
def loglike_p(p):
return -tweedie(mu=res.mu, p=p, phi=res.scale).logpdf(res._endog).sum()
opt = sp.optimize.minimize_scalar(loglike_p, bounds=(1.05, 1.95), method='bounded')
print(opt)
Explanation: A typical problem would be to try to improve the GLM by choosing an appropriate value for the var_power parameter. That's often difficult, but it can be made easier by using scipy's minimize_scalar function, which will seek the best value of p. To show this works, let's initially set p to some ludicrous value and see if minimize_scalar moves us in the right direction.
End of explanation
res2 = sm.GLM(endog, exog, family=sm.families.Tweedie(link=sm.families.links.log, var_power=opt.x)).fit()
print(res2.summary())
def loglike_p(p):
return -tweedie(mu=res2.mu, p=p, phi=res2.scale).logpdf(res2._endog).sum()
opt2 = sp.optimize.minimize_scalar(loglike_p, bounds=(1.05, 1.95), method='bounded')
print(opt2)
Explanation: Sure enough, we move in the right direction.
Now, let's run the GLM using the updated value of p.
End of explanation
res3 = sm.GLM(endog, exog, family=sm.families.Tweedie(link=sm.families.links.log, var_power=opt2.x)).fit()
print(res3.summary())
def loglike_p(p):
return -tweedie(mu=res3.mu, p=p, phi=res3.scale).logpdf(res3._endog).sum()
opt3 = sp.optimize.minimize_scalar(loglike_p, bounds=(1.05, 1.95), method='bounded')
print(opt3)
Explanation: Still moving in the right direction.
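If you wanted to automate this back-and-forth, a small loop that alternates the GLM fit with the profile-likelihood step until p stops moving is a natural extension. A rough sketch reusing the names defined above (not part of the original notebook):
# illustrative iteration: refit the GLM and re-optimise p until it converges
p_cur = opt2.x
for _ in range(10):
    fam = sm.families.Tweedie(link=sm.families.links.log, var_power=p_cur)
    res_it = sm.GLM(endog, exog, family=fam).fit()
    def nll(p):
        return -tweedie(mu=res_it.mu, p=p, phi=res_it.scale).logpdf(res_it._endog).sum()
    p_new = sp.optimize.minimize_scalar(nll, bounds=(1.05, 1.95), method='bounded').x
    if abs(p_new - p_cur) < 1e-4:
        break
    p_cur = p_new
print(p_cur)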
End of explanation
<END_TASK> |
1 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
syncID
Step1: We also need to import the following functions created in previous lessons
Step2: Calculate Hillshade
<img src="http://www.geography.hunter.cuny.edu/~jochen/GTECH361/lectures/lecture11/concepts/Hillshade_files/image001.gif" style="width: 250px;"/>
Step3: Now that we have a function to generate hillshade, we need to read in the NEON LiDAR Digital Terrain Model (DTM) geotif using the raster2array function and then calculate hillshade using the hillshade function. We can then plot both using the plot_band_array function.
Step4: Calculate CHM & Overlay on Top of Hillshade
Step5: Links to Tutorials on Creating Hillshades | <ASSISTANT_TASK:>
Python Code:
from osgeo import gdal
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
Explanation: syncID: 7e916532e9fa49aeba7464350e661778
title: "Create a Hillshade from a Terrain Raster in Python"
description: "Learn how to create a hillshade from a terrain raster in Python."
dateCreated: 2017-06-21
authors: Bridget Hass
contributors: Donal O'Leary
estimatedTime: 0.5 hour
packagesLibraries: numpy, gdal, matplotlib
topics: lidar, raster, remote-sensing
languagesTool: python
dataProduct: DP1.30003, DP3.30015, DP3.30024, DP3.30025
code1: https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/Lidar/lidar-topography/create_hillshade_from_terrain_raster_py/create_hillshade_from_terrain_raster_py.ipynb
tutorialSeries: intro-lidar-py-series
urlTitle: create-hillshade-py
Create a Hillshade from a Terrain Raster in Python
In this tutorial, we will learn how to create a hillshade from a terrain raster in Python.
First, let's import the required packages and set plot display to inline:
End of explanation
# %load ../neon_aop_python_functions/raster2array.py
# raster2array.py reads in the first band of geotif file and returns an array and associated
# metadata dictionary.
# Input: raster_geotif (eg. 'raster.tif')
# Outputs:
# array_rows: # of rows in the array
# array_cols: # of columns in the array
# bands: # of bands
# driver: (for NEON data this is Geotif)
# projection:
# geotransform:
# pixelWidth: width of pixel (for NEON data this = 1)
# pixelHeight: height of pixel (for NEON data this = -1)
# ext_dict: dictionary of raster extent, containing the following information
# {'xMin': xMin_value,'xMax': xMax_value, 'yMin': yMin_value, 'yMax': yMax_value}
# Note: to extract a value from ext_dict, use the syntax: eg. xMin = metadata['ext_dict']['xMin']
# extent: raster extent values (xMin, xMax, yMin, yMax)
# noDataValue: no data value
# scaleFactor: scale factor
# band_stats: dictionary of statistics for band 1:
# {'min': min_value, 'max': max_value, 'mean': mean_value, 'stdev': stdev_value}
# Note: to extract a value from band_stats dictionary, use the syntax:
# eg. array_min = metadata['band_stats']['min']
# Usage: array, metadata = raster2array('raster.tif')
from osgeo import gdal
import numpy as np
def raster2array(geotif_file):
metadata = {}
dataset = gdal.Open(geotif_file)
metadata['array_rows'] = dataset.RasterYSize
metadata['array_cols'] = dataset.RasterXSize
metadata['bands'] = dataset.RasterCount
metadata['driver'] = dataset.GetDriver().LongName
metadata['projection'] = dataset.GetProjection()
metadata['geotransform'] = dataset.GetGeoTransform()
mapinfo = dataset.GetGeoTransform()
metadata['pixelWidth'] = mapinfo[1]
metadata['pixelHeight'] = mapinfo[5]
# metadata['xMin'] = mapinfo[0]
# metadata['yMax'] = mapinfo[3]
# metadata['xMax'] = mapinfo[0] + dataset.RasterXSize/mapinfo[1]
# metadata['yMin'] = mapinfo[3] + dataset.RasterYSize/mapinfo[5]
metadata['ext_dict'] = {}
metadata['ext_dict']['xMin'] = mapinfo[0]
metadata['ext_dict']['xMax'] = mapinfo[0] + dataset.RasterXSize/mapinfo[1]
metadata['ext_dict']['yMin'] = mapinfo[3] + dataset.RasterYSize/mapinfo[5]
metadata['ext_dict']['yMax'] = mapinfo[3]
metadata['extent'] = (metadata['ext_dict']['xMin'],metadata['ext_dict']['xMax'],
metadata['ext_dict']['yMin'],metadata['ext_dict']['yMax'])
if metadata['bands'] == 1:
raster = dataset.GetRasterBand(1)
metadata['noDataValue'] = raster.GetNoDataValue()
metadata['scaleFactor'] = raster.GetScale()
# band statistics
metadata['bandstats'] = {} #make a nested dictionary to store band stats in same
stats = raster.GetStatistics(True,True)
metadata['bandstats']['min'] = round(stats[0],2)
metadata['bandstats']['max'] = round(stats[1],2)
metadata['bandstats']['mean'] = round(stats[2],2)
metadata['bandstats']['stdev'] = round(stats[3],2)
array = dataset.GetRasterBand(1).ReadAsArray(0,0,metadata['array_cols'],metadata['array_rows']).astype(float)
array[array==int(metadata['noDataValue'])]=np.nan
array = array/metadata['scaleFactor']
return array, metadata
elif metadata['bands'] > 1:
print('More than one band ... fix function for case of multiple bands')
# %load ../neon_aop_python_functions/plot_band_array.py
def plot_band_array(band_array,refl_extent,title,cbar_label,colormap='spectral',alpha=1):
plt.imshow(band_array,extent=refl_extent,alpha=alpha);
cbar = plt.colorbar(); plt.set_cmap(colormap);
cbar.set_label(cbar_label,rotation=270,labelpad=20)
plt.title(title); ax = plt.gca();
ax.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation #
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) #rotate x tick labels 90 degree
Explanation: We also need to import the following functions created in previous lessons:
- raster2array.py
- plotbandarray.py
End of explanation
#https://github.com/rveciana/introduccion-python-geoespacial/blob/master/hillshade.py
def hillshade(array,azimuth,angle_altitude):
azimuth = 360.0 - azimuth
x, y = np.gradient(array)
slope = np.pi/2. - np.arctan(np.sqrt(x*x + y*y))
aspect = np.arctan2(-x, y)
azimuthrad = azimuth*np.pi/180.
altituderad = angle_altitude*np.pi/180.
shaded = np.sin(altituderad)*np.sin(slope) + np.cos(altituderad)*np.cos(slope)*np.cos((azimuthrad - np.pi/2.) - aspect)
return 255*(shaded + 1)/2
Explanation: Calculate Hillshade
<img src="http://www.geography.hunter.cuny.edu/~jochen/GTECH361/lectures/lecture11/concepts/Hillshade_files/image001.gif" style="width: 250px;"/>
<center><font size="2">http://www.geography.hunter.cuny.edu/~jochen/GTECH361/lectures/lecture11/concepts/Hillshade.htm</font></center>
Hillshade is used to visualize the hypothetical illumination value (from 0-255) of each pixel on a surface given a specified light source. To calculate hillshade, we need the zenith (90° minus the altitude angle) and azimuth of the illumination source, as well as the slope and aspect of the terrain. The formula for hillshade is:
$$Hillshade = 255.0 \times ((\cos(zenith_I)\cos(slope_T)) + (\sin(zenith_I)\sin(slope_T)\cos(azimuth_I - aspect_T)))$$
Where all angles are in radians.
For more information about how hillshades work, refer to the ESRI ArcGIS Help page: http://webhelp.esri.com/arcgisdesktop/9.2/index.cfm?TopicName=How%20Hillshade%20works.
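One detail worth noting (an aside, not from the original lesson): the hillshade() function defined above takes an altitude angle rather than a zenith angle, and the two are related by zenith = 90° - altitude. A quick toy call, just to confirm the output range:
# illustrative only: hillshade of a simple tilted plane, sun from the southwest
toy_dem = np.outer(np.linspace(0, 10, 50), np.ones(50))
toy_hs = hillshade(toy_dem, azimuth=225, angle_altitude=45)
print(toy_hs.min(), toy_hs.max())  # values fall within 0-255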
End of explanation
# Use raster2array to convert TEAK DTM Geotif to array & plot
#dtm_array, dtm_metadata = raster2array('2013_TEAK_1_326000_4103000_DTM.tif')
dtm_array, dtm_metadata = raster2array('/Users/olearyd/Git/data/2013_TEAK_1_326000_4103000_DTM.tif')
plot_band_array(dtm_array,dtm_metadata['extent'],'TEAK DTM','Elevation, m',colormap='gist_earth')
ax = plt.gca(); plt.grid('on')
# Use hillshade function on a DTM Geotiff
hs_array = hillshade(dtm_array,225,45)
plot_band_array(hs_array,dtm_metadata['extent'],'TEAK Hillshade, Aspect=225°',
'Hillshade',colormap='Greys',alpha=0.8)
ax = plt.gca(); plt.grid('on')
#Overlay transparent hillshade on DTM:
fig = plt.figure(frameon=False)
im1 = plt.imshow(dtm_array,cmap='terrain_r',extent=dtm_metadata['extent']);
cbar = plt.colorbar(); cbar.set_label('Elevation, m',rotation=270,labelpad=20)
im2 = plt.imshow(hs_array,cmap='Greys',alpha=0.8,extent=dtm_metadata['extent']); #plt.colorbar()
ax=plt.gca(); ax.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) #rotate x tick labels 90 degrees
plt.grid('on'); # plt.colorbar();
plt.title('TEAK Hillshade + DTM')
Explanation: Now that we have a function to generate hillshade, we need to read in the NEON LiDAR Digital Terrain Model (DTM) geotif using the raster2array function and then calculate hillshade using the hillshade function. We can then plot both using the plot_band_array function.
End of explanation
#Calculate CHM from DSM & DTM:
dsm_array, dsm_metadata = raster2array('/Users/olearyd/Git/data/2013_TEAK_1_326000_4103000_DSM.tif')
teak_chm = dsm_array - dtm_array;
plot_band_array(teak_chm,dtm_metadata['extent'],'TEAK Canopy Height Model','Canopy Height, m',colormap='Greens')
ax = plt.gca(); plt.grid('on')
#Overlay transparent hillshade on DTM:
fig = plt.figure(frameon=False)
#Terrain
im1 = plt.imshow(dtm_array,cmap='YlOrBr',extent=dtm_metadata['extent']);
cbar1 = plt.colorbar(); cbar1.set_label('Elevation, m',rotation=270,labelpad=20)
#Hillshade
im2 = plt.imshow(hs_array,cmap='Greys',alpha=.5,extent=dtm_metadata['extent']); #plt.colorbar()
#Canopy
im3 = plt.imshow(teak_chm,cmap='Greens',alpha=0.6,extent=dtm_metadata['extent']);
cbar2 = plt.colorbar(); cbar2.set_label('Canopy Height, m',rotation=270,labelpad=20)
ax=plt.gca(); ax.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) #rotate x tick labels 90 degrees
plt.grid('on'); # plt.colorbar();
plt.title('TEAK 2013 \n Terrain, Hillshade, & Canopy Height')
Explanation: Calculate CHM & Overlay on Top of Hillshade
End of explanation
#Importing the TEAK CHM Geotiff resulted in v. sparse data ?
chm_array, chm_metadata = raster2array('/Users/olearyd/Git/data/2013_TEAK_1_326000_4103000_pit_free_CHM.tif')
print('TEAK CHM Array\n:',chm_array)
# print(chm_metadata)
#print metadata in alphabetical order
for item in sorted(chm_metadata):
print(item + ':', chm_metadata[item])
# print(chm_metadata['extent'])
import copy
chm_nonzero_array = copy.copy(chm_array)
chm_nonzero_array[chm_array==0]=np.nan
print('TEAK CHM nonzero array:\n',chm_nonzero_array)
print(np.nanmin(chm_nonzero_array))
print(np.nanmax(chm_nonzero_array))
Explanation: Links to Tutorials on Creating Hillshades:
Python Hillshade:
- http://geoexamples.blogspot.com/2014/03/shaded-relief-images-using-gdal-python.html
- http://pangea.stanford.edu/~samuelj/musings/dems-in-python-pt-3-slope-and-hillshades-.html
ESRI ArcGIS Hillshade Algorithm:
- http://webhelp.esri.com/arcgisdesktop/9.2/index.cfm?TopicName=How%20Hillshade%20works
GitHub Hillshade Functions/Tutorials:
- https://github.com/rveciana/introduccion-python-geoespacial/blob/master/hillshade.py
- https://github.com/clhenrick/gdal_hillshade_tutorial
GDAL Hillshade:
- http://www.gdal.org/gdaldem.html
- http://gis.stackexchange.com/questions/144535/how-to-create-transparent-hillshade/144700
Scratch Code
End of explanation
<END_TASK> |
2 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
GLM
Step1: Local Functions
Step2: Generate Data
This dummy dataset is created to emulate some data created as part of a study into quantified self, and the real data is more complicated than this. Ask Ian Osvald if you'd like to know more https
Step3: View means of the various combinations (poisson mean values)
Step4: Briefly Describe Dataset
Step5: Observe
Step6: 1. Manual method, create design matrices and manually specify model
Create Design Matrices
Step7: Create Model
Step8: Sample Model
Step9: View Diagnostics
Step10: Observe
Step11: Observe
Step12: Sample Model
Step13: View Traces
Step14: Transform coeffs
Step15: Observe
Step16: ... of 9.45 with a range [25%, 75%] of [4.17, 24.18], we see this is pretty close to the overall mean of | <ASSISTANT_TASK:>
Python Code:
## Interactive magics
%matplotlib inline
import sys
import warnings
warnings.filterwarnings('ignore')
import re
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import patsy as pt
from scipy import optimize
# pymc3 libraries
import pymc3 as pm
import theano as thno
import theano.tensor as T
sns.set(style="darkgrid", palette="muted")
pd.set_option('display.mpl_style', 'default')
plt.rcParams['figure.figsize'] = 14, 6
np.random.seed(0)
Explanation: GLM: Poisson Regression
A minimal reproducible example of Poisson regression to predict counts using dummy data.
This Notebook is basically an excuse to demo poisson regression using PyMC3, both manually and using the glm library to demo interactions using the patsy library. We will create some dummy data, poisson distributed according to a linear model, and try to recover the coefficients of that linear model through inference.
For more statistical detail see:
Basic info on Wikipedia
GLMs: Poisson regression, exposure, and overdispersion in Chapter 6.2 of ARM, Gelman & Hill 2006
This worked example from ARM 6.2 by Clay Ford
This very basic model is inspired by a project by Ian Osvald, which is concerned with understanding the various effects of external environmental factors upon the allergic sneezing of a test subject.
Contents
Setup
Local Functions
Generate Data
Poisson Regression
Create Design Matrices
Create Model
Sample Model
View Diagnostics and Outputs
Package Requirements (shown as a conda-env YAML):
```
$> less conda_env_pymc3_examples.yml
name: pymc3_examples
channels:
- defaults
dependencies:
- python=3.5
- jupyter
- ipywidgets
- numpy
- scipy
- matplotlib
- pandas
- pytables
- scikit-learn
- statsmodels
- seaborn
- patsy
- requests
- pip
- pip:
- regex
$> conda env create --file conda_env_pymc3_examples.yml
$> source activate pymc3_examples
$> pip install --process-dependency-links git+https://github.com/pymc-devs/pymc3
```
Setup
End of explanation
def strip_derived_rvs(rvs):
'''Convenience fn: remove PyMC3-generated RVs from a list'''
ret_rvs = []
for rv in rvs:
if not (re.search('_log',rv.name) or re.search('_interval',rv.name)):
ret_rvs.append(rv)
return ret_rvs
def plot_traces_pymc(trcs, varnames=None):
''' Convenience fn: plot traces with overlaid means and values '''
nrows = len(trcs.varnames)
if varnames is not None:
nrows = len(varnames)
ax = pm.traceplot(trcs, varnames=varnames, figsize=(12,nrows*1.4),
lines={k: v['mean'] for k, v in
pm.df_summary(trcs,varnames=varnames).iterrows()})
for i, mn in enumerate(pm.df_summary(trcs, varnames=varnames)['mean']):
ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data',
xytext=(5,10), textcoords='offset points', rotation=90,
va='bottom', fontsize='large', color='#AA0022')
Explanation: Local Functions
End of explanation
# decide poisson theta values
theta_noalcohol_meds = 1 # no alcohol, took an antihist
theta_alcohol_meds = 3 # alcohol, took an antihist
theta_noalcohol_nomeds = 6 # no alcohol, no antihist
theta_alcohol_nomeds = 36 # alcohol, no antihist
# create samples
q = 1000
df = pd.DataFrame({
'nsneeze': np.concatenate((np.random.poisson(theta_noalcohol_meds, q),
np.random.poisson(theta_alcohol_meds, q),
np.random.poisson(theta_noalcohol_nomeds, q),
np.random.poisson(theta_alcohol_nomeds, q))),
'alcohol': np.concatenate((np.repeat(False, q),
np.repeat(True, q),
np.repeat(False, q),
np.repeat(True, q))),
'nomeds': np.concatenate((np.repeat(False, q),
np.repeat(False, q),
np.repeat(True, q),
np.repeat(True, q)))})
df.tail()
Explanation: Generate Data
This dummy dataset is created to emulate some data created as part of a study into quantified self, and the real data is more complicated than this. Ask Ian Osvald if you'd like to know more https://twitter.com/ianozsvald
Assumptions:
The subject sneezes N times per day, recorded as nsneeze (int)
The subject may or may not drink alcohol during that day, recorded as alcohol (boolean)
The subject may or may not take an antihistamine medication during that day, recorded as the negative action nomeds (boolean)
I postulate (probably incorrectly) that sneezing occurs at some baseline rate, which increases if an antihistamine is not taken, and increases further after alcohol is consumed.
The data is aggregated per day to yield a total count of sneezes on that day, with boolean flags for alcohol and antihistamine usage, under the big assumption that these factors have a direct causal relationship with nsneeze.
Create 4000 days of data: daily counts of sneezes which are poisson distributed w.r.t alcohol consumption and antihistamine usage
End of explanation
df.groupby(['alcohol','nomeds']).mean().unstack()
Explanation: View means of the various combinations (poisson mean values)
End of explanation
g = sns.factorplot(x='nsneeze', row='nomeds', col='alcohol', data=df,
kind='count', size=4, aspect=1.5)
Explanation: Briefly Describe Dataset
End of explanation
fml = 'nsneeze ~ alcohol + nomeds + alcohol:nomeds' # full patsy formulation
fml = 'nsneeze ~ alcohol * nomeds' # lazy, alternative patsy formulation
Explanation: Observe:
This looks a lot like poisson-distributed count data (because it is)
With nomeds == False and alcohol == False (top-left, i.e. antihistamines WERE used and alcohol was NOT drunk), the mean of the poisson distribution of sneeze counts is low.
Changing alcohol == True (top-right) increases the sneeze count nsneeze slightly
Changing nomeds == True (lower-left) increases the sneeze count nsneeze further
Changing both alcohol == True and nomeds == True (lower-right) increases the sneeze count nsneeze a lot, increasing both the mean and variance.
Poisson Regression
Our model here is a very simple Poisson regression, allowing for interaction of terms:
$$\theta = \exp(\beta X)$$
$$Y_{sneeze\_count} \sim Poisson(\theta)$$
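To make the log link concrete, here is a tiny numerical sketch of that generative story (toy numbers and names, not the study data):
# toy illustration of theta = exp(X beta) and y ~ Poisson(theta)
X_toy = np.array([[1., 0.], [1., 1.]])       # intercept plus one binary regressor
beta_toy = np.array([0.5, 1.2])
theta_toy = np.exp(X_toy.dot(beta_toy))      # inverse of the log link
y_toy = np.random.poisson(theta_toy)         # counts drawn from the model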
Create linear model for interaction of terms
End of explanation
(mx_en, mx_ex) = pt.dmatrices(fml, df, return_type='dataframe', NA_action='raise')
pd.concat((mx_ex.head(3),mx_ex.tail(3)))
Explanation: 1. Manual method, create design matrices and manually specify model
Create Design Matrices
End of explanation
with pm.Model() as mdl_fish:
# define priors, weakly informative Normal
b0 = pm.Normal('b0_intercept', mu=0, sd=10)
b1 = pm.Normal('b1_alcohol[T.True]', mu=0, sd=10)
b2 = pm.Normal('b2_nomeds[T.True]', mu=0, sd=10)
b3 = pm.Normal('b3_alcohol[T.True]:nomeds[T.True]', mu=0, sd=10)
# define linear model and exp link function
theta = (b0 +
b1 * mx_ex['alcohol[T.True]'] +
b2 * mx_ex['nomeds[T.True]'] +
b3 * mx_ex['alcohol[T.True]:nomeds[T.True]'])
## Define Poisson likelihood
y = pm.Poisson('y', mu=np.exp(theta), observed=mx_en['nsneeze'].values)
Explanation: Create Model
End of explanation
with mdl_fish:
trc_fish = pm.sample(2000, tune=1000, njobs=4)[1000:]
Explanation: Sample Model
End of explanation
rvs_fish = [rv.name for rv in strip_derived_rvs(mdl_fish.unobserved_RVs)]
plot_traces_pymc(trc_fish, varnames=rvs_fish)
Explanation: View Diagnostics
End of explanation
np.exp(pm.df_summary(trc_fish, varnames=rvs_fish)[['mean','hpd_2.5','hpd_97.5']])
Explanation: Observe:
The model converges quickly and traceplots looks pretty well mixed
Transform coeffs and recover theta values
End of explanation
with pm.Model() as mdl_fish_alt:
pm.glm.GLM.from_formula(fml, df, family=pm.glm.families.Poisson())
Explanation: Observe:
The contributions from each feature as a multiplier of the baseline sneezecount appear to be as per the data generation:
exp(b0_intercept): mean=1.02 cr=[0.96, 1.08]
Roughly the baseline count when no alcohol is drunk and antihistamines are taken, as per the generated data:
theta_noalcohol_meds = 1 (as set above)
theta_noalcohol_meds = exp(b0_intercept)
= 1
exp(b1_alcohol): mean=2.88 cr=[2.69, 3.09]
non-zero positive effect of adding alcohol, a ~3x multiplier of
baseline sneeze count, as per the generated data:
theta_alcohol_meds = 3 (as set above)
theta_alcohol_meds = exp(b0_intercept + b1_alcohol)
= exp(b0_intercept) * exp(b1_alcohol)
= 1 * 3 = 3
exp(b2_nomeds[T.True]): mean=5.76 cr=[5.40, 6.17]
larger, non-zero positive effect of adding nomeds, a ~6x multiplier of
baseline sneeze count, as per the generated data:
theta_noalcohol_nomeds = 6 (as set above)
theta_noalcohol_nomeds = exp(b0_intercept + b2_nomeds)
= exp(b0_intercept) * exp(b2_nomeds)
= 1 * 6 = 6
exp(b3_alcohol[T.True]:nomeds[T.True]): mean=2.12 cr=[1.98, 2.30]
small, positive interaction effect of alcohol and meds, a ~2x multiplier of
baseline sneeze count, as per the generated data:
theta_alcohol_nomeds = 36 (as set above)
theta_alcohol_nomeds = exp(b0_intercept + b1_alcohol + b2_nomeds + b3_alcohol:nomeds)
= exp(b0_intercept) * exp(b1_alcohol) * exp(b2_nomeds) * exp(b3_alcohol:nomeds)
= 1 * 3 * 6 * 2 = 36
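As a numerical cross-check of the arithmetic above (illustrative; it assumes pm.df_summary indexes rows by the variable names shown in the traceplot), the recovered multipliers can be combined directly:
# product of the recovered multipliers should land near theta_alcohol_nomeds = 36
b_means = pm.df_summary(trc_fish, varnames=rvs_fish)['mean']
print(np.exp(b_means['b0_intercept'] + b_means['b1_alcohol[T.True]'] +
             b_means['b2_nomeds[T.True]'] + b_means['b3_alcohol[T.True]:nomeds[T.True]']))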
2. Alternative method, using pymc.glm
Create Model
Alternative automatic formulation using pymc.glm
End of explanation
with mdl_fish_alt:
trc_fish_alt = pm.sample(4000, tune=2000)[2000:]
Explanation: Sample Model
End of explanation
rvs_fish_alt = [rv.name for rv in strip_derived_rvs(mdl_fish_alt.unobserved_RVs)]
plot_traces_pymc(trc_fish_alt, varnames=rvs_fish_alt)
Explanation: View Traces
End of explanation
np.exp(pm.df_summary(trc_fish_alt, varnames=rvs_fish_alt)[['mean','hpd_2.5','hpd_97.5']])
Explanation: Transform coeffs
End of explanation
np.percentile(trc_fish_alt['mu'], [25,50,75])
Explanation: Observe:
The traceplots look well mixed
The transformed model coeffs look moreorless the same as those generated by the manual model
Note also that the mu coeff is for the overall mean of the dataset and has an extreme skew, if we look at the median value ...
End of explanation
df['nsneeze'].mean()
Explanation: ... of 9.45 with a range [25%, 75%] of [4.17, 24.18], we see this is pretty close to the overall mean of:
End of explanation
<END_TASK> |
3 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Chapter 3 - Developing Templates
Generating SoftMax distributions from normals could get quite tedious – for any sufficiently complicated shape, the number of normals to be used could be excessive. Let's add a layer of abstraction onto all our work.
Polygon Construction
We can put everything together from all we've talked about (shifting the distribution and generating weights from normals) to a more tangible process
Step1: NOTE
Step2: As expected, our boundaries stayed the same but our probabilities are less spread out. Looking good!
However, we need to address a few assumptions. Most importantly, our interior class will not always be centered at the origin. Let's look at a shifted coordinate frame again, with the center of our polygon at $(-2,3)$
Step3: Great! We've successfully decomposed the space around the Pentagon, so we can tell the automatic security bots where the suspect is without having to pull out a map of the Pentagon and show them directly where on the map our intruder may be. That is, we've replaced communication of specific coordinates with the communication of 'zones' formed by spatial relationships to landmarks.
However, the methodology build up to this point doesn't work for all cases. For instance
Step4: From Polygons to Polytopes | <ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
from cops_and_robots.robo_tools.fusion.softmax import SoftMax, make_regular_2D_poly
poly = make_regular_2D_poly(5, max_r=2, theta=-np.pi/4)
labels = ['Interior','Heliport Facade','Mall Terrace Entrance',
'River Terrace Entrance', 'Concourse Entrance',
'South Parking Entrance']
sm = SoftMax(poly=poly, class_labels=labels, resolution=0.1)
sm.plot(plot_poly=True, plot_normals=False)
Explanation: Chapter 3 - Developing Templates
Generating SoftMax distributions from normals could get quite tedious – for any sufficiently complicated shape, the number of normals to be used could be excessive. Let's add a layer of abstraction onto all our work.
Polygon Construction
We can put everything together from all we've talked about (shifting the distribution and generating weights from normals) to a more tangible process: generating a softmax distribution from a polytope. Let's motivate this with an example first.
Imagine you worked at the Pentagon as an HRI researcher. One day, while pondering the nature of language, you happened to look out your window and spot an intruder. If you called a human security officer, you might say something like, "I see an intruder in front of the Heliport facade." We can use our SoftMax classifier to translate this same sentence for a security bot to understand.
First, we'd need to divide the space in a similar way we did for the Pac-Man problem:
<img src="https://raw.githubusercontent.com/COHRINT/cops_and_robots/master/notebooks/softmax/img/pentagon.png" alt="Pentagon space division" width="500px">
As opposed to our Pac-Man problem, we can't assign weights by inspection. Instead, we'll use our weights-from-normals tactic to generate our weights for each class, and our shifted bias tactic to place those weights appropriately.
Step 1: Define Polytope
We can use a geometry library like Shapely to define custom polytopes (in this case, a pentagon). For a quick way to get ideal pentagon vertex coordinates, you can either calculate them by hand or use some online tools.
Let's try a pentagon with the following coordinates (starting at the corner between the South Parking Entrance and the Heliport Facade):
$$
\begin{align}
P_1 &= (P_{1x}, P_{1y}) = (-1.90,-0.93) \\
P_2 &= (-1.40,1.45) \\
P_3 &= (1.03,1.71) \\
P_4 &= (2.02,-0.51) \\
P_5 &= (0.21,-2.15) \\
\end{align}
$$
Step 2: Get Normals and Offsets
We want to get six classes, so we'd like to specify $\frac{6(6-1)}{2} = 15$ normal vectors in order to use our transformation matrix $A$. But, we only have six unknowns, so we can reduce the size of our $A$ matrix. That is, we can use:
$$
\mathbf{N} = \begin{bmatrix}
\mathbf{n}_{0,1}^T \\
\mathbf{n}_{0,2}^T \\
\mathbf{n}_{0,3}^T \\
\mathbf{n}_{0,4}^T \\
\mathbf{n}_{0,5}^T \\
\mathbf{n}_{1,2}^T \\
\end{bmatrix}
= \begin{bmatrix}
-1 & 1 & 0 & 0 & 0 & 0 \\
-1 & 0 & 1 & 0 & 0 & 0 \\
-1 & 0 & 0 & 1 & 0 & 0 \\
-1 & 0 & 0 & 0 & 1 & 0 \\
-1 & 0 & 0 & 0 & 0 & 1 \\
0 & -1 & 1 & 0 & 0 & 0 \\
\end{bmatrix}
\begin{bmatrix}
\mathbf{w}_{0}^T \\
\mathbf{w}_{1}^T \\
\mathbf{w}_{2}^T \\
\mathbf{w}_{3}^T \\
\mathbf{w}_{4}^T \\
\mathbf{w}_{5}^T \\
\end{bmatrix}
= \mathbf{A}\mathbf{W}
$$
Where $\mathbf{n}_{0,1}$ is the boundary between the interior and the South Parking Entrance, and so on.
Except, we can be smarter about this. We only care about the relative weights, so why not define one class and solve for the weights of all other classes? Since we have one interior class with weights $w_0$, simply define $w_0 = \begin{bmatrix}0 & 0 \end{bmatrix}^T$ and $b_0 = 0$, leaving us with the following five equations and five unknowns:
$$
\mathbf{N} = \begin{bmatrix}
\mathbf{n}_{0,1}^T \\
\mathbf{n}_{0,2}^T \\
\mathbf{n}_{0,3}^T \\
\mathbf{n}_{0,4}^T \\
\mathbf{n}_{0,5}^T \\
\end{bmatrix}
= \begin{bmatrix}
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 \\
\end{bmatrix}
\begin{bmatrix}
\mathbf{w}_{1}^T \\
\mathbf{w}_{2}^T \\
\mathbf{w}_{3}^T \\
\mathbf{w}_{4}^T \\
\mathbf{w}_{5}^T \\
\end{bmatrix}
= \mathbf{A}\mathbf{W}
$$
Does it make sense that the weights we'd use correspond directly to the class boundaries of each class with some zero-weighted interior class? Yes: think of a class boundary as defined by its normal vector. Those normal vectors point exactly in the direction of greatest probability of a given class.
Thus, we have:
$$
\mathbf{n}_{0,i} = \mathbf{w}_{i} \; \forall i \in N
$$
We have the normals, but solving for the class biases will require digging deeper. We need the equation for a normal fixed to the surface of the polytope (not simply its magnitude and direction!).
In $\mathbb{R}^2$, we know that a line is uniquely defined by two points passing through it – a face's bounding vertices, for instance. This can help us find the normal vectors and offsets, giving us the weights and biases.
Recall the specification of our hyperplanes in $\mathbb{R}^2$:
\begin{align}
0 &= (\mathbf{w}_i - \mathbf{w}_j)^T\mathbf{x} + (b_i - b_j) \\
&= (w_{i,x} - w_{j,x})x + (w_{i,y} - w_{j,y})y + (b_i - b_j) \\
&= w_{i,x}x + w_{i,y}y + b_i
\end{align}
Where the last line assumes $j$ is the interior class with weights and a bias of 0.
Since we have two points on this line segment (and any third point from a linear combination of the first two), we can use their $x$ and $y$ values to calculate our weights:
\begin{equation}\label{eq:nullspace}
\begin{bmatrix}
x_1 & y_1 & 1 \\
x_2 & y_2 & 1 \\
x_3 & y_3 & 1 \\
\end{bmatrix}
\begin{bmatrix}
w_{i,x}\\
w_{i,y}\\
b_i
\end{bmatrix}
=\begin{bmatrix}
0\\
0\\
0
\end{bmatrix}
\end{equation}
The non-trivial solution to $\ref{eq:nullspace}$ can be found through various decomposition techniques. We use Singular Value Decomposition.
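For illustration (a minimal sketch, not the library's exact code), the SVD step for a single face might look like this, using two of the pentagon vertices above and their midpoint as the third, linearly dependent point:
# nullspace of the 3x3 vertex matrix via SVD gives [w_x, w_y, b] for one face
v1, v2 = np.array([-1.90, -0.93]), np.array([-1.40, 1.45])
A_face = np.array([[v1[0], v1[1], 1.],
                   [v2[0], v2[1], 1.],
                   [(v1[0] + v2[0]) / 2., (v1[1] + v2[1]) / 2., 1.]])
_, _, Vt = np.linalg.svd(A_face)
w_x, w_y, b = Vt[-1]  # right singular vector of the (near-)zero singular value,
print(w_x, w_y, b)    # defined only up to scale and sign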
In short, given any polygon, we can use its vertices to find the equations of the normals representing the class boundaries between the interior class and an exterior class for each face. Let's try this out and see if it works well.
Note that this part, as well as several future ones, require long swaths of code
to be fully explained. Rather than include the code in this document, you can always find it
on our Github.
End of explanation
steepness = 5
sm = SoftMax(poly=poly, class_labels=labels, resolution=0.1, steepness=5)
sm.plot(plot_poly=True, plot_normals=False)
Explanation: NOTE: 3D Plotting currently borked
Well, that looks like the class boundaries are lining up just fine, but what about the probability distributions themselves? They seem a bit diffuse. If you remember from Chapter 1, we can simply multiply the weights and biases by the same value to raise the steepness of each class. Let's try that:
End of explanation
poly = make_regular_2D_poly(5, max_r=2, theta=-np.pi/4, origin=(-2,3))
sm = SoftMax(poly=poly, class_labels=labels, resolution=0.1, steepness=5)
sm.plot(plot_poly=True, plot_normals=False)
Explanation: As expected, our boundaries stayed the same but our probabilities are less spread out. Looking good!
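A quick aside (not from the original chapter) on why that happens: scaling every class's weights and bias by the same constant scales all logits equally, which leaves the argmax boundaries in place and only sharpens the probabilities. A one-dimensional toy:
# toy illustration: same winner, sharper probabilities as the scale grows
toy_logits = np.array([0.5, 0.2, -0.1])
for scale in (1, 5):
    probs = np.exp(scale * toy_logits) / np.exp(scale * toy_logits).sum()
    print(scale, probs.round(3))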
However, we need to address a few assumptions. Most importantly, our interior class will not always be centered at the origin. Let's look at a shifted coordinate frame again, with the center of our polygon at $(-2,3)$:
\begin{align}
\mathbf{x}' &= \begin{bmatrix}x & y\end{bmatrix}^T + \begin{bmatrix}2 & -3\end{bmatrix}^T = \begin{bmatrix}x + 2 & y - 3\end{bmatrix}^T \\
0 &= (\mathbf{w}_i - \mathbf{w}_j)^T \mathbf{x}' + (b_i - b_j) \\
&= (\mathbf{w}_i - \mathbf{w}_j)^T \mathbf{x} + (\mathbf{w}_i - \mathbf{w}_j)^T \mathbf{b} + (b_i - b_j)\\
&= \mathbf{w}_i^T \mathbf{x} + \mathbf{w}_i^T \mathbf{b} + b_i\\
&= w_{i,x}x + w_{i,y}y + \begin{bmatrix}w_{i,x} & w_{i,y}\end{bmatrix}\begin{bmatrix}2 \\ -3\end{bmatrix} + b_i\\
&= w_{i,x}x + w_{i,y}y + 2 w_{i,x} - 3w_{i,y} + b_i\\
&= w_{i,x}(x + 2) + w_{i,y}(y - 3) + b_i\\
\end{align}
$$
\begin{bmatrix}
x_1 & y_1 & 1 \\
x_2 & y_2 & 1 \\
x_3 & y_3 & 1 \\
\end{bmatrix}
\begin{bmatrix}
w_{i,x}\\
w_{i,y}\\
b_i
\end{bmatrix}
=\begin{bmatrix}
0\\
0\\
0
\end{bmatrix}
$$
End of explanation
from shapely.geometry import Polygon
import numpy as np
polygon = Polygon(((-1.8996,-0.92915),
(-1.395,1.4523),
(1.0256,1.7093),
(2.018,-0.51393),
(0.21001,-2.145),))
pts = polygon.exterior.coords[:]
normals = np.zeros((len(pts) - 1, 2))
biases = np.zeros(len(pts) - 1)
for i in range(len(pts) - 1):
slope = (pts[i + 1][1] - pts[i][1]) / (pts[i + 1][0] - pts[i][0])
normals[i] = np.array((-slope, 1))
biases[i] = pts[i][1] - slope * pts[i][0]
print(normals)
print(biases)
Explanation: Great! We've successfully decomposed the space around the Pentagon, so we can tell the automatic security bots where the suspect is without having to pull out a map of the Pentagon and show them directly where on the map our intruder may be. That is, we've replaced communication of specific coordinates with the communication of 'zones' formed by spatial relationships to landmarks.
However, the methodology built up to this point doesn't work for all cases. For instance: what happens if we want to use a non-symmetric shape to develop a SoftMax model? Chapter 3 will dive into some of these pitfalls and how to get around them.
End of explanation
from collada import Collada
import numpy as np
from scipy.spatial import ConvexHull
from mpl_toolkits.mplot3d import Axes3D
from mpl_toolkits.mplot3d.art3d import Poly3DCollection
import matplotlib.pyplot as plt
lim_ = 1 # Fleming
mesh = Collada('/Users/nick/Downloads/Desk.dae')
# lim_ = 250 # Pentagon
# mesh = Collada('/Users/nick/Downloads/Pentagon.dae')
mpu = mesh.assetInfo.unitmeter # how many real-world meters are in one distance unit
# Grab all model vertices
geometries = mesh.geometries
primitives = [ _.primitives for _ in geometries]
trisets = [ _[0] for _ in primitives]
vertices = [ a.vertex for a in trisets]
vertices = np.asarray(vertices) * mpu
# Find convex hull
verts = np.zeros([1,3])
for vert in vertices:
verts = np.vstack((verts,vert))
verts = np.subtract(verts[1:,:],np.array([0,3.8,0]))
hull = ConvexHull(verts)
# Get all facets (faces) of the convex hull
facets = []
for simplex in hull.simplices:
facet = [(verts[simplex[0],0],verts[simplex[0],1],verts[simplex[0],2]),
(verts[simplex[1],0],verts[simplex[1],1],verts[simplex[1],2]),
(verts[simplex[2],0],verts[simplex[2],1],verts[simplex[2],2]),
]
facets.append(facet)
# Plot vertices and convex hull
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(verts[:,0], verts[:,1],verts[:,2])
ax.add_collection3d(Poly3DCollection(facets, alpha=0.5))
ax.auto_scale_xyz([-lim_, lim_], [-lim_, lim_], [-lim_, lim_])
plt.show()
from IPython.core.display import HTML
# Borrowed style from Probabilistic Programming and Bayesian Methods for Hackers
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
Explanation: From Polygons to Polytopes
End of explanation
<END_TASK> |
4 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
.. _tut_stats_cluster_source_1samp
Step1: Set parameters
Step2: Read epochs for all channels, removing a bad one
Step3: Transform to source space
Step4: Transform to common cortical space
Step5: Compute statistic
Step6: Visualize the clusters | <ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Eric Larson <larson.eric.d@gmail.com>
# License: BSD (3-clause)
import os.path as op
import numpy as np
from numpy.random import randn
from scipy import stats as stats
import mne
from mne import (io, spatial_tris_connectivity, compute_morph_matrix,
grade_to_tris)
from mne.epochs import equalize_epoch_counts
from mne.stats import (spatio_temporal_cluster_1samp_test,
summarize_clusters_stc)
from mne.minimum_norm import apply_inverse, read_inverse_operator
from mne.datasets import sample
print(__doc__)
Explanation: .. _tut_stats_cluster_source_1samp:
Permutation t-test on source data with spatio-temporal clustering
Tests if the evoked response is significantly different between
conditions across subjects (simulated here using one subject's data).
The multiple comparisons problem is addressed with a cluster-level
permutation test across space and time.
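For intuition, here is a toy sketch (not part of the MNE pipeline below) of the core idea behind a one-sample permutation test: randomly flip the sign of each subject's contrast and recompute the statistic to build the null distribution. The clustering across space and time is an additional layer on top of this.
# toy sign-flip permutation for a one-sample t-statistic (illustrative only)
rng = np.random.RandomState(0)
contrasts = rng.randn(7) + 0.8                  # 7 "subjects", one value each
def t_stat(x):
    return x.mean() / (x.std(ddof=1) / np.sqrt(len(x)))
t_obs = t_stat(contrasts)
null = [t_stat(contrasts * rng.choice([-1, 1], size=len(contrasts)))
        for _ in range(1000)]
p_perm = (np.abs(np.array(null)) >= abs(t_obs)).mean()
print(t_obs, p_perm)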
End of explanation
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
subjects_dir = data_path + '/subjects'
tmin = -0.2
tmax = 0.3 # Use a lower tmax to reduce multiple comparisons
# Setup for reading the raw data
raw = io.Raw(raw_fname)
events = mne.read_events(event_fname)
Explanation: Set parameters
End of explanation
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg=True, eog=True, exclude='bads')
event_id = 1 # L auditory
reject = dict(grad=1000e-13, mag=4000e-15, eog=150e-6)
epochs1 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
event_id = 3 # L visual
epochs2 = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject, preload=True)
# Equalize trial counts to eliminate bias (which would otherwise be
# introduced by the abs() performed below)
equalize_epoch_counts([epochs1, epochs2])
Explanation: Read epochs for all channels, removing a bad one
End of explanation
fname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'
snr = 3.0
lambda2 = 1.0 / snr ** 2
method = "dSPM" # use dSPM method (could also be MNE or sLORETA)
inverse_operator = read_inverse_operator(fname_inv)
sample_vertices = [s['vertno'] for s in inverse_operator['src']]
# Let's average and compute inverse, resampling to speed things up
evoked1 = epochs1.average()
evoked1.resample(50)
condition1 = apply_inverse(evoked1, inverse_operator, lambda2, method)
evoked2 = epochs2.average()
evoked2.resample(50)
condition2 = apply_inverse(evoked2, inverse_operator, lambda2, method)
# Let's only deal with t > 0, cropping to reduce multiple comparisons
condition1.crop(0, None)
condition2.crop(0, None)
tmin = condition1.tmin
tstep = condition1.tstep
Explanation: Transform to source space
End of explanation
# Normally you would read in estimates across several subjects and morph
# them to the same cortical space (e.g. fsaverage). For example purposes,
# we will simulate this by just having each "subject" have the same
# response (just noisy in source space) here. Note that for 7 subjects
# with a two-sided statistical test, the minimum significance under a
# permutation test is only p = 1/(2 ** 6) = 0.015, which is large.
n_vertices_sample, n_times = condition1.data.shape
n_subjects = 7
print('Simulating data for %d subjects.' % n_subjects)
# Let's make sure our results replicate, so set the seed.
np.random.seed(0)
X = randn(n_vertices_sample, n_times, n_subjects, 2) * 10
X[:, :, :, 0] += condition1.data[:, :, np.newaxis]
X[:, :, :, 1] += condition2.data[:, :, np.newaxis]
# It's a good idea to spatially smooth the data, and for visualization
# purposes, let's morph these to fsaverage, which is a grade 5 source space
# with vertices 0:10242 for each hemisphere. Usually you'd have to morph
# each subject's data separately (and you might want to use morph_data
# instead), but here since all estimates are on 'sample' we can use one
# morph matrix for all the heavy lifting.
fsave_vertices = [np.arange(10242), np.arange(10242)]
morph_mat = compute_morph_matrix('sample', 'fsaverage', sample_vertices,
fsave_vertices, 20, subjects_dir)
n_vertices_fsave = morph_mat.shape[0]
# We have to change the shape for the dot() to work properly
X = X.reshape(n_vertices_sample, n_times * n_subjects * 2)
print('Morphing data.')
X = morph_mat.dot(X) # morph_mat is a sparse matrix
X = X.reshape(n_vertices_fsave, n_times, n_subjects, 2)
# Finally, we want to compare the overall activity levels in each condition,
# the diff is taken along the last axis (condition). The negative sign makes
# it so condition1 > condition2 shows up as "red blobs" (instead of blue).
X = np.abs(X) # only magnitude
X = X[:, :, :, 0] - X[:, :, :, 1] # make paired contrast
Explanation: Transform to common cortical space
End of explanation
# To use an algorithm optimized for spatio-temporal clustering, we
# just pass the spatial connectivity matrix (instead of spatio-temporal)
print('Computing connectivity.')
connectivity = spatial_tris_connectivity(grade_to_tris(5))
# Note that X needs to be a multi-dimensional array of shape
# samples (subjects) x time x space, so we permute dimensions
X = np.transpose(X, [2, 1, 0])
# Now let's actually do the clustering. This can take a long time...
# Here we set the threshold quite high to reduce computation.
p_threshold = 0.001
t_threshold = -stats.distributions.t.ppf(p_threshold / 2., n_subjects - 1)
print('Clustering.')
T_obs, clusters, cluster_p_values, H0 = clu = \
spatio_temporal_cluster_1samp_test(X, connectivity=connectivity, n_jobs=2,
threshold=t_threshold)
# Now select the clusters that are sig. at p < 0.05 (note that this value
# is multiple-comparisons corrected).
good_cluster_inds = np.where(cluster_p_values < 0.05)[0]
Explanation: Compute statistic
End of explanation
print('Visualizing clusters.')
# Now let's build a convenient representation of each cluster, where each
# cluster becomes a "time point" in the SourceEstimate
stc_all_cluster_vis = summarize_clusters_stc(clu, tstep=tstep,
vertices=fsave_vertices,
subject='fsaverage')
# Let's actually plot the first "time point" in the SourceEstimate, which
# shows all the clusters, weighted by duration
subjects_dir = op.join(data_path, 'subjects')
# blue blobs are for condition A < condition B, red for A > B
brain = stc_all_cluster_vis.plot(hemi='both', subjects_dir=subjects_dir,
time_label='Duration significant (ms)')
brain.set_data_time_index(0)
brain.show_view('lateral')
brain.save_image('clusters.png')
Explanation: Visualize the clusters
End of explanation
<END_TASK> |
5 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
* Visualizing genetic similarity with Lightning + GraphX *
Setup lightning
Step1: Load structure similarity data
Public data from http
Step2: Show the network (unlabeled)
Step3: Show the network colored by degree
Step4: Show the network colored by connected components | <ASSISTANT_TASK:>
Python Code:
%libraryDependencies += "org.viz.lightning" %% "lightning-scala" % "0.1.6"
%update
import org.viz.lightning._
import org.apache.spark.graphx._
val lgn = Lightning(host="https://lightning-spark-summit.herokuapp.com" )
lgn.enableNotebook()
Explanation: * Visualizing genetic similarity with Lightning + GraphX *
Setup lightning
End of explanation
val source = "/Users/mathisonian/projects/spark-summit/notebooks/data/allen-connectivity.txt"
val g = GraphLoader.edgeListFile(sc, source)
Explanation: Load structure similarity data
Public data from http://www.brain-map.org/
End of explanation
val links = g.edges.collect().map(e => Array(e.srcId.toInt, e.dstId.toInt))
lgn.force(links)
Explanation: Show the network (unlabeled)
End of explanation
val links = g.edges.collect().map(e => Array(e.srcId.toInt, e.dstId.toInt))
val degrees = g.degrees.sortBy(_._1).collect().map(x => Math.log(x._2))
lgn.force(links, value=degrees, colormap="Lightning")
Explanation: Show the network colored by degree
End of explanation
val links = g.edges.collect().map(e => Array(e.srcId.toInt, e.dstId.toInt))
val connectedComponents = g.connectedComponents().vertices.sortBy(_._1).map(_._2.toInt).collect()
lgn.force(links, label=connectedComponents)
Explanation: Show the network colored by connected components
End of explanation
<END_TASK> |
6 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatement in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'sandbox-2', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: NOAA-GFDL
Source ID: SANDBOX-2
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:35
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
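For illustration, a completed author cell could look like the following; the name and address are placeholders only, not the actual document authors:
# Placeholder example -- substitute the real author details
DOC.set_author("Forename Surname", "forename.surname@example.org")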
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
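As a sketch of how a free-text STRING property such as this one can be completed (the text below is a generic placeholder, not the documented SANDBOX-2 overview):
# Placeholder free-text entry for illustration only
DOC.set_value("Brief overview text describing the ocean component goes here.")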
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
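For a single-valued ENUM, the entry must be one of the valid choices listed in the cell above; for example (an illustrative selection, not necessarily the documented SANDBOX-2 value):
# Illustrative ENUM selection drawn from the valid choices above
DOC.set_value("OGCM")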
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
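For a Cardinality 1.N ENUM, several valid choices can be recorded; assuming repeated DOC.set_value() calls each add one value (as the cell comments suggest), an illustrative subset would be:
# Illustrative subset only -- not necessarily the model's full prognostic variable list
DOC.set_value("Potential temperature")
DOC.set_value("Salinity")
DOC.set_value("U-velocity")
DOC.set_value("V-velocity")
DOC.set_value("SSH")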
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
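Numeric (FLOAT/INTEGER) properties are set without quotes; the number below is a commonly quoted Boussinesq reference density used purely as a placeholder, not the value documented for this model:
# Placeholder figure for illustration -- substitute the model's actual rhozero
DOC.set_value(1035.0)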
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
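BOOLEAN properties take the Python literals True or False; for example (a hypothetical answer, not the documented configuration):
# Illustrative boolean entry only
DOC.set_value(True)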
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non-oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how treatment of isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuary-specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from that of active tracers? If so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (scheme and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
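As a purely illustrative aside (the value below is a hypothetical choice, not a statement about any particular model), an ENUM property like this one is answered by passing one of the listed valid choices to DOC.set_value, e.g.:
# Hypothetical example only - replace with the choice that matches your model
# DOC.set_value("Non-linear")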
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
<END_TASK> |
7 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Reframing Design Pattern
The Reframing design pattern refers to changing the representation of the output of a machine learning problem. For example, we could take something that is intuitively a regression problem and instead pose it as a classification problem (and vice versa).
Let's look at the natality dataset. Notice that for a given set of inputs, the weight_pounds (the label) can take many different values.
Step3: Comparing categorical label and regression
Since baby weight is a positive real value, this is intuitively a regression problem. However, we can train the model as a multi-class classification by bucketizing the output label. At inference time, the model then predicts a collection of probabilities corresponding to these potential outputs.
Let's do both and see how they compare.
Step4: We'll use the same features for both models. But we need to create a categorical weight label for the classification model.
Step5: Create tf.data datasets for both classification and regression.
Step6: First, train the classification model and examine the validation accuracy.
Step7: Next, we'll train the regression model and examine the validation RMSE.
Step8: The regression model gives a single numeric prediction of baby weight.
Step9: The classification model predicts a probability for each bucket of values.
Step10: Increasing the number of categorical labels
We'll generalize the code above to accommodate N label buckets, instead of just 4.
Step11: Create the feature columns and build the model.
Step12: Make a prediction on the example above.
Step13: Restricting the prediction range
One way to restrict the prediction range is to make the last-but-one activation function sigmoid instead, and add a lambda layer to scale the (0,1) values to the desired range. The drawback is that it will be difficult for the neural network to reach the extreme values. | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import seaborn as sns
from google.cloud import bigquery
import matplotlib.pyplot as plt
%matplotlib inline
bq = bigquery.Client()
query = """
SELECT
weight_pounds,
is_male,
gestation_weeks,
mother_age,
plurality,
mother_race
FROM
`bigquery-public-data.samples.natality`
WHERE
weight_pounds IS NOT NULL
AND is_male = true
AND gestation_weeks = 38
AND mother_age = 28
AND mother_race = 1
AND plurality = 1
  AND RAND() < 0.01
"""
df = bq.query(query).to_dataframe()
df.head()
fig = sns.distplot(df[["weight_pounds"]])
fig.set_title("Distribution of baby weight")
fig.set_xlabel("weight_pounds")
fig.figure.savefig("weight_distrib.png")
#average weight_pounds for this cross section
np.mean(df.weight_pounds)
np.std(df.weight_pounds)
weeks = 36
age = 28
query = """
SELECT
weight_pounds,
is_male,
gestation_weeks,
mother_age,
plurality,
mother_race
FROM
`bigquery-public-data.samples.natality`
WHERE
weight_pounds IS NOT NULL
AND is_male = true
AND gestation_weeks = {}
AND mother_age = {}
AND mother_race = 1
AND plurality = 1
AND RAND() < 0.01
""".format(weeks, age)
df = bq.query(query).to_dataframe()
print('weeks={} age={} mean={} stddev={}'.format(weeks, age, np.mean(df.weight_pounds), np.std(df.weight_pounds)))
Explanation: Reframing Design Pattern
The Reframing design pattern refers to changing the representation of the output of a machine learning problem. For example, we could take something that is intuitively a regression problem and instead pose it as a classification problem (and vice versa).
Let's look at the natality dataset. Notice that for a given set of inputs, the weight_pounds (the label) can take many different values.
End of explanation
import os
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras.utils import to_categorical
from tensorflow import keras
from tensorflow import feature_column as fc
from tensorflow.keras import layers, models, Model
%matplotlib inline
df = pd.read_csv("./data/babyweight_train.csv")
Explanation: Comparing categorical label and regression
Since baby weight is a positive real value, this is intuitively a regression problem. However, we can train the model as a multi-class classification by bucketizing the output label. At inference time, the model then predicts a collection of probabilities corresponding to these potential outputs.
Let's do both and see how they compare.
End of explanation
# prepare inputs
df.is_male = df.is_male.astype(str)
df.mother_race.fillna(0, inplace = True)
df.mother_race = df.mother_race.astype(str)
# create categorical label
def categorical_weight(weight_pounds):
if weight_pounds < 3.31:
return 0
elif weight_pounds >= 3.31 and weight_pounds < 5.5:
return 1
elif weight_pounds >= 5.5 and weight_pounds < 8.8:
return 2
else:
return 3
df["weight_category"] = df.weight_pounds.apply(lambda x: categorical_weight(x))
df.head()
def encode_labels(classes):
one_hots = to_categorical(classes)
return one_hots
FEATURES = ['is_male', 'mother_age', 'plurality', 'gestation_weeks', 'mother_race']
LABEL_CLS = ['weight_category']
LABEL_REG = ['weight_pounds']
N_TRAIN = int(df.shape[0] * 0.80)
X_train = df[FEATURES][:N_TRAIN]
X_valid = df[FEATURES][N_TRAIN:]
y_train_cls = encode_labels(df[LABEL_CLS][:N_TRAIN])
y_train_reg = df[LABEL_REG][:N_TRAIN]
y_valid_cls = encode_labels(df[LABEL_CLS][N_TRAIN:])
y_valid_reg = df[LABEL_REG][N_TRAIN:]
Explanation: We'll use the same features for both models. But we need to create a categorical weight label for the classification model.
End of explanation
# train/validation dataset for classification model
cls_train_data = tf.data.Dataset.from_tensor_slices((X_train.to_dict('list'), y_train_cls))
cls_valid_data = tf.data.Dataset.from_tensor_slices((X_valid.to_dict('list'), y_valid_cls))
# train/validation dataset for regression model
reg_train_data = tf.data.Dataset.from_tensor_slices((X_train.to_dict('list'), y_train_reg.values))
reg_valid_data = tf.data.Dataset.from_tensor_slices((X_valid.to_dict('list'), y_valid_reg.values))
# Examine the two datasets. Notice the different label values.
for data_type in [cls_train_data, reg_train_data]:
for dict_slice in data_type.take(1):
print("{}\n".format(dict_slice))
# create feature columns to handle categorical variables
numeric_columns = [fc.numeric_column("mother_age"),
fc.numeric_column("gestation_weeks")]
CATEGORIES = {
'plurality': list(df.plurality.unique()),
'is_male' : list(df.is_male.unique()),
'mother_race': list(df.mother_race.unique())
}
categorical_columns = []
for feature, vocab in CATEGORIES.items():
cat_col = fc.categorical_column_with_vocabulary_list(
key=feature, vocabulary_list=vocab, dtype=tf.string)
categorical_columns.append(fc.indicator_column(cat_col))
# create Inputs for model
inputs = {colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="float32")
for colname in ["mother_age", "gestation_weeks"]}
inputs.update({colname: tf.keras.layers.Input(
name=colname, shape=(), dtype=tf.string)
for colname in ["plurality", "is_male", "mother_race"]})
# build DenseFeatures for the model
dnn_inputs = layers.DenseFeatures(categorical_columns+numeric_columns)(inputs)
# create hidden layers
h1 = layers.Dense(20, activation="relu")(dnn_inputs)
h2 = layers.Dense(10, activation="relu")(h1)
# create classification model
cls_output = layers.Dense(4, activation="softmax")(h2)
cls_model = tf.keras.models.Model(inputs=inputs, outputs=cls_output)
cls_model.compile(optimizer='adam',
loss=tf.keras.losses.CategoricalCrossentropy(),
metrics=['accuracy'])
# create regression model
reg_output = layers.Dense(1, activation="relu")(h2)
reg_model = tf.keras.models.Model(inputs=inputs, outputs=reg_output)
reg_model.compile(optimizer='adam',
loss=tf.keras.losses.MeanSquaredError(),
metrics=['mse'])
Explanation: Create tf.data datasets for both classification and regression.
End of explanation
# train the classification model
cls_model.fit(cls_train_data.batch(50), epochs=1)
val_loss, val_accuracy = cls_model.evaluate(cls_valid_data.batch(X_valid.shape[0]))
print("Validation accuracy for classifcation model: {}".format(val_accuracy))
Explanation: First, train the classification model and examine the validation accuracy.
End of explanation
# train the regression model
reg_model.fit(reg_train_data.batch(50), epochs=1)
val_loss, val_mse = reg_model.evaluate(reg_valid_data.batch(X_valid.shape[0]))
print("Validation RMSE for regression model: {}".format(val_mse**0.5))
Explanation: Next, we'll train the regression model and examine the validation RMSE.
End of explanation
preds = reg_model.predict(x={"gestation_weeks": tf.convert_to_tensor([38]),
"is_male": tf.convert_to_tensor(["True"]),
"mother_age": tf.convert_to_tensor([28]),
"mother_race": tf.convert_to_tensor(["1.0"]),
"plurality": tf.convert_to_tensor(["Single(1)"])},
steps=1).squeeze()
preds
Explanation: The regression model gives a single numeric prediction of baby weight.
End of explanation
preds = cls_model.predict(x={"gestation_weeks": tf.convert_to_tensor([38]),
"is_male": tf.convert_to_tensor(["True"]),
"mother_age": tf.convert_to_tensor([28]),
"mother_race": tf.convert_to_tensor(["1.0"]),
"plurality": tf.convert_to_tensor(["Single(1)"])},
steps=1).squeeze()
preds
objects = ('very_low', 'low', 'average', 'high')
y_pos = np.arange(len(objects))
predictions = list(preds)
plt.bar(y_pos, predictions, align='center', alpha=0.5)
plt.xticks(y_pos, objects)
plt.title('Baby weight prediction')
plt.show()
Explanation: The classification model predicts a probability for each bucket of values.
End of explanation
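A possible follow-up, not in the original notebook: if a single point estimate is still wanted from the classification model, the bucket probabilities can be collapsed into an expected weight. The bucket midpoints below are rough assumptions for the four categories used above.
# Sketch only: approximate midpoints (in pounds) for very_low, low, average, high
bucket_midpoints = np.array([2.0, 4.4, 7.15, 10.0])
expected_weight = float(np.sum(preds * bucket_midpoints))
print("Expected weight (pounds): {:.2f}".format(expected_weight))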
# Read in the data and preprocess
df = pd.read_csv("./data/babyweight_train.csv")
# prepare inputs
df.is_male = df.is_male.astype(str)
df.mother_race.fillna(0, inplace = True)
df.mother_race = df.mother_race.astype(str)
# create categorical label
MIN = np.min(df.weight_pounds)
MAX = np.max(df.weight_pounds)
NBUCKETS = 50
def categorical_weight(weight_pounds, weight_min, weight_max, nbuckets=10):
buckets = np.linspace(weight_min, weight_max, nbuckets)
return np.digitize(weight_pounds, buckets) - 1
df["weight_category"] = df.weight_pounds.apply(lambda x: categorical_weight(x, MIN, MAX, NBUCKETS))
def encode_labels(classes):
one_hots = to_categorical(classes)
return one_hots
FEATURES = ['is_male', 'mother_age', 'plurality', 'gestation_weeks', 'mother_race']
LABEL_COLUMN = ['weight_category']
N_TRAIN = int(df.shape[0] * 0.80)
X_train, y_train = df[FEATURES][:N_TRAIN], encode_labels(df[LABEL_COLUMN][:N_TRAIN])
X_valid, y_valid = df[FEATURES][N_TRAIN:], encode_labels(df[LABEL_COLUMN][N_TRAIN:])
# create the training dataset
train_data = tf.data.Dataset.from_tensor_slices((X_train.to_dict('list'), y_train))
valid_data = tf.data.Dataset.from_tensor_slices((X_valid.to_dict('list'), y_valid))
Explanation: Increasing the number of categorical labels
We'll generalize the code above to accommodate N label buckets, instead of just 4.
End of explanation
# create feature columns to handle categorical variables
numeric_columns = [fc.numeric_column("mother_age"),
fc.numeric_column("gestation_weeks")]
CATEGORIES = {
'plurality': list(df.plurality.unique()),
'is_male' : list(df.is_male.unique()),
'mother_race': list(df.mother_race.unique())
}
categorical_columns = []
for feature, vocab in CATEGORIES.items():
cat_col = fc.categorical_column_with_vocabulary_list(
key=feature, vocabulary_list=vocab, dtype=tf.string)
categorical_columns.append(fc.indicator_column(cat_col))
# create Inputs for model
inputs = {colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="float32")
for colname in ["mother_age", "gestation_weeks"]}
inputs.update({colname: tf.keras.layers.Input(
name=colname, shape=(), dtype=tf.string)
for colname in ["plurality", "is_male", "mother_race"]})
# build DenseFeatures for the model
dnn_inputs = layers.DenseFeatures(categorical_columns+numeric_columns)(inputs)
# model
h1 = layers.Dense(20, activation="relu")(dnn_inputs)
h2 = layers.Dense(10, activation="relu")(h1)
output = layers.Dense(NBUCKETS, activation="softmax")(h2)
model = tf.keras.models.Model(inputs=inputs, outputs=output)
model.compile(optimizer='adam',
loss=tf.keras.losses.CategoricalCrossentropy(),
metrics=['accuracy'])
# train the model
model.fit(train_data.batch(50), epochs=1)
Explanation: Create the feature columns and build the model.
End of explanation
preds = model.predict(x={"gestation_weeks": tf.convert_to_tensor([38]),
"is_male": tf.convert_to_tensor(["True"]),
"mother_age": tf.convert_to_tensor([28]),
"mother_race": tf.convert_to_tensor(["1.0"]),
"plurality": tf.convert_to_tensor(["Single(1)"])},
steps=1).squeeze()
objects = [str(_) for _ in range(NBUCKETS)]
y_pos = np.arange(len(objects))
predictions = list(preds)
plt.bar(y_pos, predictions, align='center', alpha=0.5)
plt.xticks(y_pos, objects)
plt.title('Baby weight prediction')
plt.show()
Explanation: Make a prediction on the example above.
End of explanation
import numpy as np
import tensorflow as tf
from tensorflow import keras
MIN_Y = 3
MAX_Y = 20
input_size = 10
inputs = keras.layers.Input(shape=(input_size,))
h1 = keras.layers.Dense(20, 'relu')(inputs)
h2 = keras.layers.Dense(1, 'sigmoid')(h1) # 0-1 range
output = keras.layers.Lambda(lambda y : (y*(MAX_Y-MIN_Y) + MIN_Y))(h2) # scaled
model = keras.Model(inputs, output)
# fit the model
model.compile(optimizer='adam', loss='mse')
batch_size = 2048
for i in range(0, 10):
x = np.random.rand(batch_size, input_size)
y = 0.5*(x[:,0] + x[:,1]) * (MAX_Y-MIN_Y) + MIN_Y
model.fit(x, y)
# verify
min_y = np.finfo(np.float64).max
max_y = np.finfo(np.float64).min
for i in range(0, 10):
x = np.random.randn(batch_size, input_size)
y = model.predict(x)
min_y = min(y.min(), min_y)
max_y = max(y.max(), max_y)
print('min={} max={}'.format(min_y, max_y))
Explanation: Restricting the prediction range
One way to restrict the prediction range is to make the last-but-one activation function sigmoid instead, and add a lambda layer to scale the (0,1) values to the desired range. The drawback is that it will be difficult for the neural network to reach the extreme values.
End of explanation
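To spell out the scaling used above: the final output is y = MIN_Y + sigmoid(h) * (MAX_Y - MIN_Y), so y is guaranteed to stay inside (MIN_Y, MAX_Y); hitting MIN_Y or MAX_Y exactly would require sigmoid(h) to reach 0 or 1, i.e. h to go to minus or plus infinity, which is why the extreme values are hard for the network to produce.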
<END_TASK> |
8 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
YouTube Spam Collection Data Set (Part 2)
Source
Step1: <a id='section3b'></a>
Read in the data from all five CSVs and combine them
Step2: <a id='section3c'></a>
Data cleanup
Step3: <a id='section3d'></a>
Split the data
Step4: <a id='section3e'></a>
Transform the data
Step5: <a id='section3f'></a>
Build the model
In this step, we will build 6 models, and pick the one with the best accuracy score
Step6: Model selection
Based on accuracy scores, the best algorithm is the Decision Tree Classifier. Logistic Regression and AdaBoost Classifier also performed very well. We will choose Decision Tree as our model, and look to tune it.
Step7: It looks like we were able to eke out some improvement in the performance. The Decision Tree Classifier seems to perform best with the min_samples_split set to 7. We will use this for our final model. Note that the default values for 'criterion' and 'splitter' seem to be part of the best performing set of parameters.
<a id='section3g'></a>
Run the prediction
Step8: <a id='section3h'></a>
Score the prediction
Step9: Some of the false negatives seem like they should have been marked as spam, so it is interesting that the model missed these. We may need to tune our vectorizer and/or attempt some other classifiers.
Let us check the area under the ROC curve. | <ASSISTANT_TASK:>
Python Code:
# Import modules
import numpy as np
import pandas as pd
Explanation: YouTube Spam Collection Data Set (Part 2)
Source: UCI Machine Learning Repository
Original Source: YouTube Spam Collection v. 1
Alberto, T.C., Lochter J.V., Almeida, T.A. Filtragem Automática de Spam nos Comentários do YouTube. Anais do XII Encontro Nacional de Inteligência Artificial e Computacional (ENIAC'15), Natal, RN, Brazil, 2015. (preprint)
Alberto, T.C., Lochter J.V., Almeida, T.A. TubeSpam: Comment Spam Filtering on YouTube. Proceedings of the 14th IEEE International Conference on Machine Learning and Applications (ICMLA'15), 1-6, Miami, FL, USA, December, 2015. (preprint)
Contents
1 Data Set Description
2 Approach
3 Solution
3a Import modules
3b Read the data set
3c Data cleanup
3d Split the data
3e Transform the data
3f Build the model
3g Run predictions
3h Score the prediction
4 Summary
<a id='section1'></a>
1. Data Set Description
From the description accompanying the data set, "the samples were extracted from the comments section of five videos that were among the 10 most viewed on YouTube during the collection period."
The data is available in five distinct data sets, and the data is classified as 1 for "spam" and 0 for "ham"
<a id='section2'></a>
2. Approach
Since the data set is split across five data sets, we will take two passes at the data. This is the second pass.
In the (optional) first pass, we considered only the Psy data set, as a way to wrap our heads around the problem. The notebook for this can be accessed here.
Our second pass will involve merging all five data sets and then running the classification on the combined data set. In this round, we will also tune the model and the vectorizer to eke out some improvements.
<a id='section3'></a>
3. Solution
<a id='section3a'></a>
Import initial set of modules
End of explanation
# Read the data set; print the first few rows
files = ['data\\Youtube01-Psy.csv', 'data\\Youtube02-KatyPerry.csv', 'data\\Youtube03-LMFAO.csv',
'data\\Youtube04-Eminem.csv', 'data\\Youtube05-Shakira.csv']
df = pd.DataFrame()
for file in files:
df = df.append(pd.read_csv(file))
df.head()
Explanation: <a id='section3b'></a>
Read in the data from all five CSVs and combine them into a single DataFrame
End of explanation
# Check for missing values
df.info()
# Looks like there are missing values in the DATE column, but it is not a column of interest. Let's proceed.
# Of the five columns, the only relevant columns for spam/ham classification are the CONTENT and CLASS columns.
# We will use just these two columns. But first, let's check the distribution of spam and ham
df.CLASS.value_counts()
# There is an almost equal distribution. Given that this is a small data set, this is probably good,
# because the algorithm has enough items it can learn from
# Now, let us set up our X and y
X = df.CONTENT
y = df.CLASS
Explanation: <a id='section3c'></a>
Data cleanup
End of explanation
# Let us now split the data set into train and test sets
# We will use an 80/20 split
test_size = 0.2
seed = 42
scoring = 'accuracy'
num_folds = 10
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=seed, test_size=test_size)
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
models = []
names = []
results = []
lr = ('LR', LogisticRegression())
knn = ('KNN', KNeighborsClassifier())
svc = ('SVC', SVC())
nb = ('NB', MultinomialNB())
cart = ('CART', DecisionTreeClassifier())
models.extend([lr, knn, svc, nb, cart])
Explanation: <a id='section3d'></a>
Split the data
End of explanation
# Set up a vectorizer, and create a Document-Term matrix
from sklearn.feature_extraction.text import CountVectorizer
vect = CountVectorizer()
X_train_dtm = vect.fit_transform(X_train)
# Check the layout of the Document-Term matrix
X_train_dtm
Explanation: <a id='section3e'></a>
Transform the data
End of explanation
from sklearn.model_selection import KFold, cross_val_score
for name, model in models:
kfold = KFold(n_splits=num_folds, random_state=seed)
score = cross_val_score(model, X_train_dtm, y_train, scoring=scoring, cv=kfold)
names.append(name)
results.append(score)
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier, \
RandomForestClassifier, ExtraTreesClassifier
ensembles = []
ensemble_names = []
ensemble_results = []
ensembles.append(('AB', AdaBoostClassifier()))
ensembles.append(('RF', RandomForestClassifier()))
ensembles.append(('ET', ExtraTreesClassifier()))
for name, model in ensembles:
kfold = KFold(n_splits=num_folds, random_state=seed)
score = cross_val_score(model, X_train_dtm, y_train, cv=kfold, scoring=scoring)
ensemble_names.append(name)
ensemble_results.append(score)
models_list = []
for i, name in enumerate(names):
d = {'model': name, 'mean': results[i].mean(), 'std': results[i].std()}
models_list.append(d)
for i, name in enumerate(ensemble_names):
    d = {'model': name, 'mean': ensemble_results[i].mean(), 'std': ensemble_results[i].std()}
models_list.append(d)
models_df = pd.DataFrame(models_list).set_index('model')
models_df.sort_values('mean', ascending=False)
Explanation: <a id='section3f'></a>
Build the model
In this step, we will build 6 models, and pick the one with the best accuracy score
End of explanation
cart
from sklearn.model_selection import GridSearchCV
final_model = DecisionTreeClassifier()
criterion_values = ['gini', 'entropy']
splitter_values = ['best', 'random']
min_samples_split_values = np.arange(2, 11, 1)
param_grid = dict(criterion=criterion_values, splitter=splitter_values,
min_samples_split=min_samples_split_values)
kfold = KFold(n_splits=num_folds, random_state=seed)
grid = GridSearchCV(estimator=final_model, cv=kfold, scoring=scoring, param_grid=param_grid)
grid_result = grid.fit(X_train_dtm, y_train)
print(grid_result.best_params_, grid_result.best_score_)
Explanation: Model selection
Based on accuracy scores, the best algorithm is the Decision Tree Classifier. Logistic Regression and AdaBoost Classifier also performed very well. We will choose Decision Tree as our model, and look to tune it.
End of explanation
final_model = DecisionTreeClassifier(min_samples_split=7, random_state=seed)
final_model.fit(X_train_dtm, y_train)
# Transform the test data to a DTM and predict
X_test_dtm = vect.transform(X_test)
y_pred = final_model.predict(X_test_dtm)
Explanation: It looks like we were able to eke out some improvement in the performance. The Decision Tree Classifier seems to perform best with the min_samples_split set to 7. We will use this for our final model. Note that the default values for 'criterion' and 'splitter' seem to be part of the best performing set of parameters.
<a id='section3g'></a>
Run the prediction
End of explanation
# Let us check the accuracy score
# It needs to be better than 50%, which was the baseline
from sklearn.metrics import accuracy_score, confusion_matrix, roc_auc_score
accuracy_score(y_test, y_pred)
# The accuracy score was 93.37%, which is lower than we may have anticipated
# Let us check the confusion matrix to get a sense of the prediction distribution
confusion_matrix(y_test, y_pred)
# The model predicted 366 out of 392 instances correctly
# We had 14 false positives and 12 false negatives
# What were the false positive comments? (That is, ham marked as spam)
X_test[y_pred > y_test]
# And what were the false negative comments? (That is, spam comments that went undetected)
X_test[y_pred < y_test]
Explanation: <a id='section3h'></a>
Score the prediction
End of explanation
roc_auc_score(y_test, final_model.predict_proba(X_test_dtm)[:, 1])
Explanation: Some of the false negatives seem like they should have been marked as spam, so it is interesting that the model missed these. We may need to tune our vectorizer and/or attempt some other classifiers.
Let us check the area under the ROC curve.
End of explanation
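A minimal sketch of the vectorizer tuning suggested above; the parameter values here are illustrative guesses, not tuned results.
# Re-vectorize with bigrams, English stop words and a minimum document frequency, then refit the same model
vect_tuned = CountVectorizer(ngram_range=(1, 2), stop_words='english', min_df=2)
X_train_dtm_tuned = vect_tuned.fit_transform(X_train)
X_test_dtm_tuned = vect_tuned.transform(X_test)
model_tuned = DecisionTreeClassifier(min_samples_split=7, random_state=seed)
model_tuned.fit(X_train_dtm_tuned, y_train)
accuracy_score(y_test, model_tuned.predict(X_test_dtm_tuned))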
<END_TASK> |
9 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Table of Contents
<p><div class="lev1 toc-item"><a href="#Building-an-ANN" data-toc-modified-id="Building-an-ANN-1"><span class="toc-item-num">1 </span>Building an ANN</a></div><div class="lev2 toc-item"><a href="#Installing-packages" data-toc-modified-id="Installing-packages-11"><span class="toc-item-num">1.1 </span>Installing packages</a></div><div class="lev2 toc-item"><a href="#Data-Preprocessing" data-toc-modified-id="Data-Preprocessing-12"><span class="toc-item-num">1.2 </span>Data Preprocessing</a></div><div class="lev2 toc-item"><a href="#Building-an-ANN" data-toc-modified-id="Building-an-ANN-13"><span class="toc-item-num">1.3 </span>Building an ANN</a></div><div class="lev2 toc-item"><a href="#Making-predictions-and-evaluating-the-model" data-toc-modified-id="Making-predictions-and-evaluating-the-model-14"><span class="toc-item-num">1.4 </span>Making predictions and evaluating the model</a></div><div class="lev2 toc-item"><a href="#Evaluating,-Improving-and-Tuning-the-ANN" data-toc-modified-id="Evaluating,-Improving-and-Tuning-the-ANN-15"><span class="toc-item-num">1.5 </span>Evaluating, Improving and Tuning the ANN</a></div>
# Building an ANN
Credit
Step1: Data Preprocessing
Step2: y (actual value)
Step3: Building an ANN
Step4: Making predictions and evaluating the model
Step5: Evaluating, Improving and Tuning the ANN
Using K-Fold Cross validation with Keras | <ASSISTANT_TASK:>
Python Code:
# Installing Theano
# pip install --upgrade --no-deps git+git://github.com/Theano/Theano.git
# Installing Tensorflow
# pip install tensorflow
# Installing Keras
# pip install --upgrade keras
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Building-an-ANN" data-toc-modified-id="Building-an-ANN-1"><span class="toc-item-num">1 </span>Building an ANN</a></div><div class="lev2 toc-item"><a href="#Installing-packages" data-toc-modified-id="Installing-packages-11"><span class="toc-item-num">1.1 </span>Installing packages</a></div><div class="lev2 toc-item"><a href="#Data-Preprocessing" data-toc-modified-id="Data-Preprocessing-12"><span class="toc-item-num">1.2 </span>Data Preprocessing</a></div><div class="lev2 toc-item"><a href="#Building-an-ANN" data-toc-modified-id="Building-an-ANN-13"><span class="toc-item-num">1.3 </span>Building an ANN</a></div><div class="lev2 toc-item"><a href="#Making-predictions-and-evaluating-the-model" data-toc-modified-id="Making-predictions-and-evaluating-the-model-14"><span class="toc-item-num">1.4 </span>Making predictions and evaluating the model</a></div><div class="lev2 toc-item"><a href="#Evaluating,-Improving-and-Tuning-the-ANN" data-toc-modified-id="Evaluating,-Improving-and-Tuning-the-ANN-15"><span class="toc-item-num">1.5 </span>Evaluating, Improving and Tuning the ANN</a></div>
# Building an ANN
Credit: [Deep Learning A-Z™: Hands-On Artificial Neural Networks](https://www.udemy.com/deeplearning/learn/v4/content)
- [Getting the dataset](https://www.superdatascience.com/deep-learning/)
## Installing packages
End of explanation
# Importing the libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Importing the dataset
dataset = pd.read_csv('./Artificial_Neural_Networks/Churn_Modelling.csv')
X = dataset.iloc[:, 3:13].values
y = dataset.iloc[:, 13].values
Explanation: Data Preprocessing
End of explanation
print (X.shape)
X
print (y.shape)
y
# Encoding categorical data
from sklearn.preprocessing import LabelEncoder, OneHotEncoder
labelencoder_X_1 = LabelEncoder()
X[:, 1] = labelencoder_X_1.fit_transform(X[:, 1])
labelencoder_X_2 = LabelEncoder()
X[:, 2] = labelencoder_X_2.fit_transform(X[:, 2])
onehotencoder = OneHotEncoder(categorical_features = [1])
X = onehotencoder.fit_transform(X).toarray()
X = X[:, 1:]
print (X.shape)
X
print (y.shape)
y
# Splitting the dataset into the Training set and Test set
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
Explanation: y (actual value): Exited, the value we are trying to predict, i.e. whether the customer stays with or exits the bank.
End of explanation
# Importing the Keras libraries and packages
import keras
from keras.models import Sequential
from keras.layers import Dense
# Initialising the ANN
classifier = Sequential()
# Adding the input layer and the first hidden layer
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu', input_dim = 11))
# Adding the second hidden layer
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu'))
# Adding the output layer
classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
# Compiling the ANN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
# Fitting the ANN to the Training set
classifier.fit(X_train, y_train, batch_size = 10, epochs = 100)
Explanation: Building an ANN
End of explanation
y_pred = classifier.predict(X_test)
y_pred = (y_pred > 0.5)
# Making the Confusion Matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
cm
Explanation: Making predictions and evaluating the model
End of explanation
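As a quick sanity check (not part of the original notebook), the test accuracy can be read straight off the confusion matrix:
# accuracy = (true negatives + true positives) / total number of test samples
(cm[0, 0] + cm[1, 1]) / float(cm.sum())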
# Evaluating the ANN
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import cross_val_score
from keras.models import Sequential
from keras.layers import Dense
def build_classifier():
classifier = Sequential()
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu', input_dim = 11))
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu'))
classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])
return classifier
classifier = KerasClassifier(build_fn = build_classifier, batch_size = 10, epochs = 100)
accuracies = cross_val_score(estimator = classifier, X = X_train, y = y_train, cv = 10, n_jobs = -1)
mean = accuracies.mean()
variance = accuracies.std()
# Improving the ANN
# Dropout Regularization to reduce overfitting if needed
# Tuning the ANN
from keras.wrappers.scikit_learn import KerasClassifier
from sklearn.model_selection import GridSearchCV
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
def build_classifier(optimizer):
classifier = Sequential()
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu', input_dim = 11))
# classifier.add(Dropout(p = 0.1))
classifier.add(Dense(units = 6, kernel_initializer = 'uniform', activation = 'relu'))
# classifier.add(Dropout(p = 0.1))
classifier.add(Dense(units = 1, kernel_initializer = 'uniform', activation = 'sigmoid'))
classifier.compile(optimizer = optimizer, loss = 'binary_crossentropy', metrics = ['accuracy'])
return classifier
classifier = KerasClassifier(build_fn = build_classifier)
parameters = {'batch_size': [25, 32],
'epochs': [100, 500],
'optimizer': ['adam', 'rmsprop']}
grid_search = GridSearchCV(estimator = classifier,
param_grid = parameters,
scoring = 'accuracy',
cv = 10)
grid_search = grid_search.fit(X_train, y_train)
best_parameters = grid_search.best_params_
best_accuracy = grid_search.best_score_
Explanation: Evaluating, Improving and Tuning the ANN
Using K-Fold Cross validation with Keras
End of explanation
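A possible final step (sketch only): look at the winning settings and refit a classifier with them before evaluating on the held-out test set.
print(best_parameters, best_accuracy)
# Rebuild the classifier with the best batch size, epochs and optimizer found by the grid search
best_classifier = KerasClassifier(build_fn = build_classifier,
                                  batch_size = best_parameters['batch_size'],
                                  epochs = best_parameters['epochs'],
                                  optimizer = best_parameters['optimizer'])
best_classifier.fit(X_train, y_train)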
<END_TASK> |
10 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Enter State Farm
Step1: Create Sample
The following assumes you've already created your validation set - remember that the training and validation set should contain different drivers, as mentioned on the Kaggle competition page.
Step4: Validation Set (Sample)
How I'll do it
Step5: 2/3
Step6: I understand now why I was getting weird validation-accuracy results
Step7: Create Batches
Step8: Basic Models
Linear Model
First, we try the simplest model and use default parameters. Note the trick of making the first layer a batchnorm layer - that way we don't have to worry about normalizing the input ourselves.
Step9: As you can see below, this training is going nowhere...
Step10: Let's first check the number of parameters to see that there's enough parameters to find some useful relationships
Step11: Since we have a simple model with no regularization and plenty of parameters, it seems most likely that our learning rate is too hgh. Perhaps it is jumping to a solution where it predicts one or two classes with high confidence, so that it can give a zero prediction to as many classes as possible - that's the best approach for a model that is no better than random, and there is likely to be where we would end up with a high learning rate. So let's check
Step12: (Not so in this case, only kind of, but it was indeed predicted 1 or 6 back on the Mac)
Our hypothesis was correct. It's nearly always predicting class 1 or 6, with very high confidence. So let's try a lower learning rate
Step13: Great - we found our way out of that hole ... Now we can increase the learning rate and see where we can get to.
Step14: We're stabilizing at validation accuracy of 0.39 (~.35 in my NB). Not great, but a lot better than random. Before moving on, let's check that our validation set on the sample is large enough that it gives consistent results
Step15: Yup, pretty consistent - if we see imporvements of 3% or more, it's probably not random, based on the above samples.
L2 Regularization
The previous model is over-fitting a lot, but we can't use dropout since we only have one layer. We can try to decrease overfitting in our model by adding l2 regularization (ie
Step16: Looks like we can get a bit over 50% (almost, here
Step17: (Odd, I may not have a good validation set if I'm getting such higher valacc numbers... ---- not anymore now that I'm using a proper valset. Of course, just as with JH's notebook
Step18: On GPU running out of memory (2692/3017 MiB) at this point. Restarting with smaller batch size (32?)
Step19: The training set here is very rapidly reaching a very high accuracy. So if we could regularize this, perhaps we could get a reasonable results.
So, what kind of regularization should we try first? As we discussed in lesson 3, we should start with data augmentation.
Data Augmentation
To find the best data augmentation parameters, we can try each type of data augmentation, one at a time. For each type, we can try four very different levels of augmentation, and see which is the best. In the steps below we've only kept the single best results we found. We're using the CNN we defined above, since we have already observed it can model the data quickly and accurately.
Width shift
Step20: Height shift
Step21: Random shear angles (max in radians) -
Step22: Rotation
Step23: Channel shift
Step24: And finally, putting it all together!
Step25: At first glance, this isn't looking encouraging, since the validation set is poor and getting worse. But the training set is getting better, and still has a long way to go in accuracy - so we should try annealing our learning rate and running more epochs, before we make a decision.
Step26: Lucky we tried that - we're starting to make progress! Let's keep going. | <ASSISTANT_TASK:>
Python Code:
import theano
import os, sys
sys.path.insert(1, os.path.join(os.getcwd(), 'utils'))
%matplotlib inline
from __future__ import print_function, division
# path = "data/sample/"
path = "data/statefarm/sample/"
import utils; reload(utils)
from utils import *
from IPython.display import FileLink
# batch_size = 64
batch_size = 32
Explanation: Enter State Farm
End of explanation
%cd data/statefarm
%cd train
%mkdir ../sample
%mkdir ../sample/train
%mkdir ../sample/valid
for d in glob('c?'):
os.mkdir('../sample/train/' + d)
os.mkdir('../sample/valid/' + d)
from shutil import copyfile
g = glob('c?/*.jpg')
shuf = np.random.permutation(g)
for i in range(1500): copyfile(shuf[i], '../sample/train/' + shuf[i])
# # removing copied sample training images
# help(os)
# for f in glob('c?/*.jpg'):
# os.remove(f)
% cd ../../..
%mkdir data/statefarm/results
%mkdir data/statefarm/sample/test
Explanation: Create Sample
The following assumes you've already created your validation set - remember that the training and validation set should contain different drivers, as mentioned on the Kaggle competition page.
End of explanation
# run once, make sure you're in datadir first
# path = os.getcwd()
# os.mkdir(path + '/valid')
# for i in xrange(10): os.mkdir(path + '/valid' + '/c' + str(i))
def reset_valid(verbose=1, valid_path='', TRAIN_DIR=''):
    """Moves all images in validation set back to
    their respective classes in the training set."""
counter = 0
if not valid_path: valid_path = os.getcwd() + '/valid/'
if not TRAIN_DIR: TRAIN_DIR = os.getcwd() + '/train'
%cd $valid_path
for i in xrange(10):
%cd c"$i"
g = glob('*.jpg')
for n in xrange(len(g)):
os.rename(g[n], TRAIN_DIR + '/c' + str(i) + '/' + g[n])
counter += 1
% cd ..
if verbose: print("Moved {} files.".format(counter))
# %mv $VALID_DIR/c"$i"/$*.jpg $TRAIN_DIR/c"$i"/$*.jpg
# modified from: http://forums.fast.ai/t/statefarm-kaggle-comp/183/20
def set_valid(number=1, verbose=1, data_path=''):
    """Moves <number> of subjects from training to validation
    directories. Verbosity: 0: Silent; 1: print no. files moved;
    2: print each move operation"""
if not data_path: data_path = os.getcwd() + '/'
counter = 0
if number < 0: number = 0
for n in xrange(number):
# read CSV file into Pandas DataFrame
dil = pd.read_csv(data_path + 'driver_imgs_list.csv')
# group frame by subject in image
grouped_subjects = dil.groupby('subject')
# pick <number> subjects at random
subject = grouped_subjects.groups.keys()[np.random.randint(0, high=len(grouped_subjects.groups))] # <-- groups?
# get the group assoc w/ subject
group = grouped_subjects.get_group(subject)
# loop over gropu & move imgs to validation dir
for (subject, clssnm, img) in group.values:
source = '{}train/{}/{}'.format(data_path, clssnm, img)
target = source.replace('train', 'valid')
if verbose > 1: print('mv {} {}'.format(source, target))
os.rename(source, target)
counter += 1
if verbose: print ("Files moved: {}".format(counter))
Explanation: Validation Set (Sample)
How I'll do it: create a full val set in the full valid folder, then copy over the same percentage as train to the sample/valid folder.
Actually: wouldn't it be better if I used the full validation set for more accurate results? Then again, for processing on my MacBook, it may be good enough to go w/ the 1st method.
1/3: function definitions for moving stuff & making dirs:
End of explanation
%pwd
# %cd ~/Deshar/Kaukasos/FAI
%cd ~/Kaukasos/FAI
%cd data/statefarm/
reset_valid()
%cd ..
set_valid(number=3)
Explanation: 2/3: Making sure we're in the right dir, & moving stuff
End of explanation
%pwd
%cd valid
# g = glob('valid/c?/*.jpg') # <-- this doesnt work: why?
g = glob('c?/*.jpg')
shuf = np.random.permutation(g)
# for i in range(1000): copyfile(shuf[i], '/sample/' + shuf[i])
for i in range(1000): copyfile(shuf[i], '../sample/valid/' + shuf[i])
Explanation: I understand now why I was getting weird validation-accuracy results: I was moving a unique valset from training, in the full data directory and not in the sample dir. But then why was my model even able to train if there wasn't anything in the sample validation folders? Because I was only copying 1000 random images from sample/train to sample/valid. Ooof..
Never mind, ignore (some of) that: the 1000 sample val imgs are taken from the valid set moved from training in the full directory. The problem affecting accuracy is that the valid set is separated from training only after the sample training set is copied, so some of the val imgs will have drivers that also appear in the sample training set. This explains why accuracy was off, but not as off as one would expect. Will reconfigure this.
This notebook is being rerun on my Asus Linux machine. Upgrading from an Intel Core i5 CPU to an NVidia GTX 870M GPU should yield a good speedup.
CPU times:
* Single Linear Model: 60~48 seconds
* Single (100 Node) Hidden Layer: 67~52 seconds
* Single block of 2 Convolutional layers (+ LM): 453~410 seconds
3/3: copying val set from the full valid folder to sample valid
J.Howard uses a permutation of 1,000 val imgs, so I'll just do that here.
End of explanation
batches = get_batches(path + 'train', batch_size=batch_size)
val_batches = get_batches(path + 'valid', batch_size=batch_size*2, shuffle=False)
%pwd
os.mkdir(path + 'test')
(val_classes, trn_classes, val_labels, trn_labels, val_filenames, filenames,
test_filename) = get_classes(path)
Explanation: Create Batches
End of explanation
model = Sequential([
BatchNormalization(axis=1, input_shape=(3, 224, 224)),
Flatten(),
Dense(10, activation='softmax')
])
Explanation: Basic Models
Linear Model
First, we try the simplest model and use default parameters. Note the trick of making the first layer a batchnorm layer - that way we don't have to worry about normalizing the input ourselves.
End of explanation
model.compile(Adam(), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
Explanation: As you can see below, this training is going nowhere...
End of explanation
model.summary()
10*3*224*224
Explanation: Let's first check the number of parameters to see that there are enough parameters to find some useful relationships:
End of explanation
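Spelling out the arithmetic behind the cell above (ignoring the handful of BatchNorm parameters):
# 3 channels * 224 * 224 pixels = 150,528 inputs, each connected to 10 softmax outputs, plus 10 biases
3 * 224 * 224 * 10 + 10   # roughly 1.5 million weights - plenty for a single linear layer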
np.round(model.predict_generator(batches, batches.n)[:10],2)
# temp = model.predict_generator(batches, batches.n)
Explanation: Since we have a simple model with no regularization and plenty of parameters, it seems most likely that our learning rate is too high. Perhaps it is jumping to a solution where it predicts one or two classes with high confidence, so that it can give a zero prediction to as many classes as possible - that's the best approach for a model that is no better than random, and that is likely where we would end up with a high learning rate. So let's check:
End of explanation
# here's a way to take a look at the learning rate
import keras.backend as K
LR = K.eval(model.optimizer.lr)
print(LR)
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,224,224)),
Flatten(),
Dense(10, activation='softmax')
])
model.compile(Adam(lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
Explanation: (Not so in this case, only kind of, but it did indeed predict 1 or 6 back on the Mac)
Our hypothesis was correct. It's nearly always predicting class 1 or 6, with very high confidence. So let's try a lower learning rate:
End of explanation
model.optimizer.lr=0.001
model.fit_generator(batches, batches.nb_sample, nb_epoch=4, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
Explanation: Great - we found our way out of that hole ... Now we can increase the learning rate and see where we can get to.
End of explanation
rnd_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=True)
val_res = [model.evaluate_generator(rnd_batches, rnd_batches.nb_sample) for i in range(10)]
np.round(val_res,2)
Explanation: We're stabilizing at validation accuracy of 0.39 (~.35 in my NB). Not great, but a lot better than random. Before moving on, let's check that our validation set on the sample is large enough that it gives consistent results:
End of explanation
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,224,224)),
Flatten(),
Dense(10, activation='softmax', W_regularizer=l2(0.01))
])
model.compile(Adam(lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.optimizer.lr=0.001
model.fit_generator(batches, batches.nb_sample, nb_epoch=4, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
Explanation: Yup, pretty consistent - if we see improvements of 3% or more, it's probably not random, based on the above samples.
L2 Regularization
The previous model is over-fitting a lot, but we can't use dropout since we only have one layer. We can try to decrease overfitting in our model by adding l2 regularization (ie: add the sum of squares of the weights to our loss function):
End of explanation
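Written out, the objective minimized by the l2 model above is: total loss = categorical cross-entropy + 0.01 * sum(w^2) over the weights of the Dense layer, which is exactly what W_regularizer=l2(0.01) adds; large weights are penalized, nudging the single-layer model toward smaller, smoother solutions.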
model = Sequential([
BatchNormalization(axis=1, input_shape=(3, 224, 224)),
Flatten(),
Dense(100, activation='relu'), #¿would λ2 be good here?
BatchNormalization(),
Dense(10, activation='softmax')
])
model.compile(Adam(lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.optimizer.lr = 0.01
model.fit_generator(batches, batches.nb_sample, nb_epoch=5, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
Explanation: Looks like we can get a bit over 50% (almost, here: 42.8%) accuracy this way. This'll be a good benchmark for our future models - if we can't beat 50%, then we're not even beating a linear model trained on a sample, so we'll know that's not a good approach.
Single hidden layer
The next simplest model is to add a single hidden layer.
End of explanation
def conv1(batches):
model = Sequential([
BatchNormalization(axis=1, input_shape=(3,224,224)),
Convolution2D(32, 3, 3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D((3, 3)),
Convolution2D(64, 3, 3, activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D((3,3)),
Flatten(),
Dense(200, activation='relu'),
BatchNormalization(),
Dense(10, activation='softmax')
])
model.compile(Adam(1e-3), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=2, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.optimizer.lr = 0.001
model.fit_generator(batches, batches.nb_sample, nb_epoch=4, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
return model
Explanation: (Odd, I may not have a good validation set if I'm getting such higher valacc numbers... ---- not anymore now that I'm using a proper valset. Of course, just as with JH's notebook: val accuracy has decreased a bit.)
Not looking very encouraging... which isn't surprising since we know that CNNs are a much better choice for computer vision problems. So we'll try one.
Single Conv Layer
2 conv layers with max pooling followed by a simple dense network is a good simple CNN to start with:
End of explanation
conv1(batches)
Explanation: On GPU running out of memory (2692/3017 MiB) at this point. Restarting with smaller batch size (32?)
End of explanation
gen_t = image.ImageDataGenerator(width_shift_range=0.1)
batches = get_batches(path + 'train', gen_t, batch_size=batch_size)
model = conv1(batches)
Explanation: The training set here is very rapidly reaching a very high accuracy. So if we could regularize this, perhaps we could get reasonable results.
So, what kind of regularization should we try first? As we discussed in lesson 3, we should start with data augmentation.
Data Augmentation
To find the best data augmentation parameters, we can try each type of data augmentation, one at a time. For each type, we can try four very different levels of augmentation, and see which is the best. In the steps below we've only kept the single best results we found. We're using the CNN we defined above, since we have already observed it can model the data quickly and accurately.
Width shift: move the image left and right -
End of explanation
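A sketch of the "four very different levels" search described above; the candidate values are illustrative and each run retrains the small CNN from scratch.
# Try several width-shift strengths and keep whichever gives the best validation accuracy
for w in [0.05, 0.1, 0.2, 0.3]:
    print('width_shift_range =', w)
    gen_t = image.ImageDataGenerator(width_shift_range=w)
    batches = get_batches(path + 'train', gen_t, batch_size=batch_size)
    model = conv1(batches)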
gen_t = image.ImageDataGenerator(height_shift_range=0.05)
batches = get_batches(path + 'train', gen_t, batch_size=batch_size)
model = conv1(batches)
Explanation: Height shift: move the image up and down -
End of explanation
gen_t = image.ImageDataGenerator(shear_range=0.1)
batches = get_batches(path + 'train', gen_t, batch_size=batch_size)
model = conv1(batches)
Explanation: Random shear angles (max in radians) -
End of explanation
gen_t = image.ImageDataGenerator(rotation_range=15)
batches = get_batches(path + 'train', gen_t, batch_size=batch_size)
model = conv1(batches)
Explanation: Rotation: max in degrees -
End of explanation
gen_t = image.ImageDataGenerator(channel_shift_range=20)
batches = get_batches(path + 'train', gen_t, batch_size=batch_size)
model = conv1(batches)
Explanation: Channel shift: randomly changing the R,B,G colors -
End of explanation
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05,
shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)
batches = get_batches(path + 'train', gen_t, batch_size=batch_size)
model = conv1(batches)
Explanation: And finally, putting it all together!
End of explanation
model.optimizer.lr = 0.0001
model.fit_generator(batches, batches.nb_sample, nb_epoch=5, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
Explanation: At first glance, this isn't looking encouraging, since the validation set is poor and getting worse. But the training set is getting better, and still has a long way to go in accuracy - so we should try annealing our learning rate and running more epochs, before we make a decision.
End of explanation
model.fit_generator(batches, batches.nb_sample, nb_epoch=25, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
Explanation: Lucky we tried that - we're starting to make progress! Let's keep going.
End of explanation
<END_TASK> |
11 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Import
Step1: Reading initial data
Step2: Remove rows with NAN from data
Step3: Add diff_pt and cos(diff_phi)
Step4: Add max, sum among PIDs
Step5: define label = signB * signTrack
if > 0 (same sign) - label 1
if < 0 (different sign) - label 0
Step6: Apply ghost prob cut
Step7: Leave not muons, kaons, electrons, protons, pions
Step8: Tracks sum of charges asymmetry checks
Step9: Calculating tagging efficiency ($\epsilon_{tag}$)
$$N (\text{passed selection}) = \sum_{\text{passed selection}} sw_i$$
$$N (\text{all events}) = \sum_{\text{all events}} sw_i,$$
where $sw_i$ is the sPlot weight (sWeight for signal)
$$\epsilon_{tag} = \frac{N (\text{passed selection})} {N (\text{all events})}$$
$$\Delta\epsilon_{tag} = \frac{\sqrt{\epsilon_{tag}(1-\epsilon_{tag}) \sum_{\text{all events}}sw_i^2}} {N (\text{all events})}$$
Not all events are available (some selections are applied beforehand); that is why we used
$$\Delta\epsilon_{tag} = \frac{\sqrt{N (\text{passed selection})}} {N (\text{all events})},$$
which is similar to the previous definition
Step10: Choose most probable B-events
Step11: Define B-like events for training
Events with low sWeight still will be used only to test quality.
Step12: Main idea
Step13: PID pairs scatters
Step14: pt
Step15: count of tracks
Step16: PIDs histograms
Step17: Train to distinguish same sign vs opposite sign
Step18: DT
Step19: Calibration
Step20: Implementing best tracking
Step21: MC trained algorithm testing | <ASSISTANT_TASK:>
Python Code:
import pandas
import numpy
from folding_group import FoldingGroupClassifier
from rep.data import LabeledDataStorage
from rep.report import ClassificationReport
from rep.report.metrics import RocAuc
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_curve, roc_auc_score
from utils import get_N_B_events, get_events_number, get_events_statistics
Explanation: Import
End of explanation
import root_numpy
data_nan = pandas.DataFrame(root_numpy.root2array('datasets/data/csv/JPsiK/Tracks.root'))
data_nan.head()
event_id_column = 'event_id'
event_id = data_nan.run.apply(str) + '_' + data_nan.event.apply(str)
data_nan['group_column'] = numpy.unique(event_id, return_inverse=True)[1]
data_nan[event_id_column] = event_id
get_events_statistics(data_nan)
get_N_B_events()
Explanation: Reading initial data
End of explanation
data = data_nan.dropna()
len(data_nan), len(data), get_events_statistics(data)
Explanation: Remove rows with NAN from data
End of explanation
from utils import add_diff_pt
# add diff pt
add_diff_pt(data)
# add cos(diff_phi)
data['cos_diff_phi'] = numpy.cos(data.diff_phi.values)
Explanation: Add diff_pt and cos(diff_phi)
End of explanation
from itertools import combinations
PIDs = {'k': data.PIDNNk.values,
'e': data.PIDNNe.values,
'mu': data.PIDNNm.values,
}
for (pid_name1, pid_values1), (pid_name2, pid_values2) in combinations(PIDs.items(), 2):
data.loc[:, 'max_PID_{}_{}'.format(pid_name1, pid_name2)] = numpy.maximum(pid_values1, pid_values2)
data.loc[:, 'sum_PID_{}_{}'.format(pid_name1, pid_name2)] = pid_values1 + pid_values2
Explanation: Add max, sum among PIDs
End of explanation
data.loc[:, 'label'] = (data.signB.values * data.signTrack.values > 0) * 1
', '.join(data.columns)
Explanation: define label = signB * signTrack
if > 0 (same sign) - label 1
if < 0 (different sign) - label 0
End of explanation
initial_cut = '(ghostProb < 0.4)'
data = data.query(initial_cut)
get_events_statistics(data)
Explanation: Apply ghost prob cut
End of explanation
threshold_kaon = 0.
threshold_muon = 0.
threshold_electron = 0.
threshold_pion = 0.
threshold_proton = 0.
cut_pid = " ( (PIDNNk > {trk}) | (PIDNNm > {trm}) | (PIDNNe > {tre}) | (PIDNNpi > {trpi}) | (PIDNNp > {trp})) "
cut_pid = cut_pid.format(trk=threshold_kaon, trm=threshold_muon, tre=threshold_electron, trpi=threshold_pion,
trp=threshold_proton)
data = data.query(cut_pid)
get_events_statistics(data)
Explanation: Leave not muons, kaons, electrons, protons, pions
End of explanation
from utils import compute_sum_of_charges
means = [compute_sum_of_charges(data[mask], name, bins=bins,
event_id_column=event_id_column) for mask, name, bins in \
zip([data.signB > -100,
(data.IPs > 3) & ((abs(data.diff_eta) > 0.6) | (abs(data.diff_phi) > 0.825)),
(abs(data.diff_eta) < 0.6) & (abs(data.diff_phi) < 0.825) & (data.IPs < 3)],
['full', 'OS', 'SS'], [21, 21, 21])]
Explanation: Tracks sum of charges asymmetry checks
End of explanation
N_B_passed = float(get_events_number(data))
tagging_efficiency = N_B_passed / get_N_B_events()
tagging_efficiency_delta = sqrt(N_B_passed) / get_N_B_events()
tagging_efficiency, tagging_efficiency_delta
hist(data.diff_pt.values, bins=100)
pass
Explanation: Calculating tagging efficiency ($\epsilon_{tag}$)
$$N (\text{passed selection}) = \sum_{\text{passed selection}} sw_i$$
$$N (\text{all events}) = \sum_{\text{all events}} sw_i,$$
where $sw_i$ - sPLot weight (sWeight for signal)
$$\epsilon_{tag} = \frac{N (\text{passed selection})} {N (\text{all events})}$$
$$\Delta\epsilon_{tag} = \frac{\sqrt{\epsilon_{tag}(1-\epsilon_{tag}) \sum_{\text{all events}}sw_i^2}} {N (\text{all events})}$$
Not all events are available (some selections were applied before); that is why we used
$$\Delta\epsilon_{tag} = \frac{\sqrt{N (\text{passed selection})}} {N (\text{all events})},$$
which is similar to the previous definition
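As a rough worked example of these formulas (the numbers below are made up, not taken from this dataset):
N_passed, N_all = 80000., 1000000.       # hypothetical sums of sWeights
eps_tag = N_passed / N_all               # 0.08
delta_eps_tag = N_passed ** 0.5 / N_all  # ~2.8e-4
eps_tag, delta_eps_tag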
End of explanation
_, take_indices = numpy.unique(data[event_id_column], return_index=True)
figure(figsize=[15, 5])
subplot(1, 2, 1)
hist(data.Bmass.values[take_indices], bins=100)
title('B mass hist')
xlabel('mass')
subplot(1, 2, 2)
hist(data.N_sig_sw.values[take_indices], bins=100, normed=True)
title('sWeights hist')
xlabel('signal sWeights')
plt.savefig('img/Bmass_less_PID.png' , format='png')
Explanation: Choose most probable B-events
End of explanation
sweight_threshold = 1.
data_sw_passed = data[data.N_sig_sw > sweight_threshold]
data_sw_not_passed = data[data.N_sig_sw <= sweight_threshold]
get_events_statistics(data_sw_passed)
_, take_indices = numpy.unique(data_sw_passed[event_id_column], return_index=True)
figure(figsize=[15, 5])
subplot(1, 2, 1)
hist(data_sw_passed.Bmass.values[take_indices], bins=100)
title('B mass hist for sWeight > 1 selection')
xlabel('mass')
subplot(1, 2, 2)
hist(data_sw_passed.N_sig_sw.values[take_indices], bins=100, normed=True)
title('sWeights hist for sWeight > 1 selection')
xlabel('signal sWeights')
plt.savefig('img/Bmass_selected_less_PID.png' , format='png')
hist(data_sw_passed.diff_pt.values, bins=100)
pass
Explanation: Define B-like events for training
Events with low sWeight will still be used, but only to test quality.
End of explanation
features = list(set(data.columns) - {'index', 'run', 'event', 'i', 'signB', 'signTrack', 'N_sig_sw', 'Bmass', 'mult',
'PIDNNp', 'PIDNNpi', 'label', 'thetaMin', 'Dist_phi', event_id_column,
'mu_cut', 'e_cut', 'K_cut', 'ID', 'diff_phi', 'group_column'})
features
Explanation: Main idea:
find tracks which can help reconstruct the sign of the B if you know the track sign.
label = signB * signTrack
* the highest output means that the B has the same sign as the track
* the lowest output means that the B has the opposite sign to the track
Define features
End of explanation
figure(figsize=[15, 16])
bins = 60
step = 3
for i, (feature1, feature2) in enumerate(combinations(['PIDNNk', 'PIDNNm', 'PIDNNe', 'PIDNNp', 'PIDNNpi'], 2)):
subplot(4, 3, i + 1)
Z, (x, y) = numpy.histogramdd(data_sw_passed[[feature1, feature2]].values, bins=bins, range=([0, 1], [0, 1]))
pcolor(numpy.log(Z).T, vmin=0)
xlabel(feature1)
ylabel(feature2)
xticks(numpy.arange(bins, step), x[::step]), yticks(numpy.arange(bins, step), y[::step])
plt.savefig('img/PID_selected_less_PID.png' , format='png')
Explanation: PID pairs scatters
End of explanation
hist(data_sw_passed.diff_pt.values, bins=60, normed=True)
pass
Explanation: pt
End of explanation
figure(figsize=(20, 6))
subplot(1, 2, 1)
_, n_tracks = numpy.unique(data_sw_passed[event_id_column], return_counts=True)
hist(n_tracks, bins=100)
title('Number of tracks for events with sWeight > 1')
subplot(1, 2, 2)
_, n_tracks_all = numpy.unique(data[event_id_column], return_counts=True)
hist(n_tracks_all, bins=106)
title('Number of tracks')
plt.savefig('img/tracks_number_less_PID.png' , format='png')
Explanation: count of tracks
End of explanation
figure(figsize=[15, 4])
for i, column in enumerate(['PIDNNm', 'PIDNNe', 'PIDNNk']):
subplot(1, 3, i + 1)
hist(data_sw_passed[column].values, bins=60, range=(0, 1), label=column)
legend()
Explanation: PIDs histograms
End of explanation
from decisiontrain import DecisionTrainClassifier
from rep.estimators import SklearnClassifier
from hep_ml.losses import LogLossFunction
data_sw_passed_lds = LabeledDataStorage(data_sw_passed, data_sw_passed.label.values, data_sw_passed.N_sig_sw.values)
Explanation: Train to distinguish same sign vs opposite sign
End of explanation
tt_base = DecisionTrainClassifier(learning_rate=0.1, n_estimators=3000, depth=6,
max_features=15, n_threads=14, loss=LogLossFunction(regularization=100))
tt_folding = FoldingGroupClassifier(SklearnClassifier(tt_base), n_folds=2, random_state=11,
train_features=features, group_feature='group_column')
%time tt_folding.fit_lds(data_sw_passed_lds)
pass
import cPickle
with open('models/dt_full_group.pkl', 'w') as f:
cPickle.dump(tt_folding, f)
# import cPickle
# with open('models/dt_full_group.pkl', 'r') as f:
# tt_folding = cPickle.load(f)
comparison_report = tt_folding.test_on_lds(data_sw_passed_lds)
comparison_report.compute_metric(RocAuc())
comparison_report.roc()
lc = comparison_report.learning_curve(RocAuc(), steps=1)
lc
comparison_report.feature_importance()
Explanation: DT
End of explanation
from utils import get_result_with_bootstrap_for_given_part
result = get_result_with_bootstrap_for_given_part(tagging_efficiency, tagging_efficiency_delta, tt_folding,
[data_sw_passed, data_sw_not_passed], 'tt-log', get_N_B_events(),
logistic=True, n_calibrations=30)
result
import utils
reload(utils)
from utils import get_result_with_bootstrap_for_given_part
result = get_result_with_bootstrap_for_given_part(tagging_efficiency, tagging_efficiency_delta, tt_folding,
[data_sw_passed, data_sw_not_passed], 'tt-log', get_N_B_events(),
logistic=True, n_calibrations=1)
result
result.to_csv('img/tracks.csv', index=False, header=True)
Explanation: Calibration
End of explanation
from utils import prepare_B_data_for_given_part
Bdata_prepared = prepare_B_data_for_given_part(tt_folding, [data_sw_passed, data_sw_not_passed],
get_N_B_events(), logistic=True)
Bdata_prepared.to_csv('models/Bdata_tracks.csv', header=True, index=False)
Explanation: Implementing best tracking
End of explanation
from utils import estimate_algorithm
import cPickle
with open('models/dt_MC.pkl', 'r') as f:
tt_folding_MC = cPickle.load(f)
with open('models/calibrator_tracks_MC.pkl', 'r') as f:
calibrator_tracks_MC = cPickle.load(f)
with open('models/calibrator_B_MC.pkl', 'r') as f:
calibrator_B_MC = cPickle.load(f)
p_MC = tt_folding_MC.predict_proba(data)[:, 1]
roc_auc_score(data.label, p_MC, sample_weight=data.N_sig_sw.values.astype(float64))
estimate_algorithm(tt_folding_MC, calibrator_tracks_MC, calibrator_B_MC, data, get_N_B_events())
estimate_algorithm(tt_folding_MC, calibrator_tracks_MC, calibrator_B_MC, data, get_N_B_events(), calib_part_itself=True,
calib_itself=True)
Explanation: MC trained algorithm testing
End of explanation
<END_TASK> |
12 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Feature selection
Step1: Our first step is to count up all of the words in each of the documents. This conditional frequency distribution should look familiar by now. | <ASSISTANT_TASK:>
Python Code:
documents = nltk.corpus.PlaintextCorpusReader('../data/EmbryoProjectTexts/files', 'https.+')
metadata = zotero.read('../data/EmbryoProjectTexts', index_by='link', follow_links=False)
Explanation: Feature selection: keywords
A major problem area in text mining is determining the thematic or topical content of texts. One of the most basic problems in this area is to identify the terms in a text -- "keywords" -- that most accurately represent its distinctive thematic characteristics.
In this notebook, we will use Dunning's log-likelihood statistic to identify keywords for individual documents in a collection of texts. It is fairly typical that methods used for statistical analysis are also used for information extraction and classification.
We'll use the Embryo Project corpus from earlier notebooks. Recall that the plain text documents are stored separately from their metadata -- this is the format that you would expect from a Zotero RDF export.
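For reference, Dunning's log-likelihood statistic for a 2x2 contingency table with observed counts $O_{ij}$ and expected counts $E_{ij}$ is
$$G^2 = 2 \sum_{ij} O_{ij} \ln\frac{O_{ij}}{E_{ij}};$$
later in this section keyness is actually scored with scikit-learn's chi2 function, which plays a similar role here.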
End of explanation
wordcounts_per_document = nltk.ConditionalFreqDist([
(fileid, normalize_token(token))
for fileid in documents.fileids()
for token in documents.words(fileids=[fileid])
if filter_token(token)
])
from scipy import sparse
# We pick a single "focal" document that we want to characterize.
focal_fileid = documents.fileids()[3]
# Since this procedure will involve numerical matrices, we
# need to map documents and words onto row and column indices.
# These "dictionaries" will help us to keep track of those
# mappings.
document_index = {} # Maps int -> fileid (str).
vocabulary = {} # Maps int -> word (str).
lookup = {} # Reverse map for vocabulary (word (str) -> int).
# Containers for sparse data.
I = [] # Document vector.
J = [] # Word vector.
data = [] # Word count vector.
labels = [] # Vector of labels; either the URI of interest, or "Other".
# Here we transform the ConditionalFrequencyDist into three vectors (I, J, data)
# that sparsely describe the document-word count matrix.
for i, (fileid, counts) in enumerate(wordcounts_per_document.iteritems()):
document_index[i] = fileid
for token, count in counts.iteritems():
# Removing low-frequency terms is optional, but speeds things up
# quite a bit for this demonstration.
if count < 3:
continue
        # get() lets us reuse an existing index for a token, or assign the next free one.
j = lookup.get(token, len(vocabulary))
vocabulary[j] = token
lookup[token] = j
I.append(i)
J.append(j)
data.append(count)
labels.append(fileid if fileid == focal_fileid else 'Other')
print '\r', i,
sparse_matrix = sparse.coo_matrix((data, (I, J)))
sparse_matrix.shape
from sklearn.feature_selection import chi2
from sklearn.feature_extraction.text import CountVectorizer
keyness, _ = chi2(sparse_matrix, labels)
ranking = np.argsort(keyness)[::-1]
_, words = zip(*sorted(vocabulary.items(), key=lambda i: i[0]))
words = np.array(words)
keywords = words[ranking]
zip(keywords[:20], keyness[ranking][:20])
def extract_keywords(fileid, n=20):
print '\r', fileid,
document_index = {} # Maps int -> fileid (str).
vocabulary = {} # Maps int -> word (str).
lookup = {} # Reverse map for vocabulary (word (str) -> int).
I = []
J = []
data = []
labels = []
for i, (key, counts) in enumerate(wordcounts_per_document.iteritems()):
document_index[i] = key
for token, count in counts.iteritems():
if count < 3:
continue
j = lookup.get(token, len(vocabulary))
vocabulary[j] = token
lookup[token] = j
I.append(i)
J.append(j)
data.append(count)
labels.append(key if key == fileid else 'Other')
sparse_matrix = sparse.coo_matrix((data, (I, J)))
keyness, _ = chi2(sparse_matrix, labels)
ranking = np.argsort(keyness)[::-1]
_, words = zip(*sorted(vocabulary.items(), key=lambda i: i[0]))
words = np.array(words)
keywords = words[ranking]
return keywords[:n]
keywords = [extract_keywords(fileid) for fileid in documents.fileids()]
Explanation: Our first step is to count up all of the words in each of the documents. This conditional frequency distribution should look familiar by now.
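A quick way to eyeball the counts in the conditional frequency distribution built above (the fileid below is just the first document; any value from documents.fileids() works, assuming an NLTK version whose FreqDist provides most_common):
fileid = documents.fileids()[0]
wordcounts_per_document[fileid].most_common(10)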
End of explanation
<END_TASK> |
13 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Index
Work
WorkOk
WorkSnowball
Forward Snowballing
Other
WorkUnrelated
WorkNoFile
WorkLang
Step1: Work
Since this is the default class, it is used as a safety check.
No work should be stored with this class at the end of the snowballing.
If the work has this class, we have not yet decided whether the work is related or not.
Step2: WorkOk
This class is used for related work that has not been explored with a backward snowballing yet.
Step3: WorkSnowball
This class is used for related work that has been explored with backward snowballing.
Step4: Forward Snowballing
Use the attribute .snowball to record when the last snowball was performed.
The query searches for WorkOk and WorkSnowball entries with outdated snowball attributes.
Step5: Other
WorkUnrelated
Number of work unrelated to the snowballing
Step6: WorkNoFile
Number of work without files
Step7: WorkLang
Number of work in foreign languages | <ASSISTANT_TASK:>
Python Code:
import database
from datetime import datetime
from snowballing.operations import load_work, reload
from snowballing.jupyter_utils import work_button, idisplay
reload()
Explanation: Index
Work
WorkOk
WorkSnowball
Forward Snowballing
Other
WorkUnrelated
WorkNoFile
WorkLang
End of explanation
reload()
query = [idisplay(work_button(w)) for w in load_work() if w._category == "work"]
len(query)
Explanation: Work
Since this is the default class, it is used as a safety check.
No work should be stored with this class at the end of the snowballing.
If the work has this class, we have not yet decided whether the work is related or not.
End of explanation
reload()
query = [idisplay(work_button(w)) for w in load_work() if w._category == "ok"]
len(query)
Explanation: WorkOk
This class is used for related work that has not been explored with a backward snowballing yet.
End of explanation
reload()
query = [idisplay(work_button(w)) for w in load_work() if w._category == "snowball"]
len(query)
Explanation: WorkSnowball
This class is used for related work that has been explored with backward snowballing.
End of explanation
current_snowball = datetime(2017, 7, 26)
reload()
query = [
idisplay(work_button(w))
for w in load_work()
if w._category in ("ok", "snowball")
if not hasattr(w, "_snowball")
or w._snowball < current_snowball
]
len(query)
Explanation: Forward Snowballing
Use the attribute .snowball to record when the last snowball was performed.
The query searches for WorkOk and WorkSnowball entries with outdated snowball attributes.
End of explanation
sum(
1
for w in load_work()
if w._category == "unrelated"
)
Explanation: Other
WorkUnrelated
Number of work unrelated to the snowballing
End of explanation
sum(
1
for w in load_work()
if w._category == "nofile"
)
Explanation: WorkNoFile
Number of work without files
End of explanation
sum(
1
for w in load_work()
if w._category == "lang"
)
Explanation: WorkLang
Number of work in foreign languages
End of explanation
<END_TASK> |
14 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Unity ML Agents
Environment Basics
This notebook contains a walkthrough of the basic functions of the Python API for Unity ML Agents. For instructions on building a Unity environment, see here.
1. Load dependencies
Step1: 2. Set environment parameters
Be sure to set env_name to the name of the Unity environment file you want to launch.
Step2: 3. Start the environment
UnityEnvironment launches and begins communication with the environment when instantiated.
Environments contain brains which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
Step3: 4. Examine the observation and state spaces
We can reset the environment to be provided with an initial set of observations and states for all the agents within the environment. In ML-Agents, states refer to a vector of variables corresponding to relevant aspects of the environment for an agent. Likewise, observations refer to a set of relevant pixel-wise visuals for an agent.
Step4: 5. Take random actions in the environment
Once we restart an environment, we can step the environment forward and provide actions to all of the agents within the environment. Here we simply choose random actions based on the action_space_type of the default brain.
Step5: 6. Close the environment when finished
When we are finished using an environment, we can close it with the function below. | <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
from unityagents import UnityEnvironment
%matplotlib inline
Explanation: Unity ML Agents
Environment Basics
This notebook contains a walkthrough of the basic functions of the Python API for Unity ML Agents. For instructions on building a Unity environment, see here.
1. Load dependencies
End of explanation
env_name = "3DBall" # Name of the Unity environment binary to launch
train_mode = True # Whether to run the environment in training or inference mode
Explanation: 2. Set environment parameters
Be sure to set env_name to the name of the Unity environment file you want to launch.
End of explanation
env = UnityEnvironment(file_name=env_name)
# Examine environment parameters
print(str(env))
# Set the default brain to work with
default_brain = env.brain_names[0]
brain = env.brains[default_brain]
Explanation: 3. Start the environment
UnityEnvironment launches and begins communication with the environment when instantiated.
Environments contain brains which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.
End of explanation
# Reset the environment
env_info = env.reset(train_mode=train_mode)[default_brain]
# Examine the state space for the default brain
print("Agent state looks like: \n{}".format(env_info.states[0]))
# Examine the observation space for the default brain
for observation in env_info.observations:
print("Agent observations look like:")
if observation.shape[3] == 3:
plt.imshow(observation[0,:,:,:])
else:
plt.imshow(observation[0,:,:,0])
Explanation: 4. Examine the observation and state spaces
We can reset the environment to be provided with an initial set of observations and states for all the agents within the environment. In ML-Agents, states refer to a vector of variables corresponding to relevant aspects of the environment for an agent. Likewise, observations refer to a set of relevant pixel-wise visuals for an agent.
End of explanation
for episode in range(10):
env_info = env.reset(train_mode=train_mode)[default_brain]
done = False
episode_rewards = 0
while not done:
if brain.action_space_type == 'continuous':
env_info = env.step(np.random.randn(len(env_info.agents),
brain.action_space_size))[default_brain]
else:
env_info = env.step(np.random.randint(0, brain.action_space_size,
size=(len(env_info.agents))))[default_brain]
episode_rewards += env_info.rewards[0]
done = env_info.local_done[0]
print("Total reward this episode: {}".format(episode_rewards))
Explanation: 5. Take random actions in the environment
Once we restart an environment, we can step the environment forward and provide actions to all of the agents within the environment. Here we simply choose random actions based on the action_space_type of the default brain.
End of explanation
env.close()
Explanation: 6. Close the environment when finished
When we are finished using an environment, we can close it with the function below.
End of explanation
<END_TASK> |
15 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<div Style="text-align
Step1: Get our classifier
Step2: Get the example image
Step3: Define some necessary functions | <ASSISTANT_TASK:>
Python Code:
# Imports
import pickle
%matplotlib inline
# to draw plots inside the notebook itself
import numpy as np  # numpy as np
import matplotlib.pyplot as plt  # matplotlib as plt
from skimage import io
from skimage.transform import rescale
from skimage.color import rgb2gray
from skimage.io import imshow
from skimage.feature import daisy
Explanation: <div Style="text-align: center;line-height: 30px;font-size:32px;font-weight: bold"> Automatic phytolith recognition</div>
In this notebook we will test phytolith recognition on an example image, in order to easily check the potential of our classifier.
End of explanation
path = '../../rsc/obj/'
cls_path = path + 'cls.sav'
cluster_path = path + 'cluster.sav'
cls = pickle.load(open(cls_path, 'rb'))
cluster = pickle.load(open(cluster_path, 'rb'))
Explanation: Get our classifier
End of explanation
img_path = '../../rsc/img/Default/2017_5_17_17_54Image_746.jpg'
# img_path = '../../rsc/img/Default/2017_5_17_18_17Image_803.jpg'
# img_path = '../../rsc/img/Default/2017_5_17_16_38Image_483.jpg'
# img_path = '../../rsc/img/Default/2017_5_17_18_9Image_7351.jpg'
# img_path = '../../rsc/img/Default/2017_5_17_15_27Image_100.jpg'
image = rescale(io.imread(img_path), 0.5)
image = rgb2gray(image)
imshow(image)
Explanation: Get the example image
End of explanation
def predict_image(imgTest):
global cluster
global cls
num_centers = len(cluster.cluster_centers_)
testInstances = []
features = daisy(imgTest)
numFils, numCols, sizeDesc = features.shape
features = features.reshape((numFils*numCols,sizeDesc))
pertenencias=cluster.predict(features)
    # extract the histogram
bovw_representation, _ = np.histogram(pertenencias, bins=num_centers, range=(0,num_centers-1))
    # add to the final training set
testInstances.append(bovw_representation)
testInstances = np.array(testInstances)
return cls.predict_proba(testInstances)
def sliding_window(image, alto,ancho, stepSize):
# slide a window across the image
for y in range(0, image.shape[0], stepSize):
for x in range(0, image.shape[1], stepSize):
# yield the current window
#print(alto,ancho)
#print(image[y:y + alto, x:x + ancho].shape)
yield (x, y, image[y:y + alto, x:x + ancho])
def predict_window(img):
probs = predict_image(img)[0]
#print(probs,probs[0]>0.5)
#if probs[0]>0.5:
# plt.imshow(img)
return probs[1]
#stepSize=60
stepSize=40
# stepSize=70
# alto=280
# ancho=280
alto=150
ancho=150
windowSize = (alto,ancho)
def predict_sliding_window(image, alto,ancho, stepSize):
#print(alto,ancho, stepSize)
probs = []
predichas = []
i = 0
# loop over the sliding window for each layer of the pyramid
n_winds = 0
for (x, y, window) in sliding_window(image, alto, ancho, stepSize):
#print(window.shape,alto,ancho)
if window.shape[0] != alto or window.shape[1] != ancho:
continue
n_winds+=1
print("hay ",n_winds," ventanas")
for (x, y, window) in sliding_window(image, alto,ancho, stepSize):
# if the window does not meet our desired window size, ignore it
if window.shape[0] != alto or window.shape[1] != ancho:
continue
i+=1
if i%10==0:
print("Procesada ventana "+str(i)+" de "+str(n_winds),end="\r")
# THIS IS WHERE YOU WOULD PROCESS YOUR WINDOW, SUCH AS APPLYING A
# MACHINE LEARNING CLASSIFIER TO CLASSIFY THE CONTENTS OF THE
# WINDOW
prob = predict_window(window)
if prob>0.5:
probs.append(prob)
# x1 ,y1, x2, y2
box = (x, y, x+ancho, y+alto)
predichas.append(box)
return probs,np.array(predichas)
def non_max_suppression(boxes, probs=None, overlapThresh=0.3):
# if there are no boxes, return an empty list
if len(boxes) == 0:
return []
# if the bounding boxes are integers, convert them to floats -- this
# is important since we'll be doing a bunch of divisions
if boxes.dtype.kind == "i":
boxes = boxes.astype("float")
# initialize the list of picked indexes
pick = []
# grab the coordinates of the bounding boxes
x1 = boxes[:, 0]
y1 = boxes[:, 1]
x2 = boxes[:, 2]
y2 = boxes[:, 3]
# compute the area of the bounding boxes and grab the indexes to sort
# (in the case that no probabilities are provided, simply sort on the
# bottom-left y-coordinate)
area = (x2 - x1 + 1) * (y2 - y1 + 1)
idxs = y2
# if probabilities are provided, sort on them instead
if probs is not None:
idxs = probs
# sort the indexes
idxs = np.argsort(idxs)
# keep looping while some indexes still remain in the indexes list
while len(idxs) > 0:
# grab the last index in the indexes list and add the index value
# to the list of picked indexes
last = len(idxs) - 1
i = idxs[last]
pick.append(i)
# find the largest (x, y) coordinates for the start of the bounding
# box and the smallest (x, y) coordinates for the end of the bounding
# box
xx1 = np.maximum(x1[i], x1[idxs[:last]])
yy1 = np.maximum(y1[i], y1[idxs[:last]])
xx2 = np.minimum(x2[i], x2[idxs[:last]])
yy2 = np.minimum(y2[i], y2[idxs[:last]])
# compute the width and height of the bounding box
w = np.maximum(0, xx2 - xx1 + 1)
h = np.maximum(0, yy2 - yy1 + 1)
# compute the ratio of overlap
overlap = (w * h) / area[idxs[:last]]
# delete all indexes from the index list that have overlap greater
# than the provided overlap threshold
idxs = np.delete(idxs, np.concatenate(([last],
np.where(overlap > overlapThresh)[0])))
# return only the bounding boxes that were picked
return boxes[pick].astype("int")
probs0,predichas0 = predict_sliding_window(image, alto,ancho, stepSize)
probs0 = np.array(probs0)
predichas0 = np.array(predichas0)
predichas0 = predichas0[probs0>0.995]
probs0 = probs0[probs0>0.995]
boxes = non_max_suppression(predichas0, probs=probs0, overlapThresh=0.3)
print(boxes)
import matplotlib.patches as patches
fig = plt.figure(figsize=(5, 5))
ax = fig.add_subplot(111)
ax.imshow(image, cmap=plt.get_cmap('gray'))
for box in boxes:
# Create a Rectangle patch
rect = patches.Rectangle((box[0],box[1]),
box[2]-box[0],
box[3]-box[1],
linewidth=1,edgecolor='g',facecolor='none')
# Add the patch to the Axes
ax.add_patch(rect)
Explanation: Define some necessary functions
End of explanation
<END_TASK> |
16 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Test the pretrained model
Get the game screen as the input, and use the virtual joystick to steer the wheel.
Step1: Load pretrained model, and check performance for each forward pass.
The performance needs to be checked, because it determines the fps of our program.
Step2: load pretrained model
Step3: Get sample image, to find out the performance of our model
Step4: Be aware that the performance of the model can affect the performance of the overall program, since it may not react quickly enough to the changing environment.
Setup virtual joystick.
Make sure to set up the virtual joystick inside ETS2, in the controller settings.
Get the coordinates for the game screen. The values may be different per game setting.
Step5: As seen below, a virtual device can now be detected.
Step6: Set up game loop | <ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('../')
from europilot.screen import stream_local_game_screen
from europilot.screen import Box
from europilot.joystick import LinuxVirtualJoystick
Explanation: Test the pretrained model
Get the game screen as the input, and use the virtual joystick to steer the wheel.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
import os
import numpy as np
from PIL import Image
from time import time
import keras
from keras.models import load_model
from keras.preprocessing import image
parent_path = os.path.dirname(os.getcwd())
model_path = os.path.join(parent_path, 'model')
# multiply by constant to undo normalization
OUTPUT_NORMALIZATION = 655.35
# limit GPU memory usage
import tensorflow as tf
from keras.backend.tensorflow_backend import set_session
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5
set_session(tf.Session(config=config))
Explanation: Load pretrained model, and check performance for each forward pass.
The performance needs to be checked, because it determines the fps of our program.
End of explanation
%ls $model_path
model = load_model(os.path.join(model_path, 'v5-PilotNet_v2-009-0.67221.h5'))
Explanation: load pretrained model
End of explanation
%ls ../sample/img/raw
sample_img = Image.open(os.path.join('../sample/img/raw', '9d0c3c2b_2017_07_27_14_55_08_16.jpg')).convert('RGB')
front_coord = (289,167, 851, 508)
sample_img_front = sample_img.crop(front_coord)
plt.imshow(sample_img_front)
sample_arr = image.img_to_array(sample_img_front)
sample_arr = np.reshape(sample_arr, (1,) + sample_arr.shape)
model.predict(sample_arr, batch_size = 1)
start = time()
for i in range(100):
model.predict(sample_arr, batch_size = 1)
end = time()
fps = 100. / (end - start)
print("fps: %f" % fps)
Explanation: Get sample image, to find out the performance of our model
End of explanation
x1, y1 = (68, 54)
x2, y2 = (x1 + 1024, y1 + 768)
box = Box(x1, y1, x2, y2)
joy = LinuxVirtualJoystick()
Explanation: Be aware that the performance of the model can affect the performance of the overall program, since it may not react quickly enough to the changing environment.
Setup virtual joystick.
Make sure to set up the virtual joystick inside ETS2, in the controller settings.
Get the coordinates for the game screen. The values may be different per game setting.
End of explanation
!cat /proc/bus/input/devices
Explanation: As seen below, a virtual device can now be detected.
End of explanation
# values computed from dataset sample.
def normalize(img):
img[:,:,0] -= 89.5761
img[:,:,0] /= 58.4214
img[:,:,1] -= 97.5966
img[:,:,1] /= 61.7917
img[:,:,2] -= 88.3135
img[:,:,2] /= 68.2043
return img
def get_angle(predict):
angle = predict[0][0]
angle *= OUTPUT_NORMALIZATION
return int(angle)
streamer = stream_local_game_screen(box=box, default_fps=60)
while True:
image_data = next(streamer)
im = Image.fromarray(image_data)
img_front = im.crop(front_coord)
arr = image.img_to_array(img_front)
arr = normalize(arr)
arr = np.reshape(arr, (1,) + arr.shape)
angle = get_angle(model.predict(arr, batch_size = 1))
joy.emit(angle)
Explanation: Set up game loop
End of explanation
<END_TASK> |
17 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Ordinary differential equations
Euler method
Step1: To check correctness we are going to solve a simple differential equation
$$y' = (x + y)^2,\
y(0) = 0,\
[a, b] = [0, 0.5],\
h = 0.05
$$
The solution here is the function $y(x) = tan(x) - x$.
Step2: The next methods we are going to use come from the Runge-Kutta family. Actually, the Euler method is a special case of the Runge-Kutta methods.
Runge-Kutta methods family
We are actually going to try only two of the Runge-Kutta methods
Step3: Let's solve a slightly different equation
$$
y' = \frac{sin(x)}{y},\
y(0) = 1,\
[a, b] = [0, 5],\
h = 1.25
$$
A correct solution is $y = \sqrt{3 - 2cos(x)}$
Step4: Now let's move on to systems of differential equations.
Runge-Kutta methods for SDE
Step5: E.g., we have the system of differential equations
$$
y' = z,\
z' = \frac{2xz}{x^2+1},\
y(0) = 1,\
z(0) = 3
$$
Let's try to solve it using Runge-Kutta methods of order 4.
Step6: Predator-prey equation
$$
\frac{dx}{dt} = \alpha x - \beta xy\
\frac{dy}{dt} = \delta xy - \gamma y
$$
where $x$ is the population of prey and $y$ is the population of predators
Step7: Equilibrium
Let's look at population equilibrium
$$
y = \frac{\alpha}{\beta}\
x = \frac{\gamma}{\delta}
$$
we will take values close to these to show how it's approaching equilibrium | <ASSISTANT_TASK:>
Python Code:
def euler(f, x, y0):
h = x[1] - x[0]
y = np.empty_like(x)
y[0] = y0
for i in range(1, len(x)):
y[i] = y[i - 1] + h * f(x[i - 1], y[i - 1])
return y
Explanation: Ordinary differential equations
Euler method
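For a step size $h$, the update implemented in euler above is simply
$$y_i = y_{i-1} + h\, f(x_{i-1}, y_{i-1}).$$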
End of explanation
dy = lambda x, y: (x + y)**2  # y' = (x + y)^2; its solution is tan(x) - x
x = np.linspace(0, 0.5, 100)
y0 = 0
y = euler(dy, x, y0)
y_ans = np.tan(x) - x
plt.figure(figsize=(15, 10))
plt.plot(x, y, x, y_ans)
plt.legend(['euler', 'answer'], loc='best')
plt.xlabel('x')
plt.title('Euler method (Runge-Kutta 1-st order method)')
plt.show()
Explanation: To check correctness we are going to solve a simple differential equation
$$y' = (x + y)^2,\
y(0) = 0,\
[a, b] = [0, 0.5],\
h = 0.05
$$
The solution here is the function $y(x) = tan(x) - x$.
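A quick check of the stated analytic solution: if $y = \tan(x) - x$, then
$$y' = \sec^2(x) - 1 = \tan^2(x) = (x + y)^2,$$
so it satisfies the equation and the initial condition $y(0) = 0$.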
End of explanation
def runge_kutta3(f, x, y0):
h = x[1] - x[0]
y = np.empty_like(x)
y[0] = y0
for i in range(1, len(x)):
k1 = h * f(x[i - 1], y[i - 1])
k2 = h * f(x[i - 1] + h/3, y[i - 1] + k1/3)
k3 = h * f(x[i - 1] + 2*h/3, y[i - 1] + 2*k2/3)
y[i] = y[i - 1] + (k1 + 3*k3) / 4
return y
def runge_kutta4(f, x, y0):
h = x[1] - x[0]
y = np.empty_like(x)
y[0] = y0
for i in range(1, len(x)):
k1 = h * f(x[i - 1], y[i - 1])
k2 = h * f(x[i - 1] + h/2, y[i - 1] + k1/2)
k3 = h * f(x[i - 1] + h/2, y[i - 1] + k2/2)
k4 = h * f(x[i - 1] + h, y[i - 1] + k3)
y[i] = y[i - 1] + (k1 + 2*k2 + 2*k3 + k4) / 6
return y
Explanation: The next methods we are going to use come from the Runge-Kutta family. Actually, the Euler method is a special case of the Runge-Kutta methods.
Runge-Kutta methods family
We are actually going to try only two of the Runge-Kutta methods: RK3 and RK4.
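For reference, the classical 4th-order update implemented in runge_kutta4 is
$$k_1 = h f(x_{i-1}, y_{i-1}), \quad k_2 = h f(x_{i-1} + \tfrac{h}{2}, y_{i-1} + \tfrac{k_1}{2}),$$
$$k_3 = h f(x_{i-1} + \tfrac{h}{2}, y_{i-1} + \tfrac{k_2}{2}), \quad k_4 = h f(x_{i-1} + h, y_{i-1} + k_3),$$
$$y_i = y_{i-1} + \tfrac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4).$$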
End of explanation
dy = lambda x, y: np.sin(x) / y
x = np.linspace(0, 5, 4)
y0 = 1
y3 = runge_kutta3(dy, x, y0)
y4 = runge_kutta4(dy, x, y0)
y_ans = np.sqrt(3 - 2*np.cos(x))
plt.figure(figsize=(15, 10))
plt.plot(x, y3, x, y4, x, y_ans)
plt.legend(['rk3', 'rk4', 'ans'], loc='best')
plt.xlabel('x')
plt.title('Runge-Kutta 3-rd and 4-th order methods')
plt.show()
Explanation: Let's solve a slightly different equation
$$
y' = \frac{sin(x)}{y},\
y(0) = 1,\
[a, b] = [0, 5],\
h = 1.25
$$
A correct solution is $y = \sqrt{3 - 2cos(x)}$
End of explanation
def fmap(fs, x):
return np.array([f(*x) for f in fs])
def runge_kutta4_system(fs, x, y0):
h = x[1] - x[0]
y = np.empty((len(x), len(y0)))
y[0] = y0
for i in range(1, len(x)):
k1 = h * fmap(fs, [x[i - 1], *y[i - 1]])
k2 = h * fmap(fs, [x[i - 1] + h/2, *(y[i - 1] + k1/2)])
k3 = h * fmap(fs, [x[i - 1] + h/2, *(y[i - 1] + k2/2)])
k4 = h * fmap(fs, [x[i - 1] + h, *(y[i - 1] + k3)])
y[i] = y[i - 1] + (k1 + 2*k2 + 2*k3 + k4) / 6
return y
Explanation: Now let's move on to systems of differential equations.
Runge-Kutta methods for SDE
End of explanation
dy = lambda x, y, z: z
dz = lambda x, y, z: 2*x*z / (x*x + 1)
fs = [dy, dz]
x = np.linspace(0, 1, 10)
y0 = np.array([1, 3])
y = runge_kutta4_system(fs, x, y0)
plt.figure(figsize=(15, 10))
plt.plot(x, y[:, 0], x, y[:, 1])
plt.legend(['y(x)', 'z(x)'], loc='best')
plt.xlabel('x')
plt.title('Runge-Kutta 4-th order method for system of differential equations')
plt.show()
Explanation: E.g., we have the system of differential equations
$$
y' = z,\
z' = \frac{2xz}{x^2+1},\
y(0) = 1,\
z(0) = 3
$$
Let's try to solve it using Runge-Kutta methods of order 4.
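For this particular system the analytic solution is easy to derive: $z'/z = 2x/(x^2+1)$ with $z(0)=3$ gives $z = 3(x^2+1)$, and integrating $y' = z$ with $y(0)=1$ gives
$$y(x) = x^3 + 3x + 1, \qquad z(x) = 3x^2 + 3,$$
which can be used to check the numerical result.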
End of explanation
dx = lambda t, x, y: 2/3*x - 4/3*x*y
dy = lambda t, x, y: x*y - y
fs = [dx, dy]
t = np.linspace(0, 20, 500)
y0 = np.array([1, 2])
z = runge_kutta4_system(fs, t, y0)
plt.figure(figsize=(15, 10))
plt.plot(t, z[:, 0], t, z[:, 1])
plt.legend(['prey', 'predator'], loc='best')
plt.xlabel('time (sec)')
plt.ylabel('population')
plt.title('Lotka-Volterra equation')
plt.show()
plt.figure(figsize=(15, 10))
plt.plot(z[:, 0], z[:, 1])
plt.xlabel('prey')
plt.ylabel('predator')
plt.title('Parametric graph')
plt.show()
Explanation: Predator-prey equation
$$
\frac{dx}{dt} = \alpha x - \beta xy\
\frac{dy}{dt} = \delta xy - \gamma y
$$
where $x$ is the population of prey and $y$ is the population of predators
End of explanation
dx = lambda t, x, y: 2/3*x - 4/3*x*y
dy = lambda t, x, y: x*y - y
fs = [dx, dy]
t = np.linspace(0, 20, 500)
y0 = np.array([1, 101/200])
z = runge_kutta4_system(fs, t, y0)
plt.figure(figsize=(15, 10))
plt.plot(t, z[:, 0], t, z[:, 1])
plt.legend(['prey', 'predator'], loc='best')
plt.xlabel('time (sec)')
plt.ylabel('population')
plt.title('Lotka-Volterra equilibrium')
plt.show()
plt.figure(figsize=(15, 10))
plt.plot(z[:, 0], z[:, 1])
plt.xlabel('prey')
plt.ylabel('predator')
plt.title('Parametric graph of equilibrium')
plt.show()
Explanation: Equilibrium
Let's look at population equilibrium
$$
y = \frac{\alpha}{\beta}\
x = \frac{\gamma}{\delta}
$$
we will take values close to these to show how it's approaching equilibrium
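For the parameter values used in the code above ($\alpha = 2/3$, $\beta = 4/3$, $\gamma = \delta = 1$) the equilibrium can be computed directly; a small sketch:
alpha, beta, gamma, delta = 2/3., 4/3., 1., 1.
x_eq, y_eq = gamma / delta, alpha / beta  # (1.0, 0.5); the initial condition [1, 101/200] sits just off this point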
End of explanation
<END_TASK> |
18 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
.. _tut_io_export_pandas
Step1: Export DataFrame
Step2: Explore Pandas MultiIndex | <ASSISTANT_TASK:>
Python Code:
# Author: Denis Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import mne
import matplotlib.pyplot as plt
import numpy as np
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
raw = mne.io.read_raw_fif(raw_fname)
# For simplicity we will only consider the first 10 epochs
events = mne.read_events(event_fname)[:10]
# Add a bad channel
raw.info['bads'] += ['MEG 2443']
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, exclude='bads')
tmin, tmax = -0.2, 0.5
baseline = (None, 0)
reject = dict(grad=4000e-13, eog=150e-6)
event_id = dict(auditory_l=1, auditory_r=2, visual_l=3, visual_r=4)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True, picks=picks,
baseline=baseline, preload=True, reject=reject)
Explanation: .. _tut_io_export_pandas:
Export epochs to Pandas DataFrame
In this example the pandas exporter will be used to produce a DataFrame
object. After exploring some basic features a split-apply-combine
work flow will be conducted to examine the latencies of the response
maxima across epochs and conditions.
Note. Equivalent methods are available for raw and evoked data objects.
Short Pandas Primer
Pandas Data Frames
~~~~~~~~~~~~~~~~~~
A data frame can be thought of as a combination of matrix, list and dict:
It knows about linear algebra and element-wise operations but is size mutable
and allows for labeled access to its data. In addition, the pandas data frame
class provides many useful methods for restructuring, reshaping and visualizing
data. As most methods return data frame instances, operations can be chained
with ease; this allows to write efficient one-liners. Technically a DataFrame
can be seen as a high-level container for numpy arrays and hence switching
back and forth between numpy arrays and DataFrames is very easy.
Taken together, these features qualify data frames for inter operation with
databases and for interactive data exploration / analysis.
Additionally, pandas interfaces with the R statistical computing language that
covers a huge amount of statistical functionality.
Export Options
~~~~~~~~~~~~~~
The pandas exporter comes with a few options worth being commented.
Pandas DataFrame objects use a so called hierarchical index. This can be
thought of as an array of unique tuples, in our case, representing the higher
dimensional MEG data in a 2D data table. The column names are the channel names
from the epoch object. The channels can be accessed like entries of a
dictionary:
df['MEG 2333']
Epochs and time slices can be accessed with the .ix method:
epochs_df.ix[(1, 2), 'MEG 2333']
However, it is also possible to include this index as regular categorical data
columns which yields a long table format typically used for repeated measure
designs. To take control of this feature, on export, you can specify which
of the three dimensions 'condition', 'epoch' and 'time' is passed to the Pandas
index using the index parameter. Note that this decision is revertible any
time, as demonstrated below.
Similarly, for convenience, it is possible to scale the times, e.g. from
seconds to milliseconds.
Some Instance Methods
~~~~~~~~~~~~~~~~~~~~~
Most numpy methods and many ufuncs can be found as instance methods, e.g.
mean, median, var, std, mul, max, argmax, etc.
Below an incomplete listing of additional useful data frame instance methods:
apply : apply function to data.
Any kind of custom function can be applied to the data. In combination with
lambda this can be very useful.
describe : quickly generate summary stats
Very useful for exploring data.
groupby : generate subgroups and initialize a 'split-apply-combine' operation.
Creates a group object. Subsequently, methods like apply, agg, or transform
can be used to manipulate the underlying data separately but
simultaneously. Finally, reset_index can be used to combine the results
back into a data frame.
plot : wrapper around plt.plot
However it comes with some special options. For examples see below.
shape : shape attribute
gets the dimensions of the data frame.
values :
return underlying numpy array.
to_records :
export data as numpy record array.
to_dict :
export data as dict of arrays.
Reference
~~~~~~~~~
More information and additional introductory materials can be found at the
pandas doc sites: http://pandas.pydata.org/pandas-docs/stable/
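A toy illustration of hierarchical indexing, independent of the MEG data (a minimal sketch; .loc is used here instead of the older .ix accessor):
import pandas as pd
idx = pd.MultiIndex.from_product([[0, 1], [0.0, 0.1]], names=['epoch', 'time'])
toy = pd.DataFrame({'MEG 0001': [1, 2, 3, 4]}, index=idx)
toy['MEG 0001']  # column access by channel name
toy.loc[0]       # all rows belonging to epoch 0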
End of explanation
# The following parameters will scale the channels and times plotting
# friendly. The info columns 'epoch' and 'time' will be used as hierarchical
# index whereas the condition is treated as categorial data. Note that
# this is optional. By passing None you could also print out all nesting
# factors in a long table style commonly used for analyzing repeated measure
# designs.
index, scale_time, scalings = ['epoch', 'time'], 1e3, dict(grad=1e13)
df = epochs.to_data_frame(picks=None, scalings=scalings, scale_time=scale_time,
index=index)
# Create MEG channel selector and drop EOG channel.
meg_chs = [c for c in df.columns if 'MEG' in c]
df.pop('EOG 061') # this works just like with a list.
Explanation: Export DataFrame
End of explanation
# Pandas is using a MultiIndex or hierarchical index to handle higher
# dimensionality while at the same time representing data in a flat 2d manner.
print(df.index.names, df.index.levels)
# Inspecting the index object unveils that 'epoch', 'time' are used
# for subsetting data. We can take advantage of that by using the
# .ix attribute, where in this case the first position indexes the MultiIndex
# and the second the columns, that is, channels.
# Plot some channels across the first three epochs
xticks, sel = np.arange(3, 600, 120), meg_chs[:15]
df.ix[:3, sel].plot(xticks=xticks)
mne.viz.tight_layout()
# slice the time starting at t0 in epoch 2 and ending 500ms after
# the base line in epoch 3. Note that the second part of the tuple
# represents time in milliseconds from stimulus onset.
df.ix[(1, 0):(3, 500), sel].plot(xticks=xticks)
mne.viz.tight_layout()
# Note: For convenience the index was converted from floating point values
# to integer values. To restore the original values you can e.g. say
# df['times'] = np.tile(epoch.times, len(epochs_times)
# We now reset the index of the DataFrame to expose some Pandas
# pivoting functionality. To simplify the groupby operation we
# we drop the indices to treat epoch and time as categroial factors.
df = df.reset_index()
# The ensuing DataFrame then is split into subsets reflecting a crossing
# between condition and trial number. The idea is that we can broadcast
# operations into each cell simultaneously.
factors = ['condition', 'epoch']
sel = factors + ['MEG 1332', 'MEG 1342']
grouped = df[sel].groupby(factors)
# To make the plot labels more readable let's edit the values of 'condition'.
df.condition = df.condition.apply(lambda name: name + ' ')
# Now we compare the mean of two channels response across conditions.
grouped.mean().plot(kind='bar', stacked=True, title='Mean MEG Response',
color=['steelblue', 'orange'])
mne.viz.tight_layout()
# We can even accomplish more complicated tasks in a few lines calling
# apply method and passing a function. Assume we wanted to know the time
# slice of the maximum response for each condition.
max_latency = grouped[sel[2]].apply(lambda x: df.time[x.argmax()])
print(max_latency)
# Then make the plot labels more readable let's edit the values of 'condition'.
df.condition = df.condition.apply(lambda name: name + ' ')
plt.figure()
max_latency.plot(kind='barh', title='Latency of Maximum Response',
color=['steelblue'])
mne.viz.tight_layout()
# Finally, we will again remove the index to create a proper data table that
# can be used with statistical packages like statsmodels or R.
final_df = max_latency.reset_index()
final_df.rename(columns={0: sel[2]}) # as the index is oblivious of names.
# The index is now written into regular columns so it can be used as factor.
print(final_df)
plt.show()
# To save as csv file, uncomment the next line.
# final_df.to_csv('my_epochs.csv')
# Note. Data Frames can be easily concatenated, e.g., across subjects.
# E.g. say:
#
# import pandas as pd
# group = pd.concat([df_1, df_2])
# group['subject'] = np.r_[np.ones(len(df_1)), np.ones(len(df_2)) + 1]
Explanation: Explore Pandas MultiIndex
End of explanation
<END_TASK> |
19 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Committors and Shooting Point Analysis
There are several ways that we can calculate committors. This deals with two, in particular.
First, we frequently want to run a committor from some specific snapshots. For example, you have a transition trajectory, and you want a sense of where the barrier is. So you select 10-20 frames from the trajectory, and run each of them 10-20 times with random velocities to see which states they end up in. The first section of this document describes how to do that process for any list of snapshots.
Second, we frequently want to plot the committor as a function of some collective variable(s). In this case, we can histogram a bunch of committor shots according to the desired CVs. And, since every shooting move is essentially a committor trial, we can histogram all the shooting points from a normal path sampling calculation to obtain the committor landscape. This is what we do in the second section of this document.
Note that there are also tricks to get the committor based on the reweighted path ensemble, and if you're really trying to get the committor landscape in some collective variables, that will probably be the best approach. The approaches here are useful for simpler analyses and for troubleshooting while setting up path sampling simulations.
Simple Committor Run
The analysis we use for the committor can be used on shooting points in general, but often we just want to run a committor calculation directly. Here we set up a simple run in a situation where, if the forward extension is chosen, we always end up in the right state, and if the backward extension is chosen, we always end up in the left state.
Step1: In addition to the standard setup as above, we need a way to randomize the snapshots. For this simple example, we actually won't randomize them (NoModification), but typically we would assign totally random velocities from a Boltzmann distribution (RandomVelocities).
Step2: Now we set up the committor simulation and run it. In this example, we use a list of two snapshots. If you only want to do the committor from one snapshot, you don't have to wrap it in a list.
Step3: Now we do the analysis
Step4: Analyzing shooting points from a TIS calculation
The same analysis procedure can be applied to shooting points from an arbitrary TIS simulation.
Step5: Committor as a function of one variable
First we'll histogram the committor as a function of the distance from the center of state A. Recall that the order parameter we used is actually the square of the distance, so we take its square root to make the scale more reasonable.
Step6: Here we've used the order parameter associated with state A as our collective variable. However, it is important to keep in mind that the choice of collective variable is completely arbitrary. For example, we could do the same analysis by looking at the distance from the central point (0.0, 0.0). The plot would look different, since it is a different CV, but the procedure would remain the same.
This is important | <ASSISTANT_TASK:>
Python Code:
pes = toys.LinearSlope(m=[0.0], c=[0.0]) # flat line
topology = toys.Topology(n_spatial=1, masses=[1.0], pes=pes)
integrator = toys.LeapfrogVerletIntegrator(0.1)
options = {
'integ': integrator,
'n_frames_max': 1000,
'n_steps_per_frame': 1
}
engine = toys.Engine(options=options, topology=topology)
snap0 = toys.Snapshot(coordinates=np.array([[0.0]]),
velocities=np.array([[1.0]]),
engine=engine)
snap1 = toys.Snapshot(coordinates=np.array([[0.2]]),
velocities=np.array([[1.0]]),
engine=engine)
cv = paths.FunctionCV("Id", lambda snap : snap.coordinates[0][0])
# these are our states:
left = paths.CVDefinedVolume(cv, float("-inf"), -1.0).named("left")
right = paths.CVDefinedVolume(cv, 1.0, float("inf")).named("right")
# set up a file for storage
storage = paths.Storage("committor_test.nc", mode="w", template=snap0)
Explanation: Committors and Shooting Point Analysis
There are several ways that we can calculate committors. This deals with two, in particular.
First, we frequently want to run a committor from some specific snapshots. For example, you have a transition trajectory, and you want a sense of where the barrier is. So you select 10-20 frames from the trajectory, and run each of them 10-20 times with random velocities to see which states they end up in. The first section of this document describes how to do that process for any list of snapshots.
Second, we frequently want to plot the committor as a function of some collective variable(s). In this case, we can histogram a bunch of committor shots according to the desired CVs. And, since every shooting move is essentially a committor trial, we can histogram all the shooting points from a normal path sampling calculation to obtain the committor landscape. This is what we do in the second section of this document.
Note that there are also tricks to get the committor based on the reweighted path ensemble, and if you're really trying to get the committor landscape in some collective variables, that will probably be the best approach. The approaches here are useful for simpler analyses and for troubleshooting while setting up path sampling simulations.
Simple Committor Run
The analysis we use for the committor can be used on shooting points in general, but often we just want to run a committor calculation directly. Here we set up a simple run in a situation where, if the forward extension is chosen, we always end up in the right state, and if the backward extension is chosen, we always end up in the left state.
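Each snapshot's committor is just a binomial proportion over its trial shots; a minimal sketch with hypothetical counts (10 shots, 7 ending in the right state):
n_shots, n_right = 10, 7                       # hypothetical counts
p_B = n_right / float(n_shots)                 # committor estimate toward the right state
std_err = (p_B * (1 - p_B) / n_shots) ** 0.5   # binomial standard error
p_B, std_err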
End of explanation
## more typical:
#randomizer = paths.RandomVelocities(beta=1.0)
## for testing purposes:
randomizer = paths.NoModification()
Explanation: In addition to the standard setup as above, we need a way to randomize the snapshots. For this simple example, we actually won't randomize them (NoModification), but typically we would assign totally random velocities from a Boltzmann distribution (RandomVelocities).
End of explanation
simulation = paths.CommittorSimulation(storage=storage,
engine=engine,
states=[left, right],
randomizer=randomizer,
initial_snapshots=[snap0, snap1])
simulation.run(n_per_snapshot=10)
Explanation: Now we set up the committor simulation and run it. In this example, we use a list of two snapshots. If you only want to do the committor from one snapshot, you don't have to wrap it in a list.
End of explanation
results = paths.ShootingPointAnalysis(steps=storage.steps, states=[left, right])
results[snap0]
# prettier printing of the same
# first version uses number indexes to label snapshots
results.to_pandas()
# second version uses given label_function
results.to_pandas(label_function=cv)
Explanation: Now we do the analysis:
End of explanation
store2 = paths.AnalysisStorage("mstis.nc")
stateA = store2.volumes.find("A")
stateB = store2.volumes.find("B")
stateC = store2.volumes.find("C")
results = paths.ShootingPointAnalysis(store2.steps, [stateA, stateB, stateC])
len(results)
len(store2.steps)
Explanation: Analyzing shooting points from a TIS calculation
The same analysis procedure can be applied to shooting points from an arbitrary TIS simulation.
End of explanation
opA = store2.cvs['opA']
distA = lambda snap : np.sqrt(opA(snap))
bins = [0.0+0.05*i for i in range(31)]
hist, bins = results.committor_histogram(distA, stateA, bins)
plt.bar(left=bins[:-1], height=hist, width=[bins[i+1]-bins[i] for i in range(len(bins)-1)], log=True)
plt.xlim(0.0, 1.6);
Explanation: Committor as a function of one variable
First we'll histogram the committor as a function of the distance from the center of state A. Recall that the order parameter we used is actually the square of the distance, so we take its square root to make the scale more reasonable.
End of explanation
twoD_hash = lambda snap: (snap.xyz[0][0], snap.xyz[0][1])
bins = [-1.0+i*0.05 for i in range(41)]
hist, bins_x, bins_y = results.committor_histogram(twoD_hash, stateA, bins)
# when using pcolor, we need to transpose the histogram (hist.T)
plt.pcolor(bins_x, bins_y, hist.T)
plt.clim(0.0, 1.0)
plt.colorbar();
Explanation: Here we've used the order parameter associated with state A as our collective variable. However, it is important to keep in mind that the choice of collective variable is completely arbitrary. For example, we could do the same analysis by looking at the distance from the central point (0.0, 0.0). The plot would look different, since it is a different CV, but the procedure would remain the same.
This is important: because we have access to the whole snapshots, things that were not calculated during the data-generation phase can be calculated in the data-analysis phase.
Committor as a function of two variables
Frequently, we're interested in looking at the committor as a function of two collective variables. This also follows the same procedure, although it is important that the new hashing function must return a tuple, where previously it returned a float. (Technically, it needs to return a hashable object. Tuples are hashable; lists, for example, are not.)
End of explanation
<END_TASK> |
20 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
1 entity referent
self ("me")
addressee ("you here")
other ("somebody else")
2+ entity referent
self, addressee ("me and you here" / inclusive we)
self, other ("me and somebody else" / exclusive we)
addressee, addressee ("the two or more of you here")
addressee, other ("one of you here and somebody else")
other, other ("the two or more of them")
3+ entity referent
self, addressee, addressee ("me and the two or more of you here")
self, addressee, other ("me, one of you here, and somebody else")
self, other, other ("me and two or more other people")
addressee, addressee, other ("the two or more of you and somebody else")
addressee, other, other ("one of you and two or more other people")
4+ entity referent
self, addressee, addressee, other ("me, the two or more of you here, and somebody else")
self, addressee, other, other ("me, one of you here, and two or more other people")
addressee, addressee, other, other ("the two or more of you here and two or more other people")
5+ entity referent
self, addressee, addressee, other, other ("me, the two or more of you here, and two or more other people")
There are 17 possible markers if there's no distinction between 2 entities of the same type and 3+ entities of the same type.
* a dual or trial entity number could be added to have a 3-way distinction between e.g. [other, other] and [other, other, other]
* another entity category besides self, addressee, and other could be added (invisible/divine entities)
* multiple self referents could be included (choral we)
Also, what about the issue of mis-identifying the cue as "self" rather than "addressee" (kids calling themselves "you")?
Step1: Spoken English collapses these to 6 possibilities
Step2: Assume that the distribution of referent sets is uniform, which is probably not true.
Step3: With 100 trials, the learner is getting a lot of them right, but just by predicting 'you guys' or 'we' (if self is a referent) all of the time, since those cover most of the referent sets. | <ASSISTANT_TASK:>
Python Code:
from itertools import combinations, combinations_with_replacement
# the three entity categories described above; defined here so the cell is self-contained
entities = ['self', 'addressee', 'other']
referents = []
for i in xrange(1, len(entities) * 2):
for combo in combinations_with_replacement(entities, i):
# choral we is impossible
if combo.count('self') > 1:
continue
# only singular vs plural
if combo.count('addressee') > 2:
continue
if combo.count('other') > 2:
continue
# compound cues
referent = list(combo)
for j in xrange(2, len(combo) + 1):
for compound in combinations(combo, j):
if compound not in referent:
referent.append(compound)
referents.append(referent)
len(referents)
referents
Explanation: 1 entity referent
self ("me")
addressee ("you here")
other ("somebody else")
2+ entity referent
self, addressee ("me and you here" / inclusive we)
self, other ("me and somebody else" / exclusive we)
addressee, addressee ("the two or more of you here")
addressee, other ("one of you here and somebody else")
other, other ("the two or more of them")
3+ entity referent
self, addressee, addressee ("me and the two or more of you here")
self, addressee, other ("me, one of you here, and somebody else")
self, other, other ("me and two or more other people")
addressee, addressee, other ("the two or more of you and somebody else")
addressee, other, other ("one of you and two or more other people")
4+ entity referent
self, addressee, addressee, other ("me, the two or more of you here, and somebody else")
self, addressee, other, other ("me, one of you here, and two or more other people")
addressee, addressee, other, other ("the two or more of you here and two or more other people")
5+ entity referent
self, addressee, addressee, other, other ("me, the two or more of you here, and two or more other people")
There are 17 possible markers if there's no distinction between 2 entities of the same type and 3+ entities of the same type.
* a dual or trial entity number could be added to have a 3-way distinction between e.g. [other, other] and [other, other, other]
* another entity category besides self, addressee, and other could be added (invisible/divine entities)
* multiple self referents could be included (choral we)
Also, what about the issue of mis-identifying the cue as "self" rather than "addressee" (kids calling themselves "you")?
End of explanation
def english(referents):
# first-person
if 'self' in referents:
if 'addressee' in referents: # inclusive we
# doesn't matter who else is being referred to
return 'we'
if 'other' in referents: # exclusive we
# doesn't matter who else is being referred to
return 'we'
return 'I'
# second-person, if the speaker isn't included
elif 'addressee' in referents:
if referents.count('addressee') > 1: # inclusive you
return 'you guys'
if 'other' in referents: # exclusive you
return 'you guys'
return 'you'
# third-person, if the addressee isn't included either
elif 'other' in referents:
if referents.count('other') > 1:
return 'they'
return 's/he'
english(['self', 'addressee'])
english(['self', 'other'])
english(['addressee', 'other'])
english(['addressee', 'addressee']) # also ('addressee', 'addressee') compound
import pandas
data = pandas.DataFrame()
data['Cues'] = referents
data['Outcomes'] = [english(referent) for referent in referents]
data
Explanation: Spoken English collapses these to 6 possibilities: I, you, s/he, we, you guys, they
End of explanation
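# Added sanity check (not part of the original notebook): confirm that english()
# really collapses the 17 referent sets onto exactly the six pronouns listed above.
sorted(data['Outcomes'].unique())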
import numpy
def sampler(p):
def uniform():
return numpy.random.choice(p)
return uniform
referent_sampler = sampler(len(data))
import ndl
def activation(W):
return pandas.DataFrame([ndl.activation(c, W) for c in data.Cues], index=data.index)
W = ndl.rw(data, M=100, distribution=referent_sampler)
A = activation(W)
A
pandas.DataFrame([data['Outcomes'], A.idxmax(1), A.idxmax(1) == data['Outcomes']],
index = ['Truth', 'Prediction', 'Accurate?']).T
Explanation: Assume that the distribution of referent sets is uniform, which is probably not true.
End of explanation
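# If you want to drop the uniformity assumption, here is one hedged sketch of a
# weighted sampler. The weights are invented for illustration (not estimated from
# any corpus), and it assumes the same contract as `uniform` above: a zero-argument
# callable returning a row index.
def weighted_sampler(weights):
    probs = numpy.array(weights, dtype=float)
    probs = probs / probs.sum()
    def draw():
        return numpy.random.choice(len(probs), p=probs)
    return draw
# e.g., make the three singular referent sets five times as likely as the others:
# referent_sampler = weighted_sampler([5 if len(c) == 1 else 1 for c in data.Cues])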
import sim
english_learning = sim.Simulation(english, data, referent_sampler, 2000)
import matplotlib.pyplot as plt
%matplotlib inline
trajectory = [english_learning.accuracy(i) for i in xrange(1, english_learning.MAX_M)]
plt.plot(range(1, len(trajectory) + 1), trajectory, '-')
plt.xlabel('Trial Number')
%load_ext rpy2.ipython
%Rpush trajectory
%%R
trajectory = data.frame(trial=1:length(trajectory), learned=trajectory)
library('ggplot2')
ggplot(trajectory, aes(trial, learned)) +
geom_point(alpha=0.25) +
stat_smooth() +
coord_cartesian(ylim=c(0,1))
Explanation: With 100 trials, the learner is getting a lot of them right, but just by predicting 'you guys' or 'we' (if self is a referent) all of the time, since those cover most of the referent sets.
End of explanation
<END_TASK> |
21 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Plotting and Visualization
Step1: Landscape of Plotting Libraries
matplotlib
pandas
seaborn
mpld3
"Bringing matplotlib to the browser"
d3py
"a plotting library for python based on d3."
mayavi
"seeks to provide easy and interactive visualization of 3D data."
ggplot
"Yes, it's another port of ggplot2."
bokeh
"Bokeh is a Python interactive visualization library that targets modern web browsers for presentation."
mpl_toolkits
basemap
mplot3d
Matplotlib Orientation
Introduction
Matplotlib is the de facto standard for plotting in Python
Understanding matplotlib is key to unlocking its power
Online Documentation
Project web site is here
Examples online and also in the source code
API documentation is here
The FAQ may answer some of your questions
Get to know the gallery
Getting Help
First stop should be the gallery
Next stop could be stackoverflow
There's also a mailing list
This notebook draws heavily from the following sources [E.g., 1, 2, 3]
Notebook specifics
Step2: Backends
Potential uses of matplotlib
interactively from python shell/IPython
Embed in a GUI
Generate postscript images in batch scripts
In a web application to serve graphs
Each of these use cases is enabled by using a backend
Two types
User interface / Interactive (for use in pygtk, wxpython, tkinter, qt4, or macosx)
Hard copy / Non-interactive (PNG, SVG, PDF, PS)
Set your backend in your matplotlibrc
Or with the use function (before importing pyplot)
python
from matplotlib import use
use('PS') # postscript
Configuration
See Customizing Matplotlib for more information
You can edit your matplotlibrc to change the matplotlib defaults
Step3: For a popular set of alternative defaults, take a look at the Seaborn project. More on that later.
You can also change them dynamically using the global rcParams object
Step4: You can also use the rc_context context manager
Step5: Interactive Plotting with PyPlot
Interactive backends allow plotting to the screen
Interactive mode plots to the screen without calls to show
Interactive mode does not require using pyplot
Doing the following at the interpreter will show a plot
python
import matplotlib.pyplot as plt
plt.ion()
plt.plot([1, 2, 3, 4, 5])
plt.title("Title")
At the IPython interpreter, enable interactive mode with (or set it in matplotlibrc)
python
import matplotlib.pyplot as plt
plt.ion()
or with
from matplotlib import interactive
interactive()
Step6: If using object method calls, you must call draw or draw_if_interactive to see changes
Again, this is unnecessary in the notebook
Step7: By default the plot method takes x values, then y values
If the y values are omitted, then it is assumed that the x values are the indices of the given values
Step8: What is the pyplot namespace?
It's where everything comes together
Usually where you want to start
Broadly, 3 categories of functions
Plotting preparation
Plotting functions
Plot modifiers
Plotting Preparation
Function | Description
Step9: Close the last made Figure, by default
Step10: You can also refer to figures by their number starting at 1
plt.close('all') is handy
One of the most commonly used option used to create a Figure is figsize, a tuple of integers specifying the width and height in inches
Step11: Axes
The Axes object is contained within and belongs to a figure
This is where the plotting happens
You will interact with the Axes most often
Use the add_subplot method to put an axes on a figure
It takes the shorthand for n_rows, n_cols, plot_number
Step12: You may have guessed that you can have more than one axes on a plot
Step13: Library Plotting
You'll notice above that I stopped using plt for almost everything but figure creation
This is usually how I use matplotlib and allows the most flexible, powerful usage
In fact, most functions in the pyplot namespace call gca to get the current axes and then delegate to the corresponding method of the Axes object
Step14: You'll also notice that I assign the returns from the matplotlib object method calls to variables
This is a good habit to get in and we will see why below
One last handy function is plt.subplots
It's almost all I ever use from the plt namespace with a few exceptions
Step15: Notebook aside
You can work on figures across cells. Just make the existing figure object the last line in the cell.
Step16: Exercise
Let's make some basic plots. Make a scatter plot as above with 500 points. Draw random numbers from 0 to 100 for the y axis and set the limits of the y axis at 0 and 200.
Styling
Step17: Colors
Single letter shortcuts
b
Step18: Labels and Legends
You can label many things in matplotlib
Labeling lines allows automatic legend creation
Step19: You can label the X and Y axes
Step20: Label the axes with a title
Step21: Ticks and Tick Labels
The Ticks are the location of the Tick labels
The Tick lines denote the Ticks
The Tick labels are the text accompanying the tick
A Ticker determines the ticks and their labels automatically
You can use tick_params to adjust the appearance of the ticks
Step22: You can set your own tick labels
Step23: Spines
The spines are the boundaries of the axes, and they can be selectively turned off
Step24: More on plot
The plot function is a bit of a work horse with a flexible API
Step25: Plotting in Pandas vs Matplotlib
Pandas provides a few accessors that allow you to stay fairly high-level without giving up any of the power and flexibility of matplotlib
Series and DataFrames have a plot method
They take a kind keyword argument which accepts several values for plots other than the default line plot. These include
Step26: Notice that these return AxesSubplot objects, so we have our hook in to all of the powerful methods from matplotlib
So, too, do DataFrames
Step27: Exercise
Without re-plotting any of the above, re-size the fonts for the labels and the legend and display the figure.
Alternatively, we can plot the above in separate subplots
We can also change the figsize
Step28: These are just matplotlib objects
Note the use of tight_layout below
tight_layout automatically adjusts the subplot params so that the subplot fits the figure
You can have more fine-grained control using
python
fig.subplots_adjust
Step29: We can easily add a secondary y-axis
Step30: We can also ask pandas to plot on already existing axes
Step31: Bar plots
Bar plots are useful for displaying and comparing measurable quantities, such as counts or volumes.
We can use the plot method with a kind='bar' argument.
Let's use temperature data from NYC 1995 - 2014
Step32: Or equivalently
Step33: Recall that pandas.cut can be used to bin continuous data into buckets
Step34: Celsius bins
Step35: What's wrong with this graph?
Axis labels and tick labels to start
Some things we can do through the plot method
Some things we have to do with matplotlib
Make the xticks labels bigger and rotate them
Step36: Horizontal bar chart
Step37: Stacked bar chart
The pandas crosstab function creates a cross-tabulation of two or more factors.
Step38: Matplotlib provides a variety of ColorMaps
The Paired colormap is a good qualitative colormap
Step39: Histograms
Frequently it is useful to look at the distribution of data before you analyze it.
Histograms display relative frequencies of data values
The y-axis is always some measure of frequency, raw counts of values or scaled proportions
Step40: It's even a good exercise here! Let's drop turn the -99 into NaNs.
Step41: Incidentally, pandas will handle nulls in plotting
Step42: Optimal number of bins
Scott's rule
$$h=\frac{3.5\sigma}{n^{1/3}}$$
Step43: Density Plots
Kernel Density Estimators are a kind of smoothed histogram (more on this later)
Pandas provides a hook to KDE plots using statsmodels, if installed, or scipy
Step44: We can compare the KDE to the normed histogram
Step45: Exercise
Create KDE estimates for the temperature in each season on a single plot. Label the plotted lines.
Box plots
Boxplots (aka "box and whisker" plots) are a different way to display distributions of data
The box contains the quartiles of the data
The "whiskers" are typically the lower and upper 5 percent values
In matplotlib they extend 1.5 * the interquartile range beyond the lower/upper quartiles by default
The horizontal line is the median
Boxplots have their own method on DataFrames
Step46: We can add some more information by overlaying the original data on the boxplot
Step47: Scatterplots
Let's load the baseball dataset to look at scatterplots
Step48: We can uncover more information by changing the size of the points
Step49: Or by adding color using the c keyword
Step50: c can also be a color intensity
in this case we can specify a colormap through the cmap keyword
Step51: Notice that there is a colorbar automatically
We can adjust it just like all other things matplotlib
It's actually implemented as a separate axes subplot in the figure
Step52: Use pd.scatter_matrix To view a large number of variables simultaenously
Step53: Plotting Time-Series
Let's convert the temperature data into a TimeSeries for convenience
Step54: Pandas plotting is DatetimeIndex aware
Outside of the browser, you can pan and zoom and the tick labels adjust dynamically
Step55: GridSpec
GridSpec provides a high-level abstraction for placing subplots on a grid
plt.subplot2grid is a helper function for creating grids of subplots
To create a 2x2 figure with a reference to the first axes we could do
python
ax = plt.subplot(2, 2, 1)
Equivalently with subplot2grid
Step56: We can have more easy, fine-grained control with subplot2grid for creating multiple subplots that span columns, for example
Step57: You can use GridSpec class directly to create the same plot
Step58: Seaborn
Seaborn is a Python visualization library based on matplotlib. It provides a high-level interface for drawing attractive statistical graphics.
It is built on top of matplotlib
Provides support for numpy and pandas
Coupled with statistical routines from scipy and statsmodels
Trellis plots
"At the heart of quantitative reasoning is a single question
Step59: FacetGrid
Used to visualize the distribution of a variable or the relationship between multiple variables within subsets of your data
Can be drawn with up to three dimensions
Step60: Violin plot
The violin plot is a combination of a boxplot and a kernel density estimator
Step61: We can plot the points inside the violins and re-order the seasons
Step62: Distribution plots
Seaborn allows you to look at bivariate distributions. Here, we can compare the distribution of the temperatures in 1995 and 2014.
Step63: We can also look at a hexbin plot of the same data with the marginal distributions as histograms.
Step64: mpld3
The mpld3 project brings together Matplotlib, and D3js, the popular Javascript library for creating interactive data visualizations for the web. The result is a simple API for exporting your matplotlib graphics to HTML code which can be used within the browser, within standard web pages, blogs, or tools such as the IPython notebook.
Let's look at a regular scatter plot
Step65: Unfortunately, this is just a static image. Let's use mpld3 to change that. Using the display command, you get a fully interactive visualization of the figure.
Step66: Notice the toolbar on hover. You can use that to interact with the figure.
You can use mpld3 for every plot that you render in the notebook by executing
python
mpld3.enable_notebook()
mpld3 plugins
Much like event handling via callback functions in regular matplotlib (not covered in this notebook), you can define plugins for mpld3 to specify additional interactivity.
A number of plugins are built-in, and it is also possible to define new, custom plugins for nearly limitless interactive behaviors. For example, here is the built-in Linked Brushing plugin that allows exploration of multi-dimensional datasets
Step67: Putting it all together
Let's recreate this Tufte-inspired graphic
Step68: This is a plot of NYC's weather in 2014 versus historical averages
Daily historical highs and lows
Historical confidence intervals around averages
The daily temperatures for 2014
Markers for new highs and lows
Annotations for points
Text for the graphic
Custom tick labels
Load the data from yesterday
Step69: You probably don't wan't to work with the month, day tuples in its present form for plotting
Instead, you can use the below for the x axis
Step70: First, make the figure and plot the high and low bars (Hints
Step71: Annotate the points one of the 2014 historical lows and one of the 2014 historical highs with the appropriate text (Hint
Step72: Other frequently used plotting tricks
XKCD and Annotation
Step73: Tick Tricks
Step74: ColorMaps
See colormap reference
Sharing Axes
Step75: Twinning Axes
Step76: Image Plots
Step77: $LaTeX$
By default, matplotlib uses its own $TeX$ engine for text and math layout
You also have the option to call out to an external $TeX$ installation by setting the text.usetex option
Step78: Contour Plots
Step79: Arrows
Step80: Filling in plots | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib as mpl # used sparingly
import matplotlib.pyplot as plt
pd.set_option("notebook_repr_html", False)
pd.set_option("max_rows", 10)
Explanation: Plotting and Visualization
End of explanation
%matplotlib inline
Explanation: Landscape of Plotting Libraries
matplotlib
pandas
seaborn
mpld3
"Bringing matplotlib to the browser"
d3py
"a plotting library for python based on d3."
mayavi
"seeks to provide easy and interactive visualization of 3D data."
ggplot
"Yes, it's another port of ggplot2."
bokeh
"Bokeh is a Python interactive visualization library that targets modern web browsers for presentation."
mpl_toolkits
basemap
mplot3d
Matplotlib Orientation
Introduction
Matplotlib is the de facto standard for plotting in Python
Understanding matplotlib is key to unlocking its power
Online Documentation
Project web site is here
Examples online and also in the source code
API documentation is here
The FAQ may answer some of your questions
Get to know the gallery
Getting Help
First stop should be the gallery
Next stop could be stackoverflow
There's also a mailing list
This notebook draws heavily from the following sources [E.g., 1, 2, 3]
Notebook specifics
End of explanation
from matplotlib import matplotlib_fname
matplotlib_fname()
Explanation: Backends
Potential uses of matplotlib
interactively from python shell/IPython
Embed in a GUI
Generate postscript images in batch scripts
In a web application to serve graphs
Each of these use cases is enabled by using a backend
Two types
User interface / Interactive (for use in pygtk, wxpython, tkinter, qt4, or macosx)
Hard copy / Non-interactive (PNG, SVG, PDF, PS)
Set your backend in your matplotlibrc
Or with the use function (before importing pyplot)
python
from matplotlib import use
use('PS') # postscript
Configuration
See Customizing Matplotlib for more information
You can edit your matplotlibrc to change the matplotlib defaults
End of explanation
from matplotlib import rcParams
rcParams.keys()
rcParams['font.family']
rcParams['font.family'] = 'monospace'
rcParams['font.family']
rcParams['font.family'] = 'sans-serif'
Explanation: For a popular set of alternative defaults, take a look at the Seaborn project. More on that later.
You can also change them dynamically using the global rcParams object
End of explanation
from matplotlib import rc_context
with rc_context({'font.family': 'monospace'}):
print(rcParams['font.family'])
print(rcParams['font.family'])
Explanation: You can also use the rc_context context manager
End of explanation
import matplotlib.pyplot as plt
plt.plot([1, 2, 3, 4])
plt.title("Title")
plt.xlabel("X")
Explanation: Interactive Plotting with PyPlot
Interactive backends allow plotting to the screen
Interactive mode plots to the screen without calls to show
Interactive mode does not require using pyplot
Doing the following at the interpreter will show a plot
python
import matplotlib.pyplot as plt
plt.ion()
plt.plot([1, 2, 3, 4, 5])
plt.title("Title")
At the IPython interpreter, enable interactive mode with (or set it in matplotlibrc)
python
import matplotlib.pyplot as plt
plt.ion()
or with
from matplotlib import interactive
interactive()
End of explanation
fig, ax = plt.subplots()
ax.plot([1, 2, 3, 4, 5])
ax.set_title("Title")
plt.draw_if_interactive()
Explanation: If using object method calls, you must call draw or draw_if_interactive to see changes
Again, this is unnecessary in the notebook
End of explanation
plt.plot([1, 5, 3])
Explanation: By default the plot method takes x values, then y values
If the y values are omitted, then it is assumed that the x values are the indices of the given values
End of explanation
plt.Figure?
fig = plt.Figure()
Explanation: What is the pyplot namespace?
It's where everything comes together
Usually where you want to start
Broadly, 3 categories of functions
Plotting preparation
Plotting functions
Plot modifiers
Plotting Preparation
Function | Description
:-----------------|:----------------------------------------------------------
autoscale | Autoscale the axis view to the data (toggle).
axes | Add an axes to the figure.
axis | Convenience method to get or set axis properties.
cla | Clear the current axes.
clf | Clear the current figure.
clim | Set the color limits of the current image.
delaxes | Remove an axes from the current figure.
locator_params | Control behavior of tick locators.
margins | Set or retrieve autoscaling margins.
figure | Creates a new figure.
gca | Return the current axis instance.
gcf | Return a reference to the current figure.
gci | Get the current colorable artist.
hold | Set the hold state.
ioff | Turn interactive mode off.
ion | Turn interactive mode on.
ishold | Return the hold status of the current axes.
isinteractive | Return status of interactive mode.
rc | Set the current rc params.
rc_context | Return a context manager for managing rc settings.
rcdefaults | Restore the default rc params.
savefig | Save the current figure.
sca | Set the current Axes instance.
sci | Set the current image.
set_cmap | Set the default colormap
setp | Set a property on an artist object
show | Display a figure
subplot | Return a subplot axes positioned by the given grid definition.
subplot2grid | Create a subplot in a grid.
subplot_tool | Launch a subplot tool window for a figure.
subplots | Create a figure with a set of subplots already made.
subplots_adjust | Tune the subplot layout.
switch_backend | Switch the default backend.
tick_params | Change the appearance of ticks and tick labels.
ticklabel_format| Change the ScalarFormatter used by default for linear axes.
tight_layout | Automatically adjust subplot parameters to give specified padding.
xkcd | Turns on XKCD sketch-style drawing mode.
xlabel | Set the x axis label of the current axis.
xlim | Get or set the x limits of the current axes.
xscale | Set the scaling of the x-axis.
xticks | Get or set the x-limits of the current tick locations and labels.
ylabel | Set the y axis label of the current axis.
ylim | Get or set the y-limits of the current axes.
yscale | Set the scaling of the y-axis.
yticks | Get or set the y-limits of the current tick locations and labels.
Plotting Functions
Function | Description
:-----------------|:--------------------------------------------
acorr | Plot the autocorrelation of x
bar | Make a bar plot
barbs | Plot a 2-D field of barbs
barh | Make a horizontal bar plot
boxplot | Make a box and whisker plot
broken_barh | Plot horizontal bars
cohere | Plot the coherence between x and y
contour | Plot contours
contourf | Plot filled contours
csd | Plot cross-spectral density
errorbar | Plot an errorbar graph
eventplot | Plot identical parallel lines at specific positions
fill | Plot filled polygons
fill_between | Make filled polygons between two curves
fill_betweenx | Make filled polygons between two horizontal curves
hexbin | Make a hexagonal binning plot
hist | Plot a histogram
hist2d | Make a 2D histogram plot
imshow | Display an image on the axes
loglog | Make a plot with log scaling on both the x and y axis
matshow | Display an array as a matrix in a new figure window
pcolor | Create a pseudocolor plot of a 2-D array
pcolormesh | Plot a quadrilateral mesh
pie | Plot a pie chart
plot | Plot lines and/or markers
plot_date | Plot with data with dates
polar | Make a polar plot
psd | Plot the power spectral density
quiver | Plot a 2-D field of arrows
scatter | Make a scatter plot of x vs y
semilogx | Make a plot with log scaling on the x axis
semilogy | Make a plot with log scaling on the y axis
specgram | Plot a spectrogram
spy | Plot the sparsity pattern on a 2-D array
stackplot | Draws a stacked area plot
stem | Create a stem plot
step | Make a step plot
streamplot | Draws streamlines of a vector flow
tricontour | Draw contours on an unstructured triangular grid
tricontourf | Draw filled contours on an unstructured triangular grid
tripcolor | Create a pseudocolor plot of an unstructured triangular grid
triplot | Draw a unstructured triangular grid as lines and/or markers
xcorr | Plot the cross-correlation between x and y
Plot modifiers
Function | Description
:-----------------|:---------------------------------------------------------------------
annotate | Create an annotation: a piece of text referring to a data point
arrow | Add an arrow to the axes
axhline | Add a horizontal line across the axis
axhspan | Add a horizontal span (rectangle) across the axis
axvline | Add a vertical line across the axes
axvspan | Add a vertical span (rectangle) across the axes
box | Turn the axes box on or off
clabel | Label a contour plot
colorbar | Add a colorbar to a plot
grid | Turn the axes grids on or off
hlines | Plot horizontal lines
legend | Place a legend on the current axes
minorticks_off | Remove minor ticks from the current plot
minorticks_on | Display minor ticks on the current plot
quiverkey | Add a key to a quiver plot
rgrids | Get or set the radial gridlines on a polar plot
suptitle | Add a centered title to the figure
table | Add a table to the current axes
text | Add text to the axes
title | Set a title of the current axes
vlines | Plot vertical lines
xlabel | Set the x axis label of the current axis
ylabel | Set the y axis label of the current axis"
Figures
The Figure is the central object of matplotlib
It is the GUI window that contains the plot
End of explanation
plt.close()
Explanation: Close the last made Figure, by default
End of explanation
fig = plt.figure(figsize=(5, 5))
Explanation: You can also refer to figures by their number starting at 1
plt.close('all') is handy
One of the most commonly used option used to create a Figure is figsize, a tuple of integers specifying the width and height in inches
End of explanation
fig = plt.figure()
ax = fig.add_subplot(111)
lines = ax.plot([1, 2, 3])
text = ax.set_xlabel("X")
Explanation: Axes
The Axes object is contained within and belongs to a figure
This is where the plotting happens
You will interact with the Axes most often
Use the add_subplot method to put an axes on a figure
It takes the shorthand for n_rows, n_cols, plot_number
End of explanation
fig = plt.figure(figsize=(10, 5))
ax1 = fig.add_subplot(121)
ax1.plot([1, 2, 3])
ax2 = fig.add_subplot(122)
ax2.plot([3, 2, 1])
Explanation: You may have guessed that you can have more than one axes on a plot
End of explanation
plt.xlabel??
Explanation: Library Plotting
You'll notice above that I stopped using plt for almost everything but figure creation
This is usually how I use matplotlib and allows the most flexible, powerful usage
In fact, most functions in the pyplot namespace call gca to get the current axes and then delegate to the corresponding method of the Axes object
End of explanation
fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(np.random.randn(20), np.random.randn(20))
Explanation: You'll also notice that I assign the returns from the matplotlib object method calls to variables
This is a good habit to get in and we will see why below
One last handy function is plt.subplots
It's almost all I ever use from the plt namespace with a few exceptions
End of explanation
fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(np.random.randn(20), np.random.randn(20))
ax.scatter(np.random.randn(20), np.random.randn(20), color='r')
fig
Explanation: Notebook aside
You can work on figures across cells. Just make the existing figure object the last line in the cell.
End of explanation
plt.plot?
Explanation: Exercise
Let's make some basic plots. Make a scatter plot as above with 500 points. Draw random numbers from 0 to 100 for the y axis and set the limits of the y axis at 0 and 200.
Styling
End of explanation
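# One possible sketch for the exercise above (any equivalent approach works):
# 500 points, y drawn uniformly from 0-100, y-axis limits fixed at (0, 200).
fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(np.random.randn(500), np.random.uniform(0, 100, size=500))
ax.set_ylim(0, 200)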
x = np.linspace(-2*np.pi, 2*np.pi, 100)
y = np.sin(x)
plt.plot(x, y)
Explanation: Colors
Single letter shortcuts
b: blue
g: green
r: red
c: cyan
m: magenta
y: yellow
k: black
w: white
Shades of gray can be given as a string encoding a float in the 0-1 range
color = '0.75'
HTML hex strings
color = '#eeefff'
R, G, B tuples with R, G, B in [0, 1]
HTML names for colors, like ‘red’, ‘burlywood’ and ‘chartreuse’
Markers
See here for the full list
A few commonly used ones are
".": point
",": pixel
"o": circle
"*": star
"+": plus
"x": x
"D”: diamond
Linestyles
'-' solid
'--' dashed
'-.' dash_dot
':' dotted
'None' draw nothing
' ' draw nothing
'' draw nothing
Exercises
Create a figure that holds two subplots in two rows. In the top one, plot a sin curve from $-2\pi$ to $2\pi$ in green. In the second one, plot a dashed red line (Hint: you may find np.linspace to be useful).
End of explanation
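# A possible sketch for this exercise, also showing the single-letter color codes
# and linestyle strings from the lists above in action.
x = np.linspace(-2*np.pi, 2*np.pi, 100)
fig, (ax1, ax2) = plt.subplots(2, 1, figsize=(8, 6))
ax1.plot(x, np.sin(x), 'g-')   # 'g' = green, '-' = solid
ax2.plot([0, 1, 2, 3], 'r--')  # 'r' = red, '--' = dashed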
fig, ax = plt.subplots(figsize=(8, 8))
ax.plot([1, 2, 4, 5], label="Line 1")
ax.plot([2, 5, 3, 4], label="Line 2")
legend = ax.legend(loc='best', fontsize=20)
Explanation: Labels and Legends
You can label many things in matplotlib
Labeling lines allows automatic legend creation
End of explanation
fig, ax = plt.subplots(figsize=(8, 8))
ax.plot([1, 2, 4, 5], label="Line 1")
ax.plot([2, 5, 3, 4], label="Line 2")
ax.set_xlabel("X", fontsize=20)
ax.set_ylabel("Y", fontsize=20)
legend = ax.legend(loc='best', fontsize=20)
Explanation: You can label the X and Y axes
End of explanation
fig, ax = plt.subplots(figsize=(8, 8))
ax.plot([1, 2, 4, 5], label="Line 1")
ax.plot([2, 5, 3, 4], label="Line 2")
ax.set_xlabel("X", fontsize=20)
ax.set_ylabel("Y", fontsize=20)
ax.set_title("Title", fontsize=20)
legend = ax.legend(loc='best', fontsize=20)
Explanation: Label the axes with a title
End of explanation
fig, ax = plt.subplots(figsize=(8, 8))
ax.grid(False)
ax.tick_params(axis='y', which='major', length=15, right=False)
ax.tick_params(axis='x', which='major', length=15, top=False, direction="out", pad=15)
Explanation: Ticks and Tick Labels
The Ticks are the location of the Tick labels
The Tick lines denote the Ticks
The Tick labels are the text accompanying the tick
A Ticker determines the ticks and their labels automatically
You can use tick_params to adjust the appearance of the ticks
End of explanation
fig, ax = plt.subplots(figsize=(8, 8))
ax.grid(False)
ax.tick_params(axis='y', which='major', length=15, right=False)
ax.tick_params(axis='x', which='major', length=15, top=False)
ticklabels = ax.xaxis.set_ticklabels(['aaaa', 'bbbb', 'cccc',
'dddd', 'eeee', 'ffff'],
rotation=45, fontsize=15)
Explanation: You can set your own tick labels
End of explanation
ax.spines
fig, ax = plt.subplots(figsize=(8, 8))
ax.tick_params(bottom=False, top=False, left=False, right=False)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.grid(False)
ax.xaxis.set_ticklabels([])
ax.yaxis.set_ticklabels([]);
Explanation: Spines
The spines are the boundaries of the axes, and they can be selectively turned off
End of explanation
x, y = np.random.randn(2, 100)
x.sort()
fig, ax = plt.subplots()
ax.plot(y, 'g--')
fig, ax = plt.subplots()
ax.plot(x, y)
fig, ax = plt.subplots()
ax.plot(x, y, 'o')
x2, y2 = np.random.randn(2, 200)
x2.sort()
fig, ax = plt.subplots()
lines = ax.plot(x, y, 'o', x2, y2, 'ro', ms=8, alpha=.5)
Explanation: More on plot
The plot function is a bit of a work horse with a flexible API
End of explanation
y = pd.Series(np.random.randn(25))
y.plot()
y.cumsum().plot()
Explanation: Plotting in Pandas vs Matplotlib
Pandas provides a few accessors that allow you to stay fairly high-level without giving up any of the power and flexibility of matplotlib
Series and DataFrames have a plot method
They take a kind keyword argument which accepts several values for plots other than the default line plot. These include:
bar or barh for bar plots
hist for histogram
box for boxplot
kde or 'density' for density plots
area for area plots
scatter for scatter plots
hexbin for hexagonal bin plots
pie for pie plots
End of explanation
dta = pd.DataFrame({'normal': np.random.normal(size=100),
'gamma': np.random.gamma(1, size=100),
'poisson': np.random.poisson(size=100)})
ax = dta.cumsum(0).plot()
Explanation: Notice that these return AxesSubplot objects, so we have our hook in to all of the powerful methods from matplotlib
So, too, do DataFrames
End of explanation
ax = dta.cumsum(0).plot(subplots=True, figsize=(10, 10))
Explanation: Exercise
Without re-plotting any of the above, re-size the fonts for the labels and the legend and display the figure.
Alternatively, we can plot the above in separate subplots
We can also change the figsize
End of explanation
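# A hedged sketch for the exercise above. It assumes `ax` still refers to the
# single-panel cumulative-sum plot made a couple of cells earlier; if it has been
# reassigned, point these calls at whichever axes object you kept.
for text in ax.get_legend().get_texts():
    text.set_fontsize(14)
ax.tick_params(labelsize=12)
ax.figure  # making the figure the last line of the cell displays it again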
axes = dta.cumsum(0).plot(subplots=True, figsize=(10, 10))
fig = axes[0].figure
fig.tight_layout()
Explanation: These are just matplotlib objects
Note the use of tight_layout below
tight_layout automatically adjusts the subplot params so that the subplot fits the figure
You can have more fine-grained control using
python
fig.subplots_adjust
End of explanation
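# For example (the spacing values here are illustrative, not tuned):
fig.subplots_adjust(hspace=0.4, wspace=0.2, top=0.92, bottom=0.08)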
axes = dta.cumsum().plot(secondary_y='normal')
Explanation: We can easily add a secondary y-axis
End of explanation
fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for i, ax in enumerate(axes):
variable = dta.columns[i]
ax = dta[variable].cumsum().plot(ax=ax)
ax.set_title(variable, fontsize=16)
axes[0].set_ylabel("Cumulative Sum", fontsize=14);
Explanation: We can also ask pandas to plot on already existing axes
End of explanation
dta = pd.read_csv("../data/weather_nyc.csv")
dta = dta.ix[dta.year < 2015] # truncate to end of year
Explanation: Bar plots
Bar plots are useful for displaying and comparing measurable quantities, such as counts or volumes.
We can use the plot method with a kind='bar' argument.
Let's use temperature data from NYC 1995 - 2014
End of explanation
dta.query("year < 2015")
Explanation: Or equivalently
End of explanation
bins = [dta.temp.min(), 32, 55, 80, dta.temp.max()]
bins
labels = ["freezing", "cold", "warm", "hot"]
dta["temp_bin"] = pd.cut(dta.temp, bins, labels=labels)
try:
from scipy.constants import F2C
except ImportError: # no scipy installed
def F2C(f):
return (np.array(f) - 32)/1.8
lmap = lambda func, x : list(map(func, x))
Explanation: Recall that pandas.cut can be used to bin continuous data into buckets
End of explanation
bins = [dta.tempc.min()] + lmap(F2C, (32, 55, 80)) + [dta.tempc.max()]
bins
labels = ["freezing", "cold", "warm", "hot"]
dta["tempc_bin"] = pd.cut(dta.temp, bins, labels=labels)
dta.head()
ax = dta.groupby("temp_bin").size().plot(kind="bar")
Explanation: Celsius bins
End of explanation
ax = dta.groupby("temp_bin").size().plot(kind="bar", rot=0, fontsize=16, figsize=(8, 5))
ax.set_xlabel("Temperature")
ax.set_ylabel("Number of Days")
ax.set_title("Temperatures from 1995 - 2014");
Explanation: What's wrong with this graph?
Axis labels and tick labels to start
Some things we can do through the plot method
Some things we have to do with matplotlib
Make the xticks labels bigger and rotate them
End of explanation
dta.groupby(["season", "temp_bin"]).size().plot(kind="barh", figsize=(6, 8))
Explanation: Horizontal bar chart
End of explanation
ct = pd.crosstab(dta.temp_bin, dta.season)
ct
ax = ct.plot(kind="bar", stacked=True, figsize=(12, 8), grid=False,
legend=True)
Explanation: Stacked bar chart
The pandas crosstab function creates a cross-tabulation of two or more factors.
End of explanation
colors = plt.cm.Paired(np.linspace(0, 1, 4))
colors
ax = pd.crosstab(dta.temp_bin, dta.season).plot(kind="bar", stacked=True,
figsize=(12, 8), grid=False,
legend=True, colors=colors, rot=0,
fontsize=16)
# adjust the fontsize of the legend
legend = ax.get_legend()
for text in legend.get_texts():
text.set_fontsize(18)
legend.get_title().set_fontsize(20)
Explanation: Matplotlib provides a variety of ColorMaps
The Paired colormap is a good qualitative colormap
End of explanation
dta.temp.min()
ax = dta.temp.plot(kind="hist", bins=50)
Explanation: Histograms
Frequently it is useful to look at the distribution of data before you analyze it.
Histograms display relative frequencies of data values
The y-axis is always some measure of frequency, raw counts of values or scaled proportions
End of explanation
dta.ix[dta.temp == -99, ["temp", "tempc"]] = np.nan
Explanation: It's even a good exercise here! Let's drop turn the -99 into NaNs.
End of explanation
ax = dta.temp.plot(kind="hist", bins=50, grid=False, figsize=(10, 6))
# plot a vertical line that spans the axis
line = ax.axvline(dta.temp.mean(), color='r', lw=3, label="Mean")
# specifically add a legend
handles, labels = ax.get_legend_handles_labels()
ax.legend([handles[0]], [labels[0]], fontsize=16)
handles
Explanation: Incidentally, pandas will handle nulls in plotting
End of explanation
def scotts_rule(x):
x = x.dropna()
std = x.std()
return 3.5 * std / (len(x)**(1./3))
def width_to_nbins(x, h):
x = x.dropna()
return int(round(x.ptp()/h))
h = scotts_rule(dta.temp)
nbins = width_to_nbins(dta.temp, h)
ax = dta.temp.plot(kind="hist", bins=nbins, grid=False, figsize=(10, 6))
# plot a vertical line that spans the axis
line = ax.axvline(dta.temp.mean(), color='r', lw=3, label="Mean")
Explanation: Optimal number of bins
Scott's rule
$$h=\frac{3.5\sigma}{n^{1/3}}$$
End of explanation
ax = dta.temp.plot(kind='kde', grid=False, figsize=(10, 6))
ax.set_xlim(0, 100)
Explanation: Density Plots
Kernel Density Estimators are a kind of smoothed histogram (more on this later)
Pandas provides a hook to KDE plots using statsmodels, if installed, or scipy
End of explanation
ax = dta.temp.plot(kind='kde', grid=False, figsize=(10, 6), color='r', lw=3)
ax = dta.temp.plot(kind="hist", bins=nbins, grid=False, figsize=(10, 6), ax=ax, normed=True, alpha=.7)
ax.set_xlim(0, 100)
Explanation: We can compare the KDE to the normed histogram
End of explanation
ax = dta.boxplot(column="temp", by="season", grid=False, figsize=(8, 10), fontsize=16,
whis=[5, 95])
ax.set_title(ax.get_title(), fontsize=20)
ax.xaxis.get_label().set_fontsize(18)
fig = ax.figure
# Change the size of the figure title
# http://stackoverflow.com/a/12449783/535665
fig.texts[0].set_fontsize(20)
# whitespace between axes and fig boundary
fig.subplots_adjust(top=.85)
Explanation: Exercise
Create KDE estimates for the temperature in each season on a single plot. Label the plotted lines.
Box plots
Boxplots (aka "box and whisker" plots) are a different way to display distributions of data
The box contains the quartiles of the data
The "whiskers" are typically the lower and upper 5 percent values
In matplotlib they extend 1.5 * the interquartile range beyond the lower/upper quartiles by default
The horizontal line is the median
Boxplots have their own method on DataFrames
End of explanation
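# One possible sketch for the KDE exercise above. It assumes the season labels
# already present in dta.season and that a KDE backend (scipy or statsmodels)
# is installed, as noted earlier.
fig, ax = plt.subplots(figsize=(10, 6))
for season, group in dta.dropna(subset=["temp"]).groupby("season"):
    group.temp.plot(kind="kde", ax=ax, label=season, lw=2)
ax.set_xlabel("Temperature")
ax.legend(fontsize=14)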
def jitter(x, n, noise=.05):
return x + np.random.normal(0, noise, size=n)
ax = dta.boxplot(column="temp", by="season", grid=False, figsize=(8, 10), fontsize=16,
whis=[5, 95])
ax.set_title(ax.get_title(), fontsize=20)
ax.xaxis.get_label().set_fontsize(18)
fig = ax.figure
# http://stackoverflow.com/a/12449783/535665
fig.texts[0].set_fontsize(20)
# whitespace between axes and fig boundary
fig.subplots_adjust(top=.85)
for i, season in enumerate(ax.get_xticklabels()):
y = dta.ix[dta.season == season.get_text()].temp
x = jitter(i + 1, len(y))
# there's a lot of data so turn the alpha way down (or sub-sample)
ax.plot(x, y, 'ro', alpha=.05)
Explanation: We can add some more information by overlaying the original data on the boxplot
End of explanation
baseball = pd.read_csv("../data/baseball.csv")
baseball.head()
ax = baseball.plot(kind="scatter", x="ab", y="h", grid=False, figsize=(8, 6), s=8**2,
alpha=.7)
ax.margins(0)
ax.set_xlim(0, 700)
ax.set_ylim(0, 200)
Explanation: Scatterplots
Let's load the baseball dataset to look at scatterplots
End of explanation
ax = baseball.plot(kind="scatter", x="ab", y="h", grid=False, figsize=(8, 6), s=baseball.hr*10,
alpha=.5)
ax.margins(0)
ax.set_xlim(0, 700)
ax.set_ylim(0, 200)
Explanation: We can uncover more information by changing the size of the points
End of explanation
ax = baseball.plot(kind="scatter", x="ab", y="h", grid=False, figsize=(8, 6), c="DarkGreen", s=50)
ax = baseball.plot(kind="scatter", x="ab", y="rbi", grid=False, figsize=(8, 6), c="Blue", s=50,
ax=ax)
ax.margins(0)
ax.set_xlim(0, 700)
ax.set_ylim(0, 200);
Explanation: Or by adding color using the c keyword
End of explanation
ax = baseball.plot(kind="scatter", x="ab", y="h", grid=False, figsize=(8, 6), c=baseball.hr*10,
s=40, cmap="hot")
ax.margins(0)
ax.set_xlim(0, 700)
ax.set_ylim(0, 200);
Explanation: c can also be a color intensity
in this case we can specify a colormap through the cmap keyword
End of explanation
ax = baseball.plot(kind="scatter", x="ab", y="h", grid=False, figsize=(8, 6), c=baseball.hr*10,
s=40, cmap="hot")
ax.margins(0)
ax.set_xlim(0, 700)
ax.set_ylim(0, 200)
fig = ax.figure
# colorbars are actually a separate subplot in your figure
colorbar = fig.axes[1]
colorbar.yaxis.set_tick_params(right=False);
Explanation: Notice that there is a colorbar automatically
We can adjust it just like all other things matplotlib
It's actually implemented as a separate axes subplot in the figure
End of explanation
ax = pd.scatter_matrix(baseball.loc[:,'r':'sb'], figsize=(14, 10), diagonal='hist')
ax = pd.scatter_matrix(baseball.loc[:,'r':'sb'], figsize=(14, 10), diagonal='kde')
Explanation: Use pd.scatter_matrix To view a large number of variables simultaenously
End of explanation
idx = pd.to_datetime(dta.year*10000 + dta.month*100 + dta.day, format='%Y%m%d')
idx
y = dta.set_index(idx).temp
y.head()
y.index
Explanation: Plotting Time-Series
Let's convert the temperature data into a TimeSeries for convenience
End of explanation
#ax = y.plot(figsize=(12, 8))
ax = pd.rolling_mean(y, window=60, min_periods=1, center=True).plot(figsize=(12, 8),
label="Rolling 2-month mean")
means = y.groupby(lambda x : x.year).mean()
means.index = pd.DatetimeIndex(pd.to_datetime(means.index * 10000 + 1231, format="%Y%m%d"))
ax = means.plot(ax=ax, label="Yearly Average")
legend = ax.legend()
Explanation: Pandas plotting is DatetimeIndex aware
Outside of the browser, you can pan and zoom and the tick labels adjust dynamically
End of explanation
ax = plt.subplot2grid((2, 2), (0, 0))
Explanation: GridSpec
GridSpec provides a high-level abstraction for placing subplots on a grid
plt.subplot2grid is a helper function for creating grids of subplots
To create a 2x2 figure with a reference to the first axes we could do
python
ax = plt.subplot(2, 2, 1)
Equivalently with subplot2grid
End of explanation
with plt.rc_context(rc={"xtick.labelsize": 0,
"ytick.labelsize": 0,
"axes.facecolor": "lightgray",
"figure.figsize": (8, 8)}):
ax1 = plt.subplot2grid((3,3), (0,0), colspan=3)
ax2 = plt.subplot2grid((3,3), (1,0), colspan=2)
ax3 = plt.subplot2grid((3,3), (1, 2), rowspan=2)
ax4 = plt.subplot2grid((3,3), (2, 0))
ax5 = plt.subplot2grid((3,3), (2, 1))
ax1.figure.suptitle("subplot2grid", fontsize=20)
Explanation: We can have more easy, fine-grained control with subplot2grid for creating multiple subplots that span columns, for example
End of explanation
from matplotlib.gridspec import GridSpec
with plt.rc_context(rc={"xtick.labelsize": 0,
"ytick.labelsize": 0,
"axes.facecolor": "lightgray"}):
fig, ax = plt.subplots(figsize=(8, 8))
gs = GridSpec(3, 3)
ax1 = plt.subplot(gs[0, :])
# identical to ax1 = plt.subplot(gs.new_subplotspec((0,0), colspan=3))
ax2 = plt.subplot(gs[1,:-1])
ax3 = plt.subplot(gs[1:, -1])
ax4 = plt.subplot(gs[-1,0])
ax5 = plt.subplot(gs[-1,-2])
fig.suptitle("GridSpec", fontsize=20)
Explanation: You can use GridSpec class directly to create the same plot
End of explanation
import seaborn as sns
tips = sns.load_dataset("tips")
tips.head()
Explanation: Seaborn
Seaborn is a Python visualization library based on matplotlib. It provides a high-level interface for drawing attractive statistical graphics.
It is built on top of matplotlib
Provides support for numpy and pandas
Coupled with statistical routines from scipy and statsmodels
Trellis plots
"At the heart of quantitative reasoning is a single question: Compared to what? Small multiple designs, multivariate and data bountiful, answer directly by visually enforcing comparisons of changes, of the differences among objects, of the scope of alternatives. For a wide range of problems in data presentation, small multiples are the best design solution."
-Edward Tufte
For medium-dimensional data,
Multiple instances of the same plot on different subsets of your dataset.
Quickly extract a large amount of information about complex data.
End of explanation
with mpl.rc_context(rc={"legend.fontsize": "18", "axes.titlesize": "18"}):
g = sns.FacetGrid(tips, col="sex", hue="smoker", size=7)
g.map(plt.scatter, "total_bill", "tip", alpha=.7, s=80)
g.add_legend()
g._legend.get_title().set_fontsize(20)
g.axes[0, 0].title.set_fontsize(20)
g.axes[0, 0].xaxis.get_label().set_fontsize(20)
g.axes[0, 1].title.set_fontsize(20)
g.axes[0, 1].xaxis.get_label().set_fontsize(20)
Explanation: FacetGrid
Used to visualize the distribution of a variable or the relationship between multiple variables within subsets of your data
Can be drawn with up to three dimensions: row, col, and hue.
These should be discrete variables
Say we wanted to examine differences between lunch and dinner in the tips dataset
End of explanation
ax = dta.boxplot(column="temp", by="season", grid=False, figsize=(8, 10), fontsize=16,
whis=[5, 95])
X = dta[["temp", "season"]].dropna()
ax = sns.violinplot(X.temp, groupby=X.season)
Explanation: Violin plot
The violin plot is a combination of a boxplot and a kernel density estimator
End of explanation
ax = sns.violinplot(X.temp, groupby=X.season, inner='points', alpha=.5,
order=['Winter', 'Spring', 'Summer', 'Fall'])
Explanation: We can plot the points inside the violins and re-order the seasons
End of explanation
temp95 = dta.query("year == 1995")[["temp", "month", "day"]]
temp14 = dta.query("year == 2014")[["temp", "month", "day"]]
temps = temp95.merge(temp14, on=["month", "day"], how="inner", suffixes=("_95", "_14"))
g = sns.jointplot(temps.temp_95, temps.temp_14, kind="kde", size=7, space=0)
Explanation: Distribution plots
Seaborn allows you to look at bivariate distributions. Here, we can compare the distribution of the temperatures in 1995 and 2014.
End of explanation
g = sns.jointplot(temps.temp_95, temps.temp_14, kind="hex", color="#4CB391",
joint_kws={"bins": 200})
Explanation: We can also look at a hexbin plot of the same data with the marginal distributions as histograms.
End of explanation
fig, ax = plt.subplots(figsize=(6, 6))
np.random.seed(0)
x, y = np.random.normal(size=(2, 200))
color, size = np.random.random((2, 200))
ax.scatter(x, y, c=color, s=500 * size, alpha=0.5, cmap="rainbow")
ax.grid(color='lightgray', alpha=0.7)
Explanation: mpld3
The mpld3 project brings together Matplotlib, and D3js, the popular Javascript library for creating interactive data visualizations for the web. The result is a simple API for exporting your matplotlib graphics to HTML code which can be used within the browser, within standard web pages, blogs, or tools such as the IPython notebook.
Let's look at a regular scatter plot
End of explanation
import mpld3
mpld3.display(fig)
Explanation: Unfortunately, this is just a static image. Let's use mpld3 to change that. Using the display command, you get a fully interactive visualization of the figure.
End of explanation
from mpld3 import plugins
fig, ax = plt.subplots(6, 6, figsize=(6, 6))
fig.subplots_adjust(hspace=0.1, wspace=0.1)
ax = ax[::-1]
X = baseball.loc[:, 'r':'rbi']
for i in range(6):
for j in range(6):
ax[i, j].xaxis.set_major_formatter(plt.NullFormatter())
ax[i, j].yaxis.set_major_formatter(plt.NullFormatter())
points = ax[i, j].scatter(X.values[:, j], X.values[:, i])
if i == 0:
ax[i, j].set_xlabel(X.columns[j])
ax[i, 0].set_ylabel(X.columns[i])
plugins.connect(fig, plugins.LinkedBrush(points))
mpld3.display(fig)
Explanation: Notice the toolbar on hover. You can use that to interact with the figure.
You can use mpld3 for every plot that you render in the notebook by executing
python
mpld3.enable_notebook()
mpld3 plugins
Much like event handling via callback functions in regular matplotlib (not covered in this notebook), you can define plugins for mpld3 to specify additional interactivity.
A number of plugins are built-in, and it is also possible to define new, custom plugins for nearly limitless interactive behaviors. For example, here is the built-in Linked Brushing plugin that allows exploration of multi-dimensional datasets:
End of explanation
from IPython.display import Image, HTML
# Image("./tufte.svg")
HTML("./tufte.svg")
Explanation: Putting it all together
Let's recreate this Tufte-inspired graphic
End of explanation
import os
to_colors = lambda x : x/255.
blue3 = list(map(to_colors, (24, 116, 205))) # 1874CD
wheat2 = list(map(to_colors, (238, 216, 174))) # EED8AE
wheat3 = list(map(to_colors, (205, 186, 150))) # CDBA96
wheat4 = list(map(to_colors, (139, 126, 102))) # 8B7E66
firebrick3 = list(map(to_colors, (205, 38, 38))) # CD2626
gray30 = list(map(to_colors, (77, 77, 77))) # 4D4D4D
Explanation: This is a plot of NYC's weather in 2014 versus historical averages
Daily historical highs and lows
Historical confidence intervals around averages
The daily temperatures for 2014
Markers for new highs and lows
Annotations for points
Text for the graphic
Custom tick labels
Load the data from yesterday
End of explanation
idx = range(366)
Explanation: You probably don't want to work with the (month, day) tuples in their present form for plotting
Instead, you can use the below for the x axis
End of explanation
np.where([True, False, False, True, False])[0]
Explanation: First, make the figure and plot the high and low bars (Hints: see the ax.vlines)
The color is wheat3
Second, plot the confidence intervals around the historical means
The color is wheat4
Plot the highs and lows of the present year in present_highs and present_lows
You will need the x axes of these two objects to line up with your current x axis (Hint: you may find np.where to be helpful)
End of explanation
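# A rough, hedged sketch of just the first two steps. It assumes the daily records
# are computed with the groupby below, and it uses mean +/- one standard deviation
# as the "confidence interval", which is only one of several reasonable choices.
history = dta.query("year < 2014").groupby(["month", "day"]).temp
records = history.agg(["min", "max", "mean", "std"]).reset_index(drop=True)
fig, ax = plt.subplots(figsize=(14, 7))
ax.vlines(idx, records["min"], records["max"], color=wheat3, lw=3)
ax.vlines(idx, records["mean"] - records["std"],
          records["mean"] + records["std"], color=wheat4, lw=3)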
yticks = range(-10, 101, 10)
ylabels = [str(i) + u"\u00b0" for i in yticks]
ylabels
Explanation: Annotate one of the 2014 historical lows and one of the 2014 historical highs with the appropriate text (Hint: see ax.annotate)
You may want to look at some of the examples below for annotate and arrows
Now, add text to the figure. (Hint: see ax.text)
Finally, let's add the correct tick labels
You can use unicode to add the $^\circ$
End of explanation
with plt.xkcd():
# Based on "Stove Ownership" from XKCD by Randall Monroe
# http://xkcd.com/418/
fig = plt.figure()
ax = fig.add_axes((0.1, 0.2, 0.8, 0.7))
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
plt.xticks([])
plt.yticks([])
ax.set_ylim([-30, 10])
data = np.ones(100)
data[70:] -= np.arange(1, 31)
plt.annotate(
'THE DAY I REALIZED\nI COULD COOK BACON\nWHENEVER I WANTED',
xy=(70, 1), arrowprops=dict(arrowstyle='->'), xytext=(15, -10), zorder=-1)
plt.plot(data)
plt.xlabel('time')
plt.ylabel('my overall health')
fig.text(0.5, 0.05,
'"Stove Ownership" from xkcd by Randall Monroe', ha='center')
with plt.xkcd():
# Based on "The data So Far" from XKCD by Randall Monroe
# http://xkcd.com/373/
fig = plt.figure()
ax = fig.add_axes((0.1, 0.2, 0.8, 0.7))
ax.bar([-0.125, 1.0-0.125], [0, 100], 0.25)
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.set_xticks([0, 1])
ax.set_xlim([-0.5, 1.5])
ax.set_ylim([0, 110])
ax.set_xticklabels(['CONFIRMED BY\nEXPERIMENT', 'REFUTED BY\nEXPERIMENT'])
ax.set_yticks([])
fig.suptitle("CLAIMS OF SUPERNATURAL POWERS")
fig.text(0.5, 0.01,
'"The Data So Far" from xkcd by Randall Monroe',
ha='center', )
Explanation: Other frequently used plotting tricks
XKCD and Annotation
End of explanation
from matplotlib.ticker import MaxNLocator
x = np.arange(20)
y = np.random.randn(20)
fig, ax = plt.subplots()
ax.plot(x, y)
ax.xaxis.set_major_locator(MaxNLocator(nbins=8))
Explanation: Tick Tricks
End of explanation
x = np.arange(20)
y1 = np.random.randn(20)
y2 = np.random.randn(20)
fig, axes = plt.subplots(2, 1, sharex=True)
axes[0].plot(x, y1)
axes[1].plot(x, y2)
fig.tight_layout()
Explanation: ColorMaps
See colormap reference
Sharing Axes
End of explanation
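# If you would rather list the registered colormap names from inside a session
# than open the reference page, plt.colormaps() returns them.
sorted(plt.colormaps())[:10]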
t = np.arange(0.01, 10.0, 0.01)
s1 = np.exp(t)
s2 = np.sin(2*np.pi*t)
fig, ax1 = plt.subplots()
ax1.plot(t, s1, 'b-')
ax1.set_xlabel('time (s)')
# Make the y-axis label and tick labels match the line color.
ax1.set_ylabel('exp', color='b', fontsize=18)
for tl in ax1.get_yticklabels():
tl.set_color('b')
ax2 = ax1.twinx()
ax2.plot(t, s2, 'r.')
ax2.set_ylabel('sin', color='r', fontsize=18)
for tl in ax2.get_yticklabels():
tl.set_color('r')
Explanation: Twinning Axes
End of explanation
fig, ax = plt.subplots()
ax.imshow(np.random.uniform(0, 1, size=(50, 50)), cmap="RdYlGn")
Explanation: Image Plots
End of explanation
fig, ax = plt.subplots()
ax.set_ylabel("$\\beta^2$", fontsize=20, rotation=0, labelpad=20)
with mpl.rc_context(rc={"text.usetex": True}):
fig, ax = plt.subplots(figsize=(5, 5))
ax.set_ylabel("$\\beta^2$", fontsize=20, rotation=0, labelpad=20)
Explanation: $LaTeX$
By default, matplotlib uses its own $TeX$ engine for text and math layout
You also have the option to call out to an external $TeX$ installation by setting the text.usetex option
End of explanation
from matplotlib.pylab import bivariate_normal
np.random.seed(12)
delta = 0.025
x = np.arange(-3.0, 3.0, delta)
y = np.arange(-2.0, 2.0, delta)
X, Y = np.meshgrid(x, y)
Z1 = bivariate_normal(X, Y, 1.0, 1.0, 0.0, 0.0)
Z2 = bivariate_normal(X, Y, 1.5, 0.5, 1, 1)
# difference of Gaussians
Z = 10.0 * (Z2 - Z1)
with mpl.rc_context(rc={'xtick.direction': 'out',
'ytick.direction': 'out'}):
# Create a simple contour plot with labels using default colors. The
# inline argument to clabel will control whether the labels are draw
# over the line segments of the contour, removing the lines beneath
# the label
fig, ax = plt.subplots(figsize=(8, 8))
contours = ax.contour(X, Y, Z)
ax.clabel(contours, inline=1, fontsize=10)
Explanation: Contour Plots
End of explanation
fig, ax = plt.subplots()
ax.arrow(0, 0, 0.5, 0.5, head_width=0.05, head_length=0.1, fc='k', ec='k')
ax.arrow(0.25, 0, 0.5, 0.5, head_width=0, head_length=0, fc='k', ec='k')
Explanation: Arrows
End of explanation
x = np.arange(0.0, 2, 0.01)
y1 = np.sin(2*np.pi*x)
y2 = 1.2*np.sin(4*np.pi*x)
fig, axes = plt.subplots(3, 1, sharex=True, figsize=(6, 10))
axes[0].fill_between(x, 0, y1)
axes[0].set_ylabel('between y1 and 0')
axes[1].fill_between(x, y1, 1)
axes[1].set_ylabel('between y1 and 1')
Explanation: Filling in plots
End of explanation
<END_TASK> |
22 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Machine Learning Engineer Nanodegree
Introduction and Foundations
Project 0
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship
Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcome[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think
Step5: Tip
Step6: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint
Step7: Answer
Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction
Step10: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint
Step11: Answer
Step13: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction
Step14: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint
Step15: Answer
Step17: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint
Step18: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
# RMS Titanic data visualization code
from titanic_visualizations import survival_stats
from IPython.display import display
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
Explanation: Machine Learning Engineer Nanodegree
Introduction and Foundations
Project 0: Titanic Survival Exploration
In 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.
Tip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook.
Getting Started
To begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame.
Run the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function.
Tip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. Markdown allows you to write easy-to-read plain text that can be converted to HTML.
End of explanation
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
Explanation: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:
- Survived: Outcome of survival (0 = No; 1 = Yes)
- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- Name: Name of passenger
- Sex: Sex of the passenger
- Age: Age of the passenger (Some entries contain NaN)
- SibSp: Number of siblings and spouses of the passenger aboard
- Parch: Number of parents and children of the passenger aboard
- Ticket: Ticket number of the passenger
- Fare: Fare paid by the passenger
- Cabin Cabin number of the passenger (Some entries contain NaN)
- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)
Since we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets.
Run the code cell below to remove Survived as a feature of the dataset and store it in outcomes.
End of explanation
def accuracy_score(truth, pred):
    """ Returns accuracy score for input truth and predictions. """
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print(accuracy_score(outcomes[:5], predictions))
Explanation: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcome[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?
End of explanation
def predictions_0(data):
    """ Model with no features. Always predicts a passenger did not survive. """
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
Explanation: Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.
Making Predictions
If we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking.
The predictions_0 function below will always predict that a passenger did not survive.
End of explanation
print(accuracy_score(outcomes, predictions))
Explanation: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
survival_stats(data, outcomes, 'Sex')
Explanation: Answer: Predictions have an accuracy of 61.62%.
Let's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the titanic_visualizations.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.
Run the code cell below to plot the survival outcomes of passengers based on their sex.
End of explanation
def predictions_1(data):
    """ Model with one feature:
            - Predict a passenger survived if they are female. """
predictions = []
for _, passenger in data.iterrows():
# all women have survived
if passenger['Sex'] == 'female':
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
Explanation: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger.
End of explanation
print(accuracy_score(outcomes, predictions))
Explanation: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
Explanation: Answer: Predictions have an accuracy of 78.68%.
Using just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included.
Run the code cell below to plot the survival outcomes of male passengers based on their age.
End of explanation
def predictions_2(data):
    """ Model with two features:
            - Predict a passenger survived if they are female.
            - Predict a passenger survived if they are male and younger than 10. """
predictions = []
for _, passenger in data.iterrows():
# All women and male passengers of age < 10 have survived
if passenger['Sex'] == 'female' or passenger['Sex'] == 'male' and passenger['Age'] < 10:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
Explanation: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.
End of explanation
print(accuracy_score(outcomes, predictions))
Explanation: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
survival_stats(data, outcomes, 'Age', ["Sex == 'female'", "Pclass == 3"])
Explanation: Answer: Predictions have an accuracy of 79.35%.
Adding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin more than with simply using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions.
Pclass, Sex, Age, SibSp, and Parch are some suggested features to try.
Use the survival_stats function below to examine various survival statistics.
Hint: To use multiple filter conditions, put each condition in the list passed as the last argument. Example: ["Sex == 'male'", "Age < 18"]
End of explanation
import random
def predictions_3(data):
    """ Model with multiple features. Makes a prediction with an accuracy of at least 80%. """
predictions = []
for _, passenger in data.iterrows():
# all women and passengers of age < 10 have survived except for
# women from 3rd Class who embarked the Titanic in Queenstown and
# women from 3rd class of age > 40 and < 60.
if passenger['Sex'] == 'female' and passenger['Embarked'] == 'Q' and passenger['Pclass'] == 3 \
and passenger['Age'] > 30 \
or passenger['Pclass'] == 3 and passenger['Sex'] == 'female' and passenger['Age'] > 40 and passenger['Age'] < 60:
predictions.append(0)
elif passenger['Sex'] == 'female' \
or passenger['Sex'] == 'male' and passenger['Age'] < 10:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
Explanation: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.
End of explanation
print(accuracy_score(outcomes, predictions))
Explanation: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint: Run the code cell below to see the accuracy of your predictions.
End of explanation
<END_TASK> |
23 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Apply logistic regression to categorize whether a county had high mortality rate due to contamination
1. Import the necessary packages to read in the data, plot, and create a logistic regression model
Step1: 2. Read in the hanford.csv file in the data/ folder
Step2: <img src="../../images/hanford_variables.png"></img>
3. Calculate the basic descriptive statistics on the data
Step3: 4. Find a reasonable threshold to say exposure is high and recode the data
Step4: 5. Create a logistic regression model
Step5: 6. Predict whether the mortality rate (Cancer per 100,000 man years) will be high at an exposure level of 50 | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
%matplotlib inline
import numpy as np
from sklearn.linear_model import LogisticRegression
Explanation: Apply logistic regression to categorize whether a county had high mortality rate due to contamination
1. Import the necessary packages to read in the data, plot, and create a logistic regression model
End of explanation
df = pd.read_csv("../data/hanford.csv")
Explanation: 2. Read in the hanford.csv file in the data/ folder
End of explanation
df.describe()
Explanation: <img src="../../images/hanford_variables.png"></img>
3. Calculate the basic descriptive statistics on the data
End of explanation
df['Mortality'].hist(bins=5)
df['Mortality'].mean()
df['Mort_high'] = df['Mortality'].apply(lambda x:1 if x>=147.1 else 0)
df['Expo_high'] = df['Exposure'].apply(lambda x:1 if x>=3.41 else 0)
def exposure_high(x):
if x >= 3.41:
return 1
else:
return 0
df
Explanation: 4. Find a reasonable threshold to say exposure is high and recode the data
End of explanation
lm = LogisticRegression()
x = np.asarray(df[['Exposure']])
y = np.asarray(df['Mort_high'])
lm = lm.fit(x,y)
Explanation: 5. Create a logistic regression model
End of explanation
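# Optional sanity check (an aside, not part of the original exercise): inspect the
# fitted coefficient/intercept and the predicted probability for an exposure of 50.
print(lm.coef_, lm.intercept_)
print(lm.predict_proba([[50]]))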
lm.predict([[50]])  # scikit-learn expects a 2-D array of samples
Explanation: 6. Predict whether the mortality rate (Cancer per 100,000 man years) will be high at an exposure level of 50
End of explanation
<END_TASK> |
24 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
SC57 - Working with big, multi-dimensional geoscientific datasets in Python
Step1: Basic data arrays in numpy
Step2: numpy is a powerful but "low-level" array manipulation tool. Axis only have numbers and no names (it is easy to forget which axis is what, a common source of trivial bugs), arrays can't carry metadata (e.g. units), and the data is unstructured (i.e. the coordinates and/or other related arrays have to be handled separately
Step3: Let's Do Some Math
Step4: Notice xarray has changed the colormap according to the dataset (borrowing logic from Seaborn).
* With degrees C, the data passes through 0, so a diverging colormap is used
* With Kelvin, the default colormap is used.
Step5: Adding Data to DataSets
Step6: Let's add those kelvin and centigrade dataArrays to the dataset.
Step7: 3. Selecting data with named dimensions
In xarray there are many different ways for selecting and indexing data.
Positional indexing (old way)
This is the "old way", i.e. like numpy
Step8: This selection implies prior knowledge about the structure of the data, and is therefore much less readable than the "xarray methods" presented below.
Selection by index
Selection based on the index of a coordinate
Step9: Selection by value
Selection based on the value of a coordinate
Step10: Selection by value works well for time, too
Step11: Selecting a range of values
The syntax is similar, but you'll need to use a slice
Step12: Nearest neighbor lookup
Step13: 4. Operations and computation
We can do arithmetic directly on Dataset and DataArray objects.
Labels are preserved and dataArray dimensions automatically aligned.
Broadcasting
<img src="./figures/broadcast.png" width="50%" align="left">
Step14: Alignment
<img src="./figures/align.png" width="50%" align="left">
Step15: Aggregation
Step16: Masking with .where()
Step17: 5. Groupby and "split-apply-combine"
Xarray implements the "split-apply-combine" paradigm with groupby. This works really well for calculating climatologies
Step18: <img src="./figures/split_apply-combine.png" alt="split" style="width
Step19: You can also do arithmetic with groupby objects, which repeats the arithmetic over each group
Step20: Resample adjusts a time series to a new resolution
Step21: 6. Graphics
xarray plotting functions rely on matplotlib internally, but they make use of all available metadata to make the plotting operations more intuitive and interpretable.
1D plots
Step22: 2D plots
Step23: Customizing 2d plots
Step24: Dealing with Outliers
Step25: Facet plots
Step26: Plotting on maps
For plotting on maps, we rely on the excellent cartopy library.
Step27: Facet plots on maps
Step28: Seaborn is Cool
Statistical visualization with Seaborn
Step29: 7. Out-of-core computation
Here's a quick demo of how xarray can leverage dask to work with data that doesn't fit in memory. This lets xarray substitute for tools like cdo and nco.
Let's open 10 years of runoff data
xarray can open multiple files at once using string pattern matching.
In this case we open all the files that match our filestr, i.e. all the files for the 2080s.
Each of these files (compressed) is approximately 80 MB.
PS - these files weren't available during the tutorial. The data we used was daily discharge hydrological data from the ISIMIP project (e.g. HadGEM2-ES / PCRGLOBWB / RCP2p6), which we cannot share here but is available for download.
Step30: xarray even puts them in the right order for you.
Step31: How big is all this data uncompressed? Will it fit into memory?
Step32: Working with Big Data
This data is too big for our memory.
That means we need to process it in chunks.
We can do this chunking in xarray very easily.
xarray computes data 'lazily'. That means that data is only loaded into memory when it is actually required. This also allows us to inspect datasets without loading all the data into memory.
To do this xarray integrates with dask to support streaming computation on datasets that don’t fit into memory.
<img src="./figures/dask-array.png" width="40%" align="center"> | <ASSISTANT_TASK:>
Python Code:
# standard imports
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import xarray as xr
import warnings
%matplotlib inline
np.set_printoptions(precision=3, linewidth=80, edgeitems=1) # make numpy less verbose
xr.set_options(display_width=70)
warnings.simplefilter('ignore') # filter some warning messages
Explanation: SC57 - Working with big, multi-dimensional geoscientific datasets in Python: a tutorial introduction to xarray
Original notebook by Stephan Hoyer, Rossbypalooza, 2016.
Modified by Edward Byers, Matthew Gidden and Fabien Maussion for EGU General Assembly 2017, Vienna, Austria
Thursday, 27th April, 15:30–17:00 / Room -2.91
Convenors
* Dr Edward Byers - International Institute for Applied Systems Analysis, Laxenburg, Austria
* Dr Matthew Gidden - International Institute for Applied Systems Analysis, Laxenburg, Austria
* Dr Fabien Maussion - University of Innsbruck, Innsbruck, Austria
Structure of this tutorial
Introduction to key features of xarray
Basic operations in xarray: opening, inspecting, selecting and indexing data
Selecting data with named dimensions
Operations and computation
Groupby and "split-apply-combine"
Graphics
Out-of-core computation
1. Key features of xarray
What is xarray?
xarray is an open source project and Python package
xarray has been designed to perform labelled data analysis on multi-dimensional arrays
the xarray approach adopts the Common Data Model for self-describing scientific data in widespread use in the Earth sciences
xarray.Dataset is an in-memory representation of a netCDF file.
xarray is built on top of the dataprocessing library Pandas (the best way to work with tabular data (e.g., CSV files) in Python)
Our data
<img src="./figures/dataset.png" width="50%" align="right">
numeric
multi-dimensional
labelled
(lots of) metadata
sometimes (very) large
What is xarray good for?
Gridded, multi-dimensional and large datasets, commonly used in earth sciences, but also increasingly finance, engineering (signal/image processing), and biological sciences
Integration with other data analysis packages such as Pandas
I/O operations (NetCDF)
Plotting
Out of core computation and parallel processing
Extensions based on xarray
...
Where can I find more info?
For more information about xarray
Read the online documentation
Ask questions on StackOverflow
View the source code and file bug reports on GitHub
For more doing data analysis with Python:
Thomas Wiecki, A modern guide to getting started with Data Science and Python
Wes McKinney, Python for Data Analysis (book)
Packages building on xarray for the geophysical sciences
For analyzing GCM output:
xgcm by Ryan Abernathey
oogcm by Julien Le Sommer
MPAS xarray by Phil Wolfram
marc_analysis by Daniel Rothenberg
Other tools:
windspharm: wind spherical harmonics by Andrew Dawson
eofs: empirical orthogonal functions by Andrew Dawson
infinite-diff by Spencer Hill
aospy by Spencer Hill and Spencer Clark
regionmask by Mathias Hauser
salem by Fabien Maussion
Resources for teaching and learning xarray in geosciences:
- Fabien's teaching repo: courses that combine teaching climatology and xarray
2. Basic operations in xarray
Import python packages
End of explanation
import numpy as np
a = np.array([[1, 3, 9], [2, 8, 4]])
a
a[1, 2]
a.mean(axis=0)
Explanation: Basic data arrays in numpy
End of explanation
ds = xr.tutorial.load_dataset('air_temperature')
ds
ds.air
ds.dims
ds.attrs
ds.air.values
type(ds.air.values)
ds.air.dims
ds.air.attrs
ds.air.attrs['tutorial-date'] = 27042017
ds.air.attrs
Explanation: numpy is a powerful but "low-level" array manipulation tool. Axis only have numbers and no names (it is easy to forget which axis is what, a common source of trivial bugs), arrays can't carry metadata (e.g. units), and the data is unstructured (i.e. the coordinates and/or other related arrays have to be handled separately: another source of bugs).
This is where xarray comes in!
Properties of xarray.Dataset and xarray.DataArray objects
We'll start with the "air_temperature" tutorial dataset. This tutorial comes with the xarray package. Other examples here.
End of explanation
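# Illustrative only (the dimension names, coordinates and units below are made up):
# the small numpy array `a` defined earlier can be wrapped in a DataArray so that
# its axes get names and it can carry metadata.
da = xr.DataArray(a, dims=('row', 'col'),
                  coords={'row': np.arange(2), 'col': np.arange(3)},
                  attrs={'units': 'arbitrary'})
da.mean(dim='row')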
kelvin = ds.air.mean(dim='time')
kelvin.plot();
centigrade = kelvin - 273.16
centigrade.plot();
Explanation: Let's Do Some Math
End of explanation
# ufuncs work too
np.sin(centigrade).plot();
Explanation: Notice xarray has changed the colormap according to the dataset (borrowing logic from Seaborn).
* With degrees C, the data passes through 0, so a diverging colormap is used
* With Kelvin, the default colormap is used.
End of explanation
ds
Explanation: Adding Data to DataSets
End of explanation
ds['centigrade'] = centigrade
ds['kelvin'] = kelvin
ds
ds.kelvin.attrs # attrs are empty! Let's add some
ds.kelvin.attrs['Description'] = 'Mean air tempterature (through time) in kelvin.'
ds.kelvin
ds.to_netcdf('new file.nc')
Explanation: Let's add those kelvin and centigrade dataArrays to the dataset.
End of explanation
ds.air[:, 1, 2] # note that the attributes, coordinates are preserved
ds.air[:, 1, 2].plot();
Explanation: 3. Selecting data with named dimensions
In xarray there are many different ways for selecting and indexing data.
Positional indexing (old way)
This is the "old way", i.e. like numpy:
End of explanation
ds.air.isel(time=0).plot(); # like above, but with a dimension name this time
Explanation: This selection implies prior knowledge about the structure of the data, and is therefore much less readable than the "xarray methods" presented below.
Selection by index
Selection based on the index of a coordinate:
End of explanation
ds.air.sel(lat=72.5, lon=205).plot();
Explanation: Selection by value
Selection based on the value of a coordinate:
End of explanation
ds.air.sel(time='2013-01-02').plot(); # Note that we will extract 4 time steps! 3d data is plotted as histogram
ds.air.sel(time='2013-01-02T06:00').plot(); # or look at a single timestep
Explanation: Selection by value works well for time, too
End of explanation
ds.air.sel(lat=slice(60, 50), lon=slice(200, 270), time='2013-01-02T06:00:00').plot();
Explanation: Selecting a range of values
The syntax is similar, but you'll need to use a slice:
End of explanation
ds.air.sel(lat=41.8781, lon=360-87.6298, method='nearest', tolerance=5).plot();
Explanation: Nearest neighbor lookup
End of explanation
a = xr.DataArray(np.arange(3), dims='time',
coords={'time':np.arange(3)})
b = xr.DataArray(np.arange(4), dims='space',
coords={'space':np.arange(4)})
a + b
Explanation: 4. Operations and computation
We can do arithmetic directly on Dataset and DataArray objects.
Labels are preserved and dataArray dimensions automatically aligned.
Broadcasting
<img src="./figures/broadcast.png" width="50%" align="left">
End of explanation
atime = np.arange(3)
btime = np.arange(5) + 1
atime, btime
a = xr.DataArray(np.arange(3), dims='time',
coords={'time':atime})
b = xr.DataArray(np.arange(5), dims='time',
coords={'time':btime})
a + b
Explanation: Alignment
<img src="./figures/align.png" width="50%" align="left">
End of explanation
ds.max()
ds.air.median(dim=['lat', 'lon']).plot();
Explanation: Aggregation
End of explanation
means = ds.air.mean(dim=['time'])
means.where(means > 273.15).plot();
Explanation: Masking with .where()
End of explanation
ds.air.groupby('time.season').mean()
Explanation: 5. Groupby and "split-apply-combine"
Xarray implements the "split-apply-combine" paradigm with groupby. This works really well for calculating climatologies:
End of explanation
ds.air.groupby('time.month').mean('time')
clim = ds.air.groupby('time.month').mean('time')
Explanation: <img src="./figures/split_apply-combine.png" alt="split" style="width: 50%;"/>
End of explanation
anomalies = ds.air.groupby('time.month') - clim
anomalies
anomalies.plot();
anomalies.sel(time='2013-02').plot();  # Find all the anomalous values for February
Explanation: You can also do arithmetic with groupby objects, which repeats the arithmetic over each group:
End of explanation
tmin = ds.air.resample('1D', dim='time', how='min')  # Resample to one-day ('1D') bins
tmax = ds.air.resample('1D', dim='time', how='max')
(tmin.sel(time='2013-02-15') - 273.15).plot();
ds_extremes = xr.Dataset({'tmin': tmin, 'tmax': tmax})
ds_extremes
Explanation: Resample adjusts a time series to a new resolution:
End of explanation
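# Note: the how=/dim= form used above was deprecated and later removed from xarray;
# assuming a newer release (roughly 0.12+), an equivalent call would be:
tmin_new_api = ds.air.resample(time='1D').min()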
zonal_t_average = ds.air.mean(dim=['lon', 'time']) - 273.15
zonal_t_average.plot(); # 1D arrays are plotted as line plots
Explanation: 6. Graphics
xarray plotting functions rely on matplotlib internally, but they make use of all available metadata to make the plotting operations more intuitive and interpretable.
1D plots
End of explanation
t_average = ds.air.mean(dim='time') - 273.15
t_average.plot(); # 2D arrays are plotted with pcolormesh
t_average.plot.contourf(); # but you can use contour(), contourf() or imshow() if you wish
Explanation: 2D plots
End of explanation
t_average.plot.contourf(cmap='BrBG_r', vmin=-15, vmax=15);
t_average.plot.contourf(cmap='BrBG_r', levels=22, center=False);
Explanation: Customizing 2d plots
End of explanation
air_outliers = ds.air.isel(time=0).copy()
air_outliers[0, 0] = 100
air_outliers[-1, -1] = 400
air_outliers.plot(); # outliers mess with the datarange and colorscale!
# Using `robust=True` uses the 2nd and 98th percentiles of the data to compute the color limits.
air_outliers.plot(robust=True);
Explanation: Dealing with Outliers
End of explanation
t_season = ds.air.groupby('time.season').mean(dim='time') - 273.15
# facet plot allows to do multiplot with the same color mappings
t_season.plot.contourf(x='lon', y='lat', col='season', col_wrap=2, levels=22);
Explanation: Facet plots
End of explanation
import cartopy.crs as ccrs
f = plt.figure(figsize=(8, 4))
# Define the map projection *on which* you want to plot
ax = plt.axes(projection=ccrs.Orthographic(-80, 35))
# ax is an empty plot. We now plot the variable t_average onto ax
# the keyword "transform" tells the function in which projection the air temp data is stored
t_average.plot(ax=ax, transform=ccrs.PlateCarree())
# Add gridlines and coastlines to the plot
ax.coastlines(); ax.gridlines();
Explanation: Plotting on maps
For plotting on maps, we rely on the excellent cartopy library.
End of explanation
# this time we need to retrieve the plots to do things with the axes later on
p = t_season.plot(x='lon', y='lat', col='season', transform=ccrs.PlateCarree(),
subplot_kws={'projection': ccrs.Orthographic(-80, 35)})
for ax in p.axes.flat:
ax.coastlines()
Explanation: Facet plots on maps
End of explanation
import seaborn as sns
data = (ds_extremes
.sel_points(lat=[41.8781, 37.7749], lon=[360-87.6298, 360-122.4194],
method='nearest', tolerance=3,
dim=xr.DataArray(['Chicago', 'San Francisco'],
name='location', dims='location'))
.to_dataframe()
.reset_index()
.assign(month=lambda x: x.time.dt.month))
plt.figure(figsize=(10, 5))
sns.violinplot('month', 'tmax', 'location', data=data, split=True, inner=None);
Explanation: Seaborn is Cool
Statistical visualization with Seaborn:
End of explanation
from glob import glob
files = glob('data/*dis*.nc')
runoff = xr.open_mfdataset(files)
runoff
Explanation: 7. Out-of-core computation
Here's a quick demo of how xarray can leverage dask to work with data that doesn't fit in memory. This lets xarray substitute for tools like cdo and nco.
Let's open 10 years of runoff data
xarray can open multiple files at once using string pattern matching.
In this case we open all the files that match our filestr, i.e. all the files for the 2080s.
Each of these files (compressed) is approximately 80 MB.
PS - these files weren't available during the tutorial. The data we used was daily discharge hydrological data from the ISIMIP project (e.g. HadGEM2-ES / PCRGLOBWB / RCP2p6), which we cannot share here but is available for download.
End of explanation
runoff.time
Explanation: xarray even puts them in the right order for you.
End of explanation
runoff.nbytes / 1e9 # Convert to gigiabytes
Explanation: How big is all this data uncompressed? Will it fit into memory?
End of explanation
runoff = runoff.chunk({'lat': 60})
runoff.chunks
%time ro_seasonal = runoff.groupby('time.season').mean('time')
import dask
from multiprocessing.pool import ThreadPool
dask.set_options(pool=ThreadPool(1))
%time ro_seasonal.compute()
dask.set_options(pool=ThreadPool(4))
%time ro_seasonal = runoff.groupby('time.season').mean('time')
%time result = ro_seasonal.compute()
brazil = dict(lat=slice(10.75, -40.75), lon=slice(-100.25, -25.25))
result.dis.sel(**brazil).plot(col='season', size=4, cmap='Spectral_r')
Explanation: Working with Big Data
This data is too big for our memory.
That means we need to process it in chunks.
We can do this chunking in xarray very easily.
xarray computes data 'lazily'. That means that data is only loaded into memory when it is actually required. This also allows us to inspect datasets without loading all the data into memory.
To do this xarray integrates with dask to support streaming computation on datasets that don’t fit into memory.
<img src="./figures/dask-array.png" width="40%" align="center">
End of explanation
<END_TASK> |
25 | <SYSTEM_TASK:>
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
<END_TASK>
<USER_TASK:>
Problem:
I have two arrays: | <ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.array(
[[[ 0, 1],
[ 2, 3],
[ 4, 5]],
[[ 6, 7],
[ 8, 9],
[10, 11]],
[[12, 13],
[14, 15],
[16, 17]]]
)
b = np.array(
[[0, 1, 1],
[1, 0, 1],
[1, 1, 0]]
)
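# For each (i, j) this picks a[i, j, b[i, j]]: b gets a trailing axis so it can be
# broadcast against a, values are gathered along the last axis, which is then dropped.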
result = np.take_along_axis(a, b[..., np.newaxis], axis=-1)[..., 0]
<END_TASK> |
26 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Analyzing Locust Load Testing Results
This Notebook demonstrates how to analyze AI Platform Prediction load testing runs using metrics captured in Cloud Monitoring.
This Notebook builds on the 02-perf-testing.ipynb notebook that shows how to configure and run load tests against AI Platform Prediction using Locust.io. The outlined testing process results in a Pandas dataframe that aggregates the standard AI Platform Prediction metrics with a set of custom, log-based metrics generated from log entries captured by the Locust testing script.
The Notebook covers the following steps
Step1: Configure GCP environment settings
Step2: 1. Retrieve and consolidate test results
Locust's web interface along with a Cloud Monitoring dashboard provide a cursory view into performance of a tested AI Platform Prediction model version. A more thorough analysis can be performed by consolidating metrics collected during a test and using data analytics and visualization tools.
In this section, you will retrieve the metrics captured in Cloud Monitoring and consolidate them into a single Pandas dataframe.
1.1 List available AI Platform Prediction metrics
Step3: 1.2. List custom log based metrics
Step7: 1.3. Retrieve test metrics
Define a helper function that retrieves test metrics from Cloud Monitoring
Step8: The retrieved dataframe uses hierarchical indexing for column names. The reason is that some metrics contain multiple time series. For example, the GPU duty_cycle metric includes a time series of measures per each GPU used in the deployment (denoted as replica_id). The top level of the column index is a metric name. The second level is a replica_id. The third level is a signature of a model.
All metrics are aligned on the same timeline.
2. Analyzing and Visualizing test results
In the context of our scenario the key concern is GPU utilization at various levels of throughput and latency. The primary metric exposed by AI Platform Prediction to monitor GPU utilization is duty cycle. This metric captures an average fraction of time over the 60 second period during which the accelerator(s) were actively processing.
2.1. GPU utilization
Step9: 2.2. CPU utilization
Step10: 2.3. Latency
Step11: 2.4. Request throughput
We are going to use the response_rate metric, which tracks a number of responses returned by AI Platform Prediction over a 1 minute interval.
Step12: Cleaning up | <ASSISTANT_TASK:>
Python Code:
import time
from datetime import datetime
from typing import List
import numpy as np
import pandas as pd
import google.auth
from google.cloud import logging_v2
from google.cloud.monitoring_dashboard.v1 import DashboardsServiceClient
from google.cloud.logging_v2 import MetricsServiceV2Client
from google.cloud.monitoring_v3.query import Query
from google.cloud.monitoring_v3 import MetricServiceClient
import matplotlib.pyplot as plt
Explanation: Analyzing Locust Load Testing Results
This Notebook demonstrates how to analyze AI Platform Prediction load testing runs using metrics captured in Cloud Monitoring.
This Notebook builds on the 02-perf-testing.ipynb notebook that shows how to configure and run load tests against AI Platform Prediction using Locust.io. The outlined testing process results in a Pandas dataframe that aggregates the standard AI Platform Prediction metrics with a set of custom, log-based metrics generated from log entries captured by the Locust testing script.
The Notebook covers the following steps:
1. Retrieve and consolidate test results from Cloud Monitoring
2. Analyze and visualize utilization and latency results
Setup
This notebook was tested on AI Platform Notebooks using the standard TF 2.2 image.
Import libraries
End of explanation
PROJECT_ID = '[your-project-id]' # Set your project Id
MODEL_NAME = 'resnet_classifier'
MODEL_VERSION = 'v1'
LOG_NAME = 'locust' # Set your log name
TEST_ID = 'test-20200829-190943' # Set your test Id
TEST_START_TIME = datetime.fromisoformat('2020-08-28T21:30:00-00:00') # Set your test start time
TEST_END_TIME = datetime.fromisoformat('2020-08-29T22:00:00-00:00') # Set your test end time
Explanation: Configure GCP environment settings
End of explanation
creds , _ = google.auth.default()
client = MetricServiceClient(credentials=creds)
project_path = client.project_path(PROJECT_ID)
filter = 'metric.type=starts_with("ml.googleapis.com/prediction")'
for descriptor in client.list_metric_descriptors(project_path, filter_=filter):
print(descriptor.type)
Explanation: 1. Retrieve and consolidate test results
Locust's web interface along with a Cloud Monitoring dashboard provide a cursory view into performance of a tested AI Platform Prediction model version. A more thorough analysis can be performed by consolidating metrics collected during a test and using data analytics and visualization tools.
In this section, you will retrieve the metrics captured in Cloud Monitoring and consolidate them into a single Pandas dataframe.
1.1 List available AI Platform Prediction metrics
End of explanation
filter = 'metric.type=starts_with("logging.googleapis.com/user")'
for descriptor in client.list_metric_descriptors(project_path, filter_=filter):
print(descriptor.type)
Explanation: 1.2. List custom log based metrics
End of explanation
def retrieve_metrics(client, project_id, start_time, end_time, model, model_version, test_id, log_name):
    """Retrieves test metrics from Cloud Monitoring."""
def _get_aipp_metric(metric_type: str, labels: List[str]=[], metric_name=None)-> pd.DataFrame:
        """Retrieves a specified AIPP metric."""
query = Query(client, project_id, metric_type=metric_type)
query = query.select_interval(end_time, start_time)
query = query.select_resources(model_id=model)
query = query.select_resources(version_id=model_version)
if metric_name:
labels = ['metric'] + labels
df = query.as_dataframe(labels=labels)
if not df.empty:
if metric_name:
df.columns.set_levels([metric_name], level=0, inplace=True)
df = df.set_index(df.index.round('T'))
return df
def _get_locust_metric(metric_type: str, labels: List[str]=[], metric_name=None)-> pd.DataFrame:
        """Retrieves a specified custom log-based metric."""
query = Query(client, project_id, metric_type=metric_type)
query = query.select_interval(end_time, start_time)
query = query.select_metrics(log=log_name)
query = query.select_metrics(test_id=test_id)
if metric_name:
labels = ['metric'] + labels
df = query.as_dataframe(labels=labels)
if not df.empty:
if metric_name:
df.columns.set_levels([metric_name], level=0, inplace=True)
df = df.apply(lambda row: [metric.mean for metric in row])
df = df.set_index(df.index.round('T'))
return df
# Retrieve GPU duty cycle
metric_type = 'ml.googleapis.com/prediction/online/accelerator/duty_cycle'
metric = _get_aipp_metric(metric_type, ['replica_id', 'signature'], 'duty_cycle')
df = metric
# Retrieve CPU utilization
metric_type = 'ml.googleapis.com/prediction/online/cpu/utilization'
metric = _get_aipp_metric(metric_type, ['replica_id', 'signature'], 'cpu_utilization')
if not metric.empty:
df = df.merge(metric, how='outer', right_index=True, left_index=True)
# Retrieve prediction count
metric_type = 'ml.googleapis.com/prediction/prediction_count'
metric = _get_aipp_metric(metric_type, ['replica_id', 'signature'], 'prediction_count')
if not metric.empty:
df = df.merge(metric, how='outer', right_index=True, left_index=True)
# Retrieve responses per second
metric_type = 'ml.googleapis.com/prediction/response_count'
metric = _get_aipp_metric(metric_type, ['replica_id', 'signature'], 'response_rate')
if not metric.empty:
metric = (metric/60).round(2)
df = df.merge(metric, how='outer', right_index=True, left_index=True)
# Retrieve backend latencies
metric_type = 'ml.googleapis.com/prediction/latencies'
metric = _get_aipp_metric(metric_type, ['latency_type', 'replica_id', 'signature'])
if not metric.empty:
metric = metric.apply(lambda row: [round(latency.mean/1000,1) for latency in row])
metric.columns.set_names(['metric', 'replica_id', 'signature'], inplace=True)
level_values = ['Latency: ' + value for value in metric.columns.get_level_values(level=0)]
metric.columns.set_levels(level_values, level=0, inplace=True)
df = df.merge(metric, how='outer', right_index=True, left_index=True)
# Retrieve Locust latency
metric_type = 'logging.googleapis.com/user/locust_latency'
metric = _get_locust_metric(metric_type, ['replica_id', 'signature'], 'Latency: client')
if not metric.empty:
metric = metric.round(2).replace([0], np.nan)
df = df.merge(metric, how='outer', right_index=True, left_index=True)
# Retrieve Locust user count
metric_type = 'logging.googleapis.com/user/locust_users'
metric = _get_locust_metric(metric_type, ['replica_id', 'signature'], 'User count')
if not metric.empty:
metric = metric.round()
df = df.merge(metric, how='outer', right_index=True, left_index=True)
# Retrieve Locust num_failures
metric_type = 'logging.googleapis.com/user/num_failures'
metric = _get_locust_metric(metric_type, ['replica_id', 'signature'], 'Num of failures')
if not metric.empty:
metric = metric.round()
df = df.merge(metric, how='outer', right_index=True, left_index=True)
# Retrieve Locust num_failures
metric_type = 'logging.googleapis.com/user/num_requests'
metric = _get_locust_metric(metric_type, ['replica_id', 'signature'], 'Num of requests')
if not metric.empty:
metric = metric.round()
df = df.merge(metric, how='outer', right_index=True, left_index=True)
return df
test_result = retrieve_metrics(
client,
PROJECT_ID,
TEST_START_TIME,
TEST_END_TIME,
MODEL_NAME,
MODEL_VERSION,
TEST_ID,
LOG_NAME
)
test_result.head().T
Explanation: 1.3. Retrieve test metrics
Define a helper function that retrieves test metrics from Cloud Monitoring
End of explanation
gpu_utilization_results = test_result['duty_cycle']
gpu_utilization_results.columns = gpu_utilization_results.columns.get_level_values(0)
ax = gpu_utilization_results.plot(figsize=(14, 9), legend=True)
ax.set_xlabel('Time', fontsize=16)
ax.set_ylabel('Utilization ratio', fontsize=16)
_ = ax.set_title("GPU Utilization", fontsize=20)
Explanation: The retrieved dataframe uses hierarchical indexing for column names. The reason is that some metrics contain multiple time series. For example, the GPU duty_cycle metric includes a time series of measures per each GPU used in the deployment (denoted as replica_id). The top level of the column index is a metric name. The second level is a replica_id. The third level is a signature of a model.
All metrics are aligned on the same timeline.
2. Analyzing and Visualizing test results
In the context of our scenario the key concern is GPU utilization at various levels of throughput and latency. The primary metric exposed by AI Platform Prediction to monitor GPU utilization is duty cycle. This metric captures an average fraction of time over the 60 second period during which the accelerator(s) were actively processing.
2.1. GPU utilization
End of explanation
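# A couple of quick, illustrative ways to inspect the hierarchical columns:
test_result.columns.nlevels                        # 3 levels: metric / replica_id / signature
test_result.columns.get_level_values(0).unique()   # the available metric names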
cpu_utilization_results = test_result['cpu_utilization']
cpu_utilization_results.columns = cpu_utilization_results.columns.get_level_values(0)
ax = cpu_utilization_results.plot(figsize=(14, 9), legend=True)
ax.set_xlabel('Time', fontsize=16)
ax.set_ylabel('Utilization ratio', fontsize=16)
_ = ax.set_title("CPU Utilization", fontsize=20)
Explanation: 2.2. CPU utilization
End of explanation
latency_results = test_result[['Latency: model', 'Latency: client']]
latency_results.columns = latency_results.columns.get_level_values(0)
ax = latency_results.plot(figsize=(14, 9), legend=True)
ax.set_xlabel('Time', fontsize=16)
ax.set_ylabel('millisecond', fontsize=16)
_ = ax.set_title("Latency", fontsize=20)
Explanation: 2.3. Latency
End of explanation
throughput_results = test_result[['response_rate', 'User count']]
throughput_results.columns = throughput_results.columns.get_level_values(0)
ax = throughput_results.plot(figsize=(14, 9), legend=True)
ax.set_xlabel('Time', fontsize=16)
ax.set_ylabel('Count', fontsize=16)
_ = ax.set_title("Response Rate vs User Count", fontsize=20)
Explanation: 2.4. Request throughput
We are going to use the response_rate metric, which tracks a number of responses returned by AI Platform Prediction over a 1 minute interval.
End of explanation
logging_client = MetricsServiceV2Client(credentials=creds)
parent = logging_client.project_path(PROJECT_ID)
for element in logging_client.list_log_metrics(parent):
metric_path = logging_client.metric_path(PROJECT_ID, element.name)
logging_client.delete_log_metric(metric_path)
print("Deleted metric: ", metric_path)
display_name = 'AI Platform Prediction and Locust'
dashboard_service_client = DashboardsServiceClient(credentials=creds)
parent = 'projects/{}'.format(PROJECT_ID)
for dashboard in dashboard_service_client.list_dashboards(parent):
if dashboard.display_name == display_name:
dashboard_service_client.delete_dashboard(dashboard.name)
print("Deleted dashboard:", dashboard.name)
Explanation: Cleaning up: delete the log-based metrics and dashboard
End of explanation
<END_TASK> |
27 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Building interactive plots using bqplot and ipywidgets
bqplot is built on top of the ipywidgets framework
ipywidgets and bqplot widgets can be seamlessly integrated to build interactive plots
bqplot figure widgets can be stacked with UI controls available in ipywidgets by using Layout classes (Box, HBox, VBox) in ipywidgets
(Note that only Figure objects (not Mark objects) inherit from DOMWidget class and can be combined with other widgets from ipywidgets)
Trait attributes of widgets can be linked using callbacks. Callbacks should be registered using the observe method
Please follow these links for detailed documentation on
Step1: Update the plot on a button click
Step2: Let's look at an example where we link a plot to a dropdown menu
Step3: Let's now create a scatter plot where we select X and Y data from the two dropdown menus
Step4: In the example below, we'll look at plots of trigonometric functions | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import ipywidgets as widgets
import bqplot.pyplot as plt
Explanation: Building interactive plots using bqplot and ipywidgets
bqplot is built on top of the ipywidgets framework
ipywidgets and bqplot widgets can be seamlessly integrated to build interactive plots
bqplot figure widgets can be stacked with UI controls available in ipywidgets by using Layout classes (Box, HBox, VBox) in ipywidgets
(Note that only Figure objects (not Mark objects) inherit from DOMWidget class and can be combined with other widgets from ipywidgets)
Trait attributes of widgets can be linked using callbacks. Callbacks should be registered using the observe method
Please follow these links for detailed documentation on:
1. Layout and Styling of Jupyter Widgets
* Linking Widgets
<br>Let's look at examples of linking plots with UI controls
End of explanation
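# The "Linking Widgets" docs above also cover direct trait links; a minimal,
# self-contained sketch (unrelated to the plots below) keeps two sliders in sync.
slider_a = widgets.IntSlider(description="a")
slider_b = widgets.IntSlider(description="b")
widgets.jslink((slider_a, "value"), (slider_b, "value"))
widgets.VBox([slider_a, slider_b])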
y = np.random.randn(100).cumsum() # simple random walk
# create a button
update_btn = widgets.Button(description="Update", button_style="success")
# create a figure widget
fig1 = plt.figure(animation_duration=750)
line = plt.plot(y)
# define an on_click function
def on_btn_click(btn):
# update the y attribute of line mark
line.y = np.random.randn(100).cumsum() # another random walk
# register the on_click function
update_btn.on_click(on_btn_click)
# stack button and figure using VBox
widgets.VBox([fig1, update_btn])
Explanation: Update the plot on a button click
End of explanation
import pandas as pd
# create a dummy time series for 5 dummy stock tickers
dates = pd.date_range(start="20180101", end="20181231")
n = len(dates)
tickers = list("ABCDE")
prices = pd.DataFrame(np.random.randn(n, 5).cumsum(axis=0), columns=tickers)
# create a dropdown menu for tickers
dropdown = widgets.Dropdown(description="Ticker", options=tickers)
# create figure for plotting time series
current_ticker = dropdown.value
fig_title_tmpl = '"{}" Time Series' # string template for title of the figure
fig2 = plt.figure(title=fig_title_tmpl.format(current_ticker))
fig2.layout.width = "900px"
time_series = plt.plot(dates, prices[current_ticker])
plt.xlabel("Date")
plt.ylabel("Price")
# 1. create a callback which updates the plot when dropdown item is selected
def update_plot(*args):
selected_ticker = dropdown.value
# update the y attribute of the mark by selecting
# the column from the price data frame
time_series.y = prices[selected_ticker]
# update the title of the figure
fig2.title = fig_title_tmpl.format(selected_ticker)
# 2. register the callback by using the 'observe' method
dropdown.observe(update_plot, "value")
# stack the dropdown and fig widgets using VBox
widgets.VBox([dropdown, fig2])
Explanation: Let's look at an example where we link a plot to a dropdown menu
End of explanation
# create two dropdown menus for X and Y attributes of scatter
x_dropdown = widgets.Dropdown(description="X", options=tickers, value="A")
y_dropdown = widgets.Dropdown(description="Y", options=tickers, value="B")
# create figure for plotting the scatter
x_ticker = x_dropdown.value
y_ticker = y_dropdown.value
# set up fig_margin to allow space to display color bar
fig_margin = dict(top=20, bottom=40, left=60, right=80)
fig3 = plt.figure(animation_duration=1000, fig_margin=fig_margin)
# custom axis options for color data
axes_options = {"color": {"tick_format": "%m/%y", "side": "right", "num_ticks": 5}}
scatter = plt.scatter(
x=prices[x_ticker],
y=prices[y_ticker],
color=dates, # represent chronology using color scale
stroke="black",
colors=["red"],
default_size=32,
axes_options=axes_options,
)
plt.xlabel(x_ticker)
plt.ylabel(y_ticker)
# 1. create a callback which updates the plot when dropdown item is selected
def update_scatter(*args):
x_ticker = x_dropdown.value
y_ticker = y_dropdown.value
# update the x and y attributes of the mark by selecting
# the column from the price data frame
with scatter.hold_sync():
scatter.x = prices[x_ticker]
scatter.y = prices[y_ticker]
# update the title of the figure
plt.xlabel(x_ticker)
plt.ylabel(y_ticker)
# 2. register the callback by using the 'observe' method
x_dropdown.observe(update_scatter, "value")
y_dropdown.observe(update_scatter, "value")
# stack the dropdown and fig widgets using VBox
widgets.VBox([widgets.HBox([x_dropdown, y_dropdown]), fig3])
Explanation: Let's now create a scatter plot where we select X and Y data from the two dropdown menus
End of explanation
funcs = dict(sin=np.sin, cos=np.cos, tan=np.tan, sinh=np.sinh, tanh=np.tanh)
dropdown = widgets.Dropdown(options=funcs, description="Function")
fig = plt.figure(title="sin(x)", animation_duration=1000)
# create x and y data attributes for the line chart
x = np.arange(-10, 10, 0.1)
y = np.sin(x)
line = plt.plot(x, y, "m")
def update_line(*args):
f = dropdown.value
fig.title = f"{f.__name__}(x)"
line.y = f(line.x)
dropdown.observe(update_line, "value")
widgets.VBox([dropdown, fig])
Explanation: In the example below, we'll look at plots of trigonometric functions
End of explanation
<END_TASK> |
28 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Convolutional Neural Network (LeNet)
Update of the example of CNN given on deeplearning.net. This notebook tries to explain all the code as if reader had no knowledge of Theano whatsoever.
Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Theano features
Step1: The shape of the 4D tensor corresponding to the weight matrix W is
Step2: Note that we use the same weight initialization formula as with the MLP. Weights are sampled randomly from a uniform distribution in the range [-1/fan-in, 1/fan-in], where fan-in is the number of inputs to a hidden unit. For MLPs, this was the number of units in the layer below. For CNNs however, we have to take into account the number of input feature maps and the size of the receptive fields.
Step3: RandomState.uniform(low=0.0, high=1.0, size=None)
Step4: We chose to have only 2 filters, so 2 bias terms need to be initialized.
Step5: nnet.conv2d
Step6: tensor.nnet.sigmoid(x)
Step7: 2. Testing ConvOp on an image
Step8: <img src="images/figure_3.png">
3. MaxPooling
Another important concept of CNNs is max-pooling, which is a form of non-linear down-sampling. Max-pooling partitions the input image into a set of non-overlapping rectangles and, for each such sub-region, outputs the maximum value.
Max-pooling is useful in vision for two reasons
Step11: theano.tensor.signal.downsample.max_pool_2d(input, ds, ignore_border=None, st=None, padding=(0, 0), mode='max')
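As a purely illustrative sketch (plain numpy, not the Theano op above), 2x2 max-pooling keeps only the largest value in each non-overlapping block:
import numpy as np
feature_map = np.arange(16).reshape(4, 4)
# group the 4x4 map into 2x2 blocks and keep each block's maximum
pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))   # -> [[ 5,  7], [13, 15]]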
Step13: Notice that when initializing the weight values, the fan-in is determined by the size of the receptive fields and the number of input feature maps.
B. Full LeNet model
Sparse, convolutional layers and max-pooling are at the heart of the LeNet family of models. While the exact details of the model will vary greatly, the figure below shows a graphical depiction of a LeNet model.
<img src="images/mylenet.png">
The lower-layers are composed of alternating convolution and max-pooling layers. The upper-layers however are fully-connected and correspond to a traditional MLP (hidden layer + logistic regression). The input to the first fully-connected layer is the set of all feature maps at the layer below.
From an implementation point of view, this means lower-layers operate on 4D tensors. These are then flattened to a 2D matrix of rasterized feature maps, to be compatible with our previous MLP implementation.
Using the LogisticRegression class defined in Classifying MNIST digits using Logistic Regression and the HiddenLayer class defined in Multilayer Perceptron, we can instantiate the network as follows.
1. HiddenLayer class
The original code for this class can be found here
Step18: The class uses tanh as activation function by default. This can be supported by the results presented in the scientific paper called Performance Analysis of Various Activation Functions in Generalized MLP Architectures of Neural Networks by Ahmet V Olgac and Bekir Karlik.
In this study, we have used five conventional differentiable and monotonic activation functions for
the evolution of MLP architecture along with Generalized Delta rule learning. These proposed
well-known and effective activation functions are Bi-polar sigmoid, Uni-polar sigmoid, Tanh, Conic
Section, and Radial Bases Function (RBF). Having compared their performances, simulation
results show that Tanh (hyperbolic tangent) function performs better recognition accuracy than
those of the other functions. In other words, the neural network computed good results when
“Tanh-Tanh” combination of activation functions was used for both neurons (or nodes) of hidden
and output layers.
The paper by Xavier can be found at
Step20: .negative_log_likelihood(y)
Step25: C. Implementation of Learning Rate Decay
Let's modify the code of evaluate_lenet5 function so it allows Learning Rate Decay.
Definition
Step27: D. Implementation of dropout
Dropout is a technique that was presented in G. Hinton's work "Dropout
Step28: 2. Creating dropout classes
We create child classes from HiddenLayer and LeNetConvPoolLayer so that they take into account dropout.
Step32: Note
Step34: After running the code for 50 epochs (237 minutes of computation) we get
Step35: 2. Testing the function on a single untrained LeNetConvPoolLayer
To test the function let's take back the example of the image of me when I was 5 years old. I will feed it to a LeNetConvPoolLayer, retrieve the weights, and display them.
Step37: <img src="images/filters2.png" width = 400 >
As the weights are randomly initialized, we of course see random patterns in each filter.
3. Displaying the learned filters after training
Let's now modify the code of evaluate_lenet5 so that it displays the filters after training. Full code can be found at code/filter_visualization.py. | <ASSISTANT_TASK:>
Python Code:
import cPickle
import gzip
import os
import sys
import timeit
import numpy
import theano
import theano.tensor as T
from theano.tensor.signal import downsample
from theano.tensor.nnet import conv
rng = numpy.random.RandomState(23455)
# instantiate 4D tensor for input
input = T.tensor4(name='input')
w_shp = (2, 3, 9, 9)
Explanation: Convolutional Neural Network (LeNet)
Update of the example of CNN given on deeplearning.net. This notebook tries to explain all the code as if reader had no knowledge of Theano whatsoever.
Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays efficiently. Theano features:
tight integration with NumPy – Use numpy.ndarray in Theano-compiled functions.
transparent use of a GPU – Perform data-intensive calculations up to 140x faster than with CPU.(float32 only)
efficient symbolic differentiation – Theano does your derivatives for function with one or many inputs.
speed and stability optimizations – Get the right answer for log(1+x) even when x is really tiny.
dynamic C code generation – Evaluate expressions faster.
extensive unit-testing and self-verification – Detect and diagnose many types of mistake.
Theano has been powering large-scale computationally intensive scientific investigations since 2007.
Outline of this document:
A. The tools to implement CNNs
1. The Convolution Operator
2. Testing ConvOp on an image
3. MaxPooling
4. Convolution + MaxPooling layer
B. Full LeNet model
1. HiddenLayer class
2. LogisticRegression class
3. Loading dataset
4. Putting it all together
C. Implementation of Learning Rate Decay
D. Implementation of dropout
1. Creating dropout function
2. Creating dropout classes
3. Rewriting evaluate_lenet5
E. Visualization of the convolutional filters
1. Visualization function
2. Testing the function on a single untrained LeNetConvPoolLayer
3. Displaying the learned filters after training
F. Automated creation of a CNN + MLP
A. The tools to implement CNNs
1. The Convolution Operator
ConvOp is the main workhorse for implementing a convolutional layer in Theano. ConvOp is used here through conv2d (imported above from theano.tensor.nnet as conv.conv2d), which takes two symbolic inputs:
a 4D tensor corresponding to a mini-batch of input images. The shape of the tensor is as follows: [mini-batch size, number of input feature maps, image height, image width].
a 4D tensor corresponding to the weight matrix W. The shape of the tensor is: [number of feature maps at layer m, number of feature maps at layer m-1, filter height, filter width]
Below is the Theano code for implementing a convolutional layer similar to the one of Figure 1. The input consists of 3 features maps (an RGB color image) of size 120x160. We use two convolutional filters with 9x9 receptive fields.
End of explanation
w_bound = numpy.sqrt(3 * 9 * 9)
Explanation: The shape of the 4D tensor corresponding to the weight matrix W is:
number of feature maps at layer 2: as we chose to have only 2 convolutional filters, we will have 2 resulting feature maps.
number of feature maps at layer 1: the original image being RGB, it has 3 layers on top of each other, so 3 feature maps.
filter height: the convolutional filters has 9x9 receptive fields, so height = 9 pixels
filter width: similarly, width = 9 pixels
End of explanation
W = theano.shared( numpy.asarray(
rng.uniform(
low=-1.0 / w_bound,
high=1.0 / w_bound,
size=w_shp),
dtype=input.dtype), name ='W')
Explanation: Note that we use the same weight initialization formula as with the MLP. Weights are sampled randomly from a uniform distribution in the range [-1/fan-in, 1/fan-in], where fan-in is the number of inputs to a hidden unit. For MLPs, this was the number of units in the layer below. For CNNs however, we have to take into account the number of input feature maps and the size of the receptive fields.
End of explanation
# initialize shared variable for bias (1D tensor) with random values
# IMPORTANT: biases are usually initialized to zero. However in this
# particular application, we simply apply the convolutional layer to
# an image without learning the parameters. We therefore initialize
# them to random values to "simulate" learning.
b_shp = (2,)
b = theano.shared(numpy.asarray(
rng.uniform(low=-.5, high=.5, size=b_shp),
dtype=input.dtype), name ='b')
Explanation: RandomState.uniform(low=0.0, high=1.0, size=None): draw samples from a uniform distribution: samples are uniformly distributed over the half-open interval [low, high) (includes low, but excludes high). In other words, any value within the given interval is equally likely to be drawn by uniform. source
theano.shared: the main benefits of using shared constructors are you can use them to initialise important variables with predefined numerical values (weight matrices in a neural network, for example). source
The distinction between Theano-managed memory and user-managed memory can be broken down by some Theano functions (e.g. shared, get_value and the constructors for In and Out) by using a borrow=True flag. This can make those methods faster (by avoiding copy operations) at the expense of risking subtle bugs in the overall program (by aliasing memory).
Take home message:
It is a safe practice (and a good idea) to use borrow=True in a shared variable constructor when the shared variable stands for a large object (in terms of memory footprint) and you do not want to create copies of it in memory.
It is not a reliable technique to use borrow=True to modify shared variables through side-effect, because with some devices (e.g. GPU devices) this technique will not work.
End of explanation
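For instance, a minimal sketch of the take-home message above (my own snippet, not part of the original tutorial): wrapping a large numpy array as a shared variable without forcing a copy.
big_values = numpy.zeros((1000, 1000), dtype=theano.config.floatX)
big_shared = theano.shared(big_values, borrow=True)  # may alias big_values instead of copying it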
# build symbolic expression that computes the convolution of input with filters in w
conv_out = conv.conv2d(input, W)
Explanation: We chose to have only 2 filters, so 2 bias terms need to be initialized.
End of explanation
# build symbolic expression to add bias and apply activation function, i.e. produce neural net layer output
output = T.nnet.sigmoid(conv_out + b.dimshuffle('x', 0, 'x', 'x'))
Explanation: nnet.conv2d: This is the standard operator for convolutional neural networks working with batches of multi-channel 2D images, available for CPU and GPU. source
End of explanation
# create theano function to compute filtered images
f = theano.function([input], output)
Explanation: tensor.nnet.sigmoid(x): returns the standard sigmoid nonlinearity applied to x.
Parameters: x - symbolic Tensor (or compatible)
Return type: same as x
Returns: element-wise sigmoid: $$sigmoid(x) = \frac{1}{1 + \exp(-x)}$$.
Note: in numpy and in Theano, the transpose of a vector is exactly the same vector! Use reshape or dimshuffle to turn your vector into a row or column matrix. source
End of explanation
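As a quick aside illustrating the reshape/dimshuffle note above (my own snippet, not from the tutorial): dimshuffle('x', ...) inserts a broadcastable dimension, turning a vector into a row or column matrix.
v = T.vector('v')
row = v.dimshuffle('x', 0)  # shape (1, n): a row matrix
col = v.dimshuffle(0, 'x')  # shape (n, 1): a column matrix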
import pylab
from PIL import Image
# open random image of dimensions 1936×2592
img = Image.open(open('images/profilepic4.jpg'))
img = numpy.asarray(img, dtype='float64') / 256. # divide by 256 to have RGB 0-1 scale and not 0 - 256
#put image in 4D tensor of shape (1, 3, height, width)
img_ = img.transpose(2, 0, 1).reshape(1, 3, 2592, 1936)
filtered_img = f(img_)
# plot original image and first and second components of output
pylab.subplot(1, 3, 1); pylab.axis('off'); pylab.imshow(img)
pylab.gray();
# recall that the convOp output (filtered image) is actually a "minibatch",
# of size 1 here, so we take index 0 in the first dimension:
pylab.subplot(1, 3, 2); pylab.axis('off'); pylab.imshow(filtered_img[0, 0, :, :])
pylab.subplot(1, 3, 3); pylab.axis('off'); pylab.imshow(filtered_img[0, 1, :, :])
pylab.show()
Explanation: 2. Testing ConvOp on an image
End of explanation
from theano.tensor.signal import downsample
input = T.dtensor4('input')
maxpool_shape = (2, 2)
pool_out = downsample.max_pool_2d(input, maxpool_shape, ignore_border=True)
g = theano.function([input],pool_out)
invals = numpy.random.RandomState(1).rand(3, 2, 5, 5)
print 'With ignore_border set to True:'
print 'invals[0, 0, :, :] =\n', invals[0, 0, :, :]
print 'output[0, 0, :, :] =\n', g(invals)[0, 0, :, :]
pool_out = downsample.max_pool_2d(input, maxpool_shape, ignore_border=False)
g = theano.function([input],pool_out)
print 'With ignore_border set to False:'
print 'invals[0, 0, :, :] =\n', invals[0, 0, :, :]
print 'output[0, 0, :, :] =\n', g(invals)[0, 0, :, :]
Explanation: <img src="images/figure_3.png">
3. MaxPooling
Another important concept of CNNs is max-pooling, which is a form of non-linear down-sampling. Max-pooling partitions the input image into a set of non-overlapping rectangles and, for each such sub-region, outputs the maximum value.
Max-pooling is useful in vision for two reasons:
By eliminating non-maximal values, it reduces computation for upper layers.
It provides a form of translation invariance. Imagine cascading a max-pooling layer with a convolutional layer. There are 8 directions in which one can translate the input image by a single pixel. If max-pooling is done over a 2x2 region, 3 out of these 8 possible configurations will produce exactly the same output at the convolutional layer. For max-pooling over a 3x3 window, this jumps to 5/8.
Since it provides additional robustness to position, max-pooling is a “smart” way of reducing the dimensionality of intermediate representations.
Max-pooling is done in Theano by way of theano.tensor.signal.downsample.max_pool_2d. This function takes as input an N dimensional tensor (where N >= 2) and a downscaling factor and performs max-pooling over the 2 trailing dimensions of the tensor.
End of explanation
class LeNetConvPoolLayer(object):
Pool Layer of a convolutional network
def __init__(self, rng, input, filter_shape, image_shape, poolsize=(2, 2)):
Allocate a LeNetConvPoolLayer with shared variable internal parameters.
:type rng: numpy.random.RandomState
:param rng: a random number generator used to initialize weights
:type input: theano.tensor.dtensor4
:param input: symbolic image tensor, of shape image_shape
:type filter_shape: tuple or list of length 4
:param filter_shape: (number of filters, num input feature maps,
filter height, filter width)
:type image_shape: tuple or list of length 4
:param image_shape: (batch size, num input feature maps,
image height, image width)
:type poolsize: tuple or list of length 2
:param poolsize: the downsampling (pooling) factor (#rows, #cols)
assert image_shape[1] == filter_shape[1]
# assert just checks if the number of feature maps is consistent between filter shape and image_shape
self.input = input
# there are "num input feature maps * filter height * filter width"
# inputs to each hidden unit
# reminder: Weights are sampled randomly from a uniform distribution
# in the range [-1/fan-in, 1/fan-in], where fan-in is the number of inputs to a hidden unit
fan_in = numpy.prod(filter_shape[1:])
# each unit in the lower layer receives a gradient from:
# "num output feature maps * filter height * filter width" /
# pooling size
fan_out = (filter_shape[0] * numpy.prod(filter_shape[2:]) /
numpy.prod(poolsize))
# initialize weights with random weights
W_bound = numpy.sqrt(6. / (fan_in + fan_out))
self.W = theano.shared(
numpy.asarray(
rng.uniform(low=-W_bound, high=W_bound, size=filter_shape),
dtype=theano.config.floatX
),
borrow=True # see above the def of theano.shared for explanation of borrow
)
# the bias is a 1D tensor -- one bias per output feature map
b_values = numpy.zeros((filter_shape[0],), dtype=theano.config.floatX)
self.b = theano.shared(value=b_values, borrow=True)
# convolve input feature maps with filters
conv_out = conv.conv2d(
input=input,
filters=self.W,
filter_shape=filter_shape,
image_shape=image_shape
)
# downsample each feature map individually, using maxpooling
pooled_out = downsample.max_pool_2d(
input=conv_out,
ds=poolsize,
ignore_border=True
)
# add the bias term. Since the bias is a vector (1D array), we first
# reshape it to a tensor of shape (1, n_filters, 1, 1). Each bias will
# thus be broadcasted across mini-batches and feature map
# width & height
self.output = T.tanh(pooled_out + self.b.dimshuffle('x', 0, 'x', 'x'))
# store parameters of this layer
self.params = [self.W, self.b]
# keep track of model input
self.input = input
Explanation: theano.tensor.signal.downsample.max_pool_2d(input, ds, ignore_border=None, st=None, padding=(0, 0), mode='max'): takes as input a N-D tensor, where N >= 2. It downscales the input image by the specified factor, by keeping only the maximum value of non-overlapping patches of size (ds[0],ds[1])
Parameters:
input (N-D theano tensor of input images) – Input images. Max pooling will be done over the 2 last dimensions.
ds (tuple of length 2) – Factor by which to downscale (vertical ds, horizontal ds). (2,2) will halve the image in each dimension.
ignore_border (bool (default None, will print a warning and set to False)) – When True, (5,5) input with ds=(2,2) will generate a (2,2) output. (3,3) otherwise.
st (tuple of lenght 2) – Stride size, which is the number of shifts over rows/cols to get the next pool region. If st is None, it is considered equal to ds (no overlap on pooling regions).
padding (tuple of two ints) – (pad_h, pad_w), pad zeros to extend beyond four borders of the images, pad_h is the size of the top and bottom margins, and pad_w is the size of the left and right margins.
mode ({‘max’, ‘sum’, ‘average_inc_pad’, ‘average_exc_pad’}) – Operation executed on each window. max and sum always exclude the padding in the computation. average gives you the choice to include or exclude it.
source
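For instance, a hypothetical call (not used in the LeNet model below) that pools over 3x3 windows moved with a stride of 1 pixel, giving overlapping pooling regions:
pool_out_overlap = downsample.max_pool_2d(input, (3, 3), ignore_border=True, st=(1, 1))
h = theano.function([input], pool_out_overlap)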
4. Convolution + MaxPooling layer
We now have all we need to implement a LeNet model in Theano. We start with the LeNetConvPoolLayer class, which implements a {convolution + max-pooling} layer.
End of explanation
class HiddenLayer(object):
def __init__(self, rng, input, n_in, n_out, W=None, b=None,
activation=T.tanh):
Typical hidden layer of a MLP: units are fully-connected and have
sigmoidal activation function. Weight matrix W is of shape (n_in,n_out)
and the bias vector b is of shape (n_out,).
NOTE : The nonlinearity used here is tanh
Hidden unit activation is given by: tanh(dot(input,W) + b)
:type rng: numpy.random.RandomState
:param rng: a random number generator used to initialize weights
:type input: theano.tensor.dmatrix
:param input: a symbolic tensor of shape (n_examples, n_in)
:type n_in: int
:param n_in: dimensionality of input
:type n_out: int
:param n_out: number of hidden units
:type activation: theano.Op or function
:param activation: Non linearity to be applied in the hidden
layer
self.input = input
# `W` is initialized with `W_values` which is uniformely sampled
# from sqrt(-6./(n_in+n_hidden)) and sqrt(6./(n_in+n_hidden))
# for tanh activation function
# the output of uniform if converted using asarray to dtype
# theano.config.floatX so that the code is runable on GPU
# Note : optimal initialization of weights is dependent on the
# activation function used (among other things).
# For example, results presented in [Xavier10] suggest that you
# should use 4 times larger initial weights for sigmoid
# compared to tanh
# We have no info for other function, so we use the same as
# tanh.
if W is None:
W_values = numpy.asarray(
rng.uniform(
low=-numpy.sqrt(6. / (n_in + n_out)),
high=numpy.sqrt(6. / (n_in + n_out)),
size=(n_in, n_out)
),
dtype=theano.config.floatX
)
if activation == theano.tensor.nnet.sigmoid:
W_values *= 4
W = theano.shared(value=W_values, name='W', borrow=True)
if b is None:
b_values = numpy.zeros((n_out,), dtype=theano.config.floatX)
b = theano.shared(value=b_values, name='b', borrow=True)
self.W = W
self.b = b
lin_output = T.dot(input, self.W) + self.b
self.output = (
lin_output if activation is None
else activation(lin_output)
)
# parameters of the model
self.params = [self.W, self.b]
Explanation: Notice that when initializing the weight values, the fan-in is determined by the size of the receptive fields and the number of input feature maps.
B. Full LeNet model
Sparse, convolutional layers and max-pooling are at the heart of the LeNet family of models. While the exact details of the model will vary greatly, the figure below shows a graphical depiction of a LeNet model.
<img src="images/mylenet.png">
The lower-layers are composed to alternating convolution and max-pooling layers. The upper-layers however are fully-connected and correspond to a traditional MLP (hidden layer + logistic regression). The input to the first fully-connected layer is the set of all features maps at the layer below.
From an implementation point of view, this means lower-layers operate on 4D tensors. These are then flattened to a 2D matrix of rasterized feature maps, to be compatible with our previous MLP implementation.
Using the LogisticRegression class defined in Classifying MNIST digits using Logistic Regression and the HiddenLayer class defined in Multilayer Perceptron, we can instantiate the network as follows.
1. HiddenLayer class
The original code for this class can be found here: source
End of explanation
class LogisticRegression(object):
Multi-class Logistic Regression Class
The logistic regression is fully described by a weight matrix :math:`W`
and bias vector :math:`b`. Classification is done by projecting data
points onto a set of hyperplanes, the distance to which is used to
determine a class membership probability.
def __init__(self, input, n_in, n_out):
Initialize the parameters of the logistic regression
:type input: theano.tensor.TensorType
:param input: symbolic variable that describes the input of the
architecture (one minibatch)
:type n_in: int
:param n_in: number of input units, the dimension of the space in
which the datapoints lie
:type n_out: int
:param n_out: number of output units, the dimension of the space in
which the labels lie
# initialize with 0 the weights W as a matrix of shape (n_in, n_out)
self.W = theano.shared(
value=numpy.zeros(
(n_in, n_out),
dtype=theano.config.floatX
),
name='W',
borrow=True
)
# initialize the biases b as a vector of n_out 0s
self.b = theano.shared(
value=numpy.zeros(
(n_out,),
dtype=theano.config.floatX
),
name='b',
borrow=True
)
# symbolic expression for computing the matrix of class-membership
# probabilities
# Where:
# W is a matrix where column-k represent the separation hyperplane for
# class-k
# x is a matrix where row-j represents input training sample-j
# b is a vector where element-k represent the free parameter of
# hyperplane-k
self.p_y_given_x = T.nnet.softmax(T.dot(input, self.W) + self.b)
# symbolic description of how to compute prediction as class whose
# probability is maximal
self.y_pred = T.argmax(self.p_y_given_x, axis=1)
# parameters of the model
self.params = [self.W, self.b]
# keep track of model input
self.input = input
def negative_log_likelihood(self, y):
Return the mean of the negative log-likelihood of the prediction
of this model under a given target distribution.
.. math::
\frac{1}{|\mathcal{D}|} \mathcal{L} (\theta=\{W,b\}, \mathcal{D}) =
\frac{1}{|\mathcal{D}|} \sum_{i=0}^{|\mathcal{D}|}
\log(P(Y=y^{(i)}|x^{(i)}, W,b)) \\
\ell (\theta=\{W,b\}, \mathcal{D})
:type y: theano.tensor.TensorType
:param y: corresponds to a vector that gives for each example the
correct label
Note: we use the mean instead of the sum so that
the learning rate is less dependent on the batch size
# y.shape[0] is (symbolically) the number of rows in y, i.e.,
# number of examples (call it n) in the minibatch
# T.arange(y.shape[0]) is a symbolic vector which will contain
# [0,1,2,... n-1] T.log(self.p_y_given_x) is a matrix of
# Log-Probabilities (call it LP) with one row per example and
# one column per class LP[T.arange(y.shape[0]),y] is a vector
# v containing [LP[0,y[0]], LP[1,y[1]], LP[2,y[2]], ...,
# LP[n-1,y[n-1]]] and T.mean(LP[T.arange(y.shape[0]),y]) is
# the mean (across minibatch examples) of the elements in v,
# i.e., the mean log-likelihood across the minibatch.
return -T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y])
def errors(self, y):
Return a float representing the number of errors in the minibatch
over the total number of examples of the minibatch ; zero one
loss over the size of the minibatch
:type y: theano.tensor.TensorType
:param y: corresponds to a vector that gives for each example the
correct label
# check if y has same dimension of y_pred
if y.ndim != self.y_pred.ndim:
raise TypeError(
'y should have the same shape as self.y_pred',
('y', y.type, 'y_pred', self.y_pred.type)
)
# check if y is of the correct datatype
if y.dtype.startswith('int'):
# the T.neq operator returns a vector of 0s and 1s, where 1
# represents a mistake in prediction
return T.mean(T.neq(self.y_pred, y))
else:
raise NotImplementedError()
Explanation: The class uses tanh as its activation function by default. This can be supported by the results presented in the paper called Performance Analysis of Various Activation Functions in Generalized MLP Architectures of Neural Networks by Ahmet V Olgac and Bekir Karlik.
In this study, we have used five conventional differentiable and monotonic activation functions for
the evolution of MLP architecture along with Generalized Delta rule learning. These proposed
well-known and effective activation functions are Bi-polar sigmoid, Uni-polar sigmoid, Tanh, Conic
Section, and Radial Bases Function (RBF). Having compared their performances, simulation
results show that Tanh (hyperbolic tangent) function performs better recognition accuracy than
those of the other functions. In other words, the neural network computed good results when
“Tanh-Tanh” combination of activation functions was used for both neurons (or nodes) of hidden
and output layers.
The paper by Xavier can be found at:
2. LogisticRegression class
The original code for this class can be found here: source
End of explanation
def evaluate_lenet5(learning_rate=0.1, n_epochs=200,
dataset='mnist.pkl.gz',
nkerns=[20, 50], batch_size=500):
Demonstrates lenet on MNIST dataset
:type learning_rate: float
:param learning_rate: learning rate used (factor for the stochastic
gradient)
:type n_epochs: int
:param n_epochs: maximal number of epochs to run the optimizer
:type dataset: string
:param dataset: path to the dataset used for training /testing (MNIST here)
:type nkerns: list of ints
:param nkerns: number of kernels on each layer (so 20 convolutional filters, and then 50 activation units)
rng = numpy.random.RandomState(23455)
datasets = load_data(dataset)
train_set_x, train_set_y = datasets[0]
valid_set_x, valid_set_y = datasets[1]
test_set_x, test_set_y = datasets[2]
# compute number of minibatches for training, validation and testing
n_train_batches = train_set_x.get_value(borrow=True).shape[0]
n_valid_batches = valid_set_x.get_value(borrow=True).shape[0]
n_test_batches = test_set_x.get_value(borrow=True).shape[0]
n_train_batches /= batch_size
n_valid_batches /= batch_size
n_test_batches /= batch_size
# allocate symbolic variables for the data
index = T.lscalar() # index to a [mini]batch
# start-snippet-1
x = T.matrix('x') # the data is presented as rasterized images
y = T.ivector('y') # the labels are presented as 1D vector of
# [int] labels
######################
# BUILD ACTUAL MODEL #
######################
print '... building the model'
# Reshape matrix of rasterized images of shape (batch_size, 28 * 28)
# to a 4D tensor, compatible with our LeNetConvPoolLayer
# (28, 28) is the size of MNIST images.
layer0_input = x.reshape((batch_size, 1, 28, 28))
# Construct the first convolutional pooling layer:
# filtering reduces the image size to (28-5+1 , 28-5+1) = (24, 24)
# maxpooling reduces this further to (24/2, 24/2) = (12, 12)
# 4D output tensor is thus of shape (batch_size, nkerns[0], 12, 12)
layer0 = LeNetConvPoolLayer(
rng,
input=layer0_input,
image_shape=(batch_size, 1, 28, 28),
filter_shape=(nkerns[0], 1, 5, 5),
poolsize=(2, 2)
)
'''
Reminder of LeNetConvPoolLayer input parameters and types
:type rng: numpy.random.RandomState
:param rng: a random number generator used to initialize weights
:type input: theano.tensor.dtensor4
:param input: symbolic image tensor, of shape image_shape
:type filter_shape: tuple or list of length 4
:param filter_shape: (number of filters, num input feature maps,
filter height, filter width)
:type image_shape: tuple or list of length 4
:param image_shape: (batch size, num input feature maps,
image height, image width)
:type poolsize: tuple or list of length 2
:param poolsize: the downsampling (pooling) factor (#rows, #cols)
'''
# Construct the second convolutional pooling layer
# filtering reduces the image size to (12-5+1, 12-5+1) = (8, 8)
# maxpooling reduces this further to (8/2, 8/2) = (4, 4)
# 4D output tensor is thus of shape (batch_size, nkerns[1], 4, 4)
layer1 = LeNetConvPoolLayer(
rng,
input=layer0.output,
image_shape=(batch_size, nkerns[0], 12, 12),
filter_shape=(nkerns[1], nkerns[0], 5, 5),
poolsize=(2, 2)
)
# the HiddenLayer being fully-connected, it operates on 2D matrices of
# shape (batch_size, num_pixels) (i.e matrix of rasterized images).
# This will generate a matrix of shape (batch_size, nkerns[1] * 4 * 4),
# or (500, 50 * 4 * 4) = (500, 800) with the default values.
layer2_input = layer1.output.flatten(2)
# construct a fully-connected sigmoidal layer
layer2 = HiddenLayer(
rng,
input=layer2_input,
n_in=nkerns[1] * 4 * 4,
n_out=500,
activation=T.tanh
)
# classify the values of the fully-connected sigmoidal layer
layer3 = LogisticRegression(input=layer2.output, n_in=500, n_out=10)
# the cost we minimize during training is the NLL of the model
cost = layer3.negative_log_likelihood(y)
# create a function to compute the mistakes that are made by the model
test_model = theano.function(
[index],
layer3.errors(y),
givens={
x: test_set_x[index * batch_size: (index + 1) * batch_size],
y: test_set_y[index * batch_size: (index + 1) * batch_size]
}
)
validate_model = theano.function(
[index],
layer3.errors(y),
givens={
x: valid_set_x[index * batch_size: (index + 1) * batch_size],
y: valid_set_y[index * batch_size: (index + 1) * batch_size]
}
)
# create a list of all model parameters to be fit by gradient descent
params = layer3.params + layer2.params + layer1.params + layer0.params
# create a list of gradients for all model parameters
grads = T.grad(cost, params)
# train_model is a function that updates the model parameters by
# SGD Since this model has many parameters, it would be tedious to
# manually create an update rule for each model parameter. We thus
# create the updates list by automatically looping over all
# (params[i], grads[i]) pairs.
updates = [
(param_i, param_i - learning_rate * grad_i)
for param_i, grad_i in zip(params, grads)
]
train_model = theano.function(
[index],
cost,
updates=updates,
givens={
x: train_set_x[index * batch_size: (index + 1) * batch_size],
y: train_set_y[index * batch_size: (index + 1) * batch_size]
}
)
# end-snippet-1
###############
# TRAIN MODEL #
###############
print '... training'
# early-stopping parameters
patience = 10000 # look as this many examples regardless
patience_increase = 2 # wait this much longer when a new best is
# found
improvement_threshold = 0.995 # a relative improvement of this much is
# considered significant
validation_frequency = min(n_train_batches, patience / 2)
# go through this many
# minibatche before checking the network
# on the validation set; in this case we
# check every epoch
best_validation_loss = numpy.inf
best_iter = 0
test_score = 0.
start_time = timeit.default_timer()
epoch = 0
done_looping = False
while (epoch < n_epochs) and (not done_looping):
epoch = epoch + 1
for minibatch_index in xrange(n_train_batches):
# This function is very similar to range(), but returns an xrange object instead of a list.
# This is an opaque sequence type which yields the same values as the corresponding list,
# without actually storing them all simultaneously. The advantage of xrange() over range()
# is minimal (since xrange() still has to create the values when asked for them) except when a
# very large range is used on a memory-starved machine or when all of the range’s elements
# are never used (such as when the loop is usually terminated with break).
# For more information on xrange objects, see XRange Type and Sequence Types — str,
# unicode, list, tuple, bytearray, buffer, xrange
iter = (epoch - 1) * n_train_batches + minibatch_index
# for epoch = 1 (first value while entering the "while" loop; iter = 0 * n_train_batches + minibtach_index
# so iter = 0. This will call train_model over the index of train_set_x[0:500] and train_set_y[0:500].
# the (epoch -1) * n_train_batches keep track of the iteration number while looping over and over on
# the train set.
if iter % 100 == 0:
print 'training @ iter = ', iter
cost_ij = train_model(minibatch_index)
# Only at this moment all the symbolic expression that were called during "Building the model" are
# called with real values replacing the symbolic tensors. That is how theano works.
if (iter + 1) % validation_frequency == 0:
# compute zero-one loss on validation set
validation_losses = [validate_model(i) for i
in xrange(n_valid_batches)]
this_validation_loss = numpy.mean(validation_losses)
print('epoch %i, minibatch %i/%i, validation error %f %%' %
(epoch, minibatch_index + 1, n_train_batches,
this_validation_loss * 100.))
# if we got the best validation score until now
if this_validation_loss < best_validation_loss:
#improve patience if loss improvement is good enough
if this_validation_loss < best_validation_loss * \
improvement_threshold:
patience = max(patience, iter * patience_increase)
# save best validation score and iteration number
best_validation_loss = this_validation_loss
best_iter = iter
# test it on the test set
test_losses = [
test_model(i)
for i in xrange(n_test_batches)
]
test_score = numpy.mean(test_losses)
print((' epoch %i, minibatch %i/%i, test error of '
'best model %f %%') %
(epoch, minibatch_index + 1, n_train_batches,
test_score * 100.))
if patience <= iter:
done_looping = True
break
end_time = timeit.default_timer()
print('Optimization complete.')
print('Best validation score of %f %% obtained at iteration %i, '
'with test performance %f %%' %
(best_validation_loss * 100., best_iter + 1, test_score * 100.))
print >> sys.stderr, ('The code for file ' +
os.path.split(__file__)[1] +
' ran for %.2fm' % ((end_time - start_time) / 60.))
Explanation: .negative_log_likelihood(y): this method returns the mean of the negative log-likelihood of the prediction of this model under a given target distribution:
$$\mathrm{NLL}(\theta=\{W,b\}, \mathcal{D}) = -\frac{1}{|\mathcal{D}|} \sum_{i=0}^{|\mathcal{D}|} \log P(Y=y^{(i)} \mid x^{(i)}, W, b)$$
type y: theano.tensor.TensorType
param y: corresponds to a vector that gives for each example the correct label
Note: we use the mean instead of the sum so that the learning rate is less dependent on the batch size.
3. Loading dataset
Original code can be found here. This piece of code loads the dataset and partitions it into: train set, validation set and test set.
4. Putting it all together
End of explanation
def evaluate_lenet5_ldr(learning_rate=0.1, learning_rate_decay = 0.98, n_epochs=200,
dataset='mnist.pkl.gz',
nkerns=[20, 50], batch_size=500):
:type learning_rate_decay: float
:param learning_rate_decay: learning rate decay used
rng = numpy.random.RandomState(23455)
...
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#
# Theano function to decay the learning rate, this is separate from the
# training function because we only want to do this once each epoch instead
# of after each minibatch.
decay_learning_rate = theano.function(inputs=[], outputs=learning_rate,
updates={learning_rate: learning_rate * learning_rate_decay})
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#
###############
# TRAIN MODEL #
###############
...
while (epoch < n_epochs) and (not done_looping):
epoch = epoch + 1
for minibatch_index in xrange(n_train_batches):
iter = (epoch - 1) * n_train_batches + minibatch_index
if iter % 100 == 0:
print 'training @ iter = ', iter
cost_ij = train_model(minibatch_index)
if (iter + 1) % validation_frequency == 0:
# compute zero-one loss on validation set
validation_losses = [validate_model(i) for i
in xrange(n_valid_batches)]
this_validation_loss = numpy.mean(validation_losses)
print('epoch %i, minibatch %i/%i, validation error %f %%' %
(epoch, minibatch_index + 1, n_train_batches,
this_validation_loss * 100.))
# if we got the best validation score until now
if this_validation_loss < best_validation_loss:
#improve patience if loss improvement is good enough
if this_validation_loss < best_validation_loss * \
improvement_threshold:
patience = max(patience, iter * patience_increase)
# save best validation score and iteration number
best_validation_loss = this_validation_loss
best_iter = iter
# test it on the test set
test_losses = [
test_model(i)
for i in xrange(n_test_batches)
]
test_score = numpy.mean(test_losses)
print((' epoch %i, minibatch %i/%i, test error of '
'best model %f %%') %
(epoch, minibatch_index + 1, n_train_batches,
test_score * 100.))
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
new_learning_rate = decay_learning_rate()
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
if patience <= iter:
done_looping = True
break
...
Explanation: C. Implementation of Learning Rate Decay
Let's modify the evaluate_lenet5 function so that it supports learning rate decay.
Definition: the learning rate is the step size of the parameter updates during gradient descent. It is typically between 0.1 and 0.01. If it is too large, gradient descent can overshoot the minimum and diverge; if it is too small, the optimization is very slow and may get stuck in a local minimum. Learning rate decay lets the learning rate start large and then slowly decrease as the optimization approaches a minimum:
initial learning rate: $$\alpha = \alpha_0$$
learning rate decay factor: $$\alpha_d$$
after each epoch, update: $$\alpha \leftarrow \alpha_d \, \alpha$$
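With the default values used below ($\alpha_0 = 0.1$, $\alpha_d = 0.98$), the learning rate after $n$ epochs is $\alpha_0 \, \alpha_d^{\,n}$; after 50 epochs, for example, it has decayed to roughly $0.1 \times 0.98^{50} \approx 0.036$.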
The full code can be found at code/convolutional_mlp_ldr.py
End of explanation
def _dropout_from_layer(rng, layer, p):
p is the probablity of dropping a unit
srng = theano.tensor.shared_randomstreams.RandomStreams(
rng.randint(999999))
# p=1-p because 1's indicate keep and p is probability of dropping
mask = srng.binomial(n=1, p=1-p, size=layer.shape)
# The cast is important because
# int * float32 = float64 which pulls things off the gpu
output = layer * T.cast(mask, theano.config.floatX)
return output
Explanation: D. Implementation of dropout
Dropout is a technique that was presented in G. Hinton's work "Dropout: A simple Way to Prevent Neural Networks from Overfitting". As can be read in the abstract:
Deep neural nets with a large number of parameters are very powerful machine learning systems. However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different “thinned” networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights
<img src="images/hinton1.png" width=650>
Dropping out 20% of the input units and 50% of the hidden units was often found to be optimal
The implementation presented here is greatly inspired by GitHub user mdenil's work on dropout, who implemented Hinton's dropout on a Multi-Layer Perceptron (base code from deeplearning.net).
1. Creating dropout function
This function takes a layer (either a layer of units in an MLP or a layer of feature maps in a CNN) and drops units from the layer with probability p (or, in the case of a CNN, pixels from the feature maps with probability p).
End of explanation
class DropoutHiddenLayer(HiddenLayer):
def __init__(self, rng, input, n_in, n_out,
activation, dropout_rate, W=None, b=None):
super(DropoutHiddenLayer, self).__init__(
rng=rng, input=input, n_in=n_in, n_out=n_out, W=W, b=b,
activation=activation)
self.output = _dropout_from_layer(rng, self.output, p=dropout_rate)
class DropoutLeNetConvPoolLayer(LeNetConvPoolLayer):
def __init__(self, rng, input, filter_shape, image_shape, poolsize,
dropout_rate, W=None, b=None):
super(DropoutLeNetConvPoolLayer, self).__init__(
rng=rng, input=input, filter_shape=filter_shape, image_shape=image_shape,
poolsize=poolsize, W=W, b=b)
self.output = _dropout_from_layer(rng, self.output, p=dropout_rate)
Explanation: 2. Creating dropout classes
We create child classes from HiddenLayer and LeNetConvPoolLayer so that they take into account dropout.
End of explanation
def evaluate_lenet5(initial_learning_rate=0.1, learning_rate_decay = 1,
dropout_rates = [0.2, 0.2, 0.2, 0.5], n_epochs=200,
dataset='mnist.pkl.gz',
nkerns=[20, 50], batch_size=500):
:type dropout_rates: list of float
:param dropout_rates: dropout rate used for each layer (input layer,
1st filtered layer, 2nd filtered layer, fully connected layer)
...
######################
# BUILD ACTUAL MODEL #
######################
print '... building the model'
# Reshape matrix of rasterized images of shape (batch_size, 28 * 28)
# to a 4D tensor, compatible with our LeNetConvPoolLayer
# (28, 28) is the size of MNIST images.
layer0_input = x.reshape((batch_size, 1, 28, 28))
# Dropping out pixels from original image randomly, with a probability of dropping
    # low enough not to drop too much information (20% was found to be ideal)
layer0_input_dropout = _dropout_from_layer(rng, layer0_input, dropout_rates[0])
# Construct the first convolutional pooling layer:
# filtering reduces the image size to (28-5+1 , 28-5+1) = (24, 24)
# maxpooling reduces this further to (24/2, 24/2) = (12, 12)
# 4D output tensor is thus of shape (batch_size, nkerns[0], 12, 12)
layer0_dropout = DropoutLeNetConvPoolLayer(
rng,
input=layer0_input_dropout,
image_shape=(batch_size, 1, 28, 28),
filter_shape=(nkerns[0], 1, 5, 5),
poolsize=(2, 2),
dropout_rate= dropout_rates[1]
)
    # Creating in parallel a normal LeNetConvPoolLayer that shares the same
    # W and b as the dropout layer, with W scaled by (1 - dropout rate of its input).
layer0 = LeNetConvPoolLayer(
rng,
input=layer0_input,
image_shape=(batch_size, 1, 28, 28),
filter_shape=(nkerns[0], 1, 5, 5),
poolsize=(2, 2),
W=layer0_dropout.W * (1 - dropout_rates[0]),
b=layer0_dropout.b
)
# Construct the second convolutional pooling layer
# filtering reduces the image size to (12-5+1, 12-5+1) = (8, 8)
# maxpooling reduces this further to (8/2, 8/2) = (4, 4)
# 4D output tensor is thus of shape (batch_size, nkerns[1], 4, 4)
layer1_dropout = DropoutLeNetConvPoolLayer(
rng,
input=layer0_dropout.output,
image_shape=(batch_size, nkerns[0], 12, 12),
filter_shape=(nkerns[1], nkerns[0], 5, 5),
poolsize=(2, 2),
dropout_rate = dropout_rates[2]
)
layer1 = LeNetConvPoolLayer(
rng,
input=layer0.output,
image_shape=(batch_size, nkerns[0], 12, 12),
filter_shape=(nkerns[1], nkerns[0], 5, 5),
poolsize=(2, 2),
W=layer1_dropout.W * (1 - dropout_rates[1]),
b=layer1_dropout.b
)
# the HiddenLayer being fully-connected, it operates on 2D matrices of
# shape (batch_size, num_pixels) (i.e matrix of rasterized images).
# This will generate a matrix of shape (batch_size, nkerns[1] * 4 * 4),
# or (500, 50 * 4 * 4) = (500, 800) with the default values.
layer2_dropout_input = layer1_dropout.output.flatten(2)
layer2_input = layer1.output.flatten(2)
# construct a fully-connected sigmoidal layer
layer2_dropout = DropoutHiddenLayer(
rng,
input=layer2_dropout_input,
n_in=nkerns[1] * 4 * 4,
n_out=500,
activation=T.tanh,
dropout_rate = dropout_rates[3]
)
layer2 = HiddenLayer(
rng,
input=layer2_input,
n_in=nkerns[1] * 4 * 4,
n_out=500,
activation=T.tanh,
W=layer2_dropout.W * (1 - dropout_rates[2]),
b=layer2_dropout.b
)
# classify the values of the fully-connected sigmoidal layer
layer3_dropout = LogisticRegression(
input = layer2_dropout.output,
n_in = 500, n_out = 10)
layer3 = LogisticRegression(
input=layer2.output,
n_in=500, n_out=10,
W=layer3_dropout.W * (1 - dropout_rates[-1]),
b=layer3_dropout.b
)
# the cost we minimize during training is the NLL of the model
cost = layer3.negative_log_likelihood(y)
dropout_cost = layer3_dropout.negative_log_likelihood(y)
# create a function to compute the mistakes that are made by the model
test_model = theano.function(
[index],
layer3.errors(y),
givens={
x: test_set_x[index * batch_size: (index + 1) * batch_size],
y: test_set_y[index * batch_size: (index + 1) * batch_size]
}
)
validate_model = theano.function(
[index],
layer3.errors(y),
givens={
x: valid_set_x[index * batch_size: (index + 1) * batch_size],
y: valid_set_y[index * batch_size: (index + 1) * batch_size]
}
)
# create a list of all model parameters to be fit by gradient descent
params = layer3_dropout.params + layer2_dropout.params + layer1_dropout.params + layer0_dropout.params
# create a list of gradients for all model parameters
grads = T.grad(dropout_cost, params)
# train_model is a function that updates the model parameters by SGD
updates = [
(param_i, param_i - learning_rate * grad_i)
for param_i, grad_i in zip(params, grads)
]
train_model = theano.function(
[index],
dropout_cost,
updates=updates,
givens={
x: train_set_x[index * batch_size: (index + 1) * batch_size],
y: train_set_y[index * batch_size: (index + 1) * batch_size]
}
)
...
Explanation: Note: we drop out pixels after pooling.
3. Rewriting evaluate_lenet5
Each time a layer is instantiated, two layers are actually created in parallel: the dropout layer, which drops out some of its units with probability p, and an associated layer sharing the same coefficients W and b, except that W is scaled by the retention probability (1 - p) of its inputs.
Again, full code can be found at code/convolutional_mlp_dropout.py
End of explanation
import pylab
from PIL import Image
def display_filter(W, n_cols = 5):
:type W: numpy_nd_array
:param W: parameter W of a convolutional + max pooling layer
    :type n_cols: int
    :param n_cols: number of filters displayed per row of the final figure
W_shape = W.shape
n_filters = W_shape[0]
#param filter_shape: (number of filters, num input feature maps, filter height, filter width)
filter_height = W_shape[2]
filter_width = W_shape[3]
    n_lines = int(numpy.ceil(n_filters / float(n_cols)))  # rows needed even when n_filters is not a multiple of n_cols
    for n in range(n_filters):
        Wn = W[n, 0, :, :]
        Wn = Wn / Wn.max()  # scale the filter to a 0-1 gray scale
        pylab.subplot(n_lines, n_cols, n + 1); pylab.axis('off'); pylab.imshow(Wn, cmap=pylab.gray())
pylab.show()
Explanation: After running the code for 50 epochs (237 minutes of computation) we get:
Best validation score of 1.560000 % obtained at iteration 5000, with test performance 1.450000 %
Full result file at results/dropout_good_percent.txt
E. Visualization of the convolutional filters
Read this article on understanding convolutional neural networks. Many methods of visualizing what convolutional networks have learned are described there. We will retain the first one, as it is the most straightforward to implement:
Visualizing the activations and first-layer weights:
Layer Activations: the most straight-forward visualization technique is to show the activations of the network during the forward pass. For ReLU networks, the activations usually start out looking relatively blobby and dense, but as the training progresses the activations usually become more sparse and localized. One dangerous pitfall that can be easily noticed with this visualization is that some activation maps may be all zero for many different inputs, which can indicate dead filters, and can be a symptom of high learning rates.
Conv/FC Filters: The second common strategy is to visualize the weights. These are usually most interpretable on the first CONV layer which is looking directly at the raw pixel data, but it is possible to also show the filter weights deeper in the network. The weights are useful to visualize because well-trained networks usually display nice and smooth filters without any noisy patterns. Noisy patterns can be an indicator of a network that hasn't been trained for long enough, or possibly a very low regularization strength that may have led to overfitting.
I would like to visualize the filters, so I implement the second strategy (visualizing the filter weights) to see the first 20 filters.
M. D. Zeiler wrote an interesting paper about Deconvolutional Networks (DeConvNet) for visualizing and understanding convolutional filters. The only code I found for this subject can be found here.
1. Visualization function
Let's create a function that displays the filters' weights when fed the weight parameter W.
End of explanation
rng = numpy.random.RandomState(1234)
img = Image.open(open('images/profilepic4.jpg'))
img = numpy.asarray(img, dtype='float64') / 256. # divide by 256 to have RGB 0-1 scale and not 0 - 256
img_ = img.transpose(2, 0, 1).reshape(1, 3, 2592, 1936)
input = img_
filter_shape = [20,3,12,12]
image_shape = [1,3,2592,1936]
poolsize = (2, 2)
layer_test = LeNetConvPoolLayer(rng, input, filter_shape, image_shape, poolsize)
f = theano.function([], layer_test.params)
W = f[0]
display_filter(W)
Explanation: 2. Testing the function on a single untrained LeNetConvPoolLayer
To test the function let's take back the example of the image of me when I was 5 years old. I will feed it to a LeNetConvPoolLayer, retrieve the weights, and display them.
End of explanation
def evaluate_lenet5(initial_learning_rate=0.1, learning_rate_decay = 1,
dropout_rates = [0.2, 0.2, 0.2, 0.5], n_epochs=200,
dataset='mnist.pkl.gz', display_filters = True,
nkerns=[20, 50], batch_size=500):
:type display_filters: Bool
:param display_filters: True if we want to display the learned filters after training
we skip to the very end of the code, after training is done
if display_filters:
# Retrieving the filters from first and second layer
first_convlayer_params = theano.function([], layer0_dropout.params)
second_convlayer_params = theano.function([], layer1_dropout.params)
W0 = first_convlayer_params[0]
W1 = second_convlayer_params[0]
# Display filters from first layer (20 filters)
display_filter(W0)
# Display filters from second layer (50 filters)
display_filter(W1)
Explanation: <img src="images/filters2.png" width = 400 >
As the weights are randomly initialized, we of course see random patterns in each filter.
3. Displaying the learned filters after training
Let's now modify the code of evaluate_lenet5 so that it displays the filters after training. Full code can be found at code/filter_visualization.py.
End of explanation
<END_TASK> |
29 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
High level API
We recommend using tf.keras as a high-level API for building neural networks. That said, most TensorFlow APIs are usable with eager execution.
Layers
Step1: The full list of pre-existing layers can be seen in the documentation. It includes Dense (a fully-connected layer),
Conv2D, LSTM, BatchNormalization, Dropout, and many others.
Step2: Implementing custom layers
The best way to implement your own layer is extending the tf.keras.Layer class and implementing
Step3: Note that you don't have to wait until build is called to create your variables, you can also create them in __init__.
Overall code is easier to read and maintain if it uses standard layers whenever possible, as other readers will be familiar with the behavior of standard layers. If you want to use a layer which is not present in tf.keras.layers or tf.contrib.layers, consider filing a github issue or, even better, sending us a pull request!
Models
Step4: Much of the time, however, models which compose many layers simply call one layer after the other. This can be done in very little code using tf.keras.Sequential | <ASSISTANT_TASK:>
Python Code:
# In the tf.keras.layers package, layers are objects. To construct a layer,
# simply construct the object. Most layers take as a first argument the number
# of output dimensions / channels.
layer = tf.keras.layers.Dense(100)
# The number of input dimensions is often unnecessary, as it can be inferred
# the first time the layer is used, but it can be provided if you want to
# specify it manually, which is useful in some complex models.
layer = tf.keras.layers.Dense(10, input_shape=(None, 5))
Explanation: High level API
We recommend using tf.keras as a high-level API for building neural networks. That said, most TensorFlow APIs are usable with eager execution.
Layers: common sets of useful operations
Most of the time when writing code for machine learning models you want to operate at a higher level of abstraction than individual operations and manipulation of individual variables.
Many machine learning models are expressible as the composition and stacking of relatively simple layers, and TensorFlow provides both a set of many common layers as well as easy ways for you to write your own application-specific layers, either from scratch or as the composition of existing layers.
TensorFlow includes the full Keras API in the tf.keras package, and the Keras layers are very useful when building your own models.
End of explanation
# To use a layer, simply call it.
layer(tf.zeros([10, 5]))
# Layers have many useful methods. For example, you can inspect all variables
# in a layer by calling layer.variables. In this case a fully-connected layer
# will have variables for weights and biases.
layer.variables
# The variables are also accessible through nice accessors
layer.kernel, layer.bias
Explanation: The full list of pre-existing layers can be seen in the documentation. It includes Dense (a fully-connected layer),
Conv2D, LSTM, BatchNormalization, Dropout, and many others.
End of explanation
class MyDenseLayer(tf.keras.layers.Layer):
def __init__(self, num_outputs):
super(MyDenseLayer, self).__init__()
self.num_outputs = num_outputs
def build(self, input_shape):
self.kernel = self.add_variable("kernel",
shape=[input_shape[-1].value,
self.num_outputs])
def call(self, input):
return tf.matmul(input, self.kernel)
layer = MyDenseLayer(10)
print(layer(tf.zeros([10, 5])))
print(layer.variables)
Explanation: Implementing custom layers
The best way to implement your own layer is to extend the tf.keras.Layer class and implement:
* __init__ , where you can do all input-independent initialization
* build, where you know the shapes of the input tensors and can do the rest of the initialization
* call, where you do the forward computation
Note that you don't have to wait until build is called to create your variables; you can also create them in __init__. However, the advantage of creating them in build is that it enables late variable creation based on the shape of the inputs the layer will operate on. On the other hand, creating variables in __init__ means that the shapes required to create the variables need to be specified explicitly.
End of explanation
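As a small illustration of that trade-off (my own sketch, not from the original guide), a variant that creates its variable in __init__ has to be told the input size explicitly:
class EagerKernelLayer(tf.keras.layers.Layer):
  def __init__(self, num_inputs, num_outputs):
    super(EagerKernelLayer, self).__init__()
    # the kernel is created here, so num_inputs must be given up front
    self.kernel = self.add_variable("kernel", shape=[num_inputs, num_outputs])
  def call(self, input):
    return tf.matmul(input, self.kernel)
eager_layer = EagerKernelLayer(5, 10)
print(eager_layer(tf.zeros([10, 5])))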
class ResnetIdentityBlock(tf.keras.Model):
def __init__(self, kernel_size, filters):
super(ResnetIdentityBlock, self).__init__(name='')
filters1, filters2, filters3 = filters
self.conv2a = tf.keras.layers.Conv2D(filters1, (1, 1))
self.bn2a = tf.keras.layers.BatchNormalization()
self.conv2b = tf.keras.layers.Conv2D(filters2, kernel_size, padding='same')
self.bn2b = tf.keras.layers.BatchNormalization()
self.conv2c = tf.keras.layers.Conv2D(filters3, (1, 1))
self.bn2c = tf.keras.layers.BatchNormalization()
def call(self, input_tensor, training=False):
x = self.conv2a(input_tensor)
x = self.bn2a(x, training=training)
x = tf.nn.relu(x)
x = self.conv2b(x)
x = self.bn2b(x, training=training)
x = tf.nn.relu(x)
x = self.conv2c(x)
x = self.bn2c(x, training=training)
x += input_tensor
return tf.nn.relu(x)
block = ResnetIdentityBlock(1, [1, 2, 3])
print(block(tf.zeros([1, 2, 3, 3])))
print([x.name for x in block.variables])
Explanation: Note that you don't have to wait until build is called to create your variables, you can also create them in __init__.
Overall code is easier to read and maintain if it uses standard layers whenever possible, as other readers will be familiar with the behavior of standard layers. If you want to use a layer which is not present in tf.keras.layers or tf.contrib.layers, consider filing a github issue or, even better, sending us a pull request!
Models: composing layers
Many interesting layer-like things in machine learning models are implemented by composing existing layers. For example, each residual block in a resnet is a composition of convolutions, batch normalizations, and a shortcut.
The main class used when creating a layer-like thing which contains other layers is tf.keras.Model. Implementing one is done by inheriting from tf.keras.Model.
End of explanation
my_seq = tf.keras.Sequential([tf.keras.layers.Conv2D(1, (1, 1)),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Conv2D(2, 1,
padding='same'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Conv2D(3, (1, 1)),
tf.keras.layers.BatchNormalization()])
my_seq(tf.zeros([1, 2, 3, 3]))
Explanation: Much of the time, however, models which compose many layers simply call one layer after the other. This can be done in very little code using tf.keras.Sequential
End of explanation
<END_TASK> |
30 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Fitting Models Exercise 2
Imports
Step1: Fitting a decaying oscillation
For this problem you are given a raw dataset in the file decay_osc.npz. This file contains three arrays
Step2: Now, using curve_fit to fit this model and determine the estimates and uncertainties for the parameters | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
Explanation: Fitting Models Exercise 2
Imports
End of explanation
data = np.load('decay_osc.npz')  # assuming the arrays are stored under the names given in the problem statement
tdata, ydata, dy = data['tdata'], data['ydata'], data['dy']
def modl(t, A, o, l, d):
    # decaying oscillation: y(t) = A * exp(-l*t) * cos(o*t + d)
    return A * np.exp(-l * t) * np.cos(o * t + d)
thetabest, thetacov = opt.curve_fit(modl, tdata, ydata, np.array((6, 1, 1, 0)), sigma=dy, absolute_sigma=True)
assert True # leave this to grade the data import and raw data plot
Explanation: Fitting a decaying oscillation
For this problem you are given a raw dataset in the file decay_osc.npz. This file contains three arrays:
tdata: an array of time values
ydata: an array of y values
dy: the absolute uncertainties (standard deviations) in y
Your job is to fit the following model to this data:
$$ y(t) = A e^{-\lambda t} \cos(\omega t + \delta) $$
First, import the data using NumPy and make an appropriately styled error bar plot of the raw data.
End of explanation
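A minimal sketch of that first step, reusing the arrays loaded in the cell above (the styling choices are my own):
plt.errorbar(tdata, ydata, dy, fmt='.k', ecolor='lightgray')
plt.xlabel('t')
plt.ylabel('y(t)');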
# YOUR CODE HERE
raise NotImplementedError()
assert True # leave this cell for grading the fit; should include a plot and printout of the parameters+errors
Explanation: Now use curve_fit to fit this model and determine the estimates and uncertainties for the parameters:
Print the parameter estimates and uncertainties.
Plot the raw and best fit model.
You will likely have to pass an initial guess to curve_fit to get a good fit.
Treat the uncertainties in $y$ as absolute errors by passing absolute_sigma=True.
End of explanation
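A minimal sketch of the reporting and plotting described above, assuming the modl, thetabest and thetacov objects defined earlier in this notebook:
A, omega, lam, delta = thetabest
dA, domega, dlam, ddelta = np.sqrt(np.diag(thetacov))  # 1-sigma uncertainties
print('A = %.3f +/- %.3f' % (A, dA))
print('omega = %.3f +/- %.3f' % (omega, domega))
print('lambda = %.3f +/- %.3f' % (lam, dlam))
print('delta = %.3f +/- %.3f' % (delta, ddelta))
tfit = np.linspace(tdata.min(), tdata.max(), 200)
plt.errorbar(tdata, ydata, dy, fmt='.k', label='data')
plt.plot(tfit, modl(tfit, *thetabest), 'r-', label='best fit')
plt.legend(loc='best')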
<END_TASK> |
31 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Analysis of the IAC Dataset
Step1: Preprocessing
The IAC corpus of 10,000 comments is preprocessed into a training set (80%) and a test set (20%). We extract only the responses for available question and response pairs within the corpus and assign to each both the nicenasty score and a boolean variable indicating whether the nicenasty score is positive or negative. We will use the former for regression and the latter for a classification-based analysis.
Regression
We begin by trying to train a regression algorithm on the nicenasty scores of the IAC response corpus. We will use a bag of words model to embed the text into a feature space and tf-idf for downscaling. We will use a random forest as our final regressor, which performs the best of those regressors we experimented with.
Reading the data
Step2: Training the regressor
Step3: Results
Ultimately, our regression is able to capture only a small portion of the signal in the data, as shown in the plot below.
Step4: Classification
Regression seems difficult, but perhaps we can solve an easier classification problem. Let's just try to build a classifier to predict whether a response has a positive nicenasty score. We use the same bag-of-words and tf-idf approach. We chose logistic regression as the classifier after testing a few others that perform less well.
Reading the data
Step5: Training the classifier
Step6: Results
We are able to obtain an accuracy of 73% (which increases to 74.4% with a little hyperparameter searching). This is only slightly better than a classifier which predicts all "True" which would achieve 71% accuracy on our test set. The classification report below shows that our classifier tends to bias toward guessing True and misses a significant portion of the nasty comments.
Step7: Hyperparameter Search and Results
Step8: Convolutional Neural Net
Convolutional neural networks have achieved some success in text classification tasks. Here, we use the skflow package to build a simple CNN to see what kind of classification accuracy we can achieve.
Reading the data
Step9: Processing the vocabulary
Step11: Defining the model
Step12: Training and evaluating the model | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import sklearn
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn import metrics
import matplotlib.pyplot as plt
import scipy
import math
import tensorflow as tf
import skflow
Explanation: Analysis of the IAC Dataset
End of explanation
train = pd.read_csv('../data/isnice_text_train.csv')
X_train, y_train = train.ix[:,2], train.ix[:,1]
test = pd.read_csv('../data/isnice_text_test.csv')
X_test, y_test = test.ix[:,2], test.ix[:,1]
Explanation: Preprocessing
The IAC corpus of 10,000 comments is preprocessed into a training set (80%) and a test set (20%). We extract only the responses for available question and response pairs within the corpus and assign to each both the nicenasty score and a boolean variable indicating whether the nicenasty score is positive or negative. We will use the former for regression and the latter for a classification-based analysis.
Regression
We begin by trying to train a regression algorithm on the nicenasty scores of the IAC response corpus. We will use a bag of words model to embed the text into a feature space and tf-idf for downscaling. We will use a random forest as our final regressor, which performs the best of those regressors we experimented with.
Reading the data
End of explanation
text_reg = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('reg', RandomForestRegressor()),
])
text_reg = text_reg.fit(X_train, y_train)
Explanation: Training the regressor
End of explanation
predicted = text_reg.predict(X_test)
MSE = metrics.regression.mean_squared_error(y_test, predicted)
print 'MSE: {0:f}'.format(MSE)
R2 = metrics.r2_score(y_test, predicted)
print 'R-squared: {0:f}'.format(R2)
plt.scatter(y_test, predicted)
plt.xlabel('Actual Score')
plt.ylabel('Predicted Score')
plt.title('Actual vs Predicted Score')
Explanation: Results
Ultimately, our regression is able to capture only a small portion of the signal in the data, as shown in the plot below.
End of explanation
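To put this in context (a quick baseline check that is not part of the original notebook), we can compare against a constant predictor that always returns the mean training score:
baseline = np.full(len(y_test), y_train.mean())
baseline_MSE = metrics.regression.mean_squared_error(y_test, baseline)
print 'Baseline MSE (predict the mean): {0:f}'.format(baseline_MSE)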
train = pd.read_csv('../data/isnice_text_train.csv')
X_train, y_train = train.ix[:,2], train.ix[:,3]
test = pd.read_csv('../data/isnice_text_test.csv')
X_test, y_test = test.ix[:,2], test.ix[:,3]
Explanation: Classification
Regression seems difficult, but perhaps we can solve an easier classification problem. Let's just try to build a classifier to predict whether a response has a positive nicenasty score. We use the same bag-of-words and tf-idf approach. We chose logistic regression as the classifier after testing a few others that perform less well.
Reading the data
End of explanation
text_clf = Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer(sublinear_tf=True,norm='l2')),
('clf', LogisticRegression()),
])
text_clf = text_clf.fit(X_train, y_train)
Explanation: Training the classifier
End of explanation
predicted = text_clf.predict(X_test)
accur = np.mean(predicted == y_test)
print 'Accuracy: {0:f}'.format(accur)
Explanation: Results
We are able to obtain an accuracy of 73% (which increases to 74.4% with a little hyperparameter searching). This is only slightly better than a classifier which predicts all "True" which would achieve 71% accuracy on our test set. The classification report below shows that our classifier tends to bias toward guessing True and misses a significant portion of the nasty comments.
End of explanation
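The same diagnostic can be printed for this un-tuned classifier directly (a small addition that mirrors the report produced in the next cell):
print "Classification Report (base classifier)"
print(metrics.classification_report(y_test, predicted))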
from sklearn.grid_search import GridSearchCV
parameters = {'vect__ngram_range': [(1, 1), (1, 2), (1,4)],
'tfidf__use_idf': (True, False),
'clf__C': [0.1, 1, 5, 10],
}
gs_clf = GridSearchCV(text_clf, parameters, n_jobs=-1)
gs_clf = gs_clf.fit(X_train, y_train)
best_parameters, score, _ = max(gs_clf.grid_scores_, key=lambda x: x[1])
print "Best Parameters"
for param_name in sorted(parameters.keys()):
print("%s: %r" % (param_name, best_parameters[param_name]))
predicted = gs_clf.predict(X_test)
accur = np.mean(predicted == y_test)
print ""
print 'Accuracy: {0:f}'.format(accur)
print ""
print "Classification Report"
print(metrics.classification_report(y_test, predicted))
Explanation: Hyperparameter Search and Results
End of explanation
train = pd.read_csv('../data/isnice_text_train.csv')
X_train, y_train = train.ix[:,2], train.ix[:,3]
test = pd.read_csv('../data/isnice_text_test.csv')
X_test, y_test = test.ix[:,2], test.ix[:,3]
Explanation: Convolutional Neural Net
Convolutional neural networks have achieved some success in text classification tasks. Here, we use the skflow package to build a simple CNN to see what kind of classification accuracy we can achieve.
Reading the data
End of explanation
MAX_DOCUMENT_LENGTH = 100
vocab_processor = skflow.preprocessing.VocabularyProcessor(MAX_DOCUMENT_LENGTH)
X_train = np.array(list(vocab_processor.fit_transform(X_train)))
X_test = np.array(list(vocab_processor.transform(X_test)))
n_words = len(vocab_processor.vocabulary_)
print('Total words: %d' % n_words)
Explanation: Processing the vocabulary
End of explanation
EMBEDDING_SIZE = 20
N_FILTERS = 10
WINDOW_SIZE = 20
FILTER_SHAPE1 = [WINDOW_SIZE, EMBEDDING_SIZE]
FILTER_SHAPE2 = [WINDOW_SIZE, N_FILTERS]
POOLING_WINDOW = 4
POOLING_STRIDE = 2
def cnn_model(X, y):
    """2-layer convolutional network to predict a class from a sequence of words."""
# Convert indexes of words into embeddings.
# This creates embeddings matrix of [n_words, EMBEDDING_SIZE] and then
# maps word indexes of the sequence into [batch_size, sequence_length,
# EMBEDDING_SIZE].
word_vectors = skflow.ops.categorical_variable(X, n_classes=n_words,
embedding_size=EMBEDDING_SIZE, name='words')
word_vectors = tf.expand_dims(word_vectors, 3)
with tf.variable_scope('CNN_Layer1'):
# Apply Convolution filtering on input sequence.
conv1 = skflow.ops.conv2d(word_vectors, N_FILTERS, FILTER_SHAPE1, padding='VALID')
# Add a RELU for non linearity.
conv1 = tf.nn.relu(conv1)
# Max pooling across output of Convlution+Relu.
pool1 = tf.nn.max_pool(conv1, ksize=[1, POOLING_WINDOW, 1, 1],
strides=[1, POOLING_STRIDE, 1, 1], padding='SAME')
# Transpose matrix so that n_filters from convolution becomes width.
pool1 = tf.transpose(pool1, [0, 1, 3, 2])
with tf.variable_scope('CNN_Layer2'):
# Second level of convolution filtering.
conv2 = skflow.ops.conv2d(pool1, N_FILTERS, FILTER_SHAPE2,
padding='VALID')
# Max across each filter to get useful features for classification.
pool2 = tf.squeeze(tf.reduce_max(conv2, 1), squeeze_dims=[1])
# Apply regular WX + B and classification.
return skflow.models.logistic_regression(pool2, y)
classifier = skflow.TensorFlowEstimator(model_fn=cnn_model, n_classes=2,
steps=100, optimizer='Adam', learning_rate=0.005, continue_training=True)
Explanation: Defining the model
End of explanation
for i in xrange(4):
classifier.fit(X_train, y_train, logdir='/tmp/tf_examples/word_cnn')
score = metrics.accuracy_score(y_test, classifier.predict(X_test))
print('Accuracy: {0:f}'.format(score))
Explanation: Training and evaluating the model
End of explanation
<END_TASK> |
32 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Home Depot Product Search Relevance
The challenge is to predict a relevance score for the provided combinations of search terms and products. To create the ground truth labels, Home Depot has crowdsourced the search/product pairs to multiple human raters.
LabGraph Create
This notebook uses the LabGraph create machine learning iPython module. You need a personal licence to run this code.
Step1: Load data from CSV files
Step2: Data merging
Step3: Let's explore some data
Let's examine 3 different queries and products
Step4: The search term 'angle bracket' is not contained in the body: 'angle' would be, after stemming, but 'bracket' is not.
Step5: Only 'wood' from the search term is present
Step6: 'sheer' and 'courtain' are present and that's all
How many search terms are not present in the description and title for rank-3 documents
Rank-3 documents are the most relevant searches, but how many search queries don't include the searched term in the description and the title
Step7: TF-IDF with linear regression | <ASSISTANT_TASK:>
Python Code:
import graphlab as gl
Explanation: Home Depot Product Search Relevance
The challenge is to predict a relevance score for the provided combinations of search terms and products. To create the ground truth labels, Home Depot has crowdsourced the search/product pairs to multiple human raters.
LabGraph Create
This notebook uses the LabGraph create machine learning iPython module. You need a personal licence to run this code.
End of explanation
train = gl.SFrame.read_csv("../data/train.csv")
test = gl.SFrame.read_csv("../data/test.csv")
desc = gl.SFrame.read_csv("../data/product_descriptions.csv")
Explanation: Load data from CSV files
End of explanation
# merge train with description
train = train.join(desc, on = 'product_uid', how = 'left')
# merge test with description
test = test.join(desc, on = 'product_uid', how = 'left')
Explanation: Data merging
End of explanation
first_doc = train[0]
first_doc
Explanation: Let's explore some data
Let's examine 3 different queries and products:
* first from the training set
* one from somewhere in the middle of the training set
* the last one from the training set
End of explanation
middle_doc = train[37033]
middle_doc
Explanation: The search term 'angle bracket' is not contained in the body: 'angle' would be, after stemming, but 'bracket' is not.
End of explanation
last_doc = train[-1]
last_doc
Explanation: Only 'wood' from the search term is present
End of explanation
train['search_term_word_count'] = gl.text_analytics.count_words(train['search_term'])
ranked3doc = train[train['relevance'] == 3]
print ranked3doc.head()
len(ranked3doc)
words_search = gl.text_analytics.tokenize(ranked3doc['search_term'], to_lower = True)
words_description = gl.text_analytics.tokenize(ranked3doc['product_description'], to_lower = True)
words_title = gl.text_analytics.tokenize(ranked3doc['product_title'], to_lower = True)
wordsdiff_desc = []
wordsdiff_title = []
puid = []
search_term = []
ws_count = []
ws_count_used_desc = []
ws_count_used_title = []
for item in xrange(len(ranked3doc)):
ws = words_search[item]
pd = words_description[item]
pt = words_title[item]
diff = set(ws) - set(pd)
if diff is None:
diff = 0
wordsdiff_desc.append(diff)
diff2 = set(ws) - set(pt)
if diff2 is None:
diff2 = 0
wordsdiff_title.append(diff2)
puid.append(ranked3doc[item]['product_uid'])
search_term.append(ranked3doc[item]['search_term'])
ws_count.append(len(ws))
ws_count_used_desc.append(len(ws) - len(diff))
ws_count_used_title.append(len(ws) - len(diff2))
differences = gl.SFrame({"puid" : puid,
"search term": search_term,
"diff desc" : wordsdiff_desc,
"diff title" : wordsdiff_title,
"ws count" : ws_count,
"ws count used desc" : ws_count_used_desc,
"ws count used title" : ws_count_used_title})
differences.sort(['ws count used desc', 'ws count used title'])
print "No terms used in description : " + str(len(differences[differences['ws count used desc'] == 0]))
print "No terms used in title : " + str(len(differences[differences['ws count used title'] == 0]))
print "No terms used in description and title : " + str(len(differences[(differences['ws count used desc'] == 0) &
(differences['ws count used title'] == 0)]))
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: 'sheer' and 'courtain' are present and that's all
How many search terms are not present in the description and title for rank-3 documents
Rank-3 documents are the most relevant searches, but how many search queries don't include the searched term in the description and the title
End of explanation
train_search_tfidf = gl.text_analytics.tf_idf(train['search_term_word_count'])
train['search_tfidf'] = train_search_tfidf
train['product_desc_word_count'] = gl.text_analytics.count_words(train['product_description'])
train_desc_tfidf = gl.text_analytics.tf_idf(train['product_desc_word_count'])
train['desc_tfidf'] = train_desc_tfidf
train['product_title_word_count'] = gl.text_analytics.count_words(train['product_title'])
train_title_tfidf = gl.text_analytics.tf_idf(train['product_title_word_count'])
train['title_tfidf'] = train_title_tfidf
train['distance'] = train.apply(lambda x: gl.distances.cosine(x['search_tfidf'],x['desc_tfidf']))
train['distance2'] = train.apply(lambda x: gl.distances.cosine(x['search_tfidf'],x['title_tfidf']))
model1 = gl.linear_regression.create(train, target = 'relevance', features = ['distance', 'distance2'], validation_set = None)
#let's take a look at the weights before we plot
model1.get("coefficients")
test['search_term_word_count'] = gl.text_analytics.count_words(test['search_term'])
test_search_tfidf = gl.text_analytics.tf_idf(test['search_term_word_count'])
test['search_tfidf'] = test_search_tfidf
test['product_desc_word_count'] = gl.text_analytics.count_words(test['product_description'])
test_desc_tfidf = gl.text_analytics.tf_idf(test['product_desc_word_count'])
test['desc_tfidf'] = test_desc_tfidf
test['product_title_word_count'] = gl.text_analytics.count_words(test['product_title'])
test_title_tfidf = gl.text_analytics.tf_idf(test['product_title_word_count'])
test['title_tfidf'] = test_title_tfidf
test['distance'] = test.apply(lambda x: gl.distances.cosine(x['search_tfidf'],x['desc_tfidf']))
test['distance2'] = test.apply(lambda x: gl.distances.cosine(x['search_tfidf'],x['title_tfidf']))
'''
predictions_test = model1.predict(test)
test_errors = predictions_test - test['relevance']
RSS_test = sum(test_errors * test_errors)
print RSS_test
'''
output = model1.predict(test)  # predicted relevance scores for the test set
submission = gl.SFrame(test['id'])
submission.add_column(output)
submission.rename({'X1': 'id', 'X2':'relevance'})
submission['relevance'] = submission.apply(lambda x: 3.0 if x['relevance'] > 3.0 else x['relevance'])
submission['relevance'] = submission.apply(lambda x: 1.0 if x['relevance'] < 1.0 else x['relevance'])
submission['relevance'] = submission.apply(lambda x: str(x['relevance']))
submission.export_csv('../data/submission.csv', quote_level = 3)
#gl.canvas.set_target('ipynb')
Explanation: TF-IDF with linear regression
End of explanation
<END_TASK> |
33 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Example Usage of HDFWriter
If the properties of a class need to be saved in an HDF file, then the class should inherit from HDFWriterMixin as demonstrated below.
hdf_properties (list)
Step1: You can now save properties using to_hdf method.
Parameters
file_path
Step2: You can now read the HDF file using pd.HDFStore or pd.read_hdf
Step3: Saving nested class objects.
Just extend hdf_properties list to include that class object. <br>
Step4: Modified Usage
In the BasePlasma class, the way properties of an object are collected is different. It does not use the hdf_properties attribute.<br>
That's why PlasmaWriterMixin (which extends HDFWriterMixin) changes how the properties of the BasePlasma class are collected, by overriding the get_properties function.<br>
Here is a quick demonstration of how the default get_properties behaviour inside HDFWriterMixin can be changed by subclassing it to create a new mixin.
Step5: A demo class , using this modified mixin. | <ASSISTANT_TASK:>
Python Code:
from tardis.io.util import HDFWriterMixin
class ExampleClass(HDFWriterMixin):
hdf_properties = ['property1', 'property2']
hdf_name = 'mock_setup'
def __init__(self, property1, property2):
self.property1 = property1
self.property2 = property2
import numpy as np
import pandas as pd
#Instantiating Object
property1 = np.array([4.0e14, 2, 2e14, 27.5])
property2 = pd.DataFrame({'one': pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
'two': pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])})
obj = ExampleClass(property1, property2)
Explanation: Example Usage of HDFWriter
If the properties of a class need to be saved in an HDF file, then the class should inherit from HDFWriterMixin as demonstrated below.
hdf_properties (list) : Contains the names of all the properties that need to be saved.<br>
hdf_name (str) : Specifies the default name of the group under which the properties will be saved.
End of explanation
obj.to_hdf(file_path='test.hdf', path='test')
#obj.to_hdf(file_path='test.hdf', path='test', name='hdf')
Explanation: You can now save properties using to_hdf method.
Parameters
file_path : Path where the HDF file will be saved<br>
path : Path inside the HDF store to store the elements<br>
name : Name of the group inside the HDF store under which properties will be saved.<br>
If not specified, then it uses the value specified in the hdf_name attribute.<br>
If hdf_name is also not defined, then it converts the class name to snake case and uses that value.<br>
For example, if name is not passed as an argument and hdf_name is also not defined for ExampleClass above, then it will save properties under the example_class group.
End of explanation
#Read HDF file
with pd.HDFStore('test.hdf','r') as data:
print data
#print data['/test/mock_setup/property1']
Explanation: You can now read the HDF file using pd.HDFStore or pd.read_hdf
End of explanation
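A single stored property can also be pulled out directly with pd.read_hdf (an illustrative one-liner; the key follows the layout hinted at by the commented line above):
print pd.read_hdf('test.hdf', '/test/mock_setup/property1')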
class NestedExampleClass(HDFWriterMixin):
hdf_properties = ['property1', 'nested_object']
def __init__(self, property1, nested_obj):
self.property1 = property1
self.nested_object = nested_obj
obj2 = NestedExampleClass(property1, obj)
obj2.to_hdf(file_path='nested_test.hdf')
#Read HDF file
with pd.HDFStore('nested_test.hdf','r') as data:
print data
Explanation: Saving nested class objects.
Just extend hdf_properties list to include that class object. <br>
End of explanation
class ModifiedWriterMixin(HDFWriterMixin):
def get_properties(self):
#Change behaviour here, how properties will be collected from Class
data = {name: getattr(self, name) for name in self.outputs}
return data
Explanation: Modified Usage
In the BasePlasma class, the way properties of an object are collected is different. It does not use the hdf_properties attribute.<br>
That's why PlasmaWriterMixin (which extends HDFWriterMixin) changes how the properties of the BasePlasma class are collected, by overriding the get_properties function.<br>
Here is a quick demonstration of how the default get_properties behaviour inside HDFWriterMixin can be changed by subclassing it to create a new mixin.
End of explanation
class DemoClass(ModifiedWriterMixin):
outputs = ['property1']
hdf_name = 'demo'
def __init__(self, property1):
self.property1 = property1
obj3 = DemoClass('random_string')
obj3.to_hdf('demo_class.hdf')
with pd.HDFStore('demo_class.hdf','r') as data:
print data
Explanation: A demo class using this modified mixin.
End of explanation
<END_TASK> |
34 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
2D Cylinder
Overview
The periodic shedding of laminar flow over a 2D cylinder at a Reynolds Number of 150 can be used to verify the time accuracy of the solver. For this case the shedding frequency is measured by monitoring the variation in pressure downstream of the cylinder and away from the centre of the wake so that it is not affected by the vortex shed by the opposite side.
References
http
Step1: zCFD Validation and Regression
Step2: Initialise Environment
Step3: Data Connection
This starts a ParaView server on the remote host and connects
Step4: Get control dictionary
Step5: Get status file
Step6: Define test conditions
Step7: Plot pressure time-history at probe point
Step8: Convergence
Step9: Validation and regression
Step10: Cleaning up | <ASSISTANT_TASK:>
Python Code:
remote_data = True
remote_server_auto = True
case_name = 'cylinder'
data_dir='/gpfs/thirdparty/zenotech/home/dstandingford/VALIDATION/CYLINDER'
data_host='dstandingford@vis03'
paraview_cmd='mpiexec /gpfs/cfms/apps/zCFD/bin/pvserver'
if not remote_server_auto:
paraview_cmd=None
if not remote_data:
data_host='localhost'
paraview_cmd=None
Explanation: 2D Cylinder
Overview
The periodic shedding of laminar flow over a 2D cylinder at a Reynolds Number of 150 can be used to verify the time accuracy of the solver. For this case the shedding frequency is measured by monitoring the variation in pressure downstream of the cylinder and away from the centre of the wake so that it is not affected by the vortex shed by the opposite side.
References
http://www.grc.nasa.gov/WWW/wind/valid/lamcyl/Study1_files/Study1.html
Define Data Location
For remote data the interaction will use ssh to securely interact with the data
This uses the reverse connection capability in paraview so that the paraview server can be submitted to a job scheduler
Note: The default paraview server connection will use port 11111
End of explanation
# Validation criteria setup for cylinder - note that the timestep (dt=0.002) chosen is just
# small enough to capture the Direct frequency. A smaller timestep (dt=0.001) gives a more
# accurate output.
validate = True
regression = True
if (validate):
valid = True
valid_lower_strouhal = 0.1790
valid_upper_strouhal = 0.1820
print 'VALIDATING CYLINDER CASE'
if (regression):
print 'REGRESSION CYLINDER CASE'
Explanation: zCFD Validation and Regression
End of explanation
%pylab inline
from paraview.simple import *
paraview.simple._DisableFirstRenderCameraReset()
import pylab as pl
import math
Explanation: Initialise Environment
End of explanation
from zutil.post import pvserver_connect
if remote_data:
pvserver_connect(data_host=data_host,data_dir=data_dir,paraview_cmd=paraview_cmd)
Explanation: Data Connection
This starts a ParaView server on the remote host and connects
End of explanation
from zutil.post import get_case_parameters,print_html_parameters
parameters=get_case_parameters(case_name,data_host=data_host,data_dir=data_dir)
Explanation: Get control dictionary
End of explanation
from zutil.post import get_status_dict
status=get_status_dict(case_name,data_host=data_host,data_dir=data_dir)
num_procs = str(status['num processor'])
Explanation: Get status file
End of explanation
# print parameters
from IPython.display import HTML
HTML(print_html_parameters(parameters))
diameter = 1.0
time_step = parameters['time marching']['unsteady']['time step']
cycles = parameters['time marching']['cycles']
mach = parameters['IC_1']['V']['Mach']
print 'mach = %.2f'%(mach)
kappa = 1.402
print 'kappa = %.3f'%(kappa)
R = 287.058
print 'R = %.3f'%(R)
temperature = parameters['IC_1']['temperature']
print 'temperature = %.2f'%(temperature) + ' Kelvin'
pressure = parameters['IC_1']['pressure']
print 'pressure = %.2f'%(pressure) + ' Pascals'
density = pressure/(R*temperature)
print 'density = %.2f'%(density) + ' kg/m^3'
speed_of_sound = sqrt(kappa*pressure/density)
print 'speed_of_sound = %.2f'%(speed_of_sound) + ' m/s'
u_ref = mach*speed_of_sound
print 'u_ref = %.2f'%(u_ref) + ' m/s'
Explanation: Define test conditions
End of explanation
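As a quick sanity check (a small addition using the values computed above), the validation band on the Strouhal number, St = f D / u_ref, translates into an expected shedding frequency:
expected_f_lower = valid_lower_strouhal*u_ref/diameter
expected_f_upper = valid_upper_strouhal*u_ref/diameter
print 'expected shedding frequency between %.3f and %.3f Hz'%(expected_f_lower, expected_f_upper)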
from zutil.post import get_case_root, get_case_report, get_monitor_data
monitor_data = get_monitor_data(get_case_report(case_name),'probe','cp')
# clean up the probe history - remove the pseudo-timestep data
probe_data_x = []
probe_data_y = []
for i in range(0,len(monitor_data[0])):
if ((float(monitor_data[0][i])/float(cycles)) == int(monitor_data[0][i]/cycles)):
probe_data_x.append(float(monitor_data[0][i])*float(time_step)/float(cycles))
probe_data_y.append(float(monitor_data[1][i]))
# Find local maxima after 1 second
maxima_x = []
maxima_y = []
time_start = 1.0
for i in range(1,len(probe_data_x)-1):
time = probe_data_x[i]
if (time > time_start):
val_im1 = probe_data_y[i-1]
val_i = probe_data_y[i]
val_ip1 = probe_data_y[i+1]
if ((val_i > val_im1) and (val_i > val_ip1)):
maxima_x.append(probe_data_x[i])
maxima_y.append(probe_data_y[i])
# Calculate the Strouhal number
num_periods = len(maxima_x)-1
if (num_periods > 1):
frequency = num_periods/(maxima_x[len(maxima_x)-1]-maxima_x[0])
strouhal = frequency*diameter/u_ref
else:
print 'INSUFFICIENT NUMBER OF PERIODS'
strouhal = -100.0
if (validate):
valid = False
fig = pl.figure(figsize=(12, 8), dpi=150, facecolor='w', edgecolor='#E48B25')
fig.suptitle('2D Laminar Cylinder - Strouhal Number = ' + '%.4f'%strouhal,
fontsize=24, fontweight='normal', color = '#E48B25')
ax = fig.add_subplot(1,1,1)
ax.grid(True)
ax.set_xlabel('Time (seconds)', fontsize=18, fontweight='normal', color = '#5D5858')
ax.set_ylabel(r'$\mathbf{C_P}$' + ' at [1.07, 0.313]', fontsize=18, fontweight='normal', color = '#5D5858')
ax.set_xlim((0.0,2.0))
ax.set_ylim((-1.5,0.0))
ax.plot(probe_data_x, probe_data_y, color='r', label='Probe at [1.07, 0.313]')
ax.scatter(maxima_x, maxima_y, color='g', label='Local maxima ' + '(t > %.1f seconds)'%time_start)
legend = ax.legend(loc='best', scatterpoints=1, numpoints=1, shadow=False, fontsize=16)
legend.get_frame().set_facecolor('white')
ax.tick_params(axis='x', pad=8)
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(18)
tick.label.set_fontweight('normal')
tick.label.set_color('#E48B25')
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(18)
tick.label.set_fontweight('normal')
tick.label.set_color('#E48B25')
fig.savefig("images/cylinder_probe.png")
show()
from IPython.display import FileLink, display
display(FileLink('images/cylinder_probe.png'))
Explanation: Plot pressure time-history at probe point
End of explanation
from zutil.post import residual_plot, get_case_report
residual_plot(get_case_report(case_name))
show()
Explanation: Convergence
End of explanation
# define function to help with validation check
def validate_data(name, value, valid_lower, valid_upper):
if ((value < valid_lower) or (value > valid_upper)):
print 'INVALID: ' + name + ' %.4f '%valid_lower + '%.4f '%value + ' %.4f'%valid_upper
return False
else:
return True
if (validate):
valid = valid and validate_data('strouhal', strouhal, valid_lower_strouhal, valid_upper_strouhal)
if (valid):
print 'VALIDATION = PASS :-)'
else:
print 'VALIDATION = FAIL :-('
if (regression):
import pandas as pd
pd.options.display.float_format = '{:,.6f}'.format
print 'REGRESSION DATA'
regress = {'version' : ['dt=0.001', 'dt=0.002', 'dt=0.005', 'v0.1 (dt=0.001)', 'v0.1 (dt=0.002)', 'CURRENT (dt=%.3f)'%time_step],
'Strouhal': [0.179974 , 0.179189 , 0.149542, 0.179974 , 0.179189, strouhal]}
regression_table = pd.DataFrame(regress, columns=['version','Strouhal'])
print regression_table
Explanation: Validation and regression
End of explanation
if remote_data:
print 'Disconnecting from remote paraview server connection'
Disconnect()
Explanation: Cleaning up
End of explanation
<END_TASK> |
35 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<span style="float
Step1: Contents
A. View the crystal structure
B. Build the biomolecular assembly
C. Isolate the DNA
D. Prep for simulation
E. Dynamics - equilibration
F. Dynamics - production
A. View the crystal structure
We start by downloading the 1KBU crystal structure.
It will generate several warnings. Especially note that it contains biomolecular "assembly" information. This means that the file from PDB doesn't contain the complete structure, but we can generate the missing parts using symmetry operations.
Step2: B. Build the biomolecular assembly
As you can read in the warning, 1KBU only has one biomolecular assembly, conveniently named '1'. This cell builds and views it
Step3: By evaluating the assembly object (it's a normal instance of the moldesign.Molecule class), we can get some information about its content
Step4: Because we're only interested in DNA, we'll create a new molecule using only the DNA residues, and then assign a forcefield to it.
C. Isolate the DNA
This example will focus only on the DNA components of this structure, so we'll isolate the DNA atoms and create a new molecule from them.
We could do this with a list comprehension, e.g.
mdt.Molecule([atom for atom in assembly.atoms if atom.residue.type == 'dna'])
Here, however, we'll use a shortcut for this - the molecule.get_atoms method, which allows you to run queries on the atoms
Step5: D. Prep for simulation
Next, we'll assign a forcefield and energy model, then minimize the structure.
Step6: E. Dynamics - equilibration
The structure is ready. We'll associate an integrator with the molecule, then do a two-step equilibration - first freezing the DNA backbone and running 300K dynamics, then unfreezing and continuing dynamics.
Step7: And now we run it. This may take a while, depending on your hardware.
Step8: Next, we'll remove the constraints and do full dynamics
Step9: NOTE
Step10: G. Save your results
Any MDT object can be saved to disk. We recommend saving objects with the "Pickle" format to make sure that all the data is preserved.
This cell saves the final trajectory to disk as a compressed pickle file
Step11: To load the saved object, use | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib.pyplot import *
import moldesign as mdt
from moldesign import units as u
Explanation: <span style="float:right"><a href="http://moldesign.bionano.autodesk.com/" target="_blank" title="About">About</a> <a href="https://github.com/autodesk/molecular-design-toolkit/issues" target="_blank" title="Issues">Issues</a> <a href="http://bionano.autodesk.com/MolecularDesignToolkit/explore.html" target="_blank" title="Tutorials">Tutorials</a> <a href="http://autodesk.github.io/molecular-design-toolkit/" target="_blank" title="Documentation">Documentation</a></span>
</span>
<br>
<center><h1>Example 3: Simulating a Holliday Junction PDB assembly </h1> </center>
This notebook takes a crystal structure from the PDB and prepares it for simulation.
Author: Aaron Virshup, Autodesk Research
Created on: July 1, 2016
Tags: DNA, holliday junction, assembly, PDB, MD
End of explanation
xtal = mdt.from_pdb('1kbu')
xtal.draw()
Explanation: Contents
A. View the crystal structure
B. Build the biomolecular assembly
C. Isolate the DNA
D. Prep for simulation
E. Dynamics - equilibration
F. Dynamics - production
A. View the crystal structure
We start by downloading the 1KBU crystal structure.
It will generate several warnings. Especially note that it contains biomolecular "assembly" information. This means that the file from PDB doesn't contain the complete structure, but we can generate the missing parts using symmetry operations.
End of explanation
assembly = mdt.build_assembly(xtal, 1)
assembly.draw()
Explanation: B. Build the biomolecular assembly
As you can read in the warning, 1KBU only has one biomolecular assembly, conveniently named '1'. This cell builds and views it:
End of explanation
assembly
Explanation: By evaluating the assembly object (it's a normal instance of the moldesign.Molecule class), we can get some information about its content:
End of explanation
dna_atoms = assembly.get_atoms('dna')
dna_only = mdt.Molecule(dna_atoms)
dna_only.draw3d(display=True)
dna_only
Explanation: Because we're only interested in DNA, we'll create a new molecule using only the DNA residues, and then assign a forcefield to it.
C. Isolate the DNA
This example will focus only on the DNA components of this structure, so we'll isolate the DNA atoms and create a new molecule from them.
We could do this with a list comprehension, e.g.
mdt.Molecule([atom for atom in assembly.atoms if atom.residue.type == 'dna'])
Here, however, we'll use a shortcut for this - the molecule.get_atoms method, which allows you to run queries on the atoms:
End of explanation
ff = mdt.forcefields.DefaultAmber()
dna = ff.create_prepped_molecule(dna_only)
dna.set_energy_model(mdt.models.OpenMMPotential, implicit_solvent='obc')
dna.configure_methods()
minimization = dna.minimize()
minimization.draw()
Explanation: D. Prep for simulation
Next, we'll assign a forcefield and energy model, then minimize the structure.
End of explanation
# Freeze the backbone:
for residue in dna.residues:
for atom in residue.backbone:
dna.constrain_atom(atom)
dna.set_integrator(mdt.integrators.OpenMMLangevin,
timestep=2.0*u.fs,
frame_interval=1.0*u.ps,
remove_rotation=True)
dna.integrator.configure()
Explanation: E. Dynamics - equilibration
The structure is ready. We'll associate an integrator with the molecule, then do a two-step equilibration - first freezing the DNA backbone and running 300K dynamics, then unfreezing and continuing dynamics.
End of explanation
equil1 = dna.run(20.0*u.ps)
equil1.draw()
Explanation: And now we run it. This may take a while, depending on your hardware.
End of explanation
dna.clear_constraints()
equil2 = dna.run(20.0*u.ps)
equil = equil1 + equil2
equil.draw()
plot(equil2.time, equil2.rmsd())
xlabel('time / fs'); ylabel(u'rmsd / Å'); grid()
Explanation: Next, we'll remove the constraints and do full dynamics:
End of explanation
trajectory = dna.run(40.0*u.ps)
trajectory.draw()
Explanation: NOTE: THIS IS NOT A SUFFICIENT EQUILIBRATION FOR PRODUCTION MOLECULAR DYNAMICS!
In practice, before going to "production", we would at least want to run dynamics until the RMSD and thermodynamic observables have converged. A variety of equilibration protocols are used in practice, including slow heating, reduced coupling, multiple constraints, etc.
F. Dynamics - production
Assuming that we're satisfied with our system's equilibration, we now gather data for "production". This will take a while.
End of explanation
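A quick way to monitor this (an added check reusing the same calls as the equilibration plot above) is to confirm that the RMSD of the production trajectory has levelled off:
plot(trajectory.time, trajectory.rmsd())
xlabel('time / fs'); ylabel(u'rmsd / Å'); grid()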
trajectory.write('holliday_traj.P.gz')
Explanation: G. Save your results
Any MDT object can be saved to disk. We recommend saving objects with the "Pickle" format to make sure that all the data is preserved.
This cell saves the final trajectory to disk as a compressed pickle file:
End of explanation
traj = mdt.read('holliday_traj.P.gz')
traj.draw()
Explanation: To load the saved object, use:
End of explanation
<END_TASK> |
36 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Polynomial Surrogate Model
Consider a simple underlying function with Gaussian noise added to simulate experimental or noisy computational data.
$$y = 3x^2 + 2x + 1 + \mathcal{N}(0, \sigma), \text{ where } \sigma = 1$$
We will create the data at 20 points between $-2 \ldots 2$.
Step1: This data is our training data and consists of pairs
Step2: Ideally w would be [3, 2, 1] based on our underlying polynomial, but it won't recover that exactly because of the noise in the function.
We now have a polynomial model
Step3: Cross Validation
Let's consider another simple function with Gaussian noise
Step4: Visually, it's harder to determine what the best order for the polynomial is. Of course, in higher dimensions we cannot visualize the function and so generally won't know beforehand what terms to use in our polynomial model. We will use cross validation to help us choose an appropriate order. Below, we create a function to create and evaluate a polynomial for any order (in a 1D space)
Step5: Let's try different polynomial orders and check to see what the error is in our fit.
Step6: This suggests that the higher the order of the polynomial the better! Of course we know that's not true. Let's look at what a 20th-order polynomial model looks like
Step7: We notice that the error at the points we are trying to fit is very small (which is what our least squares solution is doing), but the predictive capability of the model is very poor. The reason for this issue is that we tested our model using the same points we used to train our model, so of course the error was low. What we need to do instead is keep a separate set of training data and a separate set of validation data to test how good the model is. There are many methods to do this. In this example, we use k-fold cross validation.
Step8: Both plots are the same, but the axis is smaller on the bottom one because the error blows up quickly. Notice that now, instead of the error continualy decreasing, it decreases for a while then increases as we run into problems with overfitting. Generally, we like to choose the simplest model that gives reasonable error. The curve is pretty flat near the minimum, and in this case a good point is somewhere around a 4th order polynomial. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
n = 20
sigma = 1.0
xdata = np.linspace(-2, 2, n)
fdata = 3*xdata**2 + 2*xdata + 1 + np.random.randn(n)*sigma
plt.figure()
plt.plot(xdata, fdata, 'o')
plt.xlabel('x')
plt.ylabel('f')
plt.show()
Explanation: Polynomial Surrogate Model
Consider a simple underlying function with Gaussian noise added to simulate experimental or noisy computational data.
$$y = 3x^2 + 2x + 1 + \mathcal{N}(0, \sigma), \text{ where } \sigma = 1$$
We will create the data at 20 points between $-2 \ldots 2$.
End of explanation
Psi = np.zeros((n, 3))
Psi[:, 0] = xdata**2
Psi[:, 1] = xdata
Psi[:, 2] = 1
w, residuals, rank, s = np.linalg.lstsq(Psi, fdata)
print w
Explanation: This data is our training data and consists of pairs $(x^{(i)}, f^{(i)})$. Given this data, we want to compute a polynomial fit. In this case, we know that it's quadratic, so let's use a quadratic model of the form:
$$\hat{f}(x) = a x^2 + b x + c$$
This is a simple least squares problem.
$$\text{minimize} \sum_i \left(\hat{f}(x^{(i)}) - f^{(i)} \right)^2 $$
Which we can rewrite in matrix form as:
$$\text{minimize} \; || \Psi w - f ||^2 $$
where
$$
\Psi =
\begin{bmatrix}
\text{---} & \psi(x^{(1)})^T & \text{---} \\
\text{---} & \psi(x^{(2)})^T & \text{---} \\
& \vdots & \\
\text{---} & \psi(x^{(n)})^T & \text{---} \\
\end{bmatrix}
$$
$$
f = \begin{bmatrix}
f^{(1)}\\
f^{(2)}\\
\vdots \\
f^{(n)}\\
\end{bmatrix}
$$
In our specific case: $w = [a, b, c]$ and $\psi = [x^2, x, 1]$. The matrix equation can be solved as a least squares solution. Notice that this simple form works in any number of dimensions (not just one). In higher dimensions we just add more terms to $\psi$.
End of explanation
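As a cross-check (an added aside, not part of the original derivation), numpy's built-in polyfit solves the same 1D least-squares problem and should return nearly identical coefficients:
w_check = np.polyfit(xdata, fdata, 2)  # returns [a, b, c], same ordering as our w
print w_check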
nfull = 200
xfull = np.linspace(-2, 2, nfull)
Psifull = np.zeros((nfull, 3))
Psifull[:, 0] = xfull**2
Psifull[:, 1] = xfull
Psifull[:, 2] = 1
ffull = np.dot(Psifull, w)
plt.figure()
plt.plot(xdata, fdata, 'o')
plt.plot(xfull, ffull, '--')
plt.xlabel('x')
plt.ylabel('f')
plt.show()
Explanation: Ideally w would be [3, 2, 1] based on our underlying polynomial, but it won't recover that exactly because of the noise in the function.
We now have a polynomial model: $\hat{f} = \Psi w$ that we can evaluate at any point. Let's compute at many locations along the domain and compare our fit:
End of explanation
def func(x):
sigma = 1.0
return (6*x-2)**2*np.sin(12*x-4) + np.random.randn(len(x))*sigma
# ---- create training data ---------
ndata = 20
xdata = np.linspace(0, 1, ndata)
fdata = func(xdata)
plt.plot(xdata, fdata, 'o')
plt.xlabel('x')
plt.ylabel('f')
plt.show()
Explanation: Cross Validation
Let's consider another simple function with Gaussian noise:
$$f = (6x - 2)^2 \sin(12 x - 4) + \mathcal{N}(0, \sigma)$$
End of explanation
def getPsi(x, order):
n = len(x)
Psi = np.zeros((n, order+1))
for i in range(order+1):
Psi[:, i] = x**(order-i)
return Psi
def createpoly(x, f, order):
Psi = getPsi(x, order)
w, residuals, rank, s = np.linalg.lstsq(Psi, f)
return w
def evalpoly(x, w):
order = len(w) - 1
Psi = getPsi(x, order)
f = np.dot(Psi, w)
return f
Explanation: Visually, it's harder to determine what the best order for the polynomial is. Of course, in higher dimensions we cannot visualize the function and so generally won't know beforehand what terms to use in our polynomial model. We will use cross validation to help us choose an appropriate order. Below, we create a function to create and evaluate a polynomial for any order (in a 1D space):
End of explanation
ordervec = np.arange(1, 21)
error = np.zeros(20)
for idx, order in enumerate(ordervec):
# build a polynomial model from the training data
w = createpoly(xdata, fdata, order)
# test the error
fhat = evalpoly(xdata, w)
error[idx] = np.linalg.norm(fhat - fdata)
# plot error
plt.figure()
plt.plot(ordervec, error, 'o')
plt.xlabel('order of polynomial')
plt.ylabel('error')
plt.show()
Explanation: Let's try different polynomial orders and check to see what the error is in our fit.
End of explanation
order = 20
w = createpoly(xdata, fdata, order)
nhat = 200
xhat = np.linspace(0, 1, nhat)
fhat = evalpoly(xhat, w)
plt.figure()
plt.plot(xdata, fdata, 'o')
plt.plot(xhat, fhat, '--')
plt.ylim([-10, 20])
plt.xlabel('x')
plt.ylabel('f')
Explanation: This suggests that the higher the order of the polynomial the better! Of course we know that's not true. Let's look at what a 20th-order polynomial model looks like:
End of explanation
div = 10 # we will divide our data into div segments
ndata = 20 # number of data points
arrlength = ndata/div # each segment should contain this much data
idxrand = np.random.permutation(ndata) # random permutation of the data indices 0 ... ndata-1
error = np.zeros(len(ordervec))
# iterate through polynomial orders
for i, order in enumerate(ordervec):
# iterate through divisions of data for k-holdout
for j in range(div):
# indices of data to leave out from the random permutation
holdout = idxrand[arrlength*j:arrlength*(j+1)]
# separate into training set and testing set
xtrain = np.delete(xdata, holdout)
ftrain = np.delete(fdata, holdout)
xtest = xdata[holdout]
ftest = fdata[holdout]
# build a polynomial model from the training data
w = createpoly(xtrain, ftrain, order)
# test the error with the validation set
fhat = evalpoly(xtest, w)
error[i] += np.linalg.norm(fhat - ftest) / div # average error across divisions
# plot error
plt.figure()
plt.plot(ordervec, error, 'o')
plt.xlabel('order of polynomial')
plt.ylabel('error')
plt.show()
# plot error
plt.figure()
plt.plot(ordervec, error, 'o')
plt.xlabel('order of polynomial')
plt.ylabel('error')
plt.ylim([0, 25])
plt.show()
Explanation: We notice that the error at the points we are trying to fit is very small (which is what our least squares solution is doing), but the predictive capability of the model is very poor. The reason for this issue is that we tested our model using the same points we used to train our model, so of course the error was low. What we need to do instead is keep a separate set of training data and a separate set of validation data to test how good the model is. There are many methods to do this. In this example, we use k-fold cross validation.
End of explanation
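For a programmatic summary (a small addition using the arrays computed above), we can also report the order with the lowest cross-validation error; as discussed below, the flat minimum means a simpler nearby order is usually preferred:
best_order = ordervec[np.argmin(error)]
print 'order with the lowest cross-validation error:', best_order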
order = 4
w = createpoly(xdata, fdata, order)
nhat = 200
xhat = np.linspace(0, 1, nhat)
fhat = evalpoly(xhat, w)
plt.figure()
plt.plot(xdata, fdata, 'o')
plt.plot(xhat, fhat, '--')
plt.ylim([-10, 20])
plt.xlabel('x')
plt.ylabel('f')
Explanation: Both plots are the same, but the axis is smaller on the bottom one because the error blows up quickly. Notice that now, instead of the error continually decreasing, it decreases for a while and then increases as we run into problems with overfitting. Generally, we like to choose the simplest model that gives reasonable error. The curve is pretty flat near the minimum, and in this case a good point is somewhere around a 4th order polynomial.
End of explanation
<END_TASK> |
37 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Composites simulation
Step1: We need to import here the data, modify them if needed and proceed | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from simmit import smartplus as sim
import os
dir = os.path.dirname(os.path.realpath('__file__'))
Explanation: Composites simulation: plastic plies
End of explanation
umat_name = 'MIPLN' # the 5-character code for periodic homogenization of a multilayered composite
nstatev = 0 #The number of scalar variables required, everything is stored in sub-phases statevs
rho = 1.12 #The density of the material (overall)
c_p = 1.64 #The specific heat capacity (overall)
nphases = 2 #The number of phases
num_file = 0 #The num of the file that contains the subphases
psi_rve = 0.
theta_rve = 0.
phi_rve = 0.
props = np.array([nphases, num_file])
NPhases_file = dir + '/data/Nlayers0.dat'
NPhases = pd.read_csv(NPhases_file, delimiter=r'\s+', index_col=False, engine='python')
NPhases[::]
path_data = 'data'
path_results = 'results'
pathfile = 'path.txt'
outputfile = 'results_PLN.txt'
sim.solver(umat_name, props, nstatev, psi_rve, theta_rve, phi_rve, rho, c_p, path_data, path_results, pathfile, outputfile)
outputfile_macro = dir + '/' + path_results + '/results_PLN_global-0.txt'
outputfile_micro1 = dir + '/' + path_results + '/results_PLN_global-0-0.txt'
outputfile_micro2 = dir + '/' + path_results + '/results_PLN_global-0-1.txt'
fig = plt.figure()
e11, e22, e33, e12, e13, e23, s11, s22, s33, s12, s13, s23 = np.loadtxt(outputfile_macro, usecols=(8,9,10,11,12,13,14,15,16,17,18,19), unpack=True)
plt.grid(True)
plt.plot(e11,s11, c='black')
e11, e22, e33, e12, e13, e23, s11, s22, s33, s12, s13, s23 = np.loadtxt(outputfile_micro1, usecols=(8,9,10,11,12,13,14,15,16,17,18,19), unpack=True)
plt.grid(True)
plt.plot(e11,s11, c='red')
e11, e22, e33, e12, e13, e23, s11, s22, s33, s12, s13, s23 = np.loadtxt(outputfile_micro2, usecols=(8,9,10,11,12,13,14,15,16,17,18,19), unpack=True)
plt.grid(True)
plt.plot(e11,s11, c='blue')
plt.xlabel('Strain')
plt.ylabel('Stress (MPa)')
plt.show()
Explanation: Here we import the data, modify it if needed, and proceed.
End of explanation
<END_TASK> |
38 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Original url
Step1: For the record, we should mention that there exist many other libraries in Python to parse XML, such as minidom or BeautifulSoup which is an interesting library, when you intend to scrape data from the web. While these might come with more advanced bells and whistles than lxml, they can also be more complex to use, which is why we stick to lxml in this course. Let us now import our sonnet in Python, which has been saved in the file sonnet18.xml
Step2: Python has now read and parsed our xml-file via the etree.parse() function. We have stored our XML tree structure, which is returned by the parse() function, in the tree variable, so that we can access it later. If we print tree as such, we don't get a lot of useful information. To have a closer look at the XML in a printable text version, we need to call the tostring() method on the tree before printing it.
Step3: You'll notice that we actually get a string in a raw format
Step4: If we have more complex data, it might also be to set the pretty_print parameter to True, to obtain a more beautifully formatted string, with Python taking care of indendation etc. In our example, it doesn't change much
Step5: Now let us start processing the contents of our file. Suppose that we are not really interested in the full hierarchical structure of our file, but just in the rhyme words occuring in it. The high-level function interfind() allows us to easily select all rhyme-element in our tree, regardless of where exactly they occur. Because this functions returns a list of nodes, we can simply loop over them
Step6: Note that the search expression ("//rhyme") has two forward slashes before our actual search term. This is in fact XPath syntax, and the two slashes indicate that the search term can occur anywhere (e.g. not necessarily among a node's direct children). Unfortunately, printing the nodes themselves again isn't really insightful
Step7: To extract the actual rhyme word contained in the element, we can use the .text property of the nodes
Step8: That looks better!
Just now, we have been iterating over our rhyme elements in simple order of appearance
Step9: We can access the value of the attributes of an element via .attrib, just like we would access the information in a Python dictionary, that is via key-based indexing. We know that our sonnet element, for instance, should have an author and year attribute. We can inspect the value of these as follows
Step10: If we wouldn't know which attributes were in fact available for a node, we could also retrieve the attribute names by calling keys() on the attributes property of a node, just like we would do with a regular dictionary
Step11: So far so good. Now that we have selected our root element, we can start drilling down our tree's structure. Let us first find out how many child nodes our root element has
Step12: Our root node turns out to have 15 child nodes, which makes a lot of sense, since we have 14 line elements and the volta. We can actually loop over these children, just as we would loop over any other list
Step13: To extract the actual text in our lines, we need one additional for-loop which will allow us to iteratre over the pieces of text under each line
Step14: Note that we get an empty line at the volta, since there isn't any actual text associated with this empty tag.
Quiz!
Could you now write your own code, which iterates over the lines in our tree and prints out the line number based on the n attribute of the line element?
Step15: Manipulating XML in Python
So far, we have parsed XML in Python, we haven't dealt with creating or manipulating XML in Python. Luckily, adapting or creating XML is fairly straightforward in Python. Let's first try and change the author's name in the author attribute of the sonnet. Because this boils down to manipulating a Python dictionary, the syntax should already be familiar to you
Step16: That was easy, wasn't it? Did you see we can just add new attributes to an element? Just take care only to put strings as attribute values
Step17: Adding whole elements is fairly easy too. Let's add a single dummy element (<break/>) to indicate a line break at the end of each line. Importantly, we have to create this element inside our loop, before we can add it
Step18: You'll notice that we actually created an empty <break/> tag. Now, let's add it add the end of each line
Step19: Adding an element with actual content is just as easy by the way
Step20: Quiz
The <break/> element is still empty
Step21: Python for TEI
In Digital Humanities, you hear a lot about the TEI nowadays, or the Text Encoding Initiative (tei-c.org). The TEI refers to an initiative which has developed a highly influential "dialect" of XML for encoding texts in the Humanities. The beauty about XML is that tag names aren't predefined and you can invent your own tag and attributes. Our Shakepearean example could just have well have read
Step22: Quiz
Processing TEI in Python, is really just processing XML in Python, the dark art which you already learned to master above! Let's try and practice the looping techniques we introduced above. Could you provide code which parses the xml and writes away the lines in this poem to a plain text file, with one verse line on a single line in the new file?
Step23: A hands-on case study
Step24: OK
Step25: As you can see, we have made you a nice subset selection of this data, containing only texts by the famous pair of brothers
Step26: | <ASSISTANT_TASK:>
Python Code:
from lxml import etree
Explanation: Original url: https://github.com/mikekestemont/ghent1516/blob/master/Chapter%208%20-%20Parsing%20XML.ipynb
XML in a nutshell
So far, we have primarily dealt with unstructured data in this course: we have learned to read, for example, the contents of plain text files in the previous chapters. Such raw textual data is often called 'unstructured', because it lacks annotations that make explicit the function or meaning of the words in the documents. If we read the contents of a play as a plain text, for instance, we don't have a clue to which scene or act a particular utterance belongs, or by which character the utterance was made. Nowadays, it is therefore increasingly common to add annotations to a text that give us a better insight into the
semantics and structure of the data. Adding annotations to texts (e.g. scholarly editions of Shakespeare) is typically done using some form of markup. Various markup languages exist that allow us to provide structured and unambiguous annotations to a (digital) text. XML, or the "eXtensible Mark-up Language", is currently one of the dominant standards to encode texts in the Digital Humanities. In this chapter, we'll assume that you have at least some notion of XML, although we will have a quick refresher below. XML is a pretty straightforward mark-up language: let's have a look at Shakespeare's well-known sonnet 18 encoded in XML (you can find this poem also as sonnet.txt in your data/TEI folder).
```xml
<?xml version="1.0"?>
<sonnet author="William Shakepeare" year="1609">
<line n="1">Shall I compare thee to a summer's <rhyme>day</rhyme>?</line>
<line n="2">Thou art more lovely and more <rhyme>temperate</rhyme>:</line>
<line n="3">Rough winds do shake the darling buds of <rhyme>May</rhyme><break n="3"/>,</line>
<line n="4">And summer's lease hath all too short a <rhyme>date</rhyme>:</line>
<line n="5">Sometime too hot the eye of heaven <rhyme>shines</rhyme>,</line>
<line n="6">And often is his gold complexion <rhyme>dimm'd</rhyme>;</line>
<line n="7">And every fair from fair sometime <rhyme>declines</rhyme>,</line>
<line n="8">By chance, or nature's changing course, <rhyme>untrimm'd</rhyme>;</line>
<volta/>
<line n="9">But thy eternal summer shall not <rhyme>fade</rhyme></line>
<line n="10">Nor lose possession of that fair thou <rhyme>ow'st</rhyme>;</line>
<line n="11">Nor shall Death brag thou wander'st in his <rhyme>shade</rhyme>,</line>
<line n="12">When in eternal lines to time thou <rhyme>grow'st</rhyme>;</line>
<line n="13">So long as men can breathe or eyes can <rhyme>see</rhyme>,</line>
<line n="14">So long lives this, and this gives life to <rhyme>thee</rhyme>.</line>
</sonnet>
```
The first line in our Shakespearean example (<?xml version="1.0"?>) declares which exact version of XML we are using, in our case version 1. As you can see at a glance, XML typically encodes pieces of text using start tags (e.g. <line>, <rhyme>) and end tags (</line>, </rhyme>). Each start tag must correspond to exactly one end tag, although XML does allow for "solo" elements such as the <volta/> tag after line 8 in this example. Interestingly, XML tags are not allowed to overlap. The following line would therefore not constitute valid XML:
xml
<line><sentence>This is a </line><line>sentence.</sentence></line>
The following two lines would be valid alternatives for this example, because here the tags don't overlap:
xml
<sentence><line>This is a </line><line>sentence.</line></sentence>
<sentence>This is a <linebreak/>sentence.</sentence>
This limitation has to do with the fact that XML is a hierarchical markup language: it assumes that we can describe a text document as a tree of branching nodes. In this tree, elements cannot have more than one direct parent element (otherwise the hierarchy would be ambiguous). The one exception is the so-called root element, which as the highest node in the tree does not have a parent element itself. Logically speaking, all this entails that a valid XML document can only have a single root element. Note that all non-root elements can have as many siblings as we wish. All the <line> elements in our sonnet, for example, are siblings, in the sense that they have in common a direct parent element, i.e. the <sonnet> tag. Finally, note that we can add extra information to our elements using so-called attributes. The n attribute, for example, gives us the line number for each line in the sonnet, surrounded by double quotation marks. The <sonnet> element illustrates that we can add as many attributes as we want to a tag.
XML and Python
Researchers in the Digital Humanities nowadays often put a lot of time and effort in creating digital data sets for their research, such as scholarly editions with a rich markup encoded in XML. Nevertheless, once this data has been annotated, it can be tricky to get your texts out again, so to speak, and fully exploit the information which you painstakingly encoded. Therefore, it is crucial to be able to parse XML in an efficient manner. Luckily, Python provides all the tools necessary to do this. We will make use of the lxml library, which is part of the Anaconda Python distribution:
End of explanation
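As a quick illustrative aside (not in the original chapter): lxml can also parse XML directly from a Python string with etree.fromstring(), which is handy for small experiments. The snippet below uses a made-up one-line example:

```python
# Minimal sketch: parsing XML from a string instead of a file.
snippet = '<line n="1">Shall I compare thee to a summer\'s <rhyme>day</rhyme>?</line>'
line = etree.fromstring(snippet)
print(line.tag, line.attrib["n"])  # -> line 1
```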
tree = etree.parse("data/TEI/sonnet18.xml")
print(tree)
Explanation: For the record, we should mention that there exist many other libraries in Python to parse XML, such as minidom or BeautifulSoup, which is an interesting library when you intend to scrape data from the web. While these might come with more advanced bells and whistles than lxml, they can also be more complex to use, which is why we stick to lxml in this course. Let us now import our sonnet in Python, which has been saved in the file sonnet18.xml:
End of explanation
print(etree.tostring(tree))
Explanation: Python has now read and parsed our xml-file via the etree.parse() function. We have stored our XML tree structure, which is returned by the parse() function, in the tree variable, so that we can access it later. If we print tree as such, we don't get a lot of useful information. To have a closer look at the XML in a printable text version, we need to call the etree.tostring() function on the tree before printing it.
End of explanation
print(etree.tostring(tree).decode())
Explanation: You'll notice that we actually get a string in a raw format: if we want to display it properly, we have to decode it:
End of explanation
print(etree.tostring(tree, pretty_print=True).decode())
Explanation: If we have more complex data, it might also be useful to set the pretty_print parameter to True, to obtain a more beautifully formatted string, with Python taking care of indentation etc. In our example, it doesn't change much:
End of explanation
print(len(list(tree.iterfind("//rhyme"))))
for node in tree.iterfind("//rhyme"):
    print(node)
Explanation: Now let us start processing the contents of our file. Suppose that we are not really interested in the full hierarchical structure of our file, but just in the rhyme words occurring in it. The high-level function iterfind() allows us to easily select all rhyme elements in our tree, regardless of where exactly they occur. Because this function returns the matching nodes one by one, we can simply loop over them:
End of explanation
for node in tree.iterfind("//rhyme"):
    print(node.tag)
Explanation: Note that the search expression ("//rhyme") has two forward slashes before our actual search term. This is in fact XPath syntax, and the two slashes indicate that the search term can occur anywhere (e.g. not necessarily among a node's direct children). Unfortunately, printing the nodes themselves again isn't really insightful: in this way, we only get rather prosaic information about the Python objects holding our rhyme nodes. We can use the .tag property to print the tag's name:
End of explanation
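A small optional sketch contrasting the two kinds of search paths on the tree parsed above: a bare tag name in findall() only matches direct children of the node, whereas the .// prefix (like // in iterfind()) searches the whole subtree:

```python
root = tree.getroot()
print(len(root.findall("line")))      # direct children of <sonnet> named <line>
print(len(root.findall(".//rhyme")))  # <rhyme> elements anywhere below the root
# both print 14 for the sonnet shown above
```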
for node in tree.iterfind("//rhyme"):
    print(node.text)
Explanation: To extract the actual rhyme word contained in the element, we can use the .text property of the nodes:
End of explanation
root_node = tree.getroot()
print(root_node.tag)
Explanation: That looks better!
Just now, we have been iterating over our rhyme elements in simple order of appearance: we haven't really been exploiting the hierarchy of our XML file yet. Let's see now how we can navigate our XML tree. Let's first select our root node: there's a function for that!
End of explanation
print(root_node.attrib["author"])
print(root_node.attrib["year"])
Explanation: We can access the value of the attributes of an element via .attrib, just like we would access the information in a Python dictionary, that is via key-based indexing. We know that our sonnet element, for instance, should have an author and year attribute. We can inspect the value of these as follows:
End of explanation
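If an attribute might be missing, indexing attrib raises a KeyError; as a small aside, the get() method with a default value is a safer alternative:

```python
print(root_node.get("author"))             # same as root_node.attrib["author"]
print(root_node.get("editor", "unknown"))  # attribute absent, so the default is returned
```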
for key in root_node.attrib.keys():
print(root_node.attrib[key])
Explanation: If we didn't know which attributes were in fact available for a node, we could also retrieve the attribute names by calling keys() on the attributes property of a node, just like we would do with a regular dictionary:
End of explanation
print(len(root_node))
Explanation: So far so good. Now that we have selected our root element, we can start drilling down our tree's structure. Let us first find out how many child nodes our root element has:
End of explanation
for node in root_node:
print(node.tag)
Explanation: Our root node turns out to have 15 child nodes, which makes a lot of sense, since we have 14 line elements and the volta. We can actually loop over these children, just as we would loop over any other list:
End of explanation
for node in root_node:
if node.tag != "volta":
line_text = ""
for text in node.itertext():
line_text = line_text + text
print(line_text)
else:
print("=== Volta found! ===")
Explanation: To extract the actual text in our lines, we need one additional for-loop which will allow us to iterate over the pieces of text under each line:
End of explanation
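As an optional shortcut, the inner loop can be replaced by str.join(), since itertext() yields all text fragments under a node:

```python
# Sketch: same output as the loop above, written more compactly.
for node in root_node:
    if node.tag == "line":
        print("".join(node.itertext()))
```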
for node in root_node:
if node.tag == "line":
print(node.attrib["n"])
Explanation: Note that we get an empty line at the volta, since there isn't any actual text associated with this empty tag.
Quiz!
Could you now write your own code, which iterates over the lines in our tree and prints out the line number based on the n attribute of the line element?
End of explanation
root_node = tree.getroot()
root_node.attrib["author"] = "J.K. Rowling"
root_node.attrib["year"] = "2015"
root_node.attrib["new_element"] = "dummy string!"
root_node.attrib["place"] = "maynooth"
print(etree.tostring(root_node).decode())
Explanation: Manipulating XML in Python
So far, we have parsed XML in Python, but we haven't dealt with creating or manipulating XML yet. Luckily, adapting or creating XML is fairly straightforward in Python. Let's first try and change the author's name in the author attribute of the sonnet. Because this boils down to manipulating a Python dictionary, the syntax should already be familiar to you:
End of explanation
root_node.attrib["year"] = "2015"
Explanation: That was easy, wasn't it? Did you see we can just add new attributes to an element? Just take care only to put strings as attribute values: since we are working with XML, Python won't accept e.g. numbers and you will get an error:
End of explanation
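The usual workaround, shown here as a small sketch, is simply to convert the value to a string first:

```python
root_node.attrib["year"] = str(2015)  # fine: attribute values must be strings
print(root_node.attrib["year"])
```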
break_el = etree.Element("break")
break_el.attrib["author"] = "Mike"
print(etree.tostring(break_el).decode())
Explanation: Adding whole elements is fairly easy too. Let's add a single dummy element (<break/>) to indicate a line break at the end of each line. Importantly, we have to create this element inside our loop, before we can add it:
End of explanation
for node in tree.iterfind("//line"):
break_el = etree.Element("break")
node.append(break_el)
print(etree.tostring(tree).decode())
Explanation: You'll notice that we actually created an empty <break/> tag. Now, let's add it add the end of each line:
End of explanation
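If you want to keep the modified tree, it can be serialised back to disk; a minimal sketch (the output filename is made up for illustration):

```python
tree.write("sonnet18_with_breaks.xml", pretty_print=True,
           xml_declaration=True, encoding="utf-8")
```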
break_el = etree.Element("break")
print(etree.tostring(break_el).decode())
break_el.text = "XXX"
print(etree.tostring(break_el).decode())
Explanation: Adding an element with actual content is just as easy by the way:
End of explanation
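For completeness, a sketch of etree.SubElement(), which creates a child element and attaches it to its parent in one step, so no separate append() call is needed (the <note> tag is just an example):

```python
note = etree.SubElement(root_node, "note")
note.text = "added with SubElement"
print(etree.tostring(note).decode())
```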
tree = etree.parse("data/TEI/sonnet18.xml")
root_node = tree.getroot()
for node in root_node:
if node.tag == "line":
v = node.attrib["n"]
break_el = etree.Element("break")
break_el.attrib["n"] = v
node.append(break_el)
print(etree.tostring(tree).decode())
Explanation: Quiz
The <break/> element is still empty: could you add to it an n attribute, to which you assign the line number from the current <line> element?
End of explanation
tree = etree.parse("data/TEI/sonnet17.xml")
print(etree.tostring(tree).decode())
Explanation: Python for TEI
In Digital Humanities, you hear a lot about the TEI nowadays, or the Text Encoding Initiative (tei-c.org). The TEI refers to an initiative which has developed a highly influential "dialect" of XML for encoding texts in the Humanities. The beauty about XML is that tag names aren't predefined and you can invent your own tag and attributes. Our Shakepearean example could just have well have read:
<?xml version="1.0"?>
<poem writer="William Shakepeare" date="1609">
<l nr="1">Shall I compare thee to a summer's <last>day</last>?</l>
<l nr="2">Thou art more lovely and more <last>temperate</last>:</l>
<l nr="3">Rough winds do shake the darling buds of <last>May</last>,</l>
<l nr="4">And summer's lease hath all too short a <last>date</last>:</l>
<l nr="5">Sometime too hot the eye of heaven <last>shines</last>,</l>
<l nr="6">And often is his gold complexion <last>dimm'd</last>;</l>
<l nr="7">And every fair from fair sometime <last>declines</last>,</l>
<l nr="8">By chance, or nature's changing course, <last>untrimm'd</last>;</l>
<break/>
<l nr="9">But thy eternal summer shall not <last>fade</last></l>
<l nr="10">Nor lose possession of that fair thou <last>ow'st</last>;</l>
<l nr="11">Nor shall Death brag thou wander'st in his <last>shade</last>,</l>
<l nr="12">When in eternal lines to time thou <last>grow'st</last>;</l>
<l nr="13">So long as men can breathe or eyes can <last>see</last>,</l>
<l nr="14">So long lives this, and this gives life to <last>thee</last>.</l>
</poem>
As you can see, all the tag and attribute names are different in this version, but the essential structure is still the same. You could therefore say that XML is a markup language which provides a syntax to talk about texts, but does not come with a default semantics. This freedom in choosing tag names etc. can also be a bit daunting: this is why the TEI provides Guidelines as to how tag names etc. can be used to mark up specific phenomena in texts. The TEI therefore also refers to a rather bulky set of guidelines as to which tags could be used to properly encode a text. Below, we read in a fairly advanced example of Shakespeare's 17th sonnet encoded in TEI (note the use of the <TEI> tag as our root node!). Even the metrical structure has been encoded, as you will see, so this can be considered an example of "TEI on steroids".
End of explanation
# add your parsing code here...
Explanation: Quiz
Processing TEI in Python is really just processing XML in Python, the dark art which you already learned to master above! Let's try and practice the looping techniques we introduced above. Could you provide code which parses the XML and writes the lines in this poem to a plain text file, with one verse line per line in the new file?
End of explanation
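One possible sketch of a solution, assuming the verse lines in this TEI file are tagged <l> (as in the alternative encoding shown earlier) and allowing for a TEI namespace; the output filename is made up:

```python
tree = etree.parse("data/TEI/sonnet17.xml")
with open("sonnet17.txt", "w") as outfile:
    for node in tree.iter():
        # compare local tag names so a namespace declaration does not get in the way
        if isinstance(node.tag, str) and node.tag.split("}")[-1] == "l":
            outfile.write("".join(node.itertext()).strip() + "\n")
```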
import os
dirname = "data/TEI/french_plays/"
for filename in os.listdir(dirname):
if filename.endswith(".xml"):
print(filename)
Explanation: A hands-on case study: French plays
OK, it's time to get your hands even more dirty. For textual analyses, there are a number of great datasets out there which have been encoded in rich XML. One excellent resource which we have recently worked with can be found at theatre-classique.fr: this website holds an extensive collection of French plays from the time of the Classical and Enlightenment era in France. Some of the plays have been authored by some of France's finest authors, such as Molière or Pierre and Thomas Corneille. What is interesting about this resource is that it provides a very rich XML markup: apart from extensive metadata on the play or detailed descriptions of the actors involved, the actual lines have been encoded in such a manner that we know exactly which character uttered a particular line, or to which scene or act a line belongs. This allows us to perform much richer textual analyses than if we only had a raw text version of the plays. We have collected a subset of these plays for you under the data/TEI directory:
End of explanation
import os
dirname="code/data/"
for filename in os.listdir(dirname):
if filename.endswith(".xml"):
print("*****")
print("\t-", filename)
        tree = etree.parse(dirname + filename)
author_element = tree.find("//author") # find vs iterfind!
print("\t-", author_element.text)
title_element = tree.find("//title")
print("\t-", title_element.text)
Explanation: OK: under this directory, we appear to have a bunch of XML files, but their titles are just numbers, which doesn't tell us a lot. Let's have a look at what the title and author tags in these files contain:
End of explanation
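Building on the loop above, a small sketch that groups the play titles per author (with a guard for files where the lookup returns None), which is a convenient starting point for the exercises below:

```python
from collections import defaultdict

plays_per_author = defaultdict(list)
for filename in os.listdir(dirname):
    if filename.endswith(".xml"):
        tree = etree.parse(dirname + filename)
        author = tree.find("//author")
        title = tree.find("//title")
        if author is not None and title is not None:
            plays_per_author[author.text].append(title.text)

for author, titles in plays_per_author.items():
    print(author, "->", len(titles), "plays")
```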
# your code goes here
Explanation: As you can see, we have made you a nice subset selection of this data, containing only texts by the famous pair of brothers: Pierre and Thomas Corneille. We have provided a number of exercises in which you can practice your newly developed XML skills. In each of the fun little tasks below, you should compare the dramas of our two famous brothers:
* how many characters does each brother on average stage in a play?
* which brother has the highest vocabulary richness?
* which brother uses the lengthiest speeches per character on average?
* which brother gives most "speech time" to women, expressed in number of words (hint: you can derive a character's gender from the <castList> in most plays!)
End of explanation
# from IPython.core.display import HTML
# def css_styling():
# styles = open("styles/custom.css", "r").read()
# return HTML(styles)
# css_styling()
Explanation:
End of explanation
<END_TASK> |
39 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Errors and Exceptions Homework - Solution
Problem 1
Handle the exception thrown by the code below by using try and except blocks.
Step1: Problem 2
Handle the exception thrown by the code below by using try and except blocks. Then use a finally block to print 'All Done.'
Step2: Problem 3
Write a function that asks for an integer and prints the square of it. Use a while loop with a try,except, else block to account for incorrect inputs. | <ASSISTANT_TASK:>
Python Code:
try:
for i in ['a','b','c']:
print i**2
except:
print "An error ocurred!"
Explanation: Errors and Exceptions Homework - Solution
Problem 1
Handle the exception thrown by the code below by using try and except blocks.
End of explanation
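As an optional variation (not part of the original exercise), catching the specific exception type documents what you expect to go wrong instead of using a bare except:

```python
try:
    for i in ['a', 'b', 'c']:
        print(i**2)
except TypeError:
    print("A TypeError occurred: strings cannot be raised to a power!")
```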
x = 5
y = 0
try:
z = x/y
except ZeroDivisionError:
print "Can't divide by Zero!"
finally:
print 'All Done!'
Explanation: Problem 2
Handle the exception thrown by the code below by using try and except blocks. Then use a finally block to print 'All Done.'
End of explanation
def ask():
while True:
try:
n = input('Input an integer: ')
except:
            print 'An error occurred! Please try again!'
continue
else:
break
    print 'Thank you, your number squared is: ', n**2
ask()
Explanation: Problem 3
Write a function that asks for an integer and prints the square of it. Use a while loop with a try,except, else block to account for incorrect inputs.
End of explanation
<END_TASK> |
40 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Exploring the Training Set
Author(s)
Step2: Eyes on the data!
Step4: Check out the colors at rapidtables.com/web/color/RGB_Color, but don't forget to flip order of the channels to BGR.
Step5: Do some sanity checks
For example
Step7: Get labels
Extract labels from the filename and create a pretty dataframe for analysis.
Step8: Create DataFrame
Step11: Basic Feature Engineering
Below, we show an example of a very simple set of features that can be derived from an image. This function simply pulls the mean, standard deviation, min, and max of pixel values in one image band (red, green, or blue)
Step13: These coarse features look pretty bad individually. Most of this is due to features capturing absolute pixel values. But photo lighting could vary significantly between different image shots. What we end up with is a lot of noise.
Are there some better feature detectors we can consider? Why yes, there are! Several common features involve finding corners in pictures, and looking for pixel gradients (differences in pixel values between neighboring pixels in different directions).
Harris Corner Detector
The following snippet runs code to visualize harris corner detection for a few sample images. Configuring the threshold determines how strong of a signal we need to determine if a pixel corresponds to a corner (high pixel gradients in all directions).
Note that because a Harris corner detector returns another image map with values corresponding to the likelihood of a corner at that pixel, it can also be fed into general_img_features() to extract additional features. What do you notice about corners on cat images? | <ASSISTANT_TASK:>
Python Code:
# Enter your username:
YOUR_GMAIL_ACCOUNT = '******' # Whatever is before @gmail.com in your email address
# Libraries for this section:
import os
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import pandas as pd
import cv2
import warnings
warnings.filterwarnings('ignore')
# Grab the filenames:
TRAINING_DIR = os.path.join('/home', YOUR_GMAIL_ACCOUNT, 'data/training_small/')
files = os.listdir(TRAINING_DIR) # Grab all the files in the VM images directory
print(files[0:5]) # Let's see some filenames
Explanation: Exploring the Training Set
Author(s): kozyr@google.com, bfoo@google.com
In this notebook, we gather exploratory data from our training set to do feature engineering and model tuning. Before running this notebook, make sure that:
You have already run steps 2 and 3 to collect and split your data into training, validation, and test.
Your training data is in a Google storage folder such as gs://[your-bucket]/[dataprep-dir]/training_images/
In the spirit of learning to walk before learning to run, we'll write this notebook in a more basic style than you'll see in a professional setting.
Setup
TODO for you: In Screen terminal 1 (to begin with Screen in the VM, first type
screen and Ctrl+a c), go to the VM shell and type Ctrl+a 1,
create a folder to store your training and debugging images, and then copy a small
sample of training images from Cloud Storage:
mkdir -p ~/data/training_small
gsutil -m cp gs://$BUCKET/catimages/training_images/000*.png ~/data/training_small/
gsutil -m cp gs://$BUCKET/catimages/training_images/001*.png ~/data/training_small/
mkdir -p ~/data/debugging_small
gsutil -m cp gs://$BUCKET/catimages/training_images/002*.png ~/data/debugging_small
echo "done!"
Note that we only take the images starting with those IDs to limit the total number we'll copy over to under 3 thousand images.
End of explanation
def show_pictures(filelist, dir, img_rows=2, img_cols=3, figsize=(20, 10)):
Display the first few images.
Args:
filelist: list of filenames to pull from
dir: directory where the files are stored
img_rows: number of rows of images to display
img_cols: number of columns of images to display
figsize: sizing for inline plots
Returns:
None
plt.close('all')
fig = plt.figure(figsize=figsize)
for i in range(img_rows * img_cols):
a=fig.add_subplot(img_rows, img_cols,i+1)
img = mpimg.imread(os.path.join(dir, filelist[i]))
plt.imshow(img)
plt.show()
show_pictures(files, TRAINING_DIR)
Explanation: Eyes on the data!
End of explanation
# What does the actual image matrix look like? There are three channels:
img = cv2.imread(os.path.join(TRAINING_DIR, files[0]))
print('\n***Colors in the middle of the first image***\n')
print('Blue channel:')
print(img[63:67,63:67,0])
print('Green channel:')
print(img[63:67,63:67,1])
print('Red channel:')
print(img[63:67,63:67,2])
def show_bgr(filelist, dir, img_rows=2, img_cols=3, figsize=(20, 10)):
Make histograms of the pixel color matrices of first few images.
Args:
filelist: list of filenames to pull from
dir: directory where the files are stored
img_rows: number of rows of images to display
img_cols: number of columns of images to display
figsize: sizing for inline plots
Returns:
None
plt.close('all')
fig = plt.figure(figsize=figsize)
color = ('b','g','r')
for i in range(img_rows * img_cols):
a=fig.add_subplot(img_rows, img_cols, i + 1)
img = cv2.imread(os.path.join(TRAINING_DIR, files[i]))
for c,col in enumerate(color):
histr = cv2.calcHist([img],[c],None,[256],[0,256])
plt.plot(histr,color = col)
plt.xlim([0,256])
plt.ylim([0,500])
plt.show()
show_bgr(files, TRAINING_DIR)
Explanation: Check out the colors at rapidtables.com/web/color/RGB_Color, but don't forget to flip order of the channels to BGR.
End of explanation
# Pull in blue channel for each image, reshape to vector, count unique values:
unique_colors = []
landscape = []
for f in files:
img = np.array(cv2.imread(os.path.join(TRAINING_DIR, f)))[:,:,0]
# Determine if landscape is more likely than portrait by comparing
#amount of zero channel in 3rd row vs 3rd col:
landscape_likely = (np.count_nonzero(img[:,2]) > np.count_nonzero(img[2,:])) * 1
# Count number of unique blue values:
col_count = len(set(img.ravel()))
# Append to array:
unique_colors.append(col_count)
landscape.append(landscape_likely)
unique_colors = pd.DataFrame({'files': files, 'unique_colors': unique_colors,
'landscape': landscape})
unique_colors = unique_colors.sort_values(by=['unique_colors'])
print(unique_colors[0:10])
# Plot the pictures with the lowest diversity of unique color values:
suspicious = unique_colors['files'].tolist()
show_pictures(suspicious, TRAINING_DIR, 1)
Explanation: Do some sanity checks
For example:
* Do we have blank images?
* Do we have images with very few colors?
End of explanation
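A possible follow-up sketch: flag files whose blue channel contains a single value (likely blank or corrupted frames) so they can be excluded before feature extraction:

```python
blank_candidates = unique_colors[unique_colors['unique_colors'] <= 1]['files'].tolist()
print('Candidate blank images:', blank_candidates)
files_clean = [f for f in files if f not in set(blank_candidates)]
print('Images kept:', len(files_clean))
```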
def get_label(str):
Split out the label from the filename of the image, where we stored it.
Args:
str: filename string.
Returns:
label: an integer 1 or 0
split_filename = str.split('_')
label = int(split_filename[-1].split('.')[0])
return(label)
# Example:
get_label('12550_0.1574_1.png')
Explanation: Get labels
Extract labels from the filename and create a pretty dataframe for analysis.
End of explanation
df = unique_colors[:]
df['label'] = df['files'].apply(lambda x: get_label(x))
df['landscape_likely'] = df['landscape']
df = df.drop(['landscape', 'unique_colors'], axis=1)
df[:10]
Explanation: Create DataFrame
End of explanation
def general_img_features(band):
Define a set of features that we can look at for each color band
Args:
band: array which is one of blue, green, or red
Returns:
features: unique colors, nonzero count, mean, standard deviation,
min, and max of the channel's pixel values
return [len(set(band.ravel())), np.count_nonzero(band),
np.mean(band), np.std(band),
band.min(), band.max()]
def concat_all_band_features(file, dir):
Extract features from a single image.
Args:
file - single image filename
dir - directory where the files are stored
Returns:
features - descriptive statistics for pixels
img = cv2.imread(os.path.join(dir, file))
features = []
blue = np.float32(img[:,:,0])
green = np.float32(img[:,:,1])
red = np.float32(img[:,:,2])
features.extend(general_img_features(blue)) # indices 0-4
features.extend(general_img_features(green)) # indices 5-9
features.extend(general_img_features(red)) # indices 10-14
return features
# Let's see an example:
print(files[0] + '\n')
example = concat_all_band_features(files[0], TRAINING_DIR)
print(example)
# Apply it to our dataframe:
feature_names = ['blue_unique', 'blue_nonzero', 'blue_mean', 'blue_sd', 'blue_min', 'blue_max',
'green_unique', 'green_nonzero', 'green_mean', 'green_sd', 'green_min', 'green_max',
'red_unique', 'red_nonzero', 'red_mean', 'red_sd', 'red_min', 'red_max']
# Compute a series holding all band features as lists
band_features_series = df['files'].apply(lambda x: concat_all_band_features(x, TRAINING_DIR))
# Loop through lists and distribute them across new columns in the dataframe
for i in range(len(feature_names)):
df[feature_names[i]] = band_features_series.apply(lambda x: x[i])
df[:10]
# Are these features good for finding cats?
# Let's look at some basic correlations.
df.corr().round(2)
Explanation: Basic Feature Engineering
Below, we show an example of a very simple set of features that can be derived from an image. This function simply pulls the mean, standard deviation, min, and max of pixel values in one image band (red, green, or blue)
End of explanation
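An optional sketch that ranks the engineered features by the absolute value of their correlation with the label, which is easier to scan than the full correlation table shown above:

```python
label_corr = df.corr()['label'].drop('label').abs().sort_values(ascending=False)
print(label_corr.head(10))
```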
THRESHOLD = 0.05
def show_harris(filelist, dir, band=0, img_rows=4, img_cols=4, figsize=(20, 10)):
Display Harris corner detection for the first few images.
Args:
filelist: list of filenames to pull from
dir: directory where the files are stored
band: 0 = 'blue', 1 = 'green', 2 = 'red'
img_rows: number of rows of images to display
img_cols: number of columns of images to display
figsize: sizing for inline plots
Returns:
None
plt.close('all')
fig = plt.figure(figsize=figsize)
def plot_bands(src, band_img):
a=fig.add_subplot(img_rows, img_cols, i + 1)
dst = cv2.cornerHarris(band_img, 2, 3, 0.04)
dst = cv2.dilate(dst,None) # dilation makes the marks a little bigger
# Threshold for an optimal value, it may vary depending on the image.
new_img = src.copy()
new_img[dst > THRESHOLD * dst.max()]=[0, 0, 255]
# Note: openCV reverses the red-green-blue channels compared to matplotlib,
# so we have to flip the image before showing it
imgplot = plt.imshow(cv2.cvtColor(new_img, cv2.COLOR_BGR2RGB))
for i in range(img_rows * img_cols):
img = cv2.imread(os.path.join(dir, filelist[i]))
plot_bands(img, img[:,:,band])
plt.show()
show_harris(files, TRAINING_DIR)
Explanation: These coarse features look pretty bad individually. Most of this is due to features capturing absolute pixel values. But photo lighting could vary significantly between different image shots. What we end up with is a lot of noise.
Are there some better feature detectors we can consider? Why yes, there are! Several common features involve finding corners in pictures, and looking for pixel gradients (differences in pixel values between neighboring pixels in different directions).
Harris Corner Detector
The following snippet runs code to visualize harris corner detection for a few sample images. Configuring the threshold determines how strong of a signal we need to determine if a pixel corresponds to a corner (high pixel gradients in all directions).
Note that because a Harris corner detector returns another image map with values corresponding to the likelihood of a corner at that pixel, it can also be fed into general_img_features() to extract additional features. What do you notice about corners on cat images?
End of explanation
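Following the note above, a sketch of how the Harris response map could be turned into scalar features with the same general_img_features() helper (the harris_features name is made up for illustration):

```python
def harris_features(file, dir, band=0):
    # Summarise the Harris corner response of one colour band with the helper above.
    img = cv2.imread(os.path.join(dir, file))
    response = cv2.cornerHarris(np.float32(img[:, :, band]), 2, 3, 0.04)
    return general_img_features(response)

print(harris_features(files[0], TRAINING_DIR))
```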
<END_TASK> |
41 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2020 The TensorFlow Authors.
Step1: TF.Text Metrics
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: ROUGE-L
The Rouge-L metric is a score from 0 to 1 indicating how similar two sequences are, based on the length of the longest common subsequence (LCS). In particular, Rouge-L is the weighted harmonic mean (or f-measure) combining the LCS precision (the percentage of the hypothesis sequence covered by the LCS) and the LCS recall (the percentage of the reference sequence covered by the LCS).
Source
Step3: The hypotheses and references are expected to be tf.RaggedTensors of tokens. Tokens are required instead of raw sentences because no single tokenization strategy fits all tasks.
Now we can call text.metrics.rouge_l and get our result back
Step4: ROUGE-L has an additional hyperparameter, alpha, which determines the weight of the harmonic mean used for computing the F-Measure. Values closer to 0 treat Recall as more important and values closer to 1 treat Precision as more important. alpha defaults to .5, which corresponds to equal weight for Precision and Recall. | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install -q "tensorflow-text==2.8.*"
import tensorflow as tf
import tensorflow_text as text
Explanation: TF.Text Metrics
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/text/tutorials/text_similarity"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/text_similarity.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/tutorials/text_similarity.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/text/docs/tutorials/text_similarity.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
TensorFlow Text provides a collection of text-metrics-related classes and ops ready to use with TensorFlow 2.0. The library contains implementations of text-similarity metrics such as ROUGE-L, required for automatic evaluation of text generation models.
The benefit of using these ops in evaluating your models is that they are compatible with TPU evaluation and work nicely with TF streaming metric APIs.
Setup
End of explanation
hypotheses = tf.ragged.constant([['captain', 'of', 'the', 'delta', 'flight'],
['the', '1990', 'transcript']])
references = tf.ragged.constant([['delta', 'air', 'lines', 'flight'],
['this', 'concludes', 'the', 'transcript']])
Explanation: ROUGE-L
The Rouge-L metric is a score from 0 to 1 indicating how similar two sequences are, based on the length of the longest common subsequence (LCS). In particular, Rouge-L is the weighted harmonic mean (or f-measure) combining the LCS precision (the percentage of the hypothesis sequence covered by the LCS) and the LCS recall (the percentage of the reference sequence covered by the LCS).
Source: https://www.microsoft.com/en-us/research/publication/rouge-a-package-for-automatic-evaluation-of-summaries/
The TF.Text implementation returns the F-measure, Precision, and Recall for each (hypothesis, reference) pair.
Consider the following hypothesis/reference pair:
End of explanation
result = text.metrics.rouge_l(hypotheses, references)
print('F-Measure: %s' % result.f_measure)
print('P-Measure: %s' % result.p_measure)
print('R-Measure: %s' % result.r_measure)
Explanation: The hypotheses and references are expected to be tf.RaggedTensors of tokens. Tokens are required instead of raw sentences because no single tokenization strategy fits all tasks.
Now we can call text.metrics.rouge_l and get our result back:
End of explanation
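If you start from raw sentences instead, any tokenizer that returns a RaggedTensor will do; a sketch using simple whitespace tokenization (just one of many possible strategies) on the same example pair:

```python
tokenizer = text.WhitespaceTokenizer()
hyp_tokens = tokenizer.tokenize(['captain of the delta flight', 'the 1990 transcript'])
ref_tokens = tokenizer.tokenize(['delta air lines flight', 'this concludes the transcript'])
print(text.metrics.rouge_l(hyp_tokens, ref_tokens).f_measure)
```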
# Compute ROUGE-L with alpha=0
result = text.metrics.rouge_l(hypotheses, references, alpha=0)
print('F-Measure (alpha=0): %s' % result.f_measure)
print('P-Measure (alpha=0): %s' % result.p_measure)
print('R-Measure (alpha=0): %s' % result.r_measure)
# Compute ROUGE-L with alpha=1
result = text.metrics.rouge_l(hypotheses, references, alpha=1)
print('F-Measure (alpha=1): %s' % result.f_measure)
print('P-Measure (alpha=1): %s' % result.p_measure)
print('R-Measure (alpha=1): %s' % result.r_measure)
Explanation: ROUGE-L has an additional hyperparameter, alpha, which determines the weight of the harmonic mean used for computing the F-Measure. Values closer to 0 treat Recall as more important and values closer to 1 treat Precision as more important. alpha defaults to .5, which corresponds to equal weight for Precision and Recall.
End of explanation
<END_TASK> |
42 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<a href="https
Step1: Computing receptive field parameters of tf.keras.applications models.
Step2: Bonus stuff | <ASSISTANT_TASK:>
Python Code:
from google.colab import drive
drive.mount('/content/gdrive')
! mkdir gdrive/MyDrive/rf_keras
%cd gdrive/MyDrive/rf_keras
! ls
! git clone https://github.com/google-research/receptive_field.git
! ls
%cd receptive_field/
! ls
! pip install .
! pip install tensorflow
Explanation: <a href="https://colab.research.google.com/github/google-research/receptive_field/blob/master/RF_Keras_Applications.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2021 Google LLC.
SPDX-License-Identifier: Apache-2.0
Configuring Google Drive, cloning/installing repo and dependencies.
End of explanation
import tensorflow.compat.v1 as tf
import receptive_field as rf
# Example given here: InceptionV3.
g = tf.Graph()
with g.as_default():
tf.keras.backend.set_learning_phase(0) # Disable BN learning.
x = tf.keras.Input([None, None, 3], name='input_image')
model = tf.keras.applications.InceptionV3(input_tensor=x)
graph_def = g.as_graph_def()
input_node = 'input_image'
output_node = 'conv2d_85/Conv2D'
(receptive_field_x, receptive_field_y, effective_stride_x,
effective_stride_y, effective_padding_x, effective_padding_y) = (
rf.compute_receptive_field_from_graph_def(graph_def, input_node,
output_node))
print(receptive_field_x)
print(receptive_field_y)
print(effective_stride_x)
print(effective_stride_y)
print(effective_padding_x)
print(effective_padding_y)
Explanation: Computing receptive field parameters of tf.keras.applications models.
End of explanation
node_info, name_to_node = rf.get_compute_order(graph_def, input_node_name='input_image')
order_to_info = {}
for _, info in node_info.items():
order_to_info[info.order] = info
print(len(order_to_info.keys()))
for i in range(len(order_to_info.keys())):
print(order_to_info[i])
Explanation: Bonus stuff: parsing model graph, printing nodes, etc.
This can be helpful, for example, for finding out the exact node names to use.
End of explanation
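A small sketch building on the graph traversal above: filtering the GraphDef for Conv2D ops is a quick way to list candidate output_node names:

```python
conv_nodes = [node.name for node in graph_def.node if node.op == 'Conv2D']
print('Number of Conv2D nodes:', len(conv_nodes))
print(conv_nodes[:5])
```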
<END_TASK> |
43 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<br>
Weighted kernel density estimation to quickly reproduce the profile of a diffractometer
<br>
<br>
This example shows a work-around for a quick visualization of a diffractogram (similar to experimental powder diffractograms) from an ImageD11 ".flt" or ".new" columnfile containing peak information.
It is basically a probability density function (pdf) of the $2\theta$ position of the peak, which is weighted by the peak intensity.
<br>The smoothing of such gaussian kde is decided by the bandwidht value.
Weighted kde
Step1: Loading and visualizing the input data
Step2: Plotting the diffraction profile
Step3: The profile showed above is highly smoothed and the hkl peaks are merged.<br>
$\to$ A Smaller bandwidth should be used.
Choosing the right bandwidth of the estimator
The bandwidth can be passed as argument to the gaussian_kde() object or set afterward using the later set_badwidth() method. For example, the bandwidth can be reduced by a factor of 100 with respect to its previous value | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from ImageD11.columnfile import columnfile
from ImageD11 import weighted_kde as wkde
%matplotlib inline
plt.rcParams['figure.figsize'] = (6,4)
plt.rcParams['figure.dpi'] = 150
plt.rcParams['mathtext.fontset'] = 'cm'
plt.rcParams['font.size'] = 12
Explanation: <br>
Weighted kernel density estimation to quickly reproduce the profile of a diffractometer
<br>
<br>
This example shows a work-around for a quick visualization of a diffractogram (similar to experimental powder diffractograms) from an ImageD11 ".flt" or ".new" columnfile containing peak information.
It is basically a probability density function (pdf) of the $2\theta$ position of the peak, which is weighted by the peak intensity.
<br>The smoothing of such a gaussian kde is decided by the bandwidth value.
Weighted kde : The original Scipy gaussian kde was modified by Till Hoffmann to allow for heterogeneous sampling weights.
<br>
End of explanation
# read the peaks
flt = columnfile('sma_261N.flt.new')
# peaks indexed to phase 1
phase1 = flt.copy()
phase1.filter( phase1.labels > -1 )
# unindexed peaks (phase 2 + unindexed phase 1?)
phase2 = flt.copy()
phase2.filter( phase2.labels == -1 )
#plot radial transform for phase 1
plt.plot( phase1.tth_per_grain, phase1.eta_per_grain, 'x')
plt.xlabel( r'$ 2 \theta \, (\degree) $' )
plt.ylabel( r'$ \eta \, (\degree) $' )
plt.title( r'$Diffraction \, angles$' )
Explanation: Loading and visualizing the input data
End of explanation
# Probability density function (pdf) of 2theta
# weighted by the peak intensity and using default 2theta bandwidth
I_phase1 = phase1.sum_intensity * phase1.Lorentz_per_grain
pdf = wkde.gaussian_kde( phase1.tth_per_grain, weights = I_phase1)
# Plotting it over 2theta range
x = np.linspace( min(flt.tth), max(flt.tth), 500 )
y = pdf(x)
plt.plot(x, y)
plt.xlabel( r'$ 2 \theta \, (\degree) $' )
plt.ylabel( r'$ I $' )
plt.yticks([])
plt.title( ' With bandwidth = %.3f'%pdf.factor )
Explanation: Plotting the diffraction profile
End of explanation
pdf_phase1 = wkde.gaussian_kde( phase1.tth, weights = phase1.sum_intensity )
pdf_phase2 = wkde.gaussian_kde( phase2.tth, weights = phase2.sum_intensity )
frac_phase1 = np.sum( phase1.sum_intensity ) / np.sum( flt.sum_intensity )
frac_phase2 = np.sum( phase2.sum_intensity ) / np.sum( flt.sum_intensity )
from ipywidgets import interact
bw_range = ( 0.001, pdf_phase1.factor/3, 0.001)
@interact( bandwidth = bw_range)
def plot_pdf(bandwidth):
pdf_phase1.set_bandwidth(bandwidth)
pdf_phase2.set_bandwidth(bandwidth)
y_phase1 = pdf_phase1(x)
y_phase2 = pdf_phase2(x)
plt.plot( x, frac_phase1 * y_phase1, label = r'$Phase \, 1$' )
plt.plot( x, frac_phase2 * y_phase2, label = r'$Phase \, 2$' )
plt.legend(loc='best')
plt.xlabel( r'$ 2 \theta \, (\degree) $' )
plt.ylabel( r'$ I $' )
plt.yticks([])
plt.title( r'$ 3DXRD \, diffractogram $' )
Explanation: The profile shown above is highly smoothed and the hkl peaks are merged.<br>
$\to$ A Smaller bandwidth should be used.
Choosing the right bandwidth of the estimator
The bandwidth can be passed as an argument to the gaussian_kde() object or set afterwards using the set_bandwidth() method. For example, the bandwidth can be reduced by a factor of 100 with respect to its previous value:
Python
gaussian_kde().set_bandwidth( gaussian_kde().factor / 100 )
End of explanation
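For a static, non-interactive comparison, the same idea can be written as a loop over a few fixed bandwidths; a sketch reusing pdf_phase1 from above (the bandwidth values are arbitrary examples):

```python
for bw in (0.005, 0.02, 0.1):
    pdf_phase1.set_bandwidth(bw)
    plt.plot(x, pdf_phase1(x), label='bw = %.3f' % bw)
plt.legend(loc='best')
plt.xlabel(r'$ 2 \theta \, (\degree) $')
plt.yticks([])
plt.show()
```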
<END_TASK> |
44 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Combining different machine learning algorithms into an ensemble model
Model ensembling is a class of techniques for aggregating together multiple different predictive algorithm into a sort of mega-algorithm, which can often increase the accuracy and reduce the overfitting of your model. Ensembling approaches often work surprisingly well. Many winners of competitive data science competitions use model ensembling in one form or another. In this tutorial, we will take you through the steps of building your own ensemble of a random forest, support vector machine, and neural network for doing a classification problem. We’ll be working on the famous spam dataset and trying to predict whether a certain email is spam or not, and using the standard Python machine learning stack (scikit/numpy/pandas).
You have probably already encountered several uses of model ensembling. Random forests are a type of ensemble algorithm that aggregates together many individual tree base learners. If you’re interested in deep learning, one common technique for improving classification accuracies is training different networks and getting them to vote on classifications for test instances (look at dropout for a related but wacky take on ensembling). If you’re familiar with bagging or boosting algorithms, these are very explicit examples of ensembling.
Regardless of the specifics, the general idea behind ensembling is this
Step1: 2. Cleaning up and summarizing the data
Lookin' good! Let's convert the data into a nice format. We rearrange some columns, check out what the columns are.
Step2: 3) Splitting data into training and testing sets
Our day is now nice and squeaky clean! This definitely always happens in real life.
Next up, let's scale the data and split it into a training and test set.
Step3: 4. Running algorithms on the data
Blah blah now it's time to train algorithms. We are doing binary classification. Could ahve also used logistic regression, kNN, etc etc.
4.1 Random forests
Let’s build a random forest. A great explanation of random forests can be found here. Briefly, random forests build a collection of classification trees, which each try to predict classes by recursively splitting the data on features that split classes best. Each tree is trained on bootstrapped data, and each split is only allowed to use certain variables. So, an element of randomness is introduced, a variety of different trees are built, and the 'random forest' ensembles together these base learners.
A hyperparameter is something than influences the performance of your model, but isn't directly tuned during model training. The main hyperparameters to adjust for random forrests are n_estimators and max_features. n_estimators controls the number of trees in the forest - the more the better, but more trees comes at the expense of longer training time. max_features controls the size of the random selection of features the algorithm is allowed to consider when splitting a node.
We could also choose to tune various other hyperpramaters, like max_depth (the maximum depth of a tree, which controls how tall we grow our trees and influences overfitting) and the choice of the purity criterion (which are specific formulas for calculating how good or 'pure' our splits make the terminal nodes).
We are doing gridsearch to find optimal hyperparameter values, which tries out each given value for each hyperparameter of interst and sees how well it performs using (in this case) 10-fold cross-validation (CV). As a reminder, in cross-validation we try to estimate the test-set performance for a model; in k-fold CV, the estimate is done by repeatedly partitioning the dataset into k parts and 'testing' on 1/kth of it. We could have also tuned our hyperparameters using randomized search, which samples some values from a distribution rather than trying out all given values. Either is probably fine.
The following code block takes about a minute to run.
Step4: 93-95% accuracy, not too shabby! Have a look and see how random forests with suboptimal hyperparameters fare. We got around 91-92% accuracy on the out of the box (untuned) random forests, which actually isn't terrible.
2) Second algorithm
Step5: Looks good! This is similar performance to what we saw in the random forests.
3) Third algorithm
Step6: Looks like this neural network (given this dataset, architecture, and hyperparameterisation) is doing slightly worse on the spam dataset. That's okay, it could still be picking up on a signal that the random forest and SVM weren't.
Machine learning algorithns... ensemble!
4) Majority vote on classifications | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
# Import the dataset
dataset_path = "spam_dataset.csv"
dataset = pd.read_csv(dataset_path, sep=",")
# Take a peak at the data
dataset.head()
Explanation: Combining different machine learning algorithms into an ensemble model
Model ensembling is a class of techniques for aggregating together multiple different predictive algorithms into a sort of mega-algorithm, which can often increase the accuracy and reduce the overfitting of your model. Ensembling approaches often work surprisingly well. Many winners of competitive data science competitions use model ensembling in one form or another. In this tutorial, we will take you through the steps of building your own ensemble of a random forest, support vector machine, and neural network for doing a classification problem. We’ll be working on the famous spam dataset and trying to predict whether a certain email is spam or not, and using the standard Python machine learning stack (scikit/numpy/pandas).
You have probably already encountered several uses of model ensembling. Random forests are a type of ensemble algorithm that aggregates together many individual tree base learners. If you’re interested in deep learning, one common technique for improving classification accuracies is training different networks and getting them to vote on classifications for test instances (look at dropout for a related but wacky take on ensembling). If you’re familiar with bagging or boosting algorithms, these are very explicit examples of ensembling.
Regardless of the specifics, the general idea behind ensembling is this: different classes of algorithms (or differently parameterized versions of the same type of algorithm) might be good at picking up on different signals in the dataset. Combining them means that you can model the data better, leading to better predictions. Furthermore, different algorithms might be overfitting to the data in various ways, but by combining them, you can effectively average away some of this overfitting.
We won’t do fancy visualizations of the dataset here. Check out this tutorial or our bootcamp to learn Plotly and matplotlib. Here, we are focused on optimizing different algorithms and combining them to boost performance.
Let's get started!
1. Loading up the data
Load dataset. We often want our input data to be a matrix (X) and the vector of instance labels as a separate vector (y).
End of explanation
# Reorder the data columns and drop email_id
cols = dataset.columns.tolist()
cols = cols[2:] + [cols[1]]
dataset = dataset[cols]
# Examine shape of dataset and some column names
print dataset.shape
print dataset.columns.values[0:10]
# Summarise feature values
dataset.describe()
# Convert dataframe to numpy array and split
# data into input matrix X and class label vector y
npArray = np.array(dataset)
X = npArray[:,:-1].astype(float)
y = npArray[:,-1]
Explanation: 2. Cleaning up and summarizing the data
Lookin' good! Let's convert the data into a nice format. We rearrange some columns, check out what the columns are.
End of explanation
from sklearn import preprocessing
from sklearn.cross_validation import train_test_split
# Scale and split dataset
X_scaled = preprocessing.scale(X)
# Split into training and test sets
XTrain, XTest, yTrain, yTest = train_test_split(X_scaled, y, random_state=1)
Explanation: 3) Splitting data into training and testing sets
Our data is now nice and squeaky clean! This definitely always happens in real life.
Next up, let's scale the data and split it into a training and test set.
End of explanation
from sklearn import metrics
from sklearn.grid_search import GridSearchCV, RandomizedSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
# Search for good hyperparameter values
# Specify values to grid search over
n_estimators = np.arange(1, 30, 5)
max_features = np.arange(1, X.shape[1], 10)
max_depth = np.arange(1, 100, 10)
hyperparameters = {'n_estimators': n_estimators,
'max_features': max_features,
'max_depth': max_depth}
# Grid search using cross-validation
gridCV = GridSearchCV(RandomForestClassifier(), param_grid=hyperparameters, cv=10, n_jobs=4)
gridCV.fit(XTrain, yTrain)
best_n_estim = gridCV.best_params_['n_estimators']
best_max_features = gridCV.best_params_['max_features']
best_max_depth = gridCV.best_params_['max_depth']
# Train classifier using optimal hyperparameter values
# We could have also gotten this model out from gridCV.best_estimator_
clfRDF = RandomForestClassifier(n_estimators=best_n_estim, max_features=best_max_features, max_depth=best_max_depth)
clfRDF.fit(XTrain, yTrain)
RF_predictions = clfRDF.predict(XTest)
print (metrics.classification_report(yTest, RF_predictions))
print ("Overall Accuracy:", round(metrics.accuracy_score(yTest, RF_predictions),2))
Explanation: 4. Running algorithms on the data
Now it's time to train some algorithms. We are doing binary classification; we could also have used logistic regression, kNN, and so on.
4.1 Random forests
Let’s build a random forest. A great explanation of random forests can be found here. Briefly, random forests build a collection of classification trees, which each try to predict classes by recursively splitting the data on features that split classes best. Each tree is trained on bootstrapped data, and each split is only allowed to use certain variables. So, an element of randomness is introduced, a variety of different trees are built, and the 'random forest' ensembles together these base learners.
A hyperparameter is something that influences the performance of your model, but isn't directly tuned during model training. The main hyperparameters to adjust for random forests are n_estimators and max_features. n_estimators controls the number of trees in the forest - the more the better, but more trees come at the expense of longer training time. max_features controls the size of the random selection of features the algorithm is allowed to consider when splitting a node.
We could also choose to tune various other hyperparameters, like max_depth (the maximum depth of a tree, which controls how tall we grow our trees and influences overfitting) and the choice of the purity criterion (which are specific formulas for calculating how good or 'pure' our splits make the terminal nodes).
We are doing grid search to find optimal hyperparameter values, which tries out each given value for each hyperparameter of interest and sees how well it performs using (in this case) 10-fold cross-validation (CV). As a reminder, in cross-validation we try to estimate the test-set performance for a model; in k-fold CV, the estimate is done by repeatedly partitioning the dataset into k parts and 'testing' on 1/kth of it. We could have also tuned our hyperparameters using randomized search, which samples some values from a distribution rather than trying out all given values. Either is probably fine.
The following code block takes about a minute to run.
End of explanation
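For reference, a sketch of the randomized-search alternative mentioned above, drawing 20 hyperparameter combinations from integer ranges instead of evaluating the full grid (the n_iter budget is an arbitrary choice):

```python
from scipy.stats import randint

random_search = RandomizedSearchCV(
    RandomForestClassifier(),
    param_distributions={'n_estimators': randint(1, 30),
                         'max_features': randint(1, X.shape[1]),
                         'max_depth': randint(1, 100)},
    n_iter=20, cv=10, n_jobs=4)
random_search.fit(XTrain, yTrain)
print(random_search.best_params_)
```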
from sklearn.svm import SVC
# Search for good hyperparameter values
# Specify values to grid search over
g_range = 2. ** np.arange(-15, 5, step=2)
C_range = 2. ** np.arange(-5, 15, step=2)
hyperparameters = [{'gamma': g_range,
'C': C_range}]
# Grid search using cross-validation
grid = GridSearchCV(SVC(), param_grid=hyperparameters, cv= 10)
grid.fit(XTrain, yTrain)
bestG = grid.best_params_['gamma']
bestC = grid.best_params_['C']
# Train SVM and output predictions
rbfSVM = SVC(kernel='rbf', C=bestC, gamma=bestG)
rbfSVM.fit(XTrain, yTrain)
SVM_predictions = rbfSVM.predict(XTest)
print metrics.classification_report(yTest, SVM_predictions)
print "Overall Accuracy:", round(metrics.accuracy_score(yTest, SVM_predictions),2)
Explanation: 93-95% accuracy, not too shabby! Have a look and see how random forests with suboptimal hyperparameters fare. We got around 91-92% accuracy on the out of the box (untuned) random forests, which actually isn't terrible.
2) Second algorithm: support vector machines
Let's train our second algorithm, support vector machines (SVMs) to do the same exact prediction task. A great introduction to the theory behind SVMs can be read here. Briefly, SVMs search for hyperplanes in the feature space which best divide the different classes in your dataset. Crucially, SVMs can find non-linear decision boundaries between classes using a process called kernelling, which projects the data into a higher-dimensional space. This sounds a bit abstract, but if you've ever fit a linear regression to power-transformed variables (e.g. maybe you used x^2, x^3 as features), you're already familiar with the concept.
SVMs can use different types of kernels, like Gaussian or radial ones, to throw the data into a different space. The main hyperparameters we must tune for SVMs are gamma (a kernel parameter, controlling how far we 'throw' the data into the new feature space) and C (which controls the bias-variance tradeoff of the model).
End of explanation
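A quick sketch that complements the classification reports: confusion matrices show where each model makes its mistakes (rows are the true classes, columns the predicted ones):

```python
print(metrics.confusion_matrix(yTest, RF_predictions))
print(metrics.confusion_matrix(yTest, SVM_predictions))
```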
from multilayer_perceptron import multilayer_perceptron
# Search for good hyperparameter values
# Specify values to grid search over
layer_size_range = [(3,2),(10,10),(2,2,2),10,5] # different networks shapes
learning_rate_range = np.linspace(.1,1,3)
hyperparameters = [{'hidden_layer_sizes': layer_size_range, 'learning_rate_init': learning_rate_range}]
# Grid search using cross-validation
grid = GridSearchCV(multilayer_perceptron.MultilayerPerceptronClassifier(), param_grid=hyperparameters, cv=10)
grid.fit(XTrain, yTrain)
# Output best hyperparameter values
best_size = grid.best_params_['hidden_layer_sizes']
best_best_lr = grid.best_params_['learning_rate_init']
# Train neural network and output predictions
nnet = multilayer_perceptron.MultilayerPerceptronClassifier(hidden_layer_sizes=best_size, learning_rate_init=best_best_lr)
nnet.fit(XTrain, yTrain)
NN_predictions = nnet.predict(XTest)
print metrics.classification_report(yTest, NN_predictions)
print "Overall Accuracy:", round(metrics.accuracy_score(yTest, NN_predictions),2)
Explanation: Looks good! This is similar performance to what we saw in the random forests.
3) Third algorithm: neural network
Finally, let's jump on the hype wagon and throw neural networks at our problem.
Neural networks (NNs) represent a different way of thinking about machine learning algorithms. A great place to start learning about neural networks and deep learning is this resource. Briefly, NNs are composed of multiple layers of artificial neurons, which individually are simple processing units that weigh up input data. Together, layers of neurons can work together to compute some very complex functions of the data, which in turn can make excellent predictions. You may be aware of some of the crazy results that NN research has recently achieved.
Here, we train a shallow, fully-connected, feedforward neural network on the spam dataset. Other types of neural network implementations in scikit are available here. The hyperparameters we optimize here are the overall architecture (number of neurons in each layer and the number of layers) and the learning rate (which controls how quickly the parameters in our network change during the training phase; see gradient descent and backpropagation).
End of explanation
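Note that multilayer_perceptron here is an external, pre-release module. If a recent scikit-learn (>= 0.18) is available, an equivalent model ships as sklearn.neural_network.MLPClassifier; a hedged sketch of the same kind of grid search (reusing XTrain and yTrain from above; the hyperparameter values are illustrative, not the tutorial's) is:
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import GridSearchCV
mlp_grid = GridSearchCV(MLPClassifier(max_iter=500),
                        param_grid={'hidden_layer_sizes': [(10, 10), (50,)],
                                    'learning_rate_init': [0.001, 0.01, 0.1]},
                        cv=5)
mlp_grid.fit(XTrain, yTrain)
print(mlp_grid.best_params_)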
# here's a rough solution
import collections
# stick all predictions into a dataframe
predictions = pd.DataFrame(np.array([RF_predictions, SVM_predictions, NN_predictions])).T
predictions.columns = ['RF', 'SVM', 'NN']
predictions = pd.DataFrame(np.where(predictions=='yes', 1, 0),
columns=predictions.columns,
index=predictions.index)
# initialise empty array for holding predictions
ensembled_predictions = np.zeros(shape=yTest.shape)
# majority vote and output final predictions
for test_point in range(predictions.shape[0]):
    counts = collections.Counter(predictions.iloc[test_point,:])  # tally the three model votes for this test point
majority_vote = counts.most_common(1)[0][0]
# output votes
ensembled_predictions[test_point] = majority_vote.astype(int)
print "The majority vote for test point", test_point, "is: ", majority_vote
# Get final accuracy of ensembled model
yTest[yTest == "yes"] = 1
yTest[yTest == "no"] = 0
print metrics.classification_report(yTest.astype(int), ensembled_predictions.astype(int))
print "Ensemble Accuracy:", round(metrics.accuracy_score(yTest.astype(int), ensembled_predictions.astype(int)),2)
Explanation: Looks like this neural network (given this dataset, architecture, and hyperparameterisation) is doing slightly worse on the spam dataset. That's okay, it could still be picking up on a signal that the random forest and SVM weren't.
Machine learning algorithms... ensemble!
4) Majority vote on classifications
End of explanation
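For reference, the hand-rolled loop above can be expressed more compactly once the votes are 0/1; a minimal vectorised sketch (same predictions DataFrame as above) is shown below, and scikit-learn also offers sklearn.ensemble.VotingClassifier for the same purpose.
# with three binary voters, the majority class is 1 whenever at least 2 of the 3 votes are 1
vote_sums = predictions.sum(axis=1)                   # number of 'yes' (1) votes per test point
ensembled_alt = (vote_sums >= 2).astype(int).values   # vectorised majority vote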
<END_TASK> |
45 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
New function to make a list and to select calibrator
I add a function to retrieve all the fluxes from the ALMA Calibrator list with their frequencies and observing dates, and to retrieve the redshift (z) from NED.
Step1: Example, retrieve all the calibrators with a flux > 0.1 Jy
Step2: Select all calibrators that have been observed at least in 3 Bands [ >60s in B3, B6, B7]
already queried and convert it to SQL
exclude Cycle 0, array 12m
Step3: We can write a "report file" or only use the "resume data", some will have redshift data retrieved from NED.
Step4: Sometimes there is no redshift information found in NED
Combining listcal and resume information.
Step5: Select objects which have a redshift
collect the flux, band, freq, and obsdate
plot based on the Band
Step6: Plot Flux vs Redshift
the same object will be located at the same z
some of them will not have flux in all 3 bands.
Step7: Plot log(Luminosity) vs redshift
Step9: How to calculate luminosity
Step10: Plot $\log_{10}(L)$ vs $z$
Step11: Black-dashed line are for 0.1 Jy flux.
Without log10 | <ASSISTANT_TASK:>
Python Code:
file_listcal = "alma_sourcecat_searchresults_20180419.csv"
q = databaseQuery()
Explanation: New function to make a list and to select calibrator
I add a function to retrieve all the fluxes from the ALMA Calibrator list with their frequencies and observing dates, and to retrieve the redshift (z) from NED.
End of explanation
listcal = q.read_calibratorlist(file_listcal, fluxrange=[0.1, 999999])
len(listcal)
print("Name: ", listcal[0][0])
print("J2000 RA, dec: ", listcal[0][1], listcal[0][2])
print("Alias: ", listcal[0][3])
print("Flux density: ", listcal[0][4])
print("Band: ", listcal[0][5])
print("Freq: ", listcal[0][6])
print("Obs date: ", listcal[0][4])
Explanation: Example, retrieve all the calibrators with a flux > 0.1 Jy:
End of explanation
report, resume = q.select_object_from_sqldb("calibrators_brighterthan_0.1Jy_20180419.db", \
maxFreqRes=999999999, array='12m', \
excludeCycle0=True, \
selectPol=False, \
minTimeBand={3:60., 6:60., 7:60.}, \
silent=True)
Explanation: Select all calibrators that have been observed at least in 3 Bands [ >60s in B3, B6, B7]
already queried and convert it to SQL
exclude Cycle 0, array 12m
End of explanation
print("Name: ", resume[0][0])
print("From NED: ")
print("Name: ", resume[0][3])
print("J2000 RA, dec: ", resume[0][4], resume[0][5])
print("z: ", resume[0][6])
print("Total # of projects: ", resume[0][7])
print("Total # of UIDs: ", resume[0][8])
print("Gal lon: ", resume[0][9])
print("Gal lat: ", resume[0][10])
Explanation: We can write a "report file" or only use the "resume data", some will have redshift data retrieved from NED.
End of explanation
for i, obj in enumerate(resume):
for j, cal in enumerate(listcal):
if obj[0] == cal[0]: # same name
obj.append(cal[4:]) # add [flux, band, flux obsdate] in the "resume"
Explanation: Sometimes there is no redshift information found in NED
Combining listcal and resume information.
End of explanation
def collect_z_and_flux(Band):
z = []
flux = []
for idata in resume:
if idata[6] is not None: # select object which has redshift information
fluxnya = idata[11][0]
bandnya = idata[11][1]
freqnya = idata[11][2]
datenya = idata[11][3]
for i, band in enumerate(bandnya):
if band == str(Band): # take only first data
flux.append(fluxnya[i])
z.append(idata[6])
break
return z, flux
z3, f3 = collect_z_and_flux(3)
print("Number of seleted source in B3: ", len(z3))
z6, f6 = collect_z_and_flux(6)
print("Number of seleted source in B6: ", len(z6))
z7, f7 = collect_z_and_flux(7)
print("Number of seleted source in B7: ", len(z7))
Explanation: Select objects which have a redshift
collect the flux, band, freq, and obsdate
plot based on the Band
End of explanation
plt.figure(figsize=(15,10))
plt.subplot(221)
plt.plot(z3, f3, 'ro')
plt.xlabel("z")
plt.ylabel("Flux density (Jy)")
plt.title("B3")
plt.subplot(222)
plt.plot(z6, f6, 'go')
plt.xlabel("z")
plt.ylabel("Flux density (Jy)")
plt.title("B6")
plt.subplot(223)
plt.plot(z7, f7, 'bo')
plt.xlabel("z")
plt.ylabel("Flux density (Jy)")
plt.title("B7")
plt.subplot(224)
plt.plot(z3, f3, 'ro', z6, f6, 'go', z7, f7, 'bo', alpha=0.3)
plt.xlabel("z")
plt.ylabel("Flux density (Jy)")
plt.title("B3, B6, B7")
Explanation: Plot Flux vs Redshift
the same object will be located at the same z
some of them will not have flux in all 3 bands.
End of explanation
from astropy.cosmology import FlatLambdaCDM
cosmo = FlatLambdaCDM(H0=70, Om0=0.3, Tcmb0=2.725)
Explanation: Plot log(Luminosity) vs redshift
End of explanation
def calc_power(z, flux):
    """z = redshift; flux in Jy"""
z = np.array(z)
flux = np.array(flux)
dL = cosmo.luminosity_distance(z).to(u.meter).value # Luminosity distance
luminosity = 4.0*np.pi*dL*dL/(1.0+z) * flux * 1e-26
return z, luminosity
Explanation: How to calculate luminosity:
$$L_{\nu} (\nu_{e}) = \frac{4 \pi D_{L}^2}{1+z} \cdot S_{\nu} (\nu_{o})$$
Notes:
- Calculate Luminosity or Power in a specific wavelength (without k-correction e.g. using spectral index)
- $L_{\nu}$ in watt/Hz, in emitted freq
- $S_{\nu}$ in watt/m$^2$/Hz, in observed freq
- $D_L$ is luminosity distance, calculated using astropy.cosmology function
- need to calculate distance in meter
- need to convert Jy to watt/m$^2$/Hz ----- $\times 10^{-26}$
End of explanation
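As a quick sanity check of the formula above, here is a small worked example with illustrative values (z = 1, S = 0.1 Jy; not taken from the dataset), using the same flat cosmology:
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u
import numpy as np
cosmo_check = FlatLambdaCDM(H0=70, Om0=0.3, Tcmb0=2.725)
z_ex, S_ex = 1.0, 0.1                                            # illustrative redshift and flux (Jy)
dL_ex = cosmo_check.luminosity_distance(z_ex).to(u.meter).value  # luminosity distance in metres
L_ex = 4.0*np.pi*dL_ex**2/(1.0 + z_ex) * S_ex * 1e-26            # W/Hz at the emitted frequency
print("log10(L / W Hz^-1) = %.2f" % np.log10(L_ex))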
z3, l3 = calc_power(z3, f3)
z6, l6 = calc_power(z6, f6)
z7, l7 = calc_power(z7, f7)
zdummy = np.linspace(0.001, 2.5, 100)
fdummy = 0.1 # Jy, our cut threshold
zdummy, Ldummy0 = calc_power(zdummy, fdummy)
zdummy, Ldummy3 = calc_power(zdummy, np.max(f3))
zdummy, Ldummy6 = calc_power(zdummy, np.max(f6))
zdummy, Ldummy7 = calc_power(zdummy, np.max(f7))
plt.figure(figsize=(15,10))
plt.subplot(221)
plt.plot(z3, np.log10(l3), 'r*', \
zdummy, np.log10(Ldummy0), 'k--', zdummy, np.log10(Ldummy3), 'r--', alpha=0.5)
plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B3")
plt.subplot(222)
plt.plot(z6, np.log10(l6), 'g*', \
zdummy, np.log10(Ldummy0), 'k--', zdummy, np.log10(Ldummy6), 'g--', alpha=0.5)
plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B6")
plt.subplot(223)
plt.plot(z7, np.log10(l7), 'b*', \
zdummy, np.log10(Ldummy0), 'k--', zdummy, np.log10(Ldummy7), 'b--', alpha=0.5)
plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B7")
plt.subplot(224)
plt.plot(z3, np.log10(l3), 'r*', z6, np.log10(l6), 'g*', z7, np.log10(l7), 'b*', \
zdummy, np.log10(Ldummy0), 'k--', \
zdummy, np.log10(Ldummy3), 'r--', \
zdummy, np.log10(Ldummy6), 'g--', \
zdummy, np.log10(Ldummy7), 'b--', alpha=0.5)
plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B3, B6, B7")
Explanation: Plot $\log_{10}(L)$ vs $z$
End of explanation
plt.figure(figsize=(15,10))
plt.subplot(221)
plt.plot(z3, l3, 'r*', zdummy, Ldummy0, 'k--', zdummy, Ldummy3, 'r--', alpha=0.5)
plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B3")
plt.subplot(222)
plt.plot(z6, l6, 'g*', zdummy, Ldummy0, 'k--', zdummy, Ldummy6, 'g--', alpha=0.5)
plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B6")
plt.subplot(223)
plt.plot(z7, l7, 'b*', zdummy, Ldummy0, 'k--', zdummy, Ldummy7, 'b--', alpha=0.5)
plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B7")
plt.subplot(224)
plt.plot(z3, l3, 'r*', z6, l6, 'g*', z7, l7, 'b*', \
zdummy, Ldummy0, 'k--', zdummy, Ldummy3, 'r--', \
zdummy, Ldummy6, 'g--', zdummy, Ldummy7, 'b--', alpha=0.5)
plt.xlabel(r"$z$"); plt.ylabel(r"$\log_{10}(L_{\nu_e})$"); plt.title("B3, B6, B7")
Explanation: The black dashed lines are for a 0.1 Jy flux.
Without log10
End of explanation
<END_TASK> |
46 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2020 The TensorFlow Authors.
Step1: TensorFlow Addons 图像:运算
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: 准备和检查图像
下载图像
Step3: 检查图像
TensorFlow 图标
Step4: 制作黑白版本
Step5: 使用 tfa.image
均值滤波
均值滤波是一种滤波技术,通常用于去除图像或信号中的噪声。其理念是逐个像素遍历图像,然后用相邻像素的平均值替代该像素。
Step6: 旋转
此运算可将给定图像旋转用户输入的角度(以弧度为单位)。
Step7: 变换
此运算基于用户给定的变换向量来变换给定图像。
Step8: YIQ 中的随机 HSV
此运算可将给定 RGB 图像的色标更改为 YIQ,但此处是从给定范围随机选择色度增量和饱和度值。
Step9: 调整 YIQ 中的 HSV
此运算可将给定 RGB 图像的色标更改为 YIQ,但此处不是随机选择,而是由用户输入色度增量和饱和度值。
Step10: 密集图像变形
此运算用于由偏移向量的流场指定的任何图像的非线性变形(例如,此处使用随机值)。
Step11: 欧氏距离变换
此运算可将像素值更新为从前景像素到背景像素的欧氏距离。
注:它仅获取二进制图像并生成变换后的图像。如果指定不同的图像,将产生具有单一值的图像 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
!pip install -U tensorflow-addons
import tensorflow as tf
import numpy as np
import tensorflow_addons as tfa
import matplotlib.pyplot as plt
Explanation: TensorFlow Addons 图像:运算
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/addons/tutorials/image_ops"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">在 TensorFlow.org 上查看</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/addons/tutorials/image_ops.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">在 Google Colab 中运行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/addons/tutorials/image_ops.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">在 GitHub 上查看源代码</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/addons/tutorials/image_ops.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png">下载笔记本</a></td>
</table>
概述
此笔记本将演示如何使用 TensorFlow Addons 中的一些图像运算。
以下是本示例将涉及的图像运算列表:
tfa.image.mean_filter2d
tfa.image.rotate
tfa.image.transform
tfa.image.random_hsv_in_yiq
tfa.image.adjust_hsv_in_yiq
tfa.image.dense_image_warp
tfa.image.euclidean_dist_transform
设置
End of explanation
img_path = tf.keras.utils.get_file('tensorflow.png','https://tensorflow.org/images/tf_logo.png')
Explanation: 准备和检查图像
下载图像
End of explanation
img_raw = tf.io.read_file(img_path)
img = tf.io.decode_image(img_raw)
img = tf.image.convert_image_dtype(img, tf.float32)
img = tf.image.resize(img, [500,500])
plt.title("TensorFlow Logo with shape {}".format(img.shape))
_ = plt.imshow(img)
Explanation: 检查图像
TensorFlow 图标
End of explanation
bw_img = 1.0 - tf.image.rgb_to_grayscale(img)
plt.title("Mask image with shape {}".format(bw_img.shape))
_ = plt.imshow(bw_img[...,0], cmap='gray')
Explanation: 制作黑白版本
End of explanation
mean = tfa.image.mean_filter2d(img, filter_shape=11)
_ = plt.imshow(mean)
Explanation: 使用 tfa.image
均值滤波
均值滤波是一种滤波技术,通常用于去除图像或信号中的噪声。其理念是逐个像素遍历图像,然后用相邻像素的平均值替代该像素。
End of explanation
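As a library-free illustration of the idea described above, the tiny NumPy sketch below replaces each interior pixel of a toy 5x5 image by the mean of its 3x3 neighbourhood; filter_shape=11 in the call above does the same with an 11x11 window.
import numpy as np
toy = np.arange(25, dtype=float).reshape(5, 5)         # a toy 5x5 "image"
smoothed = toy.copy()
for r in range(1, 4):
    for c in range(1, 4):
        smoothed[r, c] = toy[r-1:r+2, c-1:c+2].mean()  # 3x3 neighbourhood mean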
rotate = tfa.image.rotate(img, tf.constant(np.pi/8))
_ = plt.imshow(rotate)
Explanation: 旋转
此运算可将给定图像旋转用户输入的角度(以弧度为单位)。
End of explanation
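Since the angle is expected in radians, a small helper for working in degrees could look like the sketch below (the helper name is mine, not part of tfa):
import numpy as np
def rotate_degrees(image, degrees):
    # convert degrees to radians before calling tfa.image.rotate
    return tfa.image.rotate(image, degrees * np.pi / 180.0)
rotated_45 = rotate_degrees(img, 45.0)   # equivalent to tfa.image.rotate(img, np.pi/4)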
transform = tfa.image.transform(img, [1.0, 1.0, -250, 0.0, 1.0, 0.0, 0.0, 0.0])
_ = plt.imshow(transform)
Explanation: 变换
此运算基于用户给定的变换向量来变换给定图像。
End of explanation
delta = 0.5
lower_saturation = 0.1
upper_saturation = 0.9
lower_value = 0.2
upper_value = 0.8
rand_hsvinyiq = tfa.image.random_hsv_in_yiq(img, delta, lower_saturation, upper_saturation, lower_value, upper_value)
_ = plt.imshow(rand_hsvinyiq)
Explanation: YIQ 中的随机 HSV
此运算可将给定 RGB 图像的色标更改为 YIQ,但此处是从给定范围随机选择色度增量和饱和度值。
End of explanation
delta = 0.5
saturation = 0.3
value = 0.6
adj_hsvinyiq = tfa.image.adjust_hsv_in_yiq(img, delta, saturation, value)
_ = plt.imshow(adj_hsvinyiq)
Explanation: 调整 YIQ 中的 HSV
此运算可将给定 RGB 图像的色标更改为 YIQ,但此处不是随机选择,而是由用户输入色度增量和饱和度值。
End of explanation
input_img = tf.image.convert_image_dtype(tf.expand_dims(img, 0), tf.dtypes.float32)
flow_shape = [1, input_img.shape[1], input_img.shape[2], 2]
init_flows = np.float32(np.random.normal(size=flow_shape) * 2.0)
dense_img_warp = tfa.image.dense_image_warp(input_img, init_flows)
dense_img_warp = tf.squeeze(dense_img_warp, 0)
_ = plt.imshow(dense_img_warp)
Explanation: 密集图像变形
此运算用于由偏移向量的流场指定的任何图像的非线性变形(例如,此处使用随机值)。
End of explanation
gray = tf.image.convert_image_dtype(bw_img,tf.uint8)
# The op expects a batch of images, so add a batch dimension
gray = tf.expand_dims(gray, 0)
eucid = tfa.image.euclidean_dist_transform(gray)
eucid = tf.squeeze(eucid, (0, -1))
_ = plt.imshow(eucid, cmap='gray')
Explanation: 欧氏距离变换
此运算可将像素值更新为从前景像素到背景像素的欧氏距离。
注:它仅获取二进制图像并生成变换后的图像。如果指定不同的图像,将产生具有单一值的图像
End of explanation
<END_TASK> |
47 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<center>
<a href="http
Step1: 2.1 Valeurs propres et valeurs singulières de l'ACP non réduite
Attention Les valeurs singulières sont celles de la décomposition de la matrice centrée par rapport aux métriques usuelles
Step2: Les valeurs singulières associées à l'ACP sont celles de $(\bar{X}, I_p, \frac{1}{n-1}I_n)$
Step3: Pour retrouver les valeurs propres de l'ACP à partir des valeurs singulières de la matrice centrée
Step4: 2.2 Vecteurs propres de l'ACP non réduite
Step5: 2.3 Composantes principales de l'ACP non réduite
Step6: Q Comparer avec les résultats obtenus en R.
Tous les autres résultats (contributions, cosinus carrés, corrélations variables facteurs...) et surtout les graphes (éboulis, plans factoriels...) sont à construire car aucune fonction n'est disponible comme dans FactoMineR. C'est partiellement fait dans le jeu de données suivant et complété (biplot) dans les calepins plus complets des cas d'usage.
3 Les données "Caractères"
Il s'agit d'explorer les données issues de la pixellisation de tracés de caractères dont les procédés d'obtention et prétraitement sont décrits sur le site de l'UCI (Lichman, 2013). Les chiffres ont été saisies sur des tablettes à l'intérieur de cadres de résolution $500\times 500$. Des procédures de normalisation, ré-échantillonnage spatial puis de lissage ont été appliquées. Chaque caractère apparaît finalement discrétisé sous la forme d'une matrice $8\times 8$ de pixels à 16 niveaux de gris et identifié par un label. Les données sont archivées sous la forme d'une matrice ou tableau à trois indices. Elles sont également archivées après vectorisation des images sous la forme d'une matrice à $p=64$ colonnes.
L'étude du même type de données, mais nettement plus complexes (MNIST)
Step7: Voici un aperçu des empilements des images à décrire puis ensuite en principe à discriminer
Step8: 3.2 Analyse en composantes principales
Step9: Diagramme boîte des premières composantes principales.
Step10: Q Quelle dimension retenir en principe?
Représentation des caractères dans le premier plan principal.
La représentation des variables (pixels) et le biplot n'ont pas grand intérêt pour ces données.
Step11: Le même graphique avec une légende mais moins de couleurs.
Step12: Graphique en trois dimensions.
Step13: 4. Données "cubiques" de l'OCDE
4.1 Introduction
Objectif
L'objectif de cette section est l'exploration de données socio-économiques plus complexes. La principale spécificité de ces données est de se présenter sous la forme d'un cube de données ou tableau à trois entrées
Step14: 4. 2 Lecture des données
Step15: 4.3 Statistiques élémentaires
Consulter rapidement ces résultats; Que dire à propos de la symétrie des distributions, de leur normalité, des valeurs atypiques.
Step16: Q Quel est le graphique ci-dessous? Que représentent les blocs dagonaux? Que dire des structures de corrélation?
4.3 Analyse en composantes principales
Chaque pays étant observé 4 fois, la principale difficulté technique est de faire apparaître cette structure chronologique dans les graphique afin d'illustrer la dynamique économique de la période considérée.
Q Justifier la nécessité de réduire.
Q Pourquoi toutes les variables sont des taux?
Choix de dimension
Step17: Q Quel est le graphe ci-dessus. Que dire de la première composante? Quelle dimension choisir?
Représentation des variables
Step18: Q Interpréter chacun des deux premiers axes.
Exo représenter le plan (2,3) et interpréter le 3ème axe.
Représentation basique des individus
Step19: Représentation adaptée à ces données
La structure particulière des données nécessite un graphique adapté. Ceci est en fait le principal objectif d'une bonne exploration des données | <ASSISTANT_TASK:>
Python Code:
# Construire la matrice de notes
import pandas as pd
note=[[6,6,5,5.5],[8,8,8,8],[6,7,11,9.5],[14.5,14.5,15.5,15],
[14,14,12,12.5],[11,10,5.5,7],[5.5,7,14,11.5],[13,12.5,8.5,9.5],
[9,9.5,12.5,12]]
dat=pd.DataFrame(note,index=["jean","alai","anni","moni","didi","andr","pier","brig","evel"],
columns=["Math","Phys","Fran","Angl"])
dat
# Importation des fonctions
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
import numpy as np
Explanation: <center>
<a href="http://www.insa-toulouse.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/logo-insa.jpg" style="float:left; max-width: 120px; display: inline" alt="INSA"/></a>
<a href="http://wikistat.fr/" ><img src="http://www.math.univ-toulouse.fr/~besse/Wikistat/Images/wikistat.jpg" style="float:right; max-width: 250px; display: inline" alt="Wikistat"/></a>
</center>
<a href="https://www.python.org/"><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/f/f8/Python_logo_and_wordmark.svg/390px-Python_logo_and_wordmark.svg.png" style="max-width: 200px; display: inline" alt="Python"/></a> pour Statistique et Science des Données
Analyse en Composantes Principales avec <a href="https://www.python.org/"><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/f/f8/Python_logo_and_wordmark.svg/390px-Python_logo_and_wordmark.svg.png" style="max-width: 150px; display: inline" alt="Python"/></a> & <a href="http://scikit-learn.org/stable/#"><img src="http://scikit-learn.org/stable/_static/scikit-learn-logo-small.png" style="max-width: 180px; display: inline" alt="Scikit-Learn"/></a>
Résumé: Ce calepin introduit l'utilisation de la librairie scikit-learn pour l'exploration statistique. Ceci est illustré par des exemples de mise en oeuvre de l'ACP sur des données "jouet" puis sur des images élémentaires de caractères et enfin sur des données économiques sous la forme particulière d'un cube ou tableau à trois indices.
1 Introduction
1.1 Scikit-learn vs. R
L'objectif de ce tutoriel est d'introduire l'utilisation de la librairie scikit-learn de Python pour l'exploration de données multidimensionnelles. Seule l'utilisation directe des fonctions d'exploration est abordée. Se pose rapidement une question: quand utiliser scikit-learn de Python plutôt que R plus complet et plus simple d'emploi?
Le choix repose sur les points suivants:
- Attention cette librairie manipule des objets de classe array de numpy chargés en mémoire et donc de taille limitée par la RAM de l'ordinateur; de façon analogue R charge en RAM des objets de type data.frame.
- Attention toujours, scikit-learn (0.18) ne reconnaît pas (ou pas encore ?) la classe DataFrame de pandas; scikit-learn utilise la classe array de numpy. C'est un problème pour la gestion de variables qualitatives complexes. Une variable binaire est simplement remplacée par un codage $(0,1)$ mais, en présence de plusieurs modalités, traiter celles-ci comme des entiers n'a pas de sens statistique et remplacer une variable qualitative par l'ensemble des indicatrices (dummy variables $(0,1)$) de ses modalités complique l'interprétation statistique.
- Les implémentations en Python de certains algorithmes dans scikit-learn sont aussi efficaces (e.g. $k$-means), voire beaucoup plus efficaces pour des données volumineuses car utilisent implicitement les capacités de parallélisation.
- R offre beaucoup plus de possibilités pour une exploration, des recherches et comparaisons, des interprétations mais les capacités de parallélisation de Python sont nettement plus performantes. Plus précisément, l'introduction de nouvelles librairies n'est pas ou peu contraintes dans R, alors que celle de nouvelles méthodes dans scikit-learn se fait sous contrôle d'un groupe qui en contrôle la pertinence et l'efficacité.
En conséquences:
- Préférer R et ses libraires si la présentation graphique des résultats et leur interprétation est prioritaire.
- Pour l'emploi de méthodes (analyse factorielle discriminante, canonique, positionnement multidimensionnel...) pas codées en Python.
- Préférer Python et scikit-learn pour mettre au point une chaîne de traitements (pipe line) opérationnelle de l'extraction à une analyse privilégiant la prévision brute à l'interprétation et pour des données quantitatives ou rendues quantitatives ("vectorisation" de corpus de textes).
1.2 Fonctions de scikit-learn
La communauté qui développe cette librairie est très active, elle évolue rapidement. Ne pas hésiter à consulter la documentation pour des compléments. Voici une sélection de ses principales fonctionnalités.
- Transformations (standardisation, discrétisation binaire, regroupement de modalités, imputations rudimentaires de données manquantes) , "vectorisation" de corpus de textes (encodage, catalogue, Tf-idf), images.
- Exploration: ACP, classification non supervisée (mélanges gaussiens, propagation d'affinité, ascendante hiérarchique, SOM,...). Une fonction est ajoutée pour l'Analyse des Correspondances.
- Modélisation et apprentissage, voir le dépôt correspondant.
1.3 ACP avec scikit-learn
L'objectif est d'illustrer la mise en oeuvre de l'analyse en composantes principales. Consulter la documentation et ses nombreux exemples pour plus de détails sur l'utilisation de scikit-learn.
La librairie scikit-learn a principalement été conçue en vue de l'analyse de signaux. Aussi, de nombreuses options de l'ACP ne sont pas disponibles, notamment les graphiques usuels (biplot, cercle des corrélations...). En revanche des résultats sont liés à la version probabiliste de l'ACP sous hypothèse d'une distribution gaussienne multidimensionnelle des données.
Attention, l'ACP est évidemment centrée mais pas réduite. L'option n'est pas prévue et les variables doivent être réduites (fonction sklearn.preprocessing.scale) avant si c'est nécessaire. L'attribut transform désigne les composantes principales, sous-entendu: transformation par réduction de la dimension; n_components fixe le nombre de composantes retenues, par défaut toutes; l'attribut components_ contient les n_components vecteurs propres normés (chaque ligne est de norme 1); pour représenter les variables, on les multiplie par la racine carrée de la valeur propre associée.
D'autres versions d'analyse en composantes principales sont proposées dans Scikit-learn: kernel PCA, sparse PCA, ICA...
Plusieurs jeux de données élémentaires sont utilisés dont celui "jouet" déjà vu en R afin de bien comprendre les sorties proposées par la fonction disponible. L'autre ensemble de données est un problème classique et simplifié de reconnaissance de caractères qui est inclus dans la librairie scikit-learn.
2. ACP de données "jouet"
Les données sont celles de l'exemple introduction à l'ACP: les notes en maths, français, physique et anglais de 9 lycéens virtuels. L'objectif est de contrôler les résultats en les comparant avec ceux obtenus avec R.
C'est une façon générique de procéder à l'approche d'un nouveau logiciel ou de fonctionnalités inconnues: traiter des données triviales dont les résultats de l'analyse sont parfaitement maîtrisés.
End of explanation
pca = PCA()
pca.fit(dat).explained_variance_
pca.singular_values_
Explanation: 2.1 Valeurs propres et valeurs singulières de l'ACP non réduite
Attention Les valeurs singulières sont celles de la décomposition de la matrice centrée par rapport aux métriques usuelles: $(\bar{X}, I_p, I_n)$ alors que le diviseur de la variance est celui d'une estimation sans biais: $(n-1)$.
Contrairement à beaucoup de logiciels, l'ACP de scikit-learn n'est pas réduite.
End of explanation
pca.singular_values_/np.sqrt(8)
Explanation: Les valeurs singulières associées à l'ACP sont celles de $(\bar{X}, I_p, \frac{1}{n-1}I_n)$
End of explanation
(pca.singular_values_/np.sqrt(8))**2
Explanation: Pour retrouver les valeurs propres de l'ACP à partir des valeurs singulières de la matrice centrée:
End of explanation
pca.components_.T
Explanation: 2.2 Vecteurs propres de l'ACP non réduite
End of explanation
pca.transform(dat)
Explanation: 2.3 Composantes principales de l'ACP non réduite
End of explanation
# Importations
import matplotlib.pyplot as plt
from sklearn import datasets
%matplotlib inline
# les données présentes dans la librairie
digits = datasets.load_digits()
# Contenu et mode d'obtention
print(digits)
# Dimensions
digits.images.shape
# Sous forme d'un cube d'images 1797 x 8x8
print(digits.images)
# Sous forme d'une matrice 1797 x 64
print(digits.data)
# Label réel de chaque caractère
print(digits.target)
Explanation: Q Comparer avec les résultats obtenus en R.
Tous les autres résultats (contributions, cosinus carrés, corrélations variables facteurs...) et surtout les graphes (éboulis, plans factoriels...) sont à construire car aucune fonction n'est disponible comme dans FactoMineR. C'est partiellement fait dans le jeu de données suivant et complété (biplot) dans les calepins plus complets des cas d'usage.
3 Les données "Caractères"
Il s'agit d'explorer les données issues de la pixellisation de tracés de caractères dont les procédés d'obtention et prétraitement sont décrits sur le site de l'UCI (Lichman, 2013). Les chiffres ont été saisies sur des tablettes à l'intérieur de cadres de résolution $500\times 500$. Des procédures de normalisation, ré-échantillonnage spatial puis de lissage ont été appliquées. Chaque caractère apparaît finalement discrétisé sous la forme d'une matrice $8\times 8$ de pixels à 16 niveaux de gris et identifié par un label. Les données sont archivées sous la forme d'une matrice ou tableau à trois indices. Elles sont également archivées après vectorisation des images sous la forme d'une matrice à $p=64$ colonnes.
L'étude du même type de données, mais nettement plus complexes (MNIST): 60 000 caractères représentés par des images de 784 pixels (28 $\times$ 28) fait l'objet d'un autre calepin.
3.1 Prise en main des données
End of explanation
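Referring back to the remark above that contributions, squared cosines and variable-factor correlations must be computed by hand with scikit-learn, here is a minimal sketch for the toy grades example (dat and the pca fitted on it above); the formulas follow the usual PCA definitions, so treat this as an illustration rather than a reference implementation.
import numpy as np
n = dat.shape[0]
eig = pca.explained_variance_                                   # eigenvalues of the (unreduced) PCA
scores = pca.transform(dat)                                     # principal components of the individuals
cos2_ind = scores**2 / (scores**2).sum(axis=1, keepdims=True)   # squared cosines (all components kept)
contrib_ind = 100 * scores**2 / ((n - 1) * eig)                 # contributions (%), each column sums to 100
corr_var = (pca.components_.T * np.sqrt(eig)) / dat.std(ddof=1).values[:, None]  # variable-factor correlations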
images_and_labels = list(zip(digits.images,
digits.target))
for index, (image, label) in enumerate(images_and_labels[:8]):
plt.subplot(2, 4, index + 1)
plt.axis('off')
plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
plt.title('Chiffres: %i' % label)
Explanation: Voici un aperçu des empilements des images à décrire puis ensuite en principe à discriminer:
End of explanation
from sklearn.decomposition import PCA
X=digits.data
y=digits.target
target_name=[0,1,2,3,4,5,6,7,8,9]
# définition de la commande
pca = PCA()
# Estimation, calcul des composantes principales
C = pca.fit(X).transform(X)
# Décroissance de la variance expliquée
plt.plot(pca.explained_variance_ratio_)
plt.show()
Explanation: 3.2 Analyse en composantes principales
End of explanation
plt.boxplot(C[:,0:20])
plt.show()
Explanation: Diagramme boîte des premières composantes principales.
End of explanation
plt.scatter(C[:,0], C[:,1], c=y, label=target_name)
plt.show()
Explanation: Q Quelle dimension retenir en principe?
Représentation des caractères dans le premier plan principal.
La représentation des variables (pixels) et le biplot n'ont pas grand intérêt pour ces données.
End of explanation
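To help answer the question above about how many dimensions to keep, a common heuristic is the cumulative share of explained variance; a minimal sketch using the pca object fitted on the digits data:
import numpy as np
cum_var = np.cumsum(pca.explained_variance_ratio_)
n_comp_90 = int(np.argmax(cum_var >= 0.90)) + 1   # smallest number of components reaching 90% of the variance
print("Components needed for 90% of the variance:", n_comp_90)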
# attention aux indentations
plt.figure()
for c, i, target_name in zip("rgbcmykrgb",[0,1,2,3,4,5,6,7,8,9], target_name):
plt.scatter(C[y == i,0], C[y == i,1], c=c, label=target_name)
plt.legend()
plt.title("ACP Digits")
plt.show()
Explanation: Le même graphique avec une légende mais moins de couleurs.
End of explanation
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(1, figsize=(8, 6))
ax = Axes3D(fig, elev=-150, azim=110)
ax.scatter(C[:, 0], C[:, 1], C[:, 2], c=y, cmap=plt.cm.Paired)
ax.set_title("ACP: trois premieres composantes")
ax.set_xlabel("Comp1")
ax.w_xaxis.set_ticklabels([])
ax.set_ylabel("Comp2")
ax.w_yaxis.set_ticklabels([])
ax.set_zlabel("Comp3")
ax.w_zaxis.set_ticklabels([])
plt.show()
Explanation: Graphique en trois dimensions.
End of explanation
# Importaiton des principals librairies et
# Affichage des graphiques dans le notebook
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: 4. Données "cubiques" de l'OCDE
4.1 Introduction
Objectif
L'objectif de cette section est l'exploration de données socio-économiques plus complexes. La principale spécificité de ces données est de se présenter sous la forme d'un cube de données ou tableau à trois entrées: le numéro de ligne, le numéro de variable et l'année d'observation de cette variable. Après une description classique, la mise en oeuvre de l'analyse en composantes principales avec python nécessite un effort particulier afin de produire les graphes adaptés à la structure particulière des données.
Les données
Les données sont issues de l'Observatoire de l'OCDE. Pour chaque pays membre et pour chacune des années 1975, 1977, 1979, 1981, on connaît les valeurs prises par les variables suivantes qui sont toutes des \emph{taux}~:
- Taux brut de natalité,
- Taux de chômage,
- Pourcentage d'actifs dans le secteur primaire,
- Pourcentage d'actifs dans le secteur secondaire,
- produit intérieur brut (par habitant),
- Formation brute de capital fixe (par habitant),
- Hausse des prix,
- Recettes courantes (par habitant),
- Mortalité infantile,
- Consommation de protéines animales (par habitant),
- Consommation d'énergie (par habitant).
Elles sont disponibles dans le fichier: ocdeR.dat.
Les mêmes variables sont donc observées, sur les mêmes pays ou individus à quatre dates différentes. Plusieurs stratégies d'analyse sont possibles (tableau moyen, tableaux concaténés, meilleur compromis ou double ACP). La plus adaptée pour ces données est de considérer les observations des variables pour chacun des individus: pays $\times$ années.
End of explanation
ocde=pd.read_table("Data/ocdeR.dat",sep='\s+',index_col=0)
ocde.head()
Explanation: 4. 2 Lecture des données
End of explanation
ocde.mean()
ocde["CNRJ"].hist(bins=20)
plt.show()
from pandas.plotting import scatter_matrix
scatter_matrix(ocde, alpha=0.2, figsize=(15, 15), diagonal='kde')
plt.show()
Explanation: 4.3 Statistiques élémentaires
Consulter rapidement ces résultats; Que dire à propos de la symétrie des distributions, de leur normalité, des valeurs atypiques.
End of explanation
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
# réduction
ocdeS=scale(ocde)
pca = PCA()
cpOcde = pca.fit_transform(ocdeS)
# Eboulis
plt.plot(pca.explained_variance_ratio_)
plt.show()
plt.boxplot(cpOcde)
plt.show()
Explanation: Q Quel est le graphique ci-dessous? Que représentent les blocs diagonaux? Que dire des structures de corrélation?
4.3 Analyse en composantes principales
Chaque pays étant observé 4 fois, la principale difficulté technique est de faire apparaître cette structure chronologique dans les graphiques afin d'illustrer la dynamique économique de la période considérée.
Q Justifier la nécessité de réduire.
Q Pourquoi toutes les variables sont des taux?
Choix de dimension
End of explanation
coord1=pca.components_[0]*np.sqrt(pca.explained_variance_[0])
coord2=pca.components_[1]*np.sqrt(pca.explained_variance_[1])
fig = plt.figure(figsize=(5,5))
ax = fig.add_subplot(1, 1, 1)
for i, j, nom in zip(coord1,coord2, ocde.columns):
plt.text(i, j, nom)
plt.arrow(0,0,i,j,color='black')
plt.axis((-1.2,1.2,-1.2,1.2))
# cercle
c=plt.Circle((0,0), radius=1, color='gray', fill=False)
ax.add_patch(c)
plt.show()
Explanation: Q Quel est le graphe ci-dessus. Que dire de la première composante? Quelle dimension choisir?
Représentation des variables
End of explanation
plt.figure(figsize=(10,6))
for i, j, nom in zip(cpOcde[:,0], cpOcde[:,1], ocde.index):
# color = int(i/4)
plt.text(i, j, nom ,color="blue")
plt.axis((-5,7,-4,4))
plt.show()
Explanation: Q Interpréter chacun des deux premiers axes.
Exo représenter le plan (2,3) et interpréter le 3ème axe.
Représentation basique des individus
End of explanation
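For the exercise above (plane (2,3)), a minimal sketch reusing the same recipe as the correlation circle of axes 1-2 is given below; indices 1 and 2 select the second and third components of the OCDE PCA.
coord2_ = pca.components_[1]*np.sqrt(pca.explained_variance_[1])
coord3_ = pca.components_[2]*np.sqrt(pca.explained_variance_[2])
fig = plt.figure(figsize=(5,5))
ax = fig.add_subplot(1, 1, 1)
for i, j, nom in zip(coord2_, coord3_, ocde.columns):
    plt.text(i, j, nom)
    plt.arrow(0, 0, i, j, color='black')
plt.axis((-1.2, 1.2, -1.2, 1.2))
ax.add_patch(plt.Circle((0, 0), radius=1, color='gray', fill=False))
plt.show()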
import matplotlib.patheffects as PathEffects
comp_0 = 0
comp_1 = 1
cmap = plt.get_cmap("tab20")
fig = plt.figure(figsize=(16,8))
ax = fig.add_subplot(1,1,1)
for i,k in enumerate(np.arange(0,cpOcde.shape[0],4)):
country =ocde.index[k]
xs = cpOcde[k:k+4,comp_0]
ys = cpOcde[k:k+4, comp_1]
ax.plot(xs,ys, color=cmap(i), marker=".", markersize=15)
txt = ax.text(xs[-4], ys[-4], country, horizontalalignment="left", verticalalignment="top",
color=cmap(i), fontweight="bold", fontsize=15)
# Add black line around text
#txt.set_path_effects([PathEffects.withStroke(linewidth=1, foreground='black')])
ax.set_xlabel("PC%d" %comp_0, fontsize=20)
ax.set_ylabel("PC%d" %comp_1, fontsize=20)
plt.tight_layout()
plt.show()
Explanation: Représentation adaptée à ces données
La structure particulière des données nécessite un graphique adapté. Ceci est en fait le principal objectif d'une bonne exploration des données: trouver la représentation graphique qui permet d'en comprendre toute la structure en une seule vue.
End of explanation
<END_TASK> |
48 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
[0] Database management
Python provides an interface to the databases GadFly, mSQL, MySQL, PostgreSQL, Microsoft SQL Server 2000, Informix, Interbase, Oracle, and Sybase.
Example (MySQL)
Step1: Exercise 1
Step2: Exercise 2
Step3: Exercise 3
Step4: Exercise 4
Step5: Exercise 5
Step6: Exercise 6 | <ASSISTANT_TASK:>
Python Code:
# Source: Manuel Torres. Universidad de Almería.
import pymysql
# Establecemos la conexion con la base de datos
bd = pymysql.connect("localhost", "root", "gebd", "RRHH")
# Preparamos el cursor que nos va a ayudar a realizar las operaciones con la base de datos
cursor = bd.cursor()
# Ejecutamos un query SQL usando el metodo execute() que nos proporciona el cursor
cursor.execute("SELECT * FROM Empleado")
resultado = cursor.fetchall()
# Extraemos una sola fila usando el metodo fetchone()
for fila in resultado:
print("%s %s %s" % (fila[0], fila[1], fila[2]))
# Nos desconectamos de la base de datos
bd.close()
resultado[0][2]
Explanation: [0] Database management
Python provides an interface to the databases GadFly, mSQL, MySQL, PostgreSQL, Microsoft SQL Server 2000, Informix, Interbase, Oracle, and Sybase.
Example (MySQL):
pip install pymysql
End of explanation
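Beyond this minimal example, two details are worth knowing when querying from Python; the hedged sketch below reuses the same connection parameters as above, and the column name 'Puesto' is a hypothetical placeholder for the job field (adapt it to the real schema). Placeholders (%s) let the driver escape values safely, and try/finally guarantees the connection is closed.
import pymysql
bd = pymysql.connect("localhost", "root", "gebd", "RRHH")
try:
    cursor = bd.cursor()
    # parameterised query: the driver fills %s safely instead of string concatenation
    cursor.execute("SELECT * FROM Empleado WHERE Puesto = %s", ("Analista",))
    for fila in cursor.fetchall():
        print(fila)
finally:
    bd.close()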
all_jobs = [x[2] for x in resultado]
all_jobs = []
for i in resultado:
all_jobs.append(i[2])
all_jobs
Explanation: Exercise 1:
Create a list with the jobs of the third column of resultado. Tip: a list comprehension can be useful!
End of explanation
diff_jobs = set(all_jobs)
diff_jobs
Explanation: Exercise 2:
Compute a set of different jobs in resultado (this set will store as many items as there are different jobs in resultado). Clue: sets cannot store duplicate elements!
End of explanation
jobs_count = {}
for i in diff_jobs:
jobs_count[i] = 0
jobs_count
for i in all_jobs:
jobs_count[i] += 1
jobs_count
jobs_count = {}
for i in diff_jobs:
jobs_count[i] = all_jobs.count(i)
jobs_count
jobs_count['Analista']
Explanation: Exercise 3:
Compute the number of occurrences of each job in resultado. This can be accomplished by generating a dictionary with a structure: {job:number_of_occurrences}. Tip: use the result of the previous exercise (a set of different jobs) to create the required dictionary.
End of explanation
count_jobs = []
for i in jobs_count:
count_jobs.append((jobs_count[i], i))
count_jobs
Explanation: Exercise 4:
Python dictionaries cannot be sorted (by definition). For this reason, create a list of tuples with a structure (number_of_occurrences, job), using as input the result of the previous exercise.
End of explanation
sorted_count_jobs = sorted(count_jobs, reverse=True)
sorted_count_jobs
sorted_count_jobs[1][1]
Explanation: Exercise 5:
Sort (in reverse order) the previous list. Create a sorted list (in descending order) of tuples with a structure (number_of_occurrences, job), using as input the result of the previous exercise. Tip: visit https://docs.python.org/3/howto/sorting.html#sortinghowto and find out how you can use the reverse=True parameter.
End of explanation
i = 1
most_common_jobs = [(sorted_count_jobs[0][1], sorted_count_jobs[0][0])]
most_common_jobs
while(sorted_count_jobs[i][0] == sorted_count_jobs[0][0]):
x = (sorted_count_jobs[i][1], sorted_count_jobs[i][0])
most_common_jobs.append(x)
i += 1
most_common_jobs
Explanation: Exercise 6:
Compute the most common job(s) in resultado. Create a list of tuples with a structure (job, number_of_occurrences) with the most common job(s), using as input the result of the previous exercise.
End of explanation
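For comparison, exercises 2-6 can be collapsed into a few lines with collections.Counter; a hedged sketch using the all_jobs list built in exercise 1 (ties are kept, matching the loop above):
import collections
counts = collections.Counter(all_jobs)
top_count = counts.most_common(1)[0][1]                    # highest number of occurrences
most_common_jobs_alt = [(job, c) for job, c in counts.items() if c == top_count]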
<END_TASK> |
49 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Personality prediction from tweet
by Angelo Basile
Step1: Dataset
Step2: Baseline
For the baseline we use an SVM with a sparse feature representation.
We use both character- and word-ngrams.
Step3: Results
The SVM works quite well already
Step4: Neural network
First we have to encode the labels in the one-hot format. Since this is a binary classification format, we don't convert them to a categorical format.
Step6: Text representation
For the baseline we used a one-hot encoding. For our neural model we are going to represent the text using a dense representation. We will be building it from characters.
Step7: My model | <ASSISTANT_TASK:>
Python Code:
import numpy as np
np.random.seed(113) #set seed before any keras import
import pandas as pd
import random
from sklearn.model_selection import train_test_split
from collections import defaultdict
from keras.preprocessing import sequence
from collections import Counter
import pydot
Explanation: Personality prediction from tweet
by Angelo Basile
End of explanation
seed=0
corpus = pd.read_csv('twistytest.csv',
index_col=0,
header=1,
names=['user_id', 'lang', 'text', 'mbti'])
corpus.sample(5)
#here we limit the corpus size. The SVM with all the text can learn something
corpus.text = corpus.text.apply(lambda x: x[:1000])
corpus.mbti = corpus.mbti.apply(lambda x: x[0])
#corpus = tmp.sample(frac=1, random_state=seed)
e = corpus[corpus.mbti.apply(lambda x: x == 'E')]
i = corpus[corpus.mbti.apply(lambda x: x == 'I')].sample(226)
corpus = pd.concat([e,i]).sample(frac=0.3, random_state=seed)
print(corpus.shape)
## set max length of doc per author
sentences = corpus.text#.apply(lambda x: x[:100000])
## trim labels: convert problem to binary classification I vs E
labels = corpus.mbti
## make sure we have a label for every data instance
assert(len(sentences)==len(labels))
data={}
np.random.seed(113) #seed
data['target']= np.random.permutation(labels)
np.random.seed(113) # use same seed!
data['data'] = np.random.permutation(sentences)
# preview the dataset
print(corpus.shape)
corpus.head()
# plot the distribution of labels
import matplotlib.pyplot as plt
l, v = zip(*Counter(labels).items())  # use the full label Series here (y_train is only defined after the split below)
indexes = np.arange(len(l))
width = 1
plt.bar(indexes, v, width, color=['r', 'b'])
plt.xticks(indexes + width * 0.5, l)
plt.show()
#split the data into train, dev, test
X_rest, X_test, y_rest, y_test = train_test_split(data['data'], data['target'], test_size=0.2)
X_train, X_dev, y_train, y_dev = train_test_split(X_rest, y_rest, test_size=0.2)
del X_rest, y_rest
print("#train instances: {} #dev: {} #test: {}".format(len(X_train),len(X_dev),len(X_test)))
Explanation: Dataset
End of explanation
from sklearn.svm import LinearSVC
from sklearn.dummy import DummyClassifier
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
pipeline = Pipeline([('features', FeatureUnion([('wngram', TfidfVectorizer(ngram_range=(1,2))),
('cngram', TfidfVectorizer(analyzer='char'))])),
('cls', LinearSVC())])
pipeline.fit(X_train, y_train)
Explanation: Baseline
For the baseline we use an SVM with a sparse feature representation.
We use both character- and word-ngrams.
End of explanation
testpred = pipeline.predict(X_test)
print(accuracy_score(testpred, y_test))
print(classification_report(testpred, y_test))
Explanation: Results
The SVM works quite well already: we outperform the random baseline by a significant margin.
End of explanation
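The DummyClassifier imported above is never actually fitted; to make the comparison with a trivial baseline explicit, a minimal sketch on the same split could be (the feature matrix is ignored by the dummy model, so a zero placeholder is enough):
from sklearn.metrics import accuracy_score
import numpy as np
dummy = DummyClassifier(strategy='most_frequent')          # majority-class baseline; use 'stratified' for a random one
dummy.fit(np.zeros((len(y_train), 1)), y_train)
dummy_pred = dummy.predict(np.zeros((len(y_test), 1)))
print("Baseline accuracy:", accuracy_score(y_test, dummy_pred))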
from keras.utils import np_utils
y2i = defaultdict(lambda: len(y2i))
y_train_num = [y2i[mbti] for mbti in y_train]
y_dev_num = [y2i[mbti] for mbti in y_dev]
y_test_num = [y2i[mbti] for mbti in y_test]
num_classes = len(np.unique(y_train_num))
print(num_classes)
Explanation: Neural network
First we have to encode the labels numerically. Since this is a binary classification problem, we map them to 0/1 integers rather than converting them to a one-hot categorical format.
End of explanation
from collections import defaultdict
# convert words to indices, taking care of UNKs
def get_characters(sentence, c2i):
out = []
for word in sentence.split(" "):
chars = []
for c in word:
chars.append(c2i[c])
out.append(chars)
return out
c2i = defaultdict(lambda: len(c2i))
PAD = c2i["<pad>"] # index 0 is padding
UNK = c2i["<unk>"] # index 1 is for UNK
X_train_num = [get_characters(sentence, c2i) for sentence in X_train]
c2i = defaultdict(lambda: UNK, c2i) # freeze - cute trick!
X_dev_num = [get_characters(sentence, c2i) for sentence in X_dev]
X_test_num = [get_characters(sentence, c2i) for sentence in X_test]
max_sentence_length=max([len(s.split(" ")) for s in X_train]
+ [len(s.split(" ")) for s in X_dev]
+ [len(s.split(" ")) for s in X_test] )
max_word_length = max([len(word) for sentence in X_train_num for word in sentence])
### we need both max sent and word length
print(max_sentence_length)
print(max_word_length)
print(X_train[0:2])
print(X_train_num[0][:100]) # example how the first two sentences are encoded
def pad_words(tensor_words, max_word_len, pad_symbol_id, max_sent_len=None):
    """pad character list all to same word length"""
padded = []
for words in tensor_words:
if max_sent_len: #pad all to same sentence length (insert empty word list)
words = [[[0]]*(max_sent_len-len(words))+ words][0] #prepending empty words
padded.append(sequence.pad_sequences(words, maxlen=max_word_len, value=pad_symbol_id))
return np.array(padded)
X_train_pad_char = pad_words(X_train_num, max_word_length, 0, max_sent_len=max_sentence_length)
X_dev_pad_char = pad_words(X_dev_num, max_word_length, 0, max_sent_len=max_sentence_length)
X_test_pad_char = pad_words(X_test_num, max_word_length, 0, max_sent_len=max_sentence_length)
X_train_pad_char.shape
from keras.models import Model, Sequential
from keras.layers import Dense, Input, GRU, TimeDistributed, Embedding, Bidirectional
import keras
Explanation: Text representation
For the baseline we used a sparse (TF-IDF n-gram) encoding. For our neural model we are going to represent the text using a dense representation. We will be building it from characters.
End of explanation
batch_size=8
max_chars = len(c2i)
c_dim=50
c_h_dim=32
w_h_dim=32
char_vocab_size = len(c2i)
## lower-level character LSTM
input_chars = Input(shape=(max_sentence_length, max_word_length), name='main_input')
embedded_chars = TimeDistributed(Embedding(char_vocab_size, c_dim,
input_length=max_word_length), name='char_embedding')(input_chars)
char_lstm = TimeDistributed(Bidirectional(GRU(c_h_dim)), name='GRU_on_char')(embedded_chars)
word_lstm_from_char = Bidirectional(GRU(w_h_dim), name='GRU_on_words')(char_lstm)
# And add a prediction node on top
predictions = Dense(1, activation='sigmoid', name='output_layer')(word_lstm_from_char)  # sigmoid gives a probability in [0,1], which binary_crossentropy expects
model = Model(inputs=input_chars, outputs=predictions)
model.compile(loss='binary_crossentropy', optimizer='adam',
metrics=['accuracy'])
model.summary()
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot, plot_model
SVG(model_to_dot(model).create(prog='dot', format='svg'))
model.fit(X_train_pad_char, y_train_num, epochs=10, batch_size=8)
loss, accuracy = model.evaluate(X_test_pad_char, y_test_num)
print(accuracy)
Explanation: My model: WENP (WE Need more Power)
Instead of using a separate word embedding matrix, compose words through characters (see https://aclweb.org/anthology/W/W16/W16-4303.pdf)
End of explanation
<END_TASK> |
50 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Optimization Exercise 1
Imports
Step1: Hat potential
The following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the "hat potential"
Step2: Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$
Step3: Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.
Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima.
Print the x values of the minima.
Plot the function as a blue line.
On the same axes, show the minima as red circles.
Customize your visualization to make it beautiful and effective.
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.optimize as opt
Explanation: Optimization Exercise 1
Imports
End of explanation
def hat(x,a,b):
return -1*a*(x**2) + b*(x**4)
assert hat(0.0, 1.0, 1.0)==0.0
assert hat(0.0, 1.0, 1.0)==0.0
assert hat(1.0, 10.0, 1.0)==-9.0
Explanation: Hat potential
The following potential is often used in Physics and other fields to describe symmetry breaking and is often known as the "hat potential":
$$ V(x) = -a x^2 + b x^4 $$
Write a function hat(x,a,b) that returns the value of this function:
End of explanation
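For reference, the minima can also be found analytically, which gives a check on the numerical answer below:
$$ V'(x) = -2 a x + 4 b x^3 = 0 \quad\Longrightarrow\quad x = 0 \ \text{ or } \ x = \pm\sqrt{\frac{a}{2b}}, $$
so for $a=5.0$ and $b=1.0$ the two minima sit at $x \approx \pm 1.5811$ with $V = -a^2/(4b) = -6.25$, while $x=0$ is a local maximum.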
a = 5.0
b = 1.0
x = np.linspace(-3,3,100)
v = hat(x,a,b)
graph = plt.plot(x,v)
assert True # leave this to grade the plot
Explanation: Plot this function over the range $x\in\left[-3,3\right]$ with $b=1.0$ and $a=5.0$:
End of explanation
f = lambda g: hat(g,a,b)
x1 = float(opt.minimize(f,-2 ).x)
x2 = float(opt.minimize(f,2 ).x)
print(x1)
print(x2)
graph = plt.plot(x,v)
plt.plot([x1,x2],[f(x1),f(x2)],'ro')
assert True # leave this for grading the plot
Explanation: Write code that finds the two local minima of this function for $b=1.0$ and $a=5.0$.
Use scipy.optimize.minimize to find the minima. You will have to think carefully about how to get this function to find both minima.
Print the x values of the minima.
Plot the function as a blue line.
On the same axes, show the minima as red circles.
Customize your visualization to make it beautiful and effective.
End of explanation
<END_TASK> |
51 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Seaice
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required
Step7: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required
Step8: 3.2. Ocean Freezing Point Value
Is Required
Step9: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required
Step10: 4.2. Canonical Horizontal Resolution
Is Required
Step11: 4.3. Number Of Horizontal Gridpoints
Is Required
Step12: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required
Step13: 5.2. Target
Is Required
Step14: 5.3. Simulations
Is Required
Step15: 5.4. Metrics Used
Is Required
Step16: 5.5. Variables
Is Required
Step17: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required
Step18: 6.2. Additional Parameters
Is Required
Step19: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required
Step20: 7.2. On Diagnostic Variables
Is Required
Step21: 7.3. Missing Processes
Is Required
Step22: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required
Step23: 8.2. Properties
Is Required
Step24: 8.3. Budget
Is Required
Step25: 8.4. Was Flux Correction Used
Is Required
Step26: 8.5. Corrected Conserved Prognostic Variables
Is Required
Step27: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required
Step28: 9.2. Grid Type
Is Required
Step29: 9.3. Scheme
Is Required
Step30: 9.4. Thermodynamics Time Step
Is Required
Step31: 9.5. Dynamics Time Step
Is Required
Step32: 9.6. Additional Details
Is Required
Step33: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required
Step34: 10.2. Number Of Layers
Is Required
Step35: 10.3. Additional Details
Is Required
Step36: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required
Step37: 11.2. Number Of Categories
Is Required
Step38: 11.3. Category Limits
Is Required
Step39: 11.4. Ice Thickness Distribution Scheme
Is Required
Step40: 11.5. Other
Is Required
Step41: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required
Step42: 12.2. Number Of Snow Levels
Is Required
Step43: 12.3. Snow Fraction
Is Required
Step44: 12.4. Additional Details
Is Required
Step45: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required
Step46: 13.2. Transport In Thickness Space
Is Required
Step47: 13.3. Ice Strength Formulation
Is Required
Step48: 13.4. Redistribution
Is Required
Step49: 13.5. Rheology
Is Required
Step50: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required
Step51: 14.2. Thermal Conductivity
Is Required
Step52: 14.3. Heat Diffusion
Is Required
Step53: 14.4. Basal Heat Flux
Is Required
Step54: 14.5. Fixed Salinity Value
Is Required
Step55: 14.6. Heat Content Of Precipitation
Is Required
Step56: 14.7. Precipitation Effects On Salinity
Is Required
Step57: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required
Step58: 15.2. Ice Vertical Growth And Melt
Is Required
Step59: 15.3. Ice Lateral Melting
Is Required
Step60: 15.4. Ice Surface Sublimation
Is Required
Step61: 15.5. Frazil Ice
Is Required
Step62: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Is Required
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required
Step65: 17.2. Constant Salinity Value
Is Required
Step66: 17.3. Additional Details
Is Required
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required
Step68: 18.2. Constant Salinity Value
Is Required
Step69: 18.3. Additional Details
Is Required
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required
Step72: 20.2. Additional Details
Is Required
Step73: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required
Step74: 21.2. Formulation
Is Required
Step75: 21.3. Impacts
Is Required
Step76: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required
Step77: 22.2. Snow Aging Scheme
Is Required
Step78: 22.3. Has Snow Ice Formation
Is Required
Step79: 22.4. Snow Ice Formation Scheme
Is Required
Step80: 22.5. Redistribution
Is Required
Step81: 22.6. Heat Diffusion
Is Required
Step82: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required
Step83: 23.2. Ice Radiation Transmission
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'sandbox-1', 'seaice')
Explanation: ES-DOC CMIP6 Model Properties - Seaice
MIP Era: CMIP6
Institute: CNRM-CERFACS
Source ID: SANDBOX-1
Topic: Seaice
Sub-Topics: Dynamics, Thermodynamics, Radiative Processes.
Properties: 80 (63 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:52
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Model
2. Key Properties --> Variables
3. Key Properties --> Seawater Properties
4. Key Properties --> Resolution
5. Key Properties --> Tuning Applied
6. Key Properties --> Key Parameter Values
7. Key Properties --> Assumptions
8. Key Properties --> Conservation
9. Grid --> Discretisation --> Horizontal
10. Grid --> Discretisation --> Vertical
11. Grid --> Seaice Categories
12. Grid --> Snow On Seaice
13. Dynamics
14. Thermodynamics --> Energy
15. Thermodynamics --> Mass
16. Thermodynamics --> Salt
17. Thermodynamics --> Salt --> Mass Transport
18. Thermodynamics --> Salt --> Thermodynamics
19. Thermodynamics --> Ice Thickness Distribution
20. Thermodynamics --> Ice Floe Size Distribution
21. Thermodynamics --> Melt Ponds
22. Thermodynamics --> Snow Processes
23. Radiative Processes
1. Key Properties --> Model
Name of seaice model used.
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of sea ice model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Variables
List of prognostic variable in the sea ice model.
2.1. Prognostic
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the sea ice component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Seawater Properties
Properties of seawater relevant to sea ice
3.1. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Ocean Freezing Point Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant seawater freezing point, specify this value.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Resolution
Resolution of the sea ice grid
4.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. N512L180, T512L70, ORCA025, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning applied to sea ice model component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Target
Is Required: TRUE Type: STRING Cardinality: 1.1
What was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Simulations
Is Required: TRUE Type: STRING Cardinality: 1.1
*Which simulations had tuning applied, e.g. all, not historical, only pi-control?*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Metrics Used
Is Required: TRUE Type: STRING Cardinality: 1.1
List any observed metrics used in tuning model/parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.5. Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Which variables were changed during the tuning process?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Key Parameter Values
Values of key parameters
6.1. Typical Parameters
Is Required: FALSE Type: ENUM Cardinality: 0.N
What values were specified for the following parameters, if used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Additional Parameters
Is Required: FALSE Type: STRING Cardinality: 0.N
If you have any additional parameterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma-separated list.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Assumptions
Assumptions made in the sea ice model
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.N
General overview description of any key assumptions made in this model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. On Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
Note any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Missing Processes
Is Required: TRUE Type: STRING Cardinality: 1.N
List any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the sea ice component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Provide a general description of conservation methodology.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Properties
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in sea ice by the numerical schemes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
For each conserved property, specify the output variables which close the related budgets, as a comma-separated list. For example: Conserved property, variable1, variable2, variable3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.4. Was Flux Correction Used
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does conservation involved flux correction?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Corrected Conserved Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List any variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Grid --> Discretisation --> Horizontal
Sea ice discretisation in the horizontal
9.1. Grid
Is Required: TRUE Type: ENUM Cardinality: 1.1
Grid on which sea ice is horizontally discretised?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.2. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the type of sea ice grid?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the advection scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.4. Thermodynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model thermodynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 9.5. Dynamics Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
What is the time step in the sea ice model dynamic component in seconds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional horizontal discretisation details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Grid --> Discretisation --> Vertical
Sea ice vertical properties
10.1. Layering
Is Required: TRUE Type: ENUM Cardinality: 1.N
What type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.2. Number Of Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using multi-layers specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional vertical grid details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 11. Grid --> Seaice Categories
What method is used to represent sea ice categories ?
11.1. Has Mulitple Categories
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Set to true if the sea ice model has multiple sea ice categories.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Number Of Categories
Is Required: TRUE Type: INTEGER Cardinality: 1.1
If using sea ice categories specify how many.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Category Limits
Is Required: TRUE Type: STRING Cardinality: 1.1
If using sea ice categories specify each of the category limits.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Ice Thickness Distribution Scheme
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the sea ice thickness distribution scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Other
Is Required: FALSE Type: STRING Cardinality: 0.1
If the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Grid --> Snow On Seaice
Snow on sea ice details
12.1. Has Snow On Ice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow on ice represented in this model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12.2. Number Of Snow Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels of snow on ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Snow Fraction
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how the snow fraction on sea ice is determined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.4. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any additional details related to snow on ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamics
Sea Ice Dynamics
13.1. Horizontal Transport
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of horizontal advection of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Transport In Thickness Space
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice transport in thickness space (i.e. in thickness categories)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Ice Strength Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Which method of sea ice strength formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Redistribution
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which processes can redistribute sea ice (including thickness)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Rheology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Rheology, what is the ice deformation formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Thermodynamics --> Energy
Processes related to energy in sea ice thermodynamics
14.1. Enthalpy Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the energy formulation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Thermal Conductivity
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of thermal conductivity is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.3. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of heat diffusion?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.4. Basal Heat Flux
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method by which basal ocean heat flux is handled?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.5. Fixed Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.6. Heat Content Of Precipitation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which the heat content of precipitation is handled.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.7. Precipitation Effects On Salinity
Is Required: FALSE Type: STRING Cardinality: 0.1
If precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Thermodynamics --> Mass
Processes related to mass in sea ice thermodynamics
15.1. New Ice Formation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method by which new sea ice is formed in open water.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Ice Vertical Growth And Melt
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs the vertical growth and melt of sea ice.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Ice Lateral Melting
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the method of sea ice lateral melting?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.4. Ice Surface Sublimation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method that governs sea ice surface sublimation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.5. Frazil Ice
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of frazil ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16. Thermodynamics --> Salt
Processes related to salt in sea ice thermodynamics.
16.1. Has Multiple Sea Ice Salinities
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the sea ice model use two different salinities: one for thermodynamic calculations and one for the salt budget?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 16.2. Sea Ice Salinity Thermal Impacts
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does sea ice salinity impact the thermal properties of sea ice?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Thermodynamics --> Salt --> Mass Transport
Mass transport of salt
17.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the mass transport of salt calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Thermodynamics --> Salt --> Thermodynamics
Salt thermodynamics
18.1. Salinity Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is salinity determined in the thermodynamic calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.2. Constant Salinity Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If using a constant salinity value specify this value in PSU?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.3. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the salinity profile used.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Thermodynamics --> Ice Thickness Distribution
Ice thickness distribution details.
19.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice thickness distribution represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Thermodynamics --> Ice Floe Size Distribution
Ice floe-size distribution details.
20.1. Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is the sea ice floe-size represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Please provide further details on any parameterisation of floe-size.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 21. Thermodynamics --> Melt Ponds
Characteristics of melt ponds.
21.1. Are Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are melt ponds included in the sea ice model?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.2. Formulation
Is Required: TRUE Type: ENUM Cardinality: 1.1
What method of melt pond formulation is used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21.3. Impacts
Is Required: TRUE Type: ENUM Cardinality: 1.N
What do melt ponds have an impact on?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22. Thermodynamics --> Snow Processes
Thermodynamic processes in snow on sea ice
22.1. Has Snow Aging
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has a snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Snow Aging Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow aging scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.3. Has Snow Ice Formation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.N
Set to True if the sea ice model has snow ice formation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.4. Snow Ice Formation Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow ice formation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.5. Redistribution
Is Required: TRUE Type: STRING Cardinality: 1.1
What is the impact of ridging on snow cover?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.6. Heat Diffusion
Is Required: TRUE Type: ENUM Cardinality: 1.1
What is the heat diffusion through snow methodology in sea ice thermodynamics?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Processes
Sea Ice Radiative Processes
23.1. Surface Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used to handle surface albedo.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Ice Radiation Transmission
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method by which solar radiation through sea ice is handled.
End of explanation
<END_TASK> |
52 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Causal Inference of Lalonde
<table align="left">
<td>
<a target="_blank" href="https
Step1: Data
The data are downloaded from
Dehejia and Wahba (1999)'s
data hosting website via functions
under ec.data.lalonde.
Experiment benchmark
The first column is the treatment assignment. The last column is the post-intervention outcome variable. The remaining 8 columns are pre-intervention covariates.
Step2: We use two methods to estimate the average treatment effects on the treated.
Difference Estimator
Using a simple difference in means, we estimate that the program increased the average earnings of the treated by $1794.
Step3: Using the estimator for the sampling variance (see formula 6.18 of Imbens and Rubin (2014)), we estimate the 95% $z$-interval of the difference as [479, 3109].
Step4: Regression Estimator
The experimental treated and experimental control group are not perfectly balanced.
Step5: Thus, we can fit a linear outcome regression model to "clean up" the imbalance. It yields an estimated average treatment effect on the treated of $1698 with a slightly more efficient 95% confidence interval [458, 2938].
Step6: Observational controls
Step7: We create two additional indicators unemployed1974 and unemployed1975 based on the earnings.
Step8: The observational control group is significantly different from the experimental treatment group.
Step9: The experimental treated subjects tend to be younger, less educated, black, single, and without a high school degree, and they have smaller pre-intervention earnings.
Step10: Empirical Calibration
Empirical calibration assigns weights to the observational control subjects such that the reweighted control group equates the treated group on moments of covariates. The weights are also sought to be as uniform as possible.
Prepare Balancing Constraints
There are 8 raw pre-intervention covariates. Simply balancing on their first moment might not be sufficient. We follow Hainmueller (2011)'s analysis and consider a total of 52 balancing constraints, which include single terms, selected one-way interactions, and selected squared terms. Note that balancing on all 52 transformed covariates might be excessive; we choose them simply to reproduce Hainmueller's results.
We use patsy's formula API to construct the covariate matrices, where "
Step11: The full list of the 52 balancing constraints is shown below.
Step12: Entropy Balancing
We first use the default ENTROPY objective which minimizes the Kullback-Leibler divergence between the balancing weights and uniform weights.
Step13: It turns out that we need to relax the equality constraint a tiny bit to ensure convergence.
Step14: The top 3 control observations account for 12% of the total weight.
Step15: The effective sample size of the control group is merely 103.4, a huge reduction from the original sample size 15992.
Step16: We confirm that the means of the raw covariates are matched after weighting.
Step17: The difference in means between the treated group and the reweighted control group yields an average treatment effect on the treated of $1571, which reproduces Hainmueller (2011)'s result.
Step18: Under the homoskedastic assumption (i.e., equal unit-level conditional variance), we estimate the variance using the formula prescribed in Section 19.7 of Imbens and Rubin (2015).
Step19: The 95% confidence interval is [55, 3088], wider than the [97, 3044] reported by Hainmueller (2011) (it is unclear what method was used there).
Step20: Quadratic Balancing
We then use the QUADRATIC objective which minimizes the Euclidean distance between the balancing weights and the uniform weights.
Step21: Because the quadratic objective directly penalizes weight dispersion, the effective control sample size increases.
Step22: The point estimate $1713 is getting closer to the experimental benchmark $1794.
Step23: The 95% confidence interval becomes narrower due to increased effective control sample size.
Step24: Imposing an Upper Bound on Weights
For both entropy balancing and quadratic balancing, one can optionally impose an upper bound on the weights to avoid extreme individual weights. For illustration, we impose an upper bound of 0.015 on the quadratic balancing weights, i.e., no single control individual accounts for 1.5% or more of the total weight.
Step25: We can confirm the resulting weights are indeed capped at 0.015.
Step26: The cost of this additional constraint is a small loss in effective sample size.
Step27: Note that this additional weight constraint is an integral part of the optimization, as opposed to a separate trimming step used with propensity score weighting methods. We don't sacrifice the balancing conditions.
Step28: The point estimate is very close to the experimental estimate $1794. | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import patsy
import seaborn as sns
import statsmodels.api as sm
# install and import ec
!pip install -q git+https://github.com/google/empirical_calibration
import empirical_calibration as ec
sns.set_style('whitegrid')
%config InlineBackend.figure_format='retina'
%precision 3
Explanation: Causal Inference of Lalonde
<table align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/google/empirical_calibration/blob/master/notebooks/causal_inference_lalonde.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/google/empirical_calibration/blob/master/notebooks/causal_inference_lalonde.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Causal Inference of Lalonde
Background
Imports
Data
Experiment benchmark
Difference Estimator
Regression Estimator
Observational controls
Empirical Calibration
Prepare Balancing Constraints
Entropy Balancing
Quadratic Balancing
Imposing an Upper Bound on Weights
We illustrate the use of empirical calibration on the observational part of the LaLonde data to evaluate the effect of a job training program.
Background
The LaLonde (1986) data is a
canonical benchmark in the causal inference literature. It consists of three
groups for evaluating the effect of a large scale job training program — the
National Supported Work Demonstration (NSW):
An experimental treatment group with 185 observations.
An experimental control group with 260 observations.
An observational control group drawn from the Current Population Survey
(CPS), with 15,992 observations.
The outcome variable is the post-intervention earnings in 1978. The following
pre-intervention covariates are available for all three groups.
Name | Description
------------ | ---------------------------------------------------
age | age.
education | years of schooling.
black | 1 if black, 0 otherwise.
hispanic | 1 if hispanic, 0 otherwise.
married | 1 if married, 0 otherwise.
nodegree | 1 if not completed high school degree, 0 otherwise.
earnings1974 | pre-intervention earnings in 1974.
earnings1975 | pre-intervention earnings in 1975.
Imports
End of explanation
#@title experimentl treatment group
treated = ec.data.lalonde.experimental_treated()
treated.head()
#@title experimental control group
control = ec.data.lalonde.experimental_control()
control.head()
Explanation: Data
The data are downloaded from
Dehejia and Wahba (1999)'s
data hosting website via functions
under ec.data.lalonde.
Experiment benchmark
The first column is the treatment assignment. The last column is the post-intervention outcome variable. The remaining 8 columns are pre-intervention covariates.
End of explanation
diff = treated["earnings1978"].mean() - control["earnings1978"].mean()
diff
Explanation: We use two methods to estimate the average treatment effects on the treated.
Difference Estimator
Using a simple difference in means, we estimate that the program increased the average earnings of the treated by $1794.
End of explanation
sd = np.sqrt(
treated["earnings1978"].var(ddof=1) / len(treated["earnings1978"]) +
control["earnings1978"].var(ddof=1) / len(control["earnings1978"]))
sd
Explanation: Using the estimator for the sampling variance (see formula 6.18 of Imbens and Rubin (2014)), we estimate the 95% $z$-interval of the difference as [479, 3109].
End of explanation
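As a quick illustrative check (an addition, not part of the original analysis), the quoted interval can be recomputed from diff and sd, mirroring the interval computation used later in this notebook.
# sanity check: should reproduce the [479, 3109] z-interval quoted above
(diff - 1.96 * sd, diff + 1.96 * sd)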
df = pd.concat([treated, control])
print(df.groupby("treatment").mean().T)
Explanation: Regression Estimator
The experimental treated and experimental control group are not perfectly balanced.
End of explanation
lm = sm.OLS(df["earnings1978"], df.iloc[:, :-1])
print(lm.fit().summary())
Explanation: Thus, we can fit a linear outcome regression model to "clean up" the imbalance. It yields an estimated average treatment effect on the treated of $1698 with a slightly more efficient 95% confidence interval [458, 2938].
End of explanation
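For convenience (a hypothetical extra cell, not in the original notebook), the treatment coefficient and its confidence interval can also be extracted from the fitted model directly; "treatment" is assumed to be the name of the first design-matrix column, as in df above.
# pull out the point estimate and 95% confidence interval for the treatment indicator
lm_fit = lm.fit()
print(lm_fit.params["treatment"])
print(lm_fit.conf_int().loc["treatment"])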
control = ec.data.lalonde.observational_control()
control.head()
Explanation: Observational controls
End of explanation
treated["unemployed1974"] = treated["earnings1974"] == 0
treated["unemployed1975"] = treated["earnings1975"] == 0
control["unemployed1974"] = control["earnings1974"] == 0
control["unemployed1975"] = control["earnings1975"] == 0
Explanation: We create two additional indicators unemployed1974 and unemployed1975 based on the earnings.
End of explanation
df = pd.concat([treated, control])
print(df.groupby("treatment").mean().T)
Explanation: The observational control group is significantly different from the experimental treatment group.
End of explanation
for name in df.columns[1:]:
axes = df.hist(column=name, by=df["treatment"], figsize=(10, 2))
axes[0].set_xlabel(name)
axes[1].set_xlabel(name)
Explanation: The experimental treated subjects tend to be younger, less educated, black, single, and without a high school degree, and they have smaller pre-intervention earnings.
End of explanation
formula = (
"(age + education + black + hispanic + married + nodegree + earnings1974 +"
" earnings1975 + unemployed1974 + unemployed1975)**2 + I(age**2) + "
"I(education**2) - black:hispanic - education:nodegree - "
"earnings1974:earnings1975 - earnings1974:unemployed1974 - "
"earnings1975:unemployed1975"
)
treated2 = patsy.dmatrix(formula, treated, return_type="dataframe").iloc[:, 1:]
control2 = patsy.dmatrix(formula, control, return_type="dataframe").iloc[:, 1:]
Explanation: Empirical Calibration
Empirical calibration assigns weights to the observational control subjects such that the reweighted control group equates the treated group on moments of covariates. The weights are also sought to be as uniform as possible.
Prepare Balancing Constraints
There are 8 raw pre-intervention covariates. Simply balancing on their first moment might not be sufficient. We follow Hainmueller (2011)'s analysis and consider a total of 52 balancing constraints, which include single terms, selected one-way interactions, and selected squared terms. Note that balancing on all 52 transformed covariates might be excessive; we choose them simply to reproduce Hainmueller's results.
We use patsy's formula API to construct the covariate matrices, where ":" indicates interaction, and "-" means removes the term on the right.
End of explanation
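A small sanity check (an illustrative addition, not from the original notebook): the two design matrices should expose the 52 balancing columns described above.
# both matrices are expected to share the same 52 transformed covariates
print(treated2.shape, control2.shape)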
for i, name in enumerate(control2.columns):
print("%s: %s" % (i, name))
Explanation: The full list of the 52 balancing constraints is shown below.
End of explanation
weights, success = ec.calibrate(covariates=control2.values,
target_covariates=treated2.values)
success
Explanation: Entropy Balancing
We first use the default ENTROPY objective which minimizes the Kullback-Leibler divergence between the balancing weights and uniform weights.
End of explanation
weights, success = ec.calibrate(covariates=control2.values,
target_covariates=treated2.values,
l2_norm=1e-5)
success
Explanation: It turns out that we need to relax the equality constraint a tiny bit to ensure convergence.
End of explanation
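For illustration (an added sketch, not from the original notebook), the entropy criterion can be evaluated on the solution, assuming the returned weights are normalized to sum to one and are strictly positive under entropy balancing.
# Kullback-Leibler divergence of the balancing weights from uniform weights 1/n
n_controls = len(weights)
np.sum(weights * np.log(weights * n_controls))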
pd.Series(weights).plot(style='o', figsize=(12, 3))
Explanation: The top 3 control observations account for 12% of the total weight.
End of explanation
effective_control_size = 1.0 / np.square(weights).sum()
effective_control_size
Explanation: The effective sample size of the control group is merely 103.4, a huge reduction from the original sample size 15992.
End of explanation
pd.concat([treated.mean(axis=0),
pd.Series(np.matmul(control.astype('float64').T, weights),
index=treated.columns)], axis=1)
Explanation: We confirm that the means of the raw covariates are matched after weighting.
End of explanation
diff = treated["earnings1978"].mean() - np.sum(control["earnings1978"] * weights)
diff
Explanation: The difference in means between the treated group and the reweighted control group yields an average treatment effect on the treated of $1571, which reproduces Hainmueller (2011)'s result.
End of explanation
sd = np.sqrt(treated["earnings1978"].var(ddof=1) *
(1 / len(treated["earnings1978"]) + 1 / effective_control_size))
sd
Explanation: Under the homoskedastic assumption (i.e., equal unit-level conditional variance), we estimate the variance using the formula prescribed in Section 19.7 of Imbens and Rubin (2015).
End of explanation
(diff - 1.96 * sd, diff + 1.96 * sd)
Explanation: The 95% confidence interval is [55, 3088], wider than the [97, 3044] reported by Hainmueller (2011) (it is unclear what method was used there).
End of explanation
weights, success = ec.calibrate(
covariates=control2.values,
target_covariates=treated2.values,
objective=ec.Objective.QUADRATIC,
l2_norm=1e-2)
success
Explanation: Quadratic Balancing
We then use the QUADRATIC objective which minimizes the Euclidean distance between the balancing weights and the uniform weights.
End of explanation
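Analogously (an illustrative addition), the quadratic criterion can be evaluated as the squared Euclidean distance of the weights from uniform weights, assuming the weights sum to one.
# squared distance between the balancing weights and uniform weights 1/n
n_controls = len(weights)
np.sum(np.square(weights - 1.0 / n_controls))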
effective_control_size = 1.0 / np.square(weights).sum()
effective_control_size
pd.Series(weights).plot(style='o', figsize=(12, 3))
Explanation: Because the quadratic objective directly penalizes weight dispersion, the effective control sample size increases.
End of explanation
diff = treated["earnings1978"].mean() - np.sum(control["earnings1978"] * weights)
diff
Explanation: The point estimate $1713 is getting closer to the experimental benchmark $1794.
End of explanation
sd = np.sqrt(treated["earnings1978"].var(ddof=1) *
(1 / len(treated["earnings1978"]) + 1 / effective_control_size))
sd
(diff - 1.96 * sd, diff + 1.96 * sd)
Explanation: The 95% confidence interval becomes narrower due to increased effective control sample size.
End of explanation
weights, success = ec.calibrate(
covariates=control2.values,
target_covariates=treated2.values,
objective=ec.Objective.QUADRATIC,
max_weight=0.015,
l2_norm=1e-2)
success
Explanation: Imposing an Upper Bound on Weights
For both entropy balancing and quadratic balancing, one can optionally impose an upper bound on the weights to avoid extreme individual weights. For illustration, we impose an upper bound of 0.015 on the quadratic balancing weights, i.e., no single control individual accounts for 1.5% or more of the total weight.
End of explanation
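A direct numeric check (an addition to the plot-based check below): the largest individual weight should not exceed the imposed bound.
# expected to be at most 0.015
weights.max()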
pd.Series(weights).plot(style='o', figsize=(12, 3))
Explanation: We can confirm the resulting weights are indeed capped at 0.015.
End of explanation
effective_control_size = 1.0 / np.square(weights).sum()
effective_control_size
Explanation: The cost of this additional constraint is a small loss in effective sample size.
End of explanation
pd.concat([treated.mean(axis=0),
pd.Series(np.matmul(control.astype('float64').T, weights),
index=treated.columns)], axis=1)
Explanation: Note that this additional weight constraint is an integral part of the optimization, as opposed to a separate trimming step used with propensity score weighting methods. We don't sacrifice the balancing conditions.
End of explanation
diff = treated["earnings1978"].mean() - np.sum(control["earnings1978"] * weights)
diff
Explanation: The point estimate is very close to the experimental estimate $1794.
End of explanation
<END_TASK> |
53 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry*
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-hh', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: NCC
Source ID: NORESM2-HH
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:24
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the transport scheme if different from that of the ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from the explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic levels are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe whether a particle size spectrum is used to represent the distribution of particles in the water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculating the sinking speed of particles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation
<END_TASK> |
54 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Introduction to data analytics with pandas
Quentin Caudron
PyData Seattle, July 2017
Systems check
Do you have a working Python installation, with the pandas package ?
Step1: Note
Step2: Note
Step3: Note
Step4: Let's just look at the first few rows.
Step5: We have an index, and three columns
Step6: Definitely a string. We'll note this as something to fix after we finish looking around.
Note
Step7: Note
Step8: What else can we find out ?
Step9: Looks like we also have some missing data - we have 671 rows, but the coffees column only has 658 entries.
Note
Step10: Note
Step11: The contributor column makes sense as object, because we expect strings there; but surely the timestamp should be a timestamp-type, and coffees should be numerical ?
Let's inspect what's in the timestamp column.
Step12: It looks like the timestamp field was read from CSV as a string. That makes sense - CSV files are very basic. We'll have pandas interpret these strings as datetimes for us automatically.
Note
Step13: #### The coffees column contains NaNs.
Step14: The coffees column is of type float.
Step15: Let's have pandas parse the timestamp strings to datetime objects.
Step16: So where do we stand ?
Step17: Note
Step18: pandas is plotting the coffees against the index, which is just a series of integers.
Note
Step19: We have some very uneven spacing in places. We might start by cutting off the last few points of this time-series, which is missing a lot of data.
We'll inspect the last few points of this time-series.
Step20: After mid-March, things start getting spaced rather erratically.
Let's cut off the tail of the time-series, anything after 2013-03-01.
Step21: Note
Step22: 1. Contributions to the time-series
Who are our main contributors ?
Step23: Note
Step24: On which weekdays were contributions made ?
Step25: Can we replace these integers with actual weekdays ?
Step26: Let's group by these weekdays.
Step27: Note
Step28: 2. Weekday trends
First, we'll set our timestamps to the dataframe's index
Step29: Let's add some rows at midnight on every day.
Step30: Note
Step31: Note
Step32: Note
Step33: We're now ready to resample the time-series at a daily frequency.
Step34: Let's begin by figuring out how many coffees are made on any given day.
Step35: Note
Step36: Let's order this series and then plot it.
Step37: Wednesday was seminar day...
3. Coffee per person
We can now pull in data on how many people were in the department.
Step38: Let's join the datasets.
Step39: Note
Step40: We can now plot this column.
Step41: Those are strange plateaus. We'll pull in another dataset, telling us when the machine was broken.
Step42: Note
Step43: A quick trick to plot this as a time-series...
Step44: Note
Step45: We'll bring in this numerical representation of status column into our dataframe too.
Step46: Let's plot both the coffees per person and the numerical status.
Step47: We see a strong weekday-weekend effect. Resampling weekly will fix that. | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
Explanation: Introduction to data analytics with pandas
Quentin Caudron
PyData Seattle, July 2017
Systems check
Do you have a working Python installation, with the pandas package ?
End of explanation
import pandas as pd
%matplotlib inline
Explanation: Note : This cell should run without raising a traceback. Assuming it runs, you can also try printing the value of pd.__version__ to see what version of pandas you have installed.
A little about me
Lapsed computational physicist
PhD computational neuroscience, postdoc statistical epidemiology
Data Scientist at CBRE - www.cbredev.com
ATOM at Seattle
A little about the hero of this story
<center><img src="images/coffee_machine.jpg" width="400px" /></center>
We'll be analysing a real-world dataset together. It's about my favourite thing in the world : coffee. This dataset was collected at the Mathematics Institute at the University of Warwick. It's a time-series dataset, describing the total number of coffees made by our espresso machine by a certain date.
A little about this workshop
We'll be running through an analysis of this dataset as a way to expose you to the pandas API. The aim is to develop a little familiarity with how to work with pandas.
Slides are available at https://github.com/QCaudron/pydata_pandas. One notebook contains solutions; beware of spoilers.
The notebooks contain notes about what we're doing that I'll skip during this workshop, but try to explain on the way.
The pandas API is enormous. The documentation is excellent, don't hesitate to look things up.
Key questions
The dataset : total number of coffees made vs time.
Who are the main contributors to this dataset, and when are contributions generally made ?
What are the department's weekday coffee habits ?
How much coffee are people drinking ?
Let's begin
End of explanation
# Read data from data/coffees.csv
data = pd.read_csv("data/coffees.csv")
Explanation: Note : The second line here tells matplotlib to plot directly under the cell where any plotting code is called. pandas uses matplotlib to generate graphs, and without this, the graphs would appear outside the Jupyter notebook when you called plt.show() - but we just want them to appear without having to do this.
http://ipython.readthedocs.io/en/stable/interactive/plotting.html#id1
Importing the data
Let's import the coffee data from CSV.
End of explanation
data
Explanation: Note : pandas can read from many data formats : CSV, JSON, Excel, HDF5, SQL, and more.
http://pandas.pydata.org/pandas-docs/version/0.20/io.html
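For instance, the other readers follow the same pattern ( the file names below are made up, purely for illustration ) :
df_from_json = pd.read_json("data/some_file.json")
df_from_excel = pd.read_excel("data/some_file.xlsx")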
What does this data look like ?
End of explanation
# .head()
data.head()
Explanation: Let's just look at the first few rows.
End of explanation
# .loc or .iloc
data.loc[2]
Explanation: We have an index, and three columns : timestamp, coffees, and contributor.
Uh-oh. Why is there a string of text, testing, in our coffee numbers ? What's going on in the coffees column in the row after that ?
Note : df.head(n=10) would show the first ten rows. The default is n=5.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.head.html
Let's look at that string in the third row.
End of explanation
# [] indexing on a series
data.coffees[:5]
Explanation: Definitely a string. We'll note this as something to fix after we finish looking around.
Note : .loc uses a label-based lookup, which means that the value you pass into the square brackets must be in the index. Another method, .iloc, is integer-location-based, so .iloc[2] would return the third row. In this case, they're the same, but had we changed our index, as we'll see later, things would work differently.
Indexing a dataframe with [] directly returns a pd.Series or pd.DataFrame by searching over columns, not rows. Indexing a pd.Series with [] is like indexing a dataframe with .iloc.
https://pandas.pydata.org/pandas-docs/stable/indexing.html
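A tiny sketch of the difference, using a toy dataframe rather than this dataset :
toy = pd.DataFrame({"a": [10, 20, 30]}, index=["x", "y", "z"])
toy.loc["y"]    # label-based lookup
toy.iloc[1]     # integer-position lookup - the same row in this case
toy.a[:2]       # [] on a pd.Series slices by position, like a numpy array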
We should also take a look at that NaN. In fact, let's look at the first five values in coffees.
End of explanation
print("Dataset length :")
# len()
print(len(data))
Explanation: Note : here, we're indexing a series ( a pd.Series object ). From a pd.DataFrame ( here, data ), when you access a single column ( data.coffees or data["coffees"] ), the object returned is a pd.Series. From that, indexing directly with [] works in an integer-location-based manner, and like with numpy arrays, you can take slices ( [:5] ).
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html
How long is the dataset ?
End of explanation
# .describe()
data.describe()
Explanation: What else can we find out ?
End of explanation
# .isnull() and boolean indexing with []
data[data.coffees.isnull()]
Explanation: Looks like we also have some missing data - we have 671 rows, but the coffees column only has 658 entries.
Note : .describe() returns different things based on what's in the dataframe, as we'll see later. For numerical columns, it will return things like the mean, standard deviation, and percentiles. For object columns ( strings or datetimes ), it will return the most frequent entry and the first and last items. For all columns, .describe() will return the count of objects in that column ( not counting NaNs ) and the unique number of entries. You can determine what's returned using .describe()'s keyword arguments.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html
Let's look at the dataframe where coffees is null.
End of explanation
# .dtypes
data.dtypes
Explanation: Note : .isnull() returns a boolean array ( an array of Trues and Falses ), that you can then use to index the dataframe directly. Here, our boolean array tells us which entries in the coffees column are null, and we use that to index against the full dataframe - so we get back every column in the dataframe, but only those rows where coffees is null.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.isnull.html
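The same pattern works for any boolean mask - a quick sketch :
mask = data.coffees.isnull()    # a boolean Series with the same index as data
data[mask]                      # rows where coffees is null
data[~mask]                     # ~ inverts the mask : rows where coffees is not null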
What type of Python objects are the columns ?
End of explanation
# print the first element of the series with [] indexing
print(data.timestamp[0])
# print its type()
print(type(data.timestamp[0]))
Explanation: The contributor column makes sense as object, because we expect strings there; but surely the timestamp should be a timestamp-type, and coffees should be numerical ?
Let's inspect what's in the timestamp column.
End of explanation
# cast the coffees column using pd.to_numeric, and coerce errors
data.coffees = pd.to_numeric(data.coffees, errors="coerce")
data.head()
Explanation: It looks like the timestamp field was read from CSV as a string. That makes sense - CSV files are very basic. We'll have pandas interpret these strings as datetimes for us automatically.
Note : here's an example of using direct [] indexing on a pd.Series. We're accessing the first entry, just to see what type of object we have there.
On our first pass, what problems did we find ?
The timestamp column contains strings; these need to be datetimes
The coffees column contains some null values and at least one string
Cleaning the data
The coffees column should only contain numerical data.
End of explanation
# Use .dropna() using a subset, and pass inplace
data.dropna(subset=["coffees"], inplace=True)
data.head()
Explanation: #### The coffees column contains NaNs.
End of explanation
# Cast to int using .astype()
data.coffees = data.coffees.astype(int)
data.head()
Explanation: The coffees column is of type float.
End of explanation
# pd.to_datetime()
data.timestamp = pd.to_datetime(data.timestamp)
# Confirm dtypes
data.dtypes
Explanation: Let's have pandas parse the timestamp strings to datetime objects.
End of explanation
# .describe(), passing the include kwarg to see all information
data.describe(include="all")
# What do the first few rows look like ?
data.head()
Explanation: So where do we stand ?
End of explanation
# .plot() on the coffees series
data.coffees.plot()
Explanation: Note : .describe(include="all") is describing all attributes of all columns, but some don't make sense based on the column's dtype. For example, the contributor column has no first and last attributes, because those describe the first and last entries in an ordered series. That makes sense for the timestamp - those have an intuitive definition of sorting - but not so much for strings ( alphabetical order doesn't really matter when they're arbitrary strings ). Similary, the timestamp column has no mean or other numerical traits. What does it mean to calculate the mean timestamp ?
The time-series at a glance
Let's begin by visualising the coffee counts.
End of explanation
# .plot() on the dataframe, setting x to the timestamp, with dot-dash style
data.plot(x=data.timestamp, style=".-")
Explanation: pandas is plotting the coffees against the index, which is just a series of integers.
Note : .plot() on a pd.Series will plot the data against the index. On a pd.DataFrame, the .plot() method allows plotting of one column against another.
By default, .plot() renders a line graph, but you can specify which type of plot you'd like - bar, line, histogram, area, scatter, etc..
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.plot.html
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.plot.html
Let's use the dataframe's plot() method rather than that of the series.
End of explanation
# .tail() with ten rows
data.tail(n=10)
Explanation: We have some very uneven spacing in places. We might start by cutting off the last few points of this time-series, which is missing a lot of data.
We'll inspect the last few points of this time-series.
End of explanation
# Use conditional indexing against the timestamp
data = data[data.timestamp < "2013-03-01"]
data.tail()
Explanation: After mid-March, things start getting spaced rather erratically.
Let's cut off the tail of the time-series, anything after 2013-03-01.
End of explanation
# Once again, plot the data against the timestamp
data.plot(x=data.timestamp, style=".-")
Explanation: Note : this is another example of boolean indexing. data.timestamp < "2013-03-01" is a boolean array, and can be passed into the dataframe immediately in [], much like with a np.ndarray.
One final look.
End of explanation
# .value_counts()
data.contributor.value_counts()
Explanation: 1. Contributions to the time-series
Who are our main contributors ?
End of explanation
# .plot() a bar chart from the value counts
data.contributor.value_counts().plot(kind="bar")
Explanation: Note : .value_counts() counts the unique values in a series. It's similar to doing a .groupby() followed by a .count(), as we'll see soon.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html
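Roughly, the two approaches look like this ( the groupby version counts every remaining column ) :
data.contributor.value_counts()         # counts per unique contributor, sorted
data.groupby("contributor").count()     # similar numbers, returned as a dataframe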
Let's plot this.
End of explanation
# Create a series of the weekdays for each entry using .dt.weekday
weekdays = data.timestamp.dt.weekday
# assign() it to our dataframe
data = data.assign(weekdays=weekdays)
data.head()
Explanation: On which weekdays were contributions made ?
End of explanation
weekday_names = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
weekday_dict = {key: weekday_names[key] for key in range(7)}
# Use .apply() to apply a custom function to the weekdays column
data.weekdays = data.weekdays.apply(lambda x: weekday_dict[x])
data.head()
Explanation: Can we replace these integers with actual weekdays ?
End of explanation
# .groupby() the weekdays and then .count() rows in each group
weekday_counts = data.groupby("weekdays").count()
# We can reorder this dataframe by our weekday_names list
weekday_counts = weekday_counts.loc[weekday_names]
weekday_counts
Explanation: Let's group by these weekdays.
End of explanation
# .plot() a bar chart of data in weekday_counts
weekday_counts.timestamp.plot(kind="bar", title="Datapoints added on each weekday")
Explanation: Note : this first line could be replaced by weekday_counts = data.weekdays.value_counts(), with the only difference being that that would return a series to us, and here, we got back a dataframe.
We can now visualise these weekday counts.
End of explanation
# Set the dataframe's .index property
data.index = data.timestamp
# Let's drop the timestamp column, as we no longer need it
data.drop(["timestamp"], axis=1, inplace=True)
data.head()
Explanation: 2. Weekday trends
First, we'll set our timestamps to the dataframe's index
End of explanation
# pd.date_range, with daily frequency, and normalisation
midnights = pd.date_range(data.index[0], data.index[-1], freq="D", normalize=True)
midnights
Explanation: Let's add some rows at midnight on every day.
End of explanation
# Take the union of the existing and new indices
new_index = midnights.union(data.index)
new_index
Explanation: Note : pd.date_range creates a fixed-frequency DatetimeIndex. normalize=True ensures these datetimes are at midnight, and not at whatever time the starting point is.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.date_range.html
Let's take the union of this index and our dataset's index.
End of explanation
# .reindex() the dataframe
upsampled_data = data.reindex(new_index)
upsampled_data.head(10)
Explanation: Note : the union of these indices is just a new index where entries from both indices are present. It's sorted by time.
Now we can reindex our dataframe with this new index.
End of explanation
# .interpolate the upsampled_data using the time method
upsampled_data = upsampled_data.interpolate(method="time")
upsampled_data.head(10)
Explanation: Note : .reindex() keeps any values that conform to the new index, and inserts NaNs where we have no values.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html
We can fill in these NaNs using interpolation.
End of explanation
# .resample() followed by .asfreq()
daily_data = upsampled_data.resample("D").asfreq()
# Drop the contributor column, we no longer need it
daily_data = daily_data.drop(["contributor"], axis=1)
# Generate a column of weekday names
daily_data["weekdays"] = daily_data.index.weekday_name # We did it the slow way before...
daily_data.head()
# Let's plot the data once more, to see how we're doing
daily_data.plot(figsize=(15, 4), style=".")
Explanation: We're now ready to resample the time-series at a daily frequency.
End of explanation
# Use .diff() on the coffees column; follow up with .shift()
coffees_made = daily_data.coffees.diff().shift(-1)
# Add this as a column to the dataframe
daily_data["coffees_made_today"] = coffees_made
daily_data.head()
Explanation: Let's begin by figuring out how many coffees are made on any given day.
End of explanation
# .groupby weekdays, take the mean, and grab the coffees_made_today column
coffees_by_day = daily_data.groupby("weekdays").mean().coffees_made_today
coffees_by_day
Explanation: Note : we use .shift() here because if we look at the .diff() between a Monday and a Tuesday, those coffees are attributed to the Tuesday. However, what we want to say is "this many coffees were made at some point on the Monday", so we shift the entire series up one.
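A toy illustration of that attribution, with made-up numbers :
counts = pd.Series([100, 103, 110], index=pd.to_datetime(["2013-01-07", "2013-01-08", "2013-01-09"]))
counts.diff()            # the 3 and 7 coffees land on the second and third days
counts.diff().shift(-1)  # shifting up attributes them to the day they were actually made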
Now we can group this by weekday.
End of explanation
# Sort coffees_by_day by our list of weekday names
coffees_by_day = coffees_by_day[weekday_names]
# Plot a bar chart
coffees_by_day.plot(kind="bar")
Explanation: Let's order this series and then plot it.
End of explanation
# Bring in data/department_members.csv;
# have the first column be the index, and parse the dates
people = pd.read_csv("data/department_members.csv", index_col=[0], parse_dates=True)
people.head()
Explanation: Wednesday was seminar day...
3. Coffee per person
We can now pull in data on how many people were in the department.
End of explanation
# Use an outer join, then interpolate over missing values using nearest values
daily_data = daily_data.join(people, how="outer").interpolate(method="nearest")
daily_data.head()
Explanation: Let's join the datasets.
End of explanation
# New column is the ratio of coffees made on a given day to number of members in the department
daily_data["coffees_per_person"] = daily_data.coffees_made_today / daily_data.members
# Let's drop those remaining NaNs while we're at it
daily_data.dropna(inplace=True)
daily_data.head()
Explanation: Note : .join() performs a left join by default, while pd.merge defaults to an inner join - that is, any row whose index isn't present in both datasets gets dropped. You can specify whether you want inner, outer, left, or right joins, as well as plenty of other useful options. The pandas API for joining or merging datasets is very developed.
https://pandas.pydata.org/pandas-docs/stable/merging.html
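A small self-contained example of the difference, using toy frames rather than our data :
left = pd.DataFrame({"coffees": [1, 2]}, index=pd.to_datetime(["2013-01-01", "2013-01-02"]))
right = pd.DataFrame({"members": [10]}, index=pd.to_datetime(["2013-01-02"]))
left.join(right, how="inner")    # only 2013-01-02 survives
left.join(right, how="outer")    # both dates are kept, members is NaN on 2013-01-01
left.join(right, how="left")     # every row of left is kept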
Let's create a column for the number of coffees consumed per person.
End of explanation
# Plot the coffees_per_person column
daily_data.coffees_per_person.plot()
Explanation: We can now plot this column.
End of explanation
# pd.read_csv(); try using data/coffee_status.csv
# parse_dates as kwarg; also pass index_col
machine_status = pd.read_csv("data/coffee_status.csv", parse_dates=["date"], index_col="date")
machine_status.head()
Explanation: Those are strange plateaus. We'll pull in another dataset, telling us when the machine was broken.
End of explanation
# .value_counts()
machine_status.status.value_counts()
Explanation: Note : the parse_dates keyword argument takes several values. By passing in a list of strings, we're telling pandas to attempt to parse the dates in columns with those names.
https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html
What values are in the status column ?
End of explanation
# Make a pd.Series from the status series where things are OK
numerical_status = machine_status.status == "OK"
numerical_status.plot()
Explanation: A quick trick to plot this as a time-series...
End of explanation
# .join()
daily_data = daily_data.join(machine_status)
daily_data.head()
Explanation: Note : the first line here creates a boolean pd.Series, holding the value True when machine_status.status is "OK", and False otherwise. Because it's a pd.Series, its index stays the same as that of machine_status, which was a DatetimeIndex. Then, we can plot the boolean series ( True appearing as 1, and False appearing as 0 ), and just quickly scan to see that there are long areas where the coffee machine was operational, with short bouts ( thankfully ! ) of the machine being broken.
Let's join the datasets on the date field !
End of explanation
# Column depicting when the status was "OK"
# Cast the series to ints before as you create a new column in the dataframe
daily_data["numerical_status"] = (daily_data.status == "OK").astype(int)
daily_data.head()
Explanation: We'll bring in this numerical representation of status column into our dataframe too.
End of explanation
# Plot both columns on the same graph, using default args
daily_data[["coffees_per_person", "numerical_status"]].plot()
Explanation: Let's plot both the coffees per person and the numerical status.
End of explanation
# Resample weekly, taking the mean of each week to get a weekly value
weekly_data = daily_data.resample("W").mean()
weekly_data[["coffees_per_person", "numerical_status"]].plot()
Explanation: We see a strong weekday-weekend effect. Resampling weekly will fix that.
End of explanation
<END_TASK> |
55 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
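A minimal sketch of what those lookup dictionaries might look like ( assuming the book has already been read into a string called text, and numpy is imported as np; the names are just illustrative ) :
```python
vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
```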
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
Step5: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
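A minimal sketch of such placeholders in the TensorFlow 1.x API ( assuming batch_size and num_steps are plain Python ints; the names are illustrative ) :
```python
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')  # scalar: no shape given
```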
Exercise
Step8: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
```python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
```
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
```python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
```
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
```python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
```
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob)
Step9: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
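One possible shape-handling sketch ( a rough outline, not necessarily the exact solution; in_size stands for $L$ and out_size for $C$ ) :
```python
seq_output = tf.concat(lstm_output, axis=1)
x = tf.reshape(seq_output, [-1, in_size])
with tf.variable_scope('softmax'):
    softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
    softmax_b = tf.Variable(tf.zeros(out_size))
logits = tf.matmul(x, softmax_w) + softmax_b
out = tf.nn.softmax(logits, name='predictions')
```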
Exercise
Step10: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
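Roughly, with num_classes standing for $C$ ( a sketch, not necessarily the exact solution ) :
```python
y_one_hot = tf.one_hot(targets, num_classes)
y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
loss = tf.reduce_mean(loss)
```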
Exercise
Step11: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
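The clipping step might look something like this, with grad_clip as the chosen threshold ( a sketch ) :
```python
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
```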
Step12: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise
Step13: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular
Step14: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise
Step15: Saved checkpoints
Read up on saving and loading checkpoints here
Step16: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
Step17: Here, pass in the path to a checkpoint and sample from the network. | <ASSISTANT_TASK:>
Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
Explanation: Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
with open('anna.txt', 'r') as f:
text=f.read()
vocab = sorted(set(text))
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
text[:100]
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
encoded[:100]
Explanation: And we can see the characters encoded as integers.
End of explanation
len(vocab)
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the number of characters per batch and number of batches we can make
characters_per_batch =
n_batches =
# Keep only enough characters to make full batches
arr =
# Reshape into n_seqs rows
arr =
for n in range(0, arr.shape[1], n_steps):
# The features
x =
# The targets, shifted by one
y =
yield x, y
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/sequence_batching@1x.png" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimension sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), so let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size; it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
Exercise: Write the code for creating batches in the function below. The exercises in this notebook will not be easy. I've provided a notebook with solutions alongside this notebook. If you get stuck, checkout the solutions. The most important thing is that you don't copy and paste the code into here, type out the solution code yourself.
End of explanation
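# For reference, here is one possible way to complete get_batches, consistent with the
# description above. This is only a sketch; the course's own solutions notebook may differ,
# and the exercise still asks you to write your own version first.
def get_batches(arr, n_seqs, n_steps):
    characters_per_batch = n_seqs * n_steps
    n_batches = len(arr) // characters_per_batch
    # Keep only enough characters to make full batches
    arr = arr[:n_batches * characters_per_batch]
    # Reshape into n_seqs rows
    arr = arr.reshape((n_seqs, -1))
    for n in range(0, arr.shape[1], n_steps):
        # The features: an n_seqs x n_steps window on the array
        x = arr[:, n:n + n_steps]
        # The targets: the inputs shifted by one step, wrapping the first input around
        y = np.zeros_like(x)
        y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
        yield x, y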
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs =
targets =
# Keep probability placeholder for drop out layers
keep_prob =
return inputs, targets, keep_prob
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for `y`.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise: Create the input placeholders in the function below.
End of explanation
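# A minimal sketch of how build_inputs above could be completed. The names and dtypes are the
# usual choices for this notebook, but treat this as an illustration, not the official solution.
def build_inputs_sketch(batch_size, num_steps):
    inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
    targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
    return inputs, targets, keep_prob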
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# Use a basic LSTM cell
lstm =
# Add dropout to the cell outputs
drop =
# Stack up multiple LSTM layers, for deep learning
cell =
initial_state =
return cell, initial_state
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. Previously with TensorFlow 1.0, you could do this
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow 1.0 will create different weight matrices for all cell objects. But, starting with TensorFlow 1.1 you actually need to create new cell objects in the list. To get it to work in TensorFlow 1.1, it should look like
```python
def build_cell(num_units, keep_prob):
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
tf.contrib.rnn.MultiRNNCell([build_cell(num_units, keep_prob) for _ in range(num_layers)])
```
Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Below, we implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
lstm_output: List of output tensors from the LSTM layer
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# Concatenate lstm_output over axis 1 (the columns)
seq_output =
# Reshape seq_output to a 2D tensor with lstm_size columns
x =
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
# Create the weight and bias variables here
softmax_w =
softmax_b =
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits =
# Use softmax to get the probabilities for predicted characters
out =
return out, logits
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise: Implement the output layer in the function below.
End of explanation
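# An illustrative completion of build_output, following the reshape-then-matmul recipe in the
# explanation above (a sketch; the official solution may differ in details).
def build_output_sketch(lstm_output, in_size, out_size):
    # Concatenate the per-step outputs and flatten to (M*N) x L rows
    seq_output = tf.concat(lstm_output, axis=1)
    x = tf.reshape(seq_output, [-1, in_size])
    # The variable scope keeps these weights from clashing with the LSTM cell weights
    with tf.variable_scope('softmax'):
        softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
        softmax_b = tf.Variable(tf.zeros(out_size))
    logits = tf.matmul(x, softmax_w) + softmax_b
    out = tf.nn.softmax(logits, name='predictions')
    return out, logits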
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per sequence per step
y_one_hot =
y_reshaped =
# Softmax cross entropy loss
loss =
return loss
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, since we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise: Implement the loss calculation in the function below.
End of explanation
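# An illustrative completion of build_loss (a sketch): one-hot encode the targets, reshape to
# match the logits, then take the mean softmax cross-entropy.
def build_loss_sketch(logits, targets, lstm_size, num_classes):
    y_one_hot = tf.one_hot(targets, num_classes)
    y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
    return tf.reduce_mean(loss)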
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optmizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
End of explanation
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob =
# Build the LSTM cell
cell, self.initial_state =
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot =
# Run each sequence step through the RNN with tf.nn.dynamic_rnn
outputs, state =
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits =
# Loss and optimizer (with gradient clipping)
self.loss =
self.optimizer =
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise: Use the functions you've implemented previously and tf.nn.dynamic_rnn to build the network.
End of explanation
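# Inside CharRNN.__init__, the two key lines described above would look roughly like this
# (shown as comments because they reference the class attributes being built; a sketch only):
# x_one_hot = tf.one_hot(self.inputs, num_classes)
# outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)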
batch_size = 10 # Sequences per batch
num_steps = 50 # Number of sequence steps per batch
lstm_size = 128 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.01 # Learning rate
keep_prob = 0.5 # Dropout keep probability
Explanation: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
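# The advice above mentions watching the number of parameters in the model. One rough way to
# count them once the graph is built (an illustrative one-liner, not part of the original notebook):
# total_params = sum(np.prod(v.get_shape().as_list()) for v in tf.trainable_variables())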
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise: Set the hyperparameters above to train the network. Watch the training loss, it should be consistently dropping. Also, I highly advise running this on a GPU.
End of explanation
tf.train.get_checkpoint_state('checkpoints')
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation
<END_TASK> |
56 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Parameterizing with Continuous Variables
Step1: Continuous Factors
Base Class for Continuous Factors
Joint Gaussian Distributions
Canonical Factors
Linear Gaussian CPD
In many situations, some variables are best modeled as taking values in some continuous space. Examples include variables such as position, velocity, temperature, and pressure. Clearly, we cannot use a table representation in this case.
Nothing in the formulation of a Bayesian network requires that we restrict attention to discrete variables. The only requirement is that the CPD, $P(X | Y_1, Y_2, \cdots Y_n)$ represent, for every assignment of values $y_1 \in Val(Y_1), y_2 \in Val(Y_2), \cdots, y_n \in val(Y_n)$, a distribution over $X$. In this case, $X$ might be continuous, in which case the CPD would need to represent distributions over a continuum of values; we might also have $X$’s parents continuous, so that the CPD would also need to represent a continuum of different probability distributions. There exists implicit representations for CPDs of this type, allowing us to apply all the network machinery for the continuous case as well.
Base Class for Continuous Factors
This class will behave as a base class for the continuous factor representations. All the present and future factor classes will be derived from this base class. We need to specify the variable names and a pdf function to initialize this class.
Step2: This class supports methods like marginalize, reduce, product and divide just like what we have with discrete classes. One caveat is that when there are a number of variables involved, these methods prove to be inefficient and hence we resort to certain Gaussian or some other approximations which are discussed later.
Step3: The ContinuousFactor class also has a method discretize that takes a pgmpy Discretizer class as input. It will output a list of discrete probability masses or a Factor or TabularCPD object depending upon the discretization method used. Although, we do not have inbuilt discretization algorithms for multivariate distributions for now, the users can always define their own Discretizer class by subclassing the pgmpy.BaseDiscretizer class.
Joint Gaussian Distributions
In its most common representation, a multivariate Gaussian distribution over $X_1 \cdots X_n$ is characterized by an n-dimensional mean vector $\mu$, and a symmetric $n \times n$ covariance matrix $\Sigma$. The density function is most defined as -
$$p(x) = \dfrac{1}{(2\pi)^{n/2}| \Sigma |^{1/2}} \exp[-0.5*(x- \mu )^T \Sigma^{-1}(x- \mu)]$$
The class pgmpy.JointGaussianDistribution provides its representation. This is derived from the class pgmpy.ContinuousFactor. We need to specify the variable names, a mean vector and a covariance matrix for its inialization. It will automatically comute the pdf function given these parameters.
Step4: This class overrides the basic operation methods (marginalize, reduce, normalize, product and divide) as these operations here are more efficient than the ones in its parent class. Most of these operation involve a matrix inversion which is $\mathcal{O}(n^3)$ with repect to the number of variables.
Step5: The others methods can also be used in a similar fashion.
Canonical Factors
While the Joint Gaussian representation is useful for certain sampling algorithms, a closer look reveals that it can also not be used directly in the sum-product algorithms. Why? Because operations like product and reduce, as mentioned above involve matrix inversions at each step.
So, in order to compactly describe the intermediate factors in a Gaussian network without the costly matrix inversions at each step, a simple parametric representation is used known as the Canonical Factor. This representation is closed under the basic operations used in inference
Step6: This class also has a method, to_joint_gaussian to convert the canoncial representation back into the joint gaussian distribution.
Step7: Linear Gaussian CPD
A linear gaussian conditional probability distribution is defined on a continuous variable. All the parents of this variable are also continuous. The mean of this variable, is linearly dependent on the mean of its parent variables and the variance is independent.
For example,
$$P(Y ; x_1, x_2, x_3) = N(\beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_0 ; \sigma^2)$$
Let $Y$ be a linear Gaussian of its parents $X_1, \cdots, X_k
Step8: A Gaussian Bayesian is defined as a network all of whose variables are continuous, and where all of the CPDs are linear Gaussians. These networks are of particular interest as these are an alternate form of representaion of the Joint Gaussian distribution.
These networks are implemented as the LinearGaussianBayesianNetwork class in the module, pgmpy.models.continuous. This class is a subclass of the BayesianModel class in pgmpy.models and will inherit most of the methods from it. It will have a special method known as to_joint_gaussian that will return an equivalent JointGuassianDistribution object for the model. | <ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Explanation: Parameterizing with Continuous Variables
End of explanation
import numpy as np
from scipy.special import beta
# Two variable drichlet ditribution with alpha = (1,2)
def drichlet_pdf(x, y):
return (np.power(x, 1)*np.power(y, 2))/beta(x, y)
from pgmpy.factors.continuous import ContinuousFactor
drichlet_factor = ContinuousFactor(['x', 'y'], drichlet_pdf)
drichlet_factor.scope(), drichlet_factor.assignment(5,6)
Explanation: Continuous Factors
Base Class for Continuous Factors
Joint Gaussian Distributions
Canonical Factors
Linear Gaussian CPD
In many situations, some variables are best modeled as taking values in some continuous space. Examples include variables such as position, velocity, temperature, and pressure. Clearly, we cannot use a table representation in this case.
Nothing in the formulation of a Bayesian network requires that we restrict attention to discrete variables. The only requirement is that the CPD, $P(X | Y_1, Y_2, \cdots Y_n)$ represent, for every assignment of values $y_1 \in Val(Y_1), y_2 \in Val(Y_2), \cdots, y_n \in val(Y_n)$, a distribution over $X$. In this case, $X$ might be continuous, in which case the CPD would need to represent distributions over a continuum of values; we might also have $X$’s parents continuous, so that the CPD would also need to represent a continuum of different probability distributions. There exists implicit representations for CPDs of this type, allowing us to apply all the network machinery for the continuous case as well.
Base Class for Continuous Factors
This class will behave as a base class for the continuous factor representations. All the present and future factor classes will be derived from this base class. We need to specify the variable names and a pdf function to initialize this class.
End of explanation
def custom_pdf(x, y, z):
return z*(np.power(x, 1)*np.power(y, 2))/beta(x, y)
custom_factor = ContinuousFactor(['x', 'y', 'z'], custom_pdf)
custom_factor.scope(), custom_factor.assignment(1, 2, 3)
custom_factor.reduce([('y', 2)])
custom_factor.scope(), custom_factor.assignment(1, 3)
from scipy.stats import multivariate_normal
std_normal_pdf = lambda *x: multivariate_normal.pdf(x, [0, 0], [[1, 0], [0, 1]])
std_normal = ContinuousFactor(['x1', 'x2'], std_normal_pdf)
std_normal.scope(), std_normal.assignment([1, 1])
std_normal.marginalize(['x2'])
std_normal.scope(), std_normal.assignment(1)
sn_pdf1 = lambda x: multivariate_normal.pdf([x], [0], [[1]])
sn_pdf2 = lambda x1,x2: multivariate_normal.pdf([x1, x2], [0, 0], [[1, 0], [0, 1]])
sn1 = ContinuousFactor(['x2'], sn_pdf1)
sn2 = ContinuousFactor(['x1', 'x2'], sn_pdf2)
sn3 = sn1 * sn2
sn4 = sn2 / sn1
sn3.assignment(0, 0), sn4.assignment(0, 0)
Explanation: This class supports methods like marginalize, reduce, product and divide just like what we have with discrete classes. One caveat is that when there are a number of variables involved, these methods prove to be inefficient and hence we resort to certain Gaussian or some other approximations which are discussed later.
End of explanation
from pgmpy.factors.distributions import GaussianDistribution as JGD
dis = JGD(['x1', 'x2', 'x3'], np.array([[1], [-3], [4]]),
np.array([[4, 2, -2], [2, 5, -5], [-2, -5, 8]]))
dis.variables
dis.mean
dis.covariance
dis.pdf([0,0,0])
Explanation: The ContinuousFactor class also has a method discretize that takes a pgmpy Discretizer class as input. It will output a list of discrete probability masses or a Factor or TabularCPD object depending upon the discretization method used. Although, we do not have inbuilt discretization algorithms for multivariate distributions for now, the users can always define their own Discretizer class by subclassing the pgmpy.BaseDiscretizer class.
Joint Gaussian Distributions
In its most common representation, a multivariate Gaussian distribution over $X_1 \cdots X_n$ is characterized by an n-dimensional mean vector $\mu$, and a symmetric $n \times n$ covariance matrix $\Sigma$. The density function is most defined as -
$$p(x) = \dfrac{1}{(2\pi)^{n/2}| \Sigma |^{1/2}} \exp[-0.5*(x- \mu )^T \Sigma^{-1}(x- \mu)]$$
The class pgmpy.JointGaussianDistribution provides its representation. This is derived from the class pgmpy.ContinuousFactor. We need to specify the variable names, a mean vector and a covariance matrix for its initialization. It will automatically compute the pdf function given these parameters.
End of explanation
dis1 = JGD(['x1', 'x2', 'x3'], np.array([[1], [-3], [4]]),
np.array([[4, 2, -2], [2, 5, -5], [-2, -5, 8]]))
dis2 = JGD(['x3', 'x4'], [1, 2], [[2, 3], [5, 6]])
dis3 = dis1 * dis2
dis3.variables
dis3.mean
dis3.covariance
Explanation: This class overrides the basic operation methods (marginalize, reduce, normalize, product and divide) as these operations here are more efficient than the ones in its parent class. Most of these operations involve a matrix inversion which is $\mathcal{O}(n^3)$ with respect to the number of variables.
End of explanation
from pgmpy.factors.continuous import CanonicalDistribution
phi1 = CanonicalDistribution(['x1', 'x2', 'x3'],
np.array([[1, -1, 0], [-1, 4, -2], [0, -2, 4]]),
np.array([[1], [4], [-1]]), -2)
phi2 = CanonicalDistribution(['x1', 'x2'], np.array([[3, -2], [-2, 4]]),
np.array([[5], [-1]]), 1)
phi3 = phi1 * phi2
phi3.variables
phi3.h
phi3.K
phi3.g
Explanation: The other methods can also be used in a similar fashion.
Canonical Factors
While the Joint Gaussian representation is useful for certain sampling algorithms, a closer look reveals that it can also not be used directly in the sum-product algorithms. Why? Because operations like product and reduce, as mentioned above involve matrix inversions at each step.
So, in order to compactly describe the intermediate factors in a Gaussian network without the costly matrix inversions at each step, a simple parametric representation is used known as the Canonical Factor. This representation is closed under the basic operations used in inference: factor product, factor division, factor reduction, and marginalization. Thus, we can define a set of simple data structures that allow the inference process to be performed. Moreover, the integration operation required by marginalization is always well defined, and it is guaranteed to produce a finite integral under certain conditions; when it is well defined, it has a simple analytical solution.
A canonical form $C (X; K,h, g)$ is defined as:
$$C(X; K,h,g) = \exp(-0.5X^TKX + h^TX + g)$$
We can represent every Gaussian as a canonical form. Rewriting the joint Gaussian pdf we obtain,
$N (\mu; \Sigma) = C (K, h, g)$ where:
$$K = \Sigma^{-1}$$
$$h = \Sigma^{-1} \mu$$
$$g = -0.5 \mu^T \Sigma^{-1} \mu - \log\left((2\pi)^{n/2}|\Sigma|^{1/2}\right)$$
Similar to the JointGaussainDistribution class, the CanonicalFactor class is also derived from the ContinuousFactor class but with its own implementations of the methods required for the sum-product algorithms that are much more efficient than its parent class methods. Let us have a look at the API of a few methods in this class.
End of explanation
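# The K, h, g formulas above can also be checked directly with numpy, using the same mean and
# covariance as the earlier JGD example (an illustrative sketch, not part of the pgmpy API):
mu = np.array([1.0, -3.0, 4.0])
sigma = np.array([[4.0, 2.0, -2.0], [2.0, 5.0, -5.0], [-2.0, -5.0, 8.0]])
K_check = np.linalg.inv(sigma)
h_check = K_check.dot(mu)
g_check = -0.5 * mu.dot(h_check) - np.log((2 * np.pi) ** (len(mu) / 2) * np.linalg.det(sigma) ** 0.5)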
phi = CanonicalDistribution(['x1', 'x2'], np.array([[3, -2], [-2, 4]]),
np.array([[5], [-1]]), 1)
jgd = phi.to_joint_gaussian()
jgd.variables
jgd.covariance
jgd.mean
Explanation: This class also has a method, to_joint_gaussian, to convert the canonical representation back into the joint Gaussian distribution.
End of explanation
# For P(Y| X1, X2, X3) = N(-2x1 + 3x2 + 7x3 + 0.2; 9.6)
from pgmpy.factors.continuous import LinearGaussianCPD
cpd = LinearGaussianCPD('Y', [0.2, -2, 3, 7], 9.6, ['X1', 'X2', 'X3'])
print(cpd)
Explanation: Linear Gaussian CPD
A linear gaussian conditional probability distribution is defined on a continuous variable. All the parents of this variable are also continuous. The mean of this variable, is linearly dependent on the mean of its parent variables and the variance is independent.
For example,
$$P(Y ; x_1, x_2, x_3) = N(\beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_0 ; \sigma^2)$$
Let $Y$ be a linear Gaussian of its parents $X_1, \cdots, X_k:
$$p(Y | x) = N(\beta_0 + \beta^T x ; \sigma^2)$$
The distribution of $Y$ is a normal distribution $p(Y)$ where:
$$\mu_Y = \beta_0 + \beta^T \mu$$
$$\sigma^2_Y = \sigma^2 + \beta^T \Sigma \beta$$
The joint distribution over ${X, Y}$ is a normal distribution where:
$$ Cov[X_i; Y] = {\sum_{j=1}^{k} \beta_j \Sigma_{i,j}}$$
Assume that $X_1, \cdots, X_k$ are jointly Gaussian with distribution $\mathcal{N}(\mu; \Sigma)$. Then:
For its representation pgmpy has a class named LinearGaussianCPD in the module pgmpy.factors.continuous. To instantiate an object of this class, one needs to provide a variable name, the value of the $\beta_0$ term, the variance, a list of the parent variable names and a list of the coefficient values of the linear equation (beta_vector), where the list of parent variable names and beta_vector list is optional and defaults to None.
End of explanation
from pgmpy.models import LinearGaussianBayesianNetwork
model = LinearGaussianBayesianNetwork([('x1', 'x2'), ('x2', 'x3')])
cpd1 = LinearGaussianCPD('x1', [1], 4)
cpd2 = LinearGaussianCPD('x2', [-5, 0.5], 4, ['x1'])
cpd3 = LinearGaussianCPD('x3', [4, -1], 3, ['x2'])
# This is a hack due to a bug in pgmpy (LinearGaussianCPD
# doesn't have `variables` attribute but `add_cpds` function
# wants to check that...)
cpd1.variables = [*cpd1.evidence, cpd1.variable]
cpd2.variables = [*cpd2.evidence, cpd2.variable]
cpd3.variables = [*cpd3.evidence, cpd3.variable]
model.add_cpds(cpd1, cpd2, cpd3)
jgd = model.to_joint_gaussian()
jgd.variables
jgd.mean
jgd.covariance
Explanation: A Gaussian Bayesian network is defined as a network all of whose variables are continuous, and where all of the CPDs are linear Gaussians. These networks are of particular interest as they are an alternate form of representation of the Joint Gaussian distribution.
These networks are implemented as the LinearGaussianBayesianNetwork class in the module, pgmpy.models.continuous. This class is a subclass of the BayesianModel class in pgmpy.models and will inherit most of the methods from it. It will have a special method known as to_joint_gaussian that will return an equivalent JointGuassianDistribution object for the model.
End of explanation
<END_TASK> |
57 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
MNIST ML Pros tutorial
This notebook is based on the tutorial found here
This tutorial is very similar to the beginners tutorial except for some incremental improvements added to the end to improve the accuracy
Get the MNIST dataset
Step1: Helper functions
Here we place our helper functions for creating weight & bias variables as well as doing our vanilla 2D convolution and Pooling operations
Step2: First Convolution Layer
Our first layer consists of a convolution layer followed by a max pooling layer. It will compute 32 features for each 5x5 patch.
The weight tensor has the shape
[patch_width, patch_height, num_input_channels, num_output_channels]
Reshape our image to a 4D tensor with the second and third dimension corresponding to image size and the final dimension for the number of colors. The -1 in this case indicates the dimension that will be automatically modified to keep the size of the new tensor the same as the original.
Step3: Second Convolution Layer
We create a similar structure except now we have 32 inputs and 64 feature outputs for each 5x5 patch.
Step4: Densely Connected Layer
We have now done 2 2x2 convolutions which have reduced our image size to 7x7 since every 2x2 convolution effectively produces a new image that is half the size of the input image.
But for each 7x7 image, we now have 64 features. So we will add a layer with 1024 neurons to allow processing on the entire image.
Step5: Dropout Layer
The dropout layer helps to reduce overfitting by dropping connections between neurons in the densely connected layers. This paper has a nice discussion on the matter.
Step6: Readout Layer
We add a layer that takes the output of our fully connected layer and does a softmax regression into our classes.
Step7: Training
It should not that depending on the CPU available this could take some time to complete. | <ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/data/MNIST/",one_hot=True)
sess = tf.InteractiveSession()
Explanation: MNIST ML Pros tutorial
This notebook is based on the tutorial found here
This tutorial is very similar to the beginners tutorial except for some incremental improvements added to the end to improve the accuracy
Get the MNIST dataset
End of explanation
def weight_variable(shape):
initial = tf.truncated_normal(shape,stddev=0.1)
return tf.Variable(initial)
def bias_variable(shape):
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial)
def conv2d(x,W):
return tf.nn.conv2d(x,W,strides=[1,1,1,1],padding='SAME')
def max_pool_2x2(x):
return tf.nn.max_pool(x,ksize=[1,2,2,1],strides=[1,2,2,1], padding='SAME')
# Setup our Input placeholder
x = tf.placeholder(tf.float32, [None, 784])
# Define loss and optimizer
y_ = tf.placeholder(tf.float32,[None,10])
Explanation: Helper functions
Here we place our helper functions for creating weight & bias variables as well as doing our vanilla 2D convolution and Pooling operations
End of explanation
W_conv1 = weight_variable([5,5,1,32])
b_conv1 = bias_variable([32])
x_image = tf.reshape(x,[-1,28,28,1])
h_conv1 = tf.nn.relu(conv2d(x_image,W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)
Explanation: First Convolution Layer
Our first layer consists of a convolution layer followed by a max pooling layer. It will compute 32 features for each 5x5 patch.
The weight tensor has the shape
[patch_width, patch_height, num_input_channels, num_output_channels]
Reshape our image to a 4D tensor with the second and third dimension corresponding to image size and the final dimension for the number of colors. The -1 in this case indicates the dimension that will be automatically modified to keep the size of the new tensor the same as the original.
End of explanation
W_conv2 = weight_variable([5,5,32,64])
b_conv2 = bias_variable([64])
h_conv2 = tf.nn.relu(conv2d(h_pool1,W_conv2)+b_conv2)
h_pool2 = max_pool_2x2(h_conv2)
Explanation: Second Convolution Layer
We create a similar structure except now we have 32 inputs and 64 feature outputs for each 5x5 patch.
End of explanation
W_fc1 = weight_variable([7*7*64,1024])
b_fc1 = bias_variable([1024])
h_pool2_flat = tf.reshape(h_pool2, [-1,7*7*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat,W_fc1)+b_fc1)
Explanation: Densely Connected Layer
We have now applied two rounds of convolution, each followed by 2x2 max pooling, which has reduced our image size to 7x7, since each 2x2 max-pooling step (stride 2) produces a new feature map that is half the width and height of its input.
But for each 7x7 image, we now have 64 features. So we will add a layer with 1024 neurons to allow processing on the entire image.
End of explanation
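# Quick sanity check of the spatial size after the two 2x2 max-pool layers (illustrative):
print(h_pool2.get_shape())  # expect (?, 7, 7, 64)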
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1,keep_prob)
Explanation: Dropout Layer
The dropout layer helps to reduce overfitting by dropping connections between neurons in the densely connected layers. This paper has a nice discussion on the matter.
End of explanation
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2
Explanation: Readout Layer
We add a layer that takes the output of our fully connected layer and computes the logits for a softmax regression over our classes (the softmax itself is applied inside the loss).
End of explanation
cross_entropy = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess.run(tf.global_variables_initializer())
for i in range(10000):
batch = mnist.train.next_batch(50)
if i%100 == 0:
train_accuracy = accuracy.eval(feed_dict={
x:batch[0], y_: batch[1], keep_prob: 1.0})
print("step %d, training accuracy %g"%(i, train_accuracy))
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
print("test accuracy %g"%accuracy.eval(feed_dict={
x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
Explanation: Training
It should be noted that, depending on the CPU available, this could take some time to complete.
End of explanation
<END_TASK> |
58 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Sensor space least squares regression
Predict single trial activity from a continuous variable.
A single-trial regression is performed in each sensor and timepoint
individually, resulting in an Evoked object which contains the
regression coefficient (beta value) for each combination of sensor
and timepoint. Example also shows the T statistics and the associated
p-values.
Note that this example is for educational purposes and that the data used
here do not contain any significant effect.
(See Hauk et al. (2006). The time course of visual word recognition as
revealed by linear regression analysis of ERP data. Neuroimage.)
Step1: Set parameters and read data
Step2: Run regression | <ASSISTANT_TASK:>
Python Code:
# Authors: Tal Linzen <linzen@nyu.edu>
# Denis A. Engemann <denis.engemann@gmail.com>
#
# License: BSD (3-clause)
import numpy as np
import mne
from mne.datasets import sample
from mne.stats.regression import linear_regression
print(__doc__)
data_path = sample.data_path()
Explanation: Sensor space least squares regression
Predict single trial activity from a continuous variable.
A single-trial regression is performed in each sensor and timepoint
individually, resulting in an Evoked object which contains the
regression coefficient (beta value) for each combination of sensor
and timepoint. Example also shows the T statistics and the associated
p-values.
Note that this example is for educational purposes and that the data used
here do not contain any significant effect.
(See Hauk et al. (2006). The time course of visual word recognition as
revealed by linear regression analysis of ERP data. Neuroimage.)
End of explanation
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
tmin, tmax = -0.2, 0.5
event_id = dict(aud_l=1, aud_r=2)
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
picks = mne.pick_types(raw.info, meg='mag', eeg=False, stim=False,
eog=False, exclude='bads')
# Reject some epochs based on amplitude
reject = dict(mag=5e-12)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,
picks=picks, baseline=(None, 0), preload=True,
reject=reject)
Explanation: Set parameters and read data
End of explanation
names = ['intercept', 'trial-count']
intercept = np.ones((len(epochs),), dtype=np.float)
design_matrix = np.column_stack([intercept, # intercept
np.linspace(0, 1, len(intercept))])
# also accepts source estimates
lm = linear_regression(epochs, design_matrix, names)
def plot_topomap(x, unit):
x.plot_topomap(ch_type='mag', scale=1, size=1.5, vmax=np.max,
unit=unit, times=np.linspace(0.1, 0.2, 5))
trial_count = lm['trial-count']
plot_topomap(trial_count.beta, unit='z (beta)')
plot_topomap(trial_count.t_val, unit='t')
plot_topomap(trial_count.mlog10_p_val, unit='-log10 p')
plot_topomap(trial_count.stderr, unit='z (error)')
Explanation: Run regression
End of explanation
<END_TASK> |
59 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Model fitting with cluster-lensing & emcee
Step1: Generate a noisy measurement to fit
Step2: Write down likelihood, prior, and posterior probilities
The model parameters are the mass and centroid offsets. Redshift is assumed to be known.
Step3: Sample the posterior using emcee
Step4: Check walker positions for burn-in
Step5: Model parameter results | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn; seaborn.set()
from clusterlensing import ClusterEnsemble
import emcee
import corner
% matplotlib inline
import matplotlib
matplotlib.rcParams["axes.labelsize"] = 20
matplotlib.rcParams["legend.fontsize"] = 12
Explanation: Model fitting with cluster-lensing & emcee
End of explanation
logm_true = 14
off_true = 0.3
nbins = 10
redshifts = [0.2]
mass = [10**logm_true]
offsets = [off_true]
rbins = np.logspace(np.log10(0.1), np.log10(5), num = nbins)
cdata = ClusterEnsemble(redshifts)
cdata.m200 = mass
cdata.calc_nfw(rbins=rbins, offsets=offsets)
dsigma_true = cdata.deltasigma_nfw.mean(axis=0).value
# add scatter with a stddev of 20% of data
noise = np.random.normal(scale=dsigma_true*0.2, size=nbins)
y = dsigma_true + noise
yerr = np.abs(dsigma_true/3) # 33% error bars
plt.plot(rbins, dsigma_true, 'bo-', label='True $\Delta\Sigma(R)$')
plt.plot(rbins, y, 'g^-', label='Noisy $\Delta\Sigma(R)$')
plt.errorbar(rbins, y, yerr=yerr, color='g', linestyle='None')
plt.xscale('log')
plt.legend(loc='best')
plt.show()
Explanation: Generate a noisy measurement to fit
End of explanation
# probability of the data given the model
def lnlike(theta, z, rbins, data, stddev):
logm, offsets = theta
# calculate the model
c = ClusterEnsemble(z)
c.m200 = [10 ** logm]
c.calc_nfw(rbins=rbins, offsets=[offsets])
model = c.deltasigma_nfw.mean(axis=0).value
diff = data - model
lnlikelihood = -0.5 * np.sum(diff**2 / stddev**2)
return lnlikelihood
# uninformative prior
def lnprior(theta):
logm, offset = theta
if 10 < logm < 16 and 0.0 <= offset < 5.0:
return 0.0
else:
return -np.inf
# posterior probability
def lnprob(theta, z, rbins, data, stddev):
lp = lnprior(theta)
if not np.isfinite(lp):
return -np.inf
else:
return lp + lnlike(theta, z, rbins, data, stddev)
Explanation: Write down likelihood, prior, and posterior probabilities
The model parameters are the mass and centroid offsets. Redshift is assumed to be known.
End of explanation
ndim = 2
nwalkers = 20
p0 = np.random.rand(ndim * nwalkers).reshape((nwalkers, ndim))
p0[:,0] = p0[:,0] + 13.5 # start somewhere close to true logm ~ 14
sampler = emcee.EnsembleSampler(nwalkers, ndim, lnprob,
args=(redshifts, rbins, y, yerr), threads=8)
# the MCMC chains take some time: about 49 minutes for the 500 samples below
i_can_wait = False # or can you? Set to True to run the MCMC chains
if i_can_wait:
pos, prob, state = sampler.run_mcmc(p0, 500)
Explanation: Sample the posterior using emcee
End of explanation
if i_can_wait:
fig, axes = plt.subplots(2, 1, sharex=True, figsize=(8, 6))
axes[0].plot(sampler.chain[:, :, 0].T, color="k", alpha=0.4)
axes[0].axhline(logm_true, color="g", lw=2)
axes[0].set_ylabel("log-mass")
axes[1].plot(sampler.chain[:, :, 1].T, color="k", alpha=0.4)
axes[1].axhline(off_true, color="g", lw=2)
axes[1].set_ylabel("offset")
axes[1].set_xlabel("step number")
Explanation: Check walker positions for burn-in
End of explanation
if i_can_wait:
burn_in_step = 50 # based on a rough look at the walker positions above
samples = sampler.chain[:, burn_in_step:, :].reshape((-1, ndim))
else:
# read in a previously generated chain
samples = np.loadtxt('samples.txt')
fig = corner.corner(samples,
labels=["$\mathrm{log}M_{200}$", "$\sigma_\mathrm{off}$"],
truths=[logm_true, off_true])
fig.savefig('cornerplot.png')
# save the chain for later
np.savetxt('samples.txt', samples)
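# A common way to summarize the posterior samples (an illustrative addition, following the
# usual 16th/50th/84th percentile convention):
logm_est, off_est = map(lambda v: (v[1], v[2] - v[1], v[1] - v[0]),
                        zip(*np.percentile(samples, [16, 50, 84], axis=0)))
print('log-mass: {:.2f} (+{:.2f} / -{:.2f})'.format(*logm_est))
print('offset:   {:.2f} (+{:.2f} / -{:.2f})'.format(*off_est))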
Explanation: Model parameter results
End of explanation
<END_TASK> |
60 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Edit this next cell to choose a different country / year report
Step1: These next few conversions don't really work. The PPP data field seems wrong.
Step2: But this one only works if you use the PPP given applied to sample mean in LCU, which is odd.
Step3: Minimum and maximum can't be checked, but load them anyway in case we use them later.
Step4: Gini is calculated directly from $L(p)$, or perhaps from the underlying unit record data.
Step5: Estimating tail statistics, like headcount poverty, is a little harder. Povcalnet likely uses the underlying unit record data, and the 100 point Lorenz curve is likely too coarse to get a comparable result. So at this step we fit a model of the Lorenz curve using splines.
We use weights to gently encourage the optimization to fit better at the minimum and maximum incomes, and to the first 10% of Lorenz points (to upweight the left tail fit). Unfortunately very large weights tend to produce spurious peaks in the PDF as the algorithm tries to fit the tails, so we sacrifice the tails to avoid this. It's more of a suggestion than a constraint
Step6: Although the sample extrema only say a little about the population extrema, it's interesting to see how they compare with those implied by the spline fit. It is technically disqualifying, but not surprising, if the sample extrema lie outside those from the fit.
If the natural computed maximum is less than the given, we want to use the upper weights. If not, it's better not to as this will force down the maximum which might result in an ill-formed distribution. A good strategy would be to fit once without weights, then choose the weights depending on how the extrema look.
Step7: The FGT2 index is very sensitive to distribution, and neither method reproduces the Povcalnet statistic well.
Step8: Errors here usually mean the distributions are not well specified. | <ASSISTANT_TASK:>
Python Code:
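# NOTE: this excerpt assumes earlier notebook cells imported json, numpy (np), pandas (pd),
# scipy, and matplotlib.pyplot (plt), and defined the helper functions `myassert`,
# `inverse`, and `derivative` that are used below but not shown here.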
# BGR_3_2001.json
# BRA_3_2001.json
# MWI_3_2010.23.json
# ECU_3_2014.json
# ARM_3_2010.json
# NGA_3_2009.83.json
# IDN_1_2014.json quite pointed / triangular
# PHL_3_2009.json
# ZAR_3_2012.4.json
# TZA_3_2011.77.json
# VNM_3_2008.json
# MOZ_3_2008.67.json quite rounded
# UZB_3_2003.json
# KIR_3_2006.json needs 1e-4
# PNG_3_2009.67.json needs False and 1e-5
# PAK_3_2013.5.json
# BGD_3_2010.json not super good need 1e-5
# ARG_2_1991.json needs False currency scales weird
with open("../jsoncache/MOZ_3_2008.67.json","r") as f:
d = json.loads(f.read())
print("Sample size".ljust(20),d['sample']['N'])
for k in d['dataset']:
print(k.ljust(20),d['dataset'][k])
Explanation: Edit this next cell to choose a different country / year report:
End of explanation
# Check poverty line conversion
DAYS_PER_MONTH = 30.4167
line_month_ppp_calc = d['inputs']['line_day_ppp'] * DAYS_PER_MONTH
line_month_ppp_given = d['inputs']['line_month_ppp']
myassert("Poverty line (PPP):", line_month_ppp_calc, line_month_ppp_given)
ppp = d['inputs']['ppp']
line_month_lcu_calc = line_month_ppp_calc * ppp
line_month_lcu_given = d['inputs']['line_month_lcu']
myassert("Poverty line (LCU):", line_month_lcu_calc, line_month_lcu_given)
# Check data mean
sample_mean_ppp_calc = d['sample']['mean_month_lcu'] / ppp
sample_mean_ppp_given = d['sample']['mean_month_ppp']
myassert("Data mean (PPP):", sample_mean_ppp_calc, sample_mean_ppp_given)
implied_ppp = d['sample']['mean_month_lcu'] / d['sample']['mean_month_ppp']
myassert("Implied PPP:", implied_ppp, ppp)
Explanation: These next few conversions don't really work. The PPP data field seems wrong.
End of explanation
pop_N = d['sample']['effective_pop_N']
total_wealth_calc = pop_N * sample_mean_ppp_calc
total_wealth_given = d['sample']['effective_pop_wealth']
myassert("Total wealth:", total_wealth_calc, total_wealth_given)
Explanation: But this one only works if you use the PPP given applied to sample mean in LCU, which is odd.
End of explanation
# Load the min and max in case we use them to fit the Lorenz curve
sample_max_ppp_given = d['sample']['month_max']
sample_min_ppp_given = d['sample']['month_min']
Explanation: Minimum and maximum can't be checked, but load them anyway in case we use them later.
End of explanation
# Load the Lorenz curve
L = d['lorenz']['L']
p = d['lorenz']['p']
# We need to add the origin, by definition
p = [0.0] + p
L = [0.0] + L
# We can, if we want, use the sample min and max to add a point to the curve
if True:
dp = 1 / d['sample']['N']
dlorenz_at_0 = sample_min_ppp_given/sample_mean_ppp_given
dlorenz_at_1 = sample_max_ppp_given/sample_mean_ppp_given
p_second = 0 + dp
p_penultimate = 1 - dp
L_second = 0 + dlorenz_at_0 * dp
L_penultimate = 1 - dlorenz_at_1 * dp
p = [0.0, p_second] + p[1:-1] + [p_penultimate, 1.0]
L = [0.0, L_second] + L[1:-1] + [L_penultimate, 1.0]
lorenz = pd.DataFrame({'p': p, 'L': L})
lorenz['dp'] = lorenz.p.shift(-1)[:-1] - lorenz.p[:-1]
lorenz['dL'] = lorenz.L.shift(-1)[:-1] - lorenz.L[:-1]
lorenz['dLdp'] = lorenz.dL / lorenz.dp
# Now, F(y) = inverse of Q(p)
lorenz['y'] = lorenz.dLdp * sample_mean_ppp_given
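# The quantile function is Q(p) = mean * dL/dp, so each Lorenz-curve slope gives the income at that quantile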
# Calc and compare Ginis
G_calc = 1 - sum(0.5 * lorenz.dp[:-1] * (lorenz.L.shift(-1)[:-1] + lorenz.L[:-1])) / 0.5
G_given = d['dist']['Gini']
myassert("Gini:",G_calc, G_given)
Explanation: Gini is calculated directly from $L(p)$, or perhaps from the underlying unit record data.
End of explanation
##########################################
plt.rcParams["figure.figsize"] = (12,2.5)
fig, ax = plt.subplots(1, 4)
##########################################
thehead = int(len(lorenz)*0.1)
themiddle = len(lorenz) - thehead - 2 - 2
lorenz['w'] = ([100, 100] + [10] * thehead) + ([1] * themiddle) + [1, 1]
#lorenz.w = [10]*thehead + [1]*(len(lorenz)-thehead)
lorenz_interp = scipy.interpolate.UnivariateSpline(lorenz.p,lorenz.L,w=lorenz.w,k=5,s=1e-7)
quantile = lambda p: sample_mean_ppp_given * lorenz_interp.derivative()(p)
cdf = inverse(quantile)
pdf = derivative(cdf)
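# cdf is the numerical inverse of the fitted quantile function, and pdf its numerical
# derivative (inverse/derivative are helpers assumed to be defined earlier in the notebook)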
pgrid = np.linspace(0, 1, 1000)
ax[0].plot(pgrid, lorenz_interp(pgrid))
ax[1].plot(pgrid, quantile(pgrid))
ygrid = np.linspace(0, quantile(0.97), 1000)
ax[2].plot(ygrid, cdf(ygrid))
ax[3].plot(ygrid, pdf(ygrid));
Explanation: Estimating tail statistics, like headcount poverty, is a little harder. Povcalnet likely uses the underlying unit record data, and the 100 point Lorenz curve is likely too coarse to get a comparable result. So at this step we fit a model of the Lorenz curve using splines.
We use weights to gently encourage the optimization to fit better at the minimum and maximum incomes, and to the first 10% of Lorenz points (to upweight the left tail fit). Unfortunately very large weights tend to produce spurious peaks in the PDF as the algorithm tries to fit the tails, so we sacrifice the tails to avoid this. It's more of a suggestion than a constraint :-)
End of explanation
myassert("Minimum",quantile(0),sample_min_ppp_given)
myassert("Maximum",quantile(1),sample_max_ppp_given)
myassert("Minimum / mean",quantile(0)/sample_mean_ppp_given,sample_min_ppp_given/sample_mean_ppp_given)
HC_calc = float(cdf(line_month_ppp_given))
HC_given = float(d['dist']['HC'])
myassert("HC",HC_calc,HC_given)
Explanation: Although the sample extrema only say a little about the population extrema, it's interesting to see how they compare with those implied by the spline fit. It is technically disqualifying, but not surprising, if the sample extrema lie outside those from the fit.
If the natural computed maximum is less than the given, we want to use the upper weights. If not, it's better not to as this will force down the maximum which might result in an ill-formed distribution. A good strategy would be to fit once without weights, then choose the weights depending on how the extrema look.
End of explanation
# Poverty gap
lorenz['PG'] = (line_month_ppp_given - lorenz.y) / line_month_ppp_given
lorenz.loc[lorenz.PG < 0, 'PG'] = 0
PG_direct = sum(lorenz.PG[:-1] * lorenz.dp[:-1])
PG_f = lambda y: pdf(y) * (line_month_ppp_given - y) # PL * Q(PL) - mu * L(Q(PL))
PG_model = (line_month_ppp_given * cdf(line_month_ppp_given) - sample_mean_ppp_given * lorenz_interp(cdf(line_month_ppp_given)) ) / line_month_ppp_given
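# Uses the identity PG = (z*F(z) - mu*L(F(z))) / z, from integrating (z - Q(p))/z over p in [0, F(z)]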
PG_given = d['dist']['PG']
myassert("PG direct",PG_direct,PG_given)
myassert("PG model",PG_model,PG_given)
# Poverty gap squared (FGT2)
lorenz['FGT2'] = lorenz.PG * lorenz.PG
FGT2_direct = sum(lorenz.FGT2[:-1] * lorenz.dp[:-1])
# Numerical integration doesn't work great for second moments so we simulate
M = 100000
FGT2_sim = 0
Watts_sim = 0
#bottom = cdf(sample_min_ppp_given)
bottom = 0.0
top = cdf(line_month_ppp_given)
for m in range(M):
sim_y = quantile(np.random.uniform(bottom, top))
FGT2_sim += (line_month_ppp_given - sim_y)**2 / line_month_ppp_given**2
Watts_sim += np.log(line_month_ppp_given / sim_y)
FGT2_sim /= (M / cdf(line_month_ppp_given))
Watts_sim /= (M / cdf(line_month_ppp_given))
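# Draws are restricted to the poor (p in [0, F(z)]), so the sample means are rescaled by F(z)
# to give the population-level indices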
FGT2_given = d['dist']['FGT2']
myassert("FGT2 direct",FGT2_direct,FGT2_given)
myassert("FGT2 model simulated",FGT2_sim,FGT2_given)
# Median
median_calc = lorenz.y[(lorenz.p - 0.5).abs().argmin()]
median_interp_calc = quantile(0.5)
median_given = d['dist']['median_ppp']
myassert("Median direct",median_calc,median_given)
myassert("Median model",median_interp_calc,median_given)
Explanation: The FGT2 index is very sensitive to distribution, and neither method reproduces the Povcalnet statistic well.
End of explanation
# Mean log deviation (MLD)
lorenz['LD'] = np.log(sample_mean_ppp_given) - np.log(lorenz.y)
MLD_calc = sum(lorenz.LD[:-1] * lorenz.dp[:-1])
# Numerical integration doesn't work great for weird things so we simulate
M = 100000
MLD_sim = 0
for m in range(M):
sim_y = quantile(np.random.uniform(0, 1))
increment = np.log(sample_mean_ppp_given / sim_y)
MLD_sim += increment
MLD_sim /= M
MLD_given = d['dist']['MLD']
myassert("MLD direct",MLD_calc,MLD_given)
myassert("MLD model simulated",MLD_sim,MLD_given)
# Watts index
lorenz['Watts'] = np.log(line_month_ppp_given) - np.log(lorenz.y)
lorenz.loc[lorenz.Watts < 0, 'Watts'] = 0
Watts_calc = sum(lorenz.Watts[:-1] * lorenz.dp[:-1])
# Watts_sim simulated above with FGT2
Watts_given = d['dist']['Watt']
myassert("Watts direct",Watts_calc,Watts_given)
myassert("Watts model simulated",Watts_sim,Watts_given)
Explanation: Errors here usually mean the distributions are not well specified.
End of explanation
<END_TASK> |
61 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Language Classifier
Step1: Text Classification from Folders
Step2: Footnote
Step3: Bigrams/Trigrams
Step4: Specify the ngram_range - the smallest n-gram to use, and the largest. The default is (1,1), so only 1-grams are used. This example calculates the 1-gram and the 2-gram (bi-gram) | <ASSISTANT_TASK:>
Python Code:
count,feature_names=text.count_letters('data/languages/E3.txt')
print((count,feature_names))
count,feature_names=text.count_letters('data/languages/E3.txt')
print((count,feature_names))
p=text.letter_freq('English',feature_names)
print(p)
print((sum(count*log10(p))))
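# sum(count*log10(p)) is the (base-10) log-likelihood of the observed letter counts under
# English letter frequencies, up to the multinomial coefficient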
C=text.LanguageFileClassifier()
result=C.loglikelihood('data/languages/E*.txt',verbose=True)
C.predict('data/languages/E*',verbose=True)
[C.target_names[i] for i in C.predict('data/languages/E*')]
Explanation: Language Classifier
End of explanation
from classy import text
train=text.load_files('data/films/train',verbose=True)
test=text.load_files('data/films/test',verbose=True)
train,test=text.text_to_vectors('data/films/train','data/films/test',verbose=True)
train.vectors
v=array(train.vectors[0,:].todense()).ravel()
v.max()
v.shape
v=array(train.vectors[0,:].todense()).ravel()
plot(v,'.')
v=array(train.vectors[10,:].todense()).ravel()
plot(v,'.')
xlabel('feature number')
ylabel('frequency of feature')
train.vectors.shape
C=text.Multinomial()
C.fit(train.vectors,train.targets)
C.predict(test.vectors)
C.percent_correct(test.vectors,test.targets)
Explanation: Text Classification from Folders
End of explanation
from classy import *
train_files=text.load_files('data/films/train',verbose=True)
test_files=text.load_files('data/films/test',verbose=True)
train_data,test_data=text.text_to_vectors(train_files,test_files,verbose=True)
train_data.vectors
vectors_to_image(train_data.vectors,binary=True)
vectors_to_image(train_data.vectors,binary=False)
Explanation: Footnote
End of explanation
from classy import text
train_files=text.load_files('data/films/train',verbose=True)
test_files=text.load_files('data/films/test',verbose=True)
train_data,test_data=text.text_to_vectors(train_files,test_files,verbose=True)
train_data.vectors
Explanation: Bigrams/Trigrams
End of explanation
train_data,test_data=text.text_to_vectors(train_files,test_files,ngram_range=(1,2),verbose=True)
train_data.vectors
print((train_data.feature_names[:100]))
Explanation: Specify the ngram_range - the smallest n-gram to use, and the largest. The default is (1,1), so only 1-grams are used. This example calculates the 1-gram and the 2-gram (bi-gram)
End of explanation
<END_TASK> |
62 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Aerosol
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step12: 2.2. Code Version
Is Required
Step13: 2.3. Code Languages
Is Required
Step14: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required
Step15: 3.2. Split Operator Advection Timestep
Is Required
Step16: 3.3. Split Operator Physical Timestep
Is Required
Step17: 3.4. Integrated Timestep
Is Required
Step18: 3.5. Integrated Scheme Type
Is Required
Step19: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required
Step20: 4.2. Variables 2D
Is Required
Step21: 4.3. Frequency
Is Required
Step22: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required
Step23: 5.2. Canonical Horizontal Resolution
Is Required
Step24: 5.3. Number Of Horizontal Gridpoints
Is Required
Step25: 5.4. Number Of Vertical Levels
Is Required
Step26: 5.5. Is Adaptive Grid
Is Required
Step27: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required
Step28: 6.2. Global Mean Metrics Used
Is Required
Step29: 6.3. Regional Metrics Used
Is Required
Step30: 6.4. Trend Metrics Used
Is Required
Step31: 7. Transport
Aerosol transport
7.1. Overview
Is Required
Step32: 7.2. Scheme
Is Required
Step33: 7.3. Mass Conservation Scheme
Is Required
Step34: 7.4. Convention
Is Required
Step35: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required
Step36: 8.2. Method
Is Required
Step37: 8.3. Sources
Is Required
Step38: 8.4. Prescribed Climatology
Is Required
Step39: 8.5. Prescribed Climatology Emitted Species
Is Required
Step40: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required
Step41: 8.7. Interactive Emitted Species
Is Required
Step42: 8.8. Other Emitted Species
Is Required
Step43: 8.9. Other Method Characteristics
Is Required
Step44: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required
Step45: 9.2. Prescribed Lower Boundary
Is Required
Step46: 9.3. Prescribed Upper Boundary
Is Required
Step47: 9.4. Prescribed Fields Mmr
Is Required
Step48: 9.5. Prescribed Fields Mmr
Is Required
Step49: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required
Step50: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required
Step51: 11.2. Dust
Is Required
Step52: 11.3. Organics
Is Required
Step53: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required
Step54: 12.2. Internal
Is Required
Step55: 12.3. Mixing Rule
Is Required
Step56: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required
Step57: 13.2. Internal Mixture
Is Required
Step58: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required
Step59: 14.2. Shortwave Bands
Is Required
Step60: 14.3. Longwave Bands
Is Required
Step61: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required
Step62: 15.2. Twomey
Is Required
Step63: 15.3. Twomey Minimum Ccn
Is Required
Step64: 15.4. Drizzle
Is Required
Step65: 15.5. Cloud Lifetime
Is Required
Step66: 15.6. Longwave Bands
Is Required
Step67: 16. Model
Aerosol model
16.1. Overview
Is Required
Step68: 16.2. Processes
Is Required
Step69: 16.3. Coupling
Is Required
Step70: 16.4. Gas Phase Precursors
Is Required
Step71: 16.5. Scheme Type
Is Required
Step72: 16.6. Bulk Scheme Species
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mohc', 'sandbox-3', 'aerosol')
Explanation: ES-DOC CMIP6 Model Properties - Aerosol
MIP Era: CMIP6
Institute: MOHC
Source ID: SANDBOX-3
Topic: Aerosol
Sub-Topics: Transport, Emissions, Concentrations, Optical Radiative Properties, Model.
Properties: 69 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:15
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Meteorological Forcings
5. Key Properties --> Resolution
6. Key Properties --> Tuning Applied
7. Transport
8. Emissions
9. Concentrations
10. Optical Radiative Properties
11. Optical Radiative Properties --> Absorption
12. Optical Radiative Properties --> Mixtures
13. Optical Radiative Properties --> Impact Of H2o
14. Optical Radiative Properties --> Radiative Scheme
15. Optical Radiative Properties --> Cloud Interactions
16. Model
1. Key Properties
Key properties of the aerosol model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of aerosol model code
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/volume ratio for aerosols"
# "3D number concenttration for aerosols"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Prognostic variables in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of tracers in the aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are aerosol calculations generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses atmospheric chemistry time stepping"
# "Specific timestepping (operator splitting)"
# "Specific timestepping (integrated)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Physical properties of seawater in ocean
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the time evolution of the prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for aerosol physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the aerosol model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.5. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_3D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Meteorological Forcings
**
4.1. Variables 3D
Is Required: FALSE Type: STRING Cardinality: 0.1
Three dimensional forcing variables, e.g. U, V, W, T, Q, P, convective mass flux
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.variables_2D')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Variables 2D
Is Required: FALSE Type: STRING Cardinality: 0.1
Two dimensional forcing variables, e.g. land-sea mask definition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.meteorological_forcings.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Frequency
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Frequency with which meteological forcings are applied (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Resolution
Resolution in the aersosol model grid
5.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 5.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for aerosol model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Transport
Aerosol transport
7.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of transport in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Specific transport scheme (eulerian)"
# "Specific transport scheme (semi-lagrangian)"
# "Specific transport scheme (eulerian and semi-lagrangian)"
# "Specific transport scheme (lagrangian)"
# TODO - please enter value(s)
Explanation: 7.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for aerosol transport modeling
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.mass_conservation_scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Mass adjustment"
# "Concentrations positivity"
# "Gradients monotonicity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.3. Mass Conservation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to ensure mass conservation.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.transport.convention')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Uses Atmospheric chemistry transport scheme"
# "Convective fluxes connected to tracers"
# "Vertical velocities connected to tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.4. Convention
Is Required: TRUE Type: ENUM Cardinality: 1.N
Transport by convention
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Emissions
Atmospheric aerosol emissions
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of emissions in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Prescribed (climatology)"
# "Prescribed CMIP6"
# "Prescribed above surface"
# "Interactive"
# "Interactive above surface"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.2. Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Method used to define aerosol species (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Volcanos"
# "Bare ground"
# "Sea surface"
# "Lightning"
# "Fires"
# "Aircraft"
# "Anthropogenic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the aerosol species are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Interannual"
# "Annual"
# "Monthly"
# "Daily"
# TODO - please enter value(s)
Explanation: 8.4. Prescribed Climatology
Is Required: FALSE Type: ENUM Cardinality: 0.1
Specify the climatology type for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed via a climatology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of aerosol species emitted and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.emissions.other_method_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Other Method Characteristics
Is Required: FALSE Type: STRING Cardinality: 0.1
Characteristics of the "other method" used for aerosol emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Concentrations
Atmospheric aerosol concentrations
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of concentrations in atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as mass mixing ratios.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.concentrations.prescribed_fields_mmr')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Prescribed Fields Mmr
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed as AOD plus CCNs.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Optical Radiative Properties
Aerosol optical and radiative properties
10.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of optical and radiative properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.black_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11. Optical Radiative Properties --> Absorption
Absortion properties in aerosol scheme
11.1. Black Carbon
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of black carbon at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.dust')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Dust
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of dust at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.absorption.organics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.3. Organics
Is Required: FALSE Type: FLOAT Cardinality: 0.1
Absorption mass coefficient of organics at 550nm (if non-absorbing enter 0)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.external')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Optical Radiative Properties --> Mixtures
**
12.1. External
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there external mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.internal')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12.2. Internal
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there internal mixing with respect to chemical composition?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.mixtures.mixing_rule')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Mixing Rule
Is Required: FALSE Type: STRING Cardinality: 0.1
If there is internal mixing with respect to chemical composition then indicate the mixinrg rule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.size')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13. Optical Radiative Properties --> Impact Of H2o
**
13.1. Size
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact size?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.impact_of_h2o.internal_mixture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.2. Internal Mixture
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does H2O impact internal mixture?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Optical Radiative Properties --> Radiative Scheme
Radiative scheme for aerosol
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.shortwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Shortwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of shortwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.radiative_scheme.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Optical Radiative Properties --> Cloud Interactions
Aerosol-cloud interactions
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of aerosol-cloud interactions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.2. Twomey
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the Twomey effect included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.twomey_minimum_ccn')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Twomey Minimum Ccn
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If the Twomey effect is included, then what is the minimum CCN number?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.drizzle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.4. Drizzle
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect drizzle?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.cloud_lifetime')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Cloud Lifetime
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the scheme affect cloud lifetime?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.optical_radiative_properties.cloud_interactions.longwave_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.6. Longwave Bands
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of longwave bands
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Model
Aerosol model
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dry deposition"
# "Sedimentation"
# "Wet deposition (impaction scavenging)"
# "Wet deposition (nucleation scavenging)"
# "Coagulation"
# "Oxidation (gas phase)"
# "Oxidation (in cloud)"
# "Condensation"
# "Ageing"
# "Advection (horizontal)"
# "Advection (vertical)"
# "Heterogeneous chemistry"
# "Nucleation"
# TODO - please enter value(s)
Explanation: 16.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the Aerosol model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Radiation"
# "Land surface"
# "Heterogeneous chemistry"
# "Clouds"
# "Ocean"
# "Cryosphere"
# "Gas phase chemistry"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other model components coupled to the Aerosol model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.gas_phase_precursors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "DMS"
# "SO2"
# "Ammonia"
# "Iodine"
# "Terpene"
# "Isoprene"
# "VOC"
# "NOx"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.4. Gas Phase Precursors
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of gas phase aerosol precursors.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bulk"
# "Modal"
# "Bin"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.5. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type(s) of aerosol scheme used by the aerosols model (potentially multiple: some species may be covered by one type of aerosol scheme and other species covered by another type).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.aerosol.model.bulk_scheme_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon / soot"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.6. Bulk Scheme Species
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of species covered by the bulk scheme.
End of explanation
<END_TASK> |
63 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Exercise 1 - ATM (Caixa Eletrônico)
You are developing the software for a bank's ATM. The customer tells the ATM how much money they want to withdraw, and the ATM must dispense that amount using the bills it has available. For example, for an ATM that only has 10, 20 and 50 real bills, a user who requests a withdrawal of 180 reais should receive 3 bills of 50, 2 bills of 20 and 1 bill of 10.
Write the function CaixaEletronico whose input parameter is the amount to be withdrawn. The function must return the number of bills of each denomination dispensed, in decreasing order of bill value. The ATM must dispense the smallest possible number of bills that add up to the requested amount. Assume the ATM has bills of 50, 20, 10, 5 and 1.
Hint
Step1: Exercise 2 - Compound Interest (Juros Compostos)
Compound interest is the most common regime in the financial system and therefore the most useful for everyday calculations. The interest generated in each period is added to the principal when computing the next period's interest
Step2: Exercise 3 - Income Tax (Imposto de Renda)
A worker's gross salary is subject to contributions and taxes. The most common are the retirement contribution and the income tax. The income tax base is the gross salary minus the retirement contributions. Assume the retirement contribution rate is 11% of the total gross salary, regardless of the amount received during the year. Also assume that, on the annual income tax base, the worker is exempt from income tax up to 24 thousand reais per year; only the amount exceeding that value is taxed at 15% as income tax.
Write the function ImpostoRenda whose input is the annual gross salary and whose output is the net amount received during the year.
Step3: Exercise 4 - Summation (Somatória)
Create the function Soma that computes the following summation
Step4: Exercise 5 - Palindrome (Palíndromo)
Write a function Palindromo that takes a string as its input parameter and returns True if it is a palindrome and False otherwise. A string is a palindrome if it reads the same from left to right and from right to left. Examples
Step5: Exercise 6 - Smallest and largest in a list
Create the function MaiorMenor which, given a list of numbers as input, returns the largest and smallest element of that list.
Step6: Exercise 7 - N consecutive odd numbers
It is known that a number of the form $n^3$ equals the sum of $n$ consecutive odd numbers | <ASSISTANT_TASK:>
Python Code:
def CaixaEletronico(valor):
notas50 = valor // 50
valor = valor % 50
notas20 = valor // 20
valor = valor % 20
notas10 = valor // 10
valor = valor % 10
notas5 = valor // 5
valor = valor % 5
notas1 = valor // 1
return (notas50, notas20, notas10, notas5, notas1)
CaixaEletronico(184)
Explanation: Exercise 1 - ATM (Caixa Eletrônico)
You are developing the software for a bank's ATM. The customer tells the ATM how much money they want to withdraw, and the ATM must dispense that amount using the bills it has available. For example, for an ATM that only has 10, 20 and 50 real bills, a user who requests a withdrawal of 180 reais should receive 3 bills of 50, 2 bills of 20 and 1 bill of 10.
Write the function CaixaEletronico whose input parameter is the amount to be withdrawn. The function must return the number of bills of each denomination dispensed, in decreasing order of bill value. The ATM must dispense the smallest possible number of bills that add up to the requested amount. Assume the ATM has bills of 50, 20, 10, 5 and 1.
Hint: use the // and % operators.
End of explanation
def JurosCompostos(montante,taxa,meses):
correcao = 1
for i in range(meses):
correcao *= (1+(taxa/100))
return montante*correcao, (correcao - 1)*100
print(JurosCompostos(1000,1,12))
Explanation: Exercise 2 - Compound Interest (Juros Compostos)
Compound interest is the most common regime in the financial system and therefore the most useful for everyday calculations. The interest generated in each period is added to the principal when computing the next period's interest:
$$montante \cdot \left(1+\frac{taxa}{100}\right)^{meses}$$
Write the function JurosCompostos, whose inputs are: the original principal (montante), the monthly interest rate of the investment (in %), and the number of months of the investment. The function must return the total corrected amount and the total compound interest rate applied to the original principal (in %).
End of explanation
def ImpostoRenda(salario):
salario = salario * 0.89
if (salario <= 24000):
return salario
else:
return salario - (0.15 * (salario-24000))
print(ImpostoRenda(30000))
Explanation: Exercise 3 - Income Tax (Imposto de Renda)
A worker's gross salary is subject to contributions and taxes. The most common are the retirement contribution and the income tax. The income tax base is the gross salary minus the retirement contributions. Assume the retirement contribution rate is 11% of the total gross salary, regardless of the amount received during the year. Also assume that, on the annual income tax base, the worker is exempt from income tax up to 24 thousand reais per year; only the amount exceeding that value is taxed at 15% as income tax.
Write the function ImpostoRenda whose input is the annual gross salary and whose output is the net amount received during the year.
End of explanation
def Soma(n):
soma = 0
for i in range(1,n+1):
soma = soma + i*( (-1)**i)
return soma
print(Soma(10))
Explanation: Exercise 4 - Summation (Somatória)
Create the function Soma that computes the following summation:
−1 + 2 − 3 + 4 − 5 + 6 + · · · + N
The function Soma must take the number N as its input parameter and return the value of the summation above.
End of explanation
def Palindromo(nome):
n = len(nome)
k = 0
while k<n/2:
if nome[k] != nome[n-k-1]:
return False
k = k+1
return True
print(Palindromo('RADAR'))
print(Palindromo('Python'))
Explanation: Exercise 5 - Palindrome (Palíndromo)
Write a function Palindromo that takes a string as its input parameter and returns True if it is a palindrome and False otherwise. A string is a palindrome if it reads the same from left to right and from right to left. Examples: RADAR is a palindrome and PYTHON is not.
End of explanation
def MaiorMenor(lista):
menor = lista[0]
maior = lista[0]
for elemento in lista:
if elemento > maior:
maior = elemento
if elemento < menor:
menor = elemento
return maior, menor
lista = [1,2,4,16,32,64,-128]
print(MaiorMenor(lista))
Explanation: Exercise 6 - Smallest and largest in a list
Create the function MaiorMenor which, given a list of numbers as input, returns the largest and smallest element of that list.
End of explanation
def Consecutivos(n):
soma = 0
for i in range(1,n+1):
soma += i
impares = [x for x in range(2*soma) if x%2!=0 ]
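    # the cubes 1^3 ... n^3 together consume the first soma = n*(n+1)/2 odd numbers,
    # so the n odd numbers that sum to n^3 are the last n of those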
return impares[soma-n:soma]
print(Consecutivos(4), sum(Consecutivos(4)), 4**3)
type( (1,2))
Explanation: Exercise 7 - N consecutive odd numbers
It is known that a number of the form $n^3$ equals the sum of $n$ consecutive odd numbers:
$$1^3 = 1,
2^3 = 3+5,
3^3 = 7+9+11,
4^3 = 13+15+17+19,...$$
Create a function that determines the $n$ consecutive odd numbers that sum to $n^3$.
End of explanation
<END_TASK> |
64 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Module 3 Demo
What is happening under the sea surface?
Tidal Currents
A current is generated by a difference in the sea surface elevation between different points in space, which makes water move back and forth as the surface tilt changes in time. This water movement, dominated by tides, is known as TIDAL CURRENTS.
Learning Objectives
I. What are tidal currents?
II. What are Ebb, Slack, and Flood Tides?
III. Explore Ferry Measured Currents
Step1: Slack Tide
When the sea level in the estuary and on the ocean side is the same, water will not move to either side. This stage of zero water velocity is known as Slack Tide.
Step2: Flood Tide
When the sea level on the ocean side is higher than on the estuary side, water will move inland (towards the estuary). This condition is known as Flood Tide.
Step3: Tidal Currents Exploration
Let's explore how the current behaves for varying conditions in the ocean and in the estuary. We have a simple model that will tell us how fast the water is going, and in which stage, depending on the difference in the sea elevation between the two channel ends.
Step4: In the following interactive plot you can calculate the velocity of the current between the ocean and the estuary, and know the stage of the tidal current. The following parameters determine the value of the velocity and its stage.
a1 = the amplitude of the tide in the ocean side (meters)
a2 = the amplitude of the tide in the estuary side (meters)
alpha = the phase difference, which is related to the delay of the tidal wave (degrees)
N = time at which you want to see the velocity value and direction
Modify these parameters and see how the current changes in time
Step5: Tidal Currents in Admiralty Inlet
Admiralty Inlet is the main entrance of Puget Sound from the Strait of San Juan de Fuca. The currents in this area depend on the difference in sea level between the outside of Puget Sound and the inside of it.
Lets see what happends here.
We will show you a time series of sea elevation at Port Townsend and a map of velocities across Admiralty Inlet.
The tidal current data at Admiralty Inlet comes from the Ferry-Based Monitoring of Puget Sound Currents project; you can learn more about it here: http
Step6: This takes a long time...be patience!
Step7: Now lets take a quiz on Tidal Currents | <ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image("Figures/EbbTideCurrent.jpg")
Explanation: Module 3 Demo
What is happening under the sea surface?
Tidal Currents
A current is generated by a difference in the sea surface elevation between different points in space, which makes water move back and forth as the surface tilt changes in time. This water movement, dominated by tides, is known as TIDAL CURRENTS.
Learning Objectives
I. What are tidal currents?
II. What is Ebb, Slack, and Flood Tides?
III. Explore Ferry Measured Currents
End of explanation
Image("Figures/SlackTide.jpg")
Explanation: Slack Tide
When the sea level on the estuary side and on the ocean side are the same, water will not move to either side. This stage of zero water velocity is known as Slack Tide.
End of explanation
Image("Figures/FloodTideCurrent.jpg")
Explanation: Flood Tide
When the sea level on the ocean side is higher than on the estuary side, water will move inland (towards the estuary). This condition is known as Flood Tide.
End of explanation
import numpy as np
import xarray as xr
import matplotlib.pyplot as plt
from ipywidgets import interact, interactive, fixed
from tydal.module3_utils import plot_currents
%matplotlib inline
Explanation: Tidal Currents Exploration
Let's explore how the current behaves for varying conditions in the ocean and in the estuary. We have a simple model that will tell us how fast the water is moving, and in which stage, depending on the difference in sea elevation between the two ends of the channel.
End of explanation
interact(plot_currents,T=fixed(12.42),a1=[0,4],a2=[0,4],alpha=(0,90),N=(0,399))
Explanation: In the following interactive plot you can calculate the velocity of the current between the ocean and the estuary, and know the stage of the tidal current. The following parameters determine the value of the velocity and its stage.
a1 = the amplitude of the tide in the ocean side (meters)
a2 = the amplitude of the tide in the estuary side (meters)
alpha = the phase difference, which is related to the delay of the tidal wave (degrees)
N = time at which you want to see the velocity value and direction
Modify these parameters and see how the current changes in time:
End of explanation
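For reference, a minimal sketch of the kind of two-ended channel model described above is shown below. This is only an illustration: the actual plot_currents implementation in tydal.module3_utils may differ, and the proportionality constant C is an assumption chosen purely for demonstration.
import numpy as np

def simple_tidal_current(a1, a2, alpha, N, T=12.42, n_steps=400, C=1.0):
    # Toy model: current speed taken proportional to the instantaneous
    # difference in sea-surface elevation between the ocean and estuary ends.
    t = np.linspace(0, 2 * T, n_steps)                                # time in hours
    eta_ocean = a1 * np.cos(2 * np.pi * t / T)                        # ocean-side elevation
    eta_estuary = a2 * np.cos(2 * np.pi * t / T - np.deg2rad(alpha))  # delayed estuary side
    u = C * (eta_ocean - eta_estuary)                                 # positive u -> flood (into the estuary)
    stage = 'flood' if u[N] > 0 else ('ebb' if u[N] < 0 else 'slack')
    return t[N], u[N], stage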
import tydal.module3_utils as m3
import tydal.module2_utils as tu
URL1='http://107.170.217.21:8080/thredds/dodsC/Salish_L1_STA/Salish_L1_STA.ncml'
[ferry, ferry_download, message]=m3.ferry_data_download(URL1)
Explanation: Tidal Currents in Admiralty Inlet
Admiralty Inlet is the main entrance to Puget Sound from the Strait of Juan de Fuca. The currents in this area depend on the difference in sea level between the outside and the inside of Puget Sound.
Let's see what happens here.
We will show you a time series of sea elevation at Port Townsend and a map of velocities across Admiralty Inlet.
The tidal current data at Admiralty Inlet comes from the Ferry-Based Monitoring of Puget Sound Currents project; you can learn more about it here: http://www.apl.washington.edu/project/project.php?id=ferries_for_science.
End of explanation
ferryQC= m3.ferry_data_QC(ferry,6.5,4,4)
ferryQC = m3.count_route_num(ferryQC[0])
#import tides
pt_tide = tu.load_Port_Townsend('Data/')
pt_tide = pt_tide['Water Level']
start_date = '2016-10-01'
end_date = '2016-11-01'
#plt.style.use('ggplot')
%matplotlib inline
interact(m3.plt_ferry_and_tide, ferryQc=fixed(ferryQC),
pt_tide=fixed(pt_tide), crossing_index = (0,280),
start_date = fixed(start_date), end_date = fixed(end_date))
Explanation: This takes a long time... be patient!
End of explanation
import tydal.quiz3
tydal.quiz3
Explanation: Now let's take a quiz on Tidal Currents:
End of explanation
<END_TASK> |
65 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Load Data
Step1: Q
Step2: The probability of being blocked after making a personal attack and increases as a function of how many times the user has been blocked before. This could indicate heightened scrutiny by administrators. The pattern could also occur if users who continue to attack after being blocked make more frequent or more toxic attacks and are hence more likely to be discovered.
TODO
- check if users make more or more toxic attacks after being blocked
- it could even be that they get blocked for smaller offenses
Q
Step3: Most attacking comments do not lead to the user being warned/blocked within the next 7 days.
Q
Step4: The more attacks a user makes, the more likely it is that they will have been blocked at least once.
Probability of block in 7 days after attack as a function of # of previous blocks
Python Code:
# Load scored diffs and moderation event data
d = load_diffs()
df_block_events, df_blocked_user_text = load_block_events_and_users()
df_warn_events, df_warned_user_text = load_warn_events_and_users()
moderated_users = [('warned', df_warned_user_text),
('blocked', df_blocked_user_text),
('either', pd.concat([df_warned_user_text, df_blocked_user_text]))
]
moderation_events = [('warned', df_warn_events),
('blocked', df_block_events),
('either', pd.concat([df_warn_events, df_block_events]))
]
moderation_events_2015 = [('warned', df_warn_events.query('year == 2015')),
('blocked', df_block_events.query('year == 2015')),
('either', pd.concat([df_warn_events.query('year == 2015'), df_block_events.query('year == 2015')]))
]
moderated_users_2015 = [('warn', df_warn_events.query('year == 2015')[['user_text']].assign(blocked = 1)),
('block', df_block_events.query('year == 2015')[['user_text']].assign(blocked = 1)),
('either', pd.concat([df_warn_events.query('year == 2015')[['user_text']].assign(blocked = 1), df_block_events.query('year == 2015')[['user_text']].assign(blocked = 1)]))
]
Explanation: Load Data
End of explanation
K = 6
sample = 'blocked'
er_t = 0.425
events = {}
# null events set
e = d[sample][['user_text']].drop_duplicates()
e['timestamp'] = pd.to_datetime('1900')
events[0] = e
# rank block events
ranked_events = df_block_events.copy()
ranks = df_block_events\
.groupby('user_text')['timestamp']\
.rank()
ranked_events['rank'] = ranks
for k in range(1,K):
e = ranked_events.query("rank==%d" % k)[['user_text', 'timestamp']]
events[k] = e
attacks = {}
for k in range(0, K-1):
c = d[sample].merge(events[k], how = 'inner', on='user_text')
c = c.query('timestamp < rev_timestamp')
del c['timestamp']
c = c.merge(events[k+1], how = 'left', on = 'user_text')
c['timestamp'] = c['timestamp'].fillna(pd.to_datetime('2100'))
c = c.query('rev_timestamp < timestamp')
c = c.query('pred_recipient_score_uncalibrated > %f' % er_t)
attacks[k] = c
blocked_users = {i:set(events[i]['user_text']) for i in events.keys()}
attackers = {i:set(attacks[i]['user_text']) for i in attacks.keys()}
dfs_sns = []
for k in range(1, K-1):
u_a = attackers[k]
u_b = blocked_users[k+1]
u_ab = u_a.intersection(u_b)
n_a = len(u_a)
n_ab = len(u_ab)
print('k:',k, n_ab/n_a)
dfs_sns.append(pd.DataFrame({'blocked': [1]*n_ab, 'k': [k]*n_ab}))
dfs_sns.append(pd.DataFrame({'blocked': [0]*(n_a- n_ab), 'k': [k]*(n_a- n_ab)}))
sns.set(font_scale=1.5)
sns.pointplot(x = 'k', y = 'blocked', data = pd.concat(dfs_sns), capsize=.1)
plt.xlabel('k')
plt.ylabel('P(blocked | new attack and blocked k times already)')
plt.savefig('../../paper/figs/p_of_blocked_given_new_attack_and_blocked_already.png')
Explanation: Q: Are new attacks from blocked users more likely to lead to a block? P(blocked | attacked and blocked k times already)
The methodology for this is a bit involved.
events[i]: the set of ith block events per user
blocked_users[i] = set of users blocked i times, e.g. set of users in events[i]
attacks[i]: the set of attacks made by users after their ith block, excluding comments made after their (i+1)th block, if it happened.
attackers[i]: set of users in attacks[i]
P(blocked | attacked and blocked k times already): $$ \frac{|blocked[k+1] \cap attackers[k]|}{|attackers[k]|}$$
End of explanation
dfs = []
ts = np.arange(0.325, 0.96, 0.1)
def get_delta(x):
if x['timestamp'] is not None and x['rev_timestamp'] is not None:
return x['timestamp'] - x['rev_timestamp']
else:
return pd.Timedelta('0 seconds')
for t in ts:
for (event_type, events) in moderation_events:
dfs.append(
d['2015'].query('pred_recipient_score_uncalibrated >= %f' % t)\
.loc[:, ['user_text', 'rev_id', 'rev_timestamp']]\
.merge(events, how = 'left', on = 'user_text')\
.assign(delta = lambda x: get_delta(x))\
.assign(blocked= lambda x: 100 * ((x['delta'] < pd.Timedelta('7 days')) & (x['delta'] > pd.Timedelta('0 seconds'))))\
.drop_duplicates(subset = ['rev_id'])\
.assign(threshold = t, event=event_type)
)
ax = sns.pointplot(x='threshold', y='blocked', hue='event', data = pd.concat(dfs), dodge=0.15, capsize=.1, linestyles=[" ", "", " "])
plt.xlabel('Threshold')
#ax.set_ylabels('% of attacks followed bymoderation')
pd.concat(dfs).groupby(['threshold','event'])['blocked'].mean()
Explanation: The probability of being blocked after making a personal attack increases as a function of how many times the user has been blocked before. This could indicate heightened scrutiny by administrators. The pattern could also occur if users who continue to attack after being blocked make more frequent or more toxic attacks and are hence more likely to be discovered.
TODO
- check if users make more or more toxic attacks after being blocked
- it could even be that they get blocked for smaller offenses
Q: What fraction of attacking comments were followed by a warn or block event?
Within one week
End of explanation
def remap(x):
if x < 5:
return str(int(x))
else:
return '5+'
sns.set(font_scale=2.5)
dfs = []
for event_type, users in moderated_users_2015:
dfs.append(\
d['2015'].assign(attack = lambda x: x.pred_recipient_score_uncalibrated >= 0.425)\
.groupby('user_text', as_index = False)['attack'].sum()\
.rename(columns={'attack':'num_attacks'})\
.merge(users, how = 'left', on = 'user_text')\
.assign(
blocked = lambda x: x.blocked.fillna(0,),
num_attacks = lambda x: x.num_attacks.apply(remap),
event = event_type)
)
df = pd.concat(dfs)
g = sns.factorplot(x = 'num_attacks',
y = 'blocked',
col = 'event', data = df, order = ('0', '1', '2', '3','4', '5+'), capsize=.1)
g.set_ylabels('P(event)')
g.set_xlabels('Number of attacks')
plt.savefig('../../paper/figs/fraction_blocked_given_num_attacks.png')
Explanation: Most attacking comments do not lead to the user being warned/blocked within the next 7 days.
Q: How does the probability of a user being blocked change with the number of attacking comments?
End of explanation
attacks = d['blocked'].query("pred_recipient_score_uncalibrated >= 0.425").query("not author_anon").query("not own_page")
results = []
for i , r in attacks.iterrows():
ts = r['rev_timestamp']
user = r['user_text']
user_blocks = df_block_events[df_block_events['user_text'] == user]
prior_blocks = user_blocks[user_blocks['timestamp'] < ts ]
max_ts = prior_blocks['timestamp'].max()
#if ts < (max_ts + pd.Timedelta('60 days')):
# continue
post_blocks = user_blocks[user_blocks['timestamp'] > ts]
n_blocks_prior = prior_blocks.shape[0]
blocked_again = post_blocks.shape[0] > 0
for days in [7, 14, 30, 60, 90, 180]:
within_x = post_blocks[post_blocks['timestamp'] < (ts + pd.Timedelta('%d days' % days)) ].shape[0] > 0
results.append({'n_blocks_prior': n_blocks_prior,
'blocked_again': within_x,
'within_x': days
})
df = pd.DataFrame(results)
def remap(x):
if x < 5:
return str(int(x))
else:
return '5+'
df['n_blocks_prior'] = df['n_blocks_prior'].apply(remap)
sns.set(font_scale=1.5)
g = sns.factorplot(x = 'n_blocks_prior',
y = 'blocked_again',
col = 'within_x',
data = df,
capsize=.1,
order = [ '1', '2', '3', '4', '5+']
)
#g.set_ylabels('P(attack followed by block | # prior blocks)')
#g.set_xlabels('Number of prior blocks')
plt.savefig('../../paper/figs/8.png')
Explanation: The more attacks a user makes, the more likely it is that they will have been blocked at least once.
Probability of block in 7 days after attack as a function of # of previous blocks
End of explanation
<END_TASK> |
66 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Modeling and Simulation in Python
Case study
Step3: Testing make_system
Step4: Testing slope_func
Step5: Now we can run the simulation.
Step6: Plotting r
Step7: We can also see the relationship between y and r, which I derive analytically in the book.
Step8: And here's the figure from the book.
Step9: We can use interpolation to find the time when y is 47 meters.
Step10: At that point r is 55 mm, which is Rmax, as expected.
Step11: The total amount of rotation is 1253 rad.
Step12: Unrolling
For unrolling the paper, we need more units
Step13: And a few more parameters in the Condition object.
Step15: make_system computes rho_h, which we'll need to compute moment of inertia, and k, which we'll use to compute r.
Step16: Testing make_system
Step18: Here's how we compute I as a function of r
Step19: When r is Rmin, I is small.
Step20: As r increases, so does I.
Step22: Here's the slope function.
Step23: Testing slope_func
Step24: Now we can run the simulation.
Step25: And look at the results.
Step26: Extrating the time series
Step27: Plotting theta
Step28: Plotting omega
Step29: Plotting y
Step30: Here's the figure from the book.
Step31: Yo-yo
Exercise
Step33: Here's a make_system function that computes I and k based on the system parameters.
I estimated I by modeling the yo-yo as a solid cylinder with uniform density (see here). In reality, the distribution of weight in a yo-yo is often designed to achieve desired effects. But we'll keep it simple.
Step34: Testing make_system
Step35: Write a slope function for this system, using these results from the book
Step36: Test your slope function with the initial conditions.
Step37: Then run the simulation.
Step38: Check the final conditions. If things have gone according to plan, the final value of y should be close to 0.
Step39: Plot the results.
Step40: theta should increase and accelerate.
Step41: y should decrease and accelerate down. | <ASSISTANT_TASK:>
Python Code:
# If you want the figures to appear in the notebook,
# and you want to interact with them, use
# %matplotlib notebook
# If you want the figures to appear in the notebook,
# and you don't want to interact with them, use
# %matplotlib inline
# If you want the figures to appear in separate windows, use
# %matplotlib qt5
# To switch from one to another, you have to select Kernel->Restart
%matplotlib inline
from modsim import *
kg = UNITS.kilogram
m = UNITS.meter
s = UNITS.second
N = UNITS.newton
condition = Condition(mass = 0.03 * kg,
fraction = 1 / 3,
k = 9810.0 * N / m,
duration = 0.3 * s,
L = 0.05 * m,
d = 0.005 * m,
v1 = 0 * m / s,
v2 = 0 * m / s,
g = 9.8 * m / s**2)
condition = Condition(mass = 0.03,
fraction = 1 / 3,
k = 9810.0,
duration = 0.3,
L = 0.05,
d = 0.005,
v1 = 0,
v2 = 0,
g = 9.8)
def make_system(condition):
Make a system object.
    condition: Condition with mass, fraction, k, duration, L, d, v1, v2, g
    returns: System with init, m1, m2, k, L, ts
unpack(condition)
x1 = L - d # upper mass
x2 = 0 # lower mass
init = State(x1=x1, x2=x2, v1=v1, v2=v2)
m1, m2 = fraction*mass, (1-fraction)*mass
ts = linspace(0, duration, 1001)
return System(init=init, m1=m1, m2=m2, k=k, L=L, ts=ts)
Explanation: Modeling and Simulation in Python
Case study: Hopper optimization
Copyright 2017 Allen Downey
License: Creative Commons Attribution 4.0 International
End of explanation
system = make_system(condition)
system
system.init
def slope_func(state, t, system):
Computes the derivatives of the state variables.
    state: State object with x1, x2, v1, v2
    t: time
    system: System object with m1, m2, k, L
returns: sequence of derivatives
x1, x2, v1, v2 = state
unpack(system)
dx = x1 - x2
f_spring = k * (L - dx)
a1 = f_spring/m1 - g
a2 = -f_spring/m2 - g
if t < 0.003 and a2 < 0:
a2 = 0
return v1, v2, a1, a2
Explanation: Testing make_system
End of explanation
slope_func(system.init, 0, system)
Explanation: Testing slope_func
End of explanation
run_odeint(system, slope_func)
system.results.tail()
plot(system.results.x1)
plot(system.results.x2)
plot(system.results.x1 - system.results.x2)
# plot(ys, color='green', label='y')  # 'ys' is not defined in this hopper model; leftover from the book chapter
decorate(xlabel='Time (s)',
ylabel='Length (m)')
Explanation: Now we can run the simulation.
End of explanation
plot(rs, color='red', label='r')
decorate(xlabel='Time (s)',
ylabel='Radius (mm)')
Explanation: Plotting r
End of explanation
plot(rs, ys, color='purple')
decorate(xlabel='Radius (mm)',
ylabel='Length (m)',
legend=False)
Explanation: We can also see the relationship between y and r, which I derive analytically in the book.
End of explanation
subplot(3, 1, 1)
plot(thetas, label='theta')
decorate(ylabel='Angle (rad)')
subplot(3, 1, 2)
plot(ys, color='green', label='y')
decorate(ylabel='Length (m)')
subplot(3, 1, 3)
plot(rs, color='red', label='r')
decorate(xlabel='Time(s)',
ylabel='Radius (mm)')
savefig('chap11-fig01.pdf')
Explanation: And here's the figure from the book.
End of explanation
T = interp_inverse(ys, kind='cubic')
t_end = T(47)
t_end
Explanation: We can use interpolation to find the time when y is 47 meters.
End of explanation
R = interpolate(rs, kind='cubic')
R(t_end)
Explanation: At that point r is 55 mm, which is Rmax, as expected.
End of explanation
THETA = interpolate(thetas, kind='cubic')
THETA(t_end)
Explanation: The total amount of rotation is 1253 rad.
End of explanation
kg = UNITS.kilogram
N = UNITS.newton
Explanation: Unrolling
For unrolling the paper, we need more units:
End of explanation
condition = Condition(Rmin = 0.02 * m,
Rmax = 0.055 * m,
Mcore = 15e-3 * kg,
Mroll = 215e-3 * kg,
L = 47 * m,
tension = 2e-4 * N,
duration = 180 * s)
Explanation: And a few more parameters in the Condition object.
End of explanation
def make_system(condition):
Make a system object.
condition: Condition with Rmin, Rmax, Mcore, Mroll,
L, tension, and duration
returns: System with init, k, rho_h, Rmin, Rmax,
Mcore, Mroll, ts
unpack(condition)
init = State(theta = 0 * radian,
omega = 0 * radian/s,
y = L)
area = pi * (Rmax**2 - Rmin**2)
rho_h = Mroll / area
k = (Rmax**2 - Rmin**2) / 2 / L / radian
ts = linspace(0, duration, 101)
return System(init=init, k=k, rho_h=rho_h,
Rmin=Rmin, Rmax=Rmax,
Mcore=Mcore, Mroll=Mroll,
ts=ts)
Explanation: make_system computes rho_h, which we'll need to compute moment of inertia, and k, which we'll use to compute r.
End of explanation
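For reference, here is where that expression for k comes from (a short derivation consistent with the code above). The cross-sectional area of paper still on the roll is $\pi (r^2 - R_{min}^2)$, and it is proportional to the remaining length $y$; for the full roll, $\pi (R_{max}^2 - R_{min}^2)$ corresponds to the full length $L$. Setting the ratio of areas equal to $y/L$ gives $r^2 = R_{min}^2 + \frac{R_{max}^2 - R_{min}^2}{L} y = R_{min}^2 + 2 k y$, with $k = (R_{max}^2 - R_{min}^2) / 2L$, which is exactly the relation $r = \sqrt{2 k y + R_{min}^2}$ used in the slope function below.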
system = make_system(condition)
system
system.init
Explanation: Testing make_system
End of explanation
def moment_of_inertia(r, system):
Moment of inertia for a roll of toilet paper.
r: current radius of roll in meters
system: System object with Mcore, rho, Rmin, Rmax
returns: moment of inertia in kg m**2
unpack(system)
Icore = Mcore * Rmin**2
Iroll = pi * rho_h / 2 * (r**4 - Rmin**4)
return Icore + Iroll
Explanation: Here's how we compute I as a function of r:
End of explanation
moment_of_inertia(system.Rmin, system)
Explanation: When r is Rmin, I is small.
End of explanation
moment_of_inertia(system.Rmax, system)
Explanation: As r increases, so does I.
End of explanation
def slope_func(state, t, system):
Computes the derivatives of the state variables.
state: State object with theta, omega, y
t: time
system: System object with Rmin, k, Mcore, rho_h, tension
returns: sequence of derivatives
theta, omega, y = state
unpack(system)
r = sqrt(2*k*y + Rmin**2)
I = moment_of_inertia(r, system)
tau = r * tension
alpha = tau / I
dydt = -r * omega
return omega, alpha, dydt
Explanation: Here's the slope function.
End of explanation
slope_func(system.init, 0*s, system)
Explanation: Testing slope_func
End of explanation
run_odeint(system, slope_func)
Explanation: Now we can run the simulation.
End of explanation
system.results.tail()
Explanation: And look at the results.
End of explanation
thetas = system.results.theta
omegas = system.results.omega
ys = system.results.y
Explanation: Extrating the time series
End of explanation
plot(thetas, label='theta')
decorate(xlabel='Time (s)',
ylabel='Angle (rad)')
Explanation: Plotting theta
End of explanation
plot(omegas, color='orange', label='omega')
decorate(xlabel='Time (s)',
ylabel='Angular velocity (rad/s)')
Explanation: Plotting omega
End of explanation
plot(ys, color='green', label='y')
decorate(xlabel='Time (s)',
ylabel='Length (m)')
Explanation: Plotting y
End of explanation
subplot(3, 1, 1)
plot(thetas, label='theta')
decorate(ylabel='Angle (rad)')
subplot(3, 1, 2)
plot(omegas, color='orange', label='omega')
decorate(ylabel='Angular velocity (rad/s)')
subplot(3, 1, 3)
plot(ys, color='green', label='y')
decorate(xlabel='Time(s)',
ylabel='Length (m)')
savefig('chap11-fig02.pdf')
Explanation: Here's the figure from the book.
End of explanation
condition = Condition(Rmin = 8e-3 * m,
Rmax = 16e-3 * m,
Rout = 35e-3 * m,
mass = 50e-3 * kg,
L = 1 * m,
g = 9.8 * m / s**2,
duration = 1 * s)
Explanation: Yo-yo
Exercise: Simulate the descent of a yo-yo. How long does it take to reach the end of the string.
I provide a Condition object with the system parameters:
Rmin is the radius of the axle. Rmax is the radius of the axle plus rolled string.
Rout is the radius of the yo-yo body. mass is the total mass of the yo-yo, ignoring the string.
L is the length of the string.
g is the acceleration of gravity.
End of explanation
def make_system(condition):
Make a system object.
condition: Condition with Rmin, Rmax, Rout,
mass, L, g, duration
returns: System with init, k, Rmin, Rmax, mass,
I, g, ts
unpack(condition)
init = State(theta = 0 * radian,
omega = 0 * radian/s,
y = L,
v = 0 * m / s)
I = mass * Rout**2 / 2
k = (Rmax**2 - Rmin**2) / 2 / L / radian
ts = linspace(0, duration, 101)
return System(init=init, k=k,
Rmin=Rmin, Rmax=Rmax,
mass=mass, I=I, g=g,
ts=ts)
Explanation: Here's a make_system function that computes I and k based on the system parameters.
I estimated I by modeling the yo-yo as a solid cylinder with uniform density (see here). In reality, the distribution of weight in a yo-yo is often designed to achieve desired effects. But we'll keep it simple.
End of explanation
system = make_system(condition)
system
system.init
Explanation: Testing make_system
End of explanation
# Solution goes here
Explanation: Write a slope function for this system, using these results from the book:
$ r = \sqrt{2 k y + R_{min}^2} $
$ T = m g I / I^* $
$ a = -m g r^2 / I^* $
$ \alpha = m g r / I^* $
where $I^*$ is the augmented moment of inertia, $I + m r^2$.
Hint: If y is less than 0, it means you have reached the end of the string, so the equation for r is no longer valid. In this case, the simplest thing to do is return the sequence of derivatives 0, 0, 0, 0
End of explanation
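Since the solution cell above is intentionally left as an exercise, here is one possible sketch of the slope function, written directly from the four equations and the hint above. The variable names follow the System object built by make_system; this is one valid implementation, not the only one.
def slope_func(state, t, system):
    theta, omega, y, v = state
    unpack(system)
    if y < 0:
        # the yo-yo has reached the end of the string; stop integrating the model
        return 0, 0, 0, 0
    r = sqrt(2*k*y + Rmin**2)          # current radius of axle plus wound string
    I_star = I + mass * r**2           # augmented moment of inertia
    a = -mass * g * r**2 / I_star      # linear acceleration (negative: y decreases)
    alpha = mass * g * r / I_star      # angular acceleration
    return omega, alpha, v, a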
slope_func(system.init, 0*s, system)
Explanation: Test your slope function with the initial conditions.
End of explanation
run_odeint(system, slope_func)
Explanation: Then run the simulation.
End of explanation
system.results.tail()
Explanation: Check the final conditions. If things have gone according to plan, the final value of y should be close to 0.
End of explanation
thetas = system.results.theta
ys = system.results.y
Explanation: Plot the results.
End of explanation
plot(thetas, label='theta')
decorate(xlabel='Time (s)',
ylabel='Angle (rad)')
Explanation: theta should increase and accelerate.
End of explanation
plot(ys, color='green', label='y')
decorate(xlabel='Time (s)',
ylabel='Length (m)')
Explanation: y should decrease and accelerate down.
End of explanation
<END_TASK> |
67 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
If you don't care about the confidence interval of parameter
Step1: If you want the confidence intervals | <ASSISTANT_TASK:>
Python Code:
from lmfit.models import GaussianModel
# initialize the gaussian model
gm = GaussianModel()
# take a look at the parameter names
print gm.param_names
# I get RuntimeError since my numpy version is a little old
# guess parameters
par_guess = gm.guess(n,x=xpos)
# fit data
result = gm.fit(n, par_guess, x=xpos, method='leastsq')
# quick look at result
print result.fit_report()
# get best fit error and stderr
print result.params['amplitude'].value,result.params['amplitude'].stderr
print result.params['center'].value,result.params['center'].stderr
print result.params['sigma'].value,result.params['sigma'].stderr
fig = plt.figure()
plt.hist(xdata, bins=bins)
plt.plot(xpos, result.best_fit, 'green')
Explanation: If you don't care about the confidence intervals of the parameters
End of explanation
import lmfit
def my_gaussian_model(p, x, y):
a = np.float(p['a'])
b = np.float(p['b'])
c = np.float(p['c'])
return a/np.sqrt(2.*c) * np.exp( -np.power(x-b,2.)/2./np.power(c, 2.)) - y
pars = lmfit.Parameters()
pars.add_many(('a',0.1), ('b',0.1), ('c',0.1))
# initialize the minimizer
mini = lmfit.Minimizer(my_gaussian_model, pars, (xpos, n))
# do the minimization
result = mini.minimize(method='leastsq')
# print the fit report
print lmfit.fit_report(mini.params)
# NOTE
# the parameter 'a' in function my_gaussian_model is different from the built-in model in lmfit
# so the amplitude value is a little different
# predit the confidence interval of all parameters
ci, trace = lmfit.conf_interval(mini, sigmas=[0.68,0.95],
trace=True, verbose=False)
# ci = lmfit.conf_interval(mini)
lmfit.printfuncs.report_ci(ci)
print ci.values()
a,b,prob = trace['a']['a'], trace['a']['b'], trace['a']['prob']
cx, cy, grid = lmfit.conf_interval2d(mini, 'a','b',30,30)
plt.contourf(cx, cy, grid, np.linspace(0,1,11))
plt.xlabel('a')
plt.colorbar()
plt.ylabel('b')
Explanation: If you want the confidence intervals
End of explanation
<END_TASK> |
68 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Basic Tests
Step1: I plot the error of the filtered wave. I use the absulte values of the difference between sine wave and median filtered wave and calculate the mean, to get the error. I use a wave number of 5 and different window lengths
Functions
Step2: Plotting Figures | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
% matplotlib inline
Explanation: Basic Tests: Error of the median filter with different window lengths and wave number 5
2015.10.09 DW
End of explanation
def ErrorPlot( waveNumber,windowLength ):
data = np.fromfunction( lambda x: np.sin((x-windowLength / 2)/128 * 2 * np.pi * waveNumber), (128 + windowLength / 2, ) ) #creating an array with a sine wave
datafiltered = medianFilter(data, windowLength) #calculate the filtered wave with the medianFiltered function
data = data[ windowLength / 2 : - windowLength ] # slice the data array to synchronize both waves
datafiltered = datafiltered[ : len(data) ] # cut the filtered wave to the same length as the data wave
error = ErrorRate(data,datafiltered,windowLength,waveNumber) #calculate the error with the ErrorRate function
plt.axis([0, y + 1, 0, 1.2])
plt.xlabel('Window Length', fontsize = 20)
plt.ylabel('Error rate', fontsize = 20)
plt.scatter(*error)
def ErrorRate(data,datafiltered,windowLength, waveNumber):
errorrate = data-datafiltered #calculate the difference between the sine wave and the filtered wave
error = [] #creating a list and save the error rate with the matching wavenumber in it
errorrate = np.abs(errorrate)
error.append([windowLength ,np.mean(errorrate)])# fill the list with the errorrate and corresponding window length
error = zip(*error) #zip the error ([1,1],[2,2],[3,3]) = ([1,2,3],[1,2,3])
return error
def medianFilter( data, windowLength ):
if (windowLength < len(data)and data.ndim == 1):
tempret = np.zeros(len(data)-windowLength+1) # creating an array where the filtered values will be saved in
if windowLength % 2 ==0: # check if the window length is odd or even because with even window length we get an unsynchrone filtered wave
for c in range(0, len(tempret)):
tempret[c] = np.median( data[ c : c + windowLength +1 ] ) # write the values of the median filtered wave in tempret, calculate the median of all values in the window
return tempret
else:
for c in range(0, len(tempret)):
tempret[c] = np.median( data[ c : c + windowLength ] )
return tempret
else:
raise ValueError("windowLength must be smaller than len(data) and data must be a 1D array")
Explanation: I plot the error of the filtered wave. I use the absolute values of the difference between the sine wave and the median-filtered wave and calculate the mean to get the error. I use a wave number of 5 and different window lengths
Functions
End of explanation
fig = plt.figure()
for y in range (0,40,2):
ErrorPlot(5,y)
pp = PdfPages( 'Error of the median filtered sine waves with different window lengths.pdf')
pp.savefig(fig)
pp.close()
Explanation: Plotting Figures
End of explanation
<END_TASK> |
69 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
PMOD TIMER
In this notebook, PMOD Timer functionalities are illustrated. The Timer has two sub-modules
Step1: Instantiate Pmod_Timer class. The method stop() will stop both timer sub-modules.
In this example, we will use pin 0 of the PMODA interface. PMODB and other pins can also be used.
Step2: 2. Generate pulses for a certain period of time
In this example, we choose the Digilent Analog Discovery 2 as the scope.
The 1+ pin (of channel 1) has to be connected to pin 0 on PMODA interface.
Use the following settings for waveform.
<img src="data/generate_1us_forever_settings.jpg" width="200px"/>
Generate a 10 ns clock pulse every 1 microseconds for 4 seconds and then stop the generation.
Note that pulses are generated every $count\times10$ ns. Here count is defined as period.
You should see output like this
Step3: 3. Generate a certain number of pulses
Note that the first parameter is the period interval.
Denoting the desired period as $T$ (in ns), we need to set the first parameter period to
Step4: Now generate the pulses at every 1 $\mu$s interval.
Step5: Stop the generation.
Step6: 4. Determine if an event has occurred at the input
An event is either a rising edge or a high logic level. The parameter is duration, $period\times10$ ns, in which the event is to be detected. It returns 0 if no event occurred, otherwise it returns 1.
Use a waveform generator in this example. Connect W1 channel of the Analog Discovery to pin 0 of PMODA.
Do not run the waveform generation in the next cell.
Step7: Now run the waveform generation and then run the next cell. Set the waveform generator settings as shown below
Step8: 5. Count number of events occurred during a desired period
An event is either a rising edge or a high logic level. The parameter is duration, $period\times10$ ns, in which the number of event are counted. In this example we are interested in number of events occurring in 10 $\mu$s.
Use a waveform generator in this example. Use the following settings of the waveform generator and run the generator. Then run the next example.
<img src="data/count_events_10us.jpg" width="200px"/>
Step9: 6. Measure period between two rising edges
An event is either a rising edge or a high logic level. It expects at least two rising edges. The return result is in units of nanoseconds.
Use a waveform generator in this example. Use the following settings of the waveform generator and run the generation. Then run the next example.
<img src="data/measure_period_200KHz.jpg" width="200px"/> | <ASSISTANT_TASK:>
Python Code:
from pynq.overlays.base import BaseOverlay
base = BaseOverlay("base.bit")
Explanation: PMOD TIMER
In this notebook, PMOD Timer functionalities are illustrated. The Timer has two sub-modules: Timer0 and Timer1.
The Generate output and Capture Input of Timer 0 are assumed to be connected to PMODA pin 0.
1. The Generate function outputs one clock (10 ns) pulse after a desired period.
2. The Capture input is sensitive to a rising edge or high level logic.
To see the results of this notebook, you will need a Digilent Analog Discovery 2
<td> <img src="http://cdn6.bigcommerce.com/s-7gavg/products/468/images/2617/Analog_Discovery_2_obl_Academic_600__01249.1447804398.1280.1280.png" alt="Drawing" style="width: 250px;"/> </td>
and WaveForms 2015
<td> <img src="https://reference.digilentinc.com/_media/reference/software/waveforms/waveforms-3/waveforms3-0.png" alt="Drawing" style="width: 250px;"/> </td>
1. Instantiation
Import overlay to use the timers.
End of explanation
from time import sleep
from pynq.lib import Pmod_Timer
pt = Pmod_Timer(base.PMODA,0)
pt.stop()
Explanation: Instantiate Pmod_Timer class. The method stop() will stop both timer sub-modules.
In this example, we will use pin 0 of the PMODA interface. PMODB and other pins can also be used.
End of explanation
# Generate a 10 ns pulse every period*10 ns
period=100
pt.generate_pulse(period)
# Sleep for 4 seconds and stop the timer
sleep(4)
pt.stop()
Explanation: 2. Generate pulses for a certain period of time
In this example, we choose the Digilent Analog Discovery 2 as the scope.
The 1+ pin (of channel 1) has to be connected to pin 0 on PMODA interface.
Use the following settings for waveform.
<img src="data/generate_1us_forever_settings.jpg" width="200px"/>
Generate a 10 ns clock pulse every 1 microseconds for 4 seconds and then stop the generation.
Note that pulses are generated every $count\times10$ ns. Here count is defined as period.
You should see output like this:
<img src="data/generate_1us_forever.jpg" width="800px"/>
End of explanation
# Generate 3 pulses at every 1 us
count=3
period=100
pt.generate_pulse(period, count)
Explanation: 3. Generate a certain number of pulses
Note that the first parameter is the period interval.
Denoting the desired period as $T$ (in ns), we need to set the first parameter period to:
$period = \frac{T}{10} $
The second parameter is the number of pulses to be generated.
Run the following cell and you should see output in the scope like this:
<img src="data/generate_1us_n_times.jpg" width="800px"/>
End of explanation
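As a small illustration of the $period = T/10$ relation above, the helper below (not part of the pynq API, just a convenience defined here) converts a desired pulse interval in nanoseconds into the period argument used by generate_pulse.
def interval_ns_to_period(T_ns):
    # 10 ns timer resolution, as described above
    return int(T_ns // 10)

period = interval_ns_to_period(1000)   # 1 us interval -> period = 100
# pt.generate_pulse(period, 3)         # same call as in the cell above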
# Generate pulses per 1 us forever
count=0
period=100
pt.generate_pulse(period, count)
Explanation: Now generate the pulses at every 1 $\mu$s interval.
End of explanation
pt.stop()
Explanation: Stop the generation.
End of explanation
# Detect any event within 10 us
period=1000
pt.event_detected(period)
Explanation: 4. Determine if an event has occurred at the input
An event is either a rising edge or a high logic level. The parameter is duration, $period\times10$ ns, in which the event is to be detected. It returns 0 if no event occurred, otherwise it returns 1.
Use a waveform generator in this example. Connect W1 channel of the Analog Discovery to pin 0 of PMODA.
Do not run the waveform generation in the next cell.
End of explanation
# Detect any event within 20 ms
period=200000
pt.event_detected(period)
Explanation: Now run the waveform generation and then run the next cell. Set the waveform generator settings as shown below:
<img src="data/measure_period_200KHz.jpg" width="200px"/>
End of explanation
# Count number of events within 10 us
period=1000
pt.event_count(period)
Explanation: 5. Count the number of events that occurred during a desired period
An event is either a rising edge or a high logic level. The parameter is duration, $period\times10$ ns, in which the number of events is counted. In this example we are interested in the number of events occurring in 10 $\mu$s.
Use a waveform generator in this example. Use the following settings of the waveform generator and run the generator. Then run the next example.
<img src="data/count_events_10us.jpg" width="200px"/>
End of explanation
period = pt.get_period_ns()
print("The measured waveform frequency: {} Hz".format(1e9/period))
Explanation: 6. Measure period between two rising edges
An event is either a rising edge or a high logic level. It expects at least two rising edges. The return result is in units of nanoseconds.
Use a waveform generator in this example. Use the following settings of the waveform generator and run the generation. Then run the next example.
<img src="data/measure_period_200KHz.jpg" width="200px"/>
End of explanation
<END_TASK> |
70 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Moving to Shallow Neural Networks
In this tutorial, you'll implement a shallow neural network to classify digits ranging from 0 to 9. The dataset you'll use is quite famous, it's called 'MNIST' http
Step2: You can now implement a 2 layers NN
Now that you have the data, you can build a shallow neural network (SNN). I expect your SNN to have two layers.
- Layer 1 has 20 neurons with a sigmoid activation
- Layer 2 has 10 neurons with a softmax activation
- Loss is Negative Log Likelihood (which is also the cross entropy)
You'll need to comment your work such that I understand that you understand what you are doing
1 - Define Parameters
Step5: 2 - Define Model
Step8: 3 - Define Derivatives | <ASSISTANT_TASK:>
Python Code:
# Download the dataset in this directory (does that work on Windows OS ?)
! wget http://deeplearning.net/data/mnist/mnist.pkl.gz
import cPickle, gzip, numpy
import numpy as np
# Load the dataset
f = gzip.open('mnist.pkl.gz', 'rb')
train_set, valid_set, test_set = cPickle.load(f)
f.close()
def to_one_hot(y, n_classes=10): # You might want to use this as some point...
_y = np.zeros((len(y), n_classes))
_y[np.arange(len(y)), y] = 1
return _y
X_train, y_train = train_set[0], train_set[1]
X_valid, y_valid = valid_set[0], valid_set[1]
X_test, y_test = test_set[0], test_set[1]
Explanation: Moving to Shallow Neural Networks
In this tutorial, you'll implement a shallow neural network to classify digits ranging from 0 to 9. The dataset you'll use is quite famous, it's called 'MNIST' http://yann.lecun.com/exdb/mnist/. A French guy put it up, he's very famous in the DL community, he's called Yann Lecun and is now both head of the Facebook AI research program and also holds a position at New York University...
First step
As a first step, I invite you to discover what MNIST is. You might find this notebook to be useful, but feel free to browse the web.
Once you get the idea, you can download the dataset
End of explanation
# HELPER
def softmax(Z):
Z is a vector eg. [1,2,3]
return: the vector softmax(Z) eg. [.09, .24, .67]
return np.exp(Z) / np.exp(Z).sum(axis=0)
# Define the variables here (initialize the weights with the np.random.normal module):
# Shapes follow the architecture described above: 784 inputs -> 20 hidden units -> 10 outputs
# (the 0.01 standard deviation is one common choice).
W1, b1 = np.random.normal(0, 0.01, (784, 20)), np.zeros(20)
W2, b2 = np.random.normal(0, 0.01, (20, 10)), np.zeros(10)
Explanation: You can now implement a 2 layers NN
Now that you have the data, you can build a shallow neural network (SNN). I expect your SNN to have two layers.
- Layer 1 has 20 neurons with a sigmoid activation
- Layer 2 has 10 neurons with a softmax activation
- Loss is Negative Log Likelihood (which is also the cross entropy)
You'll need to comment your work such that I understand that you understand what you are doing
1 - Define Parameters
End of explanation
def Pred(X, ??? ):
Explanations ...
Arguments:
X: An input image (as a vector)(shape is <784,1>)
Returns : a vector ???
pass
def loss(P, Y):
Explanations :
Arguments:
P: The prediction vector corresponding to an image (X^s)
Y: The ground truth of an image
Returns: a vector ???
pass
Explanation: 2 - Define Model
End of explanation
def dW1( ??? ):
Explanations ??
Returns: A vector which is the derivative of the loss with respect to W1
pass
def db1(L, ???):
Explanations ??
Arguments:
L is the loss af a sample (a scalar)
Returns: A scalar which is the derivative of the Loss with respect to b1
pass
def dW2( ??? ):
pass
def db2( ??? ):
pass
Explanation: 3 - Define Derivatives
End of explanation
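For reference, here is one possible way the skeleton above could be filled in. This is only a sketch under the stated architecture (784 inputs, 20 sigmoid hidden units, 10 softmax outputs, negative log likelihood loss); the helper names, shapes and the use of one-hot targets are illustrative choices, not the only valid ones.
def sigmoid(z):
    return 1. / (1. + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    # x is one flattened image of shape (784,)
    a1 = sigmoid(np.dot(W1.T, x) + b1)      # hidden activations, shape (20,)
    p = softmax(np.dot(W2.T, a1) + b2)      # class probabilities, shape (10,)
    return a1, p

def nll(p, y_one_hot):
    # negative log likelihood (cross entropy) for a single sample
    return -np.sum(y_one_hot * np.log(p + 1e-12))

def gradients(x, y_one_hot, W1, b1, W2, b2):
    a1, p = forward(x, W1, b1, W2, b2)
    delta2 = p - y_one_hot                          # gradient at the softmax pre-activation
    gW2, gb2 = np.outer(a1, delta2), delta2
    delta1 = np.dot(W2, delta2) * a1 * (1. - a1)    # backpropagate through the sigmoid
    gW1, gb1 = np.outer(x, delta1), delta1
    return gW1, gb1, gW2, gb2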
<END_TASK> |
71 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
In Paramz/GPy we can implement our own optimizer in a really simple way. We need to supply GPy with an implementation of the Optimizer class.
The Optimizer has a name, which is the most important attribute.
It also provides an opt() method, which optimizes the result given an optimization function and a starting point.
Step1: This is all we need, GPy/Paramz will handle the rest for you
Step2: This is the model plot before optimization
Step3: And then the optimized state after running RProp | <ASSISTANT_TASK:>
Python Code:
# Get the parameters for Rprop of climin:
climin.Rprop?
class RProp(Optimizer):
# We want the optimizer to know some things in the Optimizer implementation:
def __init__(self, step_shrink=0.5, step_grow=1.2, min_step=1e-06, max_step=1, changes_max=0.1, *args, **kwargs):
super(RProp, self).__init__(*args, **kwargs)
self.opt_name = 'RProp (climin)'
self.step_shrink = step_shrink
self.step_grow = step_grow
self.min_step = min_step
self.max_step = max_step
self.changes_max = changes_max
def opt(self, x_init, f_fp=None, f=None, fp=None):
        # We only need the gradient function (fp) of the objective; RProp uses only the sign of the gradient
assert not fp is None
# Do the optimization, giving previously stored parameters
opt = climin.rprop.Rprop(x_init, fp,
step_shrink=self.step_shrink, step_grow=self.step_grow,
min_step=self.min_step, max_step=self.max_step,
changes_max=self.changes_max)
# Get the optimized state and transform it into Paramz readable format by setting
# values on this object:
# Important ones are x_opt and status:
for info in opt:
if info['n_iter']>=self.max_iters:
self.x_opt = opt.wrt
self.status = 'maximum number of function evaluations exceeded'
break
Explanation: In Paramz/GPy we can implement our own optimizer in a really simple way. We need to supply GPy with an implementation of the Optimizer class.
The Optimizer has a name, which is the most important attribute.
It also provides an opt() method, which optimizes the result given an optimization function and a starting point.
End of explanation
m = GPy.examples.regression.toy_rbf_1d_50(optimize=False, plot=False)
Explanation: This is all we need, GPy/Paramz will handle the rest for you : )
End of explanation
m
m.plot()
m.optimize(RProp(), messages=1)
Explanation: This is the model plot before optimization:
End of explanation
m
m.plot()
Explanation: And then the optimized state after running RProp:
End of explanation
<END_TASK> |
72 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Creating records with date, time and timestamp fields
Step1: Let's first import fmrest, its FileMakerError and requests.
Step2: Now access the FMS and log in.
Step3: Before writing anything to the database, we create an empty record.
Step4: The table in the test database contains five fields, one for every field type FM provides, i.e. text, number, date, time and timestamp.
UTF-8 text can also include some characters / code points in the private areas, like the apple-sign ( ).
Date does not accept the ISO 8601 format though, instead use 'MM/dd/yyyy'.
The same applies to the date part in timestamps.
In order to check via the API itself, get https
Step5: Slightly different input here. We pass a number as a string, a date without a leading zero and we use an AM/PM alternative for the date. | <ASSISTANT_TASK:>
Python Code:
import sys
print(sys.version)
Explanation: Creating records with date, time and timestamp fields
End of explanation
import fmrest
print(fmrest.__version__)
from fmrest.exceptions import FileMakerError
import requests
requests.packages.urllib3.disable_warnings()
Explanation: Let's first import fmrest, its FileMakerError and requests.
End of explanation
fms = fmrest.Server('https://10.10.10.10', # https://filemaker.example.org
user='fmsrest',
password='fmsrest',
database='testdatabase', # Using a test database
# The test database contains one table and one layout,
# both are named 'datatypes'
layout='datatypes',
# if you are testing without cert/domain
# you may need the parameter verify_ssl=False here.
verify_ssl=False
)
print(fms)
fms.login()
print(fms)
print()
Explanation: Now access the FMS and log in.
End of explanation
new_data = {} # empty dict
empty_record = fms.create_record( new_data )
print(empty_record)
Explanation: Before writing anything to the database, we create an empty record.
End of explanation
new_data_2 = {
'mytext': ' Ͼ Ͽ ༺ ༻ ༼ ༽ ༾ ༿ ᐸ ᐳ ⎨ ⎬ ❮ ❯ ⫷ ⫸ ⸨ ⸩ ﹤ ﹥ < > ',
'mynum': 1234567,
'mydate': '02/22/2022', # month/day/year, with leading zero
'mytime': '9:15:55', # 24h format, no 'AM/PM', see above
'mystamp': '02/22/2022 21:15:55', # no 'AM/PM', see above
}
new_record_2 = fms.create_record(new_data_2)
print(new_record_2)
Explanation: The table in the test database contains five fields, one for every field type FM provides, i.e. text, number, date, time and timestamp.
UTF-8 text can also include some characters / code points in the private areas, like the apple-sign ( ).
Date does not accept the ISO 8601 format though, instead use 'MM/dd/yyyy'.
The same applies to the date part in timestamps.
In order to check via the API itself, get https://[your_fms_host_here]/fmi/data/vLatest/productInfo. This will return - at the time of this writing - "dateFormat":"MM/dd/yyyy", "timeFormat":"HH:mm:ss", "timeStampFormat":"MM/dd/yyyy HH:mm:ss".
Apparently, FM accepts time and the time part of a timestamp in two formats. Either use a 24h format or an AM/PM alternative (see below).
End of explanation
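Since these format strings are easy to get wrong, here is a small illustrative helper (an assumption made for convenience, not part of python-fmrest) that turns Python datetime objects into the 'MM/dd/yyyy' and 'MM/dd/yyyy HH:mm:ss' strings described above.
from datetime import datetime

def fm_date(dt):
    return dt.strftime('%m/%d/%Y')

def fm_timestamp(dt):
    return dt.strftime('%m/%d/%Y %H:%M:%S')

print(fm_date(datetime(2022, 2, 22)))                   # 02/22/2022
print(fm_timestamp(datetime(2022, 2, 22, 21, 15, 55)))  # 02/22/2022 21:15:55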
new_data_3 = {
'mytext': 'Любви, мира и счастья!',
'mynum': '667',
'mydate': '2/22/2022', # month/day/year, no leading zero
'mytime': '9:15:55 AM', # use 'AM/PM' format
'mystamp': '2/22/2022 9:15:55 PM', # use 'AM/PM'
}
new_record_3 = fms.create_record(new_data_3)
print(new_record_3)
Explanation: Slightly different input here. We pass a number as a string, a date without a leading zero, and we use an AM/PM alternative for the time values.
End of explanation
<END_TASK> |
73 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<h3>Basic Recipe for Training a POS Tagger with SpaCy</h3>
<ol>
<li id="loaddatatitle"><a href="#-Load-Data-">Load Data </a>
<ol><li>We'll be using a sample from Web Treebank corpus, in ConllX format</ol>
<li><a href="#Prepare-Environment-for-New-Model">Prepare environment for a new model</a>
<ol><li>New model directory, with tagger and parser subdirectories. (Ensure you have permission)</ol>
<li><a href="#Build-a-Vocabulary">Build a vocabulary</a>
<ol>
<li>We are just going to load the default English Vocabulary
<li>Defines how we get attributes (like suffix) from a token string
<li>Includes brown cluster data on lexemes, we'll use as a feature for the parser
</ol>
<li> <a href="#Build-a-Tagger">Build a Tagger</a>
<ol><li>Ensure tagmap is provided if needed</ol>
<ol><li>Which features should be used to train tagger?</ol>
<li><a href="#Train-Tagger"> Train Tagger</a>
<ol><li>Averaged Perceptron algorithm
<li>For each epoch
Step1: <a href="#loaddatatitle">back</a>
<br>
Prepare Environment for New Model
Step2: <a href="#loaddatatitle">back</a>
<br>
Build a Vocabulary
Step3: <a href="#loaddatatitle">back</a>
<br>
Build a Tagger
Step4: <a href="#loaddatatitle">back</a>
<br>
Train Tagger
Step5: <a href="#loaddatatitle">back</a>
<br>
Save Tagger | <ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('/home/jupyter/site-packages/')
import requests
from spacy.syntax.arc_eager import PseudoProjectivity
def read_conllx(text):
bad_lines = 0
#t = text.strip()
#print(type(t), type('\n\n'))
# u = t.split(b'\n\n')
n_sent = 0
n_line = 0
print('text=%d' % len(text))
# text = str(text)
# print('text=%d' % len(text))
for sent in text.strip().split('\n\n'):
n_sent += 1
lines = sent.strip().split('\n')
if lines:
while lines[0].startswith('#'):
lines.pop(0)
tokens = []
for line in lines:
n_line += 1
try:
id_, word, lemma, tag, pos, morph, head, dep, _1, _2 = line.split()
if '-' in id_:
continue
id_ = float(id_) - 1
try:
head = (int(head) - 1) if head != '0' else id_
except:
head = id_
dep = 'ROOT' if dep == 'root' else dep
tokens.append((id_, word, pos, int(head), dep, 'O'))
except:
bad_lines += 1
print('***', line)
raise
if not tokens:
continue
tuples = [list(t) for t in zip(*tokens)]
yield (None, [[tuples, []]])
print("Skipped %d malformed lines" % bad_lines)
print('n_sent=%d' % n_sent)
print('n_line=%d' % n_line)
def LoadData(url, path, make_projective=False):
if url:
conll_string = str(requests.get(url).content)
elif path:
conll_string = open(path).read()
print('conll_string=%d' % len(conll_string))
sents = list(read_conllx(conll_string))
if make_projective:
sents = PseudoProjectivity.preprocess_training_data(sents)
return sents
train_url = 'https://raw.githubusercontent.com/UniversalDependencies/UD_English/master/en-ud-train.conllu'
test_url = 'https://raw.githubusercontent.com/UniversalDependencies/UD_English/master/en-ud-test.conllu'
train_path = '/Users/pcadmin/code/spacy-examples/en-ud-train.conllu.txt'
train_sents = LoadData(None, train_path)
test_sents = LoadData(test_url, None)  # needed below by score_model() and train()
print('train=%d' % len(train_sents))
#print('test =%d' % len(test_sents))
def sent_iter(conll_corpus):
for _, doc_sents in conll_corpus:
# print(len(doc_sents))
# print(doc_sents[0])
for (ids, words, tags, heads, deps, ner), _ in doc_sents:
yield ids, words, tags, heads, deps, ner
print('train=%d' % len(train_sents))
sent_counter = 0
unique_tags = set()
for ids, words, tags, heads, deps, ner in sent_iter(train_sents):
unique_tags.update(tags)
sent_counter += 1
doc_counter = len(train_sents)
print("Training corpus metadata")
print()
print("Number of Sentences: %d" % sent_counter)
print("Number of Unique Tags: %d" % len(unique_tags))
print("Unique Tags: %s" % sorted(unique_tags))
Explanation: <h3>Basic Recipe for Training a POS Tagger with SpaCy</h3>
<ol>
<li id="loaddatatitle"><a href="#-Load-Data-">Load Data </a>
<ol><li>We'll be using a sample from Web Treebank corpus, in ConllX format</ol>
<li><a href="#Prepare-Environment-for-New-Model">Prepare environment for a new model</a>
<ol><li>New model directory, with tagger and parser subdirectories. (Ensure you have permission)</ol>
<li><a href="#Build-a-Vocabulary">Build a vocabulary</a>
<ol>
<li>We are just going to load the default English Vocabulary
<li>Defines how we get attributes (like suffix) from a token string
<li>Includes brown cluster data on lexemes, we'll use as a feature for the parser
</ol>
<li> <a href="#Build-a-Tagger">Build a Tagger</a>
<ol><li>Ensure tagmap is provided if needed</ol>
<ol><li>Which features should be used to train tagger?</ol>
<li><a href="#Train-Tagger"> Train Tagger</a>
<ol><li>Averaged Perceptron algorithm
<li>For each epoch:
<ol><li>For each document in training data:
<ol><li>For each sentence in document:
<ol>
<li>Create document with sentence words (tagger not yet applied)
<li>Create GoldParse object with annotated labels
<li>Apply the tagger to the document to get predictions
<li>Update the tagger with GoldParse, Document (actual v predicted)
</ol>
</ol>
<li> Score predictions on validation set
</ol>
</ol>
<li><a href="#Save-Tagger">Save Tagger</a>
<h3> Load Data </h3>
End of explanation
from pathlib import Path
import spacy
def prepare_environment_for_new_tagger(model_path, tagger_path):
if not model_dir.exists():
model_dir.mkdir()
if not tagger_path.exists():
tagger_path.mkdir()
data_dir = spacy.en.get_data_path()
model_dir = data_dir / 'en-1.1.0'
tagger_dir = model_dir / 'custom-pos-tagger'
prepare_environment_for_new_tagger(model_dir, tagger_dir)
Explanation: <a href="#loaddatatitle">back</a>
<br>
Prepare Environment for New Model
End of explanation
from spacy.vocab import Vocab
def build_vocab(model_dir, vec_path = None, lexeme_path = None):
vocab = Vocab.load(model_dir)
if lexeme_path:
vocab.load_lexemes(lexeme_path)
if vec_path:
vocab.load_vectors_from_bin_loc(vec_path)
return vocab
lexeme_path = model_dir / 'vocab' / 'lexemes.bin'
vocab = build_vocab(model_dir, lexeme_path=lexeme_path)
#test clusters are available
from spacy.tokens import Doc
doc = Doc(vocab, words=[u'He',u'ate',u'pizza',u'.'])
print "Cluster Value for '{}': {}".format(*[doc[0], doc[0].cluster])
Explanation: <a href="#loaddatatitle">back</a>
<br>
Build a Vocabulary
End of explanation
from spacy.tagger import Tagger
from spacy.tagger import *
features = [
(W_orth,),(W_shape,),(W_cluster,),(W_flags,),(W_suffix,),(W_prefix,), #current word attributes
(P1_pos,),(P1_cluster,),(P1_flags,),(P1_suffix,), #-1 word attributes
(P2_pos,),(P2_cluster,),(P2_flags,), #-2 word attributes
(N1_orth,),(N1_suffix,),(N1_cluster,),(N1_flags,), #+1 word attributes
(N2_orth,),(N2_cluster,),(N2_flags,), #+2 word attributes
(P1_lemma, P1_pos),(P2_lemma, P2_pos), (P1_pos, P2_pos),(P1_pos, W_orth) #combination attributes
]
features = spacy.en.English.Defaults.tagger_features
tag_map = spacy.en.tag_map
statistical_model = spacy.tagger.TaggerModel(features)
tagger = Tagger(vocab, tag_map=tag_map, statistical_model = statistical_model)
Explanation: <a href="#loaddatatitle">back</a>
<br>
Build a Tagger
End of explanation
from spacy.scorer import Scorer
from spacy.gold import GoldParse
import random
def score_model(vocab, tagger, gold_docs, verbose=False):
scorer = Scorer()
for _, gold_doc in gold_docs:
for (ids, words, tags, heads, deps, entities), _ in gold_doc:
doc = Doc(vocab, words=map(unicode,words))
tagger(doc)
gold = GoldParse(doc, tags=tags)
scorer.score(doc, gold, verbose=verbose)
return scorer
def train(tagger, vocab, train_sents, test_sents, model_dir, n_iter=20, seed = 0, feat_set = u'basic'):
scorer = score_model(vocab, tagger, test_sents)
print('%s:\t\t%s' % ("Iteration", "POS Tag Accuracy"))
print('%s:\t\t%.3f' % ("Pretraining", scorer.tags_acc))
#TRAINING STARTS HERE
for itn in range(n_iter):
for ids, words, tags, heads, deps, ner in sent_iter(train_sents):
doc = Doc(vocab, words=map(unicode,words))
gold = GoldParse(doc, tags=tags, heads=heads, deps=deps)
tagger(doc)
tagger.update(doc, gold)
random.shuffle(train_sents)
scorer = score_model(vocab, tagger, test_sents)
print('%d:\t\t\t%.3f' % (itn, scorer.tags_acc))
return tagger
trained_tagger = train(tagger, vocab, train_sents, test_sents, model_dir, n_iter = 10)
Explanation: <a href="#loaddatatitle">back</a>
<br>
Train Tagger
End of explanation
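As a quick, illustrative sanity check (not part of the original notebook's flow), the trained tagger can be applied to a new sentence built from the same vocabulary; .tag_ is spaCy's attribute for the predicted fine-grained tag.
doc = Doc(vocab, words=[u'I', u'like', u'blue', u'eggs', u'.'])
trained_tagger(doc)
print([(w.orth_, w.tag_) for w in doc])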
def ensure_dir(path):
if not path.exists():
path.mkdir()
ensure_dir(tagger_dir)
trained_tagger.model.dump(str(tagger_dir / 'model'))
Explanation: <a href="#loaddatatitle">back</a>
<br>
Save Tagger
End of explanation
<END_TASK> |
74 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
syncID
Step1: As well as our function to read the hdf5 reflectance files and associated metadata
Step2: Define the location where you are holding the data for the data institute. The h5_filename will be the flightline which contains the tarps, and the tarp_48_filename and tarp_03_filename contain the field validated spectra for the white and black tarp respectively, organized by wavelength and reflectance.
Step3: We want to pull the spectra from the airborne data from the center of the tarp to minimize any errors introduced by infiltrating light in adjecent pixels, or through errors in ortho-rectification (source 2). We have pre-determined the coordinates for the center of each tarp which are as follows
Step4: Now we'll use our function designed for NEON AOP's HDF5 files to access the hyperspectral data
Step5: Within the reflectance curves there are areas with noisey data due to atmospheric windows in the water absorption bands. For this exercise we do not want to plot these areas as they obscure detailes in the plots due to their anamolous values. The meta data assocaited with these band locations is contained in the metadata gatherd by our function. We will pull out these areas as 'bad band windows' and determine which indexes in the reflectance curves contain the bad bands
Step6: Now join the list of indexes together into a single variable
Step7: The reflectance data is saved in files which are 'tab delimited.' We will use a numpy function (genfromtxt) to quickly import the tarp reflectance curves observed with the ASD using the '\t' delimeter to indicate tabs are used.
Step8: Now we'll set all the data inside of those windows to NaNs (not a number) so they will not be included in the plots
Step9: The next step is to determine which pixel in the reflectance data belongs to the center of each tarp. To do this, we will subtract the tarp center pixel location from the upper left corner pixels specified in the map info of the H5 file. This information is saved in the metadata dictionary output from our function that reads NEON AOP HDF5 files. The difference between these coordinates gives us the x and y index of the reflectance curve.
Step10: Next, we will plot both the curve from the airborne data taken at the center of the tarps as well as the curves obtained from the ASD data to provide a visualisation of their consistency for both tarps. Once generated, we will also save the figure to a pre-determined location.
Step11: This produces plots showing the results of the ASD and airborne measurements over the 48% tarp. Visually, the comparison between the two appears to be fairly good. However, over the 3% tarp we appear to be over-estimating the reflectance. Large absolute differences could be associated with ATCOR input parameters (source 4). For example, the user must input the local visibility, which is related to aerosol optical thickness (AOT). We don't measure this at every site, therefore we input a standard parameter for all sites.
Given the 3% reflectance tarp has much lower overall reflectance, it may be more informative to determine what the absolute difference between the two curves is and plot that as well.
Step12: From this we are able to see that the 48% tarp actually has larger absolute differences than the 3% tarp. The 48% tarp performs poorly at the shortest and longest wavelengths as well as near the edges of the 'bad band windows.' This is related to difficulty in calibrating the sensor in these sensitive areas (source 1).
Let's now determine the result of the percent difference, which is the metric used by ATCOR to report accuracy. We can do this by calculating the ratio of the absolute difference between curves to the total reflectance | <ASSISTANT_TASK:>
Python Code:
import h5py
import csv
import numpy as np
import os
import gdal
import matplotlib.pyplot as plt
import sys
from math import floor
import time
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
Explanation: syncID: 84457ead9b964c8d916eacde9f271ec7
title: "Assessing Spectrometer Accuracy using Validation Tarps with Python"
description: "Learn to analyze the difference between rasters taken a few days apart to assess the uncertainty between days."
dateCreated: 2017-06-21
authors: Tristan Goulden
contributors: Donal O'Leary
estimatedTime: 0.5 hour
packagesLibraries: numpy, gdal, matplotlib
topics: hyperspectral-remote-sensing, remote-sensing
languagesTool: python
dataProduct:
code1: https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/Hyperspectral/uncertainty-and-validation/hyperspectral_validation_py/hyperspectral_validation_py.ipynb
tutorialSeries: rs-uncertainty-py-series
urlTitle: hyperspectral-validation-py
In this tutorial we will learn how to retrieve reflectance curves from a
pre-specified coordinate in a NEON AOP HDF5 file, learn how to read a
tab delimited text file, retrieve bad band window indexes and mask portions of
a reflectance curve, plot reflectance curves on a graph and save the file,
gain an understanding of some sources of uncertainty in NIS data.
<div id="ds-objectives" markdown="1">
### Objectives
After completing this tutorial, you will be able to:
* Retrieve reflectance curves from a pre-specified coordinate in a NEON AOP HDF5 file,
* Read a tab delimited text file
* Retrieve bad band window indexes and mask portions of a reflectance curve
* Plot reflectance curves on a graph and save the file
* Explain some sources of uncertainty in NEON image spectrometry data.
### Install Python Packages
* **numpy**
* **pandas**
* **gdal**
* **matplotlib**
* **h5py**
* **IPython.display**
### Download Data
To complete this tutorial, you will use data available from the NEON 2017 Data
Institute.
This tutorial uses the following files:
<ul>
<li>CHEQ_Tarp_03_02_refl_bavg.txt (9 KB)</li>
<li>CHEQ_Tarp_48_01_refl_bavg.txt (9 KB)</li>
<li>NEON_D05_CHEQ_DP1_20160912_160540_reflectance.h5 (2.7 GB)</li>
</ul>
Which may be downloaded <a href="https://neondata.sharefile.com/share/view/cdc8242e24ad4517/fofeb6d6-9ebf-4310-814f-9ae4aea8fbd9" target="_blank">from our ShareFile directory here</a>.
<a href="https://neondata.sharefile.com/share/view/cdc8242e24ad4517/fofeb6d6-9ebf-4310-814f-9ae4aea8fbd9" class="link--button link--arrow">
Download Dataset</a>
The LiDAR and imagery data used to create this raster teaching data subset
were collected over the
<a href="http://www.neonscience.org/" target="_blank"> National Ecological Observatory Network's</a>
<a href="http://www.neonscience.org/science-design/field-sites/" target="_blank" >field sites</a>
and processed at NEON headquarters.
The entire dataset can be accessed on the
<a href="http://data.neonscience.org" target="_blank"> NEON data portal</a>.
These data are a part of the NEON 2017 Remote Sensing Data Institute. The complete archive may be found here -<a href="https://neondata.sharefile.com/d-s11d5c8b9c53426db"> NEON Teaching Data Subset: Data Institute 2017 Data Set</a>
### Recommended prerequisites
We recommend you complete the following tutorials prior to this tutorial to have
the necessary background.
1. <a href="https://www.neonscience.org/neon-aop-hdf5-py"> *NEON AOP Hyperspectral Data in HDF5 format with Python*</a>
1. <a href="https://www.neonscience.org/neon-hsi-aop-functions-python"> *Band Stacking, RGB & False Color Images, and Interactive Widgets in Python*</a>
1. <a href="https://www.neonscience.org/plot-spec-sig-python/"> *Plot a Spectral Signature in Python*</a>
</div>
In this tutorial we will be examining the accuracy of the NEON Imaging Spectrometer
(NIS) against targets with known reflectance. The targets consist of two 10 x 10 m
tarps which have been specially designed to have 3% reflectance (black tarp) and
48% reflectance (white tarp) across all of the wavelengths collected by the NIS
(see images below). During the Sept. 12 2016 flight over the
Chequamegon-Nicolet National Forest, an area in D05 which is part of
Steigerwaldt (STEI) site, these tarps were deployed in a gravel pit. During the
airborne overflight, observations were also taken over the tarps with an ASD
field spectrometer. The ASD measurments provide a validation source against the
the airborne measurements.
<figure class="half">
<a href="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/neon-aop/tarps_close.jpg">
<img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/neon-aop/tarps_close.jpg">
</a>
<a href="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/neon-aop/tarps_far.jpg">
<img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/neon-aop/tarps_far.jpg">
</a>
</figure>
<figure>
<a href="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/neon-aop/tarps_aerial.jpg">
<img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/neon-aop/tarps_aerial.jpg"></a>
<figcaption> The validation tarps, 3% reflectance (black tarp) and
48% reflectance (white tarp), laid out in the field.
Source: National Ecological Observatory Network (NEON)
</figcaption>
</figure>
To test the accuracy, we will utilize reflectance curves from the tarps as well
as from the associated flight line and execute absolute and relative comparisons.
The major error sources in the NIS can be generally categorized into the
following sources:
Calibration of the sensor
Quality of ortho-rectification
Accuracy of radiative transfer code and subsequent ATCOR interpolation
Selection of atmospheric input parameters
Terrain relief
Terrain cover
Note that the manual for ATCOR, the atmospheric correction software used by AOP,
specifies the accuracy of reflectance retrievals to be between 3 and 5% of
total reflectance. The tarps are located in a flat area, therefore, influences
by terrain relief should be minimal. We will have to keep the remaining errors
in mind as we analyze the data.
Get Started
We'll start by adding all of the necessary libraries to our python script.
End of explanation
def h5refl2array(h5_filename):
hdf5_file = h5py.File(h5_filename,'r')
#Get the site name
file_attrs_string = str(list(hdf5_file.items()))
file_attrs_string_split = file_attrs_string.split("'")
sitename = file_attrs_string_split[1]
refl = hdf5_file[sitename]['Reflectance']
reflArray = refl['Reflectance_Data']
refl_shape = reflArray.shape
wavelengths = refl['Metadata']['Spectral_Data']['Wavelength']
#Create dictionary containing relevant metadata information
metadata = {}
metadata['shape'] = reflArray.shape
metadata['mapInfo'] = refl['Metadata']['Coordinate_System']['Map_Info']
    #Extract no data value & set no data value to NaN
metadata['scaleFactor'] = float(reflArray.attrs['Scale_Factor'])
metadata['noDataVal'] = float(reflArray.attrs['Data_Ignore_Value'])
metadata['bad_band_window1'] = (refl.attrs['Band_Window_1_Nanometers'])
metadata['bad_band_window2'] = (refl.attrs['Band_Window_2_Nanometers'])
metadata['projection'] = refl['Metadata']['Coordinate_System']['Proj4'].value
metadata['EPSG'] = int(refl['Metadata']['Coordinate_System']['EPSG Code'].value)
mapInfo = refl['Metadata']['Coordinate_System']['Map_Info'].value
    mapInfo_string = str(mapInfo); #print('Map Info:',mapInfo_string)
mapInfo_split = mapInfo_string.split(",")
#Extract the resolution & convert to floating decimal number
metadata['res'] = {}
metadata['res']['pixelWidth'] = mapInfo_split[5]
metadata['res']['pixelHeight'] = mapInfo_split[6]
    #Extract the upper left-hand corner coordinates from mapInfo
    xMin = float(mapInfo_split[3]) #convert from string to floating point number
    yMax = float(mapInfo_split[4])
    #Calculate the xMax and yMin values from the dimensions
    xMax = xMin + (refl_shape[1]*float(metadata['res']['pixelWidth'])) #xMax = left edge + (# of columns * resolution)
    yMin = yMax - (refl_shape[0]*float(metadata['res']['pixelHeight'])) #yMin = top edge - (# of rows * resolution)
    metadata['extent'] = (xMin,xMax,yMin,yMax)
metadata['ext_dict'] = {}
metadata['ext_dict']['xMin'] = xMin
metadata['ext_dict']['xMax'] = xMax
metadata['ext_dict']['yMin'] = yMin
metadata['ext_dict']['yMax'] = yMax
    hdf5_file.close()
return reflArray, metadata, wavelengths
Explanation: As well as our function to read the hdf5 reflectance files and associated metadata
End of explanation
print('Start CHEQ tarp uncertainty script')
## You will need to change these filepaths according to your own machine
## As you can see here, I saved the files downloaded above into my ~/Git/data/ directory
h5_filename = '/Users/olearyd/Git/data/NEON_D05_CHEQ_DP1_20160912_160540_reflectance.h5'
tarp_48_filename = '/Users/olearyd/Git/data/CHEQ_Tarp_48_01_refl_bavg.txt'
tarp_03_filename = '/Users/olearyd/Git/data/CHEQ_Tarp_03_02_refl_bavg.txt'
Explanation: Define the location where you are holding the data for the data institute. The h5_filename will be the flightline which contains the tarps, and the tarp_48_filename and tarp_03_filename contain the field validated spectra for the white and black tarp respectively, organized by wavelength and reflectance.
End of explanation
tarp_48_center = np.array([727487,5078970])
tarp_03_center = np.array([727497,5078970])
Explanation: We want to pull the spectra from the airborne data at the center of the tarp to minimize any errors introduced by infiltrating light in adjacent pixels, or through errors in ortho-rectification (source 2). We have pre-determined the coordinates for the center of each tarp which are as follows:
48% reflectance tarp UTMx: 727487, UTMy: 5078970
3% reflectance tarp UTMx: 727497, UTMy: 5078970
<figure>
<a href="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/neon-aop/tarp_centers.jpg">
<img src="https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/graphics/neon-aop/tarp_centers.jpg"></a>
<figcaption> The validation tarps, 3% reflectance (black tarp) and
48% reflectance (white tarp), laid out in the field.
Source: National Ecological Observatory Network (NEON)
</figcaption>
</figure>
Let's define these coordinates
End of explanation
[reflArray,metadata,wavelengths] = h5refl2array(h5_filename)
Explanation: Now we'll use our function designed for NEON AOP's HDF5 files to access the hyperspectral data
End of explanation
bad_band_window1 = (metadata['bad_band_window1'])
bad_band_window2 = (metadata['bad_band_window2'])
index_bad_window1 = [i for i, x in enumerate(wavelengths) if x > bad_band_window1[0] and x < bad_band_window1[1]]
index_bad_window2 = [i for i, x in enumerate(wavelengths) if x > bad_band_window2[0] and x < bad_band_window2[1]]
Explanation: Within the reflectance curves there are areas with noisy data due to atmospheric windows in the water absorption bands. For this exercise we do not want to plot these areas as they obscure details in the plots due to their anomalous values. The metadata associated with these band locations is contained in the metadata gathered by our function. We will pull out these areas as 'bad band windows' and determine which indexes in the reflectance curves contain the bad bands
End of explanation
index_bad_windows = index_bad_window1+index_bad_window2
Explanation: Now join the list of indexes together into a single variable
End of explanation
tarp_48_data = np.genfromtxt(tarp_48_filename, delimiter = '\t')
tarp_03_data = np.genfromtxt(tarp_03_filename, delimiter = '\t')
Explanation: The reflectance data is saved in files which are 'tab delimited.' We will use a numpy function (genfromtxt) to quickly import the tarp reflectance curves observed with the ASD, using the '\t' delimiter to indicate that tabs are used.
End of explanation
tarp_48_data[index_bad_windows] = np.nan
tarp_03_data[index_bad_windows] = np.nan
Explanation: Now we'll set all the data inside of those windows to NaNs (not a number) so they will not be included in the plots
End of explanation
x_tarp_48_index = int((tarp_48_center[0] - metadata['ext_dict']['xMin'])/float(metadata['res']['pixelWidth']))
y_tarp_48_index = int((metadata['ext_dict']['yMax'] - tarp_48_center[1])/float(metadata['res']['pixelHeight']))
x_tarp_03_index = int((tarp_03_center[0] - metadata['ext_dict']['xMin'])/float(metadata['res']['pixelWidth']))
y_tarp_03_index = int((metadata['ext_dict']['yMax'] - tarp_03_center[1])/float(metadata['res']['pixelHeight']))
Explanation: The next step is to determine which pixel in the reflectance data belongs to the center of each tarp. To do this, we will subtract the tarp center pixel location from the upper left corner pixels specified in the map info of the H5 file. This information is saved in the metadata dictionary output from our function that reads NEON AOP HDF5 files. The difference between these coordinates gives us the x and y index of the reflectance curve.
End of explanation
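# Optional sanity check (an editorial addition, not part of the original workflow): confirm
# the computed tarp indexes fall inside the reflectance array before extracting spectra.
print('48% tarp (row, col):', y_tarp_48_index, x_tarp_48_index)
print('3% tarp (row, col):', y_tarp_03_index, x_tarp_03_index)
print('Reflectance array dimensions:', metadata['shape'])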
plt.figure(1)
tarp_48_reflectance = np.asarray(reflArray[y_tarp_48_index,x_tarp_48_index,:], dtype=np.float32)/metadata['scaleFactor']
tarp_48_reflectance[index_bad_windows] = np.nan
plt.plot(wavelengths,tarp_48_reflectance,label = 'Airborne Reflectance')
plt.plot(wavelengths,tarp_48_data[:,1], label = 'ASD Reflectance')
plt.title('CHEQ 20160912 48% tarp')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Reflectance (%)')
plt.legend()
#plt.savefig('CHEQ_20160912_48_tarp.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
plt.figure(2)
tarp_03_reflectance = np.asarray(reflArray[y_tarp_03_index,x_tarp_03_index,:], dtype=np.float32)/ metadata['scaleFactor']
tarp_03_reflectance[index_bad_windows] = np.nan
plt.plot(wavelengths,tarp_03_reflectance,label = 'Airborne Reflectance')
plt.plot(wavelengths,tarp_03_data[:,1],label = 'ASD Reflectance')
plt.title('CHEQ 20160912 3% tarp')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Refelctance (%)')
plt.legend()
#plt.savefig('CHEQ_20160912_3_tarp.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
Explanation: Next, we will plot both the curve from the airborne data taken at the center of the tarps as well as the curves obtained from the ASD data to provide a visualisation of their consistency for both tarps. Once generated, we will also save the figure to a pre-determined location.
End of explanation
plt.figure(3)
plt.plot(wavelengths,tarp_48_reflectance-tarp_48_data[:,1])
plt.title('CHEQ 20160912 48% tarp absolute difference')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Absolute Reflectance Difference (%)')
#plt.savefig('CHEQ_20160912_48_tarp_absolute_diff.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
plt.figure(4)
plt.plot(wavelengths,tarp_03_reflectance-tarp_03_data[:,1])
plt.title('CHEQ 20160912 3% tarp absolute difference')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Absolute Reflectance Difference (%)')
#plt.savefig('CHEQ_20160912_3_tarp_absolute_diff.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
Explanation: This produces plots showing the results of the ASD and airborne measurements over the 48% tarp. Visually, the comparison between the two appears to be fairly good. However, over the 3% tarp we appear to be over-estimating the reflectance. Large absolute differences could be associated with ATCOR input parameters (source 4). For example, the user must input the local visibility, which is related to aerosol optical thickness (AOT). We don't measure this at every site, therefore we input a standard parameter for all sites.
Given the 3% reflectance tarp has much lower overall reflectance, it may be more informative to determine what the absolute difference between the two curves is and plot that as well.
End of explanation
plt.figure(5)
plt.plot(wavelengths,100*np.divide(tarp_48_reflectance-tarp_48_data[:,1],tarp_48_data[:,1]))
plt.title('CHEQ 20160912 48% tarp percent difference')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Percent Reflectance Difference')
plt.ylim((-100,100))
#plt.savefig('CHEQ_20160912_48_tarp_relative_diff.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
plt.figure(6)
plt.plot(wavelengths,100*np.divide(tarp_03_reflectance-tarp_03_data[:,1],tarp_03_data[:,1]))
plt.title('CHEQ 20160912 3% tarp percent difference')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Percent Reflectance Difference')
plt.ylim((-100,150))
#plt.savefig('CHEQ_20160912_3_tarp_relative_diff.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
Explanation: From this we are able to see that the 48% tarp actually has larger absolute differences than the 3% tarp. The 48% tarp performs poorly at the shortest and longest wavelengths as well as near the edges of the 'bad band windows.' This is related to difficulty in calibrating the sensor in these sensitive areas (source 1).
Let's now determine the result of the percent difference, which is the metric used by ATCOR to report accuracy. We can do this by calculating the ratio of the absolute difference between curves to the total reflectance
End of explanation
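# Optional summary (an editorial addition, not part of the original tutorial): mean absolute
# percent difference over the usable bands, ignoring the NaN water-absorption windows.
percent_diff_48 = 100 * np.abs((tarp_48_reflectance - tarp_48_data[:, 1]) / tarp_48_data[:, 1])
percent_diff_03 = 100 * np.abs((tarp_03_reflectance - tarp_03_data[:, 1]) / tarp_03_data[:, 1])
print('48% tarp mean absolute percent difference: {:.1f}%'.format(np.nanmean(percent_diff_48)))
print('3% tarp mean absolute percent difference: {:.1f}%'.format(np.nanmean(percent_diff_03)))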
<END_TASK> |
75 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
    # TODO: Implement Function
    # Pixel values are 8-bit (0-255), so dividing by 255 maps them onto [0, 1] inclusive.
    return np.array(x) / 255.0
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
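# Optional sanity check with synthetic pixel data (not required by the project): normalized
# values should keep their shape and land in the inclusive range [0, 1].
fake_batch = np.random.randint(0, 256, size=(4, 32, 32, 3))
normed = normalize(fake_batch)
print(normed.shape, normed.min(), normed.max())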
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
    # TODO: Implement Function
    # Labels run from 0 to 9, so label i maps to column i of a 10-wide one-hot row.
    class_num = 10
    out = np.zeros((len(x), class_num))
    for i, label in enumerate(x):
        out[i, label] = 1
    return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
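# Illustrative alternative (not used elsewhere in this notebook): the same encoding can be
# produced in one line by indexing into an identity matrix.
print(np.eye(10)[np.array([0, 3, 9])])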
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
    # TODO: Implement Function
    # Batch size is left as None so it can vary at run time.
    return tf.placeholder(tf.float32, shape=(None,) + tuple(image_shape), name='x')
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
shape = (None, )
shape = shape + (n_classes, )
return tf.placeholder(tf.float32, shape=shape, name='y')
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
return tf.placeholder(tf.float32, name='keep_prob')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pick up.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides, maxpool=True):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernal size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernal size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
input_channel = x_tensor.get_shape().as_list()[-1]
weights_size = conv_ksize + (input_channel,) + (conv_num_outputs,)
conv_strides = (1,) + conv_strides + (1,)
pool_ksize = (1,) + pool_ksize + (1,)
pool_strides = (1,) + pool_strides + (1,)
weights = tf.Variable(tf.random_normal(weights_size, stddev=0.01))
biases = tf.Variable(tf.zeros(conv_num_outputs))
out = tf.nn.conv2d(x_tensor, weights, conv_strides, padding='SAME')
out = out + biases
out = tf.nn.relu(out)
if maxpool:
out = tf.nn.max_pool(out, pool_ksize, pool_strides, padding='SAME')
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
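# Optional shape check with illustrative values: 'SAME' padding, 1x1 convolution strides and
# 2x2 pooling strides should map a 32x32x3 input to [None, 16, 16, conv_num_outputs].
check_x = tf.placeholder(tf.float32, shape=(None, 32, 32, 3))
check_out = conv2d_maxpool(check_x, 16, (3, 3), (1, 1), (2, 2), (2, 2))
print(check_out.get_shape().as_list())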
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
num, hight, width, channel = tuple(x_tensor.get_shape().as_list())
new_shape = (-1, hight * width * channel)
# print ('new_shape')
# print (new_shape)
return tf.reshape(x_tensor, new_shape)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
num, dim = x_tensor.get_shape().as_list()
weights = tf.Variable(tf.random_normal((dim, num_outputs), stddev=np.sqrt(2 / num_outputs)))
biases = tf.Variable(tf.zeros(num_outputs))
return tf.nn.relu(tf.matmul(x_tensor, weights) + biases)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
num, dim = x_tensor.get_shape().as_list()
    weights = tf.Variable(tf.random_normal((dim, num_outputs), stddev=np.sqrt(2 / num_outputs)))
biases = tf.Variable(tf.zeros(num_outputs))
return tf.matmul(x_tensor, weights) + biases
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
conv_ksize3 = (3, 3)
conv_ksize1 = (1, 1)
conv_ksize5 = (5, 5)
conv_ksize7 = (7, 7)
conv_strides1 = (1, 1)
conv_strides2 = (2, 2)
pool_ksize = (2, 2)
pool_strides = (2, 2)
channels = [32,128,512,512]
# L = 4
out = x
# 6 layers
# for i in range(int(L / 4)):
out = conv2d_maxpool(out, channels[0], conv_ksize7, conv_strides1, pool_ksize, pool_strides, maxpool=True)
out = conv2d_maxpool(out, channels[1], conv_ksize5, conv_strides1, pool_ksize, pool_strides, maxpool=True)
out = conv2d_maxpool(out, channels[2], conv_ksize3, conv_strides1, pool_ksize, pool_strides, maxpool=True)
# out = conv2d_maxpool(out, channels[3], conv_ksize5, conv_strides2, pool_ksize, pool_strides, maxpool=True)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
out = flatten(out)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
# by remove this fully connected layer can improve performance
out = fully_conn(out, 256)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
out = tf.nn.dropout(out, keep_prob)
out = output(out, 10)
# TODO: return output
return out
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
feed_dict = {keep_prob: keep_probability, x: feature_batch, y: label_batch}
session.run(optimizer, feed_dict=feed_dict)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
    # Print loss, training accuracy (useful for spotting overfitting), and validation
    # accuracy; validation accuracy is computed in batches to limit memory use.
loss = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
train_accuracy = session.run(accuracy, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
batch = feature_batch.shape[0]
num_valid = valid_features.shape[0]
val_accuracy = 0
for i in range(0, num_valid, batch):
end_i = i + batch
if end_i > num_valid:
end_i = num_valid
batch_accuracy = session.run(accuracy, feed_dict={
x: valid_features[i:end_i], y: valid_labels[i:end_i], keep_prob: 1.0})
batch_accuracy *= (end_i - i)
val_accuracy += batch_accuracy
val_accuracy /= num_valid
print ('loss is {}, train_accuracy is {}, val_accuracy is {}'.format(loss, train_accuracy, val_accuracy))
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 10
batch_size = 128
keep_probability = 0.8
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation
<END_TASK> |
76 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Different definitions of momentum
By Evgenia "Jenny" Nitishinskaya
Notebook released under the Creative Commons Attribution 4.0 License.
A momentum strategy is one that buys stocks whose prices seem to be rising and sells those whose prices seem to be falling. That is, it assumes that if a stock price has a lot of momentum in some direction, it will keep moving in that direction. In this notebook I try out the different definitions of momentum described in http
Step2: In order to use this in a strategy, we should wrap our momentum calculator in a function
Step3: Now we implement the strategy described in the paper | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
k = 30
start = '2014-01-01'
end = '2015-01-01'
pricing = get_pricing('PEP', fields='price', start_date=start, end_date=end)
fundamentals = init_fundamentals()
num_shares = get_fundamentals(query(fundamentals.earnings_report.basic_average_shares,)
.filter(fundamentals.company_reference.primary_symbol == 'PEP',), end)
x = np.log(pricing)
v = x.diff()
m = get_pricing('PEP', fields='volume', start_date=start, end_date=end)/num_shares.values[0,0]
p0 = pd.rolling_sum(v, k)
p1 = pd.rolling_sum(m*v, k)
p2 = p1/pd.rolling_sum(m, k)
p3 = pd.rolling_mean(v, k)/pd.rolling_std(v, k)
f, (ax1, ax2) = plt.subplots(2,1)
ax1.plot(p0)
ax1.plot(p1)
ax1.plot(p2)
ax1.plot(p3)
ax1.set_title('Momentum of PEP')
ax1.legend(['p(0)', 'p(1)', 'p(2)', 'p(3)'], bbox_to_anchor=(1.1, 1))
ax2.plot(p0)
ax2.plot(p1)
ax2.plot(p2)
ax2.plot(p3)
ax2.axis([0, 300, -0.005, 0.005])
ax2.set_xlabel('Time');
Explanation: Different definitions of momentum
By Evgenia "Jenny" Nitishinskaya
Notebook released under the Creative Commons Attribution 4.0 License.
A momentum strategy is one that buys stocks whose prices seem to be rising and sells those whose prices seem to be falling. That is, it assumes that if a stock price has a lot of momentum in some direction, it will keep moving in that direction. In this notebook I try out the different definitions of momentum described in http://arxiv.org/pdf/1208.2775.pdf. They define 4 different measures, called $p^{(1)}$, $p^{(0)}$, $p^{(2)}$, and $p^{(3)}$.
Their approach is based in physics, where the momentum is defined as $p = mv$, the product of the mass and the velocity. First, they define $x(t)$ to be the log of the price of the security. Conveniently, the return on the security is then the derivative of $x(t)$, which is called the velocity $v(t)$. Then they suggest a number of different definitions of mass $m(t)$; in the examples below, we'll use the inverse of standard deviation and turnover rate as mass. This works with our analogy because the more volatile or the less liquid an asset (the smaller its mass), the easier it is to move its price (i.e. change its position). The different momenta are then defined (for a lookback window $k$) as:
$$p^{(0)}(t) = \sum_{i=0}^{k-1} v(t-i)$$
$$p^{(1)}(t) = \sum_{i=0}^{k-1} m(t-i) v(t-i)$$
$$p^{(2)}(t) = \frac{\sum_{i=0}^{k-1} m(t-i) v(t-i)}{\sum_{i=0}^{k-1} m(t-i)}$$
$$p^{(3)}(t) = \frac{\mu(v(t-k+1),\ldots, v(t))}{\sigma(v(t-k+1),\ldots, v(t))} $$
First, let's just implement the different momentum definitions, and plot the rolling momenta for one stock:
End of explanation
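# How similar are the four definitions in practice? A quick, optional look at their pairwise
# correlations over the PEP series computed above.
momenta = pd.concat([p0, p1, p2, p3], axis=1, keys=['p(0)', 'p(1)', 'p(2)', 'p(3)'])
print(momenta.corr())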
def get_p(prices, m, d, k):
Returns the dth-degree rolling momentum of data using lookback window length k
x = np.log(prices)
v = x.diff()
m = np.array(m)
if d == 0:
return pd.rolling_sum(v, k)
elif d == 1:
return pd.rolling_sum(m*v, k)
elif d == 2:
return pd.rolling_sum(m*v, k)/pd.rolling_sum(m, k)
elif d == 3:
return pd.rolling_mean(v, k)/pd.rolling_std(v, k)
Explanation: In order to use this in a strategy, we should wrap our momentum calculator in a function:
End of explanation
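# Quick optional check: the wrapped function reproduces the rolling momentum computed in the
# first cell for PEP (here d = 2, using the same pricing, m and k defined above).
print(get_p(pricing, m, 2, k).tail())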
# Load the assets we want to trade
start = '2010-01-01'
end = '2015-01-01'
assets = sorted(['STX', 'WDC', 'CBI', 'JEC', 'VMC', 'PG', 'AAPL', 'PEP', 'AON', 'DAL'])
data = get_pricing(assets, start_date='2010-01-01', end_date='2015-01-01').loc['price', :, :]
# Get turnover rate for the assets
fundamentals = init_fundamentals()
num_shares = get_fundamentals(query(fundamentals.earnings_report.basic_average_shares,)
.filter(fundamentals.company_reference.primary_symbol.in_(assets),), end)
turnover = get_pricing(assets, fields='volume', start_date=start, end_date=end)/num_shares.values[0]
# Plot the prices just for fun
data.plot(figsize=(10,7), colors=['r', 'g', 'b', 'k', 'c', 'm', 'orange',
'chartreuse', 'slateblue', 'silver'])
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.)
plt.ylabel('Price')
plt.xlabel('Time');
# Calculate all the rolling momenta for the data and compute daily ranking of assets by momentum
lookback = 30
ps = np.array([np.array(get_p(data, turnover, j, lookback).T) for j in range(4)])
orders = [[ps[j].T[i].argsort() for i in range(len(ps[0,0]))] for j in range(4)]
ranks = [[orders[j][i].argsort() for i in range(len(orders[1]))] for j in range(4)]
# Cast data to numpy array for easier manipulation
data_array = np.array(data)
# Simulate going long on high-momentum stocks and short low-momentum stocks
# Our first 2*lookback - 2 values will be NaN since we used 2 lookback windows, so start on day 2*lookback
tots = [[0]*4 for j in range(len(data) - 2*lookback)]
for t in range(2*lookback, len(ranks[0]) - 2*lookback):
tots[t] = list(tots[t-1])
# Only update portfolio every 2*lookback days
if t%(2*lookback):
continue
# Go long top quintile of stocks and short bottom quintile
shorts = np.array([[int(x < 2)for x in ranks[j][t]] for j in range(4)])
longs = np.array([[int(x > 7) for x in ranks[j][t]] for j in range(4)])
# How many shares of each stock are in $1000
shares_in_1k = 1000/data_array[t]
# Go long and short $1000 each in the specified stocks, then clear holdings in 2*lookback days
returns = (data_array[t+2*lookback]*shares_in_1k - [1000]*len(assets))*(longs - shorts)
    tots[t] = [tot + ret for tot, ret in zip(tots[t], np.sum(returns, 1))]  # element-wise update of the four strategies
# Adjust so that tots[t] is actually money on day t
tots = [[0,0,0,0]]*2*lookback + tots
# Plot total money earned using the 4 different momentum definitions
plt.plot(tots)
plt.title('Cash in portfolio')
plt.legend(['p(0)', 'p(1)', 'p(2)', 'p(3)'], loc=4)
plt.xlabel('Time')
plt.ylabel('$');
Explanation: Now we implement the strategy described in the paper: pick a holding period, and every period rebalance your portfolio to be long \$1000 each in the highest-momentum stocks and short \$1000 each in the lowest-momentum stocks. In the code below, my holding period is simply twice the lookback period.
End of explanation
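# Optional summary of the backtest above: final cash position for each momentum definition.
print(dict(zip(['p(0)', 'p(1)', 'p(2)', 'p(3)'], [round(float(v), 2) for v in tots[-1]])))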
<END_TASK> |
77 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2020 The Cirq Developers
Step3: Shor's algorithm
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step6: Order finding
Factoring an integer $n$ can be reduced to finding the period of the <i>modular exponential function</i> (to be defined). Finding this period can be accomplished (with high probability) by finding the <i>order</i> of a randomly chosen element of the multiplicative group modulo $n$.
Let $n$ be a positive integer and
$$ \mathbb{Z}_n := \{x \in \mathbb{Z}_+ : x < n \ \text{and} \ \gcd(x, n) = 1\} $$ the multiplicative group of integers modulo $n$.
Step8: For example, the multiplicative group modulo $n = 15$ is shown below.
Step11: One can check that this set of elements indeed forms a group (under ordinary multiplication).
Classical order finding
A function for classically computing the order $r$ of an element $x \in \mathbb{Z}_n$ is provided below. This function simply computes the sequence
$$ x^2 \text{ mod } n $$
$$ x^3 \text{ mod } n $$
$$ x^4 \text{ mod } n $$
$$ \vdots $$
until an integer $r$ is found such that $x^r = 1 \text{ mod } n$. Since $|\mathbb{Z}_n| = \phi(n)$, this algorithm for order finding has time complexity $O(\phi(n))$ which is inefficient. (Roughly $O(2^{L / 2})$ where $L$ is the number of bits in $n$.)
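A brute-force version of this search is short enough to sketch here (a hypothetical helper written only for illustration; the notebook's own implementation appears below):
def classical_order_finder_sketch(x, n):
    # Smallest positive r with x**r mod n == 1; assumes gcd(x, n) == 1 so the loop terminates.
    y, r = x % n, 1
    while y != 1:
        y = (y * x) % n
        r += 1
    return r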
Step13: An example of computing $r$ for a given $x \in \mathbb{Z}_n$ and given $n$ is shown in the code block below.
Step16: The quantum part of Shor's algorithm is order finding, but done via a quantum circuit, which we'll discuss below.
Quantum order finding
Quantum order finding is essentially quantum phase estimation with unitary $U$ that computes the modular exponential function $f_x(z)$ for some randomly chosen $x \in \mathbb{Z}_n$. The full details of how $U$ is computed in terms of elementary gates can be complex to unravel, especially on a first reading. In this tutorial, we'll use arithmetic operations in Cirq which can implement such a unitary $U$ without fully delving into the details of elementary gates.
Below we first show an example of a simple arithmetic operation in Cirq (addition) then discuss the operation we care about (modular exponentiation).
Quantum arithmetic operations in Cirq
Here we discuss an example of defining an arithmetic operation in Cirq, namely modular addition. This operation adds the value of the input register into the target register. More specifically, this operation acts on two qubit registers as
$$ |a\rangle_i |b\rangle_t \mapsto |a\rangle_i |a + b \text{ mod } N_t \rangle_t . $$
Here, the subscripts $i$ and $t$ denote <i>i</i>nput and <i>t</i>arget register, respectively, and $N_t$ is the dimension of the target register.
To define this operation, called Adder, we inherit from cirq.ArithmeticOperation and override the four methods shown below. The main method is the apply method which defines the arithmetic. Here, we simply state the expression as $a + b$ instead of the more accurate $a + b \text{ mod } N_t$ above -- the cirq.ArithmeticOperation class is able to deduce what we mean by simply $a + b$ since the operation must be reversible.
Step18: Now that we have the operation defined, we can use it in a circuit. The cell below creates two qubit registers, then sets the first register to be $|10\rangle$ (in binary) and the second register to be $|01\rangle$ (in binary) via $X$ gates. Then, we use the Adder operation, then measure all the qubits.
Since $10 + 01 = 11$ (in binary), we expect to measure $|11\rangle$ in the target register every time. Additionally, since we do not alter the input register, we expect to measure $|10\rangle$ in the input register every time. In short, the only bitstring we expect to measure is $1011$.
Step20: In the output of this code block, we first see the circuit which shows the initial $X$ gates, the Adder operation, then the final measurements. Next, we see the measurement outcomes which are all the bitstring $1011$ as expected.
It is also possible to see the unitary of the adder operation, which we do below. Here, we set the target register to be two qubits in the zero state, i.e. $|00\rangle$. We specify the input register as the integer one which corresponds to the qubit register $|01\rangle$.
Step23: We can understand this unitary as follows. The $i$th column of the unitary is the state $|i + 1 \text{ mod } 4\rangle$. For example, if we look at the $0$th column of the unitary, we see the state $|i + 1 \text{ mod } 4\rangle = |0 + 1 \text{ mod } 4\rangle = |1\rangle$. If we look at the $1$st column of the unitary, we see the state $|i + 1 \text{ mod } 4\rangle = |1 + 1 \text{ mod } 4\rangle = |2\rangle$. Similarly for the last two columns.
Modular exponential arithmetic operation
We can define the modular exponential arithmetic operation in a similar way to the simple addition arithmetic operation, shown below. For the purposes of understanding Shor's algorithm, the most important part of the following code block is the apply method which defines the arithmetic operation.
Step25: In the apply method, we see that we evaluate (target * base**exponent) % modulus. The target and the exponent depend on the values of the respective qubit registers, and the base and modulus are constant -- namely, the modulus is $n$ and the base is some $x \in \mathbb{Z}_n$.
The total number of qubits we will use is $3 (L + 1)$ where $L$ is the number of bits needed to store the integer $n$ to factor. The size of the unitary which implements the modular exponential is thus $4^{3(L + 1)}$. For a modest $n = 15$, the unitary requires storing $2^{30}$ floating point numbers in memory which is out of reach of most current standard laptops.
Step27: As with the simple adder operation, this modular exponential operation has a unitary which we can display (memory permitting) as follows.
Step30: Using the modular exponentional operation in a circuit
The quantum part of Shor's algorithm is just phase estimation with the unitary $U$ corresponding to the modular exponential operation. The following cell defines a function which creates the circuit for Shor's algorithm using the ModularExp operation we defined above.
Step32: Using this function, we can visualize the circuit for a given $x$ and $n$ as follows.
Step34: As previously described, we put the exponent register into an equal superposition via Hadamard gates. The $X$ gate on the last qubit in the target register is used for phase kickback. The modular exponential operation performs the sequence of controlled unitaries in phase estimation, then we apply the inverse quantum Fourier transform to the exponent register and measure to read out the result.
To illustrate the measurement results, we can sample from a smaller circuit. (Note that in practice we would never run Shor's algorithm with $n = 6$ because it is even. This is just an example to illustrate the measurement outcomes.)
Step36: We interpret each measured bitstring as an integer, but what do these integers tell us? In the next section we look at how to classically post-process to interpret them.
Classical post-processing
The integer we measure is close to $s / r$ where $r$ is the order of $x \in \mathbb{Z}_n$ and $0 \le s < r$ is an integer. We use the continued fractions algorithm to determine $r$ from $s / r$ then return it if the order finding circuit succeeded, else we return None.
Step38: The next code block shows an example of creating an order finding circuit, executing it, then using the classical postprocessing function to determine the order. Recall that the quantum part of the algorithm succeeds with some probability. If the order is None, try re-running the cell a few times.
Step40: You should see that the order of $x = 5$ in $\mathbb{Z}_6$ is $r = 2$. Indeed, $5^2 \text{ mod } 6 = 25 \text{ mod } 6 = 1$.
Quantum order finder
We can now define a streamlined function for the quantum version of order finding using the functions we have previously written. The quantum order finder below creates the circuit, executes it, and processes the measurement result.
Step44: This completes our quantum implementation of an order finder, and the quantum part of Shor's algorithm.
The complete factoring algorithm
We can use this quantum order finder (or the classical order finder) to complete Shor's algorithm. In the following code block, we add a few pre-processing steps which
Step47: The function find_factor uses the quantum_order_finder by default, in which case it is executing Shor's algorithm. As previously mentioned, due to the large memory requirements for classically simulating this circuit, we cannot run Shor's algorithm for $n \ge 15$. However, we can use the classical order finder as a substitute. | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The Cirq Developers
End of explanation
Install Cirq.
try:
import cirq
except ImportError:
print("installing cirq...")
!pip install --quiet cirq
print("installed cirq.")
Imports for the notebook.
import fractions
import math
import random
import numpy as np
import sympy
from typing import Callable, List, Optional, Sequence, Union
import cirq
Explanation: Shor's algorithm
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://quantumai.google/cirq/tutorials/shor"><img src="https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png" />View on QuantumAI</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/tutorials/shor.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/shor.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/github_logo_1x.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/Cirq/docs/tutorials/shor.ipynb"><img src="https://quantumai.google/site-assets/images/buttons/download_icon_1x.png" />Download notebook</a>
</td>
</table>
This tutorial presents a pedagogical demonstration of Shor's algorithm. It is a modified and expanded version of this Cirq example.
End of explanation
Function to compute the elements of Z_n.
def multiplicative_group(n: int) -> List[int]:
Returns the multiplicative group modulo n.
Args:
n: Modulus of the multiplicative group.
assert n > 1
group = [1]
for x in range(2, n):
if math.gcd(x, n) == 1:
group.append(x)
return group
Explanation: Order finding
Factoring an integer $n$ can be reduced to finding the period of the <i>modular exponential function</i> (to be defined). Finding this period can be accomplished (with high probability) by finding the <i>order</i> of a randomly chosen element of the multiplicative group modulo $n$.
Let $n$ be a positive integer and
$$ \mathbb{Z}_n := \{x \in \mathbb{Z}_+ : x < n \text{ and } \text{gcd}(x, n) = 1\} $$
be the multiplicative group modulo $n$.
Given $x \in \mathbb{Z}_n$, compute the smallest positive integer $r$ such that $x^r \text{ mod } n = 1$.
It can be shown from group/number theory that:
(1) Such an integer $r$ exists. (Note that $g^{|G|} = 1_G$ for any group $G$ with cardinality $|G|$ and element $g \in G$, but it's possible that $r < |G|$.)
(2) If $n = pq$ for primes $p$ and $q$, then $|\mathbb{Z}_n| = \phi(n) = (p - 1) (q - 1)$. (The function $\phi$ is called Euler's totient function.)
(3) The modular exponential function
$$ f_x(z) := x^z \mod n $$
is periodic with period $r$ (the order of the element $x \in \mathbb{Z}_n$). That is, $f_x(z + r) = f_x(z)$.
(4) If we know the period of the modular exponential function, we can (with high probability) figure out $p$ and $q$ -- that is, factor $n$.
As a refresher, we can visualize the elements of some multiplicative groups $\mathbb{Z}_n$ for integers $n$ via the following simple function.
End of explanation
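As a quick sanity check (an addition to the tutorial, not one of its own cells), the size of this group should match Euler's totient; sympy, imported above, confirms this for n = 15:
print(len(multiplicative_group(15)))  # 8
print(sympy.totient(15))              # phi(15) = (3 - 1) * (5 - 1) = 8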
Example of a multiplicative group.
n = 15
print(f"The multiplicative group modulo n = {n} is:")
print(multiplicative_group(n))
Explanation: For example, the multiplicative group modulo $n = 15$ is shown below.
End of explanation
Function for classically computing the order of an element of Z_n.
def classical_order_finder(x: int, n: int) -> Optional[int]:
Computes smallest positive r such that x**r mod n == 1.
Args:
x: Integer whose order is to be computed, must be greater than one
and belong to the multiplicative group of integers modulo n (which
consists of positive integers relatively prime to n),
n: Modulus of the multiplicative group.
Returns:
Smallest positive integer r such that x**r == 1 mod n.
Always succeeds (and hence never returns None).
Raises:
ValueError when x is 1 or not an element of the multiplicative
group of integers modulo n.
# Make sure x is both valid and in Z_n.
if x < 2 or x >= n or math.gcd(x, n) > 1:
raise ValueError(f"Invalid x={x} for modulus n={n}.")
# Determine the order.
r, y = 1, x
while y != 1:
y = (x * y) % n
r += 1
return r
Explanation: One can check that this set of elements indeed forms a group (under ordinary multiplication).
Classical order finding
A function for classically computing the order $r$ of an element $x \in \mathbb{Z}_n$ is provided below. This function simply computes the sequence
$$ x^2 \text{ mod } n $$
$$ x^3 \text{ mod } n $$
$$ x^4 \text{ mod } n $$
$$ \vdots $$
until an integer $r$ is found such that $x^r = 1 \text{ mod } n$. Since $|\mathbb{Z}_n| = \phi(n)$, this algorithm for order finding has time complexity $O(\phi(n))$ which is inefficient. (Roughly $O(2^{L / 2})$ where $L$ is the number of bits in $n$.)
End of explanation
Example of (classically) computing the order of an element.
n = 15 # The multiplicative group is [1, 2, 4, 7, 8, 11, 13, 14].
x = 8
r = classical_order_finder(x, n)
# Check that the order is indeed correct.
print(f"x^r mod n = {x}^{r} mod {n} = {x**r % n}")
Explanation: An example of computing $r$ for a given $x \in \mathbb{Z}_n$ and given $n$ is shown in the code block below.
End of explanation
Example of defining an arithmetic (quantum) operation in Cirq.
class Adder(cirq.ArithmeticOperation):
Quantum addition.
def __init__(self, target_register, input_register):
self.input_register = input_register
self.target_register = target_register
def registers(self):
return self.target_register, self.input_register
def with_registers(self, *new_registers):
return Adder(*new_registers)
def apply(self, target_value, input_value):
return target_value + input_value
Explanation: The quantum part of Shor's algorithm is order finding, but done via a quantum circuit, which we'll discuss below.
Quantum order finding
Quantum order finding is essentially quantum phase estimation with unitary $U$ that computes the modular exponential function $f_x(z)$ for some randomly chosen $x \in \mathbb{Z}_n$. The full details of how $U$ is computed in terms of elementary gates can be complex to unravel, especially on a first reading. In this tutorial, we'll use arithmetic operations in Cirq which can implement such a unitary $U$ without fully delving into the details of elementary gates.
Below we first show an example of a simple arithmetic operation in Cirq (addition) then discuss the operation we care about (modular exponentiation).
Quantum arithmetic operations in Cirq
Here we discuss an example of defining an arithmetic operation in Cirq, namely modular addition. This operation adds the value of the input register into the target register. More specifically, this operation acts on two qubit registers as
$$ |a\rangle_i |b\rangle_t \mapsto |a\rangle_i |a + b \text{ mod } N_t \rangle_t . $$
Here, the subscripts $i$ and $t$ denote <i>i</i>nput and <i>t</i>arget register, respectively, and $N_t$ is the dimension of the target register.
To define this operation, called Adder, we inherit from cirq.ArithmeticOperation and override the four methods shown below. The main method is the apply method which defines the arithmetic. Here, we simply state the expression as $a + b$ instead of the more accurate $a + b \text{ mod } N_t$ above -- the cirq.ArithmeticOperation class is able to deduce what we mean by simply $a + b$ since the operation must be reversible.
End of explanation
Example of using an Adder in a circuit.
# Two qubit registers.
qreg1 = cirq.LineQubit.range(2)
qreg2 = cirq.LineQubit.range(2, 4)
# Define the circuit.
circ = cirq.Circuit(
cirq.ops.X.on(qreg1[0]),
cirq.ops.X.on(qreg2[1]),
Adder(input_register=qreg1, target_register=qreg2),
cirq.measure_each(*qreg1),
cirq.measure_each(*qreg2)
)
# Display it.
print("Circuit:\n")
print(circ)
# Print the measurement outcomes.
print("\n\nMeasurement outcomes:\n")
print(cirq.sample(circ, repetitions=5).data)
Explanation: Now that we have the operation defined, we can use it in a circuit. The cell below creates two qubit registers, then sets the first register to be $|10\rangle$ (in binary) and the second register to be $|01\rangle$ (in binary) via $X$ gates. Then, we use the Adder operation, then measure all the qubits.
Since $10 + 01 = 11$ (in binary), we expect to measure $|11\rangle$ in the target register every time. Additionally, since we do not alter the input register, we expect to measure $|10\rangle$ in the input register every time. In short, the only bitstring we expect to measure is $1011$.
End of explanation
Example of the unitary of an Adder operation.
cirq.unitary(
Adder(target_register=cirq.LineQubit.range(2),
input_register=1)
).real
Explanation: In the output of this code block, we first see the circuit which shows the initial $X$ gates, the Adder operation, then the final measurements. Next, we see the measurement outcomes which are all the bitstring $1011$ as expected.
It is also possible to see the unitary of the adder operation, which we do below. Here, we set the target register to be two qubits in the zero state, i.e. $|00\rangle$. We specify the input register as the integer one which corresponds to the qubit register $|01\rangle$.
End of explanation
Defines the modular exponential operation used in Shor's algorithm.
class ModularExp(cirq.ArithmeticOperation):
Quantum modular exponentiation.
This class represents the unitary which multiplies base raised to exponent
into the target modulo the given modulus. More precisely, it represents the
unitary V which computes modular exponentiation x**e mod n:
V|y⟩|e⟩ = |y * x**e mod n⟩ |e⟩ 0 <= y < n
V|y⟩|e⟩ = |y⟩ |e⟩ n <= y
where y is the target register, e is the exponent register, x is the base
and n is the modulus. Consequently,
    V|y⟩|e⟩ = (U**e|y⟩)|e⟩
where U is the unitary defined as
U|y⟩ = |y * x mod n⟩ 0 <= y < n
U|y⟩ = |y⟩ n <= y
def __init__(
self,
target: Sequence[cirq.Qid],
exponent: Union[int, Sequence[cirq.Qid]],
base: int,
modulus: int
) -> None:
if len(target) < modulus.bit_length():
raise ValueError(f'Register with {len(target)} qubits is too small '
f'for modulus {modulus}')
self.target = target
self.exponent = exponent
self.base = base
self.modulus = modulus
def registers(self) -> Sequence[Union[int, Sequence[cirq.Qid]]]:
return self.target, self.exponent, self.base, self.modulus
def with_registers(
self,
*new_registers: Union[int, Sequence['cirq.Qid']],
) -> cirq.ArithmeticOperation:
if len(new_registers) != 4:
raise ValueError(f'Expected 4 registers (target, exponent, base, '
f'modulus), but got {len(new_registers)}')
target, exponent, base, modulus = new_registers
if not isinstance(target, Sequence):
raise ValueError(
f'Target must be a qubit register, got {type(target)}')
if not isinstance(base, int):
raise ValueError(
f'Base must be a classical constant, got {type(base)}')
if not isinstance(modulus, int):
raise ValueError(
f'Modulus must be a classical constant, got {type(modulus)}')
return ModularExp(target, exponent, base, modulus)
def apply(self, *register_values: int) -> int:
assert len(register_values) == 4
target, exponent, base, modulus = register_values
if target >= modulus:
return target
return (target * base**exponent) % modulus
def _circuit_diagram_info_(
self,
args: cirq.CircuitDiagramInfoArgs,
) -> cirq.CircuitDiagramInfo:
assert args.known_qubits is not None
wire_symbols: List[str] = []
t, e = 0, 0
for qubit in args.known_qubits:
if qubit in self.target:
if t == 0:
if isinstance(self.exponent, Sequence):
e_str = 'e'
else:
e_str = str(self.exponent)
wire_symbols.append(
f'ModularExp(t*{self.base}**{e_str} % {self.modulus})')
else:
wire_symbols.append('t' + str(t))
t += 1
if isinstance(self.exponent, Sequence) and qubit in self.exponent:
wire_symbols.append('e' + str(e))
e += 1
return cirq.CircuitDiagramInfo(wire_symbols=tuple(wire_symbols))
Explanation: We can understand this unitary as follows. The $i$th column of the unitary is the state $|i + 1 \text{ mod } 4\rangle$. For example, if we look at the $0$th column of the unitary, we see the state $|i + 1 \text{ mod } 4\rangle = |0 + 1 \text{ mod } 4\rangle = |1\rangle$. If we look at the $1$st column of the unitary, we see the state $|i + 1 \text{ mod } 4\rangle = |1 + 1 \text{ mod } 4\rangle = |2\rangle$. Similarly for the last two columns.
Modular exponential arithmetic operation
We can define the modular exponential arithmetic operation in a similar way to the simple addition arithmetic operation, shown below. For the purposes of understanding Shor's algorithm, the most important part of the following code block is the apply method which defines the arithmetic operation.
End of explanation
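As an illustrative check of that column structure (not part of the original tutorial), we can build the expected permutation matrix directly and compare it with the unitary computed above:
expected = np.zeros((4, 4))
for i in range(4):
    expected[(i + 1) % 4, i] = 1  # column i holds the state |i + 1 mod 4>
adder_unitary = cirq.unitary(
    Adder(target_register=cirq.LineQubit.range(2), input_register=1)
).real
print(np.allclose(adder_unitary, expected))  # True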
Create the target and exponent registers for phase estimation,
and see the number of qubits needed for Shor's algorithm.
n = 15
L = n.bit_length()
# The target register has L qubits.
target = cirq.LineQubit.range(L)
# The exponent register has 2L + 3 qubits.
exponent = cirq.LineQubit.range(L, 3 * L + 3)
# Display the total number of qubits to factor this n.
print(f"To factor n = {n} which has L = {L} bits, we need 3L + 3 = {3 * L + 3} qubits.")
Explanation: In the apply method, we see that we evaluate (target * base**exponent) % modulus. The target and the exponent depend on the values of the respective qubit registers, and the base and modulus are constant -- namely, the modulus is $n$ and the base is some $x \in \mathbb{Z}_n$.
The total number of qubits we will use is $3 (L + 1)$ where $L$ is the number of bits needed to store the integer $n$ to factor. The size of the unitary which implements the modular exponential is thus $4^{3(L + 1)}$. For a modest $n = 15$, the unitary requires storing $2^{30}$ floating point numbers in memory which is out of reach of most current standard laptops.
End of explanation
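To make that concrete, a rough back-of-the-envelope estimate (assuming one complex128 value of 16 bytes per matrix entry) shows why n = 15 is already at the edge of what a laptop can simulate:
n_qubits = 3 * (15).bit_length() + 3              # 3(L + 1) = 15 qubits for n = 15
unitary_entries = (2 ** n_qubits) ** 2            # 2**30 entries
print(f"{unitary_entries * 16 / 2**30:.0f} GiB")  # ~16 GiB just to hold the unitary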
See (part of) the unitary for a modular exponential operation.
# Pick some element of the multiplicative group modulo n.
x = 5
# Display (part of) the unitary. Uncomment if n is small enough.
# cirq.unitary(ModularExp(target, exponent, x, n))
Explanation: As with the simple adder operation, this modular exponential operation has a unitary which we can display (memory permitting) as follows.
End of explanation
Function to make the quantum circuit for order finding.
def make_order_finding_circuit(x: int, n: int) -> cirq.Circuit:
Returns quantum circuit which computes the order of x modulo n.
The circuit uses Quantum Phase Estimation to compute an eigenvalue of
the unitary
U|y⟩ = |y * x mod n⟩ 0 <= y < n
U|y⟩ = |y⟩ n <= y
Args:
x: positive integer whose order modulo n is to be found
n: modulus relative to which the order of x is to be found
Returns:
Quantum circuit for finding the order of x modulo n
L = n.bit_length()
target = cirq.LineQubit.range(L)
exponent = cirq.LineQubit.range(L, 3 * L + 3)
return cirq.Circuit(
cirq.X(target[L - 1]),
cirq.H.on_each(*exponent),
ModularExp(target, exponent, x, n),
cirq.qft(*exponent, inverse=True),
cirq.measure(*exponent, key='exponent'),
)
Explanation: Using the modular exponentional operation in a circuit
The quantum part of Shor's algorithm is just phase estimation with the unitary $U$ corresponding to the modular exponential operation. The following cell defines a function which creates the circuit for Shor's algorithm using the ModularExp operation we defined above.
End of explanation
Example of the quantum circuit for period finding.
n = 15
x = 7
circuit = make_order_finding_circuit(x, n)
print(circuit)
Explanation: Using this function, we can visualize the circuit for a given $x$ and $n$ as follows.
End of explanation
Measuring Shor's period finding circuit.
circuit = make_order_finding_circuit(x=5, n=6)
res = cirq.sample(circuit, repetitions=8)
print("Raw measurements:")
print(res)
print("\nInteger in exponent register:")
print(res.data)
Explanation: As previously described, we put the exponent register into an equal superposition via Hadamard gates. The $X$ gate on the last qubit in the target register is used for phase kickback. The modular exponential operation performs the sequence of controlled unitaries in phase estimation, then we apply the inverse quantum Fourier transform to the exponent register and measure to read out the result.
To illustrate the measurement results, we can sample from a smaller circuit. (Note that in practice we would never run Shor's algorithm with $n = 6$ because it is even. This is just an example to illustrate the measurement outcomes.)
End of explanation
def process_measurement(result: cirq.Result, x: int, n: int) -> Optional[int]:
Interprets the output of the order finding circuit.
Specifically, it determines s/r such that exp(2πis/r) is an eigenvalue
of the unitary
U|y⟩ = |xy mod n⟩ 0 <= y < n
U|y⟩ = |y⟩ n <= y
then computes r (by continued fractions) if possible, and returns it.
Args:
result: result obtained by sampling the output of the
circuit built by make_order_finding_circuit
Returns:
r, the order of x modulo n or None.
# Read the output integer of the exponent register.
exponent_as_integer = result.data["exponent"][0]
exponent_num_bits = result.measurements["exponent"].shape[1]
eigenphase = float(exponent_as_integer / 2**exponent_num_bits)
# Run the continued fractions algorithm to determine f = s / r.
f = fractions.Fraction.from_float(eigenphase).limit_denominator(n)
# If the numerator is zero, the order finder failed.
if f.numerator == 0:
return None
# Else, return the denominator if it is valid.
r = f.denominator
if x**r % n != 1:
return None
return r
Explanation: We interpret each measured bitstring as an integer, but what do these integers tell us? In the next section we look at how to classically post-process to interpret them.
Classical post-processing
The integer we measure is close to $s / r$ where $r$ is the order of $x \in \mathbb{Z}_n$ and $0 \le s < r$ is an integer. We use the continued fractions algorithm to determine $r$ from $s / r$ then return it if the order finding circuit succeeded, else we return None.
End of explanation
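A small worked example with made-up numbers (not actual circuit output) shows how the continued-fractions step recovers a candidate order: with an 8-bit exponent register, measuring the integer 85 gives a phase of 85/256 ≈ 0.332, and limiting the denominator to n recovers s/r = 1/3.
measured, n_bits, n_demo = 85, 8, 15
phase = measured / 2**n_bits
print(fractions.Fraction.from_float(phase).limit_denominator(n_demo))  # 1/3 -> candidate order r = 3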
Example of the classical post-processing.
# Set n and x here
n = 6
x = 5
print(f"Finding the order of x = {x} modulo n = {n}\n")
measurement = cirq.sample(circuit, repetitions=1)
print("Raw measurements:")
print(measurement)
print("\nInteger in exponent register:")
print(measurement.data)
r = process_measurement(measurement, x, n)
print("\nOrder r =", r)
if r is not None:
print(f"x^r mod n = {x}^{r} mod {n} = {x**r % n}")
Explanation: The next code block shows an example of creating an order finding circuit, executing it, then using the classical postprocessing function to determine the order. Recall that the quantum part of the algorithm succeeds with some probability. If the order is None, try re-running the cell a few times.
End of explanation
def quantum_order_finder(x: int, n: int) -> Optional[int]:
Computes smallest positive r such that x**r mod n == 1.
Args:
x: integer whose order is to be computed, must be greater than one
and belong to the multiplicative group of integers modulo n (which
consists of positive integers relatively prime to n),
n: modulus of the multiplicative group.
# Check that the integer x is a valid element of the multiplicative group
# modulo n.
if x < 2 or n <= x or math.gcd(x, n) > 1:
raise ValueError(f'Invalid x={x} for modulus n={n}.')
# Create the order finding circuit.
circuit = make_order_finding_circuit(x, n)
# Sample from the order finding circuit.
measurement = cirq.sample(circuit)
# Return the processed measurement result.
return process_measurement(measurement, x, n)
Explanation: You should see that the order of $x = 5$ in $\mathbb{Z}_6$ is $r = 2$. Indeed, $5^2 \text{ mod } 6 = 25 \text{ mod } 6 = 1$.
Quantum order finder
We can now define a streamlined function for the quantum version of order finding using the functions we have previously written. The quantum order finder below creates the circuit, executes it, and processes the measurement result.
End of explanation
Functions for factoring from start to finish.
def find_factor_of_prime_power(n: int) -> Optional[int]:
Returns non-trivial factor of n if n is a prime power, else None.
for k in range(2, math.floor(math.log2(n)) + 1):
c = math.pow(n, 1 / k)
c1 = math.floor(c)
if c1**k == n:
return c1
c2 = math.ceil(c)
if c2**k == n:
return c2
return None
def find_factor(
n: int,
order_finder: Callable[[int, int], Optional[int]] = quantum_order_finder,
max_attempts: int = 30
) -> Optional[int]:
Returns a non-trivial factor of composite integer n.
Args:
n: Integer to factor.
order_finder: Function for finding the order of elements of the
multiplicative group of integers modulo n.
max_attempts: number of random x's to try, also an upper limit
on the number of order_finder invocations.
Returns:
Non-trivial factor of n or None if no such factor was found.
Factor k of n is trivial if it is 1 or n.
# If the number is prime, there are no non-trivial factors.
if sympy.isprime(n):
print("n is prime!")
return None
# If the number is even, two is a non-trivial factor.
if n % 2 == 0:
return 2
# If n is a prime power, we can find a non-trivial factor efficiently.
c = find_factor_of_prime_power(n)
if c is not None:
return c
for _ in range(max_attempts):
# Choose a random number between 2 and n - 1.
x = random.randint(2, n - 1)
# Most likely x and n will be relatively prime.
c = math.gcd(x, n)
# If x and n are not relatively prime, we got lucky and found
# a non-trivial factor.
if 1 < c < n:
return c
# Compute the order r of x modulo n using the order finder.
r = order_finder(x, n)
# If the order finder failed, try again.
if r is None:
continue
        # If the order r is odd, try again.
if r % 2 != 0:
continue
# Compute the non-trivial factor.
y = x**(r // 2) % n
assert 1 < y < n
c = math.gcd(y - 1, n)
if 1 < c < n:
return c
print(f"Failed to find a non-trivial factor in {max_attempts} attempts.")
return None
Explanation: This completes our quantum implementation of an order finder, and the quantum part of Shor's algorithm.
The complete factoring algorithm
We can use this quantum order finder (or the classical order finder) to complete Shor's algorithm. In the following code block, we add a few pre-processing steps which:
(1) Check if $n$ is even,
(2) Check if $n$ is prime,
(3) Check if $n$ is a prime power,
all of which can be done efficiently with a classical computer. Additionally, we add the last necessary post-processing step which uses the order $r$ to compute a non-trivial factor $p$ of $n$. This is achieved by computing $y = x^{r / 2} \text{ mod } n$ (assuming $r$ is even), then computing $p = \text{gcd}(y - 1, n)$.
End of explanation
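A worked example of that post-processing step with standard textbook numbers: for n = 15 and x = 7 the order is r = 4, so y = 7**2 mod 15 = 4 and gcd(y - 1, n) = 3 is a non-trivial factor (gcd(y + 1, n) = 5 gives the other one).
n_ex, x_ex = 15, 7
r_ex = classical_order_finder(x_ex, n_ex)              # r = 4
y_ex = x_ex**(r_ex // 2) % n_ex                        # y = 4
print(math.gcd(y_ex - 1, n_ex), math.gcd(y_ex + 1, n_ex))  # 3 5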
Example of factoring via Shor's algorithm (order finding).
# Number to factor
n = 184573
# Attempt to find a factor
p = find_factor(n, order_finder=classical_order_finder)
q = n // p
print("Factoring n = pq =", n)
print("p =", p)
print("q =", q)
Check the answer is correct.
p * q == n
Explanation: The function find_factor uses the quantum_order_finder by default, in which case it is executing Shor's algorithm. As previously mentioned, due to the large memory requirements for classically simulating this circuit, we cannot run Shor's algorithm for $n \ge 15$. However, we can use the classical order finder as a substitute.
End of explanation
<END_TASK> |
78 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Logistic Regression with a Neural Network mindset
Welcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.
Instructions
Step1: 2 - Overview of the Problem set
Problem Statement
Step2: We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).
Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the index value and re-run to see other images.
Step3: Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
Exercise
Step4: Expected Output for m_train, m_test and num_px
Step5: Expected Output
Step7: <font color='blue'>
What you need to remember
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step14: Expected Output
Step16: Expected Output
Step17: Run the following cell to train your model.
Step18: Expected Output
Step19: Let's also plot the cost function and the gradients.
Step20: Interpretation
Step21: Interpretation | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
Explanation: Logistic Regression with a Neural Network mindset
Welcome to your first (required) programming assignment! You will build a logistic regression classifier to recognize cats. This assignment will step you through how to do this with a Neural Network mindset, and so will also hone your intuitions about deep learning.
Instructions:
- Do not use loops (for/while) in your code, unless the instructions explicitly ask you to do so.
You will learn to:
- Build the general architecture of a learning algorithm, including:
- Initializing parameters
- Calculating the cost function and its gradient
- Using an optimization algorithm (gradient descent)
- Gather all three functions above into a main model function, in the right order.
1 - Packages
First, let's run the cell below to import all the packages that you will need during this assignment.
- numpy is the fundamental package for scientific computing with Python.
- h5py is a common package to interact with a dataset that is stored on an H5 file.
- matplotlib is a famous library to plot graphs in Python.
- PIL and scipy are used here to test your model with your own picture at the end.
End of explanation
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
Explanation: 2 - Overview of the Problem set
Problem Statement: You are given a dataset ("data.h5") containing:
- a training set of m_train images labeled as cat (y=1) or non-cat (y=0)
- a test set of m_test images labeled as cat or non-cat
- each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Thus, each image is square (height = num_px) and (width = num_px).
You will build a simple image-recognition algorithm that can correctly classify pictures as cat or non-cat.
Let's get more familiar with the dataset. Load the data by running the following code.
End of explanation
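The lr_utils helper is not shown in this notebook; as a rough sketch only (the file names and HDF5 keys below are assumptions — adjust them to your copy of the dataset), a typical load_dataset reads the two H5 files with h5py along these lines:
def load_dataset_sketch():
    train = h5py.File("datasets/train_catvnoncat.h5", "r")
    test = h5py.File("datasets/test_catvnoncat.h5", "r")
    train_x = np.array(train["train_set_x"][:])             # image data
    train_y = np.array(train["train_set_y"][:]).reshape(1, -1)
    test_x = np.array(test["test_set_x"][:])
    test_y = np.array(test["test_set_y"][:]).reshape(1, -1)
    classes = np.array(test["list_classes"][:])
    return train_x, train_y, test_x, test_y, classes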
# Example of a picture
index = 25
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
Explanation: We added "_orig" at the end of image datasets (train and test) because we are going to preprocess them. After preprocessing, we will end up with train_set_x and test_set_x (the labels train_set_y and test_set_y don't need any preprocessing).
Each line of your train_set_x_orig and test_set_x_orig is an array representing an image. You can visualize an example by running the following code. Feel free also to change the index value and re-run to see other images.
End of explanation
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###
print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
Explanation: Many software bugs in deep learning come from having matrix/vector dimensions that don't fit. If you can keep your matrix/vector dimensions straight you will go a long way toward eliminating many bugs.
Exercise: Find the values for:
- m_train (number of training examples)
- m_test (number of test examples)
- num_px (= height = width of a training image)
Remember that train_set_x_orig is a numpy-array of shape (m_train, num_px, num_px, 3). For instance, you can access m_train by writing train_set_x_orig.shape[0].
End of explanation
# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T
### END CODE HERE ###
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
Explanation: Expected Output for m_train, m_test and num_px:
<table style="width:15%">
<tr>
<td>**m_train**</td>
<td> 209 </td>
</tr>
<tr>
<td>**m_test**</td>
<td> 50 </td>
</tr>
<tr>
<td>**num_px**</td>
<td> 64 </td>
</tr>
</table>
For convenience, you should now reshape images of shape (num_px, num_px, 3) into a numpy-array of shape (num_px $\times$ num_px $\times$ 3, 1). After this, our training (and test) dataset is a numpy-array where each column represents a flattened image. There should be m_train (respectively m_test) columns.
Exercise: Reshape the training and test data sets so that images of size (num_px, num_px, 3) are flattened into single vectors of shape (num_px $\times$ num_px $\times$ 3, 1).
A trick when you want to flatten a matrix X of shape (a,b,c,d) to a matrix X_flatten of shape (b $\times$ c $\times$ d, a) is to use:
python
X_flatten = X.reshape(X.shape[0], -1).T # X.T is the transpose of X
End of explanation
train_set_x = train_set_x_flatten/255.
test_set_x = test_set_x_flatten/255.
Explanation: Expected Output:
<table style="width:35%">
<tr>
<td>**train_set_x_flatten shape**</td>
<td> (12288, 209)</td>
</tr>
<tr>
<td>**train_set_y shape**</td>
<td>(1, 209)</td>
</tr>
<tr>
<td>**test_set_x_flatten shape**</td>
<td>(12288, 50)</td>
</tr>
<tr>
<td>**test_set_y shape**</td>
<td>(1, 50)</td>
</tr>
<tr>
<td>**sanity check after reshaping**</td>
<td>[17 31 56 22 33]</td>
</tr>
</table>
To represent color images, the red, green and blue channels (RGB) must be specified for each pixel, and so the pixel value is actually a vector of three numbers ranging from 0 to 255.
One common preprocessing step in machine learning is to center and standardize your dataset, meaning that you subtract the mean of the whole numpy array from each example, and then divide each example by the standard deviation of the whole numpy array. But for picture datasets, it is simpler and more convenient and works almost as well to just divide every row of the dataset by 255 (the maximum value of a pixel channel).
<!-- During the training of your model, you're going to multiply weights and add biases to some initial inputs in order to observe neuron activations. Then you backpropogate with the gradients to train the model. But, it is extremely important for each feature to have a similar range such that our gradients don't explode. You will see that more in detail later in the lectures. !-->
Let's standardize our dataset.
End of explanation
# GRADED FUNCTION: sigmoid
def sigmoid(z):
Compute the sigmoid of z
Arguments:
z -- A scalar or numpy array of any size.
Return:
s -- sigmoid(z)
### START CODE HERE ### (≈ 1 line of code)
    s = 1 / (1 + np.exp(-z))
### END CODE HERE ###
return s
print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2]))))
Explanation: <font color='blue'>
What you need to remember:
Common steps for pre-processing a new dataset are:
- Figure out the dimensions and shapes of the problem (m_train, m_test, num_px, ...)
- Reshape the datasets such that each example is now a vector of size (num_px * num_px * 3, 1)
- "Standardize" the data
3 - General Architecture of the learning algorithm
It's time to design a simple algorithm to distinguish cat images from non-cat images.
You will build a Logistic Regression, using a Neural Network mindset. The following Figure explains why Logistic Regression is actually a very simple Neural Network!
<img src="images/LogReg_kiank.png" style="width:650px;height:400px;">
Mathematical expression of the algorithm:
For one example $x^{(i)}$:
$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$
$$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$
$$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$
The cost is then computed by summing over all training examples:
$$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$
Key steps:
In this exercise, you will carry out the following steps:
- Initialize the parameters of the model
- Learn the parameters for the model by minimizing the cost
- Use the learned parameters to make predictions (on the test set)
- Analyse the results and conclude
4 - Building the parts of our algorithm ##
The main steps for building a Neural Network are:
1. Define the model structure (such as number of input features)
2. Initialize the model's parameters
3. Loop:
- Calculate current loss (forward propagation)
- Calculate current gradient (backward propagation)
- Update parameters (gradient descent)
You often build 1-3 separately and integrate them into one function we call model().
4.1 - Helper functions
Exercise: Using your code from "Python Basics", implement sigmoid(). As you've seen in the figure above, you need to compute $sigmoid( w^T x + b) = \frac{1}{1 + e^{-(w^T x + b)}}$ to make predictions. Use np.exp().
End of explanation
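A quick numeric illustration of the loss in equation (3) (not part of the assignment itself): a confident correct prediction costs little, while a confident wrong one costs a lot.
for a, y in [(0.9, 1), (0.1, 1)]:
    loss = -y * np.log(a) - (1 - y) * np.log(1 - a)
    print(f"a = {a}, y = {y}, loss = {loss:.3f}")   # 0.105 and 2.303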
# GRADED FUNCTION: initialize_with_zeros
def initialize_with_zeros(dim):
This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
Argument:
dim -- size of the w vector we want (or number of parameters in this case)
Returns:
w -- initialized vector of shape (dim, 1)
b -- initialized scalar (corresponds to the bias)
### START CODE HERE ### (≈ 1 line of code)
    w = np.zeros((dim, 1))
    b = 0
### END CODE HERE ###
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))
return w, b
dim = 2
w, b = initialize_with_zeros(dim)
print ("w = " + str(w))
print ("b = " + str(b))
Explanation: Expected Output:
<table>
<tr>
<td>**sigmoid([0, 2])**</td>
<td> [ 0.5 0.88079708]</td>
</tr>
</table>
4.2 - Initializing parameters
Exercise: Implement parameter initialization in the cell below. You have to initialize w as a vector of zeros. If you don't know what numpy function to use, look up np.zeros() in the Numpy library's documentation.
End of explanation
# GRADED FUNCTION: propagate
def propagate(w, b, X, Y):
Implement the cost function and its gradient for the propagation explained above
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
Return:
cost -- negative log-likelihood cost for logistic regression
dw -- gradient of the loss with respect to w, thus same shape as w
db -- gradient of the loss with respect to b, thus same shape as b
Tips:
- Write your code step by step for the propagation. np.log(), np.dot()
m = X.shape[1]
# FORWARD PROPAGATION (FROM X TO COST)
### START CODE HERE ### (≈ 2 lines of code)
    A = sigmoid(np.dot(w.T, X) + b)                                    # compute activation
    cost = -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / m        # compute cost
### END CODE HERE ###
# BACKWARD PROPAGATION (TO FIND GRAD)
### START CODE HERE ### (≈ 2 lines of code)
    dw = np.dot(X, (A - Y).T) / m
    db = np.sum(A - Y) / m
### END CODE HERE ###
assert(dw.shape == w.shape)
assert(db.dtype == float)
cost = np.squeeze(cost)
assert(cost.shape == ())
grads = {"dw": dw,
"db": db}
return grads, cost
w, b, X, Y = np.array([[1],[2]]), 2, np.array([[1,2],[3,4]]), np.array([[1,0]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))
Explanation: Expected Output:
<table style="width:15%">
<tr>
<td> ** w ** </td>
<td> [[ 0.]
[ 0.]] </td>
</tr>
<tr>
<td> ** b ** </td>
<td> 0 </td>
</tr>
</table>
For image inputs, w will be of shape (num_px $\times$ num_px $\times$ 3, 1).
4.3 - Forward and Backward propagation
Now that your parameters are initialized, you can do the "forward" and "backward" propagation steps for learning the parameters.
Exercise: Implement a function propagate() that computes the cost function and its gradient.
Hints:
Forward Propagation:
- You get X
- You compute $A = \sigma(w^T X + b) = (a^{(0)}, a^{(1)}, ..., a^{(m-1)}, a^{(m)})$
- You calculate the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})\right]$
Here are the two formulas you will be using:
$$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$
$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$
End of explanation
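Once propagate is implemented, a common sanity check (optional, not required by the assignment) is to compare the analytic gradient against a centered finite difference, for example for db:
eps = 1e-7
w_chk, b_chk = np.array([[1.], [2.]]), 2.0
X_chk, Y_chk = np.array([[1., 2.], [3., 4.]]), np.array([[1, 0]])
grads_chk, _ = propagate(w_chk, b_chk, X_chk, Y_chk)
_, cost_plus = propagate(w_chk, b_chk + eps, X_chk, Y_chk)
_, cost_minus = propagate(w_chk, b_chk - eps, X_chk, Y_chk)
print(grads_chk["db"], (cost_plus - cost_minus) / (2 * eps))  # should match closely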
# GRADED FUNCTION: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
This function optimizes w and b by running a gradient descent algorithm
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- True to print the loss every 100 steps
Returns:
params -- dictionary containing the weights w and bias b
grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
Tips:
You basically need to write down two steps and iterate through them:
1) Calculate the cost and the gradient for the current parameters. Use propagate().
2) Update the parameters using gradient descent rule for w and b.
costs = []
for i in range(num_iterations):
# Cost and gradient calculation (≈ 1-4 lines of code)
### START CODE HERE ###
        grads, cost = propagate(w, b, X, Y)
### END CODE HERE ###
# Retrieve derivatives from grads
dw = grads["dw"]
db = grads["db"]
# update rule (≈ 2 lines of code)
### START CODE HERE ###
        w = w - learning_rate * dw
        b = b - learning_rate * db
### END CODE HERE ###
# Record the costs
if i % 100 == 0:
costs.append(cost)
        # Print the cost every 100 training iterations
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, costs
params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False)
print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
Explanation: Expected Output:
<table style="width:50%">
<tr>
<td> ** dw ** </td>
<td> [[ 0.99993216]
[ 1.99980262]]</td>
</tr>
<tr>
<td> ** db ** </td>
<td> 0.499935230625 </td>
</tr>
<tr>
<td> ** cost ** </td>
<td> 6.000064773192205</td>
</tr>
</table>
d) Optimization
You have initialized your parameters.
You are also able to compute a cost function and its gradient.
Now, you want to update the parameters using gradient descent.
Exercise: Write down the optimization function. The goal is to learn $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate.
End of explanation
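To see the update rule in isolation (a toy illustration, unrelated to the assignment's data), gradient descent on f(θ) = θ² walks θ toward the minimum at 0:
theta, alpha = 3.0, 0.1
for _ in range(5):
    theta = theta - alpha * (2 * theta)   # derivative of theta**2 is 2*theta
print(theta)  # ~0.98 after five steps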
# GRADED FUNCTION: predict
def predict(w, b, X):
'''
Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Returns:
Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
'''
m = X.shape[1]
Y_prediction = np.zeros((1,m))
w = w.reshape(X.shape[0], 1)
# Compute vector "A" predicting the probabilities of a cat being present in the picture
### START CODE HERE ### (≈ 1 line of code)
    A = sigmoid(np.dot(w.T, X) + b)
### END CODE HERE ###
for i in range(A.shape[1]):
# Convert probabilities A[0,i] to actual predictions p[0,i]
### START CODE HERE ### (≈ 4 lines of code)
        Y_prediction[0, i] = 1 if A[0, i] > 0.5 else 0
### END CODE HERE ###
assert(Y_prediction.shape == (1, m))
return Y_prediction
print ("predictions = " + str(predict(w, b, X)))
Explanation: Expected Output:
<table style="width:40%">
<tr>
<td> **w** </td>
<td>[[ 0.1124579 ]
[ 0.23106775]] </td>
</tr>
<tr>
<td> **b** </td>
<td> 1.55930492484 </td>
</tr>
<tr>
<td> **dw** </td>
<td> [[ 0.90158428]
[ 1.76250842]] </td>
</tr>
<tr>
<td> **db** </td>
<td> 0.430462071679 </td>
</tr>
</table>
Exercise: The previous function will output the learned w and b. We are able to use w and b to predict the labels for a dataset X. Implement the predict() function. There is two steps to computing predictions:
Calculate $\hat{Y} = A = \sigma(w^T X + b)$
Convert the entries of a into 0 (if activation <= 0.5) or 1 (if activation > 0.5), stores the predictions in a vector Y_prediction. If you wish, you can use an if/else statement in a for loop (though there is also a way to vectorize this).
End of explanation
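For reference, the vectorized version hinted at above can replace the loop entirely (same result for the w, b, X used in the test cell):
A_vec = sigmoid(np.dot(w.T, X) + b)
print((A_vec > 0.5).astype(float))   # [[1. 1.]]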
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
Builds the logistic regression model by calling the function you've implemented previously
Arguments:
X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
print_cost -- Set to true to print the cost every 100 iterations
Returns:
d -- dictionary containing information about the model.
### START CODE HERE ###
# initialize parameters with zeros (≈ 1 line of code)
    w, b = initialize_with_zeros(X_train.shape[0])
# Gradient descent (≈ 1 line of code)
    parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)
# Retrieve parameters w and b from dictionary "parameters"
w = parameters["w"]
b = parameters["b"]
# Predict test/train set examples (≈ 2 lines of code)
    Y_prediction_test = predict(w, b, X_test)
    Y_prediction_train = predict(w, b, X_train)
### END CODE HERE ###
# Print train/test Errors
print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
d = {"costs": costs,
"Y_prediction_test": Y_prediction_test,
"Y_prediction_train" : Y_prediction_train,
"w" : w,
"b" : b,
"learning_rate" : learning_rate,
"num_iterations": num_iterations}
return d
Explanation: Expected Output:
<table style="width:30%">
<tr>
<td>
**predictions**
</td>
<td>
[[ 1. 1.]]
</td>
</tr>
</table>
<font color='blue'>
What to remember:
You've implemented several functions that:
- Initialize (w,b)
- Optimize the loss iteratively to learn parameters (w,b):
- computing the cost and its gradient
- updating the parameters using gradient descent
- Use the learned (w,b) to predict the labels for a given set of examples
5 - Merge all functions into a model
You will now see how the overall model is structured by putting together all the building blocks (functions implemented in the previous parts) together, in the right order.
Exercise: Implement the model function. Use the following notation:
- Y_prediction for your predictions on the test set
- Y_prediction_train for your predictions on the train set
- w, costs, grads for the outputs of optimize()
End of explanation
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
Explanation: Run the following cell to train your model.
End of explanation
# Example of a picture that was wrongly classified.
index = 1
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print ("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[d["Y_prediction_test"][0,index]].decode("utf-8") + "\" picture.")
Explanation: Expected Output:
<table style="width:40%">
<tr>
<td> **Train Accuracy** </td>
<td> 99.04306220095694 % </td>
</tr>
<tr>
<td>**Test Accuracy** </td>
<td> 70.0 % </td>
</tr>
</table>
Comment: Training accuracy is close to 100%. This is a good sanity check: your model is working and has high enough capacity to fit the training data. Test error is 68%. It is actually not bad for this simple model, given the small dataset we used and that logistic regression is a linear classifier. But no worries, you'll build an even better classifier next week!
Also, you see that the model is clearly overfitting the training data. Later in this specialization you will learn how to reduce overfitting, for example by using regularization. Using the code below (and changing the index variable) you can look at predictions on pictures of the test set.
End of explanation
# Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()
Explanation: Let's also plot the cost function and the gradients.
End of explanation
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
print ("learning rate is: " + str(i))
models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
print ('\n' + "-------------------------------------------------------" + '\n')
for i in learning_rates:
plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"]))
plt.ylabel('cost')
plt.xlabel('iterations')
legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
Explanation: Interpretation:
You can see the cost decreasing. It shows that the parameters are being learned. However, you see that you could train the model even more on the training set. Try to increase the number of iterations in the cell above and rerun the cells. You might see that the training set accuracy goes up, but the test set accuracy goes down. This is called overfitting.
6 - Further analysis (optional/ungraded exercise)
Congratulations on building your first image classification model. Let's analyze it further, and examine possible choices for the learning rate $\alpha$.
Choice of learning rate
Reminder:
In order for Gradient Descent to work you must choose the learning rate wisely. The learning rate $\alpha$ determines how rapidly we update the parameters. If the learning rate is too large we may "overshoot" the optimal value. Similarly, if it is too small we will need too many iterations to converge to the best values. That's why it is crucial to use a well-tuned learning rate.
Let's compare the learning curve of our model with several choices of learning rates. Run the cell below. This should take about 1 minute. Feel free also to try different values than the three we have initialized the learning_rates variable to contain, and see what happens.
End of explanation
## START CODE HERE ## (PUT YOUR IMAGE NAME)
my_image = "my_image.jpg" # change this to the name of your image file
## END CODE HERE ##
# We preprocess the image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((1, num_px*num_px*3)).T
my_predicted_image = predict(d["w"], d["b"], my_image)
plt.imshow(image)
print("y = " + str(np.squeeze(my_predicted_image)) + ", your algorithm predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
Explanation: Interpretation:
- Different learning rates give different costs and thus different predictions results.
- If the learning rate is too large (0.01), the cost may oscillate up and down. It may even diverge (though in this example, using 0.01 still eventually ends up at a good value for the cost).
- A lower cost doesn't mean a better model. You have to check if there is possibly overfitting. It happens when the training accuracy is a lot higher than the test accuracy.
- In deep learning, we usually recommend that you:
- Choose the learning rate that better minimizes the cost function.
- If your model overfits, use other techniques to reduce overfitting. (We'll talk about this in later videos.)
7 - Test with your own image (optional/ungraded exercise)
Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Change your image's name in the following code
4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
End of explanation
<END_TASK> |
79 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Visualizing output from the Mass Balance workflow
This notebook is designed to work with output from the Mass Balance workflow [iceflow] developed during Geohackweek2016 at the University of Washington (https
Step1: Set file names and directories
Step2: Plot the glacier outlines based on their specific mass balance | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import matplotlib.pyplot as plt
# The two statements below are used mainly to set up a plotting
# default style that's better than the default from matplotlib
#import seaborn as sns
plt.style.use('bmh')
from shapely.geometry import Point
#import pandas as pd
import geopandas as gpd
from geopandas import GeoSeries, GeoDataFrame
Explanation: Visualizing output from the Mass Balance workflow
This notebook is designed to work with output from the Mass Balance workflow [iceflow] developed during Geohackweek2016 at the University of Washington (https://github.com/dshean/iceflow).
1. Viewing the specific mass balance of glacier polygons
Set up the environment
This notebook requires the following packages:
matplotlib
shapely
geopandas
End of explanation
file_pth = 'rgi_centralasia/13_rgi32_CentralAsia.shp'
rgi_glac = gpd.read_file(file_pth)
timeframe='[time between DEMs]'
rgi_glac.head()
Explanation: Set file names and directories
End of explanation
# test data set-up
gdf = rgi_glac
gdf.plot()
# test data set-up
import random
# 15 random test values for a specific mass balance column ('xrange' replaced by 'range' so this also runs on Python 3)
my_randoms = random.sample(range(-50, 50), 15)
gdf["spec"] = my_randoms
gdf.to_file("rgi_test.shp")
f, ax = plt.subplots(1, figsize=(6, 4))
# NOTE: '[spec mb]' is a placeholder; replace it with the name of the specific mass balance
# column in your GeoDataFrame (e.g. 'spec' from the test set-up above)
rgi_glac.plot(column='[spec mb]', scheme='fisher_jenks', k=7,
              alpha=0.9, cmap=plt.cm.Blues, legend=True, ax=ax)
plt.axis('equal')
ax.set_title('Specific Mass Balance'+timeframe)
Explanation: Plot the glacier outlines based on their specific mass balance
End of explanation
<END_TASK> |
80 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Comparing Different Stream Environments
This Jupyter Notebook compares four streams in different environments in the U.S.
Using hydrofunctions, we are able to plot the flow duration graphs for all four streams and compare them.
Step1: Choose four streams from different environments from HydroCloud. Import data for three years.
In this example, all four streams are in places with low development | <ASSISTANT_TASK:>
Python Code:
import hydrofunctions as hf
%matplotlib inline
Explanation: Comparing Different Stream Environments
This Jupyter Notebook compares four streams in different environments in the U.S.
Using hydrofunctions, we are able to plot the flow duration graphs for all four streams and compare them.
End of explanation
streams = ['09073400','11480390','01074520','09498502']
sites = hf.NWIS(streams, 'dv', start_date='2001-01-01', end_date='2003-12-31')
sites
#Create a dataframe of the four sites
Q = sites.df('discharge')
#Show the first few lines of the dataframe
Q.head()
# rename the columns based on the names of the sites from HydroCloud
Q.columns=['White Mountains National Park', 'White River National Forest', 'Tonto National Forest', 'Mendocino National Park']
# show the first few rows of the data to confirm the changes
Q.head()
#use the built-in functions from hydrofunctions to create a flow duration graph for the dataframe.
hf.flow_duration(Q)
#Pull the stats for each of the four sites.
Q.describe()
Explanation: Choose four streams from different environments from HydroCloud. Import data for three years.
In this example, all four streams are in places with low development:
Colorado Western Slopes: ROARING FORK RIVER NEAR ASPEN, CO.
California Mendocino National Park: MAD R AB RUTH RES NR FOREST GLEN CA
White Mountains, NH: EAST BRANCH PEMIGEWASSET RIVER AT LINCOLN, NH
PINTO CREEK NEAR MIAMI, AZ
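As an aside, the flow-duration (exceedance) curve drawn by hf.flow_duration can also be computed by hand; the sketch below is illustrative only and assumes the Q discharge DataFrame built above, with numpy and matplotlib available:
# Manual flow-duration curve for the first site in Q (illustration only)
import numpy as np
import matplotlib.pyplot as plt
site = Q.columns[0]
flows = Q[site].dropna().sort_values(ascending=False).values
exceedance = np.arange(1, len(flows) + 1) / float(len(flows))  # fraction of time each flow is equaled or exceeded
plt.plot(exceedance, flows)
plt.yscale('log')
plt.xlabel('Fraction of time flow is exceeded')
plt.ylabel('Discharge')
plt.title(site)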
End of explanation
<END_TASK> |
81 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Take the set of pings, make sure we have actual clientIds and remove duplicate pings. We collect each unique ping.
Step1: Transform and sanitize the pings into arrays.
Step2: Create a set of pings from "core" to build a set of core client data. Output the data to CSV or Parquet.
This script is designed to loop over a range of days and output a single day for the given channels. Use explicit date ranges for backfilling, or now() - '1day' for automated runs. | <ASSISTANT_TASK:>
Python Code:
def dedupe_pings(rdd):
return rdd.filter(lambda p: p["meta/clientId"] is not None)\
.map(lambda p: (p["meta/documentId"], p))\
.reduceByKey(lambda x, y: x)\
.map(lambda x: x[1])
Explanation: Take the set of pings, make sure we have actual clientIds and remove duplicate pings. We collect each unique ping.
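For intuition, the same de-duplication can be sketched in plain Python without Spark, keeping the first ping seen per documentId just like the reduceByKey(lambda x, y: x) call above (illustration only):
# Plain-Python analogue of dedupe_pings (illustration only)
def dedupe_pings_local(pings):
    seen = {}
    for p in pings:
        if p.get("meta/clientId") is None:
            continue                              # drop pings without a clientId
        seen.setdefault(p["meta/documentId"], p)  # keep the first ping per documentId
    return list(seen.values())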
End of explanation
def transform(ping):
# Should not be None since we filter those out.
clientId = ping["meta/clientId"]
# Added via the ingestion process so should not be None.
submissionDate = dt.datetime.strptime(ping["meta/submissionDate"], "%Y%m%d")
geoCountry = ping["meta/geoCountry"]
profileDate = None
profileDaynum = ping["profileDate"]
if profileDaynum is not None:
try:
# Bad data could push profileDaynum > 32767 (size of a C int) and throw exception
profileDate = dt.datetime(1970, 1, 1) + dt.timedelta(int(profileDaynum))
except:
profileDate = None
# Create date should already be in ISO format
creationDate = ping["creationDate"]
if creationDate is not None:
# This is only accurate because we know the creation date is always in 'Z' (zulu) time.
creationDate = dt.datetime.strptime(ping["creationDate"], "%Y-%m-%dT%H:%M:%S.%fZ")
appVersion = ping["meta/appVersion"]
buildId = ping["meta/appBuildId"]
locale = ping["locale"]
os = ping["os"]
osVersion = ping["osversion"]
device = ping["device"]
arch = ping["arch"]
defaultSearch = ping["defaultSearch"]
distributionId = ping["distributionId"]
experiments = ping["experiments"]
if experiments is None:
experiments = []
#bug 1315028
defaultNewTabExperience = ping["defaultNewTabExperience"]
defaultMailClient = ping["defaultMailClient"]
#bug 1307419
searches = ping["searches"]
durations = ping["durations"]
sessions = ping["sessions"]
return [clientId, submissionDate, creationDate, profileDate, geoCountry, locale, os,
osVersion, buildId, appVersion, device, arch, defaultSearch, distributionId,
json.dumps(experiments), defaultNewTabExperience, defaultMailClient, searches,
durations, sessions]
Explanation: Transform and sanitize the pings into arrays.
End of explanation
channels = ["nightly", "aurora", "beta", "release"]
batch_date = os.environ.get('date')
if batch_date:
start = end = dt.datetime.strptime(batch_date, '%Y%m%d')
else:
start = dt.datetime.now() - dt.timedelta(1)
end = dt.datetime.now() - dt.timedelta(1)
day = start
while day <= end:
for channel in channels:
print "\nchannel: " + channel + ", date: " + day.strftime("%Y%m%d")
kwargs = dict(
doc_type="core",
submission_date=(day.strftime("%Y%m%d"), day.strftime("%Y%m%d")),
channel=channel,
app="Fennec",
fraction=1
)
# Grab all available source_version pings
pings = get_pings(sc, source_version="*", **kwargs)
subset = get_pings_properties(pings, ["meta/clientId",
"meta/documentId",
"meta/submissionDate",
"meta/appVersion",
"meta/appBuildId",
"meta/geoCountry",
"locale",
"os",
"osversion",
"device",
"arch",
"profileDate",
"creationDate",
"defaultSearch",
"distributionId",
"experiments",
"defaultNewTabExperience",
"defaultMailClient",
"searches",
"durations",
"sessions"])
subset = dedupe_pings(subset)
print "\nDe-duped pings:" + str(subset.count())
print subset.first()
transformed = subset.map(transform)
print "\nTransformed pings:" + str(transformed.count())
print transformed.first()
s3_output = "s3n://net-mozaws-prod-us-west-2-pipeline-analysis/mobile/mobile_clients"
s3_output += "/v1/channel=" + channel + "/submission=" + day.strftime("%Y%m%d")
schema = StructType([
StructField("clientid", StringType(), False),
StructField("submissiondate", TimestampType(), False),
StructField("creationdate", TimestampType(), True),
StructField("profiledate", TimestampType(), True),
StructField("geocountry", StringType(), True),
StructField("locale", StringType(), True),
StructField("os", StringType(), True),
StructField("osversion", StringType(), True),
StructField("buildid", StringType(), True),
StructField("appversion", StringType(), True),
StructField("device", StringType(), True),
StructField("arch", StringType(), True),
StructField("defaultsearch", StringType(), True),
StructField("distributionid", StringType(), True),
StructField("experiments", StringType(), True),
StructField("defaultNewTabExperience", StringType(), True),
StructField("defaultMailClient", StringType(), True),
StructField("searches", StringType(), True),
StructField("durations", StringType(), True),
StructField("sessions", StringType(), True)
])
# Make parquet parition file size large, but not too large for s3 to handle
coalesce = 1
if channel == "release":
coalesce = 4
grouped = sqlContext.createDataFrame(transformed, schema)
grouped.coalesce(coalesce).write.mode('overwrite').parquet(s3_output)
day += dt.timedelta(1)
Explanation: Create a set of pings from "core" to build a set of core client data. Output the data to CSV or Parquet.
This script is designed to loop over a range of days and output a single day for the given channels. Use explicit date ranges for backfilling, or now() - '1day' for automated runs.
End of explanation
<END_TASK> |
82 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Workshop Installation Guide
A tutorial series on how to use and develop a WeChat chat-bot
A workshop to develop & use an intelligent and interactive chat-bot in WeChat
WeChat is a popular social media app, which has more than 800 million monthly active users.
<img src='http
Step1: Download and install WeChat API-2
Step2: Housekeeping after installation
Step3: If above importing has no error, then installation is successful.
You are now ready to rock! Go to folder | <ASSISTANT_TASK:>
Python Code:
!python --version
!pip install -U html
!pip install -U pyqrcode
!pip install -U config
!pip install -U backports.tempfile
!mv docs org_docs
Explanation: Workshop Installation Guide
如何使用和开发微信聊天机器人的系列教程
A workshop to develop & use an intelligent and interactive chat-bot in WeChat
WeChat is a popular social media app, which has more than 800 million monthly active users.
<img src='http://www.kudosdata.com/wp-content/uploads/2016/11/cropped-KudosLogo1.png' width=30% style="float: right;">
<img src='wechat_tool/reference/WeChat_SamGu_QR.png' width=10% style="float: right;">
http://www.KudosData.com
by: Sam.Gu@KudosData.com
April 2017 ============= Scan the QR code to become trainer's friend in WeChat ===========>>
Option 1: Use Cloud Platform (Difficulty level: Easy, like being a boss)
End of explanation
!yes | pip uninstall itchat
!rm -rf ItChat
!git clone https://github.com/telescopeuser/ItChat.git
!cp -r ItChat/* .
!python setup.py install
Explanation: Download and install WeChat API-2
End of explanation
!rm -rf itchat
!rm -rf ItChat
!rm -rf wxpy
!rm -rf README*
!rm -rf LICENSE
!rm -rf MANIFEST*
!rm -rf mkdocs*
!rm -rf build
!rm -rf dist
!rm -rf docs*
!rm -rf requirements.txt
!rm -rf setup.py
!rm -rf *.egg-info
!mv org_docs docs
!pip install -U google-api-python-client
!pip install -U gTTS
!apt-get update -y
!apt-get install libav-tools -y --allow-unauthenticated
!avconv -version
Explanation: Housekeeping after installation
End of explanation
print('')
print('+-------------------------------------------------------------------------------------------------+')
print('| www.KudosData.com: Google Cloud Datalab Python 2 setup successful! |')
print('| You are now ready to rock! Go to folder: workshop_blog/wechat_tool, open Notebook and follow... |')
print('+-------------------------------------------------------------------------------------------------+')
Explanation: If above importing has no error, then installation is successful.
You are now ready to rock! Go to folder: workshop_blog/wechat_tool, open Notebook and follow...
<img src='./wechat_tool/reference/setup_ref_01.png' width=100% style="float: left;">
End of explanation
<END_TASK> |
83 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Introduction
In the Intro to SQL micro-course, you learned how to use INNER JOIN to consolidate information from two different tables. Now you'll learn about a few more types of JOIN, along with how to use UNIONs to pull information from multiple tables.
Along the way, we'll work with two imaginary tables, called owners and pets.
Each row of the owners table identifies a different pet owner, where the ID column is a unique identifier. The Pet_ID column (in the owners table) contains the ID for the pet that belongs to the owner (this number matches the ID for the pet from the pets table).
For example,
- the pets table shows that Dr. Harris Bonkers is the pet with ID 1.
- The owners table shows that Aubrey Little is the owner of the pet with ID 1.
Putting these two facts together, Dr. Harris Bonkers is owned by Aubrey Little. Likewise, since Veronica Dunn does not have a corresponding Pet_ID, she does not have a pet. And, since 5 does not appear in the Pet_ID column, Maisie does not have an owner.
JOINs
Recall that we can use an INNER JOIN to pull rows from both tables where the value in the Pet_ID column in the owners table has a match in the ID column of the pets table.
In this case, Veronica Dunn and Maisie are not included in the results. But what if we instead want to create a table containing all pets, regardless of whether they have owners? Or, what if we want to combine all of the rows in both tables? In these cases, we need only use a different type of JOIN.
For instance, to create a table containing all rows from the owners table, we use a LEFT JOIN. In this case, "left" refers to the table that appears before the JOIN in the query. ("Right" refers to the table that is after the JOIN.)
Replacing INNER JOIN in the query above with LEFT JOIN returns all rows where the two tables have matching entries, along with all of the rows in the left table (whether there is a match or not).
If we instead use a RIGHT JOIN, we get the matching rows, along with all rows in the right table (whether there is a match or not).
Finally, a FULL JOIN returns all rows from both tables. Note that in general, any row that does not have a match in both tables will have NULL entries for the missing values. You can see this in the image below.
UNIONs
As you've seen, JOINs horizontally combine results from different tables. If you instead would like to vertically concatenate columns, you can do so with a UNION. The example query below combines the Age columns from both tables.
Note that with a UNION, the data types of both columns must be the same, but the column names can be different. (So, for instance, we cannot take the UNION of the Age column from the owners table and the Pet_Name column from the pets table.)
We use UNION ALL to include duplicate values - you'll notice that 9 appears in both the owners table and the pets table, and shows up twice in the concatenated results. If you'd like to drop duplicate values, you need only change UNION ALL in the query to UNION DISTINCT.
Example
We'll work with the Hacker News dataset. We begin by reviewing the first several rows of the comments table. (The corresponding code is hidden, but you can un-hide it by clicking on the "Code" button below.)
Step1: You'll also work with the stories table.
Step3: Since you are already familiar with JOINs from the Intro to SQL micro-course, we'll work with a relatively complex example of a JOIN that uses a common table expression (CTE).
The query below pulls information from the stories and comments tables to create a table showing all stories posted on January 1, 2012, along with the corresponding number of comments. We use a LEFT JOIN so that the results include stories that didn't receive any comments.
Step4: Since the results are ordered by the num_comments column, stories without comments appear at the end of the DataFrame. (Remember that NaN stands for "not a number".)
Step6: Next, we write a query to select all usernames corresponding to users who wrote stories or comments on January 1, 2014. We use UNION DISTINCT (instead of UNION ALL) to ensure that each user appears in the table at most once.
Step7: To get the number of users who posted on January 1, 2014, we need only take the length of the DataFrame. | <ASSISTANT_TASK:>
Python Code:
#$HIDE_INPUT$
from google.cloud import bigquery
# Create a "Client" object
client = bigquery.Client()
# Construct a reference to the "hacker_news" dataset
dataset_ref = client.dataset("hacker_news", project="bigquery-public-data")
# API request - fetch the dataset
dataset = client.get_dataset(dataset_ref)
# Construct a reference to the "comments" table
table_ref = dataset_ref.table("comments")
# API request - fetch the table
table = client.get_table(table_ref)
# Preview the first five lines of the table
client.list_rows(table, max_results=5).to_dataframe()
Explanation: Introduction
In the Intro to SQL micro-course, you learned how to use INNER JOIN to consolidate information from two different tables. Now you'll learn about a few more types of JOIN, along with how to use UNIONs to pull information from multiple tables.
Along the way, we'll work with two imaginary tables, called owners and pets.
Each row of the owners table identifies a different pet owner, where the ID column is a unique identifier. The Pet_ID column (in the owners table) contains the ID for the pet that belongs to the owner (this number matches the ID for the pet from the pets table).
For example,
- the pets table shows that Dr. Harris Bonkers is the pet with ID 1.
- The owners table shows that Aubrey Little is the owner of the pet with ID 1.
Putting these two facts together, Dr. Harris Bonkers is owned by Aubrey Little. Likewise, since Veronica Dunn does not have a corresponding Pet_ID, she does not have a pet. And, since 5 does not appear in the Pet_ID column, Maisie does not have an owner.
JOINs
Recall that we can use an INNER JOIN to pull rows from both tables where the value in the Pet_ID column in the owners table has a match in the ID column of the pets table.
In this case, Veronica Dunn and Maisie are not included in the results. But what if we instead want to create a table containing all pets, regardless of whether they have owners? Or, what if we want to combine all of the rows in both tables? In these cases, we need only use a different type of JOIN.
For instance, to create a table containing all rows from the owners table, we use a LEFT JOIN. In this case, "left" refers to the table that appears before the JOIN in the query. ("Right" refers to the table that is after the JOIN.)
Replacing INNER JOIN in the query above with LEFT JOIN returns all rows where the two tables have matching entries, along with all of the rows in the left table (whether there is a match or not).
If we instead use a RIGHT JOIN, we get the matching rows, along with all rows in the right table (whether there is a match or not).
Finally, a FULL JOIN returns all rows from both tables. Note that in general, any row that does not have a match in both tables will have NULL entries for the missing values. You can see this in the image below.
UNIONs
As you've seen, JOINs horizontally combine results from different tables. If you instead would like to vertically concatenate columns, you can do so with a UNION. The example query below combines the Age columns from both tables.
Note that with a UNION, the data types of both columns must be the same, but the column names can be different. (So, for instance, we cannot take the UNION of the Age column from the owners table and the Pet_Name column from the pets table.)
We use UNION ALL to include duplicate values - you'll notice that 9 appears in both the owners table and the pets table, and shows up twice in the concatenated results. If you'd like to drop duplicate values, you need only change UNION ALL in the query to UNION DISTINCT.
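For reference, the queries described above would look roughly like the sketches below. The owners and pets tables are imaginary, and the column names (Name, Pet_ID, ID, Age) are inferred from the description, so treat these as illustrations rather than runnable queries.
-- all owners, with the pet's name where one exists
SELECT o.Name AS Owner_Name, p.Name AS Pet_Name
FROM owners AS o
LEFT JOIN pets AS p
    ON o.Pet_ID = p.ID
-- stack the two Age columns (UNION DISTINCT would drop the duplicated 9)
SELECT Age FROM owners
UNION ALL
SELECT Age FROM pets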
Example
We'll work with the Hacker News dataset. We begin by reviewing the first several rows of the comments table. (The corresponding code is hidden, but you can un-hide it by clicking on the "Code" button below.)
End of explanation
# Construct a reference to the "stories" table
table_ref = dataset_ref.table("stories")
# API request - fetch the table
table = client.get_table(table_ref)
# Preview the first five lines of the table
client.list_rows(table, max_results=5).to_dataframe()
Explanation: You'll also work with the stories table.
End of explanation
# Query to select all stories posted on January 1, 2012, with number of comments
join_query = """
WITH c AS
(
SELECT parent, COUNT(*) as num_comments
FROM `bigquery-public-data.hacker_news.comments`
GROUP BY parent
)
SELECT s.id as story_id, s.by, s.title, c.num_comments
FROM `bigquery-public-data.hacker_news.stories` AS s
LEFT JOIN c
ON s.id = c.parent
WHERE EXTRACT(DATE FROM s.time_ts) = '2012-01-01'
ORDER BY c.num_comments DESC
"""
# Run the query, and return a pandas DataFrame
join_result = client.query(join_query).result().to_dataframe()
join_result.head()
Explanation: Since you are already familiar with JOINs from the Intro to SQL micro-course, we'll work with a relatively complex example of a JOIN that uses a common table expression (CTE).
The query below pulls information from the stories and comments tables to create a table showing all stories posted on January 1, 2012, along with the corresponding number of comments. We use a LEFT JOIN so that the results include stories that didn't receive any comments.
End of explanation
# None of these stories received any comments
join_result.tail()
Explanation: Since the results are ordered by the num_comments column, stories without comments appear at the end of the DataFrame. (Remember that NaN stands for "not a number".)
End of explanation
# Query to select all users who posted stories or comments on January 1, 2014
union_query = """
SELECT c.by
FROM `bigquery-public-data.hacker_news.comments` AS c
WHERE EXTRACT(DATE FROM c.time_ts) = '2014-01-01'
UNION DISTINCT
SELECT s.by
FROM `bigquery-public-data.hacker_news.stories` AS s
WHERE EXTRACT(DATE FROM s.time_ts) = '2014-01-01'
"""
# Run the query, and return a pandas DataFrame
union_result = client.query(union_query).result().to_dataframe()
union_result.head()
Explanation: Next, we write a query to select all usernames corresponding to users who wrote stories or comments on January 1, 2014. We use UNION DISTINCT (instead of UNION ALL) to ensure that each user appears in the table at most once.
End of explanation
# Number of users who posted stories or comments on January 1, 2014
len(union_result)
Explanation: To get the number of users who posted on January 1, 2014, we need only take the length of the DataFrame.
End of explanation
<END_TASK> |
84 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
A Nonsensical Language Model using Theano LSTM
Today we will train a nonsensical language model !
We will first collect some language data, convert it to numbers, and then feed it to a recurrent neural network and ask it to predict upcoming words. When we are done we will have a machine that can generate sentences from our made-up language ad-infinitum !
Collect Language Data
The first step here is to get some data. Since we are basing our language on nonsense, we need to generate good nonsense using a sampler.
Our sampler will take a probability table as input, e.g. a language where people are equally likely to say "a" or "b" would be written as follows
Step1: Parts of Speech
Now that we have a Sampler we can create a couple different word groups that our language uses to distinguish between different probability distributions easily
Step2: Simple Grammar
To create sentences from our language we create a simple recursion that goes as follows
Step4: Utilities
Now that we have our training corpus for our language model (optionally you could gather an actual corpus from the web
Step5: Create a Mapping from numbers to words
Now we can use the Vocab class to gather all the words and store an Index
Step6: To send our sentences in one big chunk to our neural network we transform each sentence into a row vector and place each of these rows into a bigger matrix that holds all these rows. Not all sentences have the same length, so we will pad those that are too short with 0s in pad_into_matrix
Step12: Build a Recurrent Neural Network
Now the real work is upon us! Thank goodness we have our language data ready. We now create a recurrent neural network by connecting an Embedding $E$ for each word in our corpus, and stacking some special cells together to form a prediction function. Mathematically we want
Step16: Construct model
We now declare the model and parametrize it to use an RNN, and make predictions in the range provided by our vocabulary. We also tell the greedy reconstruction search that it can consider a sentence as being over when the symbol corresponding to a period appears
Step17: Train Model
We run 10,000 times through our data and every 500 epochs of training we output what the model considers to be a natural continuation to the sentence "the" | <ASSISTANT_TASK:>
Python Code:
## Fake dataset:
class Sampler:
def __init__(self, prob_table):
total_prob = 0.0
if type(prob_table) is dict:
for key, value in prob_table.items():
total_prob += value
elif type(prob_table) is list:
prob_table_gen = {}
for key in prob_table:
prob_table_gen[key] = 1.0 / (float(len(prob_table)))
total_prob = 1.0
prob_table = prob_table_gen
else:
raise ArgumentError("__init__ takes either a dict or a list as its first argument")
if total_prob <= 0.0:
raise ValueError("Probability is not strictly positive.")
self._keys = []
self._probs = []
for key in prob_table:
self._keys.append(key)
self._probs.append(prob_table[key] / total_prob)
def __call__(self):
sample = random.random()
seen_prob = 0.0
for key, prob in zip(self._keys, self._probs):
if (seen_prob + prob) >= sample:
return key
else:
seen_prob += prob
return key
Explanation: A Nonsensical Language Model using Theano LSTM
Today we will train a nonsensical language model !
We will first collect some language data, convert it to numbers, and then feed it to a recurrent neural network and ask it to predict upcoming words. When we are done we will have a machine that can generate sentences from our made-up language ad-infinitum !
Collect Language Data
The first step here is to get some data. Since we are basing our language on nonsense, we need to generate good nonsense using a sampler.
Our sampler will take a probability table as input, e.g. a language where people are equally likely to say "a" or "b" would be written as follows:
nonsense = Sampler({"a": 0.5, "b": 0.5})
We get samples from this language like this:
word = nonsense()
We overloaded the __call__ method and got this syntactic sugar.
End of explanation
samplers = {
"punctuation": Sampler({".": 0.49, ",": 0.5, ";": 0.03, "?": 0.05, "!": 0.05}),
"stop": Sampler({"the": 10, "from": 5, "a": 9, "they": 3, "he": 3, "it" : 2.5, "she": 2.7, "in": 4.5}),
"noun": Sampler(["cat", "broom", "boat", "dog", "car", "wrangler", "mexico", "lantern", "book", "paper", "joke","calendar", "ship", "event"]),
"verb": Sampler(["ran", "stole", "carried", "could", "would", "do", "can", "carry", "catapult", "jump", "duck"]),
"adverb": Sampler(["rapidly", "calmly", "cooly", "in jest", "fantastically", "angrily", "dazily"])
}
Explanation: Parts of Speech
Now that we have a Sampler we can create a couple different word groups that our language uses to distinguish between different probability distributions easily:
End of explanation
def generate_nonsense(word = ""):
if word.endswith("."):
return word
else:
if len(word) > 0:
word += " "
word += samplers["stop"]()
word += " " + samplers["noun"]()
if random.random() > 0.7:
word += " " + samplers["adverb"]()
if random.random() > 0.7:
word += " " + samplers["adverb"]()
word += " " + samplers["verb"]()
if random.random() > 0.8:
word += " " + samplers["noun"]()
if random.random() > 0.9:
word += "-" + samplers["noun"]()
if len(word) > 500:
word += "."
else:
word += " " + samplers["punctuation"]()
return generate_nonsense(word)
def generate_dataset(total_size, ):
sentences = []
for i in range(total_size):
sentences.append(generate_nonsense())
return sentences
# generate dataset
lines = generate_dataset(100)
Explanation: Simple Grammar
To create sentences from our language we create a simple recursion that goes as follows:
If the sentence we have ends with a full stop, a question mark, or an exclamation point then end at once!
Else our sentence should have:
A stop word
A noun
An adverb (with prob 0.3), or 2 adverbs (with prob 0.3*0.3=0.09)
A verb
Another noun (with prob 0.2), or 2 more nouns connected by a dash (with prob 0.2*0.1=0.02)
If our sentence is now over 500 characters, add a full stop and end at once!
Else add some punctuation and go back to (1)
End of explanation
### Utilities:
class Vocab:
__slots__ = ["word2index", "index2word", "unknown"]
def __init__(self, index2word = None):
self.word2index = {}
self.index2word = []
# add unknown word:
self.add_words(["**UNKNOWN**"])
self.unknown = 0
if index2word is not None:
self.add_words(index2word)
def add_words(self, words):
for word in words:
if word not in self.word2index:
self.word2index[word] = len(self.word2index)
self.index2word.append(word)
def __call__(self, line):
        """
        Convert from numerical representation to words
        and vice-versa.
        """
if type(line) is np.ndarray:
return " ".join([self.index2word[word] for word in line])
if type(line) is list:
if len(line) > 0:
if line[0] is int:
return " ".join([self.index2word[word] for word in line])
indices = np.zeros(len(line), dtype=np.int32)
else:
line = line.split(" ")
indices = np.zeros(len(line), dtype=np.int32)
for i, word in enumerate(line):
indices[i] = self.word2index.get(word, self.unknown)
return indices
@property
def size(self):
return len(self.index2word)
def __len__(self):
return len(self.index2word)
Explanation: Utilities
Now that we have our training corpus for our language model (optionally you could gather an actual corpus from the web :), we can now create our first utility, Vocab, that will hold the mapping from words to an index, and perform the conversions from words to indices and vice-versa:
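A quick usage sketch of the class above (illustration only; it assumes numpy is imported as np, which the class itself requires, and index 0 is reserved for **UNKNOWN**):
v = Vocab()
v.add_words("the cat ran .".split(" "))
print(v("the cat ran ."))   # words -> indices, e.g. [1 2 3 4]
print(v("the dog ran ."))   # the unseen word "dog" falls back to index 0 (**UNKNOWN**)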
End of explanation
vocab = Vocab()
for line in lines:
vocab.add_words(line.split(" "))
Explanation: Create a Mapping from numbers to words
Now we can use the Vocab class to gather all the words and store an Index:
End of explanation
def pad_into_matrix(rows, padding = 0):
if len(rows) == 0:
return np.array([0, 0], dtype=np.int32)
    lengths = list(map(len, rows))  # list() so this also works on Python 3, where map returns an iterator
width = max(lengths)
height = len(rows)
mat = np.empty([height, width], dtype=rows[0].dtype)
mat.fill(padding)
for i, row in enumerate(rows):
mat[i, 0:len(row)] = row
return mat, list(lengths)
# transform into big numerical matrix of sentences:
numerical_lines = []
for line in lines:
numerical_lines.append(vocab(line))
numerical_lines, numerical_lengths = pad_into_matrix(numerical_lines)
Explanation: To send our sentences in one big chunk to our neural network we transform each sentence into a row vector and place each of these rows into a bigger matrix that holds all these rows. Not all sentences have the same length, so we will pad those that are too short with 0s in pad_into_matrix:
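A small illustration of the padding (assuming numpy is imported as np, as the function above requires):
short = np.array([1, 2, 3], dtype=np.int32)
longer = np.array([4, 5, 6, 7, 8], dtype=np.int32)
mat, lengths = pad_into_matrix([short, longer])
print(mat)      # [[1 2 3 0 0]
                #  [4 5 6 7 8]]
print(lengths)  # [3, 5]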
End of explanation
from theano_lstm import Embedding, LSTM, RNN, StackedCells, Layer, create_optimization_updates, masked_loss
def softmax(x):
    """
    Wrapper for softmax, helps with
    pickling, and removing one extra
    dimension that Theano adds during
    its exponential normalization.
    """
return T.nnet.softmax(x.T)
def has_hidden(layer):
    """
    Whether a layer has a trainable
    initial hidden state.
    """
return hasattr(layer, 'initial_hidden_state')
def matrixify(vector, n):
return T.repeat(T.shape_padleft(vector), n, axis=0)
def initial_state(layer, dimensions = None):
    """
    Initializes the recurrence relation with an initial hidden state
    if needed, else replaces with a "None" to tell Theano that
    the network **will** return something, but it does not need
    to send it to the next step of the recurrence
    """
if dimensions is None:
return layer.initial_hidden_state if has_hidden(layer) else None
else:
return matrixify(layer.initial_hidden_state, dimensions) if has_hidden(layer) else None
def initial_state_with_taps(layer, dimensions = None):
    """Optionally wrap tensor variable into a dict with taps=[-1]"""
state = initial_state(layer, dimensions)
if state is not None:
return dict(initial=state, taps=[-1])
else:
return None
class Model:
    """
    Simple predictive model for forecasting words from
    sequence using LSTMs. Choose how many LSTMs to stack,
    what size their memory should be, and how many
    words can be predicted.
    """
def __init__(self, hidden_size, input_size, vocab_size, stack_size=1, celltype=LSTM):
# declare model
self.model = StackedCells(input_size, celltype=celltype, layers =[hidden_size] * stack_size)
# add an embedding
self.model.layers.insert(0, Embedding(vocab_size, input_size))
# add a classifier:
self.model.layers.append(Layer(hidden_size, vocab_size, activation = softmax))
# inputs are matrices of indices,
# each row is a sentence, each column a timestep
self._stop_word = theano.shared(np.int32(999999999), name="stop word")
self.for_how_long = T.ivector()
self.input_mat = T.imatrix()
self.priming_word = T.iscalar()
self.srng = T.shared_randomstreams.RandomStreams(np.random.randint(0, 1024))
# create symbolic variables for prediction:
self.predictions = self.create_prediction()
# create symbolic variable for greedy search:
self.greedy_predictions = self.create_prediction(greedy=True)
# create gradient training functions:
self.create_cost_fun()
self.create_training_function()
self.create_predict_function()
def stop_on(self, idx):
self._stop_word.set_value(idx)
@property
def params(self):
return self.model.params
def create_prediction(self, greedy=False):
def step(idx, *states):
# new hiddens are the states we need to pass to LSTMs
# from past. Because the StackedCells also include
# the embeddings, and those have no state, we pass
# a "None" instead:
new_hiddens = [None] + list(states)
new_states = self.model.forward(idx, prev_hiddens = new_hiddens)
if greedy:
new_idxes = new_states[-1]
new_idx = new_idxes.argmax()
# provide a stopping condition for greedy search:
return ([new_idx.astype(self.priming_word.dtype)] + new_states[1:-1]), theano.scan_module.until(T.eq(new_idx,self._stop_word))
else:
return new_states[1:]
# in sequence forecasting scenario we take everything
# up to the before last step, and predict subsequent
# steps ergo, 0 ... n - 1, hence:
inputs = self.input_mat[:, 0:-1]
num_examples = inputs.shape[0]
# pass this to Theano's recurrence relation function:
# choose what gets outputted at each timestep:
if greedy:
outputs_info = [dict(initial=self.priming_word, taps=[-1])] + [initial_state_with_taps(layer) for layer in self.model.layers[1:-1]]
result, _ = theano.scan(fn=step,
n_steps=200,
outputs_info=outputs_info)
else:
outputs_info = [initial_state_with_taps(layer, num_examples) for layer in self.model.layers[1:]]
result, _ = theano.scan(fn=step,
sequences=[inputs.T],
outputs_info=outputs_info)
if greedy:
return result[0]
# softmaxes are the last layer of our network,
# and are at the end of our results list:
return result[-1].transpose((2,0,1))
# we reorder the predictions to be:
# 1. what row / example
# 2. what timestep
# 3. softmax dimension
def create_cost_fun (self):
# create a cost function that
# takes each prediction at every timestep
# and guesses next timestep's value:
what_to_predict = self.input_mat[:, 1:]
# because some sentences are shorter, we
# place masks where the sentences end:
# (for how long is zero indexed, e.g. an example going from `[2,3)`)
# has this value set 0 (here we substract by 1):
for_how_long = self.for_how_long - 1
# all sentences start at T=0:
starting_when = T.zeros_like(self.for_how_long)
self.cost = masked_loss(self.predictions,
what_to_predict,
for_how_long,
starting_when).sum()
def create_predict_function(self):
self.pred_fun = theano.function(
inputs=[self.input_mat],
outputs =self.predictions,
allow_input_downcast=True
)
self.greedy_fun = theano.function(
inputs=[self.priming_word],
outputs=T.concatenate([T.shape_padleft(self.priming_word), self.greedy_predictions]),
allow_input_downcast=True
)
def create_training_function(self):
updates, _, _, _, _ = create_optimization_updates(self.cost, self.params, method="adadelta")
self.update_fun = theano.function(
inputs=[self.input_mat, self.for_how_long],
outputs=self.cost,
updates=updates,
allow_input_downcast=True)
def __call__(self, x):
return self.pred_fun(x)
Explanation: Build a Recurrent Neural Network
Now the real work is upon us! Thank goodness we have our language data ready. We now create a recurrent neural network by connecting an Embedding $E$ for each word in our corpus, and stacking some special cells together to form a prediction function. Mathematically we want:
$$\mathrm{argmax_{E, \Phi}} {\bf P}(w_{k+1}| w_{k}, \dots, w_{0}; E, \Phi) = f(x, h)$$
with $f(\cdot, \cdot)$ the function our recurrent neural network performs at each timestep that takes as inputs:
an observation $x$, and
a previous state $h$,
and outputs a probability distribution $\hat{p}$ over the next word.
We have $x = E[ w_{k}]$ our observation at time $k$, and $h$ the internal state of our neural network, and $\Phi$ is the set of parameters used by our classifier, and recurrent neural network, and $E$ is the embedding for our words.
In practice we will obtain $E$ and $\Phi$ iteratively using gradient descent on the error our network is making in its prediction. To do this we define our error as the Kullback-Leibler divergence (a distance between probability distributions) between our estimate of $\hat{p} = {\bf P}(w_{k+1}| w_{k}, \dots, w_{0}; E, \Phi)$ and the actual value of ${\bf P}(w_{k+1}| w_{k}, \dots, w_{0})$ from the data (e.g. a probability distribution that is 1 for word $w_k$ and 0 elsewhere).
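With a one-hot target this divergence reduces to the negative log probability assigned to the correct next word; a quick numeric sketch (illustration only, plain numpy):
import numpy as np
p_hat = np.array([0.1, 0.7, 0.2])   # predicted distribution over a 3-word vocabulary
target = 1                          # index of the word that actually came next
loss = -np.log(p_hat[target])       # cross-entropy against the one-hot target
print(loss)                         # ~0.357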
Theano LSTM StackedCells function
To build this predictive model we make use of theano_lstm, a Python module for building recurrent neural networks using Theano. The first step we take is to declare what kind of cells we want to use by declaring a celltype. There are many different celltypes we can use, but the most common these days (and incidentally most effective) are RNN and LSTM. For a more in-depth discussion of how these work I suggest checking out Arxiv, or Alex Graves' website, or Wikipedia. Here we use celltype = LSTM.
self.model = StackedCells(input_size, celltype=celltype, layers =[hidden_size] * stack_size)
Once we've declared what kind of cells we want to use, we can now choose to add an Embedding to map integers (indices) to vectors (and in our case map words to their indices, then indices to word vectors we wish to train). Intuitively this lets the network separate and recognize what it is "seeing" or "receiving" at each timestep. To add an Embedding we create Embedding(vocabulary_size, size_of_embedding_vectors) and insert it at the begging of the StackedCells's layers list (thereby telling StackedCells that this Embedding layer needs to be activated before the other ones):
# add an embedding
self.model.layers.insert(0, Embedding(vocab_size, input_size))
The final output of our network needs to be a probability distribution over the next words (but in different application areas this could be a sentiment classification, a decision, a topic, etc...) so we add another layer that maps the internal state of the LSTMs to a probability distribution over the all the words in our language. To ensure that our prediction is indeed a probability distribution we "activate" our layer with a Softmax, meaning that we will exponentiate every value of the output, $q_i = e^{x_i}$, so that all values are positive, and then we will divide the output by its sum so that the output sums to 1:
$$p_i = \frac{q_i}{\sum_j q_j}\text{, and }\sum_i p_i = 1.$$
# add a classifier:
self.model.layers.append(Layer(hidden_size, vocab_size, activation = softmax))
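The normalization can be sanity-checked numerically with a standalone numpy sketch (separate from the Theano version above):
import numpy as np
x = np.array([1.0, 2.0, 3.0])
q = np.exp(x)    # exponentiate: every value becomes positive
p = q / q.sum()  # divide by the sum
print(p)         # [0.09003057 0.24472847 0.66524096]
print(p.sum())   # 1.0 (up to floating point)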
For convenience we wrap this all in one class below.
Prediction
We have now defined our network. At each timestep we can produce a probability distribution for each input index:
def create_prediction(self, greedy=False):
def step(idx, *states):
# new hiddens are the states we need to pass to LSTMs
# from past. Because the StackedCells also include
# the embeddings, and those have no state, we pass
# a "None" instead:
new_hiddens = [None] + list(states)
new_states = self.model.forward(idx, prev_hiddens = new_hiddens)
return new_states[1:]
...
Our inputs are an integer matrix Theano symbolic variable:
...
# in sequence forecasting scenario we take everything
# up to the before last step, and predict subsequent
# steps ergo, 0 ... n - 1, hence:
inputs = self.input_mat[:, 0:-1]
num_examples = inputs.shape[0]
# pass this to Theano's recurrence relation function:
....
Scan receives our recurrence relation step from above, and also needs to know what will be outputted at each step in outputs_info. We give outputs_info a set of variables corresponding to the hidden states of our StackedCells. Some of the layers have no hidden state, and thus we should simply pass a None to Theano, while others do require some initial state. In those cases with wrap their initial state inside a dictionary:
def has_hidden(layer):
Whether a layer has a trainable
initial hidden state.
return hasattr(layer, 'initial_hidden_state')
def matrixify(vector, n):
return T.repeat(T.shape_padleft(vector), n, axis=0)
def initial_state(layer, dimensions = None):
Initalizes the recurrence relation with an initial hidden state
if needed, else replaces with a "None" to tell Theano that
the network **will** return something, but it does not need
to send it to the next step of the recurrence
if dimensions is None:
return layer.initial_hidden_state if has_hidden(layer) else None
else:
return matrixify(layer.initial_hidden_state, dimensions) if has_hidden(layer) else None
def initial_state_with_taps(layer, dimensions = None):
Optionally wrap tensor variable into a dict with taps=[-1]
state = initial_state(layer, dimensions)
if state is not None:
return dict(initial=state, taps=[-1])
else:
return None
Let's now create these inital states (note how we skip layer 1, the embeddings by doing self.model.layers[1:] in the iteration, this is because there is no point in passing these embeddings around in our recurrence because word vectors are only seen at the timestep they are received in this network):
# choose what gets outputted at each timestep:
outputs_info = [initial_state_with_taps(layer, num_examples) for layer in self.model.layers[1:]]
result, _ = theano.scan(fn=step,
sequences=[inputs.T],
outputs_info=outputs_info)
if greedy:
return result[0]
# softmaxes are the last layer of our network,
# and are at the end of our results list:
return result[-1].transpose((2,0,1))
# we reorder the predictions to be:
# 1. what row / example
# 2. what timestep
# 3. softmax dimension
Error Function:
Our error function uses theano_lstm's masked_loss method. This method allows us to define ranges over which a probability distribution should obey a particular target distribution. We control this method by setting start and end points for these ranges. In doing so we mask the areas where we do not care what the network predicted.
In our case our network predicts words we care about during the sentence, but when we pad our short sentences with 0s to fill our matrix, we do not care what the network does there, because this is happening outside the sentence we collected:
def create_cost_fun (self):
# create a cost function that
# takes each prediction at every timestep
# and guesses next timestep's value:
what_to_predict = self.input_mat[:, 1:]
# because some sentences are shorter, we
# place masks where the sentences end:
# (for how long is zero indexed, e.g. an example going from `[2,3)`)
# has this value set 0 (here we substract by 1):
for_how_long = self.for_how_long - 1
# all sentences start at T=0:
starting_when = T.zeros_like(self.for_how_long)
self.cost = masked_loss(self.predictions,
what_to_predict,
for_how_long,
starting_when).sum()
Training Function
We now have a cost function. To perform gradient descent we now need to tell Theano how each parameter must be updated at every training epoch. We theano_lstm's create_optimization_udpates method to generate a dictionary of updates and to apply special gradient descent rules that accelerate and facilitate training (for instance scaling the gradients when they are too large or too little, and preventing gradients from becoming too big and making our model numerically unstable -- in this example we use Adadelta:
def create_training_function(self):
updates, _, _, _, _ = create_optimization_updates(self.cost, self.params, method="adadelta")
self.update_fun = theano.function(
inputs=[self.input_mat, self.for_how_long],
outputs=self.cost,
updates=updates,
allow_input_downcast=True)
PS: our parameters are obtained by calling self.model.params:
@property
def params(self):
return self.model.params
Final Code
End of explanation
# construct model & theano functions:
model = Model(
input_size=10,
hidden_size=10,
vocab_size=len(vocab),
stack_size=1, # make this bigger, but makes compilation slow
celltype=RNN # use RNN or LSTM
)
model.stop_on(vocab.word2index["."])
Explanation: Construct model
We now declare the model and parametrize it to use an RNN, and make predictions in the range provided by our vocabulary. We also tell the greedy reconstruction search that it can consider a sentence as being over when the symbol corresponding to a period appears:
End of explanation
# train:
for i in range(10000):
    error = model.update_fun(numerical_lines, numerical_lengths)
    if i % 100 == 0:
print("epoch %(epoch)d, error=%(error).2f" % ({"epoch": i, "error": error}))
if i % 500 == 0:
print(vocab(model.greedy_fun(vocab.word2index["the"])))
Explanation: Train Model
We run 10,000 times through our data and every 500 epochs of training we output what the model considers to be a natural continuation to the sentence "the":
End of explanation
<END_TASK> |
85 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
In the tutorial, you learned about six different types of bias. In this exercise, you'll train a model with real data and get practice with identifying bias. Don't worry if you're new to coding
Step1: Run the next code cell without changes to use the data to train a simple model. The output shows the accuracy of the model on some test data.
Step2: Roughly 93% of the comments in the test data are classified correctly!
1) Try out the model
You'll use the next code cell to write your own comments and supply them to the model
Step3: Once you're done with testing comments, we'll move on to understand how the model makes decisions. Run the next code cell without changes.
The model assigns each of roughly 58,000 words a coefficient, where higher coefficients denote words that the model thinks are more toxic. The code cell outputs the ten words that are considered most toxic, along with their coefficients.
Step4: 2) Most toxic words
Take a look at the most toxic words from the code cell above. Are you surprised to see any of them? Are there any words that seem like they should not be in the list?
Step5: 3) A closer investigation
We'll take a closer look at how the model classifies comments.
1. Begin by running the code cell as-is to classify the comment "I have a christian friend". You should see that was classified as "NOT TOXIC". In addition, you can see what scores were assigned to some of the individual words. Note that all words in the comment likely won't appear.
2. Next, try out another comment
Step6: 4) Identify bias
Do you see any signs of potential bias in the model? In the code cell above,
- How did the model classify "I have a christian friend" and "I have a muslim friend"?
- How did it classify "I have a white friend" and "I have a black friend"?
Once you have an answer, run the next code cell.
Step7: 5) Test your understanding
We'll step away from the Jigsaw competition data and consider a similar (but hypothetical!) scenario where you're working with a dataset of online comments to train a model to classify comments as toxic.
You notice that comments that refer to Islam are more likely to be toxic than comments that refer to other religions, because the online community is islamophobic. What type of bias can this introduce to your model?
Once you have answered the question, run the next code cell to see the official answer.
Step8: 6) Test your understanding, part 2
We'll continue with the same hypothetical scenario, where you're trying to train a model to classify online comments as toxic.
You take any comments that are not already in English and translate them to English with a separate tool. Then, you treat all posts as if they were originally expressed in English. What type of bias will your model suffer from?
Once you have answered the question, run the next code cell to see the official answer.
Step9: 7) Test your understanding, part 3
We'll continue with the same hypothetical scenario, where you're trying to train a model to classify online comments as toxic.
The dataset you're using to train the model contains comments primarily from users based in the United Kingdom.
After training a model, you evaluate its performance with another dataset of comments, also primarily from users based in the United Kingdom -- and it gets great performance! You deploy it for a company based in Australia, and it does not perform well, because of differences between British and Australian English. What types of bias does the model suffer from?
Once you have answered the question, run the next code cell to see the official answer. | <ASSISTANT_TASK:>
Python Code:
# Set up feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.ethics.ex3 import *
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer
# Get the same results each time
np.random.seed(0)
# Load the training data
data = pd.read_csv("../input/jigsaw-snapshot/data.csv")
comments = data["comment_text"]
target = (data["target"]>0.7).astype(int)
# Break into training and test sets
comments_train, comments_test, y_train, y_test = train_test_split(comments, target, test_size=0.30, stratify=target)
# Get vocabulary from training data
vectorizer = CountVectorizer()
vectorizer.fit(comments_train)
# Get word counts for training and test sets
X_train = vectorizer.transform(comments_train)
X_test = vectorizer.transform(comments_test)
# Preview the dataset
print("Data successfully loaded!\n")
print("Sample toxic comment:", comments_train.iloc[22])
print("Sample not-toxic comment:", comments_train.iloc[17])
Explanation: In the tutorial, you learned about six different types of bias. In this exercise, you'll train a model with real data and get practice with identifying bias. Don't worry if you're new to coding: you'll still be able to complete the exercise!
Introduction
At the end of 2017, the Civil Comments platform shut down and released their ~2 million public comments in a lasting open archive. Jigsaw sponsored this effort and helped to comprehensively annotate the data. In 2019, Kaggle held the Jigsaw Unintended Bias in Toxicity Classification competition so that data scientists worldwide could work together to investigate ways to mitigate bias.
The code cell below loads some of the data from the competition. We'll work with thousands of comments, where each comment is labeled as either "toxic" or "not toxic".
Begin by running the next code cell.
- Clicking inside the code cell.
- Click on the triangle (in the shape of a "Play button") that appears to the left of the code cell.
The code will run for approximately 30 seconds. When it finishes, you should see as output a message saying that the data was successfully loaded, along with two examples of comments: one is toxic, and the other is not.
End of explanation
from sklearn.linear_model import LogisticRegression
# Train a model and evaluate performance on test dataset
classifier = LogisticRegression(max_iter=2000)
classifier.fit(X_train, y_train)
score = classifier.score(X_test, y_test)
print("Accuracy:", score)
# Function to classify any string
def classify_string(string, investigate=False):
prediction = classifier.predict(vectorizer.transform([string]))[0]
if prediction == 0:
print("NOT TOXIC:", string)
else:
print("TOXIC:", string)
Explanation: Run the next code cell without changes to use the data to train a simple model. The output shows the accuracy of the model on some test data.
End of explanation
# Comment to pass through the model
my_comment = "I love apples"
# Do not change the code below
classify_string(my_comment)
q_1.check()
Explanation: Roughly 93% of the comments in the test data are classified correctly!
1) Try out the model
You'll use the next code cell to write your own comments and supply them to the model: does the model classify them as toxic?
Begin by running the code cell as-is to classify the comment "I love apples". You should see that was classified as "NOT TOXIC".
Then, try out another comment: "Apples are stupid". To do this, change only "I love apples" and leaving the rest of the code as-is. Make sure that your comment is enclosed in quotes, as below.
python
my_comment = "Apples are stupid"
Try out several comments (not necessarily about apples!), to see how the model performs: does it perform as suspected?
End of explanation
coefficients = pd.DataFrame({"word": sorted(list(vectorizer.vocabulary_.keys())), "coeff": classifier.coef_[0]})
coefficients.sort_values(by=['coeff']).tail(10)
Explanation: Once you're done with testing comments, we'll move on to understand how the model makes decisions. Run the next code cell without changes.
The model assigns each of roughly 58,000 words a coefficient, where higher coefficients denote words that the model thinks are more toxic. The code cell outputs the ten words that are considered most toxic, along with their coefficients.
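If you want to inspect one particular word rather than the top ten, the same coefficients table can be filtered directly (illustration only, using the DataFrame built above):
# Look up the learned coefficient for any single word
word_of_interest = "apples"
coefficients[coefficients.word == word_of_interest]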
End of explanation
# Check your answer (Run this code cell to get credit!)
q_2.check()
Explanation: 2) Most toxic words
Take a look at the most toxic words from the code cell above. Are you surprised to see any of them? Are there any words that seem like they should not be in the list?
End of explanation
# Set the value of new_comment
new_comment = "I have a christian friend"
# Do not change the code below
classify_string(new_comment)
coefficients[coefficients.word.isin(new_comment.split())]
q_3.check()
Explanation: 3) A closer investigation
We'll take a closer look at how the model classifies comments.
1. Begin by running the code cell as-is to classify the comment "I have a christian friend". You should see that it was classified as "NOT TOXIC". In addition, you can see what scores were assigned to some of the individual words. Note that all words in the comment likely won't appear.
2. Next, try out another comment: "I have a muslim friend". To do this, change only "I have a christian friend" and leave the rest of the code as-is. Make sure that your comment is enclosed in quotes, as below.
python
new_comment = "I have a muslim friend"
3. Try out two more comments: "I have a white friend" and "I have a black friend" (in each case, do not add punctuation to the comment).
4. Feel free to try out more comments, to see how the model classifies them.
End of explanation
# Check your answer (Run this code cell to get credit!)
q_4.check()
Explanation: 4) Identify bias
Do you see any signs of potential bias in the model? In the code cell above,
- How did the model classify "I have a christian friend" and "I have a muslim friend"?
- How did it classify "I have a white friend" and "I have a black friend"?
Once you have an answer, run the next code cell.
End of explanation
# Check your answer (Run this code cell to get credit!)
q_5.check()
Explanation: 5) Test your understanding
We'll step away from the Jigsaw competition data and consider a similar (but hypothetical!) scenario where you're working with a dataset of online comments to train a model to classify comments as toxic.
You notice that comments that refer to Islam are more likely to be toxic than comments that refer to other religions, because the online community is islamophobic. What type of bias can this introduce to your model?
Once you have answered the question, run the next code cell to see the official answer.
End of explanation
# Check your answer (Run this code cell to get credit!)
q_6.check()
Explanation: 6) Test your understanding, part 2
We'll continue with the same hypothetical scenario, where you're trying to train a model to classify online comments as toxic.
You take any comments that are not already in English and translate them to English with a separate tool. Then, you treat all posts as if they were originally expressed in English. What type of bias will your model suffer from?
Once you have answered the question, run the next code cell to see the official answer.
End of explanation
# Check your answer (Run this code cell to get credit!)
q_7.check()
Explanation: 7) Test your understanding, part 3
We'll continue with the same hypothetical scenario, where you're trying to train a model to classify online comments as toxic.
The dataset you're using to train the model contains comments primarily from users based in the United Kingdom.
After training a model, you evaluate its performance with another dataset of comments, also primarily from users based in the United Kingdom -- and it gets great performance! You deploy it for a company based in Australia, and it does not perform well, because of differences between British and Australian English. What types of bias does the model suffer from?
Once you have answered the question, run the next code cell to see the official answer.
End of explanation
<END_TASK> |
86 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Toplevel
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required
Step7: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required
Step8: 3.2. CMIP3 Parent
Is Required
Step9: 3.3. CMIP5 Parent
Is Required
Step10: 3.4. Previous Name
Is Required
Step11: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required
Step12: 4.2. Code Version
Is Required
Step13: 4.3. Code Languages
Is Required
Step14: 4.4. Components Structure
Is Required
Step15: 4.5. Coupler
Is Required
Step16: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required
Step17: 5.2. Atmosphere Double Flux
Is Required
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Is Required
Step19: 5.4. Atmosphere Relative Winds
Is Required
Step20: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required
Step21: 6.2. Global Mean Metrics Used
Is Required
Step22: 6.3. Regional Metrics Used
Is Required
Step23: 6.4. Trend Metrics Used
Is Required
Step24: 6.5. Energy Balance
Is Required
Step25: 6.6. Fresh Water Balance
Is Required
Step26: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required
Step27: 7.2. Atmos Ocean Interface
Is Required
Step28: 7.3. Atmos Land Interface
Is Required
Step29: 7.4. Atmos Sea-ice Interface
Is Required
Step30: 7.5. Ocean Seaice Interface
Is Required
Step31: 7.6. Land Ocean Interface
Is Required
Step32: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required
Step33: 8.2. Atmos Ocean Interface
Is Required
Step34: 8.3. Atmos Land Interface
Is Required
Step35: 8.4. Atmos Sea-ice Interface
Is Required
Step36: 8.5. Ocean Seaice Interface
Is Required
Step37: 8.6. Runoff
Is Required
Step38: 8.7. Iceberg Calving
Is Required
Step39: 8.8. Endoreic Basins
Is Required
Step40: 8.9. Snow Accumulation
Is Required
Step41: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required
Step42: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required
Step43: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required
Step45: 12.2. Additional Information
Is Required
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required
Step47: 13.2. Additional Information
Is Required
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required
Step49: 14.2. Additional Information
Is Required
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required
Step51: 15.2. Additional Information
Is Required
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required
Step53: 16.2. Additional Information
Is Required
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required
Step55: 17.2. Equivalence Concentration
Is Required
Step56: 17.3. Additional Information
Is Required
Step57: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required
Step58: 18.2. Additional Information
Is Required
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required
Step60: 19.2. Additional Information
Is Required
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required
Step62: 20.2. Additional Information
Is Required
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required
Step64: 21.2. Additional Information
Is Required
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required
Step66: 22.2. Aerosol Effect On Ice Clouds
Is Required
Step67: 22.3. Additional Information
Is Required
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required
Step69: 23.2. Aerosol Effect On Ice Clouds
Is Required
Step70: 23.3. RFaci From Sulfate Only
Is Required
Step71: 23.4. Additional Information
Is Required
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required
Step73: 24.2. Additional Information
Is Required
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step77: 25.4. Additional Information
Is Required
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required
Step81: 26.4. Additional Information
Is Required
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required
Step83: 27.2. Additional Information
Is Required
Step84: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required
Step85: 28.2. Crop Change Only
Is Required
Step86: 28.3. Additional Information
Is Required
Step87: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required
Step88: 29.2. Additional Information
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nims-kma', 'sandbox-3', 'toplevel')
Explanation: ES-DOC CMIP6 Model Properties - Toplevel
MIP Era: CMIP6
Institute: NIMS-KMA
Source ID: SANDBOX-3
Sub-Topics: Radiative Forcings.
Properties: 85 (42 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:29
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Flux Correction
3. Key Properties --> Genealogy
4. Key Properties --> Software Properties
5. Key Properties --> Coupling
6. Key Properties --> Tuning Applied
7. Key Properties --> Conservation --> Heat
8. Key Properties --> Conservation --> Fresh Water
9. Key Properties --> Conservation --> Salt
10. Key Properties --> Conservation --> Momentum
11. Radiative Forcings
12. Radiative Forcings --> Greenhouse Gases --> CO2
13. Radiative Forcings --> Greenhouse Gases --> CH4
14. Radiative Forcings --> Greenhouse Gases --> N2O
15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
17. Radiative Forcings --> Greenhouse Gases --> CFC
18. Radiative Forcings --> Aerosols --> SO4
19. Radiative Forcings --> Aerosols --> Black Carbon
20. Radiative Forcings --> Aerosols --> Organic Carbon
21. Radiative Forcings --> Aerosols --> Nitrate
22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
24. Radiative Forcings --> Aerosols --> Dust
25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
27. Radiative Forcings --> Aerosols --> Sea Salt
28. Radiative Forcings --> Other --> Land Use
29. Radiative Forcings --> Other --> Solar
1. Key Properties
Key properties of the model
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Top level overview of coupled model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of coupled model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Flux Correction
Flux correction properties of the model
2.1. Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how flux corrections are applied in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Genealogy
Genealogy and history of the model
3.1. Year Released
Is Required: TRUE Type: STRING Cardinality: 1.1
Year the model was released
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. CMIP3 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP3 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. CMIP5 Parent
Is Required: FALSE Type: STRING Cardinality: 0.1
CMIP5 parent if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Previous Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Previously known as
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of model
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.4. Components Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how model realms are structured into independent software components (coupled via a coupler) and internal software components.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.5. Coupler
Is Required: FALSE Type: ENUM Cardinality: 0.1
Overarching coupling framework for model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Coupling
**
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of coupling in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.2. Atmosphere Double Flux
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 5.3. Atmosphere Fluxes Calculation Grid
Is Required: FALSE Type: ENUM Cardinality: 0.1
Where are the air-sea fluxes calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 5.4. Atmosphere Relative Winds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Tuning Applied
Tuning methodology for model
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics/diagnostics of the global mean state used in tuning model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics/diagnostics used in tuning model/component (such as 20th century)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.5. Energy Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. Fresh Water Balance
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Conservation --> Heat
Global heat conservation properties of the model
7.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how heat is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.6. Land Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how heat is conserved at the land/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation --> Fresh Water
Global fresh water conservation properties of the model
8.1. Global
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh_water is conserved globally
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Atmos Ocean Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh_water is conserved at the atmosphere/ocean coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Atmos Land Interface
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how fresh water is conserved at the atmosphere/land coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Atmos Sea-ice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.5. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how fresh water is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.6. Runoff
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how runoff is distributed and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.7. Iceberg Calving
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how iceberg calving is modeled and conserved
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.8. Endoreic Basins
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how endoreic basins (no ocean access) are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.9. Snow Accumulation
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how snow accumulation over land and over sea-ice is treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Key Properties --> Conservation --> Salt
Global salt conservation properties of the model
9.1. Ocean Seaice Interface
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how salt is conserved at the ocean/sea-ice coupling interface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10. Key Properties --> Conservation --> Momentum
Global momentum conservation properties of the model
10.1. Details
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how momentum is conserved in the model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Radiative Forcings
Radiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)
11.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of radiative forcings (GHG and aerosols) implementation in model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Carbon dioxide forcing
12.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Methane forcing
13.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Nitrous oxide forcing
14.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Tropospheric ozone forcing
15.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Stratospheric ozone forcing
16.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Ozone-depleting and non-ozone-depleting fluorinated gases forcing
17.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Equivalence Concentration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Details of any equivalence concentrations used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiative Forcings --> Aerosols --> SO4
SO4 aerosol forcing
18.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiative Forcings --> Aerosols --> Black Carbon
Black carbon aerosol forcing
19.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Organic carbon aerosol forcing
20.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiative Forcings --> Aerosols --> Nitrate
Nitrate forcing
21.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Cloud albedo effect forcing (RFaci)
22.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 22.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Cloud lifetime effect forcing (ERFaci)
23.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.2. Aerosol Effect On Ice Clouds
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative effects of aerosols on ice clouds are represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.3. RFaci From Sulfate Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Radiative forcing from aerosol cloud interactions from sulfate aerosol only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiative Forcings --> Aerosols --> Dust
Dust forcing
24.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Tropospheric volcanic forcing
25.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Stratospheric volcanic forcing
26.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.2. Historical Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in historical simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26.3. Future Explosive Volcanic Aerosol Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How explosive volcanic aerosol is implemented in future simulations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiative Forcings --> Aerosols --> Sea Salt
Sea salt forcing
27.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiative Forcings --> Other --> Land Use
Land use forcing
28.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28.2. Crop Change Only
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Land use change represented via crop change only?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 29. Radiative Forcings --> Other --> Solar
Solar forcing
29.1. Provision
Is Required: TRUE Type: ENUM Cardinality: 1.N
How solar forcing is provided
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Additional Information
Is Required: FALSE Type: STRING Cardinality: 0.1
Additional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).
End of explanation
<END_TASK> |
87 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
W2 Lab
Step1: You can check the version of the library. Because pandas is a fast-evolving library, you want to make sure that you have the up-to-date version of the library.
Step2: You also need matplotlib, which is used by pandas to plot figures. The following is the most common convention to import the matplotlib library.
Step3: Let's check its version too.
Step4: Loading a CSV data file
Using pandas, you can read tabular data files in many formats and through many protocols. Pandas supports not only flat files such as .csv, but also various other formats including clipboard, Excel, JSON, HTML, Feather, Parquet, SQL, Google BigQuery, and so on. Moreover, you can pass a local file path or a URL. If it's on Amazon S3, just pass a url like s3
Step5: df stands for "Data Frame", which is a fundamental data object in Pandas. You can take a look at the dataset by looking at the first few lines.
Step6: Q1
Step7: You can also sample several rows randomly. If the data is sorted in some ways, sampling may give you a rather unbiased view of the dataset.
Step8: You can also figure out the number of rows in the dataset by running
Step9: Note that df.size does not give you the number of rows. It tells you the number of elements.
Step10: You can also look into the shape of the dataset as well as what are the columns in the dataset.
Step11: You can also check out basic descriptive statistics of the whole dataset by using describe() method.
Step12: You can slice the data like a list
Step13: or filter rows using some conditions.
Step14: Now let's load another CSV file that documents the cholera deaths. The URL is https
Step15: Some visualizations?
Let's visualize them! Pandas actually provides a nice visualization interface that uses matplotlib under the hood. You can do many basic plots without learning matplotlib. So let's try.
Step16: This is not what we want! When asked to plot the data, it tries to figure out what we want based on the type of the data. However, that doesn't mean that it will successfully do so!
Oh by the way, depending on your environment, you may not see any plot. If you don't see anything run the following command.
Step17: The commands that start with % is called the magic commands, which are available in IPython and Jupyter. The purpose of this command is telling the IPython / Jupyter to show the plot right here instead of trying to use other external viewers.
Anyway, this doesn't seem like the plot we want. Instead of putting each row as a point in a 2D plane by using the X and Y as the coordinate, it just created a line chart. Let's fix it. Please take a look at the plot method documentation. How should we change the command? Which kind of plot do we want to draw?
Yes, we want to draw a scatter plot using x and y as the Cartesian coordinates.
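For example, a minimal sketch (assuming the coordinate columns are named X and Y; adjust to the actual column names in the data):
python
pump_df.plot(x='X', y='Y', kind='scatter')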
Step18: I think I want to reduce the size of the dots and change the color to black. But it is difficult to find how to do that! It is sometimes quite annoying to figure out how to change how the visualization looks, especially when we use matplotlib. Unlike some other advanced tools, matplotlib does not provide a very coherent way to adjust your visualizations. That's one of the reasons why there are lots of visualization libraries that wrap matplotlib. Anyway, this is how you do it.
Step19: Can we visualize both deaths and pumps?
Step20: Oh well, this is not what we want! We want to overlay them to see them together, right? How can we do that? Before going into that, we probably want to understand some key components of matplotlib figures.
Figure and Axes
Why do we have two separate plots? The reason is that, by default, the plot method creates a new *figure* instead of putting both plots inside a single figure. To avoid this, we need to create an Axes object and tell plot to use that axes. What is an axes? See this illustration.
<img src="https
Step21: This object contains all the information and objects in the plot we see. Whatever we want to do with this axes (e.g., changing x or y scale, overlaying other data, changing the color or size of symbols, etc.) can be done by accessing this object.
Then you can pass this axes object to another plot to put both plots in the same axes. Note ax=ax in the second plot command. It tells the plot command where to draw the points.
Step22: Although simply invoking the plot() command is quick and easy when doing an exploratory data analysis, it is usually better to be formal about figure and axes objects.
Here is the recommended way to create a plot. Call the subplots() method (see https
Step23: Voronoi diagram
Let's try the Voronoi diagram. You can use the scipy.spatial.Voronoi and scipy.spatial.voronoi_plot_2d from scipy, the scientific python library.
Step24: Take a look at the documentation of Voronoi and voronoi_plot_2d and
Q3
Step25: Saving the figure
You can also save your figure into PDF, PNG, etc. If you run the following, the plot will not only be displayed here, but also be saved as foo.png.
Step26: Q4
Step27: Ok, that was a brief introduction to pandas and some simple visualizations. Now let's talk about web a little bit.
HTML & CSS Basics
HTML review
Webpages are written in a standard markup language called HTML (HyperText Markup Language). The basic syntax of HTML consists of elements enclosed within < and > symbols. Markup tags often come in a pair, the opening tag without / and the closing tag with /. For instance, when we assign the title of the webpage, we write <title>This is the title of the page</title>. You can find tutorials and references from many websites, including W3Schools. Here is an example of a simple HTML document (from w3schools homepage) | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
Explanation: W2 Lab: Data processing basics and John Snow's map
Follow the contents of this notebook and answer all questions (e.g. Q1: ...)
If you are not familiar with Python yet...
Maybe you can quickly review these basics first.
numbers and strings;
if and for statement
data structure (lists, sets, dictionaries)
functions and modules
Google's Python Class is also a nice resource.
Jupyter + Pandas = Awesomeness
Jupyter notebook (lab) (and IPython) and Pandas may be two most important libraries responsible for the Python's rise in data science. Jupyter lets you interactively explore datasets and code; Pandas lets you handle tabular datasets with superb speed and convenience. And they work so well together! In many cases, Jupyter and Pandas are all you need to load, clean, transform, visualize, and understand a dataset.
If you are not familiar with Pandas, you may want to follow their official tutorial called 10 Minutes to pandas now or in the near future.
Importing pandas
The convention for importing pandas is the following
End of explanation
pd.__version__
Explanation: You can check the version of the library. Because pandas is fast-evolving library, you want to make sure that you have the up-to-date version of the library.
End of explanation
import matplotlib.pyplot as plt
Explanation: You also need matplotlib, which is used by pandas to plot figures. The following is the most common convention to import matplotlib library.
End of explanation
import matplotlib
matplotlib.__version__
Explanation: Let's check its version too.
End of explanation
pump_df = pd.read_csv('https://raw.githubusercontent.com/yy/dviz-course/master/data/pumps.csv')
Explanation: Loading a CSV data file
Using pandas, you can read tabular data files in many formats and through many protocols. Pandas supports not only flat files such as .csv, but also various other formats including clipboard, Excel, JSON, HTML, Feather, Parquet, SQL, Google BigQuery, and so on. Moreover, you can pass a local file path or a URL. If it's on Amazon S3, just pass a url like s3://path/to/file.csv. If it's on a webpage, then just use https://some/url.csv.
Let's load a dataset about the location of pumps in the John Snow's map. You can download the file to your computer and try to load it using the local path too.
End of explanation
pump_df.head()
Explanation: df stands for "Data Frame", which is a fundamental data object in Pandas. You can take a look at the dataset by looking at the first few lines.
End of explanation
# TODO: write your code here
Explanation: Q1: can you print only the first three lines? Refer: http://pandas.pydata.org/pandas-docs/stable/index.html
End of explanation
# Your code here
Explanation: You can also sample several rows randomly. If the data is sorted in some ways, sampling may give you a rather unbiased view of the dataset.
End of explanation
len(pump_df)
Explanation: You can also figure out the number of rows in the dataset by running
End of explanation
pump_df.size
Explanation: Note that df.size does not give you the number of rows. It tells you the number of elements.
End of explanation
pump_df.shape # 13 rows and 2 columns
pump_df.columns
Explanation: You can also look into the shape of the dataset as well as what are the columns in the dataset.
End of explanation
pump_df.describe()
Explanation: You can also check out basic descriptive statistics of the whole dataset by using describe() method.
End of explanation
pump_df[:2]
pump_df[-2:]
pump_df[1:5]
Explanation: You can slice the data like a list
End of explanation
pump_df[pump_df.X > 13]
Explanation: or filter rows using some conditions.
End of explanation
# TODO: Remove below dummy dataframe and write your code here. You probably want to create multiple cells.
death_df = pd.DataFrame({"X": [2., 3.], "Y": [1., 2.]})
Explanation: Now let's load another CSV file that documents the cholera deaths. The URL is https://raw.githubusercontent.com/yy/dviz-course/master/data/deaths.csv
Q2: load the death dataset and inspect it
load this dataset as death_df.
show the first 2 rows.
show the total number of rows.
End of explanation
death_df.plot()
Explanation: Some visualizations?
Let's visualize them! Pandas actually provides a nice visualization interface that uses matplotlib under the hood. You can do many basic plots without learning matplotlib. So let's try.
End of explanation
%matplotlib inline
Explanation: This is not what we want! When asked to plot the data, it tries to figure out what we want based on the type of the data. However, that doesn't mean that it will successfully do so!
Oh by the way, depending on your environment, you may not see any plot. If you don't see anything run the following command.
End of explanation
death_df.plot(x='X', y='Y', kind='scatter', label='Deaths')
Explanation: Commands that start with % are called magic commands, which are available in IPython and Jupyter. The purpose of this command is to tell IPython / Jupyter to show the plot right here instead of trying to use other external viewers.
Anyway, this doesn't seem like the plot we want. Instead of putting each row as a point in a 2D plane by using the X and Y as the coordinate, it just created a line chart. Let's fix it. Please take a look at the plot method documentation. How should we change the command? Which kind of plot do we want to draw?
Yes, we want to draw a scatter plot using x and y as the Cartesian coordinates.
End of explanation
death_df.plot(x='X', y='Y', kind='scatter', label='Deaths', s=2, c='black')
Explanation: I think I want to reduce the size of the dots and change the color to black. But it is difficult to find how to do that! It is sometimes quite annoying to figure out how to change how the visualization looks, especially when we use matplotlib. Unlike some other advanced tools, matplotlib does not provide a very coherent way to adjust your visualizations. That's one of the reasons why there are lots of visualization libraries that wrap matplotlib. Anyway, this is how you do it.
End of explanation
death_df.plot(x='X', y='Y', s=2, c='black', kind='scatter', label='Deaths')
pump_df.plot(x='X', y='Y', kind='scatter', c='red', s=8, label='Pumps')
Explanation: Can we visualize both deaths and pumps?
End of explanation
ax = death_df.plot(x='X', y='Y', s=2, c='black', kind='scatter', label='Deaths')
ax
Explanation: Oh well, this is not what we want! We want to overlay them to see them together, right? How can we do that? Before going into that, we probably want to understand some key components of matplotlib figures.
Figure and Axes
Why do we have two separate plots? The reason is that, by default, the plot method creates a new *figure* instead of putting both plots inside a single figure. To avoid this, we need to create an Axes object and tell plot to draw on that axes. What is an axes? See this illustration.
<img src="https://matplotlib.org/1.5.1/_images/fig_map.png" alt="figure, axes, and axis" style="width: 500px;"/>
A figure can contain multiple axes (link). The figure below contains two axes:
and an axes can contain multiple plots (link).
Conveniently, when you call plot method, it creates an axes and returns it to you
End of explanation
ax = death_df.plot(x='X', y='Y', s=2, c='black', alpha=0.5, kind='scatter', label='Deaths')
pump_df.plot(x='X', y='Y', kind='scatter', c='red', s=8, label='Pumps', ax=ax)
Explanation: This object contains all the information and objects in the plot we see. Whatever we want to do with this axes (e.g., changing x or y scale, overlaying other data, changing the color or size of symbols, etc.) can be done by accessing this object.
Then you can pass this axes object to another plot to put both plots in the same axes. Note ax=ax in the second plot command. It tells the plot command where to draw the points.
End of explanation
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
# your code here
Explanation: Although simply invoking the plot() command is quick and easy when doing an exploratory data analysis, it is usually better to be formal about figure and axes objects.
Here is the recommended way to create a plot. Call the subplots() method (see https://matplotlib.org/3.2.1/api/_as_gen/matplotlib.pyplot.subplots.html) to get the figure and axes objects explicitly.
As you can see below, subplots() creates an empty figure and returns the figure and axes object to you. Then you can fill this empty canvas with your plots. Whatever manipulation you want to make about your figure (e.g., changing the size of the figure) or axes (e.g., drawing a new plot on it) can be done with fig and ax objects. So whenever possible, use this method!
Now, can you use this method to produce the same plot just above?
End of explanation
from scipy.spatial import Voronoi, voronoi_plot_2d
Explanation: Voronoi diagram
Let's try the Voronoi diagram. You can use the scipy.spatial.Voronoi and scipy.spatial.voronoi_plot_2d from scipy, the scientific python library.
End of explanation
# you'll need this
points = pump_df.values
points
# TODO: your code here
Explanation: Take a look at the documentation of Voronoi and voronoi_plot_2d and
Q3: produce a Voronoi diagram that shows the deaths, pumps, and voronoi cells
End of explanation
import matplotlib.pyplot as plt
plt.plot([1,2,3], [4,2,3])
plt.savefig('foo.png')
Explanation: Saving the figure
You can also save your figure into PDF, PNG, etc. If you run the following, the plot will not only be displayed here, but also be saved as foo.png.
End of explanation
# TODO: your code here
Explanation: Q4: Save your Voronoi diagram. Make sure that your plot contains the scatterplot of deaths & pumps as well as the Voronoi cells
End of explanation
%%html
<!DOCTYPE html>
<html>
<head>
<style>
.para {
font: 20px times-new-roman;
color: green;
padding: 10px;
border: 1px solid black;
}
</style>
</head>
<body>
<p class='para'>Hello World!</p>
<!--
You can also add an image in your html code
<img src='location'/>
-->
</body>
</html>
Explanation: Ok, that was a brief introduction to pandas and some simple visualizations. Now let's talk about web a little bit.
HTML & CSS Basics
HTML review
Webpages are written in a standard markup language called HTML (HyperText Markup Language). The basic syntax of HTML consists of elements enclosed within < and > symbols. Markup tags often come in a pair, the opening tag without / and the closing tag with /. For instance, when we assign the title of the webpage, we write <title>This is the title of the page</title>. You can find tutorials and references from many websites, including W3Schools. Here is an example of a simple HTML document (from w3schools homepage):
```html
<!DOCTYPE html>
<html>
<title>HTML Tutorial</title>
<body>
<h1>This is a heading</h1>
<p>This is a paragraph.</p>
</body>
</html>
```
Here is a list of important tags and their descriptions.
<html> - Surrounds the entire document.
<head> - Contains information about the document. E.g. the title, metadata, scripts to load, stylesheets, etc.
<title> - Assigns title to the page. This is what you see in the tab and what you have when the page is bookmarked.
<body> - The main part of the document.
<h1>, <h2>, <h3>, ... - Headings (Smaller the number, larger the size).
<p> - Paragraph. e.g., <p>Here is a paragraph</p>
<br> - Line break.
<em> - emphasize text.
<strong> - Bold font.
<a> - Defines a hyperlink and allows you to link out to the other webpages. See examples
<img> - Place an image. See examples
<ul>, <ol>, <li> - Unordered lists with bullets, ordered lists with numbers and each item in list respectively. See examples
<table> - Make a table, specifying contents of each cell. See examples
<!--> - Comments – will not be displayed.
<span> - This will mark a certain part of text but will not necessarily change how they look. CSS or Javascript can access them and change how they look or behave.
<div> - Similar to <span>, but used for a block that contains many elements.
CSS review
While HTML specifies the content and structure, it does not say how they should look.
CSS (Cascading Style Sheets) is the primary language that is used for the look and formatting of a web document. In the context of creating visualization, CSS becomes critical when you create web-based (Javascript-based) visualizations.
A CSS stylesheet consists of one or more selectors, properties and values. For example:
```css
body {
    background-color: white;
    color: steelblue;
}
```
Selectors are the HTML elements to which the specific styles (combination of properties and values) will be applied. In the above example, all text within the body tags will be in steelblue.
There are three ways to include CSS code in HTML. This is called "referencing".
Embed CSS in HTML - You can place the CSS code within style tags inside the head tags. This way you can keep everything within a single HTML file but does make the code lengthy.
```html
<head>
<style type="text/css">
.description {
font: 16px times-new-roman;
}
.viz {
font: 10px sans-serif;
}
</style>
</head>
```
Referencing an external stylesheet from HTML is a much cleaner way, but it results in the creation of another file. To do this, you can copy the CSS code into a text file and save it as a `.css` file in the same folder as the HTML file. In the document head in the HTML code, you can then do the following:
```html
<head>
    <link rel="stylesheet" href="main.css">
</head>
```
Attach inline styles - You can also directly attach the styles in-line along with the main HTML code in the body. This makes it easy to customize specific elements but makes the code very messy, because the design and content get mixed up.
```html
<p style="color: green; font-size:36px; font-weight:bold;">Inline styles can be handy sometimes.</p>
```
%%html magic command in jupyter
You can use built-in magic command in jupyter notebook to render the cell as a block of HTML. You just need to add %%html at the beginning of the code cell, this command explicitly tells jupyter that the code in this cell will be html.
You can find more about magic commands in jupyter here: https://ipython.readthedocs.io/en/stable/interactive/magics.html#cellmagic-html
Below is an example of how to render html code in jupyter code cell:
End of explanation
<END_TASK> |
88 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Looking for Correlations with Data Binned by time
Since the data was collected at slightly different times by different sensors, we decided to make uniform 'bins', so that
* we have uniform timestamps to analyze the correlations between data meaningfully, and
* noise in the sensors cancels out over time.
In the first example, we are using data binned/averaged over 40 minutes. Since the sensors make measurements approximately every 5 minutes, this means that each data point is an average of about 8 measurements.
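The CSV files loaded below are already binned, so the binning step itself never appears in this notebook. As a rough sketch of how such binning could be done with pandas (assuming a raw DataFrame with a 'unix_time' column and one value column; the column names here are hypothetical, not taken from the data files):

```python
import pandas as pd

def bin_measurements(raw_df, value_col, interval='40min'):
    """Average raw sensor readings into fixed-width time bins."""
    df = raw_df.copy()
    # convert the unix timestamp to a proper datetime index
    df['time'] = pd.to_datetime(df['unix_time'], unit='s')
    df = df.set_index('time')
    # resample into 40-minute bins and average the ~8 readings in each bin
    return df[value_col].resample(interval).mean()

# usage (hypothetical): binned = bin_measurements(raw_radiation_df, 'radiation')
```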
Step1: Let's remove all the NaN values using the Pandas DataFrame.dropna function.
Step2: Now let's use the Pandas DataFrame.corr function to make a correlation matrix.
Step3: Finally, let's make a beautiful plot using Seaborn and Matplotlib.
Step4: Patterns Over Longer Periods of Time
By averaging/binning over longer periods of time, like weeks or months, we can eliminate smaller day-to-day variations in temperature, pressure, etc., thereby finding longer-term correlations between radiation and weather/environmental phenomena.
Step5: There seems to be some correlation between humidity and radiation, especially in the long term! Let's try averaging over 2 weeks this time and see what the correlation looks like.
Step6: Currently, trying to broaden the timespan over which we are averaging data might lead to a loss of statistical significance, as the number of samples has an inverse relationship with the length of time that we sample over. Thus we are not going to try to find correlations over longer time intervals right now.
Now let's see the statistical significance of our latest correlation results between radiation and both humidity and temperature.
Step7: Interesting! At the $\alpha=0.10$ significance level, there is a moderately strong positive correlation between radiation and humidity! At that significance level, the correlation between radiation and temperature is very close to being statistically significant, and at the $\alpha=0.15$ significance level, it is statistically significant. Let's plot them against each other in order to visually appreciate the correlations!
Step8: Let's make the same plots as above, but with data averaged over 5 days instead of 14, so that we can see finer changes (and some more noise) in the measured quantities. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np # math
import pandas as pd # manipulating data
import matplotlib.pyplot as plt # graphing
import os # useful for handling filenames etc.
from scipy.stats import pearsonr # calculates the Pearson correlation coefficient and p-value
from datetime import datetime, date # working with date and time
import seaborn as sns # makes matplotlib beautiful
sns.set_style('darkgrid')
import matplotlib as mpl # control formatting
mpl.rcParams['axes.titlesize'] = 16
mpl.rcParams['axes.titleweight'] = 'semibold'
# interactive graphs
from bokeh.io import output_notebook, show, push_notebook
from bokeh.plotting import figure
from bokeh.layouts import row, column
from bokeh.models import DatetimeTickFormatter
output_notebook()
DATA_DIR = 'binned_data'
# load all the data
# `usecols=[1]` instructs pandas to only read the second column (index 1).
# This is useful because the first column is the timestamp, which is identical
# for all the .csv files, and also will not be useful for finding correlations
# (no one cares about the correlation between radiation and the number of seconds elapsed).
co2_data = pd.read_csv(os.path.join(DATA_DIR, 'data_co2_ppm_2400.csv'),
header=0, names=['unix_time', 'co2'], usecols=[1])
radiation_data = pd.read_csv(os.path.join(DATA_DIR, 'data_cpm_2400.csv'),
header=0, names=['unix_time', 'radiation'], usecols=[1])
humidity_data = pd.read_csv(os.path.join(DATA_DIR, 'data_humidity_2400.csv'),
header=0, names=['unix_time', 'humidity'], usecols=[1])
temperature_data = pd.read_csv(os.path.join(DATA_DIR, 'data_temperature_2400.csv'),
header=0, names=['unix_time', 'temperature'], usecols=[1])
pressure_data = pd.read_csv(os.path.join(DATA_DIR, 'data_pressure_2400.csv'),
header=0, names=['unix_time', 'pressure'], usecols=[1])
# make one awesome DataFrame object containing all the data
all_data = pd.concat([co2_data, radiation_data, humidity_data, temperature_data, pressure_data],
axis=1)
all_data.head()
Explanation: Looking for Correlations with Data Binned by time
Since the data was collected at slightly different times by different sensors, we decided to make uniform 'bins', so that
* we have uniform timestamps to analyze the correlations between data meaningfully, and
* noise in the sensors cancels out over time.
In the first example, we are using data binned/averaged over 40 minutes. Since the sensors make measurements approximately every 5 minutes, this means that each data point is an average of about 8 measurements.
End of explanation
all_data.dropna(axis=0, how='any', inplace=True)
all_data.head()
Explanation: Let's remove all the NaN values using the Pandas DataFrame.dropna function.
End of explanation
correlation_matrix = all_data.corr(method='pearson')
correlation_matrix
Explanation: Now let's use the Pandas DataFrame.corr function to make a correlation matrix.
End of explanation
with sns.axes_style('dark'):
sns.heatmap(correlation_matrix, vmin=-1, vmax=1, annot=True, cbar=True, linecolor='#000000')
plt.xticks(rotation=45)
plt.yticks(rotation=45)
plt.title('Correlations Averaged Over 40 Minutes')
plt.show()
Explanation: Finally, let's make a beautiful plot using Seaborn and Matplotlib.
End of explanation
co2_data = pd.read_csv(os.path.join(DATA_DIR, 'data_co2_ppm_604800.csv'),
header=0, names=['unix_time', 'co2'], usecols=[1])
radiation_data = pd.read_csv(os.path.join(DATA_DIR, 'data_cpm_604800.csv'),
header=0, names=['unix_time', 'radiation'], usecols=[1])
humidity_data = pd.read_csv(os.path.join(DATA_DIR, 'data_humidity_604800.csv'),
header=0, names=['unix_time', 'humidity'], usecols=[1])
temperature_data = pd.read_csv(os.path.join(DATA_DIR, 'data_temperature_604800.csv'),
header=0, names=['unix_time', 'temperature'], usecols=[1])
pressure_data = pd.read_csv(os.path.join(DATA_DIR, 'data_pressure_604800.csv'),
header=0, names=['unix_time', 'pressure'], usecols=[1])
# make one awesome DataFrame object containing all the data
all_data = pd.concat([co2_data, radiation_data, humidity_data, temperature_data, pressure_data],
axis=1)
all_data.dropna(axis=0, how='any', inplace=True)
correlation_matrix = all_data.corr(method='pearson')
with sns.axes_style('dark'):
sns.heatmap(correlation_matrix, vmin=-1, vmax=1, annot=True, cbar=True, linecolor='#000000')
plt.xticks(rotation=45)
plt.yticks(rotation=45)
plt.title('Correlations Averaged Over 1 Week')
plt.show()
Explanation: Patterns Over Longer Periods of Time
By averaging/binning over longer periods of time, like weeks or months, we can eliminate smaller day-to-day variations in temperature, pressure, etc., thereby finding longer-term correlations between radiation and weather/environmental phenomena.
End of explanation
co2_data = pd.read_csv(os.path.join(DATA_DIR, 'data_co2_ppm_1209600.csv'),
header=0, names=['unix_time', 'co2'], usecols=[1])
radiation_data = pd.read_csv(os.path.join(DATA_DIR, 'data_cpm_1209600.csv'),
header=0, names=['unix_time', 'radiation'], usecols=[1])
humidity_data = pd.read_csv(os.path.join(DATA_DIR, 'data_humidity_1209600.csv'),
header=0, names=['unix_time', 'humidity'], usecols=[1])
temperature_data = pd.read_csv(os.path.join(DATA_DIR, 'data_temperature_1209600.csv'),
header=0, names=['unix_time', 'temperature'], usecols=[1])
pressure_data = pd.read_csv(os.path.join(DATA_DIR, 'data_pressure_1209600.csv'),
header=0, names=['unix_time', 'pressure'], usecols=[1])
# make one awesome DataFrame object containing all the data
all_data = pd.concat([co2_data, radiation_data, humidity_data, temperature_data, pressure_data],
axis=1)
all_data.dropna(axis=0, how='any', inplace=True)
correlation_matrix = all_data.corr(method='pearson')
with sns.axes_style('dark'):
sns.heatmap(correlation_matrix, vmin=-1, vmax=1, annot=True, cbar=True, linecolor='#000000')
plt.xticks(rotation=45)
plt.yticks(rotation=45)
plt.title('Correlations Averaged Over 2 Weeks')
plt.show()
Explanation: There seems to be some correlation between humidity and radiation, especially in the long term! Let's try averaging over 2 weeks this time and see what the correlation looks like.
End of explanation
# function to print correlations and their statistical significance
def corr_and_sig(col_1, col_2):
r, p = pearsonr(all_data[col_1], all_data[col_2])
print(f'Correlation between {col_1} and {col_2} is {r:.2f}, and its two-tailed p-value is {p:.2f}.\n')
for s in ['co2', 'humidity', 'temperature', 'pressure']:
corr_and_sig('radiation', s)
Explanation: Currently, trying to broaden the timespan over which we are averaging data might lead to a loss of statistical significance, as the number of samples has an inverse relationship with the length of time that we sample over. Thus we are not going to try to find correlations over longer time intervals right now.
Now let's see the statistical significance of our latest correlation results between radiation and both humidity and temperature.
End of explanation
radiation_data = pd.read_csv(os.path.join(DATA_DIR, 'data_cpm_1209600.csv'),
header=0, names=['unix_time', 'radiation'])
humidity_data = pd.read_csv(os.path.join(DATA_DIR, 'data_humidity_1209600.csv'),
header=0, names=['unix_time', 'humidity'], usecols=['humidity'])
temperature_data = pd.read_csv(os.path.join(DATA_DIR, 'data_temperature_1209600.csv'),
header=0, names=['unix_time', 'temperature'], usecols=['temperature'])
rad_humid = pd.concat([radiation_data, humidity_data, temperature_data], axis=1)
rad_humid.dropna(axis=0, how='any', inplace=True)
sns.regplot(x='radiation', y='humidity', data=rad_humid)
plt.ylim(ymin=0)
plt.xlabel('Radiation (counts per minute)', fontdict={'fontsize': 12})
plt.ylabel('Humidity (percentage)', rotation=90, fontdict={'fontsize': 12})
plt.title('Radiation vs. Humidity', loc='left')
plt.plot()
sns.regplot(x='radiation', y='temperature', data=rad_humid)
plt.xlabel('Radiation (counts per minute)', fontdict={'fontsize': 12})
plt.ylabel('Temperature ($^oC$)', rotation=90, fontdict={'fontsize': 12})
plt.title('Radiation vs. Temperature', loc='left')
plt.show()
times = []
for idx, time, _, _, _ in rad_humid.itertuples():
times.append(datetime.fromtimestamp(int(time)))
dates = pd.Series(times)
pass
# humidity vs. time
time_humid = figure(plot_width=480, plot_height=320,
title='Humidity vs. Time', x_axis_type='datetime')
time_humid.line(dates, rad_humid['humidity'], line_width=2)
time_humid.xaxis.formatter=DatetimeTickFormatter(months=["%B %Y"])
time_humid.xaxis.major_label_orientation = 0.854 # 90 degrees in radians
time_humid.yaxis.axis_label = 'Humidity (percentage)'
# temperature vs. time
time_temp = figure(plot_width=480, plot_height=320,
title='Temperature vs. Time', x_axis_type='datetime')
time_temp.line(dates, rad_humid['temperature'], line_width=2)
time_temp.xaxis.formatter=DatetimeTickFormatter(months=["%B %Y"])
time_temp.xaxis.major_label_orientation = 0.854
time_temp.yaxis.axis_label = 'Temperature (degrees C)' # bokeh does not support LaTeX yet
# radiation vs. time
time_rad = figure(plot_width=480, plot_height=320,
title='Radiation vs. Time', x_axis_type='datetime')
time_rad.line(dates, rad_humid['radiation'], line_width=2, line_color='#EE00A0')
time_rad.xaxis.formatter=DatetimeTickFormatter(months=["%B %Y"])
time_rad.xaxis.major_label_orientation = 0.854
time_rad.yaxis.axis_label = 'Radiation (counts per minute)'
show(column(row(time_humid, time_temp), time_rad))
Explanation: Interesting! At the $\alpha=0.10$ significance level, there is a moderately strong positive correlation between radiation and humidity! At that significance level, the correlation between radiation and temperature is very close to being statistically significant, and at the $\alpha=0.15$ significance level, it is statistically significant. Let's plot them against each other in order to visually appreciate the correlations!
End of explanation
radiation_data = pd.read_csv(os.path.join(DATA_DIR, 'data_cpm_432000.csv'),
header=0, names=['unix_time', 'radiation'])
humidity_data = pd.read_csv(os.path.join(DATA_DIR, 'data_humidity_432000.csv'),
header=0, names=['unix_time', 'humidity'], usecols=['humidity'])
temperature_data = pd.read_csv(os.path.join(DATA_DIR, 'data_temperature_432000.csv'),
header=0, names=['unix_time', 'temperature'], usecols=['temperature'])
rad_humid = pd.concat([radiation_data, humidity_data, temperature_data], axis=1)
rad_humid.dropna(axis=0, how='any', inplace=True)
times = []
for idx, time, _, _, _ in rad_humid.itertuples():
times.append(datetime.fromtimestamp(int(time)))
dates = pd.Series(times)
# humidity vs. time
time_humid = figure(plot_width=480, plot_height=320,
title='Humidity vs. Time', x_axis_type='datetime')
time_humid.line(dates, rad_humid['humidity'], line_width=2)
time_humid.xaxis.formatter=DatetimeTickFormatter(months=["%B %Y"])
time_humid.xaxis.major_label_orientation = 0.854 # 90 degrees in radians
time_humid.yaxis.axis_label = 'Humidity (percentage)'
# temperature vs. time
time_temp = figure(plot_width=480, plot_height=320,
title='Temperature vs. Time', x_axis_type='datetime')
time_temp.line(dates, rad_humid['temperature'], line_width=2)
time_temp.xaxis.formatter=DatetimeTickFormatter(months=["%B %Y"])
time_temp.xaxis.major_label_orientation = 0.854
time_temp.yaxis.axis_label = 'Temperature (degrees C)' # bokeh does not support LaTeX yet
# radiation vs. time
time_rad = figure(plot_width=480, plot_height=320,
title='Radiation vs. Time', x_axis_type='datetime')
time_rad.line(dates, rad_humid['radiation'], line_width=2, line_color='#EE00A0')
time_rad.xaxis.formatter=DatetimeTickFormatter(months=["%B %Y"])
time_rad.xaxis.major_label_orientation = 0.854
time_rad.yaxis.axis_label = 'Radiation (counts per minute)'
show(column(row(time_humid, time_temp), time_rad))
SECONDS_IN_A_HALF_DAY = 43200 # 60 * 60 * 12
df_list = []
correlation_list = []
def get_corrs(interval):
co2_data = pd.read_csv(os.path.join(DATA_DIR, f'data_co2_ppm_{interval}.csv'),
header=0, names=['unix_time', 'co2'], usecols=[1])
radiation_data = pd.read_csv(os.path.join(DATA_DIR, f'data_cpm_{interval}.csv'),
header=0, names=['unix_time', 'radiation'], usecols=[1])
pgradiation_data = pd.read_csv(os.path.join(DATA_DIR, f'data_cpmpg_{interval}.csv'),
header=0, names=['unix_time', 'pgradiation'], usecols=[1])
humidity_data = pd.read_csv(os.path.join(DATA_DIR, f'data_humidity_{interval}.csv'),
header=0, names=['unix_time', 'humidity'], usecols=[1])
temperature_data = pd.read_csv(os.path.join(DATA_DIR, f'data_temperature_{interval}.csv'),
header=0, names=['unix_time', 'temperature'], usecols=[1])
pressure_data = pd.read_csv(os.path.join(DATA_DIR, f'data_pressure_{interval}.csv'),
header=0, names=['unix_time', 'pressure'], usecols=[1])
# make one awesome DataFrame object containing all the data
all_data = pd.concat([co2_data, radiation_data, humidity_data, temperature_data,
pressure_data, pgradiation_data],
axis=1)
all_data.dropna(axis=0, how='any', inplace=True)
correlation_matrix = all_data.corr(method='pearson')
return correlation_matrix.iloc[1]
for i in range(1, 29):
if i <= 2:
correlation_list.append(get_corrs(int(i * SECONDS_IN_A_HALF_DAY)).rename(f'{i/2:.1f} day'))
else:
correlation_list.append(get_corrs(int(i * SECONDS_IN_A_HALF_DAY)).rename(f'{i/2:.1f} days'))
correlation_df = pd.DataFrame(correlation_list)
# print(correlation_df)
with sns.axes_style('dark'):
plt.figure(figsize=(10, 9.5))
sns.heatmap(correlation_df, vmin=-1, vmax=1, annot=True, cbar=True, linecolor='#000000')
# plt.xticks(rotation=45)
# plt.yticks(rotation=45)
plt.title('Correlations with Radiation Using Data Averaged Over Different Time Intervals')
plt.show()
plt.figure(figsize=(12, 7))
plt.plot([i/2 for i in range(1, 29)], correlation_df['co2'],
[i/2 for i in range(1, 29)], correlation_df['radiation'],
[i/2 for i in range(1, 29)], correlation_df['humidity'],
[i/2 for i in range(1, 29)], correlation_df['temperature'],
[i/2 for i in range(1, 29)], correlation_df['pressure'],
[i/2 for i in range(1, 29)], correlation_df['pgradiation'])
plt.legend(['$CO_2$ concentration', 'Radiation', 'Humidity', 'Temperature', 'Pressure',
'Pocket Geiger Radiation'], loc=(1.05, 0.4))
plt.xlabel('Time Interval of Averaging (days)')
plt.ylabel("Pearson's Correlation Coefficient of Radiation vs. Quantity", rotation=90)
plt.title('Correlations with Radiation Using Data Averaged Over Different Time Intervals')
plt.show()
Explanation: Let's make the same plots as above, but with data averaged over 5 days instead of 14, so that we can see finer changes (and some more noise) in the measured quantities.
End of explanation
<END_TASK> |
89 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
You can use the single line call "analyze," which does all the available analysis simultaneously
Step1: ... or call all the methods individually | <ASSISTANT_TASK:>
Python Code:
# NOTE: This will take several minutes depending on the performance of your machine
audio_features = audioAnalyzer.analyze(audio_filename)
# plot the features
plt.rcParams['figure.figsize'] = [20, 8]
audioAnalyzer.plot(audio_features)
plt.show()
Explanation: You can use the single line call "analyze," which does all the available analysis simultaneously
End of explanation
# audio metadata extraction
metadata = audioAnalyzer.crawl_musicbrainz_metadata(audio_filename)
# predominant melody extraction
pitch = audioAnalyzer.extract_pitch(audio_filename)
# pitch post filtering
pitch_filtered = audioAnalyzer.filter_pitch(pitch)
# histogram computation
pitch_distribution = audioAnalyzer.compute_pitch_distribution(pitch_filtered)
pitch_class_distribution = copy.deepcopy(pitch_distribution)
pitch_class_distribution.to_pcd()
# tonic identification
tonic = audioAnalyzer.identify_tonic(pitch_filtered)
# get the makam from metadata if possible else apply makam recognition
makams = audioAnalyzer.get_makams(metadata, pitch_filtered, tonic)
makam = list(makams)[0] # for now get the first makam
# transposition (ahenk) identification
transposition = audioAnalyzer.identify_transposition(tonic, makam)
# stable note extraction (tuning analysis)
note_models = audioAnalyzer.compute_note_models(pitch_distribution, tonic, makam)
# get the melodic progression model
melodic_progression = audioAnalyzer.compute_melodic_progression(pitch_filtered)
Explanation: ... or call all the methods individually
End of explanation
<END_TASK> |
90 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
What is an Adsorption Isotherm?
In a typical isotherm analysis, a small sample of material, which one wants to determine the surface properties of, is placed in a test tube and put under vacuum. A known species of gas such as Nitrogen is then incrementally dosed to the sample tube with each increment followed by a measurement of the equilibrated pressure $P$. Much of the dosed analysis gas just fills the free space in the sample tube resulting in an increase in gas pressure, but some fraction of the dosed gas is adsorbed to the surface of the sample material, also finding its way into any pores on the surface of the sample. Using the measurement of the pressures before and after the dose, and using knowledge of the exact amount of gas that was dosed, one may then infer the quantity of gas $Q_{ads}$ that is adsorbed onto the sample. The series of controlled doses all happens at a nearly constant temperature (typically 77K for Nitrogen analysis), and the convention is to refer to the collected $P$ vs. $Q_{ads}$ data as an "Isotherm." The above description is somewhat simplified, and the details of the experimental method, the apparatus, and the determination of the quantity adsorbed is explained in detail in many references such as [1].
The quantity of gas adsorbed onto the sample's surface can be expressed as the number of moles of gas $n$. However the convention is to use the ideal gas law at standard temperature and pressure to represent this quantity as a volume of gas $V = nR T_{STD}/P_{STD}$. Then dividing by the mass of the sample, the quantities adsorbed $Q_{ads}$ are typically reported in units of $\textrm{cm}^3 / \textrm{g STP}$. This is the system of units employed here. When discussing particular models however, the number of moles $n$ or the number of molecules $N$ will be employed.
Another convention for isotherm data is to convert absolute pressures to relative pressures. Specifically, for analysis gases which can condense to a liquid at the analysis temperature, it is the relative pressure which is typically reported rather than the absolute pressure. The relative pressure is simply $P_{rel} = \frac{P_{abs}}{P_0}$ where $P_{abs}$ is the absolute pressure measured in millimeters Mercury (mmHg) or some other pressure unit and $P_0$ is the saturation pressure of the analysis gas, which is also typically measured in the course of the experiment. The relative pressure is a dimensionless quantity. Many gas adsorption calculations will use relative pressure units rather than the absolute pressure.
[1] Webb, Paul A., and Clyde Orr. Analytical methods in fine particle technology. Micromeritics Instrument Corp, 1997.
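As a small illustration of these two conventions (a minimal sketch with made-up numbers, assuming the common 273.15 K / 1 atm definition of STP; not taken from the data sets used below):

```python
R = 8.31446          # J / (mol K), gas constant
T_STD = 273.15       # K
P_STD = 101325.0     # Pa

def quantity_adsorbed_cm3_stp_per_g(n_moles, sample_mass_g):
    """Express n moles of adsorbed gas as a volume at STP per gram of sample."""
    v_m3 = n_moles * R * T_STD / P_STD      # ideal gas law at STP
    return v_m3 * 1e6 / sample_mass_g       # m^3 -> cm^3, then per gram of sample

def relative_pressure(p_abs_mmhg, p0_mmhg):
    """Dimensionless relative pressure P/P0 (both in the same pressure unit)."""
    return p_abs_mmhg / p0_mmhg

print(quantity_adsorbed_cm3_stp_per_g(1e-4, 0.1))   # ~22.4 cm^3/g STP for 1e-4 mol on 0.1 g
print(relative_pressure(95.0, 760.0))               # 0.125
```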
Isotherm adsorption data shown in a few representations
A few isotherms from reference data sets are shown below. These example sets are available on github report-models-python in the 'micromeritics' python package. Isotherm data may be obtained from other online resources as well.
First we show a few data sets using a linear scale with relative pressure as the dependent variable
Step1: It is also useful to show the isotherm with the Pressure axis scaled as logarithmic.
Step2: While it is more common to show isotherm data using relative pressure, it is also worth while to have the absolute pressures available. Below is an example data set for ZSM-5 analyzed with argon gas at 87k shown with absolute pressure as the dependent variable. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from micromeritics import util
from micromeritics import isotherm_examples as ex
import matplotlib.pyplot as plt
carb = ex.carbon_black() # example isotherm of Carbon Black with N2 at 77K
sial = ex.silica_alumina() # example isotherm of Silica Alumina with N2 at 77K
mcm = ex.mcm_41() # example isotherm of MCM 41 with N2 at 77K
fig = plt.figure(figsize=(12,5))
axes = fig.add_subplot(111)
plt.title('Isotherm Plot')
plt.ylabel("Quantity Adsorbed (cm^3/g STP)")
plt.xlabel("Relative Pressure")
plt.gca().set_xscale('linear')
plt.plot( carb.Prel, carb.Qads, 'ro', label='Carbon Black with N2 at 77K' )
plt.plot( sial.Prel, sial.Qads, 'bo-', label='Silica Alumina with N2 at 77K')
plt.plot( mcm.Prel, mcm.Qads, 'go-', label='MCM 41 with N2 at 77K')
legend = axes.legend(loc='upper left', shadow=True)
plt.show()
Explanation: What is an Adsorption Isotherm?
In a typical isotherm analysis, a small sample of material, which one wants to determine the surface properties of, is placed in a test tube and put under vacuum. A known species of gas such as Nitrogen is then incrementally dosed to the sample tube with each increment followed by a measurement of the equilibrated pressure $P$. Much of the dosed analysis gas just fills the free space in the sample tube resulting in an increase in gas pressure, but some fraction of the dosed gas is adsorbed to the surface of the sample material, also finding its way into any pores on the surface of the sample. Using the measurement of the pressures before and after the dose, and using knowledge of the exact amount of gas that was dosed, one may then infer the quantity of gas $Q_{ads}$ that is adsorbed onto the sample. The series of controlled doses all happens at a nearly constant temperature (typically 77K for Nitrogen analysis), and the convention is to refer to the collected $P$ vs. $Q_{ads}$ data as an "Isotherm." The above description is somewhat simplified, and the details of the experimental method, the apparatus, and the determination of the quantity adsorbed is explained in detail in many references such as [1].
The quantity of gas adsorbed onto the sample's surface can be expressed as the number of moles of gas $n$. However the convention is to use the ideal gas law at standard temperature and pressure to represent this quantity as a volume of gas $V = nR T_{STD}/P_{STD}$. Then dividing by the mass of the sample, the quantities adsorbed $Q_{ads}$ are typically reported in units of $\textrm{cm}^3 / \textrm{g STP}$. This is the system of units employed here. When discussing particular models however, the number of moles $n$ or the number of molecules $N$ will be employed.
Another convention for isotherm data is to convert absolute pressures to relative pressures. Specifically, for analysis gases which can condense to a liquid at the analysis temperature, it is the relative pressure which is typically reported rather than the absolute pressure. The relative pressure is simply $P_{rel} = \frac{P_{abs}}{P_0}$ where $P_{abs}$ is the absolute pressure measured in millimeters Mercury (mmHg) or some other pressure unit and $P_0$ is the saturation pressure of the analysis gas, which is also typically measured in the course of the experiment. The relative pressure is a dimensionless quantity. Many gas adsorption calculations will use relative pressure units rather than the absolute pressure.
[1] Webb, Paul A., and Clyde Orr. Analytical methods in fine particle technology. Micromeritics Instrument Corp, 1997.
Isotherm adsorption data shown in a few representations
A few isotherms from reference data sets are shown below. These example sets are available on github report-models-python in the 'micromeritics' python package. Isotherm data may be obtained from other online resources as well.
First we show a few data sets using a linear scale with relative pressure as the dependent variable
End of explanation
fig = plt.figure(figsize=(12,5))
axes = fig.add_subplot(111)
plt.title('Isotherm Plot')
plt.ylabel("Quantity Adsorbed (cm^3/g STP)")
plt.xlabel("Relative Pressure")
plt.gca().set_xscale('log')
plt.plot( carb.Prel, carb.Qads, 'ro', label='Carbon Black with N2 at 77K' )
plt.plot( sial.Prel, sial.Qads, 'bo-', label='Silica Alumina with N2 at 77K')
plt.plot( mcm.Prel, mcm.Qads, 'go-', label='MCM 41 with N2 at 77K')
legend = axes.legend(loc='upper left', shadow=True)
plt.show()
Explanation: It is also useful to show the isotherm with the Pressure axis scaled as logarithmic.
End of explanation
zsm = ex.zsm_5() # example isotherm of ZSM-5 with Ar at 87K
fig = plt.figure(figsize=(12,5))
axes = fig.add_subplot(111)
plt.title('Isotherm Plot')
plt.ylabel("Quantity Adsorbed (cm^3/g STP)")
plt.xlabel("Absolute Pressure (mmHg)")
plt.gca().set_xscale('log')
plt.plot( zsm.Pabs, zsm.Qads, 'ro', label='ZSM-5 with Ar at 87K' )
legend = axes.legend(loc='upper left', shadow=True)
plt.show()
Explanation: While it is more common to show isotherm data using relative pressure, it is also worth while to have the absolute pressures available. Below is an example data set for ZSM-5 analyzed with argon gas at 87k shown with absolute pressure as the dependent variable.
End of explanation
<END_TASK> |
91 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<font color='blue'>Data Science Academy - Python Fundamentos - Chapter 5</font>
Download
Step1: Exercises | <ASSISTANT_TASK:>
Python Code:
# Python language version
from platform import python_version
print('Python language version used in this Jupyter Notebook:', python_version())
Explanation: <font color='blue'>Data Science Academy - Python Fundamentos - Chapter 5</font>
Download: http://github.com/dsacademybr
End of explanation
# Exercise 1 - Create an object from the class below, called roc1, passing 2 parameters, and then make a call
# to its attributes and methods
from math import sqrt
class Rocket():
def __init__(self, x=0, y=0):
self.x = x
self.y = y
def move_rocket(self, x_increment=0, y_increment=1):
self.x += x_increment
self.y += y_increment
def print_rocket(self):
print(self.x, self.y)
# Exercise 2 - Create a class called Pessoa() with the attributes: nome, cidade, telefone and e-mail. Use at least 2
# special methods in your class. Create an object of your class and make a call to at least one of its special
# methods.
# Exercise 3 - Create the Smartphone class with 2 attributes, tamanho and interface, and create the MP3Player class with the
# attribute capacidade. The MP3Player class must inherit the attributes of the Smartphone class.
Explanation: Exercises
End of explanation
<END_TASK> |
92 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Solving problems by Searching
This notebook serves as supporting material for topics covered in Chapter 3 - Solving Problems by Searching and Chapter 4 - Beyond Classical Search from the book Artificial Intelligence
Step1: CONTENTS
Overview
Problem
Node
Simple Problem Solving Agent
Search Algorithms Visualization
Breadth-First Tree Search
Breadth-First Search
Best First Search
Uniform Cost Search
Greedy Best First Search
A* Search
Hill Climbing
Simulated Annealing
Genetic Algorithm
AND-OR Graph Search
Online DFS Agent
LRTA* Agent
OVERVIEW
Here, we learn about a specific kind of problem solving - building goal-based agents that can plan ahead to solve problems. In particular, we examine navigation problem/route finding problem. We must begin by precisely defining problems and their solutions. We will look at several general-purpose search algorithms.
Search algorithms can be classified into two types
Step2: PROBLEM
Let's see how we define a Problem. Run the next cell to see how abstract class Problem is defined in the search module.
Step3: The Problem class has six methods.
__init__(self, initial, goal)
Step4: The Node class has nine methods. The first is the __init__ method.
__init__(self, state, parent, action, path_cost)
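The actual Problem and Node classes live in the search module and are only referenced here. As a rough, self-contained sketch of the core idea behind a node (a state plus parent, action and accumulated path cost, and reconstructing the path back to the root) — not the module's exact code:

```python
class SimpleNode:
    """Stripped-down search-tree node: wraps a state plus bookkeeping."""
    def __init__(self, state, parent=None, action=None, path_cost=0):
        self.state = state
        self.parent = parent        # node we expanded to reach this one
        self.action = action        # action taken at the parent
        self.path_cost = path_cost  # cost of the path from the root

    def path(self):
        """Walk parent pointers back to the root and return the node list."""
        node, back = self, []
        while node is not None:
            back.append(node)
            node = node.parent
        return list(reversed(back))

# tiny usage example with hypothetical city states
root = SimpleNode('Arad')
child = SimpleNode('Sibiu', parent=root, action='go(Sibiu)', path_cost=140)
print([n.state for n in child.path()])   # ['Arad', 'Sibiu']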
Step5: Have a look at our romania_map, which is an Undirected Graph containing a dict of nodes as keys and neighbours as values.
Step6: It is pretty straightforward to understand this romania_map. The first node Arad has three neighbours named Zerind, Sibiu and Timisoara, which are 75, 140 and 118 units away from Arad respectively. The same goes for the other nodes.
And romania_map.locations contains the positions of each of the nodes. We will use the straight line distance (which is different from the one provided in romania_map) between two cities in algorithms like A*-search and Recursive Best First Search.
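The graph itself is defined in the search module, so only a fragment is described here. A minimal sketch of the same dict-of-dicts idea, using just the three edges mentioned above plus a straight-line-distance helper (the coordinates below are made up for illustration, not the module's values):

```python
import math

# adjacency map: city -> {neighbour: road distance}
romania_fragment = {
    'Arad': {'Zerind': 75, 'Sibiu': 140, 'Timisoara': 118},
}

# hypothetical (x, y) locations, only used for straight-line distances
locations = {'Arad': (91, 492), 'Zerind': (108, 531),
             'Sibiu': (207, 457), 'Timisoara': (94, 410)}

def straight_line_distance(a, b):
    (x1, y1), (x2, y2) = locations[a], locations[b]
    return math.hypot(x1 - x2, y1 - y2)

print(romania_fragment['Arad']['Sibiu'])               # 140, the road distance
print(round(straight_line_distance('Arad', 'Sibiu')))  # heuristic-style estimate
```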
Define a problem
Step7: Romania Map Visualisation
Let's have a visualisation of Romania map [Figure 3.2] from the book and see how different searching algorithms perform / how frontier expands in each search algorithm for a simple problem named romania_problem.
Have a look at romania_locations. It is a dictionary defined in search module. We will use these location values to draw the romania graph using networkx.
Step8: Let's get started by initializing an empty graph. We will add nodes, place the nodes in their location as shown in the book, add edges to the graph.
Step9: We have completed building our graph based on romania_map and its locations. It's time to display it here in the notebook. The function show_map(node_colors) helps us do that. We will be calling this function later on to display the map at every intermediate step while searching, using a variety of algorithms from the book.
We can simply call the function with node_colors dictionary object to display it.
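The show_map helper is built on networkx and matplotlib; a bare-bones sketch of that idea is shown below (node positions and colours are placeholders, not the ones used by the notebook):

```python
import networkx as nx
import matplotlib.pyplot as plt

G = nx.Graph()
G.add_edge('Arad', 'Zerind', weight=75)
G.add_edge('Arad', 'Sibiu', weight=140)
G.add_edge('Arad', 'Timisoara', weight=118)

# placeholder layout and colouring, keyed by node name
pos = {'Arad': (0, 0), 'Zerind': (1, 2), 'Sibiu': (3, 1), 'Timisoara': (0, -2)}
node_colors = {'Arad': 'red', 'Zerind': 'lightgray',
               'Sibiu': 'lightgray', 'Timisoara': 'lightgray'}

nx.draw(G, pos=pos, with_labels=True,
        node_color=[node_colors[n] for n in G.nodes()])
nx.draw_networkx_edge_labels(G, pos=pos,
                             edge_labels=nx.get_edge_attributes(G, 'weight'))
plt.show()
```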
Step10: Voila! You see, the romania map as shown in the Figure[3.2] in the book. Now, see how different searching algorithms perform with our problem statements.
SIMPLE PROBLEM SOLVING AGENT PROGRAM
Let us now define a Simple Problem Solving Agent Program. Run the next cell to see how the abstract class SimpleProblemSolvingAgentProgram is defined in the search module.
Step11: The SimpleProblemSolvingAgentProgram class has six methods
Step12: Now, we will define all the 8 states and create an object of the above class. Then, we will pass it different states and check the output
Step14: SEARCHING ALGORITHMS VISUALIZATION
In this section, we have visualizations of the following searching algorithms
Step15: Now, we use ipywidgets to display a slider, a button and our romania map. By sliding the slider we can have a look at all the intermediate steps of a particular search algorithm. By pressing the button Visualize, you can see all the steps without interacting with the slider. These two helper functions are the callback functions which are called when we interact with the slider and the button.
Step17: 2. DEPTH-FIRST TREE SEARCH
Now let's discuss another searching algorithm, Depth-First Tree Search.
Step18: 3. BREADTH-FIRST GRAPH SEARCH
Let's change all the node_colors to starting position and define a different problem statement.
Step21: 4. DEPTH-FIRST GRAPH SEARCH
Although we have a working implementation in search module, we have to make a few changes in the algorithm to make it suitable for visualization.
Step23: 5. BEST FIRST SEARCH
Let's change all the node_colors to starting position and define a different problem statement.
Step24: 6. UNIFORM COST SEARCH
Let's change all the node_colors to starting position and define a different problem statement.
Step26: 7. DEPTH LIMITED SEARCH
Let's change all the 'node_colors' to starting position and define a different problem statement.
Although we have a working implementation, we need to make a few changes.
Step27: 8. ITERATIVE DEEPENING SEARCH
Let's change all the 'node_colors' to starting position and define a different problem statement.
Step29: 9. GREEDY BEST FIRST SEARCH
Let's change all the node_colors to starting position and define a different problem statement.
Step31: 10. A* SEARCH
Let's change all the node_colors to starting position and define a different problem statement.
Step33: 11. RECURSIVE BEST FIRST SEARCH
Let's change all the node_colors to starting position and define a different problem statement.
Step34: RECURSIVE BEST-FIRST SEARCH
Recursive best-first search is a simple recursive algorithm that improves upon heuristic search by reducing the memory requirement.
RBFS uses only linear space and it attempts to mimic the operation of standard best-first search.
Its structure is similar to recursive depth-first search, but it doesn't continue indefinitely down the current path; the f_limit variable is used to keep track of the f-value of the best alternative path available from any ancestor of the current node.
RBFS remembers the f-value of the best leaf in the forgotten subtree and can decide whether it is worth re-expanding the tree later.
<br>
However, RBFS still suffers from excessive node regeneration.
<br>
Let's have a look at the implementation.
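The module's recursive_best_first_search itself is not reproduced in this excerpt. As a compact, self-contained sketch of the f_limit / best-alternative bookkeeping on a made-up toy graph (not the module's code):

```python
import math

# toy weighted graph and heuristic values (all numbers are made up for illustration)
graph = {'S': {'A': 1, 'B': 4}, 'A': {'C': 3, 'D': 5},
         'B': {'D': 2}, 'C': {}, 'D': {'G': 2}, 'G': {}}
h = {'S': 6, 'A': 5, 'B': 3, 'C': 7, 'D': 2, 'G': 0}

def rbfs_toy(state, g, f, f_limit, path):
    """Return (solution path or None, best alternative f-value seen)."""
    if state == 'G':
        return path, f
    successors = []
    for child, cost in graph[state].items():
        if child in path:                          # avoid looping back on this toy graph
            continue
        child_f = max(g + cost + h[child], f)      # a child's f never drops below its parent's
        successors.append([child_f, child, g + cost])
    if not successors:
        return None, math.inf
    while True:
        successors.sort()                          # lowest f first
        best_f, best_state, best_g = successors[0]
        if best_f > f_limit:
            return None, best_f                    # unwind; remember how good this subtree looked
        alternative = successors[1][0] if len(successors) > 1 else math.inf
        result, successors[0][0] = rbfs_toy(best_state, best_g, best_f,
                                            min(f_limit, alternative), path + [best_state])
        if result is not None:
            return result, successors[0][0]

solution, _ = rbfs_toy('S', 0, h['S'], math.inf, ['S'])
print(solution)   # ['S', 'B', 'D', 'G'] on this toy graph (cost 8)
```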
Step35: This is how recursive_best_first_search can solve the romania_problem
Step36: recursive_best_first_search can be used to solve the 8 puzzle problem too, as discussed later.
Step37: A* HEURISTICS
Different heuristics provide different efficiency in solving A* problems which are generally defined by the number of explored nodes as well as the branching factor. With the classic 8 puzzle we can show the efficiency of different heuristics through the number of explored nodes.
8 Puzzle Problem
The 8 Puzzle Problem consists of a 3x3 tray in which the goal is to get the initial configuration to the goal state by shifting the numbered tiles into the blank space.
example
Step38: Heuristics
Step39: We can solve the puzzle using the astar_search method.
Step40: This case is solvable, let's proceed.
<br>
The default heuristic function returns the number of misplaced tiles.
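As a rough illustration of the two most common 8-puzzle heuristics discussed here (states written as tuples of 9 numbers with 0 for the blank; this is a sketch, not the module's exact functions):

```python
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)

def misplaced_tiles(state, goal=GOAL):
    """Number of tiles (ignoring the blank) that are not where they belong."""
    return sum(1 for s, g in zip(state, goal) if s != 0 and s != g)

def manhattan(state, goal=GOAL):
    """Sum of horizontal + vertical distances of each tile from its goal cell."""
    total = 0
    for index, tile in enumerate(state):
        if tile == 0:
            continue
        goal_index = goal.index(tile)
        total += abs(index // 3 - goal_index // 3) + abs(index % 3 - goal_index % 3)
    return total

puzzle = (2, 4, 3, 1, 5, 6, 7, 8, 0)
print(misplaced_tiles(puzzle))   # 3 tiles out of place
print(manhattan(puzzle))         # 4 moves of Manhattan distance in total
```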
Step41: In the following cells, we use different heuristic functions.
<br>
Step42: And here's how recursive_best_first_search can be used to solve this problem too.
Step43: Even though all the heuristic functions give the same solution, the difference lies in the computation time.
<br>
This might make all the difference in a scenario where high computational efficiency is required.
<br>
Let's define a few puzzle states and time astar_search for every heuristic function.
We will use the %%timeit magic for this.
Step44: The default heuristic function is the same as the linear heuristic function, but we'll still check both.
Step45: We can infer that the manhattan heuristic function works the fastest.
<br>
sqrt_manhattan has an extra sqrt operation which makes it quite a lot slower than the others.
<br>
max_heuristic should have been a bit slower as it calls two functions, but in this case, those values were already calculated which saved some time.
Feel free to play around with these functions.
For comparison, this is how RBFS performs on this problem.
Step46: It is quite a lot slower than astar_search as we can see.
HILL CLIMBING
Hill Climbing is a heuristic search used for optimization problems.
Given a large set of inputs and a good heuristic function, it tries to find a sufficiently good solution to the problem.
This solution may or may not be the global optimum.
The algorithm is a variant of generate and test algorithm.
<br>
As a whole, the algorithm works as follows
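The step-by-step list is elided at this point in the excerpt; as a minimal sketch of the basic loop (keep moving to the best neighbour until no neighbour improves on the current state — a toy 1-D example, not the module's hill_climbing):

```python
def hill_climbing_1d(value, neighbours, start):
    """Greedy ascent: repeatedly move to the best neighbour until stuck."""
    current = start
    while True:
        candidates = neighbours(current)
        best = max(candidates, key=value, default=current)
        if value(best) <= value(current):
            return current              # local (possibly global) optimum
        current = best

# toy objective with a single peak at x = 7 on the integers 0..20
value = lambda x: -(x - 7) ** 2
neighbours = lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 20]
print(hill_climbing_1d(value, neighbours, start=2))   # 7
```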
Step53: We will find an approximate solution to the traveling salespersons problem using this algorithm.
<br>
We need to define a class for this problem.
<br>
Problem will be used as a base class.
Step54: We will use cities from the Romania map as our cities for this problem.
<br>
A list of all cities and a dictionary storing distances between them will be populated.
Step55: Next, we need to populate the individual lists inside the dictionary with the manhattan distance between the cities.
Step59: The way neighbours are chosen currently isn't suitable for the travelling salespersons problem.
We need a neighboring state that is similar in total path distance to the current state.
<br>
We need to change the function that finds neighbors.
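One common way to do this is to generate neighbours by swapping two cities in the current tour, so each neighbouring tour differs only slightly in total path distance. A rough sketch of that idea (a hypothetical helper, not the TSP_problem method itself):

```python
def two_swap_neighbours(tour):
    """All tours obtained by swapping two cities of the current tour."""
    result = []
    for i in range(len(tour) - 1):
        for j in range(i + 1, len(tour)):
            neighbour = list(tour)
            neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
            result.append(neighbour)
    return result

print(two_swap_neighbours(['Arad', 'Sibiu', 'Fagaras']))
# [['Sibiu', 'Arad', 'Fagaras'], ['Fagaras', 'Sibiu', 'Arad'], ['Arad', 'Fagaras', 'Sibiu']]
```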
Step60: An instance of the TSP_problem class will be created.
Step61: We can now generate an approximate solution to the problem by calling hill_climbing.
The results will vary a bit each time you run it.
Step62: The solution looks like this.
It is not difficult to see why this might be a good solution.
<br>
SIMULATED ANNEALING
The intuition behind Hill Climbing was developed from the metaphor of climbing up the graph of a function to find its peak.
There is a fundamental problem in the implementation of the algorithm however.
To find the highest hill, we take one step at a time, always uphill, hoping to find the highest point,
but if we are unlucky enough to start from the shoulder of the second-highest hill, there is no way we can find the highest one.
The algorithm will always converge to the local optimum.
Hill Climbing is also bad at dealing with functions that flatline in certain regions.
If all neighboring states have the same value, we cannot find the global optimum using this algorithm.
<br>
<br>
Let's now look at an algorithm that can deal with these situations.
<br>
Simulated Annealing is quite similar to Hill Climbing,
but instead of picking the best move every iteration, it picks a random move.
If this random move brings us closer to the global optimum, it will be accepted,
but if it doesn't, the algorithm may accept or reject the move based on a probability dictated by the temperature.
When the temperature is high, the algorithm is more likely to accept a random move even if it is bad.
At low temperatures, only good moves are accepted, with the occasional exception.
This allows exploration of the state space and prevents the algorithm from getting stuck at the local optimum.
Step63: The temperature is gradually decreased over the course of the iteration.
This is done by a scheduling routine.
The current implementation uses exponential decay of temperature, but we can use a different scheduling routine instead.
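A compact sketch of that acceptance rule and an exponential-decay schedule (a standalone toy version with made-up parameter values, not the module's simulated_annealing):

```python
import math
import random

def exp_schedule(k=20, lam=0.005, limit=1000):
    """Temperature as a function of time: k * exp(-lam * t), cut off after `limit` steps."""
    return lambda t: k * math.exp(-lam * t) if t < limit else 0

def simulated_annealing_toy(value, neighbours, start, schedule=exp_schedule()):
    current = start
    for t in range(10_000):
        T = schedule(t)
        if T == 0:
            return current
        nxt = random.choice(neighbours(current))
        delta = value(nxt) - value(current)
        # always accept an improvement; accept a worse move with probability e^(delta/T)
        if delta > 0 or random.random() < math.exp(delta / T):
            current = nxt
    return current

value = lambda x: -(x - 7) ** 2          # same toy objective as before
neighbours = lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 20]
print(simulated_annealing_toy(value, neighbours, start=2))   # usually 7 or very close
```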
Step64: Next, we'll define a peak-finding problem and try to solve it using Simulated Annealing.
Let's define the grid and the initial state first.
Step65: We want to allow only four directions, namely N, S, E and W.
Let's use the predefined directions4 dictionary.
Step66: Define a problem with these parameters.
Step67: We'll run simulated_annealing a few times and store the solutions in a set.
Step68: Hence, the maximum value is 9.
Let's find the peak of a two-dimensional gaussian distribution.
We'll use the gaussian_kernel function from notebook.py to get the distribution.
Step69: Let's use the heatmap function from notebook.py to plot this.
Step70: Let's define the problem.
This time, we will allow movement in eight directions as defined in directions8.
Step71: We'll solve the problem just like we did last time.
<br>
Let's also time it.
Step72: The peak is at 1.0 which is how gaussian distributions are defined.
<br>
This could also be solved by Hill Climbing as follows.
Step73: As you can see, Hill-Climbing is about 24 times faster than Simulated Annealing.
(Notice that we ran Simulated Annealing for 100 iterations whereas we ran Hill Climbing only once.)
<br>
Simulated Annealing makes up for its tardiness by its ability to be applicable in a larger number of scenarios than Hill Climbing as illustrated by the example below.
<br>
Let's define a 2D surface as a matrix.
Step74: The peak value is 32 at the lower right corner.
<br>
The region at the upper left corner is planar.
Let's instantiate PeakFindingProblem one last time.
Step75: Solution by Hill Climbing
Step76: Solution by Simulated Annealing
Step77: Notice that even though both algorithms started at the same initial state,
Hill Climbing could never escape from the planar region and gave a locally optimum solution of 0,
whereas Simulated Annealing could reach the peak at 32.
<br>
A very similar situation arises when there are two peaks of different heights.
One should carefully consider the possible search space before choosing the algorithm for the task.
GENETIC ALGORITHM
Genetic algorithms (or GA) are inspired by natural evolution and are particularly useful in optimization and search problems with large state spaces.
Given a problem, algorithms in the domain make use of a population of solutions (also called states), where each solution/state represents a feasible solution. At each iteration (often called generation), the population gets updated using methods inspired by biology and evolution, like crossover, mutation and natural selection.
Overview
A genetic algorithm works in the following way
Step78: The algorithm takes the following input
Step79: The method picks at random a point and merges the parents (x and y) around it.
The mutation is done in the method mutate
Step80: We pick a gene in x to mutate and a gene from the gene pool to replace it with.
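As a rough illustration of these two operations (sketches named with a _sketch suffix to distinguish them from the library functions):
import random
def recombine_sketch(x, y):
    # Single-point crossover: a prefix of x joined with the matching suffix of y.
    c = random.randrange(len(x))
    return x[:c] + y[c:]
def mutate_sketch(x, gene_pool, pmut):
    # With probability pmut, replace one randomly chosen gene of x
    # with a random gene drawn from the gene pool.
    if random.uniform(0, 1) >= pmut:
        return x
    i = random.randrange(len(x))
    return x[:i] + [random.choice(gene_pool)] + x[i + 1:]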
To help initialize the population, we have the helper function init_population
Step81: The function takes as input the number of individuals in the population, the gene pool and the length of each individual/state. It creates individuals with random genes and returns the population when done.
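Putting these helpers together, a bare-bones driver loop might look like the sketch below (simplified; the real genetic_algorithm in search.py also handles a fitness threshold and a few other details):
def genetic_algorithm_sketch(fitness_fn, gene_pool, state_length,
                             npop=100, ngen=1000, pmut=0.1):
    # Hypothetical driver: evolve a random population for ngen generations
    # and return the fittest individual found.
    population = init_population(npop, gene_pool, state_length)
    for _ in range(ngen):
        population = [mutate(recombine(*select(2, population, fitness_fn)),
                             gene_pool, pmut)
                      for _ in range(len(population))]
    return max(population, key=fitness_fn)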
Explanation
Before we solve problems using the genetic algorithm, we will explain how to intuitively understand the algorithm using a trivial example.
Generating Phrases
In this problem, we use a genetic algorithm to generate a particular target phrase from a population of random strings. This is a classic example that helps build intuition about how to use this algorithm in other problems as well. Before we break the problem down, let us try to brute force the solution. Let us say that we want to generate the phrase "genetic algorithm". The phrase is 17 characters long. We can use any character from the 26 lowercase characters and the space character. To generate a random phrase of length 17, each space can be filled in 27 ways. So the total number of possible phrases is
$$ 27^{17} = 2153693963075557766310747 $$
which is a massive number. If we wanted to generate the phrase "Genetic Algorithm", we would also have to include all the 26 uppercase characters into consideration thereby increasing the sample space from 27 characters to 53 characters and the total number of possible phrases then would be
$$ 53^{17} = 205442259656281392806087233013 $$
If we wanted to include punctuations and numerals into the sample space, we would have further complicated an already impossible problem. Hence, brute forcing is not an option. Now we'll apply the genetic algorithm and see how it significantly reduces the search space. We essentially want to evolve our population of random strings so that they better approximate the target phrase as the number of generations increase. Genetic algorithms work on the principle of Darwinian Natural Selection according to which, there are three key concepts that need to be in place for evolution to happen. They are
Step82: We then need to define our gene pool, i.e. the elements of which an individual from the population might be composed. Here, the gene pool contains all uppercase and lowercase letters of the English alphabet and the space character.
Step83: We now need to define the maximum size of each population. Larger populations have more variation but are computationally more expensive to run algorithms on.
Step84: As our population is not very large, we can afford to keep a relatively large mutation rate.
Step85: Great! Now, we need to define the most important metric for the genetic algorithm, i.e. the fitness function. This will simply return the number of matching characters between the generated sample and the target phrase.
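A minimal version of that fitness function might look like this (assuming target holds the characters of the phrase we are trying to evolve towards):
def fitness_fn(sample):
    # Count the positions where the candidate matches the target phrase.
    return sum(1 for expected, actual in zip(target, sample) if expected == actual)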
Step86: Before we run our genetic algorithm, we need to initialize a random population. We will use the init_population function to do this. We need to pass in the maximum population size, the gene pool and the length of each individual, which in this case will be the same as the length of the target phrase.
Step87: We will now define how the individuals in the population should change as the number of generations increases. First, the select function will be run on the population to select two individuals with high fitness values. These will be the parents which will then be recombined using the recombine function to generate the child.
Step88: Next, we need to apply a mutation according to the mutation rate. We call the mutate function on the child with the gene pool and mutation rate as the additional arguments.
Step89: The above lines can be condensed into
child = mutate(recombine(*select(2, population, fitness_fn)), gene_pool, mutation_rate)
And, we need to do this for every individual in the current population to generate the new population.
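Applied to every individual, that condensed line becomes a single comprehension over the current population (a sketch of the idea):
population = [mutate(recombine(*select(2, population, fitness_fn)), gene_pool, mutation_rate)
              for _ in range(len(population))]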
Step90: The individual with the highest fitness can then be found using the max function.
Step91: Let's print this out
Step92: We see that this is a list of characters. This can be converted to a string using the join function
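For example (with hypothetical characters):
''.join(['g', 'e', 'n', 'e'])  # -> 'gene'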
Step93: We now need to define the conditions to terminate the algorithm. This can happen in two ways
1. Termination after a predefined number of generations
2. Termination when the fitness of the best individual of the current generation reaches a predefined threshold value.
We define these variables below
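The second condition is typically checked with a small helper along these lines (a sketch; the notebook relies on the fitness_threshold function for this):
def fitness_threshold_sketch(fitness_fn, f_thres, population):
    # Return the fittest individual if it meets the threshold, otherwise None.
    if not f_thres:
        return None
    fittest = max(population, key=fitness_fn)
    return fittest if fitness_fn(fittest) >= f_thres else None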
Step94: To generate ngen number of generations, we run a for loop ngen number of times. After each generation, we calculate the fitness of the best individual of the generation and compare it to the value of f_thres using the fitness_threshold function. After every generation, we print out the best individual of the generation and the corresponding fitness value. Lets now write a function to do this.
Step95: The function defined above is essentially the same as the one defined in search.py with the added functionality of printing out the data of each generation.
Step96: We have defined all the required functions and variables. Let's now create a new population and test the function we wrote above.
Step97: The genetic algorithm was able to converge!
We implore you to rerun the above cell and play around with target, max_population, f_thres, ngen etc parameters to get a better intuition of how the algorithm works. To summarize, if we can define the problem states in simple array format and if we can create a fitness function to gauge how good or bad our approximate solutions are, there is a high chance that we can get a satisfactory solution using a genetic algorithm.
- There is also a better GUI version of this program genetic_algorithm_example.py in the GUI folder for you to play around with.
Usage
Below we give two example usages for the genetic algorithm, for a graph coloring problem and the 8 queens problem.
Graph Coloring
First we will take on the simpler problem of coloring a small graph with two colors. Before we do anything, let's imagine how a solution might look. First, we have to represent our colors. Say, 'R' for red and 'G' for green. These make up our gene pool. What of the individual solutions though? For that, we will look at our problem. We stated we have a graph. A graph has nodes and edges, and we want to color the nodes. Naturally, we want to store each node's color. If we have four nodes, we can store their colors in a list of genes, one for each node. A possible solution will then look like this
Step98: Edge 'A' connects nodes 0 and 1, edge 'B' connects nodes 0 and 3 etc.
We already said our gene pool is 'R' and 'G', so we can jump right into initializing our population. Since we have only four nodes, state_length should be 4. For the number of individuals, we will try 8. We can increase this number if we need higher accuracy, but be careful! Larger populations need more computing power and take longer. You need to strike that sweet balance between accuracy and cost (the ultimate dilemma of the programmer!).
Step99: We created and printed the population. You can see that the genes in the individuals are random and there are 8 individuals each with 4 genes.
Next we need to write our fitness function. We previously said we want the function to count how many edges are valid. So, given a coloring/individual c, we will do just that
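A sketch of that function, assuming edges maps each edge name to the pair of node indices it connects:
def fitness(c):
    # An edge is valid when the two nodes it connects received different colors.
    return sum(1 for (n1, n2) in edges.values() if c[n1] != c[n2])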
Step100: Great! Now we will run the genetic algorithm and see what solution it gives.
Step101: The algorithm converged to a solution. Let's check its score
Step102: The solution has a score of 4. Which means it is optimal, since we have exactly 4 edges in our graph, meaning all are valid!
NOTE
Step103: We have a population of 100 and each individual has 8 genes. The gene pool is the integers from 0 to 7, in string form. Above you can see the first five individuals.
Next we need to write our fitness function. Remember, queens threaten each other if they are at the same row, column or diagonal.
Since threats are mutual, we must take care not to count them twice. Therefore, for each queen, we will only check for conflicts with the queens after her.
A gene's value in an individual q denotes the queen's column, and the position of the gene denotes its row. We can check if the aforementioned values between two genes are the same. We also need to check for diagonals. A queen a is in the diagonal of another queen, b, if the difference of the rows between them is equal to either their difference in columns (for the diagonal on the right of a) or equal to the negative difference of their columns (for the left diagonal of a). Below is given the fitness function.
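Concretely, the checks described above translate into something like this sketch (the notebook's own definition follows the same idea, possibly with small differences):
def fitness(q):
    # Count non-attacking pairs; for 8 queens the maximum is 28.
    non_attacking = 0
    for row1 in range(len(q)):
        for row2 in range(row1 + 1, len(q)):
            col1, col2 = int(q[row1]), int(q[row2])
            row_diff, col_diff = row1 - row2, col1 - col2
            if col1 != col2 and row_diff != col_diff and row_diff != -col_diff:
                non_attacking += 1
    return non_attacking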
Step104: Note that the best score achievable is 28. That is because for each queen we only check for the queens after her. For the first queen we check 7 other queens, for the second queen 6 others and so on. In short, the number of checks we make is the sum 7+6+5+...+1. Which is equal to 7*(7+1)/2 = 28.
Because it is very hard and will take long to find a perfect solution, we will set the fitness threshold at 25. If we find an individual with a score greater or equal to that, we will halt. Let's see how the genetic algorithm will fare.
Step105: Above you can see the solution and its fitness score, which should be no less than 25.
This is where we conclude Genetic Algorithms.
N-Queens Problem
Here, we will look at the generalized case of the Eight Queens problem.
<br>
We are given a N x N chessboard, with N queens, and we need to place them in such a way that no two queens can attack each other.
<br>
We will solve this problem using search algorithms.
To do this, we already have a NQueensProblem class in search.py.
Step106: In csp.ipynb we have seen that the N-Queens problem can be formulated as a CSP and can be solved by
the min_conflicts algorithm in a way similar to Hill-Climbing.
Here, we want to solve it using heuristic search algorithms and even some classical search algorithms.
The NQueensProblem class derives from the Problem class and is implemented in such a way that the search algorithms we already have, can solve it.
<br>
Let's instantiate the class.
Step107: Let's use depth_first_tree_search first.
<br>
We will also use the %%timeit magic with each algorithm to see how much time they take.
Step108: breadth_first_tree_search
Step109: uniform_cost_search
Step110: depth_first_tree_search is almost 20 times faster than breadth_first_tree_search and more than 200 times faster than uniform_cost_search.
We can also solve this problem using astar_search with a suitable heuristic function.
<br>
The best heuristic function for this scenario will be one that returns the number of conflicts in the current state.
Step111: astar_search is faster than both uniform_cost_search and breadth_first_tree_search.
Step112: AND-OR GRAPH SEARCH
An AND-OR graph is a graphical representation of the reduction of goals to conjunctions and disjunctions of subgoals.
<br>
An AND-OR graph can be seen as a generalization of a directed graph.
It contains a number of vertices and generalized edges that connect the vertices.
<br>
Each connector in an AND-OR graph connects a set of vertices $V$ to a single vertex, $v_0$.
A connector can be an AND connector or an OR connector.
An AND connector connects two edges having a logical AND relationship,
while an OR connector connects two edges having a logical OR relationship.
<br>
A vertex can have more than one AND or OR connector.
This is why AND-OR graphs can be expressed as logical statements.
<br>
<br>
AND-OR graphs also provide a computational model for executing logic programs and you will come across this data-structure in the logic module as well.
AND-OR graphs can be searched in depth-first, breadth-first or best-first ways, traversing the state space either sequentially or in parallel.
<br>
Our implementation of AND-OR search searches over graphs generated by non-deterministic environments and returns a conditional plan that reaches a goal state in all circumstances.
Let's have a look at the implementation of and_or_graph_search.
Step113: The search is carried out by two functions and_search and or_search that recursively call each other, traversing nodes sequentially.
It is a recursive depth-first algorithm for searching an AND-OR graph.
<br>
A very similar algorithm fol_bc_ask can be found in the logic module, which carries out inference on first-order logic knowledge bases using AND-OR graph-derived data-structures.
<br>
AND-OR trees can also be used to represent the search spaces for two-player games, where a vertex of the tree represents the problem of one of the players winning the game, starting from the initial state of the game.
<br>
Problems involving MIN-MAX trees can be reformulated as AND-OR trees by representing MAX nodes as OR nodes and MIN nodes as AND nodes.
and_or_graph_search can then be used to find the optimal solution.
Standard algorithms like minimax and expectiminimax (for belief states) can also be applied on it with a few modifications.
Here's how and_or_graph_search can be applied to a simple vacuum-world example.
Step114: ONLINE DFS AGENT
So far, we have seen agents that use offline search algorithms,
which is a class of algorithms that compute a complete solution before executing it.
In contrast, an online search agent interleaves computation and action.
Online search is better for most dynamic environments and necessary for unknown environments.
<br>
Online search problems are solved by an agent executing actions, rather than just by pure computation.
For a fully observable environment, an online agent cycles through three steps
Step115: It maintains two dictionaries untried and unbacktracked.
untried contains nodes that have not been visited yet.
unbacktracked contains the sequence of nodes that the agent has visited so it can backtrack to it later, if required.
s and a store the state and the action respectively and result stores the final path or solution of the problem.
<br>
Let's look at another online search algorithm.
LRTA* AGENT
We can infer now that hill-climbing is an online search algorithm, but it is not very useful natively because for complicated search spaces, it might converge to the local minima and indefinitely stay there.
In such a case, we can choose to randomly restart it a few times with different starting conditions and return the result with the lowest total cost.
Sometimes, it is better to use random walks instead of random restarts depending on the problem, but progress can still be very slow.
<br>
A better improvement would be to give hill-climbing a memory element.
We store the current best heuristic estimate and it is updated as the agent gains experience in the state space.
The estimated optimal cost is made more and more accurate as time passes, and each time the local minimum is "flattened out" until we escape it.
<br>
This learning scheme is a simple improvement upon traditional hill-climbing and is called learning real-time A*, or LRTA*.
Similar to Online DFS-Agent, it builds a map of the environment and chooses the best possible move according to its current heuristic estimates.
<br>
Actions that haven't been tried yet are assumed to lead immediately to the goal with the least possible cost.
This is called optimism under uncertainty and encourages the agent to explore new promising paths.
This algorithm might not terminate if the state space is infinite, unlike A* search.
<br>
Let's have a look at the LRTAStarAgent class.
Step116: H stores the heuristic cost of the paths the agent may travel to.
<br>
s and a store the state and the action respectively.
<br>
problem stores the problem definition and the current map of the environment is stored in problem.result.
<br>
The LRTA_cost method computes the cost of a new path given the current state s, the action a and the next state s1; the estimated cost to get from s to s1 is extracted from H.
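In essence, that cost estimate behaves roughly like the following sketch (simplified from the class; untried transitions fall back to the plain heuristic):
def lrta_cost_sketch(problem, s, a, s1, H):
    # Optimism under uncertainty: if s1 is unknown, use only the heuristic;
    # otherwise add the step cost to the stored estimate for s1.
    if s1 is None:
        return problem.h(s)
    if s1 in H:
        return problem.c(s, a, s1) + H[s1]
    return problem.c(s, a, s1) + problem.h(s1)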
Let's use LRTAStarAgent to solve a simple problem.
We'll define a new LRTA_problem instance based on our one_dim_state_space.
Step117: Let's define an instance of OnlineSearchProblem.
Step118: Now we initialize a LRTAStarAgent object for the problem we just defined.
Step119: We'll pass the percepts [State_3, State_4, State_3, State_4, State_5] one-by-one to our agent to see what action it comes up with at each timestep.
Step120: If you manually try to see what the optimal action should be at each step, the outputs of the lrta_agent will start to make sense if it doesn't already. | <ASSISTANT_TASK:>
Python Code:
from search import *
from notebook import psource, heatmap, gaussian_kernel, show_map, final_path_colors, display_visual, plot_NQueens
# Needed to hide warnings in the matplotlib sections
import warnings
warnings.filterwarnings("ignore")
Explanation: Solving problems by Searching
This notebook serves as supporting material for topics covered in Chapter 3 - Solving Problems by Searching and Chapter 4 - Beyond Classical Search from the book Artificial Intelligence: A Modern Approach. This notebook uses implementations from search.py module. Let's start by importing everything from search module.
End of explanation
%matplotlib inline
import networkx as nx
import matplotlib.pyplot as plt
from matplotlib import lines
from ipywidgets import interact
import ipywidgets as widgets
from IPython.display import display
import time
Explanation: CONTENTS
Overview
Problem
Node
Simple Problem Solving Agent
Search Algorithms Visualization
Breadth-First Tree Search
Breadth-First Search
Best First Search
Uniform Cost Search
Greedy Best First Search
A* Search
Hill Climbing
Simulated Annealing
Genetic Algorithm
AND-OR Graph Search
Online DFS Agent
LRTA* Agent
OVERVIEW
Here, we learn about a specific kind of problem solving - building goal-based agents that can plan ahead to solve problems. In particular, we examine navigation problem/route finding problem. We must begin by precisely defining problems and their solutions. We will look at several general-purpose search algorithms.
Search algorithms can be classified into two types:
Uninformed search algorithms: Search algorithms which explore the search space without having any information about the problem other than its definition.
Examples:
Breadth First Search
Depth First Search
Depth Limited Search
Iterative Deepening Search
Informed search algorithms: These types of algorithms leverage any information (heuristics, path cost) about the problem to search through the search space and find the solution efficiently.
Examples:
Best First Search
Uniform Cost Search
A* Search
Recursive Best First Search
Don't miss the visualisations of these algorithms solving the route-finding problem defined on Romania map at the end of this notebook.
For visualisations, we use networkx and matplotlib to show the map in the notebook and we use ipywidgets to interact with the map to see how the searching algorithm works. These are imported as required in notebook.py.
End of explanation
psource(Problem)
Explanation: PROBLEM
Let's see how we define a Problem. Run the next cell to see how abstract class Problem is defined in the search module.
End of explanation
psource(Node)
Explanation: The Problem class has six methods.
__init__(self, initial, goal) : This is what is called a constructor. It is the first method called when you create an instance of the class as Problem(initial, goal). The variable initial specifies the initial state $s_0$ of the search problem. It represents the beginning state. From here, our agent begins its task of exploration to find the goal state(s) which is given in the goal parameter.
actions(self, state) : This method returns all the possible actions agent can execute in the given state state.
result(self, state, action) : This returns the resulting state if action action is taken in the state state. This Problem class only deals with deterministic outcomes. So we know for sure what every action in a state would result to.
goal_test(self, state) : Return a boolean for a given state - True if it is a goal state, else False.
path_cost(self, c, state1, action, state2) : Return the cost of the path that arrives at state2 as a result of taking action from state1, assuming total cost of c to get up to state1.
value(self, state) : This acts as a bit of extra information in problems where we try to optimise a value when we cannot do a goal test.
NODE
Let's see how we define a Node. Run the next cell to see how abstract class Node is defined in the search module.
End of explanation
psource(GraphProblem)
Explanation: The Node class has nine methods. The first is the __init__ method.
__init__(self, state, parent, action, path_cost) : This method creates a node. parent represents the node that this is a successor of and action is the action required to get from the parent node to this node. path_cost is the cost to reach current node from parent node.
The next 4 methods are specific Node-related functions.
expand(self, problem) : This method lists all the neighbouring(reachable in one step) nodes of current node.
child_node(self, problem, action) : Given an action, this method returns the immediate neighbour that can be reached with that action.
solution(self) : This returns the sequence of actions required to reach this node from the root node.
path(self) : This returns a list of all the nodes that lies in the path from the root to this node.
The remaining 4 methods override standards Python functionality for representing an object as a string, the less-than ($<$) operator, the equal-to ($=$) operator, and the hash function.
__repr__(self) : This returns the state of this node.
__lt__(self, node) : Given a node, this method returns True if the state of current node is less than the state of the node. Otherwise it returns False.
__eq__(self, other) : This method returns True if the state of current node is equal to the other node. Else it returns False.
__hash__(self) : This returns the hash of the state of current node.
We will use the abstract class Problem to define our real problem named GraphProblem. You can see how we define GraphProblem by running the next cell.
End of explanation
romania_map = UndirectedGraph(dict(
Arad=dict(Zerind=75, Sibiu=140, Timisoara=118),
Bucharest=dict(Urziceni=85, Pitesti=101, Giurgiu=90, Fagaras=211),
Craiova=dict(Drobeta=120, Rimnicu=146, Pitesti=138),
Drobeta=dict(Mehadia=75),
Eforie=dict(Hirsova=86),
Fagaras=dict(Sibiu=99),
Hirsova=dict(Urziceni=98),
Iasi=dict(Vaslui=92, Neamt=87),
Lugoj=dict(Timisoara=111, Mehadia=70),
Oradea=dict(Zerind=71, Sibiu=151),
Pitesti=dict(Rimnicu=97),
Rimnicu=dict(Sibiu=80),
Urziceni=dict(Vaslui=142)))
romania_map.locations = dict(
Arad=(91, 492), Bucharest=(400, 327), Craiova=(253, 288),
Drobeta=(165, 299), Eforie=(562, 293), Fagaras=(305, 449),
Giurgiu=(375, 270), Hirsova=(534, 350), Iasi=(473, 506),
Lugoj=(165, 379), Mehadia=(168, 339), Neamt=(406, 537),
Oradea=(131, 571), Pitesti=(320, 368), Rimnicu=(233, 410),
Sibiu=(207, 457), Timisoara=(94, 410), Urziceni=(456, 350),
Vaslui=(509, 444), Zerind=(108, 531))
Explanation: Have a look at our romania_map, which is an Undirected Graph containing a dict of nodes as keys and neighbours as values.
End of explanation
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
Explanation: It is pretty straightforward to understand this romania_map. The first node Arad has three neighbours named Zerind, Sibiu, Timisoara. Each of these nodes are 75, 140, 118 units apart from Arad respectively. And the same goes with other nodes.
And romania_map.locations contains the positions of each of the nodes. We will use the straight line distance (which is different from the one provided in romania_map) between two cities in algorithms like A*-search and Recursive Best First Search.
Define a problem:
Now it's time to define our problem. We will define it by passing initial, goal, graph to GraphProblem. So, our problem is to find the goal state starting from the given initial state on the provided graph.
Say we want to start exploring from Arad and try to find Bucharest in our romania_map. So, this is how we do it.
End of explanation
romania_locations = romania_map.locations
print(romania_locations)
Explanation: Romania Map Visualisation
Let's have a visualisation of Romania map [Figure 3.2] from the book and see how different searching algorithms perform / how frontier expands in each search algorithm for a simple problem named romania_problem.
Have a look at romania_locations. It is a dictionary defined in search module. We will use these location values to draw the romania graph using networkx.
End of explanation
# node colors, node positions and node label positions
node_colors = {node: 'white' for node in romania_map.locations.keys()}
node_positions = romania_map.locations
node_label_pos = { k:[v[0],v[1]-10] for k,v in romania_map.locations.items() }
edge_weights = {(k, k2) : v2 for k, v in romania_map.graph_dict.items() for k2, v2 in v.items()}
romania_graph_data = { 'graph_dict' : romania_map.graph_dict,
'node_colors': node_colors,
'node_positions': node_positions,
'node_label_positions': node_label_pos,
'edge_weights': edge_weights
}
Explanation: Let's get started by initializing an empty graph. We will add nodes, place the nodes in their location as shown in the book, add edges to the graph.
End of explanation
show_map(romania_graph_data)
Explanation: We have completed building our graph based on romania_map and its locations. It's time to display it here in the notebook. This function show_map(node_colors) helps us do that. We will be calling this function later on to display the map at each and every interval step while searching, using variety of algorithms from the book.
We can simply call the function with node_colors dictionary object to display it.
End of explanation
psource(SimpleProblemSolvingAgentProgram)
Explanation: Voila! You see, the romania map as shown in the Figure[3.2] in the book. Now, see how different searching algorithms perform with our problem statements.
SIMPLE PROBLEM SOLVING AGENT PROGRAM
Let us now define a Simple Problem Solving Agent Program. Run the next cell to see how the abstract class SimpleProblemSolvingAgentProgram is defined in the search module.
End of explanation
class vacuumAgent(SimpleProblemSolvingAgentProgram):
def update_state(self, state, percept):
return percept
def formulate_goal(self, state):
goal = [state7, state8]
return goal
def formulate_problem(self, state, goal):
problem = state
return problem
def search(self, problem):
if problem == state1:
seq = ["Suck", "Right", "Suck"]
elif problem == state2:
seq = ["Suck", "Left", "Suck"]
elif problem == state3:
seq = ["Right", "Suck"]
elif problem == state4:
seq = ["Suck"]
elif problem == state5:
seq = ["Suck"]
elif problem == state6:
seq = ["Left", "Suck"]
return seq
Explanation: The SimpleProblemSolvingAgentProgram class has six methods:
__init__(self, initial_state=None): This is the constructor of the class and is the first method to be called when the class is instantiated. It takes in a keyword argument, initial_state, which is initially None. The argument initial_state represents the state from which the agent starts.
__call__(self, percept): This method updates the state of the agent based on its percept using the update_state method. It then formulates a goal with the help of formulate_goal method and a problem using the formulate_problem method and returns a sequence of actions to solve it (using the search method).
update_state(self, percept): This method updates the state of the agent based on its percept.
formulate_goal(self, state): Given a state of the agent, this method formulates the goal for it.
formulate_problem(self, state, goal): It is used in problem formulation given a state and a goal for the agent.
search(self, problem): This method is used to search a sequence of actions to solve a problem.
Let us now define a Simple Problem Solving Agent Program. We will create a simple vacuumAgent class which will inherit from the abstract class SimpleProblemSolvingAgentProgram and overrides its methods. We will create a simple intelligent vacuum agent which can be in any one of the following states. It will move to any other state depending upon the current state as shown in the picture by arrows:
End of explanation
state1 = [(0, 0), [(0, 0), "Dirty"], [(1, 0), ["Dirty"]]]
state2 = [(1, 0), [(0, 0), "Dirty"], [(1, 0), ["Dirty"]]]
state3 = [(0, 0), [(0, 0), "Clean"], [(1, 0), ["Dirty"]]]
state4 = [(1, 0), [(0, 0), "Clean"], [(1, 0), ["Dirty"]]]
state5 = [(0, 0), [(0, 0), "Dirty"], [(1, 0), ["Clean"]]]
state6 = [(1, 0), [(0, 0), "Dirty"], [(1, 0), ["Clean"]]]
state7 = [(0, 0), [(0, 0), "Clean"], [(1, 0), ["Clean"]]]
state8 = [(1, 0), [(0, 0), "Clean"], [(1, 0), ["Clean"]]]
a = vacuumAgent(state1)
print(a(state6))
print(a(state1))
print(a(state3))
Explanation: Now, we will define all the 8 states and create an object of the above class. Then, we will pass it different states and check the output:
End of explanation
def tree_breadth_search_for_vis(problem):
    """Search through the successors of a problem to find a goal.
    The argument frontier should be an empty queue.
    Don't worry about repeated paths to a state. [Figure 3.7]"""
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
#Adding first node to the queue
frontier = deque([Node(problem.initial)])
node_colors[Node(problem.initial).state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
while frontier:
#Popping first node of queue
node = frontier.popleft()
# modify the currently searching node to red
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
# modify goal node to green after reaching the goal
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
frontier.extend(node.expand(problem))
for n in node.expand(problem):
node_colors[n.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
# modify the color of explored nodes to gray
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return None
def breadth_first_tree_search(problem):
"Search the shallowest nodes in the search tree first."
iterations, all_node_colors, node = tree_breadth_search_for_vis(problem)
return(iterations, all_node_colors, node)
Explanation: SEARCHING ALGORITHMS VISUALIZATION
In this section, we have visualizations of the following searching algorithms:
Breadth First Tree Search
Depth First Tree Search
Breadth First Search
Depth First Graph Search
Best First Graph Search
Uniform Cost Search
Depth Limited Search
Iterative Deepening Search
Greedy Best First Search
A*-Search
Recursive Best First Search
We add the colors to the nodes to have a nice visualisation when displaying. So, these are the different colors we are using in these visuals:
* Un-explored nodes - <font color='black'>white</font>
* Frontier nodes - <font color='orange'>orange</font>
* Currently exploring node - <font color='red'>red</font>
* Already explored nodes - <font color='gray'>gray</font>
1. BREADTH-FIRST TREE SEARCH
We have a working implementation in search module. But as we want to interact with the graph while it is searching, we need to modify the implementation. Here's the modified breadth first tree search.
End of explanation
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
a, b, c = breadth_first_tree_search(romania_problem)
display_visual(romania_graph_data, user_input=False,
algorithm=breadth_first_tree_search,
problem=romania_problem)
Explanation: Now, we use ipywidgets to display a slider, a button and our romania map. By sliding the slider we can have a look at all the intermediate steps of a particular search algorithm. By pressing the button Visualize, you can see all the steps without interacting with the slider. These two helper functions are the callback functions which are called when we interact with the slider and the button.
End of explanation
def tree_depth_search_for_vis(problem):
    """Search through the successors of a problem to find a goal.
    The argument frontier should be an empty queue.
    Don't worry about repeated paths to a state. [Figure 3.7]"""
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
#Adding first node to the stack
frontier = [Node(problem.initial)]
node_colors[Node(problem.initial).state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
while frontier:
#Popping first node of stack
node = frontier.pop()
# modify the currently searching node to red
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
# modify goal node to green after reaching the goal
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
frontier.extend(node.expand(problem))
for n in node.expand(problem):
node_colors[n.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
# modify the color of explored nodes to gray
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return None
def depth_first_tree_search(problem):
"Search the deepest nodes in the search tree first."
iterations, all_node_colors, node = tree_depth_search_for_vis(problem)
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=depth_first_tree_search,
problem=romania_problem)
Explanation: 2. DEPTH-FIRST TREE SEARCH
Now let's discuss another searching algorithm, Depth-First Tree Search.
End of explanation
def breadth_first_search_graph(problem):
"[Figure 3.11]"
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
node = Node(problem.initial)
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
frontier = deque([node])
# modify the color of frontier nodes to blue
node_colors[node.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
explored = set()
while frontier:
node = frontier.popleft()
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
explored.add(node.state)
for child in node.expand(problem):
if child.state not in explored and child not in frontier:
if problem.goal_test(child.state):
node_colors[child.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, child)
frontier.append(child)
node_colors[child.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return None
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=breadth_first_search_graph,
problem=romania_problem)
Explanation: 3. BREADTH-FIRST GRAPH SEARCH
Let's change all the node_colors to starting position and define a different problem statement.
End of explanation
def graph_search_for_vis(problem):
    """Search through the successors of a problem to find a goal.
    The argument frontier should be an empty queue.
    If two paths reach a state, only use the first one. [Figure 3.7]"""
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
frontier = [(Node(problem.initial))]
explored = set()
# modify the color of frontier nodes to orange
node_colors[Node(problem.initial).state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
while frontier:
# Popping first node of stack
node = frontier.pop()
# modify the currently searching node to red
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
# modify goal node to green after reaching the goal
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
explored.add(node.state)
frontier.extend(child for child in node.expand(problem)
if child.state not in explored and
child not in frontier)
for n in frontier:
# modify the color of frontier nodes to orange
node_colors[n.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
# modify the color of explored nodes to gray
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return None
def depth_first_graph_search(problem):
    """Search the deepest nodes in the search tree first."""
iterations, all_node_colors, node = graph_search_for_vis(problem)
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=depth_first_graph_search,
problem=romania_problem)
Explanation: 4. DEPTH-FIRST GRAPH SEARCH
Although we have a working implementation in search module, we have to make a few changes in the algorithm to make it suitable for visualization.
End of explanation
def best_first_graph_search_for_vis(problem, f):
    """Search the nodes with the lowest f scores first.
    You specify the function f(node) that you want to minimize; for example,
    if f is a heuristic estimate to the goal, then we have greedy best
    first search; if f is node.depth then we have breadth-first search.
    There is a subtlety: the line "f = memoize(f, 'f')" means that the f
    values will be cached on the nodes as they are computed. So after doing
    a best first search you can examine the f values of the path returned."""
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
f = memoize(f, 'f')
node = Node(problem.initial)
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
frontier = PriorityQueue('min', f)
frontier.append(node)
node_colors[node.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
explored = set()
while frontier:
node = frontier.pop()
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
explored.add(node.state)
for child in node.expand(problem):
if child.state not in explored and child not in frontier:
frontier.append(child)
node_colors[child.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
elif child in frontier:
incumbent = frontier[child]
if f(child) < incumbent:
del frontier[child]
frontier.append(child)
node_colors[child.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return None
Explanation: 5. BEST FIRST SEARCH
Let's change all the node_colors to starting position and define a different problem statement.
End of explanation
def uniform_cost_search_graph(problem):
"[Figure 3.14]"
#Uniform Cost Search uses Best First Search algorithm with f(n) = g(n)
iterations, all_node_colors, node = best_first_graph_search_for_vis(problem, lambda node: node.path_cost)
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=uniform_cost_search_graph,
problem=romania_problem)
Explanation: 6. UNIFORM COST SEARCH
Let's change all the node_colors to starting position and define a different problem statement.
End of explanation
def depth_limited_search_graph(problem, limit = -1):
'''
Perform depth first search of graph g.
if limit >= 0, that is the maximum depth of the search.
'''
# we use these two variables at the time of visualisations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
frontier = [Node(problem.initial)]
explored = set()
cutoff_occurred = False
node_colors[Node(problem.initial).state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
while frontier:
# Popping first node of queue
node = frontier.pop()
# modify the currently searching node to red
node_colors[node.state] = "red"
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
# modify goal node to green after reaching the goal
node_colors[node.state] = "green"
iterations += 1
all_node_colors.append(dict(node_colors))
return(iterations, all_node_colors, node)
elif limit >= 0:
cutoff_occurred = True
limit += 1
all_node_colors.pop()
iterations -= 1
node_colors[node.state] = "gray"
explored.add(node.state)
frontier.extend(child for child in node.expand(problem)
if child.state not in explored and
child not in frontier)
for n in frontier:
limit -= 1
# modify the color of frontier nodes to orange
node_colors[n.state] = "orange"
iterations += 1
all_node_colors.append(dict(node_colors))
# modify the color of explored nodes to gray
node_colors[node.state] = "gray"
iterations += 1
all_node_colors.append(dict(node_colors))
return 'cutoff' if cutoff_occurred else None
def depth_limited_search_for_vis(problem):
    """Search the deepest nodes in the search tree first."""
iterations, all_node_colors, node = depth_limited_search_graph(problem)
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=depth_limited_search_for_vis,
problem=romania_problem)
Explanation: 7. DEPTH LIMITED SEARCH
Let's change all the 'node_colors' to starting position and define a different problem statement.
Although we have a working implementation, we need to make a few changes for visualization.
End of explanation
def iterative_deepening_search_for_vis(problem):
for depth in range(sys.maxsize):
iterations, all_node_colors, node=depth_limited_search_for_vis(problem)
if iterations:
return (iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=iterative_deepening_search_for_vis,
problem=romania_problem)
Explanation: 8. ITERATIVE DEEPENING SEARCH
Let's change all the 'node_colors' to starting position and define a different problem statement.
End of explanation
def greedy_best_first_search(problem, h=None):
    """Greedy Best-first graph search is an informative searching algorithm with f(n) = h(n).
    You need to specify the h function when you call best_first_search, or
    else in your Problem subclass."""
h = memoize(h or problem.h, 'h')
iterations, all_node_colors, node = best_first_graph_search_for_vis(problem, lambda n: h(n))
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=greedy_best_first_search,
problem=romania_problem)
Explanation: 9. GREEDY BEST FIRST SEARCH
Let's change all the node_colors to starting position and define a different problem statement.
End of explanation
def astar_search_graph(problem, h=None):
    """A* search is best-first graph search with f(n) = g(n)+h(n).
    You need to specify the h function when you call astar_search, or
    else in your Problem subclass."""
h = memoize(h or problem.h, 'h')
iterations, all_node_colors, node = best_first_graph_search_for_vis(problem,
lambda n: n.path_cost + h(n))
return(iterations, all_node_colors, node)
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=astar_search_graph,
problem=romania_problem)
Explanation: 10. A* SEARCH
Let's change all the node_colors to starting position and define a different problem statement.
End of explanation
def recursive_best_first_search_for_vis(problem, h=None):
    """[Figure 3.26] Recursive best-first search"""
# we use these two variables at the time of visualizations
iterations = 0
all_node_colors = []
node_colors = {k : 'white' for k in problem.graph.nodes()}
h = memoize(h or problem.h, 'h')
def RBFS(problem, node, flimit):
nonlocal iterations
def color_city_and_update_map(node, color):
node_colors[node.state] = color
nonlocal iterations
iterations += 1
all_node_colors.append(dict(node_colors))
if problem.goal_test(node.state):
color_city_and_update_map(node, 'green')
return (iterations, all_node_colors, node), 0 # the second value is immaterial
successors = node.expand(problem)
if len(successors) == 0:
color_city_and_update_map(node, 'gray')
return (iterations, all_node_colors, None), infinity
for s in successors:
color_city_and_update_map(s, 'orange')
s.f = max(s.path_cost + h(s), node.f)
while True:
# Order by lowest f value
successors.sort(key=lambda x: x.f)
best = successors[0]
if best.f > flimit:
color_city_and_update_map(node, 'gray')
return (iterations, all_node_colors, None), best.f
if len(successors) > 1:
alternative = successors[1].f
else:
alternative = infinity
node_colors[node.state] = 'gray'
node_colors[best.state] = 'red'
iterations += 1
all_node_colors.append(dict(node_colors))
result, best.f = RBFS(problem, best, min(flimit, alternative))
if result[2] is not None:
color_city_and_update_map(node, 'green')
return result, best.f
else:
color_city_and_update_map(node, 'red')
node = Node(problem.initial)
node.f = h(node)
node_colors[node.state] = 'red'
iterations += 1
all_node_colors.append(dict(node_colors))
result, bestf = RBFS(problem, node, infinity)
return result
all_node_colors = []
romania_problem = GraphProblem('Arad', 'Bucharest', romania_map)
display_visual(romania_graph_data, user_input=False,
algorithm=recursive_best_first_search_for_vis,
problem=romania_problem)
all_node_colors = []
# display_visual(romania_graph_data, user_input=True, algorithm=breadth_first_tree_search)
algorithms = { "Breadth First Tree Search": tree_breadth_search_for_vis,
"Depth First Tree Search": tree_depth_search_for_vis,
"Breadth First Search": breadth_first_search_graph,
"Depth First Graph Search": graph_search_for_vis,
"Best First Graph Search": best_first_graph_search_for_vis,
"Uniform Cost Search": uniform_cost_search_graph,
"Depth Limited Search": depth_limited_search_for_vis,
"Iterative Deepening Search": iterative_deepening_search_for_vis,
"Greedy Best First Search": greedy_best_first_search,
"A-star Search": astar_search_graph,
"Recursive Best First Search": recursive_best_first_search_for_vis}
display_visual(romania_graph_data, algorithm=algorithms, user_input=True)
Explanation: 11. RECURSIVE BEST FIRST SEARCH
Let's change all the node_colors to starting position and define a different problem statement.
End of explanation
psource(recursive_best_first_search)
Explanation: RECURSIVE BEST-FIRST SEARCH
Recursive best-first search is a simple recursive algorithm that improves upon heuristic search by reducing the memory requirement.
RBFS uses only linear space and it attempts to mimic the operation of standard best-first search.
Its structure is similar to recursive depth-first search, but rather than continuing indefinitely down the current path, it uses the f_limit variable to keep track of the f-value of the best alternative path available from any ancestor of the current node.
RBFS remembers the f-value of the best leaf in the forgotten subtree and can decide whether it is worth re-expanding the tree later.
<br>
However, RBFS still suffers from excessive node regeneration.
<br>
Let's have a look at the implementation.
End of explanation
recursive_best_first_search(romania_problem).solution()
Explanation: This is how recursive_best_first_search can solve the romania_problem
End of explanation
puzzle = EightPuzzle((2, 4, 3, 1, 5, 6, 7, 8, 0))
assert puzzle.check_solvability((2, 4, 3, 1, 5, 6, 7, 8, 0))
recursive_best_first_search(puzzle).solution()
Explanation: recursive_best_first_search can be used to solve the 8 puzzle problem too, as discussed later.
End of explanation
goal = [1, 2, 3, 4, 5, 6, 7, 8, 0]
Explanation: A* HEURISTICS
Different heuristics give A* different levels of efficiency, which is generally measured by the number of explored nodes as well as the branching factor. With the classic 8 puzzle we can show the efficiency of different heuristics through the number of explored nodes.
8 Puzzle Problem
The 8 Puzzle Problem consists of a 3x3 tray in which the goal is to get the initial configuration to the goal state by shifting the numbered tiles into the blank space.
example:-
Initial State Goal State
| 7 | 2 | 4 | | 1 | 2 | 3 |
| 5 | 0 | 6 | | 4 | 5 | 6 |
| 8 | 3 | 1 | | 7 | 8 | 0 |
We have a total of 9 tiles (including the blank), giving us a total of 9! initial configurations, but not all of these are solvable. The solvability of a configuration can be checked by counting inversions: if the number of inversions is even, the initial configuration is solvable; otherwise it is not. This means that only 9!/2 initial states lead to a solution.
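For instance, a solvability check based on counting inversions could be sketched as follows (search.py's EightPuzzle also provides a check_solvability method, which is used later in this notebook):
def is_solvable(state):
    # Count inversions while ignoring the blank (0); a 3x3 puzzle is
    # solvable exactly when this count is even.
    tiles = [t for t in state if t != 0]
    inversions = sum(1 for i in range(len(tiles))
                     for j in range(i + 1, len(tiles))
                     if tiles[i] > tiles[j])
    return inversions % 2 == 0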
<br>
Let's define our goal state.
End of explanation
# Heuristics for 8 Puzzle Problem
import math
def linear(node):
return sum([1 if node.state[i] != goal[i] else 0 for i in range(8)])
def manhattan(node):
state = node.state
index_goal = {0:[2,2], 1:[0,0], 2:[0,1], 3:[0,2], 4:[1,0], 5:[1,1], 6:[1,2], 7:[2,0], 8:[2,1]}
index_state = {}
index = [[0,0], [0,1], [0,2], [1,0], [1,1], [1,2], [2,0], [2,1], [2,2]]
x, y = 0, 0
for i in range(len(state)):
index_state[state[i]] = index[i]
mhd = 0
for i in range(8):
for j in range(2):
mhd = abs(index_goal[i][j] - index_state[i][j]) + mhd
return mhd
def sqrt_manhattan(node):
state = node.state
index_goal = {0:[2,2], 1:[0,0], 2:[0,1], 3:[0,2], 4:[1,0], 5:[1,1], 6:[1,2], 7:[2,0], 8:[2,1]}
index_state = {}
index = [[0,0], [0,1], [0,2], [1,0], [1,1], [1,2], [2,0], [2,1], [2,2]]
x, y = 0, 0
for i in range(len(state)):
index_state[state[i]] = index[i]
mhd = 0
for i in range(8):
for j in range(2):
mhd = (index_goal[i][j] - index_state[i][j])**2 + mhd
return math.sqrt(mhd)
def max_heuristic(node):
score1 = manhattan(node)
score2 = linear(node)
return max(score1, score2)
Explanation: Heuristics :-
1) Manhattan Distance:- For the 8 puzzle problem, the Manhattan distance is defined as the distance of a tile from its goal position (for the tile numbered '1' in the initial configuration above, the Manhattan distance is 4: 2 for the leftward and 2 for the upward displacement).
2) No. of Misplaced Tiles:- The heuristic calculates the number of misplaced tiles between the current state and goal state.
3) Sqrt of Manhattan Distance:- It calculates the square root of Manhattan distance.
4) Max Heuristic:- It assigns the score as the maximum of "Manhattan Distance" and "No. of Misplaced Tiles".
End of explanation
# Solving the puzzle
puzzle = EightPuzzle((2, 4, 3, 1, 5, 6, 7, 8, 0))
puzzle.check_solvability((2, 4, 3, 1, 5, 6, 7, 8, 0)) # checks whether the initialized configuration is solvable or not
Explanation: We can solve the puzzle using the astar_search method.
End of explanation
astar_search(puzzle).solution()
Explanation: This case is solvable, let's proceed.
<br>
The default heuristic function returns the number of misplaced tiles.
End of explanation
astar_search(puzzle, linear).solution()
astar_search(puzzle, manhattan).solution()
astar_search(puzzle, sqrt_manhattan).solution()
astar_search(puzzle, max_heuristic).solution()
Explanation: In the following cells, we use different heuristic functions.
<br>
End of explanation
recursive_best_first_search(puzzle, manhattan).solution()
Explanation: And here's how recursive_best_first_search can be used to solve this problem too.
End of explanation
puzzle_1 = EightPuzzle((2, 4, 3, 1, 5, 6, 7, 8, 0))
puzzle_2 = EightPuzzle((1, 2, 3, 4, 5, 6, 0, 7, 8))
puzzle_3 = EightPuzzle((1, 2, 3, 4, 5, 7, 8, 6, 0))
Explanation: Even though all the heuristic functions give the same solution, the difference lies in the computation time.
<br>
This might make all the difference in a scenario where high computational efficiency is required.
<br>
Let's define a few puzzle states and time astar_search for every heuristic function.
We will use the %%timeit magic for this.
End of explanation
%%timeit
astar_search(puzzle_1)
astar_search(puzzle_2)
astar_search(puzzle_3)
%%timeit
astar_search(puzzle_1, linear)
astar_search(puzzle_2, linear)
astar_search(puzzle_3, linear)
%%timeit
astar_search(puzzle_1, manhattan)
astar_search(puzzle_2, manhattan)
astar_search(puzzle_3, manhattan)
%%timeit
astar_search(puzzle_1, sqrt_manhattan)
astar_search(puzzle_2, sqrt_manhattan)
astar_search(puzzle_3, sqrt_manhattan)
%%timeit
astar_search(puzzle_1, max_heuristic)
astar_search(puzzle_2, max_heuristic)
astar_search(puzzle_3, max_heuristic)
Explanation: The default heuristic function is the same as the linear heuristic function, but we'll still check both.
End of explanation
%%timeit
recursive_best_first_search(puzzle_1, linear)
recursive_best_first_search(puzzle_2, linear)
recursive_best_first_search(puzzle_3, linear)
Explanation: We can infer that the manhattan heuristic function works the fastest.
<br>
sqrt_manhattan has an extra sqrt operation which makes it quite a lot slower than the others.
<br>
max_heuristic should have been a bit slower as it calls two functions, but in this case, those values were already calculated which saved some time.
Feel free to play around with these functions.
For comparison, this is how RBFS performs on this problem.
End of explanation
psource(hill_climbing)
Explanation: It is quite a lot slower than astar_search as we can see.
HILL CLIMBING
Hill Climbing is a heuristic search used for optimization problems.
Given a large set of inputs and a good heuristic function, it tries to find a sufficiently good solution to the problem.
This solution may or may not be the global optimum.
The algorithm is a variant of generate and test algorithm.
<br>
As a whole, the algorithm works as follows:
- Evaluate the initial state.
- If it is equal to the goal state, return.
- Find a neighboring state (one that is heuristically close to the current state).
- Evaluate this state. If it is closer to the goal state than before, replace the initial state with this state and repeat these steps.
<br>
End of explanation
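Schematically, the loop described above can be written in terms of a generic value function and a neighbour generator. This is only a sketch of the idea, not the library implementation displayed by psource above.
def hill_climbing_sketch(value, neighbors, start):
    # value: scoring function; neighbors: function returning nearby states
    current = start
    while True:
        best = max(neighbors(current), key=value, default=current)
        if value(best) <= value(current):
            return current  # no neighbor improves on the current state
        current = best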
class TSP_problem(Problem):

    """ subclass of Problem to define various functions """

    def two_opt(self, state):
        """ Neighbour generating function for Traveling Salesman Problem """
        neighbour_state = state[:]
        left = random.randint(0, len(neighbour_state) - 1)
        right = random.randint(0, len(neighbour_state) - 1)
        if left > right:
            left, right = right, left
        neighbour_state[left: right + 1] = reversed(neighbour_state[left: right + 1])
        return neighbour_state

    def actions(self, state):
        """ actions that can be executed in the given state """
        return [self.two_opt]

    def result(self, state, action):
        """ result after applying the given action on the given state """
        return action(state)

    def path_cost(self, c, state1, action, state2):
        """ total distance for the Traveling Salesman to be covered if in state2 """
        cost = 0
        for i in range(len(state2) - 1):
            cost += distances[state2[i]][state2[i + 1]]
        cost += distances[state2[0]][state2[-1]]
        return cost

    def value(self, state):
        """ value of path cost given negative for the given state """
        return -1 * self.path_cost(None, None, None, state)
Explanation: We will find an approximate solution to the traveling salesperson problem using this algorithm.
<br>
We need to define a class for this problem.
<br>
Problem will be used as a base class.
End of explanation
distances = {}
all_cities = []
for city in romania_map.locations.keys():
distances[city] = {}
all_cities.append(city)
all_cities.sort()
print(all_cities)
Explanation: We will use cities from the Romania map as our cities for this problem.
<br>
A list of all cities and a dictionary storing distances between them will be populated.
End of explanation
import numpy as np
for name_1, coordinates_1 in romania_map.locations.items():
for name_2, coordinates_2 in romania_map.locations.items():
distances[name_1][name_2] = np.linalg.norm(
[coordinates_1[0] - coordinates_2[0], coordinates_1[1] - coordinates_2[1]])
distances[name_2][name_1] = np.linalg.norm(
[coordinates_1[0] - coordinates_2[0], coordinates_1[1] - coordinates_2[1]])
Explanation: Next, we need to populate the individual entries inside the dictionary with the euclidean distance between the cities (computed with np.linalg.norm on the coordinate differences).
End of explanation
def hill_climbing(problem):
    """From the initial node, keep choosing the neighbor with highest value,
    stopping when no neighbor is better. [Figure 4.2]"""

    def find_neighbors(state, number_of_neighbors=100):
        """ finds neighbors using two_opt method """
        neighbors = []
        for i in range(number_of_neighbors):
            new_state = problem.two_opt(state)
            neighbors.append(Node(new_state))
            state = new_state
        return neighbors

    # as this is a stochastic algorithm, we will set a cap on the number of iterations
    iterations = 10000

    current = Node(problem.initial)
    while iterations:
        neighbors = find_neighbors(current.state)
        if not neighbors:
            break
        neighbor = argmax_random_tie(neighbors,
                                     key=lambda node: problem.value(node.state))
        if problem.value(neighbor.state) <= problem.value(current.state):
            # Note that it is based on negative path cost method
            current.state = neighbor.state
        iterations -= 1
    return current.state
Explanation: The way neighbours are chosen currently isn't suitable for the travelling salesperson problem.
We need a neighboring state that is similar in total path distance to the current state.
<br>
We need to change the function that finds neighbors.
End of explanation
tsp = TSP_problem(all_cities)
Explanation: An instance of the TSP_problem class will be created.
End of explanation
hill_climbing(tsp)
Explanation: We can now generate an approximate solution to the problem by calling hill_climbing.
The results will vary a bit each time you run it.
End of explanation
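Since path_cost is defined on our problem class, we can also report the total length of the tour that hill climbing returns, for example:
tour = hill_climbing(tsp)
print(tour)
# path_cost only uses its last argument, so the other parameters can be None
print('Total tour length:', tsp.path_cost(None, None, None, tour))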
psource(simulated_annealing)
Explanation: The solution looks like this.
It is not difficult to see why this might be a good solution.
<br>
SIMULATED ANNEALING
The intuition behind Hill Climbing was developed from the metaphor of climbing up the graph of a function to find its peak.
There is a fundamental problem in the implementation of the algorithm however.
To find the highest hill, we take one step at a time, always uphill, hoping to find the highest point,
but if we are unlucky enough to start from the shoulder of the second-highest hill, there is no way we can find the highest one.
The algorithm will always converge to the local optimum.
Hill Climbing is also bad at dealing with functions that flatline in certain regions.
If all neighboring states have the same value, we cannot find the global optimum using this algorithm.
<br>
<br>
Let's now look at an algorithm that can deal with these situations.
<br>
Simulated Annealing is quite similar to Hill Climbing,
but instead of picking the best move every iteration, it picks a random move.
If this random move brings us closer to the global optimum, it will be accepted,
but if it doesn't, the algorithm may accept or reject the move based on a probability dictated by the temperature.
When the temperature is high, the algorithm is more likely to accept a random move even if it is bad.
At low temperatures, only good moves are accepted, with the occasional exception.
This allows exploration of the state space and prevents the algorithm from getting stuck at the local optimum.
End of explanation
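The accept-or-reject step the text describes boils down to something like the sketch below, where delta_e is the change in value of the proposed move and T is the current (positive) temperature. This illustrates the rule only; it is not the library code shown above.
import math
import random
def accept_move(delta_e, T):
    # uphill moves are always accepted; downhill moves are accepted
    # with probability exp(delta_e / T), which shrinks as T cools
    return delta_e > 0 or random.random() < math.exp(delta_e / T)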
psource(exp_schedule)
Explanation: The temperature is gradually decreased over the course of the iteration.
This is done by a scheduling routine.
The current implementation uses exponential decay of temperature, but we can use a different scheduling routine instead.
End of explanation
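For example, a linear cooling schedule could be written like this and passed to simulated_annealing in place of exp_schedule (the parameter values here are arbitrary choices, not library defaults):
def linear_schedule(k=20, step=0.02, limit=1000):
    # temperature decreases linearly with time and is clamped at zero
    return lambda t: max(k - step * t, 0) if t < limit else 0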
initial = (0, 0)
grid = [[3, 7, 2, 8], [5, 2, 9, 1], [5, 3, 3, 1]]
Explanation: Next, we'll define a peak-finding problem and try to solve it using Simulated Annealing.
Let's define the grid and the initial state first.
End of explanation
directions4
Explanation: We want to allow only four directions, namely N, S, E and W.
Let's use the predefined directions4 dictionary.
End of explanation
problem = PeakFindingProblem(initial, grid, directions4)
Explanation: Define a problem with these parameters.
End of explanation
solutions = {problem.value(simulated_annealing(problem)) for i in range(100)}
max(solutions)
Explanation: We'll run simulated_annealing a few times and store the solutions in a set.
End of explanation
grid = gaussian_kernel()
Explanation: Hence, the maximum value is 9.
Let's find the peak of a two-dimensional gaussian distribution.
We'll use the gaussian_kernel function from notebook.py to get the distribution.
End of explanation
heatmap(grid, cmap='jet', interpolation='spline16')
Explanation: Let's use the heatmap function from notebook.py to plot this.
End of explanation
directions8
Explanation: Let's define the problem.
This time, we will allow movement in eight directions as defined in directions8.
End of explanation
problem = PeakFindingProblem(initial, grid, directions8)
%%timeit
solutions = {problem.value(simulated_annealing(problem)) for i in range(100)}
max(solutions)
Explanation: We'll solve the problem just like we did last time.
<br>
Let's also time it.
End of explanation
%%timeit
solution = problem.value(hill_climbing(problem))
solution = problem.value(hill_climbing(problem))
solution
Explanation: The peak is at 1.0 which is how gaussian distributions are defined.
<br>
This could also be solved by Hill Climbing as follows.
End of explanation
grid = [[0, 0, 0, 1, 4],
[0, 0, 2, 8, 10],
[0, 0, 2, 4, 12],
[0, 2, 4, 8, 16],
[1, 4, 8, 16, 32]]
heatmap(grid, cmap='jet', interpolation='spline16')
Explanation: As you can see, Hill-Climbing is about 24 times faster than Simulated Annealing.
(Notice that we ran Simulated Annealing for 100 iterations whereas we ran Hill Climbing only once.)
<br>
Simulated Annealing makes up for its tardiness by its ability to be applicable in a larger number of scenarios than Hill Climbing as illustrated by the example below.
<br>
Let's define a 2D surface as a matrix.
End of explanation
problem = PeakFindingProblem(initial, grid, directions8)
Explanation: The peak value is 32 at the lower right corner.
<br>
The region at the upper left corner is planar.
Let's instantiate PeakFindingProblem one last time.
End of explanation
solution = problem.value(hill_climbing(problem))
solution
Explanation: Solution by Hill Climbing
End of explanation
solutions = {problem.value(simulated_annealing(problem)) for i in range(100)}
max(solutions)
Explanation: Solution by Simulated Annealing
End of explanation
psource(genetic_algorithm)
Explanation: Notice that even though both algorithms started at the same initial state,
Hill Climbing could never escape from the planar region and gave a locally optimum solution of 0,
whereas Simulated Annealing could reach the peak at 32.
<br>
A very similar situation arises when there are two peaks of different heights.
One should carefully consider the possible search space before choosing the algorithm for the task.
GENETIC ALGORITHM
Genetic algorithms (or GA) are inspired by natural evolution and are particularly useful in optimization and search problems with large state spaces.
Given a problem, algorithms in the domain make use of a population of solutions (also called states), where each solution/state represents a feasible solution. At each iteration (often called generation), the population gets updated using methods inspired by biology and evolution, like crossover, mutation and natural selection.
Overview
A genetic algorithm works in the following way:
1) Initialize random population.
2) Calculate population fitness.
3) Select individuals for mating.
4) Mate selected individuals to produce new population.
* Random chance to mutate individuals.
5) Repeat from step 2) until an individual is fit enough or the maximum number of iterations is reached.
Glossary
Before we continue, we will lay the basic terminology of the algorithm.
Individual/State: A list of elements (called genes) that represent possible solutions.
Population: The list of all the individuals/states.
Gene pool: The alphabet of possible values for an individual's genes.
Generation/Iteration: The number of times the population will be updated.
Fitness: An individual's score, calculated by a function specific to the problem.
Crossover
Two individuals/states can "mate" and produce one child. This offspring bears characteristics from both of its parents. There are many ways we can implement this crossover. Here we will take a look at the most common ones. Most other methods are variations of those below.
Point Crossover: The crossover occurs around one (or more) point. The parents get "split" at the chosen point or points and then get merged. In the example below we see two parents get split and merged at the 3rd digit, producing the following offspring after the crossover.
Uniform Crossover: This type of crossover chooses randomly the genes to get merged. Here the genes 1, 2 and 5 were chosen from the first parent, so the genes 3, 4 were added by the second parent.
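Roughly, the two variants can be sketched as below; the library's recombine function, shown later, implements the single-point version.
import random
def point_crossover(x, y):
    # split both parents at one random point and glue the halves together
    c = random.randrange(len(x))
    return x[:c] + y[c:]
def uniform_crossover(x, y):
    # every gene is inherited from either parent with equal probability
    return [xi if random.random() < 0.5 else yi for xi, yi in zip(x, y)]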
Mutation
When an offspring is produced, there is a chance it will mutate, having one (or more, depending on the implementation) of its genes altered.
For example, let's say the new individual to undergo mutation is "abcde". Randomly we pick to change its third gene to 'z'. The individual now becomes "abzde" and is added to the population.
Selection
At each iteration, the fittest individuals are picked randomly to mate and produce offsprings. We measure an individual's fitness with a fitness function. That function depends on the given problem and it is used to score an individual. Usually the higher the better.
The selection process is this:
1) Individuals are scored by the fitness function.
2) Individuals are picked randomly, according to their score (higher score means higher chance to get picked). Usually the formula to calculate the chance to pick an individual is the following (for population P and individual i):
$$ chance(i) = \dfrac{fitness(i)}{\sum_{k \, in \, P}{fitness(k)}} $$
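In code, this fitness-proportionate ("roulette wheel") selection can be sketched as follows; the library's select function does essentially the same thing through a weighted sampler.
import random
def roulette_select(population, fitness_fn):
    # pick one individual with probability proportional to its fitness
    fitnesses = [fitness_fn(individual) for individual in population]
    total = sum(fitnesses)
    r = random.uniform(0, total)
    running = 0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= r:
            return individual
    return population[-1]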
Implementation
Below we look over the implementation of the algorithm in the search module.
First the implementation of the main core of the algorithm:
End of explanation
psource(recombine)
Explanation: The algorithm takes the following input:
population: The initial population.
fitness_fn: The problem's fitness function.
gene_pool: The gene pool of the states/individuals. By default 0 and 1.
f_thres: The fitness threshold. If an individual reaches that score, iteration stops. By default 'None', which means the algorithm will not halt until all the generations have been run.
ngen: The number of iterations/generations.
pmut: The probability of mutation.
The algorithm gives as output the state with the largest score.
For each generation, the algorithm updates the population. First it calculates the fitnesses of the individuals, then it selects the most fit ones and finally crosses them over to produce offsprings. There is a chance that the offspring will be mutated, given by pmut. If at the end of the generation an individual meets the fitness threshold, the algorithm halts and returns that individual.
The function of mating is accomplished by the method recombine:
End of explanation
psource(mutate)
Explanation: The method picks at random a point and merges the parents (x and y) around it.
The mutation is done in the method mutate:
End of explanation
psource(init_population)
Explanation: We pick a gene in x to mutate and a gene from the gene pool to replace it with.
To help initializing the population we have the helper function init_population:
End of explanation
target = 'Genetic Algorithm'
Explanation: The function takes as input the number of individuals in the population, the gene pool and the length of each individual/state. It creates individuals with random genes and returns the population when done.
Explanation
Before we solve problems using the genetic algorithm, we will explain how to intuitively understand the algorithm using a trivial example.
Generating Phrases
In this problem, we use a genetic algorithm to generate a particular target phrase from a population of random strings. This is a classic example that helps build intuition about how to use this algorithm in other problems as well. Before we break the problem down, let us try to brute force the solution. Let us say that we want to generate the phrase "genetic algorithm". The phrase is 17 characters long. We can use any character from the 26 lowercase characters and the space character. To generate a random phrase of length 17, each space can be filled in 27 ways. So the total number of possible phrases is
$$ 27^{17} = 2153693963075557766310747 $$
which is a massive number. If we wanted to generate the phrase "Genetic Algorithm", we would also have to include all the 26 uppercase characters into consideration thereby increasing the sample space from 27 characters to 53 characters and the total number of possible phrases then would be
$$ 53^{17} = 205442259656281392806087233013 $$
If we wanted to include punctuations and numerals into the sample space, we would have further complicated an already impossible problem. Hence, brute forcing is not an option. Now we'll apply the genetic algorithm and see how it significantly reduces the search space. We essentially want to evolve our population of random strings so that they better approximate the target phrase as the number of generations increase. Genetic algorithms work on the principle of Darwinian Natural Selection according to which, there are three key concepts that need to be in place for evolution to happen. They are:
Heredity: There must be a process in place by which children receive the properties of their parents. <br>
For this particular problem, two strings from the population will be chosen as parents and will be split at a random index and recombined as described in the recombine function to create a child. This child string will then be added to the new generation.
Variation: There must be a variety of traits present in the population or a means with which to introduce variation. <br>If there is no variation in the sample space, we might never reach the global optimum. To ensure that there is enough variation, we can initialize a large population, but this gets computationally expensive as the population gets larger. Hence, we often use another method called mutation. In this method, we randomly change one or more characters of some strings in the population based on a predefined probability value called the mutation rate or mutation probability as described in the mutate function. The mutation rate is usually kept quite low. A mutation rate of zero fails to introduce variation in the population and a high mutation rate (say 50%) is as good as a coin flip and the population fails to benefit from the previous recombinations. An optimum balance has to be maintained between population size and mutation rate so as to reduce the computational cost as well as have sufficient variation in the population.
Selection: There must be some mechanism by which some members of the population have the opportunity to be parents and pass down their genetic information and some do not. This is typically referred to as "survival of the fittest". <br>
There has to be some way of determining which phrases in our population have a better chance of eventually evolving into the target phrase. This is done by introducing a fitness function that calculates how close the generated phrase is to the target phrase. The function will simply return a scalar value corresponding to the number of matching characters between the generated phrase and the target phrase.
Before solving the problem, we first need to define our target phrase.
End of explanation
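The brute-force counts quoted above are easy to verify directly:
print(27**17)   # 2153693963075557766310747
print(53**17)   # 205442259656281392806087233013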
# The ASCII values of uppercase characters ranges from 65 to 91
u_case = [chr(x) for x in range(65, 91)]
# The ASCII values of lowercase characters ranges from 97 to 123
l_case = [chr(x) for x in range(97, 123)]
gene_pool = []
gene_pool.extend(u_case) # adds the uppercase list to the gene pool
gene_pool.extend(l_case) # adds the lowercase list to the gene pool
gene_pool.append(' ') # adds the space character to the gene pool
Explanation: We then need to define our gene pool, i.e. the elements of which an individual from the population might be composed. Here, the gene pool contains all uppercase and lowercase letters of the English alphabet and the space character.
End of explanation
max_population = 100
Explanation: We now need to define the maximum size of each population. Larger populations have more variation but are computationally more expensive to run algorithms on.
End of explanation
mutation_rate = 0.07 # 7%
Explanation: As our population is not very large, we can afford to keep a relatively large mutation rate.
End of explanation
def fitness_fn(sample):
# initialize fitness to 0
fitness = 0
for i in range(len(sample)):
# increment fitness by 1 for every matching character
if sample[i] == target[i]:
fitness += 1
return fitness
Explanation: Great! Now, we need to define the most important metric for the genetic algorithm, i.e the fitness function. This will simply return the number of matching characters between the generated sample and the target phrase.
End of explanation
population = init_population(max_population, gene_pool, len(target))
Explanation: Before we run our genetic algorithm, we need to initialize a random population. We will use the init_population function to do this. We need to pass in the maximum population size, the gene pool and the length of each individual, which in this case will be the same as the length of the target phrase.
End of explanation
parents = select(2, population, fitness_fn)
# The recombine function takes two parents as arguments, so we need to unpack the previous variable
child = recombine(*parents)
Explanation: We will now define how the individuals in the population should change as the number of generations increases. First, the select function will be run on the population to select two individuals with high fitness values. These will be the parents which will then be recombined using the recombine function to generate the child.
End of explanation
child = mutate(child, gene_pool, mutation_rate)
Explanation: Next, we need to apply a mutation according to the mutation rate. We call the mutate function on the child with the gene pool and mutation rate as the additional arguments.
End of explanation
population = [mutate(recombine(*select(2, population, fitness_fn)), gene_pool, mutation_rate) for i in range(len(population))]
Explanation: The above lines can be condensed into
child = mutate(recombine(*select(2, population, fitness_fn)), gene_pool, mutation_rate)
And, we need to do this for every individual in the current population to generate the new population.
End of explanation
current_best = max(population, key=fitness_fn)
Explanation: The individual with the highest fitness can then be found using the max function.
End of explanation
print(current_best)
Explanation: Let's print this out
End of explanation
current_best_string = ''.join(current_best)
print(current_best_string)
Explanation: We see that this is a list of characters. This can be converted to a string using the join function
End of explanation
ngen = 1200 # maximum number of generations
# we set the threshold fitness equal to the length of the target phrase
# i.e. the algorithm only terminates when it has got all the characters correct
# or it has completed 'ngen' number of generations
f_thres = len(target)
Explanation: We now need to define the conditions to terminate the algorithm. This can happen in two ways
1. Termination after a predefined number of generations
2. Termination when the fitness of the best individual of the current generation reaches a predefined threshold value.
We define these variables below
End of explanation
def genetic_algorithm_stepwise(population, fitness_fn, gene_pool=[0, 1], f_thres=None, ngen=1200, pmut=0.1):
for generation in range(ngen):
population = [mutate(recombine(*select(2, population, fitness_fn)), gene_pool, pmut) for i in range(len(population))]
# stores the individual genome with the highest fitness in the current population
current_best = ''.join(max(population, key=fitness_fn))
print(f'Current best: {current_best}\t\tGeneration: {str(generation)}\t\tFitness: {fitness_fn(current_best)}\r', end='')
# compare the fitness of the current best individual to f_thres
fittest_individual = fitness_threshold(fitness_fn, f_thres, population)
# if fitness is greater than or equal to f_thres, we terminate the algorithm
if fittest_individual:
return fittest_individual, generation
return max(population, key=fitness_fn) , generation
Explanation: To generate ngen number of generations, we run a for loop ngen number of times. After each generation, we calculate the fitness of the best individual of the generation and compare it to the value of f_thres using the fitness_threshold function. After every generation, we print out the best individual of the generation and the corresponding fitness value. Let's now write a function to do this.
End of explanation
psource(genetic_algorithm)
Explanation: The function defined above is essentially the same as the one defined in search.py with the added functionality of printing out the data of each generation.
End of explanation
population = init_population(max_population, gene_pool, len(target))
solution, generations = genetic_algorithm_stepwise(population, fitness_fn, gene_pool, f_thres, ngen, mutation_rate)
Explanation: We have defined all the required functions and variables. Let's now create a new population and test the function we wrote above.
End of explanation
edges = {
'A': [0, 1],
'B': [0, 3],
'C': [1, 2],
'D': [2, 3]
}
Explanation: The genetic algorithm was able to converge!
We implore you to rerun the above cell and play around with target, max_population, f_thres, ngen etc parameters to get a better intuition of how the algorithm works. To summarize, if we can define the problem states in simple array format and if we can create a fitness function to gauge how good or bad our approximate solutions are, there is a high chance that we can get a satisfactory solution using a genetic algorithm.
- There is also a better GUI version of this program genetic_algorithm_example.py in the GUI folder for you to play around with.
Usage
Below we give two example usages for the genetic algorithm, for a graph coloring problem and the 8 queens problem.
Graph Coloring
First we will take on the simpler problem of coloring a small graph with two colors. Before we do anything, let's imagine how a solution might look. First, we have to represent our colors. Say, 'R' for red and 'G' for green. These make up our gene pool. What of the individual solutions though? For that, we will look at our problem. We stated we have a graph. A graph has nodes and edges, and we want to color the nodes. Naturally, we want to store each node's color. If we have four nodes, we can store their colors in a list of genes, one for each node. A possible solution will then look like this: ['R', 'R', 'G', 'R']. In the general case, we will represent each solution with a list of chars ('R' and 'G'), with length the number of nodes.
Next we need to come up with a fitness function that appropriately scores individuals. Again, we will look at the problem definition at hand. We want to color a graph. For a solution to be optimal, no edge should connect two nodes of the same color. How can we use this information to score a solution? A naive (and ineffective) approach would be to count the different colors in the string. So ['R', 'R', 'R', 'R'] has a score of 1 and ['R', 'R', 'G', 'G'] has a score of 2. Why that fitness function is not ideal though? Why, we forgot the information about the edges! The edges are pivotal to the problem and the above function only deals with node colors. We didn't use all the information at hand and ended up with an ineffective answer. How, then, can we use that information to our advantage?
We said that the optimal solution will have all the edges connecting nodes of different color. So, to score a solution we can count how many edges are valid (aka connecting nodes of different color). That is a great fitness function!
Let's jump into solving this problem using the genetic_algorithm function.
First we need to represent the graph. Since we mostly need information about edges, we will just store the edges. We will denote edges with capital letters and nodes with integers:
End of explanation
population = init_population(8, ['R', 'G'], 4)
print(population)
Explanation: Edge 'A' connects nodes 0 and 1, edge 'B' connects nodes 0 and 3 etc.
We already said our gene pool is 'R' and 'G', so we can jump right into initializing our population. Since we have only four nodes, state_length should be 4. For the number of individuals, we will try 8. We can increase this number if we need higher accuracy, but be careful! Larger populations need more computing power and take longer. You need to strike that sweet balance between accuracy and cost (the ultimate dilemma of the programmer!).
End of explanation
def fitness(c):
return sum(c[n1] != c[n2] for (n1, n2) in edges.values())
Explanation: We created and printed the population. You can see that the genes in the individuals are random and there are 8 individuals each with 4 genes.
Next we need to write our fitness function. We previously said we want the function to count how many edges are valid. So, given a coloring/individual c, we will do just that:
End of explanation
solution = genetic_algorithm(population, fitness, gene_pool=['R', 'G'])
print(solution)
Explanation: Great! Now we will run the genetic algorithm and see what solution it gives.
End of explanation
print(fitness(solution))
Explanation: The algorithm converged to a solution. Let's check its score:
End of explanation
population = init_population(100, range(8), 8)
print(population[:5])
Explanation: The solution has a score of 4, which means it is optimal, since we have exactly 4 edges in our graph and all of them are valid!
NOTE: Because the algorithm is non-deterministic, there is a chance a different solution is given. It might even be wrong, if we are very unlucky!
Eight Queens
Let's take a look at a more complicated problem.
In the Eight Queens problem, we are tasked with placing eight queens on an 8x8 chessboard without any queen threatening the others (aka queens should not be in the same row, column or diagonal). In its general form the problem is defined as placing N queens in an NxN chessboard without any conflicts.
First we need to think about the representation of each solution. We can go the naive route of representing the whole chessboard with the queens' placements on it. That is definitely one way to go about it, but for the purpose of this tutorial we will do something different. We have eight queens, so we will have a gene for each of them. The gene pool will be numbers from 0 to 7, for the different columns. The position of the gene in the state will denote the row the particular queen is placed in.
For example, we can have the state "03304577". Here the first gene with a value of 0 means "the queen at row 0 is placed at column 0", for the second gene "the queen at row 1 is placed at column 3" and so forth.
We now need to think about the fitness function. On the graph coloring problem we counted the valid edges. The same thought process can be applied here. Instead of edges though, we have positioning between queens. If two queens are not threatening each other, we say they are at a "non-attacking" positioning. We can, therefore, count how many such positionings are there.
Let's dive right in and initialize our population:
End of explanation
def fitness(q):
non_attacking = 0
for row1 in range(len(q)):
for row2 in range(row1+1, len(q)):
col1 = int(q[row1])
col2 = int(q[row2])
row_diff = row1 - row2
col_diff = col1 - col2
if col1 != col2 and row_diff != col_diff and row_diff != -col_diff:
non_attacking += 1
return non_attacking
Explanation: We have a population of 100 and each individual has 8 genes. The gene pool is the integers from 0 to 7, in string form. Above you can see the first five individuals.
Next we need to write our fitness function. Remember, queens threaten each other if they are at the same row, column or diagonal.
Since positionings are mutual, we must take care not to count them twice. Therefore for each queen, we will only check for conflicts for the queens after her.
A gene's value in an individual q denotes the queen's column, and the position of the gene denotes its row. We can check if the aforementioned values between two genes are the same. We also need to check for diagonals. A queen a is in the diagonal of another queen, b, if the difference of the rows between them is equal to either their difference in columns (for the diagonal on the right of a) or equal to the negative difference of their columns (for the left diagonal of a). Below is given the fitness function.
End of explanation
solution = genetic_algorithm(population, fitness, f_thres=25, gene_pool=range(8))
print(solution)
print(fitness(solution))
Explanation: Note that the best score achievable is 28. That is because for each queen we only check for the queens after her. For the first queen we check 7 other queens, for the second queen 6 others and so on. In short, the number of checks we make is the sum 7+6+5+...+1, which is equal to 7*(7+1)/2 = 28.
Because it is very hard and would take a long time to find a perfect solution, we will set the fitness threshold at 25. If we find an individual with a score greater than or equal to that, we will halt. Let's see how the genetic algorithm will fare.
End of explanation
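The count of 28 is just the number of unordered pairs of queens, which we can confirm quickly:
from itertools import combinations
print(sum(range(1, 8)))                        # 7 + 6 + ... + 1 = 28
print(len(list(combinations(range(8), 2))))    # 8 choose 2 = 28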
psource(NQueensProblem)
Explanation: Above you can see the solution and its fitness score, which should be no less than 25.
This is where we conclude Genetic Algorithms.
N-Queens Problem
Here, we will look at the generalized case of the Eight Queens problem.
<br>
We are given an N x N chessboard with N queens, and we need to place them in such a way that no two queens can attack each other.
<br>
We will solve this problem using search algorithms.
To do this, we already have a NQueensProblem class in search.py.
End of explanation
nqp = NQueensProblem(8)
Explanation: In csp.ipynb we have seen that the N-Queens problem can be formulated as a CSP and can be solved by
the min_conflicts algorithm in a way similar to Hill-Climbing.
Here, we want to solve it using heuristic search algorithms and even some classical search algorithms.
The NQueensProblem class derives from the Problem class and is implemented in such a way that the search algorithms we already have, can solve it.
<br>
Let's instantiate the class.
End of explanation
%%timeit
depth_first_tree_search(nqp)
dfts = depth_first_tree_search(nqp).solution()
plot_NQueens(dfts)
Explanation: Let's use depth_first_tree_search first.
<br>
We will also use the %%timeit magic with each algorithm to see how much time they take.
End of explanation
%%timeit
breadth_first_tree_search(nqp)
bfts = breadth_first_tree_search(nqp).solution()
plot_NQueens(bfts)
Explanation: breadth_first_tree_search
End of explanation
%%timeit
uniform_cost_search(nqp)
ucs = uniform_cost_search(nqp).solution()
plot_NQueens(ucs)
Explanation: uniform_cost_search
End of explanation
psource(NQueensProblem.h)
%%timeit
astar_search(nqp)
Explanation: depth_first_tree_search is almost 20 times faster than breadth_first_tree_search and more than 200 times faster than uniform_cost_search.
We can also solve this problem using astar_search with a suitable heuristic function.
<br>
The best heuristic function for this scenario will be one that returns the number of conflicts in the current state.
End of explanation
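In outline, such a heuristic simply counts the attacking pairs in a placement. The real method displayed by psource above does equivalent bookkeeping; the version below is only a sketch for complete placements.
from itertools import combinations
def conflicts(state):
    # state[row] is the column of the queen placed in that row
    return sum(c1 == c2 or abs(c1 - c2) == abs(r1 - r2)
               for (r1, c1), (r2, c2) in combinations(list(enumerate(state)), 2))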
astar = astar_search(nqp).solution()
plot_NQueens(astar)
Explanation: astar_search is faster than both uniform_cost_search and breadth_first_tree_search.
End of explanation
psource(and_or_graph_search)
Explanation: AND-OR GRAPH SEARCH
An AND-OR graph is a graphical representation of the reduction of goals to conjunctions and disjunctions of subgoals.
<br>
An AND-OR graph can be seen as a generalization of a directed graph.
It contains a number of vertices and generalized edges that connect the vertices.
<br>
Each connector in an AND-OR graph connects a set of vertices $V$ to a single vertex, $v_0$.
A connector can be an AND connector or an OR connector.
An AND connector connects two edges having a logical AND relationship,
while an OR connector connects two edges having a logical OR relationship.
<br>
A vertex can have more than one AND or OR connector.
This is why AND-OR graphs can be expressed as logical statements.
<br>
<br>
AND-OR graphs also provide a computational model for executing logic programs and you will come across this data-structure in the logic module as well.
AND-OR graphs can be searched in depth-first, breadth-first or best-first fashion, traversing the state space linearly or in parallel.
<br>
Our implementation of AND-OR search searches over graphs generated by non-deterministic environments and returns a conditional plan that reaches a goal state in all circumstances.
Let's have a look at the implementation of and_or_graph_search.
End of explanation
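Schematically, the two mutually recursive functions look like the skeleton below. The full implementation displayed above also checks for cycles along the current path; that detail is omitted here, so this is a sketch rather than the library code.
def or_search_sketch(state, problem, path):
    # returns a conditional plan of the form [action, {outcome_state: subplan, ...}]
    if problem.goal_test(state):
        return []
    for action in problem.actions(state):
        plan = and_search_sketch(problem.result(state, action), problem, path + [state])
        if plan is not None:
            return [action, plan]
    return None
def and_search_sketch(states, problem, path):
    # every possible outcome must have a working subplan, otherwise the action is unusable
    plans = {}
    for s in states:
        plans[s] = or_search_sketch(s, problem, path)
        if plans[s] is None:
            return None
    return plans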
vacuum_world = GraphProblemStochastic('State_1', ['State_7', 'State_8'], vacuum_world)
plan = and_or_graph_search(vacuum_world)
plan
def run_plan(state, problem, plan):
if problem.goal_test(state):
return True
if len(plan) != 2:
return False
predicate = lambda x: run_plan(x, problem, plan[1][x])
return all(predicate(r) for r in problem.result(state, plan[0]))
run_plan('State_1', vacuum_world, plan)
Explanation: The search is carried out by two functions and_search and or_search that recursively call each other, traversing nodes sequentially.
It is a recursive depth-first algorithm for searching an AND-OR graph.
<br>
A very similar algorithm fol_bc_ask can be found in the logic module, which carries out inference on first-order logic knowledge bases using AND-OR graph-derived data-structures.
<br>
AND-OR trees can also be used to represent the search spaces for two-player games, where a vertex of the tree represents the problem of one of the players winning the game, starting from the initial state of the game.
<br>
Problems involving MIN-MAX trees can be reformulated as AND-OR trees by representing MAX nodes as OR nodes and MIN nodes as AND nodes.
and_or_graph_search can then be used to find the optimal solution.
Standard algorithms like minimax and expectiminimax (for belief states) can also be applied on it with a few modifications.
Here's how and_or_graph_search can be applied to a simple vacuum-world example.
End of explanation
psource(OnlineDFSAgent)
Explanation: ONLINE DFS AGENT
So far, we have seen agents that use offline search algorithms,
which is a class of algorithms that compute a complete solution before executing it.
In contrast, an online search agent interleaves computation and action.
Online search is better for most dynamic environments and necessary for unknown environments.
<br>
Online search problems are solved by an agent executing actions, rather than just by pure computation.
For a fully observable environment, an online agent cycles through three steps: taking an action, computing the step cost and checking if the goal has been reached.
<br>
For online algorithms in partially-observable environments, there is usually a tradeoff between exploration and exploitation to be taken care of.
<br>
<br>
Whenever an online agent takes an action, it receives a percept or an observation that tells it something about its immediate environment.
Using this percept, the agent can augment its map of the current environment.
For a partially observable environment, this is called the belief state.
<br>
Online algorithms expand nodes in a local order, just like depth-first search as it does not have the option of observing farther nodes like A* search.
Whenever an action from the current state has not been explored, the agent tries that action.
<br>
Difficulty arises when the agent has tried all actions in a particular state.
An offline search algorithm would simply drop the state from the queue in this scenario whereas an online search agent has to physically move back to the previous state.
To do this, the agent needs to maintain a table where it stores the order of nodes it has been to.
This is how our implementation of Online DFS-Agent works.
This agent works only in state spaces where the action is reversible, because of the use of backtracking.
<br>
Let's have a look at the OnlineDFSAgent class.
End of explanation
psource(LRTAStarAgent)
Explanation: It maintains two dictionaries untried and unbacktracked.
untried contains nodes that have not been visited yet.
unbacktracked contains the sequence of nodes that the agent has visited so it can backtrack to it later, if required.
s and a store the state and the action respectively and result stores the final path or solution of the problem.
<br>
Let's look at another online search algorithm.
LRTA* AGENT
We can infer now that hill-climbing is an online search algorithm, but it is not very useful natively because for complicated search spaces, it might converge to the local minima and indefinitely stay there.
In such a case, we can choose to randomly restart it a few times with different starting conditions and return the result with the lowest total cost.
Sometimes, it is better to use random walks instead of random restarts depending on the problem, but progress can still be very slow.
<br>
A better improvement would be to give hill-climbing a memory element.
We store the current best heuristic estimate and it is updated as the agent gains experience in the state space.
The estimated optimal cost is made more and more accurate as time passes, and each time the local minimum is "flattened out" until we escape it.
<br>
This learning scheme is a simple improvement upon traditional hill-climbing and is called learning real-time A*, or LRTA*.
Similar to Online DFS-Agent, it builds a map of the environment and chooses the best possible move according to its current heuristic estimates.
<br>
Actions that haven't been tried yet are assumed to lead immediately to the goal with the least possible cost.
This is called optimism under uncertainty and encourages the agent to explore new promising paths.
This algorithm might not terminate if the state space is infinite, unlike A* search.
<br>
Let's have a look at the LRTAStarAgent class.
End of explanation
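The cost estimate at the heart of the agent can be summarised like this. It is a sketch of the LRTA_cost logic described above; the attribute names (problem.h, problem.c, the dictionary H) follow our reading of OnlineSearchProblem and should be treated as an assumption.
def lrta_cost_sketch(problem, s, a, s1, H):
    # optimism under uncertainty: an unexplored outcome is scored by the heuristic alone
    if s1 is None:
        return problem.h(s)
    # otherwise: one step cost plus the best current estimate for the next state
    return problem.c(s, a, s1) + H.get(s1, problem.h(s1))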
one_dim_state_space
Explanation: H stores the heuristic cost of the paths the agent may travel to.
<br>
s and a store the state and the action respectively.
<br>
problem stores the problem definition and the current map of the environment is stored in problem.result.
<br>
The LRTA_cost method computes the cost of a new path given the current state s, the action a and the next state s1; the estimated cost to get from s to s1 is extracted from H.
Let's use LRTAStarAgent to solve a simple problem.
We'll define a new LRTA_problem instance based on our one_dim_state_space.
End of explanation
LRTA_problem = OnlineSearchProblem('State_3', 'State_5', one_dim_state_space)
Explanation: Let's define an instance of OnlineSearchProblem.
End of explanation
lrta_agent = LRTAStarAgent(LRTA_problem)
Explanation: Now we initialize a LRTAStarAgent object for the problem we just defined.
End of explanation
lrta_agent('State_3')
lrta_agent('State_4')
lrta_agent('State_3')
lrta_agent('State_4')
Explanation: We'll pass the percepts [State_3, State_4, State_3, State_4, State_5] one-by-one to our agent to see what action it comes up with at each timestep.
End of explanation
lrta_agent('State_5')
Explanation: If you manually try to see what the optimal action should be at each step, the outputs of the lrta_agent will start to make sense, if they don't already.
End of explanation
<END_TASK> |
93 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lecture
Step3: 2. List Methods -- 100xp, status
Step5: 3. List Methods II -- 100xp, status | <ASSISTANT_TASK:>
Python Code:
Instructions:
+ Use the upper() method on room and store the result in room_up.
Use the dot notation.
+ Print out room and room_up. Did both change?
+ Print out the number of o's on the variable room by calling count()
on room and passing the letter "o" as an input to the method.
- We're talking about the variable room, not the word "room"!
# string to experiment with: room
room = "poolhouse"
# Use upper() on room: room_up
room_up = room.upper()
# Print out room and room_up
print(room)
print( "\n" + room_up )
# Print out the number of o's in room
print("\n" + str( room.count("o") ) )
Explanation: Lecture: Methods
Following are some of the functions used previously in this course:
Max()
len()
round()
sorted()
Let's learn a few more:
Get index in a list: ?
Reversing a list: ?
Note: all the data structures in python are called objects.
Python has built-in methods, which informally are:
Functions that belong to python objects, e.g. A python object of type string has methods, such as:
capitalize and
replace
Further, objects of type float have "specific methods" depending on the type.
Syntax:
object.method_name( <arguments> )
The . (dot) is the attribute-access operator: it looks up the named method on the object, drawing on Python's standard library definitions of these functions and methods.
To sum things up:
In Python, everything is an object, and each object has a specific method associated with it, depending on the type of object.
Note:
Some methods can also, change the objects they are called on.
e.g. The .append() method!
Consequently, some don't, and thus caution is needed while using such methods!
Lab: Methods
Objective:
Get to know different kinds of methods.
Understand the nuances that come packaged with methods.
by practising them on data types such as string and list.
String Methods -- 100xp, status: Earned
List Methods -- 100xp, status: Earned
List Methods II -- 100xp, status: Earned
1. String Methods -- 100xp, status: earned
End of explanation
Instructions:
+ Use the index() method to get the index of the element
in areas that is equal to 20.0. Print out this index.
+ Call count() on areas to find out how many times 14.5
appears in the list. Again, simply print out this number.
# first let's look more about these methods
help(str.count)
print(2*"\n===================================================")
help(str.index)
# Create list areas
areas = [11.25, 18.0, 20.0, 10.75, 9.50]
# Print out the index of the element 20.0
print( "\nThe index of the element 20.0 is: " + str( areas.index( 20 ) ) )
# Print out how often 14.5 appears in areas
print("\nThe number of times 14.5 occurs is: " + str( areas.count( 14.5 ) ) )
Explanation: 2. List Methods -- 100xp, status: earned
Other Python data types also have many common methods associated with them; some of these methods are exclusive to particular data types.
A few of them we will experiment with:
index(), to get the index of the first element of a list that matches its input.
count(), to get the number of times an element appears in a list.
End of explanation
Instructions:
+ Use the append method twice to add the size of the
poolhouse and the garage again:
- 24.5 and 15.45, respectively.
- Add them in order
+ Print out the areas.
+ Use the reverse() method to reverse the order of the
elements in areas.
+ Print out the area once more.
# Let's look at the help on these methods
help( list.append )
print("=====================================================")
help( list.remove )
print("=====================================================")
help( list.reverse )
# Create list areas
areas = [11.25, 18.0, 20.0, 10.75, 9.50]
# Use append twice to add poolhouse and garage size
areas.append( 24.5 )
areas.append( 15.45 )
# Print out areas
print("\nThe new list contains two new items: " + str( areas ) )
# Reverse the orders of the elements in areas
areas.reverse()
# Print out areas
print("\nThe new list has been reversed: " + str( areas ) )
Explanation: 3. List Methods II -- 100xp, status: earned
Most list methods will change the list they're called on. E.g.
append() : adds an element to the list it is called on.
remove() : removes the first element of a list that matches the input.
reverse() : reverses the order of the elements in the list it is called on.
End of explanation
<END_TASK> |
94 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
robobrowser
Step1: opds
Step2: Failed attempt with requests to submit to yaml loader
Step3: travis webhooks
For https
Step4: travis webhook authentication
I think the documention is incorrect. Instead of 'username/repository', just use the header Travis-Repo-Slug, which, I think, is just the full name of the repo -- e.g., GITenberg/Adventures-of-Huckleberry-Finn_76
When Travis CI makes the POST request, a header named Authorization is included. Its value is the SHA2 hash of the GitHub username (see below), the name of the repository, and your Travis CI token.
For instance, in Python, use this snippet
Step5: testing my webhook implementation | <ASSISTANT_TASK:>
Python Code:
from robobrowser import RoboBrowser
def post_to_yaml_loader(url, unglue_url="https://unglue.it/api/loader/yaml"):
browser = RoboBrowser(history=True)
browser.open(unglue_url)
form = browser.get_forms()[0]
form['repo_url'] = url
# weird I have to manually set referer
browser.session.headers['referer'] = unglue_url
browser.submit_form(form)
return browser
b = post_to_yaml_loader('https://github.com/GITenberg/Adventures-of-Huckleberry-Finn_76/raw/master/metadata.yaml')
(b.url, b.response)
Explanation: robobrowser
End of explanation
from lxml import etree
import requests
opds_url = "https://unglue.it/api/opds/"
doc = etree.fromstring(requests.get(opds_url).content)
doc
Explanation: opds
End of explanation
import requests
from lxml import etree
from lxml.cssselect import CSSSelector
unglue_url = "https://unglue.it/api/loader/yaml"
r = requests.get(unglue_url)
doc = etree.HTML(r.content)
sel = CSSSelector('input[name="csrfmiddlewaretoken"]')
csrftoken = sel(doc)[0].attrib.get('value')
csrftoken
r = requests.post(unglue_url,
data={'repo_url':
'https://github.com/GITenberg/Adventures-of-Huckleberry-Finn_76/raw/master/metadata.yaml',
'csrfmiddlewaretoken':csrftoken
},
headers={'referer':unglue_url})
(r.status_code, r.content)
Explanation: Failed attempt with requests to submit to yaml loader
End of explanation
import requests
raw_url_1 = (
"https://gist.githubusercontent.com/rdhyee/7f33050732a09dfa93f3/raw/8abf5661911e7aedf434d464dd1a28b3d24d6f83/travis_webhook_1.json"
)
raw_url_2 = (
"https://gist.githubusercontent.com/rdhyee/8dc04b8fe52a9fefe3c2/raw/8f9968f481df3f4d4ecd44624c2dc1b0a8e02a17/travis_webhook_2.json"
)
r1 = requests.get(raw_url_1).json()
r2 = requests.get(raw_url_2).json()
# url of metadata.yaml to load:
# https://github.com/GITenberg/Adventures-of-Huckleberry-Finn_76/raw/master/metadata.yaml
r1.get('commit'), r1.get('repository', {}).get('name')
r1
r1.get('type'), r1['state'], r1['result'], r1.get('status_message')
r2.get('type'), r2['state'], r2['result'], r2.get('status_message')
Explanation: travis webhooks
For https://travis-ci.org/GITenberg/Adventures-of-Huckleberry-Finn_76/builds/109712115 -- 2 webhooks were sent to http://requestb.in/wrr6l3wr?inspect:
Travis webhook #1 for https://travis-ci.org/GITenberg/Adventures-of-Huckleberry-Finn_76/builds/109712115
second webhook for https://travis-ci.org/GITenberg/Adventures-of-Huckleberry-Finn_76/builds/109712115
End of explanation
sent_token = "6fba7d2102f66b16139a54e1b434471f6fb64d20c0787ec773e92a5155fad4a9"
from github_settings import TRAVIS_TOKEN, username
from hashlib import sha256
sha256('GITenberg/Adventures-of-Huckleberry-Finn_76' + TRAVIS_TOKEN).hexdigest()
Explanation: travis webhook authentication
I think the documentation is incorrect. Instead of 'username/repository', just use the header Travis-Repo-Slug, which, I think, is just the full name of the repo -- e.g., GITenberg/Adventures-of-Huckleberry-Finn_76
When Travis CI makes the POST request, a header named Authorization is included. Its value is the SHA2 hash of the GitHub username (see below), the name of the repository, and your Travis CI token.
For instance, in Python, use this snippet:
Python
from hashlib import sha256
sha256('username/repository' + TRAVIS_TOKEN).hexdigest()
Use this to ensure Travis CI is the one making requests to your webhook.
How to find TRAVIS_TOKEN? You have to go your profile (I thought you can use the travis CLI: travis token -- but that's for the "access token". There are 3 different types of tokens in play for travis: The Travis CI Blog: Token, Token, Token)
So I'm waiting for https://travis-ci.org/profile/rdhyee-GITenberg to load up -- very slow on Chrome but fast on Firefox?
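Putting the pieces together, the check on the receiving side is roughly the following sketch (note that on Python 3 the concatenated string would need to be encoded before hashing):
from hashlib import sha256
def travis_request_is_authentic(headers, travis_token):
    # recompute sha256(<Travis-Repo-Slug> + token) and compare with the Authorization header
    slug = headers.get('Travis-Repo-Slug', '')
    expected = sha256(slug + travis_token).hexdigest()
    return headers.get('Authorization') == expected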
End of explanation
import requests
url = "http://127.0.0.1:8000/api/travisci/webhook"
test_headers_url = \
"https://gist.githubusercontent.com/rdhyee/a9242f60b568b5a9e8fa/raw/e5d71c9a17964e0d43f6a35bbf03efe3f8a7d752/webhook_headers.txt"
test_body_url = \
"https://gist.githubusercontent.com/rdhyee/a9242f60b568b5a9e8fa/raw/e5d71c9a17964e0d43f6a35bbf03efe3f8a7d752/webook_body.json"
payload = requests.get(test_body_url).content
headers = dict([(k,v.strip()) for (k,v) in [line.split(":") for line in requests.get(test_headers_url).content.split('\n')]])
r = requests.post(url, data={'payload':payload}, headers=headers, allow_redirects=True)
(r.status_code, r.content)
# example of a request to exercise exception
import json
payload = json.dumps({
"repository":{
"id":4651401,
"name":"Adventures-of-Huckleberry-Finn_76",
"owner_name":"GITenberg",
"url":"http://GITenberg.github.com/"
},
"status_message": "Passed",
"type": "push"
})
r = requests.post(url, data={'payload':payload}, headers={}, allow_redirects=True)
(r.status_code, r.content)
r = requests.get(url, allow_redirects=True)
(r.status_code, r.content)
Explanation: testing my webhook implementation
End of explanation
<END_TASK> |
95 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<h2><span style="color
Step1: Development of the Processor class to calculate all the stats
Step2: Prototyping the dcons function to split alleles per base
Step3: Loading pickled results
Step4: Prototype Dxy
Step5: Prototype Fst
Step6: Prototyping collating stats across runs | <ASSISTANT_TASK:>
Python Code:
!hostname
%load_ext autoreload
%autoreload 2
%matplotlib inline
import ipyrad
import ipyrad.analysis as ipa
import ipyparallel as ipp
from ipyrad.analysis.popgen import Popgen
from ipyrad import Assembly
from ipyrad.analysis.locus_extracter import LocusExtracter
ipyclient = ipp.Client(cluster_id="popgen")
print(len(ipyclient))
# popgen tools can accept either an ipyrad assembly
data = ipyrad.load_json("/tmp/ipyrad-test/watdo.json")
# or alternatively the path to your VCF or HDF5 formatted snps file
#data = "/tmp/ipyrad-test/watdo_outfiles/watdo.snps.hdf5"
imap = {
"pop1" : ["1A_0", "1B_0", "1C_0", "1D_0"],
"pop2" : ["2E_0", "2F_0", "2G_0", "2H_0"],
"pop3" : ["3I_0", "3J_0", "3K_0", "3L_0"],
}
popgen = Popgen(data=data, imap=imap)
popgen.samples
popgen.params
from IPython.display import display
popgen.run(ipyclient=ipyclient)
popgen.results
display(popgen.results)
from ipyrad.analysis.locus_extracter import LocusExtracter
import ipyparallel as ipp
ipyclient = ipp.Client(cluster_id="popgen")
print(len(ipyclient))
lex = LocusExtracter(
data=data.seqs_database,
imap=imap,
mincov=len(imap), # ENFORCE at least 1 per spp.
)
lex.run(ipyclient=ipyclient)
print(len(popgen.lex.loci))
popgen.lex.get_locus(1, as_df=True)
import pandas as pd
wat = pd.DataFrame()
with h5py.File(data.snps_database, 'r') as io5:
diffs = io5["snps"][0] != io5["snps"][1]
for idx, name in enumerate(io5["snps"].attrs["names"]):
wat[name.decode("utf-8")] = io5["snps"][idx]
wat["1A_0"]
import h5py
with h5py.File(data.seqs_database, 'r') as io5:
print(io5.keys())
print(io5["phymap"].attrs.keys())
print(io5["phymap"].attrs["phynames"])
print(io5["phy"][0])
Explanation: <h2><span style="color:gray">ipyrad-analysis toolkit:</span> Popgen summary statistics</h2>
Calculate summary statistics such as pi, Tajima's D, Fst
End of explanation
from collections import Counter
from itertools import combinations
import numpy as np
# Make a processor and give it some data
loci = [lex.get_locus(x, as_df=True) for x in range(2)]
proc = ipa.popgen.Processor(popgen.params, 0, loci)
proc.run()
#proc._pi()
locus = loci[0]
#locus[10][:5] = 82
#display(locus)
#%timeit proc.pi(locus)
print(proc.pi(locus))
print(proc.Watterson(locus))
print(proc.TajimasD(locus))
print(proc.results)
Explanation: Development of the Processor class to calculate all the stats
End of explanation
from ipyrad.assemble.utils import DCONS
import pandas as pd
import itertools
p1 = popgen.imap["pop1"]
#locus.loc[p1, :].apply(lambda x: [DCONS[y] for y in x])
cts = np.array(locus.apply(lambda bases:\
Counter(x for x in bases if x not in [45, 78])))
snps = np.array([len(x) for x in cts]) > 1
cts = cts[snps]
def dcons(counter):
new = list(itertools.chain(*[DCONS[x]*ct for x, ct in counter.items()]))
return Counter(new)
print(cts)
%timeit list(map(dcons, cts))
Explanation: Prototyping the dcons function to split alleles per base
End of explanation
ipyrad.analysis.popgen._calc_sumstats(popgen, 10, loci)
import pickle
!ls analysis-popgen/
with open("analysis-popgen/0.p", 'rb') as inp:
dat = pickle.load(inp)
dat
Explanation: Loading pickled results
End of explanation
proc._process_locus_pops(locus, ["pop1", "pop3"])
pop_cts, sidxs = proc._process_locus_pops(locus, ["pop1", "pop2"])
# Between population summary statistics
def _dxy(cts_a, cts_b):
Dxy = 0
ncomps = 0
for cta, ctb in zip(cts_a, cts_b):
ncomps += sum(list(cta.values())) *\
sum(list(ctb.values()))
for ka, va in cta.items():
for kb, vb in ctb.items():
if ka == kb: continue
Dxy += va*vb
print(Dxy, ncomps)
return Dxy/ncomps
Dxy = _dxy(pop_cts["pop1"], pop_cts["pop2"])
Dxy/len(locus)
%timeit proc.Dxy(locus, ["pop1", "pop2"])
Explanation: Prototype Dxy
End of explanation
proc._fst_full(locus)
print(np.zeros(len(proc.data.imap), len(proc.data.imap)))
Dxy_arr = pd.DataFrame(
    data=np.zeros((len(proc.data.imap), len(proc.data.imap))),  # shape as a tuple
index=proc.data.imap.keys(),
columns=proc.data.imap.keys(),
)
loci = [lex.get_locus(x, as_df=True) for x in range(100)]
proc = ipa.popgen.Processor(popgen.params, 0, loci)
proc.run()
Explanation: Prototype Fst
End of explanation
import glob
pickles = glob.glob(os.path.join(popgen.workdir, "*.p"))
sorted(pickles, key=lambda x: int(x.rsplit("/", 1)[-1][:-2]))
#pickles[0].rsplit("/", 1)[-1][:-2]
pdicts = {}
for pkl in pickles:
with open(pkl, 'rb') as inp:
pdicts[int(pkl.rsplit("/", 1)[-1][:-2])] = pickle.load(inp)
pdicts[0]["pi"]
#print(pdicts[0]["pi"])
pdicts[0]["Fst"].keys()
full_res = {}
for d in [pdicts]: full_res.update(d)
full_res.keys()
pidx = sorted(full_res.keys())
pi_dict = {}
w_theta_dict = {}
tajd_dict = {}
for idx in pidx:
pi_dict.update(full_res[idx]["pi"])
w_theta_dict.update(full_res[idx]["Watterson"])
tajd_dict.update(full_res[idx]["TajimasD"])
popstats = {}
for pop in proc.imap:
popstats[pop] = pd.DataFrame([], columns=["pi",
"raw_pi",
"Watterson",
"raw_Watterson",
"TajimasD"], index=range(len(popgen.lex.loci)))
for lidx in range(len(popgen.lex.loci)):
popstats[pop]["pi"].loc[lidx] = pi_dict[lidx][pop]["pi_per_base"]
popstats[pop]["raw_pi"].loc[lidx] = pi_dict[lidx][pop]["pi"]
popstats[pop]["Watterson"].loc[lidx] = w_theta_dict[lidx][pop]["w_theta_per_base"]
popstats[pop]["raw_Watterson"].loc[lidx] = w_theta_dict[lidx][pop]["w_theta"]
popstats[pop]["TajimasD"].loc[lidx] = tajd_dict[lidx][pop]
lidx = sorted(full_res.keys())
for idx in lidx[:1]:
for pop in proc.imap:
for bidx in full_res[idx]["pi"]:
print(full_res[idx]["pi"][bidx][pop]["pi_per_base"])
# pi_per_base = np.mean(full_res[idx]["pi"][idx][pop]["pi_per_base"])
# print(pop, pi_per_base)
pi_dict[0]
#popstats["pop1"].mean()
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
df = pd.read_csv("/tmp/gorg-tropics_sags_tableS2 - SAGs.csv")
print(df.columns)
set(df["Lineage"])
#df[["Genome completeness (%)", "Lineage"]]
df[["Raw read count", "Lineage"]]
plt.hist(df[df["Lineage"] == "AEGEAN-169"]["Genome completeness (%)"])
import locale
locale.setlocale(locale.LC_ALL, 'en_US.UTF8')
fig, ax = plt.subplots(figsize=(10, 10))
for l in set(df["Lineage"]):
#print(l, np.mean(df[df["Lineage"] == l]["Genome completeness (%)"]))
#print(l, np.mean([locale.atoi(x) for x in df[df["Lineage"] == l]["Raw read count"]]))
#print(l, np.std([locale.atoi(x) for x in df[df["Lineage"] == l]["Assembly size (bp)"]]))
    lmask = df[np.array(df["Lineage"] == l) & np.array(df["Genome completeness (%)"] > 80)]  # combine boolean masks with &, not +
# cmask = df[df["Genome completeness (%)" > 80]]
try:
alpha=0.05
if l == "AEGEAN-169": alpha=1
plt.hist(lmask["Genome completeness (%)"], alpha=alpha, label=l, bins=40)
except: pass
plt.xlim(80, 100)
plt.legend()
nsamps=10
nspecies=3
dfs = []
for idx in range(nspecies):
df = pd.DataFrame([
[idx] * nsamps,
range(nsamps),
np.random.normal(0, 10, nsamps),
np.random.normal(0, 1, nsamps),
np.random.randint(0, 100, nsamps),
np.random.choice(["small", "medium", "large"], nsamps),
],
index=["Species_ID", "Sample_ID", "Trait1", "Trait2", "Trait3", "Trait4"]).T
dfs.append(df)
df = pd.concat(dfs)
df.to_csv("/tmp/watdo.csv", index=False)
!cat /tmp/watdo.csv
pd.set_option('display.max_rows', 999)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
inf = "/home/isaac/Continuosity/NEON/NEON_seq-metabarcode-zooplankton/NEON.D03.BARC.DP1.20221.001.2019-07.expanded.20210123T023002Z.RELEASE-2021/NEON.D03.BARC.DP1.20221.001.zoo_metabarcodeTaxonomy.2019-07.expanded.20201218T153238Z.csv"
df = pd.read_csv(inf)
df
Explanation: Prototyping collating stats across runs
End of explanation
<END_TASK> |
96 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
9. Linear regression
2. In the quadrilateral $ABCD$, independent and equally precise measurements of the angles $ABD$, $DBC$, $ABC$, $BCD$, $CDB$, $BDA$, $CDA$, $DAB$ (in degrees) gave the results $50.78$, $30.25$, $78.29$, $99.57$, $50.42$, $40.59$, $88.87$, $89.86$, respectively. Assuming the measurement errors are normally distributed as $N(0, \sigma^2)$, find the optimal estimates of the angles $\beta_1 = ABD$, $\beta_2 = DBC$, $\beta_3 = CDB$, $\beta_4 = BDA$ and of the unknown variance $\sigma^2$. The numerical computation can be done in Python.
Step1: Note that $ABD = \beta_1$, $DBC = \beta_2$, $ABC = \beta_1 + \beta_2$, $BCD = 180 - \beta_2 - \beta_3$, $CDB = \beta_3$, $BDA = \beta_4$, $CDA = \beta_3 + \beta_4$, $DAB = 180 - \beta_1 - \beta_4$. Then | <ASSISTANT_TASK:>
Python Code:
import numpy
Explanation: 9. Linear regression
2. In the quadrilateral $ABCD$, independent and equally precise measurements of the angles $ABD$, $DBC$, $ABC$, $BCD$, $CDB$, $BDA$, $CDA$, $DAB$ (in degrees) gave the results $50.78$, $30.25$, $78.29$, $99.57$, $50.42$, $40.59$, $88.87$, $89.86$, respectively. Assuming the measurement errors are normally distributed as $N(0, \sigma^2)$, find the optimal estimates of the angles $\beta_1 = ABD$, $\beta_2 = DBC$, $\beta_3 = CDB$, $\beta_4 = BDA$ and of the unknown variance $\sigma^2$. The numerical computation can be done in Python.
End of explanation
X = numpy.array([
[1, 0, 0, 0],
[0, 1, 0, 0],
[1, 1, 0, 0],
[0, -1, -1, 0],
[0, 0, 1, 0],
[0, 0, 0, 1],
[0, 0, 1, 1],
[-1, 0, 0, -1]
])
Y = numpy.array([50.78, 30.25, 78.29, 99.57 - 180, 50.42, 40.59, 88.87, 89.86 - 180]).T
Beta = numpy.linalg.inv(X.T @ X) @ X.T @ Y
print('Beta =', Beta)
n, k = X.shape
RSS = (Y - X @ Beta).T @ (Y - X @ Beta)
sigma2 = RSS / (n - k)
print('sigma^2 =', sigma2)
Explanation: Note that $ABD = \beta_1$, $DBC = \beta_2$, $ABC = \beta_1 + \beta_2$, $BCD = 180 - \beta_2 - \beta_3$, $CDB = \beta_3$, $BDA = \beta_4$, $CDA = \beta_3 + \beta_4$, $DAB = 180 - \beta_1 - \beta_4$. Then:
End of explanation
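For reference, the estimates printed above are the usual least-squares formulas
$$\hat{\beta} = (X^T X)^{-1} X^T Y, \qquad \hat{\sigma}^2 = \frac{RSS}{n - k} = \frac{(Y - X\hat{\beta})^T (Y - X\hat{\beta})}{n - k},$$
with $n = 8$ measured angles and $k = 4$ estimated parameters.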
<END_TASK> |
97 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
This note will show you how to use BigBang to investigate a particular project participant's activity.
We will focus on Fernando Perez's role within the IPython community.
First, imports.
Step1: Let's get all the available data from the IPython community. For now, this is just the mailing lists. One day, BigBang will also get its issue tracker data! That will be very exciting.
Step2: Now let's isolate the messages involving Fernando Perez.
This includes both messages from Fernando, and messages to Fernando.
Step3: Note that our way of finding Fernando Perez was not very precise. We've picked up another Fernando.
Step4: In future iterations, we will use a more sensitive entity recognition technique to find Fernando. This will have to do for now.
We will also need the data for all the emails that were not sent by Fernando.
Step5: We now have two Archives made from the original Archive, with the same range of dates, but one with and the other without Fernando. Both contain emails from many addresses. We want to get a single metric of activity.
Step6: Let's make a stackplot of this data to see how much of the conversation on the IPython developer's list has been Fernando, over time. | <ASSISTANT_TASK:>
Python Code:
from bigbang.archive import Archive
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
Explanation: This note will show you how to use BigBang to investigate a particular project participant's activity.
We will focus on Fernando Perez's role within the IPython community.
First, imports.
End of explanation
url = "ipython-user"
arx = Archive(url)
Explanation: Let's get all the available data from the IPython community. For now, this is just the mailing lists. One day, BigBang will also get its issue tracker data! That will be very exciting.
End of explanation
fernandos = Archive(arx.data[arx.data.From.map(lambda x: 'Fernando' in x)])
fernandos.data[:3]
Explanation: Now let's isolate the messages involving Fernando Perez.
This includes both messages from Fernando, and messages to Fernando.
End of explanation
[x for x in fernandos.get_activity()]
Explanation: Note that our way of finding Fernando Perez was not very precise. We've picked up another Fernando.
End of explanation
not_fernandos = Archive(arx.data[arx.data.From.map(lambda x: 'Fernando' not in x)])
not_fernandos.data[:3]
Explanation: In future iterations, we will use a more sensitive entity recognition technique to find Fernando. This will have to do for now.
We will also need the data for all the emails that were not sent by Fernando.
End of explanation
not_fernandos.get_activity().sum(1).values.shape
nf = pd.DataFrame(not_fernandos.get_activity().sum(1))
f = pd.DataFrame(fernandos.get_activity().sum(1))
both = pd.merge(nf,f,how="outer",left_index=True,right_index=True,suffixes=("_nf","_f")).fillna(0)
Explanation: We now have two Archives made from the original Archive, with the same range of dates, but one with and the other without Fernando. Both contain emails from many addresses. We want to get a single metric of activity.
End of explanation
fig = plt.figure(figsize=(12.5, 7.5))
fa = fernandos.get_activity()
d = np.row_stack((both['0_f'],
both['0_nf']))
plt.stackplot(both.index.values,d,linewidth=0)
fig.axes[0].xaxis_date()
plt.show()
Explanation: Let's make a stackplot of this data to see how much of the conversation on the IPython developer's list has been Fernando, over time.
End of explanation
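# Possible follow-up (not in the original notebook): plot Fernando's share of the
# list traffic directly, using the '0_f' and '0_nf' columns created by the merge above.
share = both['0_f'] / (both['0_f'] + both['0_nf'])
fig2 = plt.figure(figsize=(12.5, 5))
plt.plot(both.index.values, share.values)
plt.ylabel("Fraction of messages involving Fernando")
fig2.axes[0].xaxis_date()
plt.show()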
<END_TASK> |
98 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Update TOC trends analysis
Tore has previously written code to calculate Mann-Kendall (M-K) trend statistics and Sen's slope estimates for data series in RESA2. According to my notes from a meeting with Tore on 13/05/2016, the workflow goes something like this
Step3: 2. Statistical functions
Looking at the output in the ICPW_STATISTICS3 table of RESA2, we need to calculate the following statistics (only some of which are output by the Excel macro)
Step4: 3. Perform comparison
Step5: And below is the output from the Excel macro for comparison.
Step6: My code gives near-identical results to those from the Excel macro, although there are a few edge cases that might be worth investigating further. For example, if there are fewer than 10 non-null values, my code currently prints a warning. I'm not sure exactly what the Excel macro does yet, but in general it seems that for fewer than 10 values it's necessary to use a lookup table (see e.g. the Instructions sheet of the file here).
4. Get data from RESA2
The next step is to read the correct data directly from RESA2 and summarise it to look like raw_df, above. Start off by connecting to the database.
Step7: Looking at the ICPW_STATISTICS table in RESA2, it seems as though trends have been assessed for 14 parameters and several different time periods for each site of interest. The length and number of time periods vary from site to site, so I'll need to check with Heleen regarding how these varaibles should be chosen. The 14 parameters are as follows
Step8: 4.1.2. Sea-salt corrected values
The Xs are sea-salt corrected values (also sometimes denoted with an asterisk e.g. Ca*). They are calculated by comparison to chloride concentrations, which are generally assumed to be conservative. The usual equation is
Step9: 4.3. Extract time series
The next step is to get time series for the desired parameters for each of these stations.
Step10: 4.4. Aggregate to annual
Step11: 4.4. Convert units and apply sea-salt correction
I haven't calculated all 14 parameters here, as I'm not sure exactly what they all are. The ones I'm reasonably certain of are included below.
Step13: 4.5. Calculate trends
Step14: 5. Compare to previous trends analysis
This seems to be working OK so far, but I need to do some more testing to see that my results more-or-less agree with those calculated previously by Tore. As a start, let's compare the results above with those in the ICPW_STATISTICS3 table of RESA2, which is where (I think) Tore has saved his previous output.
Step15: For e.g. site 23499, I can now re-run my code for the period from 1990 to 2004 and compare my results to those above. | <ASSISTANT_TASK:>
Python Code:
# Read data and results from the Excel macro
in_xlsx = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015'
r'\Data\mk_sen_test_data.xlsx')
raw_df = pd.read_excel(in_xlsx, sheetname='input')
res_df = pd.read_excel(in_xlsx, sheetname='results')
raw_df
res_df
Explanation: Update TOC trends analysis
Tore has previously written code to calculate Mann-Kendall (M-K) trend statistics and Sen's slope estimates for data series in RESA2. According to my notes from a meeting with Tore on 13/05/2016, the workflow goes something like this:
Run code to extract and summarise time series from RESA2, insert this data into Mann-Kendall_Sen.xls, then read the results back into a new table in RESA2 called e.g. ICPW_STATISTICS. <br><br>
Run the ICPStat query in Icp-waters2001_2000.accdb to summarise the data in ICPW_STATISTICS. This creates a new table currently called aaa, but Tore says he'll rename it to something more descriptive before he leaves. <br><br>
Run the export() subroutine in the Export module of Icp-waters2001_2000.accdb to reformat the aaa table and write the results to an Excel file.
Mann-Kendall_Sen.xls is an early version of the popular Excel macro MULTIMK/CONDMK, which Tore has modified slightly for use in this analysis. (A more recent version of the same file is available here). This Excel macro permits some quite sophisticated multivariate and conditional analyses, but as far as I can tell the TOC trends code is only making use of the most basic functionality - performing repeated independent trend tests on annually summarised time series.
Unfortunately, although the workflow above makes sense, I've so far failed to find and run Tore's code for step 1 (I can find everything for steps 2 and 3, but not the code for interacting with the Excel workbook). It also seems a bit messy to be switching back and forth between RESA2, Excel and Access in this way, so the code here is a first step towards refactoring the whole analysis into Python.
1. Test data
The Mann-Kendall_Sen.xls file on the network already had some example ICPW data in it, which I can use to test my code. The raw input data and the results obtained from the Excel macro are saved as mk_sen_test_data.xlsx.
End of explanation
def mk_test(x, stn_id, par, alpha=0.05):
Adapted from http://pydoc.net/Python/ambhas/0.4.0/ambhas.stats/
by Sat Kumar Tomer.
Perform the MK test for monotonic trends. Uses the "normal
approximation" to determine significance and therefore should
only be used if the number of values is >= 10.
Args:
x: 1D array of data
name: Name for data series (string)
alpha: Significance level
Returns:
var_s: Variance of test statistic
s: M-K test statistic
z: Normalised test statistic
p: p-value of the significance test
trend: Whether to reject the null hypothesis (no trend) at
the specified significance level. One of:
'increasing', 'decreasing' or 'no trend'
import numpy as np
from scipy.stats import norm
n = len(x)
if n < 10:
print (' Data series for %s at site %s has fewer than 10 non-null values. '
'Significance estimates may be unreliable.' % (par, int(stn_id)))
# calculate S
s = 0
for k in xrange(n-1):
for j in xrange(k+1,n):
s += np.sign(x[j] - x[k])
# calculate the unique data
unique_x = np.unique(x)
g = len(unique_x)
# calculate the var(s)
if n == g: # there is no tie
var_s = (n*(n-1)*(2*n+5))/18.
else: # there are some ties in data
tp = np.zeros(unique_x.shape)
for i in xrange(len(unique_x)):
tp[i] = sum(unique_x[i] == x)
# Sat Kumar's code has "+ np.sum", which is incorrect
var_s = (n*(n-1)*(2*n+5) - np.sum(tp*(tp-1)*(2*tp+5)))/18.
if s>0:
z = (s - 1)/np.sqrt(var_s)
elif s == 0:
z = 0
elif s<0:
z = (s + 1)/np.sqrt(var_s)
else:
z = np.nan
# calculate the p_value
p = 2*(1-norm.cdf(abs(z))) # two tail test
h = abs(z) > norm.ppf(1-alpha/2.)
if (z<0) and h:
trend = 'decreasing'
elif (z>0) and h:
trend = 'increasing'
elif np.isnan(z):
trend = np.nan
else:
trend = 'no trend'
return var_s, s, z, p, trend
def wc_stats(raw_df, st_yr=None, end_yr=None):
Calculate key statistics for the TOC trends analysis:
'station_id'
'par_id'
'non_missing'
'median'
'mean'
'std_dev'
'period'
'mk_std_dev'
'mk_stat'
'norm_mk_stat'
'mk_p_val'
'trend'
'sen_slp'
Args:
raw_df: Dataframe with annual data for a single station. Columns must
be: [station_id, year, par1, par2, ... parn]
st_yr: First year to include in analysis. Pass None to start
at the beginning of the series
end_year: Last year to include in analysis. Pass None to start
at the beginning of the series
Returns:
df of key statistics.
import numpy as np, pandas as pd
from scipy.stats import theilslopes
# Checking
df = raw_df.copy()
assert list(df.columns[:2]) == ['STATION_ID', 'YEAR'], 'Columns must be: [STATION_ID, YEAR, par1, par2, ... parn]'
assert len(df['STATION_ID'].unique()) == 1, 'You can only process data for one site at a time'
# Get just the period of interest
if st_yr:
df = df.query('YEAR >= @st_yr')
if end_yr:
df = df.query('YEAR <= @end_yr')
# Get stn_id
stn_id = df['STATION_ID'].iloc[0]
# Tidy up df
df.index = df['YEAR']
df.sort_index(inplace=True)
del df['STATION_ID'], df['YEAR']
# Container for results
data_dict = {'station_id':[],
'par_id':[],
'non_missing':[],
'median':[],
'mean':[],
'std_dev':[],
'period':[],
'mk_std_dev':[],
'mk_stat':[],
'norm_mk_stat':[],
'mk_p_val':[],
'trend':[],
'sen_slp':[]}
# Loop over pars
for col in df.columns:
# 1. Station ID
data_dict['station_id'].append(stn_id)
# 2. Par ID
data_dict['par_id'].append(col)
# 3. Non-missing
data_dict['non_missing'].append(pd.notnull(df[col]).sum())
# 4. Median
data_dict['median'].append(df[col].median())
# 5. Mean
data_dict['mean'].append(df[col].mean())
# 6. Std dev
data_dict['std_dev'].append(df[col].std())
# 7. Period
st_yr = df.index.min()
end_yr = df.index.max()
per = '%s-%s' % (st_yr, end_yr)
data_dict['period'].append(per)
# 8. M-K test
# Drop missing values
mk_df = df[[col]].dropna(how='any')
# Only run stats if more than 1 valid value
if len(mk_df) > 1:
var_s, s, z, p, trend = mk_test(mk_df[col].values, stn_id, col)
data_dict['mk_std_dev'].append(np.sqrt(var_s))
data_dict['mk_stat'].append(s)
data_dict['norm_mk_stat'].append(z)
data_dict['mk_p_val'].append(p)
data_dict['trend'].append(trend)
# 8. Sen's slope
# First element of output gives median slope. Other results could
# also be useful - see docs
sslp = theilslopes(mk_df[col].values, mk_df.index, 0.95)[0]
data_dict['sen_slp'].append(sslp)
# Otherwise all NaN
else:
for par in ['mk_std_dev', 'mk_stat', 'norm_mk_stat',
'mk_p_val', 'trend', 'sen_slp']:
data_dict[par].append(np.nan)
# Build to df
res_df = pd.DataFrame(data_dict)
res_df = res_df[['station_id', 'par_id', 'period', 'non_missing',
'mean', 'median', 'std_dev', 'mk_stat', 'norm_mk_stat',
'mk_p_val', 'mk_std_dev', 'trend', 'sen_slp']]
return res_df
Explanation: 2. Statistical functions
Looking at the output in the ICPW_STATISTICS3 table of RESA2, we need to calculate the following statistics (only some of which are output by the Excel macro):
Number of non-missing values
Median
Mean
Period over which data are available (start and end years)
Standard deviation (of the data)
Standard deviation (expected under the null hypothesis of the M-K test)
M-K statistic
Normalised M-K statistic $\left(= \frac{M-K \; statistic}{Standard \; deviation} \right)$
M-K p-value
Sen's slope (a.k.a. the Theil-Sen slope)
Most of these should be quite straightforward. We'll start off by defining a function to calculate the M-K statistic (note that Scipy already has a function for the Theil-Sen slope). We'll also define another function to bundle everything together and return a dataframe of the results.
End of explanation
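# Optional sanity check (synthetic data, not part of the original analysis): mk_test
# should flag a clearly increasing series as 'increasing'.
import numpy as np
np.random.seed(1)
demo = np.arange(20) + np.random.normal(0, 2, 20)
print(mk_test(demo, 0, 'demo_par'))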
# Run analysis on test data and print results
out_df = wc_stats(raw_df)
del out_df['station_id']
out_df
Explanation: 3. Perform comparison
End of explanation
res_df
Explanation: And below is the output from the Excel macro for comparison.
End of explanation
# Use custom RESA2 function to connect to db
r2_func_path = r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\Upload_Template\useful_resa2_code.py'
resa2 = imp.load_source('useful_resa2_code', r2_func_path)
engine, conn = resa2.connect_to_resa2()
Explanation: My code gives near-identical results to those from the Excel macro, although there are a few edge cases that might be worth investigating further. For example, if there are fewer than 10 non-null values, my code currently prints a warning. I'm not sure exactly what the Excel macro does yet, but in general it seems that for fewer than 10 values it's necessary to use a lookup table (see e.g. the Instructions sheet of the file here).
4. Get data from RESA2
The next step is to read the correct data directly from RESA2 and summarise it to look like raw_df, above. Start off by connecting to the database.
End of explanation
# Tabulate chemical properties
chem_dict = {'molar_mass':[96, 35, 40, 24, 14],
'valency':[2, 1, 2, 2, 1],
'resa2_ref_ratio':[0.103, 1., 0.037, 0.196, 'N/A']}
chem_df = pd.DataFrame(chem_dict, index=['SO4', 'Cl', 'Ca', 'Mg', 'NO3-N'])
chem_df = chem_df[['molar_mass', 'valency', 'resa2_ref_ratio']]
chem_df
Explanation: Looking at the ICPW_STATISTICS table in RESA2, it seems as though trends have been assessed for 14 parameters and several different time periods for each site of interest. The length and number of time periods vary from site to site, so I'll need to check with Heleen regarding how these variables should be chosen. The 14 parameters are as follows:
ESO4
ESO4X
ECl
ESO4Cl
TOC_DOC
ECaEMg
ECaXEMgX
ENO3
Al
ANC
ALK
HPLUS
ESO4EClENO3
ENO3DIVENO3ESO4X
Many of these quantities are unfamiliar to me, but presumably the equations for calculating them can be found in Tore's code (which I can't find at present). Check with Heleen whether all of these are still required and find equations as necessary.
The other issue is how to aggregate the values in the database from their original temporal resolution to annual summaries. I assume the median annual value is probably appropriate in most cases, but it would be good to know what Tore did previously.
For now, I'll focus on:
Extracting the data from the database for a specified time period, <br><br>
Calculating the required water chemistry parameters, <br><br>
Taking annual medians and <br><br>
Estimating the trend statistics.
It should then be fairly easy to modify this code later as necessary.
4.1. Equations
Some of the quantities listed above are straightforward to calculate.
4.1.1. Micro-equivalents per litre
The Es in the parameter names are just unit conversions to micro-equivalents per litre:
$$EPAR \; (\mu eq/l) = \frac{10^6 \times valency}{molar \; mass \; (g/mol)} \times PAR \; (g/l)$$
Molar masses and valencies for the key species listed above are given in the table below.
End of explanation
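As a quick worked example of the conversion (the 2 mg/l figure is made up; only the molar mass and valency come from the table above): for sulphate, $PAR = 2 \; mg/l = 0.002 \; g/l$ gives
$$ESO4 = \frac{10^6 \times 2}{96} \times 0.002 \approx 41.7 \; \mu eq/l,$$
which is the same scaling applied later in section 4.4 via value * valency * 1000 / molar_mass for concentrations in mg/l.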
# Get stations for a specified list of projects
proj_list = ['ICPW_TOCTRENDS_2015_CZ', 'ICPW_TOCTRENDS_2015_IT']
sql = ('SELECT station_id, station_code '
'FROM resa2.stations '
'WHERE station_id IN (SELECT UNIQUE(station_id) '
'FROM resa2.projects_stations '
'WHERE project_id IN (SELECT project_id '
'FROM resa2.projects '
'WHERE project_name IN %s))'
% str(tuple(proj_list)))
stn_df = pd.read_sql(sql, engine)
stn_df
Explanation: 4.1.2. Sea-salt corrected values
The Xs are sea-salt corrected values (also sometimes denoted with an asterisk e.g. Ca*). They are calculated by comparison to chloride concentrations, which are generally assumed to be conservative. The usual equation is:
$$PARX = PAR_{sample} - \left[ \left( \frac{PAR}{Cl} \right)_{ref} \times Cl_{sample} \right]$$
where $PAR_{sample}$ and $Cl_{sample}$ are the concentrations measured in the lake or river and $\left( \frac{PAR}{Cl} \right)_{ref}$ is (ideally) the long-term average concentration in incoming rainwater. In some cases the reference values are simply taken from sea water concentrations (ignoring effects such as evaporative fractionation etc.).
I'm not sure what values to assume, but by rearranging the above equation and applying it to data extracted from RESA2 I can back-calculate the reference values. For example, brief testing using data from Italy, Switzerland and the Czech Republic implies that RESA2 uses a standard reference value for sulphate of 0.103.
The reference ratios inferred from RESA2 for the key species listed are given in the table above.
NB: In doing this I've identified some additional errors in the database, where this correction has not been performed correctly. For some reason, ESO4X values have been set to zero, despite valid ESO4 and ECl measurements being available. The problem only affects a handful of samples, but could be enough to generate false trends. Return to this later?
NB2: Leah's experiences with the RECOVER project suggest that assuming a single reference concentration for all countires in the world is a bad idea. For example, I believe in e.g. the Czech Republic and Italy it is usual not to calculate sea-salt corrected concentrations at all, because most of the chloride input comes from industry rather than marine sources. Rainwater concentrations are also likely to vary dramatically from place to place, especially given the range of geographic and climatic conditions covered by this project. Check with Heleen.
4.1.3. ANC
Need to calculate this ANC, ALK, HPLUS and ENO3DIVENO3ESO4X.
4.2. Choose projects
The first step is to specify a list of RESA2 projects and get the stations associated with them.
End of explanation
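# Illustration only (the numbers below are made up): rearranging the correction equation,
# PARX = PAR - ratio*Cl, gives ratio = (PAR - PARX) / Cl, which is how the 0.103 sulphate
# reference mentioned above can be back-calculated from paired database values.
def infer_ref_ratio(e_par, e_par_x, e_cl):
    return (e_par - e_par_x) / e_cl

print(infer_ref_ratio(120.0, 109.7, 100.0))  # ~0.103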
# Specify parameters of interest
par_list = ['SO4', 'Cl', 'Ca', 'Mg', 'NO3-N', 'TOC', 'Al']
if 'DOC' in par_list:
print ('The database treats DOC and TOC similarly.\n'
'You should probably enter "TOC" instead')
# Check pars are valid
if len(par_list)==1:
sql = ("SELECT * FROM resa2.parameter_definitions "
"WHERE name = '%s'" % par_list[0])
else:
sql = ('SELECT * FROM resa2.parameter_definitions '
'WHERE name in %s' % str(tuple(par_list)))
par_df = pd.read_sql_query(sql, engine)
assert len(par_df) == len(par_list), 'One or more parameters not valid.'
# Get results for ALL pars for sites and period of interest
if len(stn_df)==1:
sql = ("SELECT * FROM resa2.water_chemistry_values2 "
"WHERE sample_id IN (SELECT water_sample_id FROM resa2.water_samples "
"WHERE station_id = %s)"
% stn_df['station_id'].iloc[0])
else:
sql = ("SELECT * FROM resa2.water_chemistry_values2 "
"WHERE sample_id IN (SELECT water_sample_id FROM resa2.water_samples "
"WHERE station_id IN %s)"
% str(tuple(stn_df['station_id'].values)))
wc_df = pd.read_sql_query(sql, engine)
# Get all sample dates for sites and period of interest
if len(stn_df)==1:
sql = ("SELECT water_sample_id, station_id, sample_date "
"FROM resa2.water_samples "
"WHERE station_id = %s " % stn_df['station_id'].iloc[0])
else:
sql = ("SELECT water_sample_id, station_id, sample_date "
"FROM resa2.water_samples "
"WHERE station_id IN %s " % str(tuple(stn_df['station_id'].values)))
samp_df = pd.read_sql_query(sql, engine)
# Join in par IDs based on method IDs
sql = ('SELECT * FROM resa2.wc_parameters_methods')
meth_par_df = pd.read_sql_query(sql, engine)
wc_df = pd.merge(wc_df, meth_par_df, how='left',
left_on='method_id', right_on='wc_method_id')
# Get just the parameters of interest
wc_df = wc_df.query('wc_parameter_id in %s' % str(tuple(par_df['parameter_id'].values)))
# Join in sample dates
wc_df = pd.merge(wc_df, samp_df, how='left',
left_on='sample_id', right_on='water_sample_id')
# Join in parameter units
sql = ('SELECT * FROM resa2.parameter_definitions')
all_par_df = pd.read_sql_query(sql, engine)
wc_df = pd.merge(wc_df, all_par_df, how='left',
left_on='wc_parameter_id', right_on='parameter_id')
# Join in station codes
wc_df = pd.merge(wc_df, stn_df, how='left',
left_on='station_id', right_on='station_id')
# Convert units
wc_df['value'] = wc_df['value'] * wc_df['conversion_factor']
# Extract columns of interest
wc_df = wc_df[['station_id', 'sample_date', 'name', 'value']]
# Unstack
wc_df.set_index(['station_id', 'sample_date', 'name'], inplace=True)
wc_df = wc_df.unstack(level='name')
wc_df.columns = wc_df.columns.droplevel()
wc_df.reset_index(inplace=True)
wc_df.columns.name = None
wc_df.head()
Explanation: 4.3. Extract time series
The next step is to get time series for the desired parameters for each of these stations.
End of explanation
# Extract year from date column
wc_df['year'] = wc_df['sample_date'].map(lambda x: x.year)
del wc_df['sample_date']
# Groupby station_id and year
grpd = wc_df.groupby(['station_id', 'year'])
# Calculate median
wc_df = grpd.agg('median')
wc_df.head()
Explanation: 4.4. Aggregate to annual
End of explanation
# 1. Convert to ueq/l
for par in ['SO4', 'Cl', 'Mg', 'Ca', 'NO3-N']:
    val = chem_df.loc[par, 'valency']      # .loc replaces the deprecated .ix indexer
    mm = chem_df.loc[par, 'molar_mass']
if par == 'NO3-N':
wc_df['ENO3'] = wc_df[par] * val / mm
else:
wc_df['E%s' % par] = wc_df[par] * val * 1000. / mm
# 2. Apply sea-salt correction
for par in ['ESO4', 'EMg', 'ECa']:
    ref = chem_df.loc[par[1:], 'resa2_ref_ratio']  # .loc replaces the deprecated .ix indexer
wc_df['%sX' % par] = wc_df[par] - (ref*wc_df['ECl'])
# 3. Calculate combinations
# 3.1. ESO4 + ECl
wc_df['ESO4_ECl'] = wc_df['ESO4'] + wc_df['ECl']
# 3.2. ECa + EMg
wc_df['ECa_EMg'] = wc_df['ECa'] + wc_df['EMg']
# 3.3. ECaX + EMgX
wc_df['ECaX_EMgX'] = wc_df['ECaX'] + wc_df['EMgX']
# 3.4. ESO4 + ECl + ENO3
wc_df['ESO4_ECl_ENO3'] = wc_df['ESO4'] + wc_df['ECl'] + wc_df['ENO3']
# 4. Delete unnecessary columns and tidy
for col in ['SO4', 'Cl', 'Mg', 'Ca', 'NO3-N']:
del wc_df[col]
wc_df.reset_index(inplace=True)
wc_df.head()
Explanation: 4.4. Convert units and apply sea-salt correction
I haven't calculated all 14 parameters here, as I'm not sure exactly what they all are. The ones I'm reasonably certain of are included below.
End of explanation
def process_water_chem_df(stn_df, wc_df, st_yr=None, end_yr=None):
Calculate statistics for the stations, parameters and time
periods specified.
Args:
stn_df: Dataframe of station_ids
wc_df: Dataframe of water chemistry time series for stations
and parameters of interest
st_yr: First year to include in analysis. Pass None to start
at the beginning of the series
end_year: Last year to include in analysis. Pass None to start
at the beginning of the series
Returns:
Dataframe of statistics
# Container for output
df_list = []
# Loop over sites
for stn_id in stn_df['station_id']:
# Extract data for this site
df = wc_df.query('station_id == @stn_id')
# Modify col names
names = list(df.columns)
names[:2] = ['STATION_ID', 'YEAR']
df.columns = names
# Run analysis
        df_list.append(wc_stats(df, st_yr=st_yr, end_yr=end_yr))  # the stats function defined above is wc_stats
res_df = pd.concat(df_list, axis=0)
return res_df
res_df = process_water_chem_df(stn_df, wc_df)
res_df.head()
Explanation: 4.5. Calculate trends
End of explanation
# Get results for test sites from RESA2
sql = ('SELECT * FROM resa2.icpw_statistics3 '
'WHERE station_id IN %s'
% str(tuple(stn_df['station_id'].values)))
stat_df = pd.read_sql(sql, engine)
# Get just the cols to compare to my output
stat_df = stat_df[['station_id', 'parameter', 'period', 'nonmiss',
'average', 'median', 'stdev', 'test_stat',
'mk_stat', 'mkp', 'senslope']]
stat_df.head(14).sort_values(by='parameter')
Explanation: 5. Compare to previous trends analysis
This seems to be working OK so far, but I need to do some more testing to see that my results more-or-less agree with those calculated previously by Tore. As a start, let's compare the results above with those in the ICPW_STATISTICS3 table of RESA2, which is where (I think) Tore has saved his previous output.
End of explanation
# Re-run python analysis for the period 1990 - 2004
res_df = process_water_chem_df(stn_df, wc_df, st_yr=1990, end_yr=2004)
# Delete mk_std_dev as not relevant here
del res_df['mk_std_dev']
res_df.head(14).sort_values(by='par_id')
Explanation: For e.g. site 23499, I can now re-run my code for the period from 1990 to 2004 and compare my results to those above.
End of explanation
<END_TASK> |
99 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Exercise 3
We're going to switch gears a little and talk about the astrophysical part of Astrophysical Machine Learning. This exercise will have you examine two different forms of data. The first is an actual image of the sky, and the second a catalog of sources (galaxies).
PART I
Step1: Now the image is just a numpy array (matrix) that can be indexed like any other array. The np.flipup function ("flip up-down") was used so that when you display the array it will have the same orientation as when you look at the fits image with DS9. Once you read in the image, apply each filter to the image and display the image. To display, you will need to use the matplotlib module. For example, to display the image above, you could use | <ASSISTANT_TASK:>
Python Code:
import numpy as np
from astropy.io import fits
fitsimage=fits.open('filename.fits')
image=np.flipud(fitsimage[0].data)
Explanation: Exercise 3
We're going to switch gears a little and talk about the astrophysical part of Astrophysical Machine Learning. This exercise will have you examine two different forms of data. The first is an actual image of the sky, and the second a catalog of sources (galaxies).
PART I: Astronomical images (and catalogs for the that matter) are most often stored in FITS format, which stands for Flexible Image Transport System. There are several programs for opening and examining FITS images. Probably the easiest one to install would be the SAOImage DS9 Astronomical Data Visualization Application. I recommend installing DS9 on your system. For this part of the exercise, download this image of a region of the sky (near the Coma cluster). For this, you can use the SkyView virtual observatory page. Go to the page and enter "coma cluster" in the “Coordinates or Source” field, then under the Optical:DSS: section select the "DSS1 Red" and press submit. This should open another page which has an image that looks like this:
Download the FITS file associated with the image (it should say “FITS” below the image) and save it in your working python directory. For the following exercise, you will need to have the Astopy package installed.
MEDIAN, MEAN, MAX AND MIN: A common way to manipulate an image in order to highlight features that might not be obvious at first glance, is to modify the pixel values by applying a filter-function to the image. The way these filter-functions are applied is to replace the value of each pixel by another value that is related in some way to the values of surrounding pixels. For example, a $maxFilter()$ function might replace each pixel value by the maximum pixel value in a 3×3 or 5×5 box surrounding the pixel (the pixel itself is also included). A minFilter() would do the same thing, except replace each pixel by the minimum value in the box. For this part of the exercise, you must:
A) Start a python script labeled image_filters.py. This script should contain four functions for computing a $medianFilter()$, $meanFilter()$, $maxFilter()$, and $minFilter()$ of an input image. For now, make the filter size 5×5 pixels, and ignore the edge of the image where the filter would run over the edge.
B) After the functions, read in the FITS file that you got above into an array. This can be done by using astropy like so:
End of explanation
import matplotlib.pyplot as plt
plt.imshow(image)
plt.show()
Explanation: Now the image is just a numpy array (matrix) that can be indexed like any other array. The np.flipud function ("flip up-down") was used so that when you display the array it will have the same orientation as when you look at the fits image with DS9. Once you read in the image, apply each filter to the image and display the image. To display, you will need to use the matplotlib module. For example, to display the image above, you could use:
End of explanation
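# One possible starting point for Part I(A) of the exercise (a suggestion, not part of
# the original text): a brute-force 5x5 median filter that leaves the image edges unchanged.
import numpy as np

def medianFilter(img, size=5):
    half = size // 2
    out = np.array(img, dtype=float)
    for i in range(half, img.shape[0] - half):
        for j in range(half, img.shape[1] - half):
            # median of the size x size neighbourhood centred on (i, j)
            out[i, j] = np.median(img[i - half:i + half + 1, j - half:j + half + 1])
    return out

plt.imshow(medianFilter(image))
plt.show()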
<END_TASK> |