Unnamed: 0 (int64, 0-16k) | text_prompt (stringlengths 110-62.1k) | code_prompt (stringlengths 37-152k)
---|---|---|
15,700 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
What's new in the Forecastwrapper
Solar Irradiance on a tilted plane
Wind on an oriented building face
No more "include this", "include that". Everything is included. (I implemented these flags to speed up some things (which you cannot notice), but they complicate the code so much that it is not worth it.)
Daytime aggregates have been deprecated (we don't need this anymore since we have irradiance from Dark Sky. But if anyone insists, I can perhaps re-implement it)
No more special timezone stuff, you get the data in a timezone-aware format, localized to the location of the request. If you want another timezone, use tz_convert
Demo of the forecast.io wrapper to get past and future weather data
Important
Step1: Import API wrapper module
Step2: Get weather data in daily and hourly resolution
To get started, create a Weather object for a certain location and a period
Step3: You can use the methods days() and hours() to get a dataframe in daily or hourly resolution
Step4: Degree Days
Daily resolution has the option of adding degree days.
By default, the temperature equivalent and heating degree days with a base temperature of 16.5°C are added.
Heating degree days are calculated as follows
Step5: Hourly resolution example
Location can also be coördinates
Step6: Built-In Caching
Caching is turned on by default, so when you try and get dataframes the first time it takes a long time...
Step7: ... but now try that again and it goes very fast
Step8: You can turn off the caching behaviour by setting the cache flag to False
Step9: Solar Irradiance!
Dark Sky has added Solar Irradiance data as a beta.
Note
Step10: Hourly data
Step11: Global Horizontal Irradiance is the amount of Solar Irradiance that shines on a horizontal surface, direct and diffuse, in Wh/m<sup>2</sup>. It is calculated by transforming the Direct Normal Irradiance (DNI) to the horizontal plane and adding the Diffuse Horizontal Irradiance (DHI)
Step12: Add Global Irradiance on a tilted surface!
Create a list with all the different irradiances you want
A surface is specified by the orientation and tilt
- Orientation in degrees from the north
Step13: The names of the columns reflect the orientation and the tilt
Step14: Wind on an oriented building face
The hourly wind speed and bearing are projected onto an oriented building face.
We call this the windComponent for a given orientation.
This value is also squared and called windComponentSquared. It can be interpreted as the force or pressure of the wind on a static surface, like a building face.
The value is also cubed and called windComponentCubed. It can be correlated with the power output of a wind turbine.
First, define some orientations you want the wind calculated for. Orientation in degrees starting from the north and going clockwise | Python Code:
import os
import sys
import inspect
import pandas as pd
import charts
Explanation: What's new in the Forecastwrapper
Solar Irradiance on a tilted plane
Wind on an oriented building face
No more "include this", "include that". Everything is included. (I implemented these flags to speed up some things (which you cannot notice), but they complicate the code so much that it is not worth it.)
Daytime aggregates have been deprecated (we don't need this anymore since we have irradiance from Dark Sky. But if anyone insists, I can perhaps re-implement it)
No more special timezone stuff, you get the data in a timezone-aware format, localized to the location of the request. If you want another timezone, use tz_convert
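For example, something like the following should work once a Weather object has been created further below (a minimal sketch, assuming the returned dataframe has a timezone-aware DatetimeIndex):
# e.g. convert an hourly dataframe to another timezone with pandas
df_hours = Weather_Ukkel.hours()
df_hours_utc = df_hours.tz_convert('UTC')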
Demo of the forecast.io wrapper to get past and future weather data
Important: you need to register for an apikey here: https://developer.forecast.io/ Put the key you obtain in the opengrid.cfg file as follows:
[Forecast.io]
apikey: your_key
End of explanation
from opengrid.library import forecastwrapper
Explanation: Import API wrapper module
End of explanation
start = pd.Timestamp('20150813')
end = pd.Timestamp('20150816')
Weather_Ukkel = forecastwrapper.Weather(location='Ukkel', start=start, end=end)
Explanation: Get weather data in daily and hourly resolution
To get started, create a Weather object for a certain location and a period
End of explanation
Weather_Ukkel.days()
Weather_Ukkel.hours().info()
Explanation: You can use the methods days() and hours() to get a dataframe in daily or hourly resolution
End of explanation
Weather_Ukkel.days(heating_base_temperatures = [15,18],
cooling_base_temperatures = [18,24]).filter(like='DegreeDays')
Weather_Ukkel.days()
Explanation: Degree Days
Daily resolution has the option of adding degree days.
By default, the temperature equivalent and heating degree days with a base temperature of 16.5°C are added.
Heating degree days are calculated as follows:
$$heatingDegreeDays = max(0 , baseTemp - (0.6 * T_{today} + 0.3 * T_{today-1} + 0.1 * T_{today-2}) )$$
Cooling degree days are calculated in an analog way:
$$coolingDegreeDays = max(0, 0.6 * T_{today} + 0.3 * T_{today-1} + 0.1 * T_{today-2} - baseTemp )$$
Add degree days by supplying heating_base_temperatures and/or cooling_base_temperatures as a list (you can add multiple base temperatures, or just a list of 1 element)
Get some more degree days
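To make the formula above concrete, here is a small hand calculation on a plain pandas Series of daily temperatures (my own sketch, independent of the forecastwrapper API):
# Hypothetical check of the degree-day formula on a made-up temperature series
temp = pd.Series([4.0, 6.0, 3.0, 8.0, 10.0])
temp_equivalent = 0.6 * temp + 0.3 * temp.shift(1) + 0.1 * temp.shift(2)
heating_degree_days = (16.5 - temp_equivalent).clip(lower=0)
print(heating_degree_days)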
End of explanation
start = pd.Timestamp('20150916')
end = pd.Timestamp('20150918')
Weather_Brussel = forecastwrapper.Weather(location=[50.8503396, 4.3517103], start=start, end=end)
Weather_Boutersem = forecastwrapper.Weather(location='Kapelstraat 1, 3370 Boutersem', start=start, end=end)
df_combined = pd.merge(Weather_Brussel.hours(), Weather_Boutersem.hours(), suffixes=('_Brussel', '_Boutersem'),
left_index=True, right_index=True)
charts.plot(df_combined.filter(like='cloud'), stock=True, show='inline')
Explanation: Hourly resolution example
Location can also be coördinates
End of explanation
start = pd.Timestamp('20170131', tz='Europe/Brussels')
end = pd.Timestamp('20170201', tz='Europe/Brussels')
Weather_Ukkel = forecastwrapper.Weather(location='Ukkel', start=start, end=end)
Weather_Ukkel.days().head(1)
Explanation: Built-In Caching
Caching is turned on by default, so when you try and get dataframes the first time it takes a long time...
End of explanation
Weather_Ukkel = forecastwrapper.Weather(location='Ukkel', start=start, end=end)
Weather_Ukkel.days().head(1)
Explanation: ... but now try that again and it goes very fast
End of explanation
Weather_Ukkel = forecastwrapper.Weather(location='Ukkel', start=start, end=end, cache=False)
Explanation: You can turn off the caching behaviour by setting the cache flag to False:
End of explanation
Weather_Ukkel = forecastwrapper.Weather(location='Ukkel', start=start, end=end)
Explanation: Solar Irradiance!
Dark Sky has added Solar Irradiance data as a beta.
Note:
- The values are calculated, not measured. Dark Sky uses the position of the sun in combination with cloud cover.
- Western Europe is not in Dark Sky's "primary region", therefore the data is not super-accurate.
- Since it is a beta, the algorithms and therefore the values may change
- I (JrtPec) have done a qualitative analysis that compared these values with those measured by KNMI (Netherlands). The differences were significant (27% lower). I have notified Dark Sky and they will investigate and possibly update their algorithms.
- You need to delete your cached files in order to include these new values (everything will have to be re-downloaded)
- If Dark Sky were to update their values, the cache needs to be deleted again.
End of explanation
Weather_Ukkel.hours()[[
'GlobalHorizontalIrradiance',
'DirectNormalIrradiance',
'DiffuseHorizontalIrradiance',
'ExtraTerrestrialRadiation',
'SolarAltitude',
'SolarAzimuth']].dropna().head()
Explanation: Hourly data
End of explanation
Weather_Ukkel.days()
Explanation: Global Horizontal Irradiance is the amount of Solar Irradiance that shines on a horizontal surface, direct and diffuse, in Wh/m<sup>2</sup>. It is calculated by transforming the Direct Normal Irradiance (DNI) to the horizontal plane and adding the Diffuse Horizontal Irradiance (DHI):
$$GHI = DNI * cos(90° - Altitude) + DHI$$
The GHI is what you would use to benchmark PV-panels
Direct Normal Irradiance is the amount of solar irradiance that shines directly on a plane tilted towards the sun. In Wh/m<sup>2</sup>.
Diffuse Horizontal Irradiance is the amount of solar irradiance that is scattered in the atmosphere and by clouds. In Wh/m<sup>2</sup>.
Extra-Terrestrial Radiation is the GHI a point would receive if there was no atmosphere.
Altitude of the Sun is measured in degrees above the horizon.
Azimuth is the direction of the Sun in degrees, measured from the true north going clockwise.
At night, all values will be NaN
Daily data
The daily sum of the GHI is included in the day dataframe. Values are in Wh/m<sup>2</sup>
If you need other daily aggregates, give me a shout!
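As a quick numeric check of the formula above (made-up values, not Dark Sky data):
# GHI from DNI, DHI and solar altitude; numbers are illustrative only
import numpy as np
dni, dhi, altitude = 600.0, 100.0, 35.0          # Wh/m2, Wh/m2, degrees
ghi = dni * np.cos(np.radians(90.0 - altitude)) + dhi
print(ghi)                                       # roughly 444 Wh/m2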
End of explanation
# Lets get the vertical faces of a house
irradiances=[
(0, 90), # north vertical
(90, 90), # east vertical
(180, 90), # south vertical
(270, 90), # west vertical
]
Weather_Ukkel.hours(irradiances=irradiances).filter(like='GlobalIrradiance').dropna().head()
Explanation: Add Global Irradiance on a tilted surface!
Create a list with all the different irradiances you want
A surface is specified by the orientation and tilt
- Orientation in degrees from the north: 0 = North, 90 = East, 180 = South, 270 = West
- Tilt in degrees from the horizontal plane: 0 = Horizontal, 90 = Vertical
End of explanation
Weather_Ukkel.days(irradiances=irradiances).filter(like='GlobalIrradiance')
Explanation: The names of the columns reflect the orientation and the tilt
End of explanation
orientations = [0, 90, 180, 270]
Weather_Ukkel.hours(wind_orients=orientations).filter(like='wind').head()
Weather_Ukkel.days(wind_orients=orientations).filter(like='wind').head()
Explanation: Wind on an oriented building face
The hourly wind speed and bearing are projected onto an oriented building face.
We call this the windComponent for a given orientation.
This value is also squared and called windComponentSquared. It can be interpreted as the force or pressure of the wind on a static surface, like a building face.
The value is also cubed and called windComponentCubed. It can be correlated with the power output of a wind turbine.
First, define some orientations you want the wind calculated for. Orientation in degrees starting from the north and going clockwise
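For reference, a wind component like the one described above could be computed roughly as follows; this is my own sketch of the idea, not the wrapper's actual implementation:
# Hypothetical projection of wind speed onto a building-face orientation
import numpy as np
def wind_component(speed, bearing, orientation):
    # keep only the part of the wind blowing towards the face
    component = speed * np.cos(np.radians(bearing - orientation))
    return max(component, 0.0)
print(wind_component(speed=5.0, bearing=200.0, orientation=180.0))  # about 4.7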
End of explanation |
15,701 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ABC inference of upper limit on star-formation timescale from lack of CCSN
The ABC sampling assuming K stars that go CCSN in tccsn years
Step1: The PDF for 1, 2, and 5 CCSN
Step2: And the 95% confidence limits
Step3: For 11 lighter CCSNe with 6 Myr lag | Python Code:
def sftime_ABC(n=100,K=1,tccsn=4.,tmax=20.):
out= []
for ii in range(n):
while True:
# Sample from prior
tsf= numpy.random.uniform()*tmax
# Sample K massive-star formation times
stars= numpy.random.uniform(size=K)*tsf
# Only accept if all go CCSN after SF ceases
if numpy.all(stars+tccsn > tsf): break
out.append(tsf)
return out
Explanation: ABC inference of upper limit on star-formation timescale from lack of CCSN
The ABC sampling assuming K stars that go CCSN in tccsn years:
End of explanation
pdf_1ccsn= sftime_ABC(n=100000)
pdf_2ccsn= sftime_ABC(n=100000,K=2)
pdf_5ccsn= sftime_ABC(n=100000,K=5)
dum=bovy_plot.bovy_hist(pdf_1ccsn,range=[0.,20.],
bins=31,normed=True,
histtype='step')
dum=bovy_plot.bovy_hist(pdf_2ccsn,range=[0.,20.],
bins=31,normed=True,
histtype='step',overplot=True)
dum=bovy_plot.bovy_hist(pdf_5ccsn,range=[0.,20.],
bins=31,normed=True,
histtype='step',overplot=True)
#My analytical calculation for 1
xs= numpy.linspace(0.,20.,1001)
ys= 4./xs
ys[xs < 4.]= 1.
ys/= numpy.sum(ys)*(xs[1]-xs[0])
plot(xs,ys)
Explanation: The PDF for 1, 2, and 5 CCSN
End of explanation
print numpy.percentile(pdf_1ccsn,95)
print numpy.percentile(pdf_2ccsn,95)
print numpy.percentile(pdf_5ccsn,95)
Explanation: And the 95% confidence limits
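As a cross-check (my addition, not in the original notebook), the posterior that this ABC scheme samples from can also be written down analytically: p(tsf) is proportional to min(1, (tccsn/tsf)^K) on [0, tmax]. Its 95% quantile can be evaluated numerically:
# analytic 95% upper limits for K = 1, 2, 5 (tccsn = 4 Myr, tmax = 20 Myr)
ts= numpy.linspace(1e-4,20.,20001)
for K in [1,2,5]:
    dens= numpy.minimum(1.,(4./ts)**K)
    cdf= numpy.cumsum(dens)
    cdf/= cdf[-1]
    print K, ts[numpy.searchsorted(cdf,0.95)]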
End of explanation
pdf_11ccsn= sftime_ABC(n=100000,K=11,tccsn=6.)
print numpy.percentile(pdf_11ccsn,95)
Explanation: For 11 lighter CCSNe with 6 Myr lag
End of explanation |
15,702 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 by D. Koehn, heterogeneous models are from this Jupyter notebook by Heiner Igel (@heinerigel), Florian Wölfl and Lion Krischer (@krischer) which is supplementary material to the book Computational Seismology
Step1: Viscoelasticity
Step2: "Damping" of elastic waves
Even without the incorporation of viscoelastic effects, seismic waves can be damped.
Due to geometrical spreading the seismic energy is distributed over the surface of the wavefront, as we can see in the acoustic modelling result below
Step3: Another way to damp the seismic wavefield is by scattering the seismic energy at small scale structures like the random medium in this acoustic example
Step4: By comparing the seismograms for the homogeneous and the random medium, we recognize a significant amplitude difference. Note also the significant seismic coda. | Python Code:
# Execute this cell to load the notebook's style sheet, then ignore it
from IPython.core.display import HTML
css_file = '../style/custom.css'
HTML(open(css_file, "r").read())
Explanation: Content under Creative Commons Attribution license CC-BY 4.0, code under BSD 3-Clause License © 2018 by D. Koehn, heterogeneous models are from this Jupyter notebook by Heiner Igel (@heinerigel), Florian Wölfl and Lion Krischer (@krischer) which is supplementary material to the book Computational Seismology: A Practical Introduction, notebook style sheet by L.A. Barba, N.C. Clementi
End of explanation
# Import Libraries
# ----------------
import numpy as np
from numba import jit
import matplotlib
import matplotlib.pyplot as plt
from tew2.FD_2DAC import FD_2D_acoustic_JIT
from pylab import rcParams
# Ignore Warning Messages
# -----------------------
import warnings
warnings.filterwarnings("ignore")
from mpl_toolkits.axes_grid1 import make_axes_locatable
Explanation: Viscoelasticity: Introduction
As introduction, we first distinguish different elastic "damping" effects of seismic waves and non-elastic damping. How can we describe seismic damping and what are the reasons?
End of explanation
%matplotlib notebook
time, seis_hom = FD_2D_acoustic_JIT('hom')
Explanation: "Damping" of elastic waves
Even without the incorporation of viscoelastic effects, seismic waves can be damped.
Due to geometrical spreading the seismic energy is distributed over the surface of the wavefront, as we can see in the acoustic modelling result below:
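As a rough back-of-the-envelope illustration (my addition): for a 2D acoustic wave, geometrical spreading alone reduces the amplitude approximately as 1/sqrt(r), since the energy on the expanding wavefront scales as 1/r:
# relative amplitudes at hypothetical distances of 100 m, 400 m and 1600 m
r = np.array([100., 400., 1600.])
print(np.sqrt(r[0] / r))   # -> 1.0, 0.5, 0.25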
End of explanation
%matplotlib notebook
time_rand, seis_rand = FD_2D_acoustic_JIT('rand')
Explanation: Another way to damp the seismic wavefield is by scattering the seismic energy at small scale structures like the random medium in this acoustic example:
End of explanation
%matplotlib notebook
# Compare FD seismograms
# ----------------------
# Define figure size
rcParams['figure.figsize'] = 7, 5
# plot seismogram hom. model
plt.plot(time, seis_hom, 'b-',lw=3,label="hom. medium")
# plot seismogram random model
plt.plot(time_rand, seis_rand, 'r--',lw=3,label="random medium")
plt.xlim(0.4, time[-1])
plt.title('Seismogram comparison')
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.legend()
plt.grid()
plt.show()
Explanation: By comparing the seismograms for the homogeneous and the random medium, we recognize a significant amplitude difference. Note also the significant seismic coda.
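A quick way to quantify this (my addition, assuming both seismograms are NumPy arrays of equal length):
# peak-amplitude and energy ratios between the random-medium and homogeneous seismograms
print(np.max(np.abs(seis_rand)) / np.max(np.abs(seis_hom)))   # peak amplitude ratio
print(np.sum(seis_rand**2) / np.sum(seis_hom**2))             # total energy ratio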
End of explanation |
15,703 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Let's start with the original regular expression and
string to search from Travis'
regex problem.
Step3: The regex had two bugs.
- Two [[ near the end of the pattern string.
- The significant spaces in the pattern (such as after object-group) were being ignored because of re.VERBOSE.
So those bugs are fixed in the pattern below.
Step5: The above works, but keeping track of the indexes of the unnamed groups drives me crazy. So I add names for all groups.
Step6: The following shows me just the groups that matched.
Step8: Looking at the above,
I see that I probably don't care about the big groups,
just the parameters,
so I remove the big groups (except for "any")
from the regular expression.
Step9: Now it tells me just the meat of what I want to know. | Python Code:
pattern = re.compile(r"""
    (?P<any>any4?) # "any"
    # association
    | # or
    (?P<object_eq>object ([\w-]+) eq (\d+)) # object alone
    # association
    | # or
    (?P<object_range>object ([a-z0-9A-Z-]+) range (\d+) (\d+)) # object range
    # association
    | # or
    (?P<object_group>object-group ([a-z0-9A-Z-]+)) # object group
    # association
    | # or
    (?P<object_alone>object ([[a-z0-9A-Z-]+)) # object alone
    # association
""", re.VERBOSE)
s = ''' object-group jfi-ip-ranges object DA-TD-WEB01 eq 8850
'''
Explanation: Let's start with the original regular expression and
string to search from Travis'
regex problem.
End of explanation
pattern = re.compile(r"""
    (?P<any>any4?) # "any"
    # association
    | # or
    (?P<object_eq>object\ ([\w-]+)\ eq\ (\d+)) # object alone
    # association
    | # or
    (?P<object_range>object\ ([a-z0-9A-Z-]+)\ range\ (\d+)\ (\d+)) # object range
    # association
    | # or
    (?P<object_group>object-group\ ([a-z0-9A-Z-]+)) # object group
    # association
    | # or
    (?P<object_alone>object\ ([a-z0-9A-Z-]+)) # object alone
    # association
""", re.VERBOSE)
re.findall(pattern, s)
for m in re.finditer(pattern, s):
print(repr(m))
print('groups', m.groups())
print('groupdict', m.groupdict())
Explanation: The regex had two bugs.
- Two [[ near the end of the pattern string.
- The significant spaces in the pattern (such as after object-group) were being ignored because of re.VERBOSE.
So those bugs are fixed in the pattern below.
End of explanation
pattern = re.compile(r"""
    (?P<any>any4?) # "any"
    # association
    | # or
    (?P<object_eq>object\ (?P<oe_name>[\w-]+)\ eq\ (?P<oe_i>\d+)) # object alone
    # association
    | # or
    (?P<object_range>object\ (?P<or_name>[a-z0-9A-Z-]+)
        \ range\ (?P<oe_r_start>\d+)\ (?P<oe_r_end>\d+)) # object range
    # association
    | # or
    (?P<object_group>object-group\ (?P<og_name>[a-z0-9A-Z-]+)) # object group
    # association
    | # or
    (?P<object_alone>object\ (?P<oa_name>[a-z0-9A-Z-]+)) # object alone
    # association
""", re.VERBOSE)
for m in re.finditer(pattern, s):
print(repr(m))
print('groups', m.groups())
print('groupdict', m.groupdict())
Explanation: The above works, but keeping track of the indexes of the unnamed groups drives me crazy. So I add names for all groups.
End of explanation
for m in re.finditer(pattern, s):
for key, value in m.groupdict().items():
if value is not None:
print(key, repr(value))
print()
Explanation: The following shows me just the groups that matched.
End of explanation
pattern = re.compile(r"""
    (?P<any>any4?) # "any"
    # association
    | # or
    (object\ (?P<oe_name>[\w-]+)\ eq\ (?P<oe_i>\d+)) # object alone
    # association
    | # or
    (object\ (?P<or_name>[a-z0-9A-Z-]+)
        \ range\ (?P<oe_r_start>\d+)\ (?P<oe_r_end>\d+)) # object range
    # association
    | # or
    (object-group\ (?P<og_name>[a-z0-9A-Z-]+)) # object group
    # association
    | # or
    (object\ (?P<oa_name>[a-z0-9A-Z-]+)) # object alone
    # association
""", re.VERBOSE)
Explanation: Looking at the above,
I see that I probably don't care about the big groups,
just the parameters,
so I remove the big groups (except for "any")
from the regular expression.
End of explanation
for m in re.finditer(pattern, s):
for key, value in m.groupdict().items():
if value is not None:
print(key, repr(value))
print()
Explanation: Now it tells me just the meat of what I want to know.
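As a small extension (not in the original notebook), the matched parameters can be collected into a list of dictionaries, one per match, keeping only the groups that actually matched:
results = [{key: value for key, value in m.groupdict().items() if value is not None}
           for m in re.finditer(pattern, s)]
print(results)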
End of explanation |
15,704 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
First we import some datasets of interest
Step1: Now we separate the winners from the losers and organize our dataset
Step2: Now we match the detailed results to the merge dataset above
Step3: Here we get our submission info
Step4: Training Data Creation
Step5: We will only consider years relevant to our test submission
Step6: Now let's just look at TeamID2, i.e. just the second team's info.
Step7: From the inner join, we will create data per team id to estimate the parameters we are missing that are independent of the year. Essentially, we are trying to estimate the average behavior of the team across the year.
Step8: Here we look at the comparable statistics. For the TeamID2 column, we would consider the inverse of the ratio, and 1 minus the score attempt percentage.
Step9: Now let's create a model based solely on the inner group and predict those probabilities.
We will get the teams with the missing result.
Step10: We scale our data for our keras classifier, and make sure our categorical variables are properly processed.
Step11: Here we store our probabilities
Step12: We merge our predictions
Step13: We get the 'average' probability of success for each team
Step14: Any missing value for the prediction will be imputed with the product of the probabilities calculated above. We assume these are independent events. | Python Code:
#the seed information
df_seeds = pd.read_csv('../input/WNCAATourneySeeds_SampleTourney2018.csv')
#tour information
df_tour = pd.read_csv('../input/WRegularSeasonCompactResults_PrelimData2018.csv')
Explanation: First we import some datasets of interest
End of explanation
df_seeds['seed_int'] = df_seeds['Seed'].apply( lambda x : int(x[1:3]) )
df_winseeds = df_seeds.loc[:, ['TeamID', 'Season', 'seed_int']].rename(columns={'TeamID':'WTeamID', 'seed_int':'WSeed'})
df_lossseeds = df_seeds.loc[:, ['TeamID', 'Season', 'seed_int']].rename(columns={'TeamID':'LTeamID', 'seed_int':'LSeed'})
df_dummy = pd.merge(left=df_tour, right=df_winseeds, how='left', on=['Season', 'WTeamID'])
df_concat = pd.merge(left=df_dummy, right=df_lossseeds, on=['Season', 'LTeamID'])
Explanation: Now we separate the winners from the losers and organize our dataset
End of explanation
df_concat['DiffSeed'] = df_concat[['LSeed', 'WSeed']].apply(lambda x : 0 if x[0] == x[1] else 1, axis = 1)
Explanation: Now we match the detailed results to the merge dataset above
End of explanation
#prepares sample submission
df_sample_sub = pd.read_csv('../input/WSampleSubmissionStage2.csv')
df_sample_sub['Season'] = df_sample_sub['ID'].apply(lambda x : int(x.split('_')[0]) )
df_sample_sub['TeamID1'] = df_sample_sub['ID'].apply(lambda x : int(x.split('_')[1]) )
df_sample_sub['TeamID2'] = df_sample_sub['ID'].apply(lambda x : int(x.split('_')[2]) )
Explanation: Here we get our submission info
End of explanation
winners = df_concat.rename( columns = { 'WTeamID' : 'TeamID1',
'LTeamID' : 'TeamID2',
'WScore' : 'Team1_Score',
'LScore' : 'Team2_Score'}).drop(['WSeed', 'LSeed', 'WLoc'], axis = 1)
winners['Result'] = 1.0
losers = df_concat.rename( columns = { 'WTeamID' : 'TeamID2',
'LTeamID' : 'TeamID1',
'WScore' : 'Team2_Score',
'LScore' : 'Team1_Score'}).drop(['WSeed', 'LSeed', 'WLoc'], axis = 1)
losers['Result'] = 0.0
train = pd.concat( [winners, losers], axis = 0).reset_index(drop = True)
train['Score_Ratio'] = train['Team1_Score'] / train['Team2_Score']
train['Score_Total'] = train['Team1_Score'] + train['Team2_Score']
train['Score_Pct'] = train['Team1_Score'] / train['Score_Total']
Explanation: Training Data Creation
End of explanation
df_sample_sub['Season'].unique()
Explanation: We will only consider years relevant to our test submission
End of explanation
train_test_inner = pd.merge( train.loc[ train['Season'].isin([2018]), : ].reset_index(drop = True),
df_sample_sub.drop(['ID', 'Pred'], axis = 1),
on = ['Season', 'TeamID1', 'TeamID2'], how = 'inner' )
train_test_inner.head()
Explanation: Now let's just look at TeamID2, i.e. just the second team's info.
End of explanation
team1d_num_ot = train_test_inner.groupby(['Season', 'TeamID1'])['NumOT'].median().reset_index()\
.set_index('Season').rename(columns = {'NumOT' : 'NumOT1'})
team2d_num_ot = train_test_inner.groupby(['Season', 'TeamID2'])['NumOT'].median().reset_index()\
.set_index('Season').rename(columns = {'NumOT' : 'NumOT2'})
num_ot = team1d_num_ot.join(team2d_num_ot).reset_index()
# sum the (median) number of OT periods across the two teams, rounded to the nearest integer
num_ot['NumOT'] = num_ot[['NumOT1', 'NumOT2']].apply(lambda x : round( x.sum() ), axis = 1 )
num_ot.head()
Explanation: From the inner join, we will create data per team id to estimate the parameters we are missing that are independent of the year. Essentially, we are trying to estimate the average behavior of the team across the year.
End of explanation
team1d_score_spread = train_test_inner.groupby(['Season', 'TeamID1'])[['Score_Ratio', 'Score_Pct']].median().reset_index()\
.set_index('Season').rename(columns = {'Score_Ratio' : 'Score_Ratio1', 'Score_Pct' : 'Score_Pct1'})
team2d_score_spread = train_test_inner.groupby(['Season', 'TeamID2'])[['Score_Ratio', 'Score_Pct']].median().reset_index()\
.set_index('Season').rename(columns = {'Score_Ratio' : 'Score_Ratio2', 'Score_Pct' : 'Score_Pct2'})
score_spread = team1d_score_spread.join(team2d_score_spread).reset_index()
#geometric mean of score ratio of team 1 and inverse of team 2
score_spread['Score_Ratio'] = score_spread[['Score_Ratio1', 'Score_Ratio2']].apply(lambda x : ( x[0] * ( x[1] ** -1.0) ), axis = 1 ) ** 0.5
#harmonic mean of score pct
score_spread['Score_Pct'] = score_spread[['Score_Pct1', 'Score_Pct2']].apply(lambda x : 0.5*( x[0] ** -1.0 ) + 0.5*( 1.0 - x[1] ) ** -1.0, axis = 1 ) ** -1.0
score_spread.head()
Explanation: Here we look at the comparable statistics. For the TeamID2 column, we would consider the inverse of the ratio, and 1 minus the score attempt percentage.
End of explanation
X_train = train_test_inner.loc[:, ['Season', 'NumOT', 'Score_Ratio', 'Score_Pct']]
train_labels = train_test_inner['Result']
train_test_outer = pd.merge( train.loc[ train['Season'].isin([2014, 2015, 2016, 2017]), : ].reset_index(drop = True),
df_sample_sub.drop(['ID', 'Pred'], axis = 1),
on = ['Season', 'TeamID1', 'TeamID2'], how = 'outer' )
train_test_outer = train_test_outer.loc[ train_test_outer['Result'].isnull(),
['TeamID1', 'TeamID2', 'Season']]
train_test_missing = pd.merge( pd.merge( score_spread.loc[:, ['TeamID1', 'TeamID2', 'Season', 'Score_Ratio', 'Score_Pct']],
train_test_outer, on = ['TeamID1', 'TeamID2', 'Season']),
num_ot.loc[:, ['TeamID1', 'TeamID2', 'Season', 'NumOT']],
on = ['TeamID1', 'TeamID2', 'Season'])
Explanation: Now let's create a model based solely on the inner group and predict those probabilities.
We will get the teams with the missing result.
End of explanation
X_test = train_test_missing.loc[:, ['Season', 'NumOT', 'Score_Ratio', 'Score_Pct']]
n = X_train.shape[0]
train_test_merge = pd.concat( [X_train, X_test], axis = 0 ).reset_index(drop = True)
train_test_merge = pd.concat( [pd.get_dummies( train_test_merge['Season'].astype(object) ),
train_test_merge.drop('Season', axis = 1) ], axis = 1 )
train_test_merge = pd.concat( [pd.get_dummies( train_test_merge['NumOT'].astype(object) ),
train_test_merge.drop('NumOT', axis = 1) ], axis = 1 )
X_train = train_test_merge.loc[:(n - 1), :].reset_index(drop = True)
X_test = train_test_merge.loc[n:, :].reset_index(drop = True)
x_max = X_train.max()
x_min = X_train.min()
X_train = ( X_train - x_min ) / ( x_max - x_min + 1e-14)
X_test = ( X_test - x_min ) / ( x_max - x_min + 1e-14)
train_labels.value_counts()
X_train.head()
from sklearn.linear_model import LogisticRegressionCV
model = LogisticRegressionCV(cv=80,scoring="neg_log_loss",random_state=1
#,penalty="l1"
#,Cs= Cs_#list(np.arange(1e-7,1e-9,-0.5e-9)) # [0.5,0.1,0.01,0.001] #list(np.power(1, np.arange(-10, 10)))
#,max_iter=1000, tol=1e-11
#,solver="liblinear"
#,n_jobs=4
)
model.fit(X_train, train_labels)
#---
Cs = model.Cs_
list(np.power(10.0, np.arange(-10, 10)))
dir(model)
sco = model.scores_[1].mean(axis=0)
#---
import matplotlib.pyplot as plt
plt.plot(Cs
#np.log10(Cs)
,sco)
# plt.ylabel('some numbers')
plt.show()
sco.min()
Cs_= list(np.arange(1.1e-9 - 5e-11
,1.051e-9
,0.2e-13))
len(Cs_)
Cs_= list(np.arange(1e-11
,9.04e-11#1.0508e-9
,0.2e-12))
len(Cs_)
#Cs_= list(np.arange(5.6e-13 - ( (0.01e-13)*1)
# ,5.61e-13 - ( (0.01e-13)*1)#1.0508e-9
# ,0.2e-15))
#len(Cs_)
Cs_= list(np.arange(1e-11
,5.5e-11#1.0508e-9
,0.2e-12))
len(Cs_)
Cs_= list(np.arange(1e-14
,5.5e-11#1.0508e-9
,0.2e-12))
len(Cs_)#awsome
#Cs_= list(np.arange(1.5e-11
# ,2.53e-11#1.0508e-9
# ,0.2e-13)) #+[3.761e-11]
#len(Cs_)
#X_train.dtypes
Cs_= list(np.arange(1e-15
,0.51e-10 #1.0508e-9
,0.1e-12))
len(Cs_)#new again
Cs_= list(np.arange(9e-14
,10.1e-13 #1.0508e-9
,0.1e-14))
len(Cs_)#new again cont. lowerlevel
Cs_= list(np.arange(9e-14
,10.1e-13 #1.0508e-9
,0.1e-14))
len(Cs_)#new again cont. lowerlevel
#LogisticRegressionCV(Cs=10, class_weight=None, cv=107, dual=False,
# fit_intercept=True, intercept_scaling=1.0, max_iter=100,
# multi_class='ovr', n_jobs=1, penalty='l2', random_state=2,
# refit=True, scoring='neg_log_loss', solver='lbfgs', tol=0.0001,
# verbose=0) #-0.7
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(random_state=1
                #,penalty="l1"
                ,C=8.129999999999969e-13  # C value picked from the CV sweep above
                ,max_iter=1000, tol=1e-11
                #,solver="liblinear"
                ,n_jobs=4)
model.fit(X_train, train_labels)
# Note: unlike LogisticRegressionCV, plain LogisticRegression accepts no `scoring`
# argument and exposes no Cs_ or scores_ attributes, so the CV diagnostic plot
# above cannot be repeated for this estimator.
Cs= list(np.linspace(9e-15
,10.1e-14 #1.0508e-9
,200))
len(Cs)#new again cont. lowerlevel
from sklearn import svm, grid_search, datasets
parameters = dict(C=Cs)
model = LogisticRegression(random_state=1
#,penalty="l1"
,C=8.129999999999969e-13#list(np.arange(1e-7,1e-9,-0.5e-9)) # [0.5,0.1,0.01,0.001] #list(np.power(1, np.arange(-10, 10)))
,max_iter=1000, tol=1e-11
,solver="lbfgs"
,n_jobs=1)
clf = grid_search.GridSearchCV(model, parameters,scoring="neg_log_loss",cv=80,n_jobs=8)
clf.fit(X_train, train_labels)
scores = [x[1] for x in clf.grid_scores_]
scores = np.array(scores).reshape(len(Cs))
plt.plot(Cs, scores)
plt.legend()
plt.xlabel('Cs')
plt.ylabel('Mean score')
plt.show()
print("C:",clf.best_estimator_.C," loss:",clf.best_score_)
clf.grid_scores_
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(clf.grid_scores_)
# plt.ylabel('some numbers')
plt.show()
index_min = np.argmin(sco)
Cs_[index_min] #3.761e-11
sco.min()
#list(np.power(10.0, np.arange(-10, 10)))
#list(np.arange(0.5,1e-4,-0.05))
print(sco.max())
#-0.6931471779248422
print(sco.min() < -0.693270048530996)
print(sco.min()+0.693270048530996)
sco.min()
import matplotlib.pyplot as plt
plt.plot(model.scores_[1])
# plt.ylabel('some numbers')
plt.show()
Explanation: We scale our data for our keras classifier, and make sure our categorical variables are properly processed.
End of explanation
train_test_inner['Pred1'] = model.predict_proba(X_train)[:,1]
train_test_missing['Pred1'] = model.predict_proba(X_test)[:,1]
Explanation: Here we store our probabilities
End of explanation
sub = pd.merge(df_sample_sub,
pd.concat( [train_test_missing.loc[:, ['Season', 'TeamID1', 'TeamID2', 'Pred1']],
train_test_inner.loc[:, ['Season', 'TeamID1', 'TeamID2', 'Pred1']] ],
axis = 0).reset_index(drop = True),
on = ['Season', 'TeamID1', 'TeamID2'], how = 'outer')
Explanation: We merge our predictions
End of explanation
team1_probs = sub.groupby('TeamID1')['Pred1'].apply(lambda x : (x ** -1.0).mean() ** -1.0 ).fillna(0.5).to_dict()
team2_probs = sub.groupby('TeamID2')['Pred1'].apply(lambda x : (x ** -1.0).mean() ** -1.0 ).fillna(0.5).to_dict()
Explanation: We get the 'average' probability of success for each team
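The lambda above is just the harmonic mean of each team's per-game probabilities; a quick equivalence check (my addition, assuming scipy is available):
# inverse of the mean of inverses == harmonic mean
from scipy.stats import hmean
p = pd.Series([0.2, 0.5, 0.8])
print((p ** -1.0).mean() ** -1.0, hmean(p))   # both ~0.3636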
End of explanation
sub['Pred'] = sub[['TeamID1', 'TeamID2','Pred1']]\
.apply(lambda x : team1_probs.get(x[0]) * ( 1 - team2_probs.get(x[1]) ) if np.isnan(x[2]) else x[2],
axis = 1)
sub = sub.drop_duplicates(subset=["ID"], keep='first')
sub[['ID', 'Pred']].to_csv('sub.csv', index = False)
sub[['ID', 'Pred']].head(20)
Explanation: Any missing value for the prediction will be imputed with the product of the probabilities calculated above. We assume these are independent events.
End of explanation |
15,705 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Model13
Step2: Feature functions(private)
Step3: Feature function(public)
Step4: Utility functions
Step5: GMM
Classifying questions
features
Step7: B. Modeling
Select model
Step8: Training and testing model
Step9: Writing result | Python Code:
import gzip
import pickle
from os import path
from collections import defaultdict
from numpy import sign
"""
Load buzz data as a dictionary.
You can pass the `data` parameter so that you get only what you need.
"""
def load_buzz(root='../data', data=['train', 'test', 'questions'], format='pklz'):
buzz_data = {}
for ii in data:
file_path = path.join(root, ii + "." + format)
with gzip.open(file_path, "rb") as fp:
buzz_data[ii] = pickle.load(fp)
return buzz_data
Explanation: Model13: DPGMM
A. Functions
There are four different functions.
Data reader: Read data from file.
Feature functions(private): Functions which extract features are placed here. If you write a new feature function, add it here.
Feature function(public): We use only this function for feature extraction.
Utility functions: All the functions except those mentioned above are placed here.
Data reader
End of explanation
from numpy import sign, abs
def _feat_basic(bd, group):
X = []
for item in bd[group].items():
qid = item[1]['qid']
q = bd['questions'][qid]
#item[1]['q_length'] = max(q['pos_token'].keys())
item[1]['q_length'] = len(q['question'].split())
item[1]['category'] = q['category'].lower()
item[1]['answer'] = q['answer'].lower()
X.append(item[1])
return X
def _feat_sign_val(data):
for item in data:
item['sign_val'] = sign(item['position'])
def _get_pos(bd, sign_val=None):
# bd is not bd, bd is bd['train']
unwanted_index = []
pos_uid = defaultdict(list)
pos_qid = defaultdict(list)
for index, key in enumerate(bd):
if sign_val and sign(bd[key]['position']) != sign_val:
unwanted_index.append(index)
else:
pos_uid[bd[key]['uid']].append(bd[key]['position'])
pos_qid[bd[key]['qid']].append(bd[key]['position'])
return pos_uid, pos_qid, unwanted_index
def _get_avg_pos(bd, sign_val=None):
pos_uid, pos_qid, unwanted_index = _get_pos(bd, sign_val)
avg_pos_uid = {}
avg_pos_qid = {}
if not sign_val:
sign_val = 1
for key in pos_uid:
pos = pos_uid[key]
avg_pos_uid[key] = sign_val * (sum(pos) / len(pos))
for key in pos_qid:
pos = pos_qid[key]
avg_pos_qid[key] = sign_val * (sum(pos) / len(pos))
return avg_pos_uid, avg_pos_qid, unwanted_index
def _feat_avg_pos(data, bd, group, sign_val):
avg_pos_uid, avg_pos_qid, unwanted_index = _get_avg_pos(bd['train'], sign_val=sign_val)
if group == 'train':
for index in sorted(unwanted_index, reverse=True):
del data[index]
for item in data:
if item['uid'] in avg_pos_uid:
item['avg_pos_uid'] = avg_pos_uid[item['uid']]
else:
vals = avg_pos_uid.values()
item['avg_pos_uid'] = sum(vals) / float(len(vals))
if item['qid'] in avg_pos_qid:
item['avg_pos_qid'] = avg_pos_qid[item['qid']]
else:
vals = avg_pos_qid.values()
item['avg_pos_qid'] = sum(vals) / float(len(vals))
# Response position can be longer than length of question
if item['avg_pos_uid'] > item['q_length']:
item['avg_pos_uid'] = item['q_length']
if item['avg_pos_qid'] > item['q_length']:
item['avg_pos_qid'] = item['q_length']
Explanation: Feature functions(private)
End of explanation
def featurize(bd, group, sign_val=None, extra=None):
# Basic features
# qid(string), uid(string), position(float)
# answer'(string), 'potistion'(float), 'qid'(string), 'uid'(string)
X = _feat_basic(bd, group=group)
# Some extra features
if extra:
for func_name in extra:
func_name = '_feat_' + func_name
if func_name in ['_feat_avg_pos']:
globals()[func_name](X, bd, group=group, sign_val=sign_val)
else:
globals()[func_name](X)
if group == 'train':
y = []
for item in X:
y.append(item['position'])
del item['position']
return X, y
elif group == 'test':
return X
else:
raise ValueError(group, 'is not the proper type')
Explanation: Feature function(public)
End of explanation
import csv
def select(data, keys):
unwanted = data[0].keys() - keys
for item in data:
for unwanted_key in unwanted:
del item[unwanted_key]
return data
def write_result(test_set, predictions, file_name='guess.csv'):
predictions = sorted([[id, predictions[index]] for index, id in enumerate(test_set.keys())])
predictions.insert(0,["id", "position"])
with open(file_name, "w") as fp:
writer = csv.writer(fp, delimiter=',')
writer.writerows(predictions)
Explanation: Utility functions
End of explanation
%matplotlib inline
import numpy as np
from scipy import linalg
import matplotlib.pyplot as plt
import matplotlib as mpl
from sklearn import mixture
def plot_gmm(X, models, n_components, covariance_type='diag',
figsize=(10, 20), suptitle=None, xlabel=None, ylabel=None):
color_iter = ['r', 'g', 'b', 'c', 'm', 'y', 'k', 'gray', 'pink', 'lime']
plt.figure(figsize=figsize)
plt.suptitle(suptitle, fontsize=20)
for i, model in enumerate(models):
mm = getattr(mixture, model)(n_components=n_components,
covariance_type=covariance_type)
mm.fit(X_pos_qid)
Y = mm.predict(X_pos_qid)
plt.subplot(len(models), 1, 1 + i)
for i, color in enumerate(color_iter):
plt.scatter(X_pos_qid[Y == i, 0], X_pos_qid[Y == i, 1], .7, color=color)
plt.title(model, fontsize=15)
plt.xlabel(xlabel, fontsize=12)
plt.ylabel(ylabel, fontsize=12)
plt.grid()
plt.show()
from collections import UserDict
import numpy as np
class DictDict(UserDict):
def __init__(self, bd):
UserDict.__init__(self)
self._set_bd(bd)
def sub_keys(self):
return self[list(self.keys())[0]].keys()
def select(self, sub_keys):
vals = []
for key in self:
vals.append([self[key][sub_key] for sub_key in sub_keys])
return np.array(vals)
def sub_append(self, sub_key, values):
for index, key in enumerate(self):
self[key][sub_key] = values[index]
class Users(DictDict):
def _set_bd(self, bd):
pos_uid, _, _ = _get_pos(bd['train'], sign_val=None)
for key in pos_uid:
u = np.array(pos_uid[key])
ave_pos_uid = sum(abs(u)) / float(len(u))
acc_ratio_uid = len(u[u > 0]) / float(len(u))
self[key] = {'ave_pos_uid': ave_pos_uid,
'acc_ratio_uid': acc_ratio_uid}
class Questions(DictDict):
def _set_bd(self, bd):
_, pos_qid, _ = _get_pos(bd['train'], sign_val=None)
for key in pos_qid:
u = np.array(pos_qid[key])
ave_pos_qid = sum(abs(u)) / float(len(u))
acc_ratio_qid = len(u[u > 0]) / float(len(u))
self[key] = bd['questions'][key]
self[key]['ave_pos_qid'] = ave_pos_qid
self[key]['acc_ratio_qid'] = acc_ratio_qid
users = Users(load_buzz())
questions = Questions(load_buzz())
X_pos_uid = users.select(['ave_pos_uid', 'acc_ratio_uid'])
X_pos_qid = questions.select(['ave_pos_qid', 'acc_ratio_qid'])
plot_gmm(X_pos_uid,
models=['GMM', 'VBGMM', 'DPGMM'],
n_components=8,
covariance_type='diag',
figsize=(10, 20),
suptitle='Classifying users',
xlabel='abs(position)',
ylabel='accuracy ratio')
plot_gmm(X_pos_qid,
models=['GMM', 'VBGMM', 'DPGMM'],
n_components=8,
covariance_type='diag',
figsize=(10, 20),
suptitle='Classifying questions',
xlabel='abs(position)',
ylabel='accuracy ratio')
# Question category
n_components = 8
gmm = mixture.DPGMM(n_components=n_components, covariance_type='diag')
gmm.fit(X_pos_qid)
pred_cat_qid = gmm.predict(X_pos_qid)
plt.hist(pred_cat_qid, bins=50, facecolor='g', alpha=0.75)
plt.xlabel("Category number")
plt.ylabel("Count")
plt.title("Question Category: " + str(n_components) + " categories")
plt.grid(True)
plt.show()
# User category
n_components = 8
gmm = mixture.DPGMM(n_components=n_components, covariance_type='diag')
gmm.fit(X_pos_uid)
pred_cat_uid = gmm.predict(X_pos_uid)
plt.hist(pred_cat_uid, bins=50, facecolor='g', alpha=0.75)
plt.xlabel("Category number")
plt.ylabel("Count")
plt.title("User Category: " + str(n_components) + " categories")
plt.grid(True)
plt.show()
from collections import Counter
users.sub_append('cat_uid', [str(x) for x in pred_cat_uid])
questions.sub_append('cat_qid', [str(x) for x in pred_cat_qid])
# to get most frequent cat for some test data which do not have ids in train set
most_pred_cat_uid = Counter(pred_cat_uid).most_common(1)[0][0]
most_pred_cat_qid = Counter(pred_cat_qid).most_common(1)[0][0]
print(most_pred_cat_uid)
print(most_pred_cat_qid)
print(users[1])
print(questions[1])
Explanation: GMM
Classifying questions
features: avg_pos, accuracy rate
End of explanation
regression_keys = ['category', 'q_length', 'qid', 'uid', 'answer', 'avg_pos_uid', 'avg_pos_qid']
X_train, y_train = featurize(load_buzz(), group='train', sign_val=None, extra=['sign_val', 'avg_pos'])
X_train = select(X_train, regression_keys)
def transform(X):
for index, item in enumerate(X):
uid = int(item['uid'])
qid = int(item['qid'])
# uid
if int(uid) in users:
item['acc_ratio_uid'] = users[uid]['acc_ratio_uid']
item['cat_uid'] = users[uid]['cat_uid']
else:
print('Not found uid:', uid)
acc = users.select(['acc_ratio_uid'])
item['acc_ratio_uid'] = sum(acc) / float(len(acc))
item['cat_uid'] = most_pred_cat_uid
# qid
if int(qid) in questions:
item['acc_ratio_qid'] = questions[qid]['acc_ratio_qid']
item['cat_qid'] = questions[qid]['cat_qid']
else:
print('Not found qid:', qid)
acc = questions.select(['acc_ratio_qid'])
item['acc_ratio_qid'] = sum(acc) / float(len(acc))
item['cat_qid'] = most_pred_cat_qid
item['uid'] = str(uid)
item['qid'] = str(qid)
transform(X_train)
X_train[1]
from sklearn.feature_extraction import DictVectorizer
vec = DictVectorizer()
X_train_dict_vec = vec.fit_transform(X_train)
import multiprocessing
from sklearn import linear_model
from sklearn.cross_validation import train_test_split, cross_val_score
import math
from numpy import abs, sqrt
regressor_names = """
LinearRegression
LassoCV
ElasticNetCV
"""
print ("=== Linear Cross validation RMSE scores:")
for regressor in regressor_names.split():
scores = cross_val_score(getattr(linear_model, regressor)(normalize=True, n_jobs=multiprocessing.cpu_count()-1),
X_train_dict_vec, y_train,
cv=2,
scoring='mean_squared_error'
)
print (regressor, sqrt(abs(scores)).mean())
Explanation: B. Modeling
Select model
End of explanation
regression_keys = ['category', 'q_length', 'qid', 'uid', 'answer', 'avg_pos_uid', 'avg_pos_qid']
X_train, y_train = featurize(load_buzz(), group='train', sign_val=None, extra=['avg_pos'])
X_train = select(X_train, regression_keys)
X_test = featurize(load_buzz(), group='test', sign_val=None, extra=['avg_pos'])
X_test = select(X_test, regression_keys)
transform(X_train)
transform(X_test)
X_train[1]
X_test[1]
vec = DictVectorizer()
vec.fit(X_train + X_test)
X_train = vec.transform(X_train)
X_test = vec.transform(X_test)
regressor = linear_model.ElasticNetCV(n_jobs=3, normalize=True)
regressor.fit(X_train, y_train)
print(regressor.coef_)
print(regressor.alpha_)
predictions = regressor.predict(X_test)
Explanation: Training and testing model
End of explanation
write_result(load_buzz()['test'], predictions)
Explanation: Writing result
End of explanation |
15,706 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Recreate Figure 5
The purpose of this notebook is to combine all the digital gene expression data for the retina cells, downloaded from the Gene Expression Omnibus using the accession number GSE63473.
Step1: We'll import the macosko2015 package, which contains a URL pointing to where we've created clean data
Step2: We've created a subset of the data that contains all the cells from batch 1, and only the differentially expressed genes. This is still pretty big!!
Step3: For later, let's also make a logged version the expression matrix
Step4: Now let's read the cell metadata
Step5: Figure 5a
For this figure, the caption is
Step6: Exercise 1
Make a subset of the data called figure5a_expression that contains only the genes from figure5a_genes_upper - Remember, genes are columns! How do you select only certain columns for pandas dataframes?
Step7:
Step8: We will use a function called groupby to grab the cells from each cluster, and use .mean()
Step9: Tidy data
To make this "punchcard" style figure, we will use the program altair. To use this program, we need to reshape our data into a tall, "tidy" data format, where each row is an observation, and each column is a variable
Step10: Now let's use the function reset_index, which will take everything that was an index and now make it a column
Step11: But our column names aren't so nice anymore ... let's create a dict to map the old column name to a new, nicer one.
Step12: We can use the dataframe function .rename() to rename our columns
Step13: Let's also add a log expression column, just in case
Step14: Exercise 2
Make the same plot, but use the logged expression column
Step15:
Step16: Bonus exercise (if you're feeling ahead)
Try the same thing, but with
Step17:
Step18: Now that we have the genes we want, let's get the cells we want!
We can use the range function in Python to get numbers in order
Step19: Now this returns a "range object" which means Python is being lazy and not telling us what's inside. To force Python into action, we can use list
Step20: So this is getting us all numbers from 0 to 4 (not including 5). We can use this group of numbers to subset our cell metadata! Let's make a variable called rows that contains True/False values telling us whether the cells are in that cluster number
Step21: Now let's use our rows variable to subset cell_metadata
Step22: Let's make sure we only have the clusters we need
Step23: This is kinda out of order so let's sort it with the sorted function
Step24: Exercise 4
Make a subset of the cell metadata, called figure5b_cell_metadata, that contains only the cells in the clusters shown in figure 5b.
Step25:
Step26: Now we want to get only the cells from these clusters. To do that, we would use .index
Step27: Exercise 5
Make a subset of gene expression called figure5b_expression using the figure5b_genes and figure5b_cell_metadata. Hint
Step28:
Step29: Again, we'll have to make a tidy version of the data to be able to make the violinplots
Exercise 6
Make a tidy version of the figure5b_expression called figure5b_tidy
Add a column to the figure5b_tidy dataframe that contains the log10 expression data
Step30: If you want, you could also create a function to simplify the tidying and logging
Step31: Now that you have your tidy data, we need to add the cell metadata. We will use .join, and specify to use the "barcode" column of figure5b_tidy
Step32: We can make violinplots using seaborn's sns.violinplot, but that will show us the expression across all genes
Step33: The below command specifies "expression" as the x-axis value (first argument), and "cluster_id" as the y-axis value (second argument). Then we say that we want the program to look at the data in our dataframe called figure5b_tidy_clusters.
Step34: Using sns.FacetGrid to make multiple violinplots
Since we want to make a separate violinplot for each gene, we need to take multiple steps. We will use the function sns.FacetGrid to make mini-plots of each gene. If you want to read more about plotting on data-aware grid in Python, check out the seaborn docs on grid plotting.
Let's take a look at the documentation.
Step35: Exercise 7
What is the first argument of FacetGrid? How can we specify that we want each column to be a gene symbol?
Use sns.FacetGrid on our figure5b_tidy_clusters data, specifying "gene_symbol" as the column in our data to use as columns in the grid.
Step36:
Step37: I have no idea which gene is where .. so let's add some titles with the convenient function g.set_titles
Step38: Now let's add our violinplots, using map on the facetgrid. Again, we'll use "expression" as the x-value (first argument) and "cluster_id" as the second argument.
Step39: Hmm, all of these genes are on totally different scales .. how can we make it so that each gene is scaled to its own minimum and maximum?
Exercise 8
Read the documentation for sns.FacetGrid and figure out how to turn of the shared values on the x axis
Step40:
Step41: Okay these violinplots are still pretty weird looking. In the paper, they scale the violinplots to all be the same width, and the lines are much thinner.
Let's look at the documentation of sns.violinplot and see what we can do.
Step42: Looks like we can set the scale variable to be "width" and let's try setting the linewidth to 1.
Step43: Much better! There's a few more things we need to tweak in sns.violinplot. Let's get rid of the dotted thing on the inside, and only show the data exactly where it's valued - the ends of the violins should be square, not pointy.
Exercise 9
Read the documentation of sns.violinplot and add to the options to cut off the violinplots at the data bounds, and have nothing on the inside of the violins.
Step44:
Step45: Okay one more thing on the violinplots ... they had a different color for every cluster, so let's do the same thing too. Right now they're all blue but let's make them all a different color. Since we have so many categories (21), and ColorBrewer doesn't have setups for when there are more than 10 colors, we need to use a different set of colors. We'll use the "husl" colormap, which uses perception research to make colormaps where no one color is too bright or too dark. Read more about it here
Here is an example of what the colormap looks like
Step46: Let's add palette="husl" to our violinplot command and see what it does
Step47: Now let's work on resizing the plots so they're each narrower. We'll add the following three options to `sns.FacetGrid to accomplish this
Step48: Hmm.. now we can see that the clusters aren't in numeric order. Is there an option in sns.violinplot that we can specify the order of the values?
Exercise 10
Read the sns.violinplot documentation to figure out the keyword to use to specify the order of the clusters
Make a sorted list of the unique cluster ids
Plot the violinplots on the FacetGrid
Step49:
Step50: Okay one last thing .. let's turn off the "expression" label at the bottom and the value scales (since right now we're just looking comparatively) with
Step51: Exercise 11
Take a step back ... does this look like the actual Figure 5b from the paper? Do you see the bimodality that they claim?
Why or why not?
YOUR ANSWER HERE
We don't see the bimodality here because they used loggged data, not the raw counts.
Exercise 12
Use logged expression (which column was this in our data? Check figure5b_tidy_clusters.head() to remind yourself) on the facetgrid of violinplots we created.
Step52:
Step53: Since we worked so hard to get these lines, let's write them as a function to a file called plotting_code.py. We'll move all the options we fiddled with into the arguments of our violinplot_grid function.
Notice we have to add import seaborn as sns into our file. That's because the file must be standalone and include everything it needs, including all the modules.
Step54: We can cat (short for concatenate) the file, which means dump the contents out to the output
Step55: Now we see more of the "bimodality" they talk about in the paper
Figure 5c
Figure 5c is all you!
Exercise 13
Use all the amacrine cells, but this time use all the genes from Figure 5c.
Note
Step56:
Step57: Let's take a step back ... What does this all mean?
For each amacrine cell cluster, they identified one gene that was mutually exclusively detected using single-cell RNA-seq. And then, in Figure 5F, they showed that indeed, one of the markers is expressed in some amacrine cells but not others.
But, single-cell RNA seq is plagued with gene dropout -- randomly, one gene will be detected in one cell but not another.
What if there was a way that we could detect the genes that dropped out?
Compressed Sensing/Robust PCA
Compressed sensing is a field where they think about problems like, "if we only get 10% of the signal, and it's super noisy, could we reconstruct 100% of what was originally there?(sound familiar??) Turns out yes, you can! Robust PCA is one of the algorithms in compressed sensing which models the data $X$ as the sum of a low-rank matrix $L$ and a sparse matrix $S$.
$X = L + S$
$X$ is the expression data
$L$ is the low rank data. In our case, this essentially becomes a smoothed version of the expression matrix
$S$ is the sparse data. In our case, this captures the stochastic noise in the data. Some of this data may be biological, it is true. But largely, this data seems to carry the technical noise.
Robust PCA is often used in video analysis to find anomalies. In their case, $L$ is the background and $S$ is the "anomalies" (people walking around).
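For illustration, here is a minimal sketch of how such a decomposition could be computed (principal component pursuit via an inexact augmented Lagrangian); this is an assumption-laden illustration, not the implementation actually used here:
# Minimal robust PCA sketch: X ~ L (low rank) + S (sparse); illustrative only
import numpy as np

def shrink(M, tau):
    # soft-thresholding, the proximal operator of the L1 norm
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svd_threshold(M, tau):
    # singular value thresholding, the proximal operator of the nuclear norm
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def robust_pca(X, max_iter=500, tol=1e-7):
    lam = 1.0 / np.sqrt(max(X.shape))
    mu = X.size / (4.0 * np.abs(X).sum())
    S = np.zeros_like(X)
    Y = np.zeros_like(X)
    for _ in range(max_iter):
        L = svd_threshold(X - S + Y / mu, 1.0 / mu)
        S = shrink(X - L + Y / mu, lam / mu)
        Y = Y + mu * (X - L - S)
        if np.linalg.norm(X - L - S) <= tol * np.linalg.norm(X):
            break
    return L, S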
Cluster on raw (log2) amacrine cell expression
To understand what Robust PCA does to biological data, we first need to understand what the raw data looks like. Let's look at the gene expression in only amacrine cells, with the RAW data
Step58: Cluster on Robust PCA'd amacrine cell expression (lowrank)
Step59: Figure 5b using Robust PCA data
Step60: Looks like a lot of the signal from the genes was recovered!
Robust PCA data for Figure 5c
Subset the genes on only figure 5c | Python Code:
import altair
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
# Set the plotting style as for a "paper" (smaller labels)
# and using a white background with a grid ("whitegrid")
sns.set(context='paper', style='whitegrid')
%matplotlib inline
Explanation: Recreate Figure 5
The purpose of this notebook is to combine all the digital gene expression data for the retina cells, downloaded from the Gene Expression Omnibus using the accession number GSE63473.
End of explanation
import macosko2015
macosko2015.BASE_URL
Explanation: We'll import the macosko2015 package, which contains a URL pointing to where we've created clean data
End of explanation
urlname = macosko2015.BASE_URL + 'differential_clusters_expression.csv'
urlname
expression = pd.read_csv(urlname, index_col=0)
print(expression.shape)
expression.head()
Explanation: We've created a subset of the data that contains all the cells from batch 1, and only the differentially expressed genes. This is still pretty big!!
End of explanation
expression_log10 = np.log10(expression + 1)
print(expression_log10.shape)
expression_log10.head()
Explanation: For later, let's also make a logged version the expression matrix:
End of explanation
urlname = macosko2015.BASE_URL + 'differential_clusters_cell_metadata.csv'
cell_metadata = pd.read_csv(urlname, index_col=0)
cell_metadata.head()
Explanation: Now let's read the cell metadata
End of explanation
def upperizer(genes):
return [x.upper() for x in genes]
figure5a_genes = ['Nrxn2', 'Atp1b1', 'Pax6', 'Slc32a1', 'Slc6a1', 'Elavl3']
figure5a_genes_upper = upperizer(figure5a_genes)
figure5a_genes_upper
Explanation: Figure 5a
For this figure, the caption is:
(A) Pan-amacrine markers. The expression levels of the six genes identified (Nrxn2, Atp1b1, Pax6, Slc32a1, Slc6a1, Elavl3) are represented as dot plots across all 39 clusters; larger dots indicate broader expression within the cluster; deeper red denotes a higher expression level.
I wonder, how did they aggregate their expression per cluster? Mean, median, or mode? Did they log their data? We'll find out :)
You may have noticed that while the gene names are Captialcase in the paper, they're all uppercase in the data. So first, we'll define a function called upperizer that will make our gene names uppercase.
Then, we'll make a list of genes for Figure 5a
End of explanation
# YOUR CODE HERE
Explanation: Exercise 1
Make a subset of the data called figure5a_expression that contains only the genes from figure5a_genes_upper - Remember, genes are columns! How do you select only certain columns for pandas dataframes?
End of explanation
figure5a_expression = expression[figure5a_genes_upper]
print(figure5a_expression.shape)
figure5a_expression.head()
Explanation:
End of explanation
figure5a_expression_mean = figure5a_expression.groupby(cell_metadata['cluster_n'], axis=0).mean()
print(figure5a_expression_mean.shape)
figure5a_expression_mean.head()
Explanation: We will use a function called groupby to grab the cells from each cluster, and use .mean()
End of explanation
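# Aside (hypothetical toy data, not part of the retina dataset): groupby also accepts
# an *external* Series that shares the DataFrame's index, which is exactly what we
# did above with cell_metadata['cluster_n'].
toy = pd.DataFrame({'GENE_A': [1, 2, 3, 4]}, index=['cell1', 'cell2', 'cell3', 'cell4'])
toy_groups = pd.Series(['a', 'a', 'b', 'b'], index=toy.index)
toy.groupby(toy_groups).mean()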
figure5a_expression_mean_unstack = figure5a_expression_mean.unstack()
print(figure5a_expression_mean_unstack.shape)
figure5a_expression_mean_unstack.head()
Explanation: Tidy data
To make this "punchcard" style figure, we will use the program altair. To use this program, we need to reshape our data into a tall, "tidy" data format, where each row is an observation, and each column is a variable:
Source: http://r4ds.had.co.nz/tidy-data.html
First, we will unstack our data, which will make a very long column of gene expression, where the gene name and cluster number is the index.
End of explanation
figure5a_expression_tidy = figure5a_expression_mean_unstack.reset_index()
print(figure5a_expression_tidy.shape)
figure5a_expression_tidy.head()
Explanation: Now let's use the function reset_index, which will take everything that was an index and now make it a column:
End of explanation
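# Aside (hypothetical toy data): what unstack() + reset_index() do to a small "wide" table.
toy_wide = pd.DataFrame([[1, 2], [3, 4]],
                        index=['cluster_1', 'cluster_2'],
                        columns=['GENE_A', 'GENE_B'])
# unstack() makes one long Series indexed by (gene, cluster); reset_index() then turns
# that index into ordinary columns, giving the tall "tidy" layout used below.
toy_wide.unstack().reset_index()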
renamer = {'level_0': 'gene_symbol', 0: 'expression'}
renamer
Explanation: But our column names aren't so nice anymore ... let's create a dict to map the old column name to a new, nicer one.
End of explanation
figure5a_expression_tidy = figure5a_expression_tidy.rename(columns=renamer)
print(figure5a_expression_tidy.shape)
figure5a_expression_tidy.head()
Explanation: We can use the dataframe function .rename() to rename our columns:
End of explanation
figure5a_expression_tidy['expression_log'] = np.log10(figure5a_expression_tidy['expression'] + 1)
print(figure5a_expression_tidy.shape)
figure5a_expression_tidy.head()
altair.Chart(figure5a_expression_tidy).mark_circle().encode(
size='expression', x=altair.X('gene_symbol'), y=altair.Y('cluster_n'))
Explanation: Let's also add a log expression column, just in case :)
End of explanation
# YOUR CODE HERE
Explanation: Exercise 2
Make the same plot, but use the logged expression column
End of explanation
altair.Chart(figure5a_expression_tidy).mark_circle().encode(
size='expression_log', x=altair.X('gene_symbol'), y=altair.Y('cluster_n'))
Explanation:
End of explanation
# YOUR CODE HERE
# YOUR CODE HERE
# YOUR CODE HERE
Explanation: Bonus exercise (if you're feeling ahead)
Try the same thing, but with:
Median within clusters, raw counts
Median within clusters, log expression
Figure 5b
Here, we will make violinplots of specific genes within our dataset.
The caption of this figure is:
(B) Identification of known amacrine types among clusters. The 21 amacrine clusters consisted of 12 GABAergic, five glycinergic, one glutamatergic, and three non-GABAergic non-glycinergic clusters. Starburst amacrines were identified in cluster 3 by their expression of Chat; excitatory amacrines by expression of Slc17a8; A-II amacrines by their expression of Gjd2; and SEG amacrine neurons by their expression of Ebf3.
Exercise 3
Make a subset of genes called figure5b_genes using the gene names from the paper figure.
You may want to use the upperizer() function we used before
Use as many code cells as you need
End of explanation
figure5b_genes = ['Chat', "Gad1", 'Gad2', 'Slc17a8', 'Slc6a9', 'Gjd2', 'Gjd2', 'Ebf3']
figure5b_genes_upper = upperizer(figure5b_genes)
figure5b_genes_upper
Explanation:
End of explanation
range(5)
Explanation: Now that we have the genes we want, let's get the cells we want!
We can use the range function in Python to get numbers in order
End of explanation
list(range(5))
Explanation: Now this returns a "range object" which means Python is being lazy and not telling us what's inside. To force Python into action, we can use list:
End of explanation
rows = cell_metadata.cluster_n.isin(range(5))
rows
Explanation: So this is getting us all numbers from 0 to 4 (not including 5). We can use this group of numbers to subset our cell metadata! Let's make a variable called rows that contains True/False values telling us whether the cells are in that cluster number:
End of explanation
print('cell_metadata.shape', cell_metadata.shape)
cell_metadata_subset = cell_metadata.loc[rows]
print('cell_metadata_subset.shape', cell_metadata_subset.shape)
cell_metadata_subset.head()
Explanation: Now let's use our rows variable to subset cell_metadata
End of explanation
cell_metadata_subset.cluster_n.unique()
Explanation: Let's make sure we only have the clusters we need:
End of explanation
sorted(cell_metadata_subset.cluster_n.unique())
Explanation: This is kinda out of order so let's sort it with the sorted function:
End of explanation
# YOUR CODE HERE
# YOUR CODE HERE
# YOUR CODE HERE
Explanation: Exercise 4
Make a subset of the cell metadata, called figure5b_cell_metadata, that contains only the cells in the clusters shown in figure 5b.
End of explanation
rows = cell_metadata.cluster_n.isin(range(3, 24))
figure5b_cell_metadata = cell_metadata.loc[rows]
print(figure5b_cell_metadata.shape)
figure5b_cell_metadata.head()
sorted(figure5b_cell_metadata.cluster_n.unique())
Explanation:
End of explanation
figure5b_cell_metadata.index
Explanation: Now we want to get only the cells from these clusters. To do that, we would use .index
End of explanation
# YOUR CODE HERE
Explanation: Exercise 5
Make a subset of gene expression called figure5b_expression using the figure5b_genes and figure5b_cell_metadata. Hint: Use .loc on expression
End of explanation
figure5b_expression = expression.loc[figure5b_cell_metadata.index, figure5b_genes_upper]
print(figure5b_expression.shape)
figure5b_expression.head()
Explanation:
End of explanation
figure5b_cell_metadata.index
figure5b_expression.index
figure5b_tidy = figure5b_expression.unstack().reset_index()
figure5b_tidy = figure5b_tidy.rename(columns={'level_1': 'barcode', 'level_0': 'gene_symbol', 0: 'expression'})
figure5b_tidy['expression_log'] = np.log10(figure5b_tidy['expression'] + 1)
print(figure5b_tidy.shape)
figure5b_tidy.head()
Explanation: Again, we'll have to make a tidy version of the data to be able to make the violinplots
Exercise 6
Make a tidy version of the figure5b_expression called figure5b_tidy
Add a column to the figure5b_tidy dataframe that contains the log10 expression data
End of explanation
def tidify_and_log(data):
tidy = data.unstack().reset_index()
tidy = tidy.rename(columns={'level_1': 'barcode', 'level_0': 'gene_symbol', 0: 'expression'})
tidy['expression_log'] = np.log10(tidy['expression'] + 1)
return tidy
Explanation: If you want, you could also create a function to simplify the tidying and logging:
End of explanation
figure5b_tidy_clusters = figure5b_tidy.join(figure5b_cell_metadata, on='barcode')
print(figure5b_tidy_clusters.shape)
figure5b_tidy_clusters.head()
Explanation: Now that you have your tidy data, we need to add the cell metadata. We will use .join, and specify to use the "barcode" column of figure5b_tidy
End of explanation
sns.violinplot?
Explanation: We can make violinplots using seaborn's sns.violinplot, but that will show us the expression across all genes :(
End of explanation
sns.violinplot('expression', 'cluster_id', data=figure5b_tidy_clusters)
Explanation: The below command specifies "expression" as the x-axis value (first argument), and "cluster_id" as the y-axis value (second argument). Then we say that we want the program to look at the data in our dataframe called figure5b_tidy_clusters.
End of explanation
sns.FacetGrid?
Explanation: Using sns.FacetGrid to make multiple violinplots
Since we want to make a separate violinplot for each gene, we need to take multiple steps. We will use the function sns.FacetGrid to make mini-plots of each gene. If you want to read more about plotting on data-aware grid in Python, check out the seaborn docs on grid plotting.
Let's take a look at the documentation.
End of explanation
# YOUR CODE HERE
Explanation: Exercise 7
What is the first argument of FacetGrid? How can we specify that we want each column to be a gene symbol?
Use sns.FacetGrid on our figure5b_tidy_clusters data, specifying "gene_symbol" as the column in our data to use as columns in the grid.
End of explanation
facetgrid = sns.FacetGrid(figure5b_tidy_clusters, col='gene_symbol')
facetgrid
Explanation:
End of explanation
facetgrid = sns.FacetGrid(figure5b_tidy_clusters, col='gene_symbol')
facetgrid.set_titles('{col_name}')
Explanation: I have no idea which gene is where ... so let's add some titles with the convenient facetgrid.set_titles method
End of explanation
facetgrid = sns.FacetGrid(figure5b_tidy_clusters, col='gene_symbol')
facetgrid.map(sns.violinplot, 'expression', 'cluster_id')
facetgrid.set_titles('{col_name}')
Explanation: Now let's add our violinplots, using map on the facetgrid. Again, we'll use "expression" as the x-value (first argument) and "cluster_id" as the second argument.
End of explanation
# YOUR CODE HERE
# YOUR CODE HERE
Explanation: Hmm, all of these genes are on totally different scales .. how can we make it so that each gene is scaled to its own minimum and maximum?
Exercise 8
Read the documentation for sns.FacetGrid and figure out how to turn of the shared values on the x axis
End of explanation
facetgrid = sns.FacetGrid(figure5b_tidy_clusters, col='gene_symbol', sharex=False)
facetgrid.map(sns.violinplot, 'expression', 'cluster_id')
facetgrid.set_titles('{col_name}')
Explanation:
End of explanation
sns.violinplot?
Explanation: Okay these violinplots are still pretty weird looking. In the paper, they scale the violinplots to all be the same width, and the lines are much thinner.
Let's look at the documentation of sns.violinplot and see what we can do.
End of explanation
facetgrid = sns.FacetGrid(figure5b_tidy_clusters, col='gene_symbol', sharex=False)
facetgrid.map(sns.violinplot, 'expression', 'cluster_id', scale='width', linewidth=1)
facetgrid.set_titles('{col_name}')
Explanation: Looks like we can set the scale variable to be "width" and let's try setting the linewidth to 1.
End of explanation
# YOUR CODE HERE
Explanation: Much better! There are a few more things we need to tweak in sns.violinplot. Let's get rid of the dotted markings on the inside, and only show the data exactly where it's valued - the ends of the violins should be square, not pointy.
Exercise 9
Read the documentation of sns.violinplot and add to the options to cut off the violinplots at the data bounds, and have nothing on the inside of the violins.
End of explanation
facetgrid = sns.FacetGrid(figure5b_tidy_clusters, col='gene_symbol',
gridspec_kws=dict(hspace=0, wspace=0), sharex=False)
facetgrid.map(sns.violinplot, 'expression', 'cluster_id', scale='width', linewidth=1, inner=None, cut=True)
facetgrid.set_titles('{col_name}')
Explanation:
End of explanation
sns.palplot(sns.color_palette('husl', n_colors=50))
Explanation: Okay one more thing on the violinplots ... they had a different color for every cluster, so let's do the same thing too. Right now they're all blue but let's make them all a different color. Since we have so many categories (21), and ColorBrewer doesn't have setups for when there are more than 10 colors, we need to use a different set of colors. We'll use the "husl" colormap, which uses perception research to make colormaps where no one color is too bright or too dark. Read more about it here
Here is an example of what the colormap looks like:
End of explanation
facetgrid = sns.FacetGrid(figure5b_tidy_clusters, col='gene_symbol', sharex=False)
facetgrid.map(sns.violinplot, 'expression', 'cluster_id', scale='width',
linewidth=1, inner=None, cut=True, palette='husl')
facetgrid.set_titles('{col_name}')
Explanation: Let's add palette="husl" to our violinplot command and see what it does:
End of explanation
facetgrid = sns.FacetGrid(figure5b_tidy_clusters, col='gene_symbol',
size=4, aspect=0.25, gridspec_kws=dict(wspace=0),
sharex=False)
facetgrid.map(sns.violinplot, 'expression', 'cluster_id', scale='width',
linewidth=1, palette='husl', inner=None, cut=True)
facetgrid.set_titles('{col_name}')
Explanation: Now let's work on resizing the plots so they're each narrower. We'll add the following three options to sns.FacetGrid to accomplish this:
size=4 (default: size=3) - Make the relative size of the plot bigger
aspect=0.25 (default: aspect=1) - Make the width of the plot be 1/4 the size of the height
gridspec_kws=dict(wspace=0) - Set the width between plots to be zero
End of explanation
# YOUR CODE HERE
# YOUR CODE HERE
# YOUR CODE HERE
Explanation: Hmm.. now we can see that the clusters aren't in numeric order. Is there an option in sns.violinplot that we can specify the order of the values?
Exercise 10
Read the sns.violinplot documentation to figure out the keyword to use to specify the order of the clusters
Make a sorted list of the unique cluster ids
Plot the violinplots on the FacetGrid
End of explanation
cluster_order = figure5b_tidy_clusters.cluster_id.sort_values().unique()
cluster_order
facetgrid = sns.FacetGrid(figure5b_tidy_clusters, col='gene_symbol', size=4, aspect=0.25,
gridspec_kws=dict(wspace=0), sharex=False)
facetgrid.map(sns.violinplot, 'expression', 'cluster_id', scale='width',
linewidth=1, palette='husl', inner=None, cut=True, order=cluster_order)
facetgrid.set_titles('{col_name}')
Explanation:
End of explanation
facetgrid = sns.FacetGrid(figure5b_tidy_clusters, col='gene_symbol', size=4, aspect=0.25,
gridspec_kws=dict(wspace=0), sharex=False)
facetgrid.map(sns.violinplot, 'expression', 'cluster_id', scale='width',
linewidth=1, palette='husl', inner=None, cut=True, order=cluster_order)
facetgrid.set(xlabel='', xticks=[])
facetgrid.set_titles('{col_name}')
Explanation: Okay one last thing .. let's turn off the "expression" label at the bottom and the value scales (since right now we're just looking comparatively) with:
facetgrid.set(xlabel='', xticks=[])
End of explanation
# YOUR CODE HERE
# YOUR CODE HERE
Explanation: Exercise 11
Take a step back ... does this look like the actual Figure 5b from the paper? Do you see the bimodality that they claim?
Why or why not?
YOUR ANSWER HERE
We don't see the bimodality here because they used logged data, not the raw counts.
Exercise 12
Use logged expression (which column was this in our data? Check figure5b_tidy_clusters.head() to remind yourself) on the facetgrid of violinplots we created.
End of explanation
figure5b_tidy_clusters.head()
facetgrid = sns.FacetGrid(figure5b_tidy_clusters, col='gene_symbol', size=4, aspect=0.25,
gridspec_kws=dict(wspace=0), sharex=False)
facetgrid.map(sns.violinplot, 'expression_log', 'cluster_id', scale='width',
linewidth=1, palette='husl', inner=None, cut=True, order=cluster_order)
facetgrid.set(xlabel='', xticks=[])
facetgrid.set_titles('{col_name}')
Explanation:
End of explanation
%%file plotting_code.py
import seaborn as sns
def violinplot_grid(tidy, col='gene_symbol', size=4, aspect=0.25, gridspec_kws=dict(wspace=0),
sharex=False, scale='width', linewidth=1, palette='husl', inner=None,
cut=True, order=None):
facetgrid = sns.FacetGrid(tidy, col=col, size=size, aspect=aspect,
gridspec_kws=gridspec_kws, sharex=sharex)
facetgrid.map(sns.violinplot, 'expression_log', 'cluster_id', scale=scale,
linewidth=linewidth, palette=palette, inner=inner, cut=cut, order=order)
facetgrid.set(xlabel='', xticks=[])
facetgrid.set_titles('{col_name}')
Explanation: Since we worked so hard to get these lines, let's write them as a function to a file called plotting_code.py. We'll move all the options we fiddled with into the arguments of our violinplot_grid function.
Notice we have to add import seaborn as sns into our file. That's because the file must be standalone and include everything it needs, including all the modules.
End of explanation
cat plotting_code.py
import plotting_code
plotting_code.violinplot_grid(figure5b_tidy_clusters, order=cluster_order)
Explanation: We can cat (short for concatenate) the file, which means dump the contents out to the output:
End of explanation
# YOUR CODE HERE
# YOUR CODE HERE
# YOUR CODE HERE
# YOUR CODE HERE
# YOUR CODE HERE
Explanation: Now we see more of the "bimodality" they talk about in the paper
Figure 5c
Figure 5c is all you!
Exercise 13
Use all the amacrine cells, but this time use all the genes from Figure 5c.
Note: You may not have expression in EVERY gene ... this is a subset of the entire dataset (only run 1!) that keeps only the differentially expressed genes, so some genes from the figure may be missing.
End of explanation
figure5c_genes = ['Gng7', 'Gbx2', 'Tpbg', 'Slitrk6', 'Maf', 'Tac2', 'Loxl2', 'Vip', 'Glra1',
'Igfbp5', 'Pdgfra', 'Slc35d3', 'Car3', 'Fgf1', 'Igf1', 'Col12a1', 'Ptgds',
'Ppp1r17', 'Cck', 'Shisa9', 'Pou3f3']
figure5c_genes_upper = upperizer(figure5c_genes)
figure5c_expression = expression.loc[figure5b_cell_metadata.index, figure5c_genes_upper]
print(figure5c_expression.shape)
figure5c_expression.head()
figure5c_genes_upper
figure5c_tidy = tidify_and_log(figure5c_expression)
print(figure5c_tidy.shape)
figure5c_tidy.head()
figure5c_tidy_cell_metadata = figure5c_tidy.join(cell_metadata, on='barcode')
print(figure5c_tidy_cell_metadata.shape)
figure5c_tidy_cell_metadata.head()
plotting_code.violinplot_grid(figure5c_tidy_cell_metadata, order=cluster_order, aspect=0.2)
Explanation:
End of explanation
# Import a file I wrote with a cleaned-up clustermap
import fig_code
amacrine_cluster_n = sorted(figure5b_cell_metadata.cluster_n.unique())
amacrine_cluster_to_color = dict(zip(amacrine_cluster_n, sns.color_palette('husl', n_colors=len(amacrine_cluster_n))))
amacrine_cell_colors = [amacrine_cluster_to_color[i] for i in figure5b_cell_metadata['cluster_n']]
amacrine_expression = expression_log10.loc[figure5b_cell_metadata.index]
print(amacrine_expression.shape)
fig_code.clustermap(amacrine_expression, row_colors=amacrine_cell_colors)
Explanation: Let's take a step back ... What does this all mean?
For each amacrine cell cluster, they showed one gene that was mutually exclusively detected using single-cell RNA-seq. And then, in Figure 5F, they showed that indeed, one of the markers is expressed in some amacrine cells but not others.
But, single-cell RNA seq is plagued with gene dropout -- randomly, one gene will be detected in one cell but not another.
What if there was a way that we could detect the genes that dropped out?
Compressed Sensing/Robust PCA
Compressed sensing is a field where they think about problems like, "if we only get 10% of the signal, and it's super noisy, could we reconstruct 100% of what was originally there?" (sound familiar??) Turns out yes, you can! Robust PCA is one of the algorithms in compressed sensing which models the data $X$ as the sum of a low-rank matrix $L$ and a sparse matrix $S$.
$X = L + S$
$X$ is the expression data
$L$ is the low rank data. In our case, this essentially becomes a smoothed version of the expression matrix
$S$ is the sparse data. In our case, this captures the stochastic noise in the data. Some of this data may be biological, it is true. But largely, this data seems to carry the technical noise.
Robust PCA is often used in video analysis to find anomalies. In their case, $L$ is the background and $S$ is the "anomalies" (people walking around).
Cluster on raw (log10) amacrine cell expression
To understand what Robust PCA does to biological data, we first need to understand what the raw data looks like. Let's look at the gene expression in only amacrine cells, with the RAW data:
End of explanation
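# A minimal sketch of how a Robust PCA decomposition (X = L + S) can be computed,
# via principal component pursuit with inexact augmented-Lagrangian updates.
# This is only an illustration under our own assumptions -- it is NOT the exact code
# that produced the precomputed lowrank file loaded below.
def shrink(M, tau):
    # element-wise soft-thresholding (shrinkage) operator
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svd_threshold(M, tau):
    # soft-threshold the singular values --> low-rank update
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def robust_pca(X, max_iter=500, tol=1e-7):
    # minimize ||L||_* + lam * ||S||_1  subject to  X = L + S
    n1, n2 = X.shape
    lam = 1.0 / np.sqrt(max(n1, n2))
    mu = n1 * n2 / (4.0 * np.abs(X).sum() + 1e-12)
    norm_X = np.linalg.norm(X, 'fro')
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    Y = np.zeros_like(X)
    for _ in range(max_iter):
        L = svd_threshold(X - S + Y / mu, 1.0 / mu)
        S = shrink(X - L + Y / mu, lam / mu)
        residual = X - L - S
        Y = Y + mu * residual
        if np.linalg.norm(residual, 'fro') / norm_X < tol:
            break
    return L, S

# Tiny, purely illustrative usage:
# L_hat, S_hat = robust_pca(np.random.rand(20, 10))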
csv = macosko2015.BASE_URL + 'differential_clusters_lowrank_tidy_metadata_amacrine.csv'
lowrank_tidy = pd.read_csv(csv)
print(lowrank_tidy.shape)
# Reshape the data to be a large 2d matrix
lowrank_tidy_2d = lowrank_tidy.pivot(index='barcode', columns='gene_symbol', values='expression_log')
# set minimum value shown to 0 because there's a bunch of small (e.g. -1.1) negative numbers in the lowrank data
fig_code.clustermap(lowrank_tidy_2d, row_colors=amacrine_cell_colors, vmin=0)
Explanation: Cluster on Robust PCA'd amacrine cell expression (lowrank)
End of explanation
# Subset the genes on only figure 5b
rows = lowrank_tidy.gene_symbol.isin(figure5b_genes_upper)
lowrank_tidy_figure5b = lowrank_tidy.loc[rows]
print(lowrank_tidy_figure5b.shape)
lowrank_tidy_figure5b.head()
plotting_code.violinplot_grid(lowrank_tidy_figure5b, order=cluster_order, aspect=0.25)
Explanation: Figure 5b using Robust PCA data
End of explanation
rows = lowrank_tidy.gene_symbol.isin(figure5c_genes_upper)
lowrank_tidy_figure5c = lowrank_tidy.loc[rows]
print(lowrank_tidy_figure5c.shape)
lowrank_tidy_figure5c.head()
plotting_code.violinplot_grid(lowrank_tidy_figure5c, order=cluster_order, aspect=0.2)
Explanation: Looks like a lot of the signal from the genes was recovered!
Robust PCA data for Figure 5c
Subset the genes on only figure 5c
End of explanation |
15,707 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
The targeting algorithm of CRPropa
Here we will introduce you to the targeting algorithm in CRPropa, which emits particles from their sources using a von-Mises-Fisher distribution instead of an isotropic distribution. After emission from their sources, the particles get a weight assigned to them so that the resulting distribution at the observer can be reweighted to resemble an isotropically emitted distribution from the sources. This can lead to a significantly larger number of hits compared with starting from an isotropic emission.
A simple example on how to use the targeting algorithm of CRPropa
Step1: Plotting of the results
Step2: A learning algorithm to obtain optimal values for mu and kappa
Step3: Testing sampling results
As can be seen in the following plot, the average sampling efficiency matches the desired hitting probability Phit | Python Code:
import numpy as np
from crpropa import *
# Create a random magnetic-field setup
randomSeed = 42
turbSpectrum = SimpleTurbulenceSpectrum(0.2*nG, 200*kpc, 2*Mpc, 5./3.)
gridprops = GridProperties(Vector3d(0), 256, 100*kpc)
BField = SimpleGridTurbulence(turbSpectrum, gridprops, randomSeed)
# Cosmic-ray propagation in magnetic fields without interactions
sim = ModuleList()
sim.add(PropagationCK(BField))
sim.add(MaximumTrajectoryLength(25 * Mpc))
# Define an observer
sourcePosition = Vector3d(2., 2., 2.) * Mpc
obsPosition = Vector3d(2., 10., 2.) * Mpc
obsRadius = 2. * Mpc
obs = Observer()
obs.add(ObserverSurface( Sphere(obsPosition, obsRadius)))
obs.setDeactivateOnDetection(True)
FilenameObserver = 'TargetedEmission.txt'
output = TextOutput(FilenameObserver)
obs.onDetection(output)
sim.add(obs)
# Define the vMF source
source = Source()
source.add(SourcePosition(sourcePosition))
source.add(SourceParticleType(nucleusId(1,1)))
source.add(SourceEnergy(10. * EeV))
# Here we need to add the vMF parameters
mu = np.array([0.,1.,0.]) # The average direction emission, pointing from the source to the observer
muvec = Vector3d(float(mu[0]), float(mu[1]), float(mu[2]))
kappa = 100. # The concentration parameter, set to a relatively large value
nparticles = 100000
source.add(SourceDirectedEmission(muvec,kappa))
#now we run the simulation
sim.setShowProgress(True)
sim.run(source, nparticles)
output.close()
Explanation: The targeting algorithm of CRPropa
Here we will introduce you to the targeting algorithm in CRPropa, which emits particles from their sources using a von-Mises-Fisher distribution instead of an isotropic distribution. After emission from their sources, the particles get a weight assigned to them so that the resulting distribution at the observer can be reweighted to resemble an isotropically emitted distribution from the sources. This can lead to a significantly larger number of hits compared with starting from an isotropic emission.
A simple example on how to use the targeting algorithm of CRPropa
End of explanation
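# Aside (illustration only, not CRPropa's internal implementation): how unit vectors
# can be drawn from a 3D von-Mises-Fisher distribution with mean direction mu_vec and
# concentration kappa, i.e. the emission used above by SourceDirectedEmission.
def sample_vmf_directions(mu_vec, kappa, n):
    mu_hat = np.asarray(mu_vec, dtype=float)
    mu_hat = mu_hat / np.linalg.norm(mu_hat)
    # inverse-CDF sample of t = cos(angle to mu_hat); density is proportional to exp(kappa * t)
    u = np.random.rand(n)
    t = 1.0 + np.log(u + (1.0 - u) * np.exp(-2.0 * kappa)) / kappa
    phi = 2.0 * np.pi * np.random.rand(n)
    # orthonormal basis (e1, e2) perpendicular to mu_hat
    helper = np.array([1.0, 0.0, 0.0]) if abs(mu_hat[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    e1 = np.cross(mu_hat, helper)
    e1 = e1 / np.linalg.norm(e1)
    e2 = np.cross(mu_hat, e1)
    s = np.sqrt(np.clip(1.0 - t**2, 0.0, None))
    return (t[:, None] * mu_hat
            + s[:, None] * (np.cos(phi)[:, None] * e1 + np.sin(phi)[:, None] * e2))

# With kappa=100 almost every sampled direction points close to mu; the per-particle
# weights written by CRPropa (read as column 'w' below) undo exactly this bias.
# demo_directions = sample_vmf_directions([0., 1., 0.], 100., 1000)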
import healpy as hp
import matplotlib.pylab as plt
crdata = np.genfromtxt('TargetedEmission.txt')
Id = crdata[:,3]
E = crdata[:,4] * EeV
px = crdata[:,8]
py = crdata[:,9]
pz = crdata[:,10]
w = crdata[:,29]
lons = np.arctan2(-1. * py, -1. *px)
lats = np.pi / 2 - np.arccos( -pz / np.sqrt(px*px + py*py+ pz*pz) )
M = ParticleMapsContainer()
for i in range(len(E)):
M.addParticle(int(Id[i]), E[i], lons[i], lats[i], w[i])
#stack all maps
crMap = np.zeros(49152)
for pid in M.getParticleIds():
energies = M.getEnergies(int(pid))
for i, energy in enumerate(energies):
crMap += M.getMap(int(pid), energy * eV )
#plot maps using healpy
hp.mollview(map=crMap, title='Targeted emission')
plt.show()
Explanation: Plotting of the results
End of explanation
import numpy as np
from crpropa import *
import os.path
import time
def run_batch(mu,kappa,nbatch,epoch):
# Setup new batch simulation
def run_sim(mu,kappa,nbatch,epoch):
sim = ModuleList()
sim.add( SimplePropagation(5.*parsec, 0.5*Mpc) )
sim.add(MaximumTrajectoryLength(2000.*Mpc))
sourcePosition = Vector3d(100., 0., 0.) * Mpc
obsPosition = Vector3d(0., 0., 0.) * Mpc
obs = Observer()
obs.add(ObserverSurface( Sphere(sourcePosition, 50. * parsec)))
obs.setDeactivateOnDetection(False)
FilenameSource = 'SourceEmission_'+str(epoch)+'.txt'
output1 = TextOutput(FilenameSource)
obs.onDetection(output1)
sim.add(obs)
obs2 = Observer()
obs2.add(ObserverSurface( Sphere(obsPosition, 10.*Mpc)))
obs2.setDeactivateOnDetection(True)
FilenameObserver = 'Observer_'+str(epoch)+'.txt'
output2 = TextOutput(FilenameObserver)
obs2.onDetection(output2)
sim.add(obs2)
# Define the vMF source
source = Source()
source.add(SourcePosition(sourcePosition))
source.add(SourceParticleType(nucleusId(1,1)))
source.add(SourceEnergy(1 * EeV))
# Here we need to add the vMF parameters
muvec = Vector3d(float(mu[0]), float(mu[1]), float(mu[2]))
source.add(SourceDirectedEmission(muvec,kappa))
# Now we run the simulation
sim.setShowProgress(True)
sim.run(source, nbatch)
output1.close()
output2.close()
run_sim(mu,kappa,nbatch,epoch)
# Get ids of particles hitting the source
while not os.path.exists('Observer_'+str(epoch)+'.txt'):
time.sleep(1)
idhit = np.loadtxt('Observer_'+str(epoch)+'.txt', usecols=(2), unpack=True)
# Get emission directions of particles
virtualfile = open('SourceEmission_'+str(epoch)+'.txt')
ids, px,py,pz = np.loadtxt(virtualfile, usecols=(2,8,9,10), unpack=True) #, skiprows=10)
indices = [np.where(ids==ii)[0][0] for ii in idhit]
x=np.array([px[indices],py[indices],pz[indices]]).T
return x
def pdf_vonMises(x,mu,kappa):
# See eq. (3.1) of PoS (ICRC2019) 447
res=kappa*np.exp(-kappa*(1-x.dot(mu)))/(2.*np.pi*(1.-np.exp(-2*kappa)))
return res
def weight(x,mu,kappa):
# This routine calculates the reweighting for particles that should have been emitted according to 4pi
p0=1./(4.*np.pi)
p1=pdf_vonMises(x,mu,kappa)
res = p0/p1
return res
def estimate_mu_kappa(x,weights,probhit=0.9):
# This is a very simple learning algorithm
#1) Just estimate the mean direction on the sky
aux = np.sum(np.array([x[:,0]*weights,x[:,1]*weights,x[:,2]*weights]),axis=1)
mu=aux/np.linalg.norm(aux) #NOTE: mu needs to be normalized
#2) Estimate the disc of the target on the emission sky
aux = np.sum(((x[:,0]*mu[0])**2+(x[:,1]*mu[1])**2+(x[:,2]*mu[2])**2)*weights)/np.sum(weights)
# Estimate kappa, such that on average the hit probability is probhit
kappa = np.log(1.-probhit)/(aux-1.)
return mu,kappa
def sample(mu,kappa,nbatch=100,probhit=0.9,nepoch=200):
batches_x=[]
batches_w=[]
mu_traj=[]
kappa_traj=[]
acceptance=[]
for e in np.arange(nepoch):
print('Processing epoch Nr.:',e)
# Run CRPropa to generate a batch of particles
# CRPropa passes the initial normalised emission directions x
print('starting simulation...')
y=run_batch(mu,kappa,nbatch,e)
print('simulation done...')
# Now calculate weights for the particles
weights = []
for xx in y:
weights.append(weight(xx,mu,kappa))
# Change lists to arrays
y = np.array(y)
weights = np.array(weights)
# Learn the parameters of the emission vMF distribution
mu,kappa=estimate_mu_kappa(y,weights,probhit=probhit)
acceptance.append(len(y)/float(nbatch))
mu_traj.append(mu)
kappa_traj.append(kappa)
batches_x.append(y)
batches_w.append(weights)
x = np.copy(batches_x[0])
w = np.copy(batches_w[0])
for i in np.arange(1, len(batches_w)):
x=np.append(x, batches_x[i], axis=0)
w=np.append(w, batches_w[i], axis=0)
return x,w,np.array(acceptance),np.array(mu_traj),np.array(kappa_traj)
# Set initial values
mu = np.array([1,0,0])
kappa = 0.00001 # We start with an almost 4pi kernel
Phit = 0.90 # Here we choose the desired hit probability. Note: it is a trade off between accuracy and exploration
# Start the learning algorithm
x,w,acceptance, mu_traj, kappa_traj = sample(mu,kappa,probhit=Phit,nbatch=100000,nepoch=4)
print(mu_traj)
print(kappa_traj)
Explanation: A learning algorithm to obtain optimal values for mu and kappa
End of explanation
import matplotlib.pylab as plt
plt.title('Acceptance rate as a function of epoch')
plt.plot(acceptance, label='sample acceptance',color='black')
plt.plot(acceptance*0+Phit, label='target acceptance',color='red')
plt.xlabel(r'$n_{\rm{epoch}}$')
plt.ylabel('Acceptance rate')
plt.legend()
plt.show()
Explanation: Testing sampling results
As can be seen in the following plot, the average sampling efficiency matches the desired hitting probability Phit:
End of explanation |
15,708 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Organizing your code with functions
<img src="images/pestle.png" width="75" align="right">Years ago I learned to make Thai red curry paste from scratch, including roasting then grinding seeds and pounding stuff in a giant mortar and pestle. It takes forever and so I generally buy curry paste ready-made from the store.
<img src="images/redcurry.jpeg" width="70" align="right">Similarly, most cookbooks provide a number of fundamental recipes, such as making sauces, that are used by other recipes. A cookbook is organized into a series of executable recipes, some of which "invoke" other recipes. To make dinner, I open a cookbook, acquire some raw ingredients, then execute one or more recipes, usually in a specific sequence.
Writing a program proceeds in the same way. Opening a cookbook is the same as importing libraries. Acquiring raw ingredients could mean loading data into the memory. The main program invokes functions (recipes) to accomplish a particular task. As part of writing a program, we will typically break out logical sections of code into functions specific to our problem, whereas the functions in libraries tend to be broadly-applicable.
The way we organize our code is important. Programs quickly become an incomprehensible rats nest if we are not strict about style and organization. Here is the general structure of the Python programs we will write
Step1: The code template for a function with no arguments is
Step2: <img src="images/redbang.png" width="30" align="left">The Python interpreter does not execute the code inside the function unless we directly invoke that function. Python sees the function definition as just that
Step3: We don't need a print statement because we are executing inside a notebook, not a Python program. If this were in a regular Python program, we would need a print statement
Step4: We distinguish between functions and variables syntactically by always putting the parentheses next to the function name. I.e., pi is a variable reference but pi() is a function call.
Some functions don't have return values, such as a function that displays an image in a window. It has a side effect of altering the display but does not really have a return value. The return statement is omitted if the function does not return a value. Here's a contrived side-effecting example that does not need to return a value
Step5: If you try to use the value of a function that lacks a return, Python gives you the so-called None value.
Step6: Naturally, we can also return strings, not just numbers. For example here's a function called hello that does nothing but return string 'hello'
Step7: Turning to the more interesting cases now, here is the template for a function with one argument
Step8: This operation is an accumulator and there is an associated code template, which you should memorize. Any time somebody says accumulator, you should think loop around a partial result update preceded by initialization of that result.
Summing values is very common so let's encapsulate the functionality in a function to avoid having to cut-and-paste the code template all the time. Our black box with a few sample "input-output" pairs from a function plan looks like
Step9: The key benefit of this function version is that now we have some generic code that we can invoke with a simple call to sum. The argument to the function is the list of data to sum and so the for loop refers to it than the specific Quantity variable. (Notice that the variable inside the function is now s not sum to avoid confusion with the function name.)
Step10: You might be tempted to build a function that directly references the Quantity global list instead of a parameter
Step11: Another thing to learn is that Python allows us to name the arguments as we passed them to a function
Step12: The function call, or invocation, sum(Quantity) passes the data to the function. The function returns a value and so the function call is considered to evaluate to a value, which we can print out as shown above. Like any value, we can assign the result of calling a function to a variable
Step13: Please remember that returning a value from a function is not the same thing as printing, which is a side-effect. Only the print statement prints a value to the console when running a program. Don't confuse executing a program with the interactive Python console (or this notebook), which automatically prints out the value of each expression we type. For example
Step14: Exercise
Write a function called add that takes 2 number parameters, x and y, and returns the addition of the two parameters.
Step15: Notice that once we use the argument names, the order does not matter
Step16: Exercise
Write a function called area that takes a radius r parameter and returns the area of a circle with that radius (πr<sup>2</sup>). Hint
Step17: Exercise
Write a Python function called words that accepts a string, doc, containing a sequence of words separated by a single space character and returns a list of words in lowercase. An argument of 'X Y z' should return a list with value ['x', 'y', 'z']. Hint
Step18: Search function
We've seen code to search for a list element, but the specific element and specific list were hardcoded. That is to say, the code only worked with specific values and was not generic
Step19: It would be nice to have a function we can call because searching is so common. To get started, we can just wrap the logic associated with searching in a function by indenting and adding a function header. But, we should also change the name of the list so that it is more generic and make it a parameter (same with the search target).
Step20: We are now passing two arguments to the function
Step21: It turns out we can simplify that function by replacing the break statement with a return statement. Whereas a break statement breaks out of the immediately enclosing loop, the return statement returns from the function no matter where it appears in the function. In the current version, if we find the element, the break statement breaks out of the loop and forces the processor to execute the statement following the loop, which is the return statement. Because the return statement takes an expression argument, we don't need to track the index in a separate variable. The return statement forces the processor to immediately exit the function and return the specified value. In effect, then the return breaks out of the loop first then the function.
Here is the way the cool kids would write that function
Step22: Visibility of symbols
Variables created outside of a function are so-called global variables because they live in the global space (or frame). For example, let's revisit the non-function version of the sum accumulator where I have added a call to lolviz library to display three global variables inside the loop
Step23: There are three (global) variables here
Step24: As you can see, there is a new scope for the sum function because the main program invoked a function. That function has a parameter called data and a local variable called s (from where I have called the callsviz function). Notice that both Quantity and data variables point at the same shared memory location!! It's just that the names are defined in different contexts (scopes). This is the aliasing of data we talked about in the last section. By traversing data, the sum function is actually traversing the Quantity list from the outer context.
Watch out for functions modifying data arguments
Step25: When the function returns, the frame for sum disappears, leaving only the global frame.
Step26: Visibility rules
Now that you have the idea of context in mind, let's establish some rules for the visibility of variables according to context
Step27: Return values versus printing
Just to pound this concept into your heads...
One of the big confusion points for students is the difference between return values and printing results. We'll look at this again when we translate plans to Python code, but it's important to understand this difference right away.
Programs in the analytics world typically read data from a file and emit output or write data to another file. In other words, programs interact with the world outside of the program. The world outside of the program is usually the network, the disk, or the screen. In contrast, most functions that we write won't interact with the outside world.
<img src="images/redbang.png" width="30" align="left">Functions compute and return (give values back) to their caller. They don't print anything to the user unless explicitly asked to do so with a print statement. | Python Code:
def pi():
return 3.14159
Explanation: Organizing your code with functions
<img src="images/pestle.png" width="75" align="right">Years ago I learned to make Thai red curry paste from scratch, including roasting then grinding seeds and pounding stuff in a giant mortar and pestle. It takes forever and so I generally buy curry paste ready-made from the store.
<img src="images/redcurry.jpeg" width="70" align="right">Similarly, most cookbooks provide a number of fundamental recipes, such as making sauces, that are used by other recipes. A cookbook is organized into a series of executable recipes, some of which "invoke" other recipes. To make dinner, I open a cookbook, acquire some raw ingredients, then execute one or more recipes, usually in a specific sequence.
Writing a program proceeds in the same way. Opening a cookbook is the same as importing libraries. Acquiring raw ingredients could mean loading data into the memory. The main program invokes functions (recipes) to accomplish a particular task. As part of writing a program, we will typically break out logical sections of code into functions specific to our problem, whereas the functions in libraries tend to be broadly-applicable.
The way we organize our code is important. Programs quickly become an incomprehensible rats nest if we are not strict about style and organization. Here is the general structure of the Python programs we will write:
import any libraries<br>
define any constants, simple data values<br>
define any functions<br>
main program body
Functions are subprograms
A sequence of operations grouped into a single, named entity is called a function. Functions are like mini programs or subprograms that we can plan out just like full programs.
Python programs consist of zero or more functions and the so-called "main" program, consisting of a sequence of operations that gets the ball rolling.
Instead of loading data from the disk, functions operate on data given to them from the invoking program. This incoming data is analogous to a recipe's list of ingredients and is specified in the form of one or more named parameters (also called arguments). Instead of printing a result or displaying a graph, as a program would, functions return values. Functions are meant as helper routines that are generically useful.
We begin planning a function by identifying:
a descriptive function name
the kind of value(s) it operates on (parameter types)
the kind of value it returns (return type)
what the function does and the value it returns
If we can't specify exactly what goes in and out of the function, there's no hope of determining the processing steps, let alone Python code, to implement that function.
As with a program's work plan, we then manually write out some sample function invocations to show what data goes in and what data comes out.
Once we fully understand our goal, we plan out the sequence of operations needed by the function to compute the desired result. As when designing a whole program, we start with the return value and work our way backwards, identifying operations in reverse order. Note: The operations should be purely a function of the data passed to them as parameters---functions should be completely ignorant of any other data. (More on this when we actually translate function pseudocode to Python.)
Function templates
Python functions are like black boxes that, in general, accept input data and yield (return) values. Each invocation of a function triggers the execution of the code associated with that function and returns a result value or values. For example, here is a function called pi that takes no parameters but returns value 3.14159 each time it is called:
End of explanation
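# Aside: a tiny, made-up script illustrating the overall program layout described
# above (imports, then constants, then function definitions, then the main body).
# import any libraries
import math
# define any constants, simple data values
UNIT_PRICE = 4.0                 # hypothetical constant
# define any functions
def order_cost(n_items):
    return n_items * UNIT_PRICE
# main program body
print(order_cost(3), math.pi)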
def pi():
return 3.14159
print("this is not part of function")
Explanation: The code template for a function with no arguments is:
def <ins>funcname</ins>():<br>
<ins>statement 1</ins><br>
<ins>statement 2</ins><br>
...<br>
return <ins>expression</ins><br>
with holes for the function name, statements associated with a function, and an expression describing the return value. Functions that have no return value skip the return statement.
<img src="images/redbang.png" width="30" align="left"> The way that we associate statements with a function in Python is by indentation. So return 3.14159 is part of the function because it is indented after the function header. The first statement that begins in the same column as the def is the first statement outside of the function.
End of explanation
pi()
pi
Explanation: <img src="images/redbang.png" width="30" align="left">The Python interpreter does not execute the code inside the function unless we directly invoke that function. Python sees the function definition as just that: a "recipe" definition that we can call if we want.
The definition of a function is different than invoking or calling a function. Calling a function requires the function name and any argument values. In this case, we don't have any arguments so we call the function as just pi():
End of explanation
x = pi()
Explanation: We don't need a print statement because we are executing inside a notebook, not a Python program. If this were in a regular Python program, we would need a print statement: print(pi()), but of course that also works here.
Every invocation of that function evaluates to the value 3.14159. The function returns a value but prints nothing. For example, Jupyter notebooks or the Python interactive shell does not print anything if we assign the result to variable:
End of explanation
def hi():
print('hi')
hi()
Explanation: We distinguish between functions and variables syntactically by always putting the parentheses next to the function name. I.e., pi is a variable reference but pi() is a function call.
Some functions don't have return values, such as a function that displays an image in a window. It has a side effect of altering the display but does not really have a return value. The return statement is omitted if the function does not return a value. Here's a contrived side-effecting example that does not need to return a value:
End of explanation
x = hi()
print(x)
Explanation: If you try to use the value of a function that lacks a return, Python gives you the so-called None value.
End of explanation
def hello():
return "hello"
def parrt():
return "parrt", 5707
id, phone = parrt()
print(id, phone)
Explanation: Naturally, we can also return strings, not just numbers. For example here's a function called hello that does nothing but return string 'hello':
End of explanation
Quantity = [6, 49, 27, 30, 19, 21, 12, 22, 21]
sum = 0
for q in Quantity:
sum = sum + q
sum
Explanation: Turning to the more interesting cases now, here is the template for a function with one argument:
def funcname(argname):<br>
statement 1<br>
statement 2<br>
...<br>
return expression<br>
If there are two arguments, the function header looks like:
def funcname(argname1, argname2):<br>
Our job as programmers is to pick a descriptive function name, argument name(s), and statements within the function as per our function workplan.
Invoking a function with arguments looks like funcname(expression) or funcname(expression1, expression2) etc... The order of the arguments matters. Python matches the first expression with the first argument name given in the function definition.
Let's take a look at some of the code snippets from Programming Patterns in Python and see if we can abstract some useful functions.
Sum function
In Model of Computation, we saw code to translate mathematical Sigma notation to python and so this code to sum the values in a list should be pretty familiar to you:
End of explanation
def sum(data):
s = 0
for q in data:
s = s + q
return s # return accumulated value s to invoker (this is not a print statement!)
Quantity = [6, 49, 27, 30, 19, 21, 12, 22, 21]
sum(Quantity) # call sum with a specific list
sum(data=Quantity) # implicit assignment here
Explanation: This operation is an accumulator and there is an associated code template, which you should memorize. Any time somebody says accumulator, you should think loop around a partial result update preceded by initialization of that result.
Summing values is very common so let's encapsulate the functionality in a function to avoid having to cut-and-paste the code template all the time. Our black box with a few sample "input-output" pairs from a function plan looks like:
<img src="images/sum-func.png" width="180">
(Laying out the examples like that made us realize that we need to worry about empty lists.)
We group the summing functionality into a function by indenting it and then adding a function header:
End of explanation
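# Aside: the same accumulator template (initialize, loop, update) works for other
# aggregations too, e.g. counting how many values exceed a threshold:
def count_over(data, threshold):
    n = 0                    # initialize the accumulated result
    for q in data:
        if q > threshold:
            n = n + 1        # update the partial result
    return n
count_over(Quantity, 20)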
sum([1,2,3])
Explanation: The key benefit of this function version is that now we have some generic code that we can invoke with a simple call to sum. The argument to the function is the list of data to sum and so the for loop refers to it rather than the specific Quantity variable. (Notice that the variable inside the function is now s not sum to avoid confusion with the function name.)
End of explanation
ages = [10, 21, 13]
print(sum(ages))
print(sum([1,3,5,7,9]))
print(sum([ ])) # Empty list
Explanation: You might be tempted to build a function that directly references the Quantity global list instead of a parameter:
```python
# OMG, this is so horrible I find it difficult to type!
def sum():
    s = 0
    for q in Quantity:
        s = s + q
    return s
```
The problem is this function now only works with one list and is in no way generically useful. This defeats the purpose of creating the function because it's not reusable.
Since the real function accepts a list parameter, we can pass another list to the function:
End of explanation
sum(data=ages)
Explanation: Another thing to learn is that Python allows us to name the arguments as we passed them to a function:
End of explanation
x = sum(Quantity) # call sum and save result in x
x
Explanation: The function call, or invocation, sum(Quantity) passes the data to the function. The function returns a value and so the function call is considered to evaluate to a value, which we can print out as shown above. Like any value, we can assign the result of calling a function to a variable:
End of explanation
def neg(x): return -x
Explanation: Please remember that returning a value from a function is not the same thing as printing, which is a side-effect. Only the print statement prints a value to the console when running a program. Don't confuse executing a program with the interactive Python console (or this notebook), which automatically prints out the value of each expression we type. For example:
```python
34
34
34+100
134
```
The sum function has one parameter but it's also common to have functions with two parameters.
Exercise
Write a function called neg that takes one number parameter x and returns the negative of x.
End of explanation
def max(x,y): return x if x>y else y
#same as:
#if x>y: return x
#else: return y
# test it
print(max(10,99))
print(max(99,10))
Explanation: Exercise
Write a function called add that takes 2 number parameters, x and y, and returns the addition of the two parameters.
End of explanation
print(max(x=10, y=99))
print(max(y=99, x=10))
Explanation: Notice that once we use the argument names, the order does not matter:
End of explanation
import math
def area(r): return math.pi * r**2 # ** is the power operator
# test it
area(1), area(r=2)
Explanation: Exercise
Write a function called area that takes a radius r parameter and returns the area of a circle with that radius (πr<sup>2</sup>). Hint: Recall that the math package has a variable called pi.
End of explanation
def words(doc:str) -> list:
words = doc.split(' ')
return [w.lower() for w in words]
# OR
def words(doc):
doc = doc.lower()
return doc.split(' ')
# OR
def words(doc): return doc.lower().split(' ')
words('Terence Parr is the instructor of MSAN501')
Explanation: Exercise
Write a Python function called words that accepts a string, doc, containing a sequence of words separated by a single space character and returns a list of words in lowercase. An argument of 'X Y z' should return a list with value ['x', 'y', 'z']. Hint: 'HI'.lower() evaluates to 'hi'.
End of explanation
first=['Xue', 'Mary', 'Robert'] # our given input
target = 'Mary' # searching for Mary
index = -1
for i in range(len(first)): # i is in range [0..n-1] or [0..n)
if first[i]==target:
index = i
break
index
Explanation: Search function
We've seen code to search for a list element, but the specific element and specific list were hardcoded. That is to say, the code only worked with specific values and was not generic:
End of explanation
def search(x, data):
index = -1
for i in range(len(data)): # i is in range [0..n-1] or [0..n)
if data[i]==x:
index = i
break
print(index)
first=['Xue', 'Mary', 'Robert']
search('Mary', first) # invoke search with 2 parameters
Explanation: It would be nice to have a function we can call because searching is so common. To get started, we can just wrap the logic associated with searching in a function by indenting and adding a function header. But, we should also change the name of the list so that it is more generic and make it a parameter (same with the search target).
End of explanation
search('Xue', first), search('Robert', first)
# It is a good idea to test the failure case
search('Jim', first)
Explanation: We are now passing two arguments to the function: x is the element to find and data is the list to search. Anytime we want, we can search a list for an element just by calling search:
End of explanation
def search(x, data):
for i in range(len(data)): # i is in range [0..n-1] or [0..n)
if data[i]==x:
return i # found element, return the current index i
return -1 # failure case; we did not return from inside loop
print(search('Mary', first))
print(search('Xue', first))
print(search('foo', first))
Explanation: It turns out we can simplify that function by replacing the break statement with a return statement. Whereas a break statement breaks out of the immediately enclosing loop, the return statement returns from the function no matter where it appears in the function. In the current version, if we find the element, the break statement breaks out of the loop and forces the processor to execute the statement following the loop, which is the return statement. Because the return statement takes an expression argument, we don't need to track the index in a separate variable. The return statement forces the processor to immediately exit the function and return the specified value. In effect, then the return breaks out of the loop first then the function.
Here is the way the cool kids would write that function:
End of explanation
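# Aside (for completeness): for everyday code Python already provides this search --
# 'Mary' in first tests membership and first.index('Mary') returns the position
# (though index() raises ValueError instead of returning -1 when the element is missing).
print('Mary' in first, first.index('Mary'))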
from lolviz import *
Quantity = [6, 49, 27, 30, 19, 21, 12, 22, 21]
sum = 0
display(callviz(varnames=['Quantity','sum','q']))
for q in Quantity:
sum = sum + q
display(callviz(varnames=['Quantity','sum','q']))
sum
Explanation: Visibility of symbols
Variables created outside of a function are so-called global variables because they live in the global space (or frame). For example, let's revisit the non-function version of the sum accumulator where I have added a call to lolviz library to display three global variables inside the loop:
End of explanation
reset -f
from lolviz import *
Quantity = [6, 49, 27, 30, 19, 21, 12, 22, 21]
def sum(data):
s = 0
display(callsviz(varnames=['Quantity','data','s']))
for q in data:
s = s + q
return s
sum(Quantity)
Explanation: There are three (global) variables here: Quantity, sum, and q. The program uses all of those to compute the result.
Let's see what the "call stack" looks like using the function version of the accumulator.
End of explanation
def badsum(data):
#data = data.copy() # must manually make copy to avoid side-effect
data[0] = 99
display(callsviz(varnames=['Quantity','data','s']))
s = 0
for q in data:
s = s + q
return s
Quantity = [6, 49, 27, 30, 19, 21, 12, 22, 21]
badsum(Quantity)
print(Quantity)
Explanation: As you can see, there is a new scope for the sum function because the main program invoked a function. That function has a parameter called data and a local variable called s (from where I have called the callsviz function). Notice that both Quantity and data variables point at the same shared memory location!! It's just that the names are defined in different contexts (scopes). This is the aliasing of data we talked about in the last section. By traversing data, the sum function is actually traversing the Quantity list from the outer context.
Watch out for functions modifying data arguments
End of explanation
def sum(data):
s = 0
for q in data:
s = s + q
return s
print(sum(Quantity))
callsviz(varnames=['Quantity','data','s'])
reset -f
from lolviz import *
def f(x):
q = 0
g(x)
print("back from g")
display(callsviz(varnames=['x','q','y','z']))
def g(y):
print(y)
display(callsviz(varnames=['x','q','y','z']))
z = 99
f(33)
print("back from f")
display(callsviz(varnames=['x','q','y','z']))
Explanation: When the function returns, the frame for sum disappears, leaving only the global frame.
End of explanation
def f():
g()
def g():
print("hi mom!")
f()
Explanation: Visibility rules
Now that you have the idea of context in mind, let's establish some rules for the visibility of variables according to context:
Main programs cannot see variables and arguments inside functions; just because a main program can call a function, doesn't mean it can see the inner workings. Think of functions as black boxes that take parameters and return values.
Functions can technically see global variables but don't do this as a general rule. Pass the global variables that you need to each function as arguments.
The latter rule is a good one because violating it generally means you're doing something "wrong". For example, if we tweak the sum accumulator function to refer directly to the global variable Quantity, we get:
```python
Quantity = [6, 49, 27, 30, 19, 21, 12, 22, 21]
def sum(data): # parameter not used!
    s = 0
    for q in Quantity: # uh oh!
        s = s + q
    return s
```
The problem is that, now, sum only works on that global data. It's not generically useful. The clue is that the function ignores the data argument. So, technically the function can see global data, but it's not a good idea. (Violating this rule to alter a global variable is also a good way to get a subtle bug that's difficult to find.)
Technically we need to see global symbols (functions)
End of explanation
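# Aside: why writing to a global from inside a function is asking for trouble --
# an assignment inside a function creates a *local* variable unless you explicitly
# say otherwise, so this innocent-looking code blows up:
counter = 0
def increment():
    counter = counter + 1    # UnboundLocalError: Python treats counter as local here
# increment()                # uncommenting this call raises the error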
def pi():
print(3.14159) # This is not a return statement!
print(pi())
Explanation: Return values versus printing
Just to pound this concept into your heads...
One of the big confusion points for students is the difference between return values and printing results. We'll look at this again when we translate plans to Python code, but it's important to understand this difference right away.
Programs in the analytics world typically read data from a file and emit output or write data to another file. In other words, programs interact with the world outside of the program. The world outside of the program is usually the network, the disk, or the screen. In contrast, most functions that we write won't interact with the outside world.
<img src="images/redbang.png" width="30" align="left">Functions compute and return (give values back) to their caller. They don't print anything to the user unless explicitly asked to do so with a print statement.
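For contrast, a tiny sketch (not part of the original notes) of pi rewritten to return its value, so the caller decides what to do with it:
def pi():
    return 3.14159                 # hand the value back instead of printing it
area = pi() * 3 * 3                # the returned value can be used in a computation
print(area)                        # 28.27431
print(pi())                        # printing is now the caller's choice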
End of explanation |
15,709 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
model 02
Load train, test, questions data from pklz
First of all, we need to read those three data sets.
Step1: Make training set
For training the model, we need to make feature and label pairs. In this case, we will use only uid, qid, and the question length as features, with position as the label.
Step2: It means that user 0 tried to solve question number 1, which has 77 tokens in its question, and he or she answered at the 61st token.
Train model and make predictions
Let's train the model and make predictions. We will use simple Linear Regression at this moment.
Step3: http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDRegressor.html
Step4: Here are 4749 predictions.
Writing submission.
OK, let's writing submission into guess.csv file. In the given submission form, we realized that we need to put header. So, we will insert header at the first of predictions, and then make it as a file. | Python Code:
import gzip
import cPickle as pickle
with gzip.open("../data/train.pklz", "rb") as train_file:
train_set = pickle.load(train_file)
with gzip.open("../data/test.pklz", "rb") as test_file:
test_set = pickle.load(test_file)
with gzip.open("../data/questions.pklz", "rb") as questions_file:
questions = pickle.load(questions_file)
Explanation: model 02
Load train, test, questions data from pklz
First of all, we need to read those three data sets.
End of explanation
X = []
Y = []
for key in train_set:
# We only care about positive case at this time
if train_set[key]['position'] < 0:
continue
uid = train_set[key]['uid']
qid = train_set[key]['qid']
pos = train_set[key]['position']
q_length = max(questions[qid]['pos_token'].keys())
feat = [uid, qid, q_length]
X.append(feat)
Y.append([pos])
print len(X)
print len(Y)
print X[0], Y[0]
Explanation: Make training set
For training the model, we need to make feature and label pairs. In this case, we will use only uid, qid, and the question length as features, with position as the label.
End of explanation
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet
from sklearn.cross_validation import train_test_split, cross_val_score
X_train, X_test, Y_train, Y_test = train_test_split (X, Y)
regressor = LinearRegression()
scores = cross_val_score(regressor, X, Y, cv=10)
print 'Cross validation r-squared scores:', scores.mean()
print scores
regressor = Ridge()
scores = cross_val_score(regressor, X, Y, cv=10)
print 'Cross validation r-squared scores:', scores.mean()
print scores
regressor = Lasso()
scores = cross_val_score(regressor, X, Y, cv=10)
print 'Cross validation r-squared scores:', scores.mean()
print scores
regressor = ElasticNet()
scores = cross_val_score(regressor, X, Y, cv=10)
print 'Cross validation r-squared scores:', scores.mean()
print scores
from sklearn.linear_model import SGDRegressor
from sklearn.cross_validation import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.cross_validation import train_test_split
X_scaler = StandardScaler()
Y_scaler = StandardScaler()
X_train, X_test, Y_train, Y_test = train_test_split (X, Y)
X_train = X_scaler.fit_transform(X_train)
Y_train = Y_scaler.fit_transform(Y_train)
X_test = X_scaler.fit_transform(X_test)
Y_test = Y_scaler.fit_transform(Y_test)
Explanation: It means that user 0 tried to solve question number 1, which has 77 tokens in its question, and he or she answered at the 61st token.
Train model and make predictions
Let's train the model and make predictions. We will use simple Linear Regression at this moment.
End of explanation
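An optional sketch (not in the original notebook) that actually fits the plain LinearRegression on the split created above and checks its score on the held-out part; it assumes X_train, X_test, Y_train, Y_test from the train_test_split cell:
from sklearn.linear_model import LinearRegression
regressor = LinearRegression()
regressor.fit(X_train, Y_train)
print 'Held-out r-squared:', regressor.score(X_test, Y_test)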
regressor = SGDRegressor(loss='squared_loss', penalty='l1')
scores = cross_val_score(regressor, X_train, Y_train, cv=10)
print 'Cross validation r-squared scores:', scores.mean()
print scores
X_test = []
test_id = []
for key in test_set:
test_id.append(key)
uid = test_set[key]['uid']
qid = test_set[key]['qid']
q_length = max(questions[qid]['pos_token'].keys())
feat = [uid, qid, q_length]
X_test.append(feat)
X_scaler = StandardScaler()
Y_scaler = StandardScaler()
X_train = X_scaler.fit_transform(X)
Y_train = Y_scaler.fit_transform(Y)
X_test = X_scaler.fit_transform(X_test)
regressor.fit(X_train, Y_train)
predictions = regressor.predict(X_test)
predictions = Y_scaler.inverse_transform(predictions)
predictions = sorted([[id, predictions[index]] for index, id in enumerate(test_id)])
print len(predictions)
predictions[:5]
Explanation: http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.SGDRegressor.html
There are four loss functions: ‘squared_loss’, ‘huber’, ‘epsilon_insensitive’, and ‘squared_epsilon_insensitive’. Among those, squared_loss works best in this case.
End of explanation
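An optional sketch (not in the original notebook) of how the four loss options could be compared with the same cross-validation setup; it assumes the scaled X_train and Y_train built in the cells above:
from sklearn.linear_model import SGDRegressor
from sklearn.cross_validation import cross_val_score
for loss in ['squared_loss', 'huber', 'epsilon_insensitive', 'squared_epsilon_insensitive']:
    regressor = SGDRegressor(loss=loss, penalty='l1')
    scores = cross_val_score(regressor, X_train, Y_train, cv=10)
    print loss, scores.mean()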
import csv
predictions.insert(0,["id", "position"])
with open('guess.csv', 'wb') as fp:
writer = csv.writer(fp, delimiter=',')
writer.writerows(predictions)
Explanation: Here are 4749 predictions.
Writing submission.
OK, let's write the submission into the guess.csv file. The given submission form requires a header, so we will insert a header row at the start of the predictions and then write them out to a file.
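As an optional alternative sketch (not in the original), pandas can write the same file and takes care of the header row itself; it assumes predictions is the list of [id, position] pairs before the header row is prepended:
import pandas as pd
submission = pd.DataFrame(predictions, columns=['id', 'position'])
submission.to_csv('guess.csv', index=False)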
End of explanation |
15,710 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Atmos
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required
Step9: 2.2. Canonical Horizontal Resolution
Is Required
Step10: 2.3. Range Horizontal Resolution
Is Required
Step11: 2.4. Number Of Vertical Levels
Is Required
Step12: 2.5. High Top
Is Required
Step13: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required
Step14: 3.2. Timestep Shortwave Radiative Transfer
Is Required
Step15: 3.3. Timestep Longwave Radiative Transfer
Is Required
Step16: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required
Step17: 4.2. Changes
Is Required
Step18: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required
Step19: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required
Step20: 6.2. Scheme Method
Is Required
Step21: 6.3. Scheme Order
Is Required
Step22: 6.4. Horizontal Pole
Is Required
Step23: 6.5. Grid Type
Is Required
Step24: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required
Step25: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required
Step26: 8.2. Name
Is Required
Step27: 8.3. Timestepping Type
Is Required
Step28: 8.4. Prognostic Variables
Is Required
Step29: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required
Step30: 9.2. Top Heat
Is Required
Step31: 9.3. Top Wind
Is Required
Step32: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required
Step33: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required
Step34: 11.2. Scheme Method
Is Required
Step35: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required
Step36: 12.2. Scheme Characteristics
Is Required
Step37: 12.3. Conserved Quantities
Is Required
Step38: 12.4. Conservation Method
Is Required
Step39: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required
Step40: 13.2. Scheme Characteristics
Is Required
Step41: 13.3. Scheme Staggering Type
Is Required
Step42: 13.4. Conserved Quantities
Is Required
Step43: 13.5. Conservation Method
Is Required
Step44: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required
Step45: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required
Step46: 15.2. Name
Is Required
Step47: 15.3. Spectral Integration
Is Required
Step48: 15.4. Transport Calculation
Is Required
Step49: 15.5. Spectral Intervals
Is Required
Step50: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required
Step51: 16.2. ODS
Is Required
Step52: 16.3. Other Flourinated Gases
Is Required
Step53: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required
Step54: 17.2. Physical Representation
Is Required
Step55: 17.3. Optical Methods
Is Required
Step56: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required
Step57: 18.2. Physical Representation
Is Required
Step58: 18.3. Optical Methods
Is Required
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required
Step60: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required
Step61: 20.2. Physical Representation
Is Required
Step62: 20.3. Optical Methods
Is Required
Step63: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required
Step64: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required
Step65: 22.2. Name
Is Required
Step66: 22.3. Spectral Integration
Is Required
Step67: 22.4. Transport Calculation
Is Required
Step68: 22.5. Spectral Intervals
Is Required
Step69: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required
Step70: 23.2. ODS
Is Required
Step71: 23.3. Other Flourinated Gases
Is Required
Step72: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required
Step73: 24.2. Physical Reprenstation
Is Required
Step74: 24.3. Optical Methods
Is Required
Step75: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required
Step76: 25.2. Physical Representation
Is Required
Step77: 25.3. Optical Methods
Is Required
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required
Step79: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required
Step80: 27.2. Physical Representation
Is Required
Step81: 27.3. Optical Methods
Is Required
Step82: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required
Step83: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required
Step85: 30.2. Scheme Type
Is Required
Step86: 30.3. Closure Order
Is Required
Step87: 30.4. Counter Gradient
Is Required
Step88: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required
Step89: 31.2. Scheme Type
Is Required
Step90: 31.3. Scheme Method
Is Required
Step91: 31.4. Processes
Is Required
Step92: 31.5. Microphysics
Is Required
Step93: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required
Step94: 32.2. Scheme Type
Is Required
Step95: 32.3. Scheme Method
Is Required
Step96: 32.4. Processes
Is Required
Step97: 32.5. Microphysics
Is Required
Step98: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required
Step100: 34.2. Hydrometeors
Is Required
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required
Step102: 35.2. Processes
Is Required
Step103: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required
Step104: 36.2. Name
Is Required
Step105: 36.3. Atmos Coupling
Is Required
Step106: 36.4. Uses Separate Treatment
Is Required
Step107: 36.5. Processes
Is Required
Step108: 36.6. Prognostic Scheme
Is Required
Step109: 36.7. Diagnostic Scheme
Is Required
Step110: 36.8. Prognostic Variables
Is Required
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required
Step112: 37.2. Cloud Inhomogeneity
Is Required
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required
Step114: 38.2. Function Name
Is Required
Step115: 38.3. Function Order
Is Required
Step116: 38.4. Convection Coupling
Is Required
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required
Step118: 39.2. Function Name
Is Required
Step119: 39.3. Function Order
Is Required
Step120: 39.4. Convection Coupling
Is Required
Step121: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required
Step122: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required
Step123: 41.2. Top Height Direction
Is Required
Step124: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required
Step125: 42.2. Number Of Grid Points
Is Required
Step126: 42.3. Number Of Sub Columns
Is Required
Step127: 42.4. Number Of Levels
Is Required
Step128: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required
Step129: 43.2. Type
Is Required
Step130: 43.3. Gas Absorption
Is Required
Step131: 43.4. Effective Radius
Is Required
Step132: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required
Step133: 44.2. Overlap
Is Required
Step134: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required
Step135: 45.2. Sponge Layer
Is Required
Step136: 45.3. Background
Is Required
Step137: 45.4. Subgrid Scale Orography
Is Required
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required
Step139: 46.2. Source Mechanisms
Is Required
Step140: 46.3. Calculation Method
Is Required
Step141: 46.4. Propagation Scheme
Is Required
Step142: 46.5. Dissipation Scheme
Is Required
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required
Step144: 47.2. Source Mechanisms
Is Required
Step145: 47.3. Calculation Method
Is Required
Step146: 47.4. Propagation Scheme
Is Required
Step147: 47.5. Dissipation Scheme
Is Required
Step148: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required
Step149: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required
Step150: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required
Step151: 50.2. Fixed Value
Is Required
Step152: 50.3. Transient Characteristics
Is Required
Step153: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required
Step154: 51.2. Fixed Reference Date
Is Required
Step155: 51.3. Transient Method
Is Required
Step156: 51.4. Computation Method
Is Required
Step157: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required
Step158: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required
Step159: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'noaa-gfdl', 'gfdl-cm4', 'atmos')
Explanation: ES-DOC CMIP6 Model Properties - Atmos
MIP Era: CMIP6
Institute: NOAA-GFDL
Source ID: GFDL-CM4
Topic: Atmos
Sub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos.
Properties: 156 (127 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-20 15:02:34
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
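Purely for illustration (the name and email below are placeholders, not the actual document authors), a filled-in version of the cell above would follow the pattern shown in its comment:
# Placeholder values for illustration only
DOC.set_author("Jane Doe", "jane.doe@example.org")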
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties --> Overview
2. Key Properties --> Resolution
3. Key Properties --> Timestepping
4. Key Properties --> Orography
5. Grid --> Discretisation
6. Grid --> Discretisation --> Horizontal
7. Grid --> Discretisation --> Vertical
8. Dynamical Core
9. Dynamical Core --> Top Boundary
10. Dynamical Core --> Lateral Boundary
11. Dynamical Core --> Diffusion Horizontal
12. Dynamical Core --> Advection Tracers
13. Dynamical Core --> Advection Momentum
14. Radiation
15. Radiation --> Shortwave Radiation
16. Radiation --> Shortwave GHG
17. Radiation --> Shortwave Cloud Ice
18. Radiation --> Shortwave Cloud Liquid
19. Radiation --> Shortwave Cloud Inhomogeneity
20. Radiation --> Shortwave Aerosols
21. Radiation --> Shortwave Gases
22. Radiation --> Longwave Radiation
23. Radiation --> Longwave GHG
24. Radiation --> Longwave Cloud Ice
25. Radiation --> Longwave Cloud Liquid
26. Radiation --> Longwave Cloud Inhomogeneity
27. Radiation --> Longwave Aerosols
28. Radiation --> Longwave Gases
29. Turbulence Convection
30. Turbulence Convection --> Boundary Layer Turbulence
31. Turbulence Convection --> Deep Convection
32. Turbulence Convection --> Shallow Convection
33. Microphysics Precipitation
34. Microphysics Precipitation --> Large Scale Precipitation
35. Microphysics Precipitation --> Large Scale Cloud Microphysics
36. Cloud Scheme
37. Cloud Scheme --> Optical Cloud Properties
38. Cloud Scheme --> Sub Grid Scale Water Distribution
39. Cloud Scheme --> Sub Grid Scale Ice Distribution
40. Observation Simulation
41. Observation Simulation --> Isscp Attributes
42. Observation Simulation --> Cosp Attributes
43. Observation Simulation --> Radar Inputs
44. Observation Simulation --> Lidar Inputs
45. Gravity Waves
46. Gravity Waves --> Orographic Gravity Waves
47. Gravity Waves --> Non Orographic Gravity Waves
48. Solar
49. Solar --> Solar Pathways
50. Solar --> Solar Constant
51. Solar --> Orbital Parameters
52. Solar --> Insolation Ozone
53. Volcanos
54. Volcanos --> Volcanoes Treatment
1. Key Properties --> Overview
Top level key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of atmospheric model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the atmosphere.
End of explanation
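Purely to illustrate the calling pattern suggested by the template comments (the choices below are taken from the valid list above and are not a statement about what GFDL-CM4 actually uses; one set_value call per selected value is an assumption for this 1.N property):
# Illustrative only, one call per selected value
DOC.set_value("primitive equations")
DOC.set_value("hydrostatic")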
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Resolution
Characteristics of the model resolution
2.1. Horizontal Resolution Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.4. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on the computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 2.5. High Top
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping
Characteristics of the atmosphere model time stepping
3.1. Timestep Dynamics
Is Required: TRUE Type: STRING Cardinality: 1.1
Timestep for the dynamics, e.g. 30 min.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.2. Timestep Shortwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the shortwave radiative transfer, e.g. 1.5 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestep Longwave Radiative Transfer
Is Required: FALSE Type: STRING Cardinality: 0.1
Timestep for the longwave radiative transfer, e.g. 3 hours.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Orography
Characteristics of the model orography
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the orography.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
Explanation: 4.2. Changes
Is Required: TRUE Type: ENUM Cardinality: 1.N
If the orography type is modified describe the time adaptation changes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid --> Discretisation
Atmosphere grid discretisation
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of grid discretisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6. Grid --> Discretisation --> Horizontal
Atmosphere discretisation in the horizontal
6.1. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
Explanation: 6.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.3. Scheme Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation function order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. Horizontal Pole
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal discretisation pole singularity treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.5. Grid Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7. Grid --> Discretisation --> Vertical
Atmosphere discretisation in the vertical
7.1. Coordinate Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Type of vertical coordinate system
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Dynamical Core
Characteristics of the dynamical core
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere dynamical core
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the dynamical core of the model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Timestepping Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Timestepping framework type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of the model prognostic variables
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 9. Dynamical Core --> Top Boundary
Type of boundary layer at the top of the model
9.1. Top Boundary Condition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Top boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Top Heat
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary heat treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Top Wind
Is Required: TRUE Type: STRING Cardinality: 1.1
Top boundary wind treatment
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Dynamical Core --> Lateral Boundary
Type of lateral boundary condition (if the model is a regional model)
10.1. Condition
Is Required: FALSE Type: ENUM Cardinality: 0.1
Type of lateral boundary condition
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Dynamical Core --> Diffusion Horizontal
Horizontal diffusion scheme
11.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Horizontal diffusion scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal diffusion scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12. Dynamical Core --> Advection Tracers
Tracer advection scheme
12.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Tracer advection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.3. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Tracer advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.4. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracer advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Dynamical Core --> Advection Momentum
Momentum advection scheme
13.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Momentum advection schemes name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Scheme Characteristics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.3. Scheme Staggering Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.4. Conserved Quantities
Is Required: TRUE Type: ENUM Cardinality: 1.N
Momentum advection scheme conserved quantities
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Conservation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Momentum advection scheme conservation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Radiation
Characteristics of the atmosphere radiation process
14.1. Aerosols
Is Required: TRUE Type: ENUM Cardinality: 1.N
Aerosols whose radiative effect is taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Radiation --> Shortwave Radiation
Properties of the shortwave radiation scheme
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of shortwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Shortwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Shortwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Shortwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Radiation --> Shortwave GHG
Representation of greenhouse gases in the shortwave radiation scheme
16.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17. Radiation --> Shortwave Cloud Ice
Shortwave radiative properties of ice crystals in clouds
17.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18. Radiation --> Shortwave Cloud Liquid
Shortwave radiative properties of liquid droplets in clouds
18.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19. Radiation --> Shortwave Cloud Inhomogeneity
Cloud inhomogeneity in the shortwave radiation scheme
19.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20. Radiation --> Shortwave Aerosols
Shortwave radiative properties of aerosols
20.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 20.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the shortwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 21. Radiation --> Shortwave Gases
Shortwave radiative properties of gases
21.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General shortwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Radiation --> Longwave Radiation
Properties of the longwave radiation scheme
22.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of longwave radiation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the longwave radiation scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Spectral Integration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Longwave radiation scheme spectral integration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.4. Transport Calculation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Longwave radiation transport calculation methods
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 22.5. Spectral Intervals
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Longwave radiation scheme number of spectral intervals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Radiation --> Longwave GHG
Representation of greenhouse gases in the longwave radiation scheme
23.1. Greenhouse Gas Complexity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Complexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. ODS
Is Required: FALSE Type: ENUM Cardinality: 0.N
Ozone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Other Flourinated Gases
Is Required: FALSE Type: ENUM Cardinality: 0.N
Other flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24. Radiation --> Longwave Cloud Ice
Longwave radiative properties of ice crystals in clouds
24.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud ice crystals
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.2. Physical Reprenstation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 24.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud ice crystals in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Radiation --> Longwave Cloud Liquid
Longwave radiative properties of liquid droplets in clouds
25.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with cloud liquid droplets
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to cloud liquid droplets in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Radiation --> Longwave Cloud Inhomogeneity
Cloud inhomogeneity in the longwave radiation scheme
26.1. Cloud Inhomogeneity
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method for taking into account horizontal cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Radiation --> Longwave Aerosols
Longwave radiative properties of aerosols
27.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with aerosols
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.2. Physical Representation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical representation of aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27.3. Optical Methods
Is Required: TRUE Type: ENUM Cardinality: 1.N
Optical methods applicable to aerosols in the longwave radiation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 28. Radiation --> Longwave Gases
Longwave radiative properties of gases
28.1. General Interactions
Is Required: TRUE Type: ENUM Cardinality: 1.N
General longwave radiative interactions with gases
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Turbulence Convection
Atmosphere Convective Turbulence and Clouds
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of atmosphere convection and turbulence
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Turbulence Convection --> Boundary Layer Turbulence
Properties of the boundary layer turbulence scheme
30.1. Scheme Name
Is Required: FALSE Type: ENUM Cardinality: 0.1
Boundary layer turbulence scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Boundary layer turbulence scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Closure Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Boundary layer turbulence scheme closure order
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Counter Gradient
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Uses boundary layer turbulence scheme counter gradient
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31. Turbulence Convection --> Deep Convection
Properties of the deep convection scheme
31.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Deep convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Deep convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of deep convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Turbulence Convection --> Shallow Convection
Properties of the shallow convection scheme
32.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Shallow convection scheme name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.2. Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.N
shallow convection scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
Explanation: 32.3. Scheme Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
shallow convection scheme method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Physical processes taken into account in the parameterisation of shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.5. Microphysics
Is Required: FALSE Type: ENUM Cardinality: 0.N
Microphysics scheme for shallow convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33. Microphysics Precipitation
Large Scale Cloud Microphysics and Precipitation
33.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of large scale cloud microphysics and precipitation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Microphysics Precipitation --> Large Scale Precipitation
Properties of the large scale precipitation scheme
34.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the large scale precipitation parameterisation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34.2. Hydrometeors
Is Required: TRUE Type: ENUM Cardinality: 1.N
Precipitating hydrometeors taken into account in the large scale precipitation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Properties of the large scale cloud microphysics scheme
35.1. Scheme Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name of the microphysics parameterisation scheme used for large scale clouds.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Large scale cloud microphysics processes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Cloud Scheme
Characteristics of the cloud scheme
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the atmosphere cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.2. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
Explanation: 36.3. Atmos Coupling
Is Required: FALSE Type: ENUM Cardinality: 0.N
Atmosphere components that are linked to the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.4. Uses Separate Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Different cloud schemes for the different types of clouds (convective, stratiform and boundary layer)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Processes included in the cloud scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.6. Prognostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a prognostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 36.7. Diagnostic Scheme
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the cloud scheme a diagnostic scheme?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.8. Prognostic Variables
Is Required: FALSE Type: ENUM Cardinality: 0.N
List the prognostic variables used by the cloud scheme, if applicable.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 37. Cloud Scheme --> Optical Cloud Properties
Optical cloud properties
37.1. Cloud Overlap Method
Is Required: FALSE Type: ENUM Cardinality: 0.1
Method for taking into account overlapping of cloud layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Cloud Inhomogeneity
Is Required: FALSE Type: STRING Cardinality: 0.1
Method for taking into account cloud inhomogeneity
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Sub-grid scale water distribution
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale water distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 38.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale water distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 38.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale water distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 38.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale water distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
Explanation: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Sub-grid scale ice distribution
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sub-grid scale ice distribution type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 39.2. Function Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Sub-grid scale ice distribution function name
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 39.3. Function Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Sub-grid scale ice distribution function type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
Explanation: 39.4. Convection Coupling
Is Required: TRUE Type: ENUM Cardinality: 1.N
Sub-grid scale ice distribution coupling with convection
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40. Observation Simulation
Characteristics of observation simulation
40.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of observation simulator characteristics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Observation Simulation --> Isscp Attributes
ISSCP Characteristics
41.1. Top Height Estimation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator ISSCP top height estimation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. Top Height Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator ISSCP top height direction
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 42. Observation Simulation --> Cosp Attributes
CFMIP Observational Simulator Package attributes
42.1. Run Configuration
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator COSP run configuration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.2. Number Of Grid Points
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of grid points
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.3. Number Of Sub Columns
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of sub-columns used to simulate sub-grid variability
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 42.4. Number Of Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Cloud simulator COSP number of levels
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 43. Observation Simulation --> Radar Inputs
Characteristics of the cloud radar simulator
43.1. Frequency
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Cloud simulator radar frequency (Hz)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 43.2. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator radar type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.3. Gas Absorption
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses gas absorption
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 43.4. Effective Radius
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Cloud simulator radar uses effective radius
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44. Observation Simulation --> Lidar Inputs
Characteristics of the cloud lidar simulator
44.1. Ice Types
Is Required: TRUE Type: ENUM Cardinality: 1.1
Cloud simulator lidar ice type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 44.2. Overlap
Is Required: TRUE Type: ENUM Cardinality: 1.N
Cloud simulator lidar overlap
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 45. Gravity Waves
Characteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.
45.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of gravity wave parameterisation in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.2. Sponge Layer
Is Required: TRUE Type: ENUM Cardinality: 1.1
Sponge layer in the upper levels in order to avoid gravity wave reflection at the top.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.3. Background
Is Required: TRUE Type: ENUM Cardinality: 1.1
Background wave distribution
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 45.4. Subgrid Scale Orography
Is Required: TRUE Type: ENUM Cardinality: 1.N
Subgrid scale orography effects taken into account.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 46. Gravity Waves --> Orographic Gravity Waves
Gravity waves generated due to the presence of orography
46.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 46.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 47. Gravity Waves --> Non Orographic Gravity Waves
Gravity waves generated by non-orographic processes.
47.1. Name
Is Required: FALSE Type: STRING Cardinality: 0.1
Commonly used name for the non-orographic gravity wave scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.2. Source Mechanisms
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave source mechanisms
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
Explanation: 47.3. Calculation Method
Is Required: TRUE Type: ENUM Cardinality: 1.N
Non-orographic gravity wave calculation method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.4. Propagation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave propagation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 47.5. Dissipation Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Non-orographic gravity wave dissipation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 48. Solar
Top of atmosphere solar insolation characteristics
48.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of solar insolation of the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 49. Solar --> Solar Pathways
Pathways for solar forcing of the atmosphere
49.1. Pathways
Is Required: TRUE Type: ENUM Cardinality: 1.N
Pathways for the solar forcing of the atmosphere model domain
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 50. Solar --> Solar Constant
Solar constant and top of atmosphere insolation characteristics
50.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of the solar constant.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 50.2. Fixed Value
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If the solar constant is fixed, enter the value of the solar constant (W m-2).
End of explanation
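As a sketch, a fixed solar constant would be recorded with a single numeric call; the value below is an illustrative present-day figure, not a prescription:
# Illustrative sketch only -- replace with the model's actual fixed solar constant (W m-2)
DOC.set_value(1361.0)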
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 50.3. Transient Characteristics
Is Required: TRUE Type: STRING Cardinality: 1.1
solar constant transient characteristics (W m-2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
Explanation: 51. Solar --> Orbital Parameters
Orbital parameters and top of atmosphere insolation characteristics
51.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time adaptation of orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 51.2. Fixed Reference Date
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Reference date for fixed orbital parameters (yyyy)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 51.3. Transient Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Description of transient orbital parameters
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 51.4. Computation Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Method used for computing orbital parameters.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 52. Solar --> Insolation Ozone
Impact of solar insolation on stratospheric ozone
52.1. Solar Ozone Impact
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does top of atmosphere insolation impact on stratospheric ozone?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 53. Volcanos
Characteristics of the implementation of volcanoes
53.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview description of the implementation of volcanic effects in the atmosphere
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 54. Volcanos --> Volcanoes Treatment
Treatment of volcanoes in the atmosphere
54.1. Volcanoes Implementation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How volcanic effects are modeled in the atmosphere.
End of explanation |
15,711 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
0. Introduction
The following notebook is going to demonstrate the usage of prediction-wrapper, a set of utility classes that makes it much easier to run sklearn machine learning experiments.
This demo shows the procedure of setting up a classification wrapper to run 5-fold cross-validations on 3 different classification models with 3 performance metrics by only using a few lines of code.
Step1: 1. Load and prepare data
Load Titanic data from local. Downloaded from https
Step2: Display some meta info from the data file.
Step3: We see that the 'Age' feature has some missing data, so fill the missing values with the median.
Step4: Drop useless features from the data frame.
Step5: 2. Set up model inputs
Build a list of feature names from data frame. Note that we need to drop the 'Survived' column from input features.
Step6: Build a list of categorical feature names.
Step7: And set the name of the label column in the data frame.
Step8: Initialize 3 classification models.
Step9: 3. Run a classification wrapper with multiple models and multiple metrics
Initialize the classification wrapper with 5-fold cross-validation; this is where the magic happens.
Step10: Build a table to store results.
Step11: Run the classification wrapper with 3 models, and compute their results with 3 performance metrics.
Step12: Display results. | Python Code:
import pandas as pd
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from binary_classifier_wrappers import KfoldBinaryClassifierWrapper
from metric_wrappers import RSquare, AUC, RMSE
Explanation: 0. Introduction
The following notebook is going to demonstrate the usage of prediction-wrapper, a set of utility classes that makes it much easier to run sklearn machine learning experiments.
This demo shows the procedure of setting up a classification wrapper to run 5-fold cross-validations on 3 different classification models with 3 performance metrics by only using a few lines of code.
End of explanation
titanic = pd.read_csv("./data/train.csv")
Explanation: 1. Load and prepare data
Load Titanic data from local. Downloaded from https://www.kaggle.com/c/titanic/data.
Since this is only a demo, I only used training data.
End of explanation
titanic.info()
Explanation: Display some meta info from the data file.
End of explanation
titanic["Age"].fillna(titanic["Age"].median(), inplace=True)
Explanation: We see that the 'Age' feature has some missing data, so fill the missing values with the median.
End of explanation
titanic = titanic.drop(['PassengerId','Name','Ticket', 'Cabin', 'Embarked'], axis=1)
Explanation: Drop useless features from the data frame.
End of explanation
all_feature_names = titanic.columns.tolist()
all_feature_names.remove('Survived')
all_feature_names
Explanation: 2. Set up model inputs
Build a list of feature names from data frame. Note that we need to drop the 'Survived' column from input features.
End of explanation
categorical_feature_names = ['Pclass', 'Sex']
Explanation: Build a list of categorical feature names.
End of explanation
label_name = 'Survived'
Explanation: And set the name of the label column in the data frame.
End of explanation
lr_model = LogisticRegression()
svn_model = SVC(probability = True)
rf_model = RandomForestClassifier()
model_dict = {'Logistic Regression': lr_model,
'SVM': svn_model,
'Random Forest': rf_model}
Explanation: Initialize 3 classification models.
End of explanation
k_fold_binary = KfoldBinaryClassifierWrapper(titanic, label_name, \
all_feature_names, categorical_feature_names, k=5)
Explanation: 3. Run a classification wrapper with multiple models and multiple metrics
Initialize the classification wrapper with 5-fold cross-validation; this is where the magic happens.
End of explanation
model_performance_table = pd.DataFrame(index=range(len(model_dict)), \
columns=['Model', 'AUC', 'r^2', 'RMSE'])
Explanation: Build a table to store results.
End of explanation
for n, name in enumerate(model_dict.keys()):
k_fold_binary.set_model(model_dict[name])
pred_result = k_fold_binary.run()
model_performance_table.ix[n,'Model'] = name
model_performance_table.ix[n,'AUC'] = AUC.measure(pred_result.label, pred_result.pred_prob)
model_performance_table.ix[n,'r^2'] = RSquare.measure(pred_result.label, pred_result.pred_prob)
model_performance_table.ix[n,'RMSE'] = RMSE.measure(pred_result.label, pred_result.pred_prob)
Explanation: Run the classification wrapper with 3 models, and compute their results with 3 performance metrics.
End of explanation
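For readers unfamiliar with the metric wrappers, they appear to standardise a (label, predicted probability) interface. Computed directly with scikit-learn, the same three quantities would look roughly like the sketch below (an assumption about what the wrappers do, not their actual code):
from sklearn.metrics import roc_auc_score, r2_score, mean_squared_error
import numpy as np
# hypothetical equivalents of AUC.measure, RSquare.measure and RMSE.measure
auc = roc_auc_score(pred_result.label, pred_result.pred_prob)
r2 = r2_score(pred_result.label, pred_result.pred_prob)
rmse = np.sqrt(mean_squared_error(pred_result.label, pred_result.pred_prob))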
model_performance_table = model_performance_table.sort_values(by='AUC', ascending=False).reset_index(drop=True)
model_performance_table
Explanation: Display results.
End of explanation |
15,712 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
With this notebook you can subselect TeV sources for the stacked analysis of the catalog and rank TeV sources for the IGMF analysis
Imports
Step1: Loading the FHES catalog
Step2: Performing cuts to define samples for stacked analysis
Step3: Show the redshift distributions after the cuts
Step4: Loading the TeV sources
Step5: add suffix 3FGL to TeV catalog
Step6: make a table with 3FGL names and their var index and join with tev table
Step7: Get the optical depth
Step8: Cuts on the TeV catalog
Step9: Remove sources by hand, e.g. because of variability, not well constrained redshift, etc.
Step10: Remove the rows that fail the cuts from the table
Step11: print catalog and save to file | Python Code:
import os
import sys
from collections import OrderedDict
import yaml
import numpy as np
from astropy.io import fits
from astropy.table import Table, Column, join, hstack, vstack
from haloanalysis.utils import create_mask, load_source_rows
from haloanalysis.sed import HaloSED
from haloanalysis.model import CascModel, CascLike
from haloanalysis.model import scan_igmf_likelihood
from haloanalysis.sed import SED
from ebltable.tau_from_model import OptDepth
from haloanalysis.utils import create_mask
import re
%matplotlib inline
Explanation: With this notebook you can subselect TeV sources for the stacked analysis of the catalog and rank TeV sources for the IGMF analysis
Imports
End of explanation
cat = '../data/table_std_psf0123_joint2a_stdmodel_cat_v15.fits'
t = Table.read(cat, hdu = 'CATALOG')
Explanation: Loading the FHES catalog
End of explanation
mask_str = [
{'HBL' : ' (nupeak > 1e15 ) & (var_index < 100.)'},
{'HBL $z < 0.2$' : ' (nupeak > 1e15 ) & (var_index < 100.) & (redshift <= 0.2)'},
{'XHBL' : ' (nupeak > 1e17 ) & (var_index < 100.) & (3lac_fx_fr > 1e4)'},
{'LBL $z > 0.5$' : ' (nupeak <= 1e13 ) & (redshift > 0.5) & (3lac_fx_fr < 1e4)'}
]
mask = []
for m in mask_str:
mask.append(create_mask(t,m))
print 'surviving sources', np.sum(mask[-1])
Explanation: Performing cuts to define samples for stacked analysis
End of explanation
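create_mask presumably evaluates each selection string against the catalog columns; written out by hand, the first ('HBL') cut would be roughly the boolean array below (a sketch of the intended logic, not the helper's actual implementation):
# hypothetical explicit version of the 'HBL' selection string
hbl = (t['nupeak'] > 1e15) & (t['var_index'] < 100.)
print 'surviving sources', np.sum(hbl)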
color = ['k','r','b','g']
hatch = ['//','\\','||','-']
#t['redshift'][np.isnan(t['redshift'])] = np.ones(np.sum(np.isnan(t['redshift']))) * -0.1
for i,m in enumerate(mask):
if not i:
n,bins, patches = plt.hist(t['redshift'][m & np.isfinite(t['redshift'])],
bins = 15, normed = False, stacked = False, #range = (-0.1,2.5),
label = mask_str[i].keys()[0],
edgecolor = color[i], facecolor = 'None', lw = 2, hatch = hatch[i])
else:
n,bins, patches = plt.hist(t['redshift'][m& np.isfinite(t['redshift'])],
bins = bins, normed = False, stacked = False, label = mask_str[i].keys()[0],
edgecolor = color[i], facecolor = 'None', lw = 2, hatch = hatch[i])
plt.grid(True)
plt.legend(loc=0)
plt.xlabel('Redshift')
plt.savefig('redshift_dist_mask.png', format = 'png', dpi = 200)
Explanation: Show the redshift distributions after the cuts:
End of explanation
tau = OptDepth.readmodel(model = 'dominguez')
cat_tev = Table.read('../data/CompiledTeVSources.fits')
Explanation: Loading the TeV sources
End of explanation
for i,n in enumerate(cat_tev['3FGL_NAME']):
cat_tev['3FGL_NAME'][i] = '3FGL ' + n
Explanation: add suffix 3FGL to TeV catalog:
End of explanation
tfhes_var = Table([t['name_3fgl'],t['var_index']], names = ['3FGL_NAME', 'var_index'])
cat_tev = join(cat_tev,tfhes_var)
Explanation: make a table with 3FGL names and their var index and join with tev table
End of explanation
m = np.isfinite(cat_tev['E_REF'].data)
taus = []
for i,z in enumerate(cat_tev['REDSHIFT'].data):
taus.append(tau.opt_depth(z,cat_tev['E_REF'].data[i,m[i]]))
taus = np.array(taus)
tau_max = np.array([tm[-1] for tm in taus])
Explanation: Get the optical depth
End of explanation
c = {'var_zsafe' : '(IS_REDSHIFT_SAFE == 1) & (var_index < 100)'}
mtev = create_mask(cat_tev,c )
mtev = (tau_max > 2.) & mtev
Explanation: Cuts on the TeV catalog:
End of explanation
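For orientation, requiring tau_max > 2 keeps only sources whose highest measured energy point is strongly attenuated by the EBL; the surviving flux fraction at the cut value is exp(-tau):
# attenuation at the cut: exp(-2) corresponds to roughly 14% of the intrinsic flux
print np.exp(-2.)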
for i,n in enumerate(cat_tev['SOURCE_FULL'].data):
# remove sources with only LL on z:
if n.find('1553') >= 0 or n.find('1424') >= 0: mtev[i] = False
# remove highly variable sources -- this cut should be defined more clearly
#if n.find('279') >= 0 or n.find('2155') >= 0 or n.lower().find('mkn') >= 0: mtev[i] = False
# fit fails:
if n.find('0710') >= 0: mtev[i] = False
Explanation: Remove sources by hand, e.g. because of variability, not well constrained redshift, etc.
End of explanation
idx = np.arange(mtev.shape[0], dtype = np.int)[np.invert(mtev)]
cat_tev.remove_rows(idx)
Explanation: Remove the rows that fail the cuts from the table:
End of explanation
cat_tev.write('../data/TeV_sources_cut_{0:s}.fits'.format(c.keys()[0]), overwrite = True)
print cat_tev
help(np.array)
Explanation: print catalog and save to file
End of explanation |
15,713 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Regression in Python
This is a very quick run-through of some basic statistical concepts, adapted from Lab 4 in Harvard's CS109 course. Please feel free to try the original lab if you're feeling ambitious
Step1: Part 1
Step2: Now let's explore the data set itself.
Step3: There are no column names in the DataFrame. Let's add those.
Step4: Now we have a pandas DataFrame called bos containing all the data we want to use to predict Boston Housing prices. Let's create a variable called PRICE which will contain the prices. This information is contained in the target data.
Step5: EDA and Summary Statistics
Let's explore this data set. First we use describe() to get basic summary statistics for each of the columns.
Step6: Scatter plots
Let's look at some scatter plots for three variables
Step7: Your turn
Step8: Your turn
Step9: Scatter Plots using Seaborn
Seaborn is a cool Python plotting library built on top of matplotlib. It provides convenient syntax and shortcuts for many common types of plots, along with better-looking defaults.
We can also use seaborn regplot for the scatterplot above. This provides automatic linear regression fits (useful for data exploration later on). Here's one example below.
Step10: Histograms
Step11: Your turn
Step12: Linear regression with Boston housing data example
Here,
$Y$ = boston housing prices (also called "target" data in python)
and
$X$ = all the other features (or independent variables)
which we will use to fit a linear regression model and predict Boston housing prices. We will use the least squares method as the way to estimate the coefficients.
We'll use two ways of fitting a linear regression. We recommend the first but the second is also powerful in its features.
Fitting Linear Regression using statsmodels
Statsmodels is a great Python library for a lot of basic and inferential statistics. It also provides basic regression functions using an R-like syntax, so it's commonly used by statisticians. While we don't cover statsmodels officially in the Data Science Intensive, it's a good library to have in your toolbox. Here's a quick example of what you could do with it.
Step13: Interpreting coefficients
There is a ton of information in this output. But we'll concentrate on the coefficient table (middle table). We can interpret the RM coefficient (9.1021) by first noticing that the p-value (under P>|t|) is so small, basically zero. We can interpret the coefficient as follows: if we compare two groups of towns, one where the average number of rooms is say $5$ and the other the same except that all houses have $6$ rooms, then the average difference in house prices between the two groups is about $9.1$ (in thousands), so about $\$9,100$. The confidence interval gives us a range of plausible values for this difference, about ($\$8,279, \$9,925$), definitely not chump change.
statsmodels formulas
This formula notation will seem familiar to R users, but will take some getting used to for people coming from other languages or are new to statistics.
The formula gives instructions for the general structure of a regression call. For statsmodels (ols or logit) calls you need to have a Pandas dataframe with column names that you will add to your formula. In the below example you need a pandas data frame that includes the columns named (Outcome, X1, X2, ...), but you don't need to build a new dataframe for every regression. Use the same dataframe with all these things in it. The structure is very simple
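A minimal sketch of that structure, assuming a dataframe df that already contains the named columns (the names here are placeholders):
import statsmodels.formula.api as smf
# 'Outcome ~ X1 + X2' reads as: model Outcome as a linear function of X1 and X2
model = smf.ols(formula='Outcome ~ X1 + X2', data=df).fit()
print(model.summary())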
Step14: Fitting Linear Regression using sklearn
Step15: What can you do with a LinearRegression object?
Check out the scikit-learn docs here. We have listed the main functions here.
Main functions | Description
--- | ---
lm.fit() | Fit a linear model
lm.predict() | Predict Y using the linear model with estimated coefficients
lm.score() | Returns the coefficient of determination (R^2). A measure of how well observed outcomes are replicated by the model, as the proportion of total variation of outcomes explained by the model
What output can you get?
Step16: Output | Description
--- | ---
lm.coef_ | Estimated coefficients
lm.intercept_ | Estimated intercept
Fit a linear model
The lm.fit() function estimates the coefficients of the linear regression using least squares.
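A minimal sketch of that call, assuming X holds the predictor columns of bos (everything except PRICE):
from sklearn.linear_model import LinearRegression
X = bos.drop('PRICE', axis=1)
lm = LinearRegression()
lm.fit(X, bos.PRICE)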
Step17: Your turn
Step18: Estimated intercept and coefficients
Let's look at the estimated coefficients from the linear model using lm.intercept_ and lm.coef_.
After we have fit our linear regression model using the least squares method, we want to see what are the estimates of our coefficients $\beta_0$, $\beta_1$, ..., $\beta_{13}$
Step19: Predict Prices
We can calculate the predicted prices ($\hat{Y}_i$) using lm.predict.
$$ \hat{Y}_i = \hat{\beta}_0 + \hat{\beta}_1 X_1 + \ldots + \hat{\beta}_{13} X_{13} $$
Step20: Your turn
Step21: Residual sum of squares
Let's calculate the residual sum of squares
$$ S = \sum_{i=1}^N r_i = \sum_{i=1}^N (y_i - (\beta_0 + \beta_1 x_i))^2 $$
Step22: Mean squared error
This is simply the mean of the residual sum of squares.
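As a sketch, using the fitted lm and predictor frame X from the fit above:
# residual sum of squares, and its mean (the mean squared error)
rss = np.sum((bos.PRICE - lm.predict(X)) ** 2)
mse = np.mean((bos.PRICE - lm.predict(X)) ** 2)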
Your turn
Step23: Relationship between PTRATIO and housing price
Try fitting a linear regression model using only the 'PTRATIO' (pupil-teacher ratio by town)
Calculate the mean squared error.
Step24: We can also plot the fitted linear regression line.
Step25: Your turn
Try fitting a linear regression model using three independent variables
'CRIM' (per capita crime rate by town)
'RM' (average number of rooms per dwelling)
'PTRATIO' (pupil-teacher ratio by town)
Calculate the mean squared error.
Step26: Other important things to think about when fitting a linear regression model
<div class="span5 alert alert-danger">
<ul>
<li>**Linearity**. The dependent variable $Y$ is a linear combination of the regression coefficients and the independent variables $X$. </li>
<li>**Constant standard deviation**. The SD of the dependent variable $Y$ should be constant for different values of X.
<ul>
<li>e.g. PTRATIO
</ul>
</li>
<li> **Normal distribution for errors**. The $\epsilon$ term we discussed at the beginning are assumed to be normally distributed.
$$ \epsilon_i \sim N(0, \sigma^2)$$
Sometimes the distributions of responses $Y$ may not be normally distributed at any given value of $X$. e.g. skewed positively or negatively. </li>
<li> **Independent errors**. The observations are assumed to be obtained independently.
<ul>
<li>e.g. Observations across time may be correlated
</ul>
</li>
</ul>
</div>
Part 3
Step27: Another way is to split the data into random train and test subsets using the function train_test_split in sklearn.cross_validation. Here's the documentation.
Step28: Your turn
Step29: Your turn
Step30: Residual plots
Step31: Your turn
Step32: K-fold Cross-validation as an extension of this idea
<div class="span5 alert alert-info">
<p> A simple extension of the Test/train split is called K-fold cross-validation. </p>
<p> Here's the procedure | Python Code:
# special IPython command to prepare the notebook for matplotlib and other libraries
%pylab inline
import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
import sklearn
import seaborn as sns
# special matplotlib argument for improved plots
from matplotlib import rcParams
sns.set_style("whitegrid")
sns.set_context("poster")
Explanation: Regression in Python
This is a very quick run-through of some basic statistical concepts, adapted from Lab 4 in Harvard's CS109 course. Please feel free to try the original lab if you're feeling ambitious :-) The CS109 git repository also has the solutions if you're stuck.
Linear Regression Models
Prediction using linear regression
Some re-sampling methods
Train-Test splits
Cross Validation
Linear regression is used to model and predict continuous outcomes while logistic regression is used to model binary outcomes. We'll see some examples of linear regression as well as Train-test splits.
The packages we'll cover are: statsmodels, seaborn, and scikit-learn. While we don't explicitly teach statsmodels and seaborn in the Springboard workshop, those are great libraries to know.
<img width=600 height=300 src="https://imgs.xkcd.com/comics/sustainable.png"/>
End of explanation
from sklearn.datasets import load_boston
boston = load_boston()
boston.keys()
boston.data.shape
# Print column names
print(boston.feature_names)
# Print description of Boston housing data set
print(boston.DESCR)
Explanation: Part 1: Linear Regression
Purpose of linear regression
<div class="span5 alert alert-info">
<p> Given a dataset $X$ and $Y$, linear regression can be used to: </p>
<ul>
<li> Build a <b>predictive model</b> to predict future values of $Y$ from new observations of $X_i$ (for which $Y$ is not yet known). </li>
<li> Model the <b>strength of the relationship</b> between each independent variable $X_i$ and $Y$</li>
<ul>
<li> Sometimes not all $X_i$ will have a relationship with $Y$</li>
<li> Need to figure out which $X_i$ contributes most information to determine $Y$ </li>
</ul>
<li>Linear regression is used in so many applications that I won't warrant this with examples. It is in many cases, the first pass prediction algorithm for continuous outcomes. </li>
</ul>
</div>
A brief recap (feel free to skip if you don't care about the math)
Linear Regression is a method to model the relationship between a set of independent variables $X$ (also known as explanatory variables, features, predictors) and a dependent variable $Y$. This method assumes that each predictor $X$ is linearly related to the dependent variable $Y$.
$$ Y = \beta_0 + \beta_1 X + \epsilon$$
where $\epsilon$ is considered as an unobservable random variable that adds noise to the linear relationship. This is the simplest form of linear regression (one variable), we'll call this the simple model.
$\beta_0$ is the intercept of the linear model
Multiple linear regression is when you have more than one independent variable
$X_1$, $X_2$, $X_3$, $\ldots$
$$ Y = \beta_0 + \beta_1 X_1 + \ldots + \beta_p X_p + \epsilon$$
Back to the simple model. The model in linear regression is that the conditional mean of $Y$ given the values of $X$ is expressed as a linear function.
$$ y = f(x) = E(Y | X = x)$$
http://www.learner.org/courses/againstallodds/about/glossary.html
The goal is to estimate the coefficients (e.g. $\beta_0$ and $\beta_1$). We represent the estimates of the coefficients with a "hat" on top of the letter.
$$ \hat{\beta}_0, \hat{\beta}_1 $$
Once you estimate the coefficients $\hat{\beta}_0$ and $\hat{\beta}_1$, you can use these to predict new values of $Y$
$$\hat{y} = \hat{\beta}_0 + \hat{\beta}_1 x_1$$
How do you estimate the coefficients?
There are many ways to fit a linear regression model
The method called least squares is one of the most common methods
We will discuss least squares today
Estimating $\hat\beta$: Least squares
Least squares is a method that can estimate the coefficients of a linear model by minimizing the difference between the following:
$$ S = \sum_{i=1}^N r_i^2 = \sum_{i=1}^N (y_i - (\beta_0 + \beta_1 x_i))^2 $$
where $N$ is the number of observations.
We will not go into the mathematical details, but the least squares estimates $\hat{\beta}_0$ and $\hat{\beta}_1$ minimize the sum of the squared residuals $r_i = y_i - (\beta_0 + \beta_1 x_i)$ in the model (i.e. makes the difference between the observed $y_i$ and linear model $\beta_0 + \beta_1 x_i$ as small as possible).
The solution can be written in compact matrix notation as
$$\hat\beta = (X^T X)^{-1}X^T Y$$
We wanted to show you this in case you remember linear algebra, in order for this solution to exist we need $X^T X$ to be invertible. Of course this requires a few extra assumptions, $X$ must be full rank so that $X^T X$ is invertible, etc. This is important for us because this means that having redundant features in our regression models will lead to poorly fitting (and unstable) models. We'll see an implementation of this in the extra linear regression example.
Note: The "hat" means it is an estimate of the coefficient.
Part 2: Boston Housing Data Set
The Boston Housing data set contains information about the housing values in suburbs of Boston. This dataset was originally taken from the StatLib library which is maintained at Carnegie Mellon University and is now available on the UCI Machine Learning Repository.
Load the Boston Housing data set from sklearn
This data set is available in the sklearn python module which is how we will access it today.
End of explanation
bos = pd.DataFrame(boston.data)
bos.head()
Explanation: Now let's explore the data set itself.
End of explanation
bos.columns = boston.feature_names
bos.head()
Explanation: There are no column names in the DataFrame. Let's add those.
End of explanation
print(boston.target.shape)
bos['PRICE'] = boston.target
bos.head()
Explanation: Now we have a pandas DataFrame called bos containing all the data we want to use to predict Boston Housing prices. Let's create a variable called PRICE which will contain the prices. This information is contained in the target data.
End of explanation
bos.describe()
Explanation: EDA and Summary Statistics
Let's explore this data set. First we use describe() to get basic summary statistics for each of the columns.
End of explanation
plt.scatter(bos.CRIM, bos.PRICE)
plt.xlabel("Per capita crime rate by town (CRIM)")
plt.ylabel("Housing Price")
plt.title("Relationship between CRIM and Price")
Explanation: Scatter plots
Let's look at some scatter plots for three variables: 'CRIM', 'RM' and 'PTRATIO'.
What kind of relationship do you see? e.g. positive, negative? linear? non-linear?
End of explanation
#your turn: scatter plot between *RM* and *PRICE*
plt.scatter(bos.RM, bos.PRICE)
plt.xlabel("average number of rooms per dwelling (RM)")
plt.ylabel("Housing Price")
plt.title("Relationship between CRIM and Price")
#your turn: scatter plot between *PTRATIO* and *PRICE*
plt.scatter(bos.PTRATIO, bos.PRICE)
plt.xlabel("pupil-teacher ratio by town (PTRATIO)")
plt.ylabel("Housing Price")
plt.title("Relationship between CRIM and Price")
Explanation: Your turn: Create scatter plots between RM and PRICE, and PTRATIO and PRICE. What do you notice?
End of explanation
#your turn: create some other scatter plots
plt.scatter(bos.LSTAT, bos.PRICE)
plt.xlabel("pupil-teacher ratio by town (PTRATIO)")
plt.ylabel("Housing Price")
plt.title("Relationship between CRIM and Price")
Explanation: Your turn: What are some other numeric variables of interest? Plot scatter plots with these variables and PRICE.
End of explanation
sns.regplot(y="PRICE", x="RM", data=bos, fit_reg = True)
Explanation: Scatter Plots using Seaborn
Seaborn is a cool Python plotting library built on top of matplotlib. It provides convenient syntax and shortcuts for many common types of plots, along with better-looking defaults.
We can also use seaborn regplot for the scatterplot above. This provides automatic linear regression fits (useful for data exploration later on). Here's one example below.
End of explanation
plt.hist(bos.CRIM)
plt.title("CRIM")
plt.xlabel("Crime rate per capita")
plt.ylabel("Frequencey")
plt.show()
Explanation: Histograms
End of explanation
#your turn
plt.hist(bos.RM)
plt.title("RM")
plt.xlabel("average number of rooms per dwelling (RM)")
plt.ylabel("Frequencey")
plt.show()
#your turn
plt.hist(bos.PTRATIO)
plt.title("PTRATIO")
plt.xlabel("pupil-teacher ratio by town (PTRATIO)")
plt.ylabel("Frequencey")
plt.show()
Explanation: Your turn:
Plot histograms for RM and PTRATIO, along with the two variables you picked in the previous section.
End of explanation
# Import regression modules
# ols - stands for Ordinary least squares, we'll use this
import statsmodels.api as sm
from statsmodels.formula.api import ols
# statsmodels works nicely with pandas dataframes
# The thing inside the "quotes" is called a formula, a bit on that below
m = ols('PRICE ~ RM',bos).fit()
print(m.summary())
Explanation: Linear regression with Boston housing data example
Here,
$Y$ = boston housing prices (also called "target" data in python)
and
$X$ = all the other features (or independent variables)
which we will use to fit a linear regression model and predict Boston housing prices. We will use the least squares method as the way to estimate the coefficients.
We'll use two ways of fitting a linear regression. We recommend the first but the second is also powerful in its features.
Fitting Linear Regression using statsmodels
Statsmodels is a great Python library for a lot of basic and inferential statistics. It also provides basic regression functions using an R-like syntax, so it's commonly used by statisticians. While we don't cover statsmodels officially in the Data Science Intensive, it's a good library to have in your toolbox. Here's a quick example of what you could do with it.
End of explanation
# your turn
sns.regplot(y=bos.PRICE, x=m.fittedvalues, fit_reg = True)
plt.xlabel('predicted price')
Explanation: Interpreting coefficients
There is a ton of information in this output, but we'll concentrate on the coefficient table (the middle table). We can interpret the RM coefficient (9.1021) by first noticing that the p-value (under P>|t|) is essentially zero. We can interpret the coefficient as follows: if we compare two groups of towns, one where the average number of rooms is, say, $5$ and another that is identical except that the average is $6$ rooms, then the average difference in house prices between the two groups is about $9.1$ (in thousands), so about $\$9,100$. The confidence interval gives us a range of plausible values for this difference, about ($\$8,279, \$9,925$), definitely not chump change.
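If you want to pull these numbers out of the fitted result programmatically rather than reading them off the summary table, a minimal sketch (assuming the fitted object m from the cell above) is:
print(m.params['RM'])          # estimated RM coefficient (about 9.1)
print(m.conf_int().loc['RM'])  # 95% confidence interval for the RM coefficient
print(m.pvalues['RM'])         # p-value for the RM coefficient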
statsmodels formulas
This formula notation will seem familiar to R users, but will take some getting used to for people coming from other languages or are new to statistics.
The formula gives instructions for the general structure of a regression call. For statsmodels (ols or logit) calls you need a Pandas dataframe whose column names you will use in your formula. In the example below you need a pandas dataframe that includes columns named (Outcome, X1, X2, ...), but you don't need to build a new dataframe for every regression; use the same dataframe with all of these columns in it. The structure is very simple:
Outcome ~ X1
But of course we want to be able to handle more complex models; for example, multiple regression is done like this:
Outcome ~ X1 + X2 + X3
This is the very basic structure, but it should be enough to get you through the homework. Things can get much more complex; for a quick run-down of further uses see the statsmodels help page.
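For example, a multiple regression on the Boston data can reuse the same bos dataframe (a minimal sketch; this particular choice of predictors is just for illustration):
m_multi = ols('PRICE ~ CRIM + RM + PTRATIO', bos).fit()
print(m_multi.summary())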
Let's see how well our model actually fits our data. We can see below that there is a ceiling effect, which we should probably look into. Also, for large values of $Y$ we get underpredictions; most predictions fall below the 45-degree gridlines.
Your turn: Create a scatterplot between the predicted prices, available in m.fittedvalues, and the original prices. How does the plot look?
End of explanation
from sklearn.linear_model import LinearRegression
X = bos.drop('PRICE', axis = 1)
# This creates a LinearRegression object
lm = LinearRegression()
lm
Explanation: Fitting Linear Regression using sklearn
End of explanation
# Look inside lm object
# lm.<tab>
Explanation: What can you do with a LinearRegression object?
Check out the scikit-learn docs here. We have listed the main functions here.
Main functions | Description
--- | ---
lm.fit() | Fit a linear model
lm.predict() | Predict Y using the linear model with estimated coefficients
lm.score() | Returns the coefficient of determination (R^2). A measure of how well observed outcomes are replicated by the model, as the proportion of total variation of outcomes explained by the model
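As a quick illustration of these three methods together (a minimal sketch; X and bos are assumed to be defined as above, and a throwaway estimator name is used so it does not interfere with the lm object fitted below):
demo_lm = LinearRegression()
demo_lm.fit(X, bos.PRICE)            # estimate the coefficients
print(demo_lm.predict(X)[:3])        # predicted prices for the first three towns
print(demo_lm.score(X, bos.PRICE))   # R^2 of the fit on the same data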
What output can you get?
End of explanation
# Use all 13 predictors to fit linear regression model
lm.fit(X, bos.PRICE)
Explanation: Output | Description
--- | ---
lm.coef_ | Estimated coefficients
lm.intercept_ | Estimated intercept
Fit a linear model
The lm.fit() function estimates the coefficients of the linear regression using least squares.
End of explanation
lm = LinearRegression(fit_intercept=False)
lm.fit(X, bos.PRICE)
Explanation: Your turn: How would you change the model to not fit an intercept term? Would you recommend not having an intercept?
End of explanation
print('Estimated intercept coefficient:', lm.intercept_)
print('Number of coefficients:', len(lm.coef_))
# The coefficients
pd.DataFrame(list(zip(X.columns, lm.coef_)), columns = ['features', 'estimatedCoefficients'])
Explanation: Estimated intercept and coefficients
Let's look at the estimated coefficients from the linear model using lm.intercept_ and lm.coef_.
After we have fit our linear regression model using the least squares method, we want to see what are the estimates of our coefficients $\beta_0$, $\beta_1$, ..., $\beta_{13}$:
$$ \hat{\beta}_0, \hat{\beta}_1, \ldots, \hat{\beta}_{13} $$
End of explanation
# first five predicted prices
lm.predict(X)[0:5]
Explanation: Predict Prices
We can calculate the predicted prices ($\hat{Y}_i$) using lm.predict.
$$ \hat{Y}_i = \hat{\beta}_0 + \hat{\beta}_1 X_1 + \ldots + \hat{\beta}_{13} X_{13} $$
End of explanation
# your turn
sns.distplot(lm.predict(X), kde=False)
sns.regplot(lm.predict(X), bos.PRICE)
plt.xlabel('predicted prices')
Explanation: Your turn:
Histogram: Plot a histogram of all the predicted prices
Scatter Plot: Let's plot the true prices against the predicted prices to see where they disagree (we did this with statsmodels before).
End of explanation
print(np.sum((bos.PRICE - lm.predict(X)) ** 2))
Explanation: Residual sum of squares
Let's calculate the residual sum of squares
$$ S = \sum_{i=1}^N r_i^2 = \sum_{i=1}^N (y_i - (\beta_0 + \beta_1 x_i))^2 $$
End of explanation
#your turn
print(np.mean((bos.PRICE - lm.predict(X)) ** 2))
Explanation: Mean squared error
This is simply the mean of the squared residuals (the residual sum of squares divided by the number of observations).
Your turn: Calculate the mean squared error and print it.
End of explanation
lm = LinearRegression()
lm.fit(X[['PTRATIO']], bos.PRICE)
msePTRATIO = np.mean((bos.PRICE - lm.predict(X[['PTRATIO']])) ** 2)
print(msePTRATIO)
Explanation: Relationship between PTRATIO and housing price
Try fitting a linear regression model using only the 'PTRATIO' (pupil-teacher ratio by town)
Calculate the mean squared error.
End of explanation
plt.scatter(bos.PTRATIO, bos.PRICE)
plt.xlabel("Pupil-to-Teacher Ratio (PTRATIO)")
plt.ylabel("Housing Price")
plt.title("Relationship between PTRATIO and Price")
plt.plot(bos.PTRATIO, lm.predict(X[['PTRATIO']]), color='blue', linewidth=3)
plt.show()
Explanation: We can also plot the fitted linear regression line.
End of explanation
# your turn
lm = LinearRegression()
lm.fit(X[['PTRATIO', 'RM', 'CRIM']], bos.PRICE)
msePTRATIO = np.mean((bos.PRICE - lm.predict(X[['PTRATIO', 'RM', 'CRIM']])) ** 2)
print(msePTRATIO)
Explanation: Your turn
Try fitting a linear regression model using three independent variables
'CRIM' (per capita crime rate by town)
'RM' (average number of rooms per dwelling)
'PTRATIO' (pupil-teacher ratio by town)
Calculate the mean squared error.
End of explanation
X_train = X[:-50]
X_test = X[-50:]
Y_train = bos.PRICE[:-50]
Y_test = bos.PRICE[-50:]
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
Explanation: Other important things to think about when fitting a linear regression model
<div class="span5 alert alert-danger">
<ul>
<li>**Linearity**. The dependent variable $Y$ is a linear combination of the regression coefficients and the independent variables $X$. </li>
<li>**Constant standard deviation**. The SD of the dependent variable $Y$ should be constant for different values of X.
<ul>
<li>e.g. PTRATIO
</ul>
</li>
<li> **Normal distribution for errors**. The $\epsilon$ term we discussed at the beginning is assumed to be normally distributed.
$$ \epsilon_i \sim N(0, \sigma^2)$$
Sometimes the distributions of responses $Y$ may not be normally distributed at any given value of $X$. e.g. skewed positively or negatively. </li>
<li> **Independent errors**. The observations are assumed to be obtained independently.
<ul>
<li>e.g. Observations across time may be correlated
</ul>
</li>
</ul>
</div>
Part 3: Training and Test Data sets
Purpose of splitting data into Training/testing sets
<div class="span5 alert alert-info">
<p> Let's stick to the linear regression example: </p>
<ul>
<li> We built our model with the requirement that the model fit the data well. </li>
<li> As a side-effect, the model will fit <b>THIS</b> dataset well. What about new data? </li>
<ul>
<li> We wanted the model for predictions, right?</li>
</ul>
<li> One simple solution, leave out some data (for <b>testing</b>) and <b>train</b> the model on the rest </li>
<li> This also leads directly to the idea of cross-validation, next section. </li>
</ul>
</div>
One way of doing this is you can create training and testing data sets manually.
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(
X, bos.PRICE, test_size=0.33, random_state = 5)
print(X_train.shape)
print(X_test.shape)
print(Y_train.shape)
print(Y_test.shape)
Explanation: Another way is to split the data into random train and test subsets using the function train_test_split in sklearn.model_selection. Here's the documentation.
End of explanation
# your turn
lm = LinearRegression()
lm.fit(X_train, Y_train)
lm.predict(X_test)
Explanation: Your turn: Let's build a linear regression model using our new training data sets.
Fit a linear regression model to the training set
Predict the output on the test set
End of explanation
# your turn
mse_train = np.mean((Y_train - lm.predict(X_train)) ** 2)
mse_test = np.mean((Y_test - lm.predict(X_test)) ** 2)
print(mse_train)
print(mse_test)
# training MSE is lower than test MSE, so the model is somewhat overfit to the training data
Explanation: Your turn:
Calculate the mean squared error
using just the test data
using just the training data
Are they pretty similar or very different? What does that mean?
End of explanation
plt.scatter(lm.predict(X_train), lm.predict(X_train) - Y_train, c='b', s=40, alpha=0.5)
plt.scatter(lm.predict(X_test), lm.predict(X_test) - Y_test, c='g', s=40)
plt.hlines(y = 0, xmin=0, xmax = 50)
plt.title('Residual Plot using training (blue) and test (green) data')
plt.ylabel('Residuals')
Explanation: Residual plots
End of explanation
# Yes, I think so, since the residuals for the test and training data look pretty similar
Explanation: Your turn: Do you think this linear regression model generalizes well on the test data?
End of explanation
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LinearRegression
clf = LinearRegression()
scores = cross_val_score(clf, bos.drop('PRICE', axis = 1), bos.PRICE, cv=4, scoring='neg_mean_squared_error')
scores*-1
# the single train/test split gave an MSE of about 28; given the large variation across folds, it falls between the highest and lowest fold scores
# but a single train/test split does not reveal this fold-to-fold variation, which is important to know
Explanation: K-fold Cross-validation as an extension of this idea
<div class="span5 alert alert-info">
<p> A simple extension of the Test/train split is called K-fold cross-validation. </p>
<p> Here's the procedure:</p>
<ul>
<li> randomly assign your $n$ samples to one of $K$ groups. They'll each have about $n/k$ samples</li>
<li> For each group $k$: </li>
<ul>
<li> Fit the model (e.g. run regression) on all data excluding the $k^{th}$ group</li>
<li> Use the model to predict the outcomes in group $k$</li>
<li> Calculate your prediction error for each observation in the $k^{th}$ group (e.g. $(Y_i - \hat{Y}_i)^2$ for regression, $\mathbb{1}(Y_i \neq \hat{Y}_i)$ for logistic regression). </li>
</ul>
<li> Calculate the average prediction error across all samples $Err_{CV} = \frac{1}{n}\sum_{i=1}^n (Y_i - \hat{Y}_i)^2$ </li>
</ul>
</div>
Luckily you don't have to do this entire process by hand (for loops, etc.) every single time; scikit-learn has a very nice implementation of this, so have a look at the documentation.
Your turn (extra credit): Implement K-Fold cross-validation using the procedure above and Boston Housing data set using $K=4$. How does the average prediction error compare to the train-test split above?
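One possible sketch of the manual procedure, using sklearn's KFold splitter with $K=4$ (X, bos, np and LinearRegression are assumed to be defined as above; the exact numbers will depend on the shuffling seed):
from sklearn.model_selection import KFold

kf = KFold(n_splits=4, shuffle=True, random_state=0)
fold_errors = []
for train_idx, test_idx in kf.split(X):
    fold_model = LinearRegression()
    fold_model.fit(X.iloc[train_idx], bos.PRICE.iloc[train_idx])
    preds = fold_model.predict(X.iloc[test_idx])
    fold_errors.append(np.mean((bos.PRICE.iloc[test_idx] - preds) ** 2))
print(fold_errors)            # per-fold mean squared errors
print(np.mean(fold_errors))   # average prediction error across the 4 folds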
End of explanation |
15,714 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
MultiPolygons
The MultiPolygons glyphs is modeled closely on the GeoJSON spec for Polygon and MultiPolygon. The data that are used to construct MultiPolygons are nested 3 deep. In the top level of nesting, each item in the list represents a MultiPolygon - an entity like a state or a contour level. Each MultiPolygon is composed of Polygons representing different parts of the MultiPolygon. Each Polygon contains a list of coordinates representing the exterior bounds of the Polygon followed by lists of coordinates of any holes contained within the Polygon.
Polygon with no holes
We'll start with one square with bottom left corner at (1, 3) and top right corner at (2, 4). The simple case of one Polygon with no holes is represented in geojson as follows
Step1: Notice that in the geojson Polygon always starts and ends at the same point and that the direction in which the Polygon is drawn (winding) must be counter-clockwise. In bokeh we don't have these two restrictions, the direction doesn't matter, and the polygon will be closed even if the starting end ending point are not the same.
Step2: Polygon with holes
Now we'll add some holes to the square polygon defined above. We'll add a triangle in the lower left corner and another in the upper right corner. In geojson this can be represented as follows
Step3: MultiPolygon
Now we'll examine a MultiPolygon. A MultiPolygon is composed of different parts each of which is a Polygon and each of which can have or not have holes. To create a MultiPolygon from the Polygon that we are using above, we'll add a triangle below the square with holes. Here is how this shape would be represented in geojson
Step4: It is important to understand that the Polygons that make up this MultiPolygon are part of the same entity. It can be helpful to think of representing physically separate areas that are part of the same entity such as the islands of Hawaii.
MultiPolygons
Finally, we'll take a look at how we can represent a list of MultiPolygons. Each Mulipolygon represents a different entity. In geojson this would be a FeatureCollection
Step5: Using MultiPolygons glyph directly
Step6: By looking at the dataframe for this ColumnDataSource object, we can see that each MultiPolygon is represented by one row.
Step7: Using numpy arrays with MultiPolygons
Numpy arrays can be used instead of python native lists. In the following example, we'll generate concentric circles and used them to make rings. Similar methods could be used to generate contours. | Python Code:
from bokeh.plotting import figure, output_notebook, show
output_notebook()
p = figure(plot_width=300, plot_height=300, tools='hover,tap,wheel_zoom,pan,reset,help')
p.multi_polygons(xs=[[[[1, 2, 2, 1, 1]]]],
ys=[[[[3, 3, 4, 4, 3]]]])
show(p)
Explanation: MultiPolygons
The MultiPolygons glyph is modeled closely on the GeoJSON spec for Polygon and MultiPolygon. The data that are used to construct MultiPolygons are nested 3 deep. In the top level of nesting, each item in the list represents a MultiPolygon - an entity like a state or a contour level. Each MultiPolygon is composed of Polygons representing different parts of the MultiPolygon. Each Polygon contains a list of coordinates representing the exterior bounds of the Polygon followed by lists of coordinates of any holes contained within the Polygon.
Polygon with no holes
We'll start with one square with bottom left corner at (1, 3) and top right corner at (2, 4). The simple case of one Polygon with no holes is represented in geojson as follows:
geojson
{
"type": "Polygon",
"coordinates": [
[
[1, 3],
[2, 3],
[2, 4],
[1, 4],
[1, 3]
]
]
}
In geojson this list of coordinates is nested 1 deep to allow for passing lists of holes within the polygon. In bokeh (using MultiPolygon) the coordinates for this same polygon will be nested 3 deep to allow space for other entities and for other parts of the MultiPolygon.
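To make the nesting concrete, here is the same square written out level by level as plain Python lists (a sketch of the data structure only; it is the final xs value that gets passed to multi_polygons):
exterior_xs     = [1, 2, 2, 1, 1]      # x coordinates of the square's exterior ring
polygon_xs      = [exterior_xs]        # a Polygon: the exterior ring plus any hole rings
multipolygon_xs = [polygon_xs]         # a MultiPolygon: one or more Polygons
xs              = [multipolygon_xs]    # the xs argument: one item per MultiPolygon
assert xs == [[[[1, 2, 2, 1, 1]]]]     # matches the literal used in the call above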
End of explanation
p = figure(plot_width=300, plot_height=300, tools='hover,tap,wheel_zoom,pan,reset,help')
p.multi_polygons(xs=[[[[1, 1, 2, 2]]]],
ys=[[[[3, 4, 4, 3]]]])
show(p)
Explanation: Notice that in geojson a Polygon always starts and ends at the same point and that the direction in which the Polygon is drawn (winding) must be counter-clockwise. In bokeh we don't have these two restrictions: the direction doesn't matter, and the polygon will be closed even if the starting and ending points are not the same.
End of explanation
p = figure(plot_width=300, plot_height=300, tools='hover,tap,wheel_zoom,pan,reset,help')
p.multi_polygons(xs=[[[ [1, 2, 2, 1], [1.2, 1.6, 1.6], [1.8, 1.8, 1.6] ]]],
ys=[[[ [3, 3, 4, 4], [3.2, 3.6, 3.2], [3.4, 3.8, 3.8] ]]])
show(p)
Explanation: Polygon with holes
Now we'll add some holes to the square polygon defined above. We'll add a triangle in the lower left corner and another in the upper right corner. In geojson this can be represented as follows:
geojson
{
"type": "Polygon",
"coordinates": [
[
[1, 3],
[2, 3],
[2, 4],
[1, 4],
[1, 3]
],
[
[1.2, 3.2],
[1.6, 3.6],
[1.6, 3.2],
[1.2, 3.2]
],
[
[1.8, 3.8],
[1.8, 3.4],
[1.6, 3.8],
[1.8, 3.8]
]
]
}
Once again notice that the direction in which the polygons are drawn doesn't matter and the last point in a polygon does not need to match the first. Hover over the holes to demonstrate that they aren't considered part of the Polygon.
End of explanation
p = figure(plot_width=300, plot_height=300, tools='hover,tap,wheel_zoom,pan,reset,help')
p.multi_polygons(xs=[[[ [1, 1, 2, 2], [1.2, 1.6, 1.6], [1.8, 1.8, 1.6] ], [ [3, 4, 3] ]]],
ys=[[[ [4, 3, 3, 4], [3.2, 3.2, 3.6], [3.4, 3.8, 3.8] ], [ [1, 1, 3] ]]])
show(p)
Explanation: MultiPolygon
Now we'll examine a MultiPolygon. A MultiPolygon is composed of different parts each of which is a Polygon and each of which can have or not have holes. To create a MultiPolygon from the Polygon that we are using above, we'll add a triangle below the square with holes. Here is how this shape would be represented in geojson:
geojson
{
"type": "MultiPolygon",
"coordinates": [
[
[
[1, 3],
[2, 3],
[2, 4],
[1, 4],
[1, 3]
],
[
[1.2, 3.2],
[1.6, 3.6],
[1.6, 3.2],
[1.2, 3.2]
],
[
[1.8, 3.8],
[1.8, 3.4],
[1.6, 3.8],
[1.8, 3.8]
]
],
[
[
[3, 1],
[4, 1],
[3, 3],
[3, 1]
]
]
]
}
End of explanation
p = figure(plot_width=300, plot_height=300, tools='hover,tap,wheel_zoom,pan,reset,help')
p.multi_polygons(
xs=[
[[ [1, 1, 2, 2], [1.2, 1.6, 1.6], [1.8, 1.8, 1.6] ], [ [3, 3, 4] ]],
[[ [1, 2, 2, 1], [1.3, 1.3, 1.7, 1.7] ]]],
ys=[
[[ [4, 3, 3, 4], [3.2, 3.2, 3.6], [3.4, 3.8, 3.8] ], [ [1, 3, 1] ]],
[[ [1, 1, 2, 2], [1.3, 1.7, 1.7, 1.3] ]]],
color=['blue', 'red'])
show(p)
Explanation: It is important to understand that the Polygons that make up this MultiPolygon are part of the same entity. It can be helpful to think of a MultiPolygon as representing physically separate areas that are part of the same entity, such as the islands of Hawaii.
MultiPolygons
Finally, we'll take a look at how we can represent a list of MultiPolygons. Each MultiPolygon represents a different entity. In geojson this would be a FeatureCollection:
geojson
{
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"properties": {
"fill": "blue"
},
"geometry": {
"type": "MultiPolygon",
"coordinates": [
[
[
[1, 3],
[2, 3],
[2, 4],
[1, 4],
[1, 3]
],
[
[1.2, 3.2],
[1.6, 3.6],
[1.6, 3.2],
[1.2, 3.2]
],
[
[1.8, 3.8],
[1.8, 3.4],
[1.6, 3.8],
[1.8, 3.8]
]
],
[
[
[3, 1],
[4, 1],
[3, 3],
[3, 1]
]
]
]
}
},
{
"type": "Feature",
"properties": {
"fill": "red"
},
"geometry": {
"type": "Polygon",
"coordinates": [
[
[1, 1],
[2, 1],
[2, 2],
[1, 2],
[1, 1]
],
[
[1.3, 1.3],
[1.3, 1.7],
[1.7, 1.7],
[1.7, 1.3]
[1.3, 1.3]
]
]
}
}
]}
End of explanation
from bokeh.models import ColumnDataSource, Plot, LinearAxis, Grid
from bokeh.models.glyphs import MultiPolygons
from bokeh.models.tools import TapTool, WheelZoomTool, ResetTool, HoverTool
source = ColumnDataSource(dict(
xs=[
[
[
[1, 1, 2, 2],
[1.2, 1.6, 1.6],
[1.8, 1.8, 1.6]
],
[
[3, 3, 4]
]
],
[
[
[1, 2, 2, 1],
[1.3, 1.3, 1.7, 1.7]
]
]
],
ys=[
[
[
[4, 3, 3, 4],
[3.2, 3.2, 3.6],
[3.4, 3.8, 3.8]
],
[
[1, 3, 1]
]
],
[
[
[1, 1, 2, 2],
[1.3, 1.7, 1.7, 1.3]
]
]
],
color=["blue", "red"],
label=["A", "B"]
))
Explanation: Using MultiPolygons glyph directly
End of explanation
source.to_df()
hover = HoverTool(tooltips=[("Label", "@label")])
plot = Plot(plot_width=300, plot_height=300, tools=[hover, TapTool(), WheelZoomTool()])
glyph = MultiPolygons(xs="xs", ys="ys", fill_color='color')
plot.add_glyph(source, glyph)
xaxis = LinearAxis()
plot.add_layout(xaxis, 'below')
yaxis = LinearAxis()
plot.add_layout(yaxis, 'left')
plot.add_layout(Grid(dimension=0, ticker=xaxis.ticker))
plot.add_layout(Grid(dimension=1, ticker=yaxis.ticker))
show(plot)
Explanation: By looking at the dataframe for this ColumnDataSource object, we can see that each MultiPolygon is represented by one row.
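For instance, the nested coordinate lists for the first MultiPolygon ('A') can be inspected directly from the source object defined above (a minimal sketch):
print(source.data['label'][0])   # 'A'
print(source.data['xs'][0])      # the full nested x coordinates for MultiPolygon 'A'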
End of explanation
import numpy as np
from bokeh.palettes import Viridis10 as palette
def circle(radius):
angles = np.linspace(0, 2*np.pi, 100)
return {'x': radius*np.sin(angles), 'y': radius*np.cos(angles), 'radius': radius}
radii = np.geomspace(1, 100, 10)
source = dict(xs=[],
ys=[],
color=[palette[i] for i in range(10)],
outer_radius=radii)
for i, r in enumerate(radii):
exterior = circle(r)
if i == 0:
polygon_xs = [exterior['x']]
polygon_ys = [exterior['y']]
else:
hole = circle(radii[i-1])
polygon_xs = [exterior['x'], hole['x']]
polygon_ys = [exterior['y'], hole['y']]
source['xs'].append([polygon_xs])
source['ys'].append([polygon_ys])
p = figure(plot_width=300, plot_height=300,
tools='hover,tap,wheel_zoom,pan,reset,help',
tooltips=[("Outer Radius", "@outer_radius")])
p.multi_polygons('xs', 'ys', fill_color='color', source=source)
show(p)
Explanation: Using numpy arrays with MultiPolygons
Numpy arrays can be used instead of Python native lists. In the following example, we'll generate concentric circles and use them to make rings. Similar methods could be used to generate contours.
End of explanation |
15,715 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Begin by defining the output parsing function for usage
Define overnight output file location and parse into dictionary
Step1: Define a function that lets us pull out the optimal fit for a specific cortex
Step2: Visualising the results
It makes sense to try and visualise the results as a whole. Knowing the single best fit for each cortex doesn't show us the trend across all subjects or the data as a whole. | Python Code:
OVERNIGHT_FILE = '/home/buck06191/Desktop/optimisation_edit.json'
with open(OVERNIGHT_FILE) as f:
optim_data = json.load(f)
# Check length of each dict section before converting to pandas DF
import copy
x = copy.copy(optim_data)
{k:len(x[k]) for k in x.keys()}
overnight_df = pd.DataFrame(optim_data)
Explanation: Begin by defining the output parsing function for usage
Define overnight output file location and parse into dictionary
End of explanation
def optimal_fit(xx, cortex):
df = xx.loc[xx['Cortex']==cortex]
return df.loc[df['Final_Distance']==df['Final_Distance'].min()]
df_PFC = overnight_df.loc[overnight_df['Cortex']=='PFC']
df_VC = overnight_df.loc[overnight_df['Cortex']=='VC']
optimal_PFC = df_PFC.loc[df_PFC.groupby(['Subject', 'Max_Demand']).Final_Distance.agg('idxmin')]
optimal_PFC
optimal_VC = df_VC.loc[df_VC.groupby(['Subject', 'Max_Demand']).Final_Distance.agg('idxmin')]
df = pd.concat([optimal_VC, optimal_PFC])
R_corr = df.groupby(['Cortex', 'Max_Demand'])['R_autc'].apply(lambda x: x.corr(df['R_autp']))
t_corr = df.groupby(['Cortex', 'Max_Demand'])['t_c'].apply(lambda x: x.corr(df['t_p']))
print(R_corr)
plt.figure()
plt.plot(R_corr.index.levels[1], R_corr.loc['PFC'], '.r', label='PFC')
plt.plot(R_corr.index.levels[1], R_corr.loc['VC'], '.b', label='VC')
plt.title('R Correlation')
plt.legend()
plt.figure()
plt.plot(t_corr.index.levels[1], t_corr.loc['PFC'], '.r', label='PFC')
plt.title('Time Correlation')
plt.plot(t_corr.index.levels[1], t_corr.loc['VC'], '.b', label='VC')
plt.legend()
g = sns.FacetGrid(df, col="Cortex", row='Max_Demand', hue='Subject')
g = (g.map(plt.scatter, "R_autp", "R_autc", edgecolor="w")).add_legend()
g = sns.FacetGrid(df, col="Cortex", row='Max_Demand', hue='Subject')
g = (g.map(plt.scatter, "t_p", "t_c", edgecolor="w")).add_legend()
Explanation: Define a function that lets us pull out the optimal fit for a specific cortex
End of explanation
g=sns.factorplot(data=overnight_df, x='Max_Demand', y='Final_Distance',
hue='Cortex', col='Cortex', kind='box', col_wrap=3)
param_list = ['R_autc', 't_c', 'R_autp', 't_p', 'R_autu', 't_u', 'R_auto', 't_o']
for parameter in param_list:
plt.figure()
g = sns.jointplot(parameter, 'Final_Distance', overnight_df)
g.fig.suptitle(parameter)
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
threedee = plt.figure().gca(projection='3d')
threedee.scatter(overnight_df['R_autp'], overnight_df['R_autc'], overnight_df['Final_Distance'])
threedee.set_xlabel('R_autp')
threedee.set_ylabel('R_autc')
threedee.set_zlabel('Final Distance')
plt.show()
Explanation: Visualising the results
It makes sense to try and visualise the results as a whole. Knowing the single best fit for each cortex doesn't show us the trend across all subjects or the data as a whole.
End of explanation |
15,716 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Type
Is Required
Step7: 1.4. Elemental Stoichiometry
Is Required
Step8: 1.5. Elemental Stoichiometry Details
Is Required
Step9: 1.6. Prognostic Variables
Is Required
Step10: 1.7. Diagnostic Variables
Is Required
Step11: 1.8. Damping
Is Required
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required
Step13: 2.2. Timestep If Not From Ocean
Is Required
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required
Step15: 3.2. Timestep If Not From Ocean
Is Required
Step16: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required
Step17: 4.2. Scheme
Is Required
Step18: 4.3. Use Different Scheme
Is Required
Step19: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required
Step20: 5.2. River Input
Is Required
Step21: 5.3. Sediments From Boundary Conditions
Is Required
Step22: 5.4. Sediments From Explicit Model
Is Required
Step23: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required
Step24: 6.2. CO2 Exchange Type
Is Required
Step25: 6.3. O2 Exchange Present
Is Required
Step26: 6.4. O2 Exchange Type
Is Required
Step27: 6.5. DMS Exchange Present
Is Required
Step28: 6.6. DMS Exchange Type
Is Required
Step29: 6.7. N2 Exchange Present
Is Required
Step30: 6.8. N2 Exchange Type
Is Required
Step31: 6.9. N2O Exchange Present
Is Required
Step32: 6.10. N2O Exchange Type
Is Required
Step33: 6.11. CFC11 Exchange Present
Is Required
Step34: 6.12. CFC11 Exchange Type
Is Required
Step35: 6.13. CFC12 Exchange Present
Is Required
Step36: 6.14. CFC12 Exchange Type
Is Required
Step37: 6.15. SF6 Exchange Present
Is Required
Step38: 6.16. SF6 Exchange Type
Is Required
Step39: 6.17. 13CO2 Exchange Present
Is Required
Step40: 6.18. 13CO2 Exchange Type
Is Required
Step41: 6.19. 14CO2 Exchange Present
Is Required
Step42: 6.20. 14CO2 Exchange Type
Is Required
Step43: 6.21. Other Gases
Is Required
Step44: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required
Step45: 7.2. PH Scale
Is Required
Step46: 7.3. Constants If Not OMIP
Is Required
Step47: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required
Step48: 8.2. Sulfur Cycle Present
Is Required
Step49: 8.3. Nutrients Present
Is Required
Step50: 8.4. Nitrous Species If N
Is Required
Step51: 8.5. Nitrous Processes If N
Is Required
Step52: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required
Step53: 9.2. Upper Trophic Levels Treatment
Is Required
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required
Step55: 10.2. Pft
Is Required
Step56: 10.3. Size Classes
Is Required
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required
Step58: 11.2. Size Classes
Is Required
Step59: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required
Step60: 12.2. Lability
Is Required
Step61: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required
Step62: 13.2. Types If Prognostic
Is Required
Step63: 13.3. Size If Prognostic
Is Required
Step64: 13.4. Size If Discrete
Is Required
Step65: 13.5. Sinking Speed If Prognostic
Is Required
Step66: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required
Step67: 14.2. Abiotic Carbon
Is Required
Step68: 14.3. Alkalinity
Is Required | Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'dwd', 'sandbox-1', 'ocnbgchem')
Explanation: ES-DOC CMIP6 Model Properties - Ocnbgchem
MIP Era: CMIP6
Institute: DWD
Source ID: SANDBOX-1
Topic: Ocnbgchem
Sub-Topics: Tracers.
Properties: 65 (37 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:57
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
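For example, a completed call might look like the commented line below (the name and email are placeholders for illustration only, not real document authors):
# DOC.set_author("Jane Doe", "jane.doe@example.org")  # placeholder values, replace with the real author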
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
4. Key Properties --> Transport Scheme
5. Key Properties --> Boundary Forcing
6. Key Properties --> Gas Exchange
7. Key Properties --> Carbon Chemistry
8. Tracers
9. Tracers --> Ecosystem
10. Tracers --> Ecosystem --> Phytoplankton
11. Tracers --> Ecosystem --> Zooplankton
12. Tracers --> Disolved Organic Matter
13. Tracers --> Particules
14. Tracers --> Dic Alkalinity
1. Key Properties
Ocean Biogeochemistry key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean biogeochemistry model code (PISCES 2.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
Explanation: 1.4. Elemental Stoichiometry
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe elemental stoichiometry (fixed, variable, mix of the two)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Elemental Stoichiometry Details
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe which elements have fixed/variable stoichiometry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all prognostic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Diagnostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.N
List of all diagnotic tracer variables in the ocean biogeochemistry component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Damping
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any tracer damping used (such as artificial correction or relaxation to climatology,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Time stepping method for passive tracers transport in ocean biogeochemistry
2.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for passive tracers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for passive tracers (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Time stepping framework for biology sources and sinks in ocean biogeochemistry
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time stepping framework for biology sources and sinks
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Timestep If Not From Ocean
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Time step for biology sources and sinks (if different from ocean)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Transport Scheme
Transport scheme in ocean biogeochemistry
4.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transport scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 4.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Transport scheme used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Use Different Scheme
Is Required: FALSE Type: STRING Cardinality: 0.1
Decribe transport scheme if different than that of ocean model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Boundary Forcing
Properties of biogeochemistry boundary forcing
5.1. Atmospheric Deposition
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how atmospheric deposition is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
Explanation: 5.2. River Input
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how river input is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Sediments From Boundary Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from boundary conditions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Sediments From Explicit Model
Is Required: FALSE Type: STRING Cardinality: 0.1
List which sediments are specified from the explicit sediment model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Gas Exchange
*Properties of gas exchange in ocean biogeochemistry *
6.1. CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.2. CO2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe CO2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.3. O2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is O2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 6.4. O2 Exchange Type
Is Required: FALSE Type: ENUM Cardinality: 0.1
Describe O2 gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.5. DMS Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is DMS gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.6. DMS Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify DMS gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.7. N2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.8. N2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.9. N2O Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is N2O gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.10. N2O Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify N2O gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.11. CFC11 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC11 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.12. CFC11 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC11 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.13. CFC12 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is CFC12 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.14. CFC12 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify CFC12 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.15. SF6 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is SF6 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.16. SF6 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify SF6 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.17. 13CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 13CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.18. 13CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 13CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.19. 14CO2 Exchange Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is 14CO2 gas exchange modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.20. 14CO2 Exchange Type
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify 14CO2 gas exchange scheme type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.21. Other Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
Specify any other gas exchange
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Carbon Chemistry
Properties of carbon chemistry biogeochemistry
7.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how carbon chemistry is modeled
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 7.2. PH Scale
Is Required: FALSE Type: ENUM Cardinality: 0.1
If NOT OMIP protocol, describe pH scale.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Constants If Not OMIP
Is Required: FALSE Type: STRING Cardinality: 0.1
If NOT OMIP protocol, list carbon chemistry constants.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Tracers
Ocean biogeochemistry tracers
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of tracers in ocean biogeochemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Sulfur Cycle Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sulfur cycle modeled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.3. Nutrients Present
Is Required: TRUE Type: ENUM Cardinality: 1.N
List nutrient species present in ocean biogeochemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.4. Nitrous Species If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous species.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 8.5. Nitrous Processes If N
Is Required: FALSE Type: ENUM Cardinality: 0.N
If nitrogen present, list nitrous processes.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Tracers --> Ecosystem
Ecosystem properties in ocean biogeochemistry
9.1. Upper Trophic Levels Definition
Is Required: TRUE Type: STRING Cardinality: 1.1
Definition of upper trophic level (e.g. based on size) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Upper Trophic Levels Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Define how upper trophic level are treated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
Explanation: 10. Tracers --> Ecosystem --> Phytoplankton
Phytoplankton properties in ocean biogeochemistry
10.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of phytoplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Pft
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton functional types (PFT) (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Phytoplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Tracers --> Ecosystem --> Zooplankton
Zooplankton properties in ocean biogeochemistry
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of zooplankton
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Size Classes
Is Required: FALSE Type: ENUM Cardinality: 0.N
Zooplankton size classes (if applicable)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 12. Tracers --> Disolved Organic Matter
Disolved organic matter properties in ocean biogeochemistry
12.1. Bacteria Present
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there bacteria representation ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Lability
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe treatment of lability in dissolved organic matter
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Tracers --> Particules
Particulate carbon properties in ocean biogeochemistry
13.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is particulate carbon represented in ocean biogeochemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, type(s) of particulate matter taken into account
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
Explanation: 13.3. Size If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe if a particule size spectrum is used to represent distribution of particules in water volume
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13.4. Size If Discrete
Is Required: FALSE Type: STRING Cardinality: 0.1
If prognostic and discrete size, describe which size classes are used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.5. Sinking Speed If Prognostic
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, method for calculation of sinking speed of particules
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
Explanation: 14. Tracers --> Dic Alkalinity
DIC and alkalinity properties in ocean biogeochemistry
14.1. Carbon Isotopes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which carbon isotopes are modelled (C13, C14)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.2. Abiotic Carbon
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is abiotic carbon modelled ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
Explanation: 14.3. Alkalinity
Is Required: TRUE Type: ENUM Cardinality: 1.1
How is alkalinity modelled ?
End of explanation |
15,717 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Self-Driving Car Engineer Nanodegree
Deep Learning
Project
Step1: Step 1
Step2: Visualizing Dataset
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include
Step3: Plotting a comparison between distribution of data for training and testing dataset
Step4: The above diagram shows the histogram distribution of training and testing data.
Augmenting Data
As we can clearly see the data is not evenly distributed amongst the various labels. A good training set should have evenly distributed data. In this project we will augment the data for labels having dataset less than the average data per label.
Step5: As mentioned in the research paper Traffic Sign Recognition with Multi-Scale Convolutional Networks we will be augmenting the data by geometric transformation of the original dataset i.e. translating, rotating and scaling the original dataset.
Step6: Data Distribution After Augmentation
Step7: Step 2
Step8: Model Architecture
Step9: Train, Validate and Test the Model
A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation
sets imply underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
Step10: Step 3
Step11: Selection Criteria for Test Images
The test images are selected with the difficulty of recognizing each sign in mind. The numerical 30 km/h speed-limit sign is chosen to test whether the network can recognize the number amongst various other similar speed-limit signs.
Complex designs, such as the bicycle sign and the wildlife sign board featuring a deer, are selected to see whether the network can identify complex shapes.
Along with these complex designs, three simpler signs, namely keep right, bumpy road and priority road, are included for a comparative study of how well the network handles complex sign boards compared to simpler ones.
Predict the Sign Type for Each Image
Step12: Analyze Performance
Step13: Output Top 5 Softmax Probabilities For Each Image Found on the Web
For each of the new images, we have printed out the model's softmax probabilities to show the certainty of the model's predictions.
Step14: Step 4 | Python Code:
# Load pickled data
import pickle
# Loading the relevant files:
# Training Data: train.p
# Validating Data: valid.p
# Testing Data: test.p
training_file = "train.p"
validation_file= "valid.p"
testing_file = "test.p"
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
x_train, y_train = train['features'], train['labels']
x_valid, y_valid = valid['features'], valid['labels']
x_test, y_test = test['features'], test['labels']
print ('Data has been loaded')
Explanation: Self-Driving Car Engineer Nanodegree
Deep Learning
Project: Build a Traffic Sign Recognition Classifier
In this notebook, we have implemented a traffic sign recognition classifier; the steps are illustrated in more detail in the writeup.
Step 0: Load The Data
End of explanation
# Assuring whether the number of features matches number of labels for all the datasets
assert (len(x_train) == len(y_train))
assert (len(x_test) == len(y_test))
# Number of training examples
n_train = len(x_train)
# Number of testing examples.
n_test = len(x_test)
# Number of validating examples
n_valid = len(x_valid)
# Fetching the image shape from the first training example
image_shape = x_train[0].shape
# Fetching the number of unique labels
'''
We take the max of the labels because the training data contains all the expected class ids
present in the provided .csv file, starting from 0; hence we add 1 to the maximum value
'''
n_classes = max(y_train) + 1
print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Number of validating examples =", n_valid)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
Explanation: Step 1: Dataset Summary & Exploration
The pickled data is a dictionary with 4 key/value pairs:
'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
'sizes' is a list containing tuples, (width, height) representing the the original width and height the image.
'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES
Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas
End of explanation
### Data exploration visualization code goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
import random
# Visualizations will be shown in the notebook.
%matplotlib inline
fig, axs = plt.subplots(2,5, figsize=(15, 4))
fig.subplots_adjust(hspace = .2, wspace=.001)
axs = axs.ravel()
for i in range(10):
index = random.randint(0, len(x_train) - 1)
image = x_train[index]
axs[i].axis('off')
axs[i].imshow(image)
axs[i].set_title(y_train[index])
# Visualizing the distribution of data across various labels
import numpy as np
hist, bins = np.histogram(y_train, bins=n_classes)
width = 0.7 * (bins[1] - bins[0])
center = (bins[:-1] + bins[1:]) / 2
plt.bar(center, hist, align='center', width=width)
plt.show()
Explanation: Visualizing Dataset
Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.
The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.
NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections.
End of explanation
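To make the imbalance visible in the histogram more concrete, here is a small sketch that lists the least and most frequent labels, reusing the numpy import and the n_classes value defined above.
# Quick sketch: least and most frequent classes in the training set
counts = np.bincount(y_train, minlength=n_classes)
order = np.argsort(counts)
print("Least frequent classes (label, count):", [(int(c), int(counts[c])) for c in order[:5]])
print("Most frequent classes (label, count):", [(int(c), int(counts[c])) for c in order[-5:]])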
# Visualizing the distribution of training data and testing data
train_hist, train_bins = np.histogram(y_train, bins=n_classes)
test_hist, test_bins = np.histogram(y_test, bins=n_classes)
train_width = 0.7 * (train_bins[1] - train_bins[0])
train_center = (train_bins[:-1] + train_bins[1:]) / 2
test_width = 0.7 * (test_bins[1] - test_bins[0])
test_center = (test_bins[:-1] + test_bins[1:]) / 2
plt.bar(train_center, train_hist, align='center', color='red', width=train_width)
plt.bar(test_center, test_hist, align='center', color='green', width=test_width)
plt.show()
Explanation: Plotting a comparison between distribution of data for training and testing dataset
End of explanation
# Calculating average data per class
avg = (int)(len(y_train)/n_classes)
print('Average Data Per Class is approx:' , avg)
Explanation: The above diagram shows the histogram distribution of training and testing data.
Augmenting Data
As we can clearly see, the data is not evenly distributed amongst the various labels. A good training set should have evenly distributed data. In this project we will augment the data for labels that have fewer samples than the average number of samples per label.
End of explanation
# Augmenting data of those classes who have dataset less than ceil of avg
import cv2
# Function to translate image by a random value -4 to 4 px
def translate(img):
tx = (int)(random.uniform(-4, 4))
ty = (int)(random.uniform(-4, 4))
rows,cols,depth = img.shape
M = np.float32([[1,0,tx],[0,1,ty]])
return cv2.warpAffine(img,M,(cols,rows))
# Function to rotate image by random value between -30 to 30 degree
def rotate(img):
theta = (int)(random.uniform(-30, 30))
rows,cols,depth = img.shape
M = cv2.getRotationMatrix2D((cols/2,rows/2), theta,1)
return cv2.warpAffine(img,M,(cols,rows))
# Function to zoom/scale image via a random perspective warp of up to 8 px at each corner
def scale(img):
rows,cols,ch = img.shape
px = (int)(random.uniform(-8,8))
pts1 = np.float32([[px,px],[rows-px,px],[px,cols-px],[rows-px,cols-px]])
pts2 = np.float32([[0,0],[rows,0],[0,cols],[rows,cols]])
M = cv2.getPerspectiveTransform(pts1,pts2)
return cv2.warpPerspective(img,M,(rows,cols))
# Translating, Rotating and Scaling a sample image
new_img = translate(x_train[90])
fig, axs = plt.subplots(1,2, figsize=(10, 2))
axs[0].axis('off')
axs[0].imshow(x_train[90].squeeze())
axs[0].set_title('Original')
axs[1].axis('off')
axs[1].imshow(new_img.squeeze())
axs[1].set_title('Translated')
new_img = rotate(x_train[90])
fig, axs = plt.subplots(1,2, figsize=(10, 2))
axs[0].axis('off')
axs[0].imshow(x_train[90].squeeze())
axs[0].set_title('Original')
axs[1].axis('off')
axs[1].imshow(new_img.squeeze())
axs[1].set_title('Rotated')
new_img = scale(x_train[90])
fig, axs = plt.subplots(1,2, figsize=(10, 2))
axs[0].axis('off')
axs[0].imshow(x_train[90].squeeze())
axs[0].set_title('Original')
axs[1].axis('off')
axs[1].imshow(new_img.squeeze())
axs[1].set_title('Scaled')
# Augmenting dataset
for label in range(0, n_classes):
print('Class', label)
print('Processing->', end='')
label_indices = np.where(y_train == label)
n_indices = len(label_indices[0])
if n_indices < avg:
for i in range(0, avg - n_indices):
new_img = x_train[(label_indices[0][i % n_indices])]
n = random.randint(0,2)
if n == 0:
new_img = translate(new_img)
elif n == 1:
new_img = rotate(new_img)
else:
new_img = scale(new_img)
x_train = np.concatenate((x_train, [new_img]), axis=0)
y_train = np.concatenate((y_train, [label]), axis=0)
if i %10 == 0:
print('*', end='')
print('Completed')
print('')
print("Augmentation Completed")
Explanation: As mentioned in the research paper Traffic Sign Recognition with Multi-Scale Convolutional Networks we will be augmenting the data by geometric transformation of the original dataset i.e. translating, rotating and scaling the original dataset.
End of explanation
# Visualizing the distribution of data across various labels
import numpy as np
hist, bins = np.histogram(y_train, bins=n_classes)
width = 0.7 * (bins[1] - bins[0])
center = (bins[:-1] + bins[1:]) / 2
plt.bar(center, hist, align='center', width=width)
plt.show()
Explanation: Data Distribution After Augmentation
End of explanation
### Preprocess the data here. Preprocessing steps could include normalization, converting to grayscale, etc.
### Feel free to use as many code cells as needed.
from sklearn.utils import shuffle
import numpy as np
import cv2
# Function to preprocess data
def preprocess(x_data):
for i in range (0, x_data.shape[0]):
x_data[i] = cv2.cvtColor(x_data[i], cv2.COLOR_RGB2YUV)
# Equalizing Histogram of each channel using Contrast Limited Adaptive Histogram Equalization (CLAHE)
clahe = cv2.createCLAHE(clipLimit=4.0, tileGridSize=(4,4))
x_data[i,:,:,0] = clahe.apply(x_data[i,:,:, 0])
x_data[i] = cv2.cvtColor(x_data[i], cv2.COLOR_YUV2RGB)
return x_data
# Shuffling data
x_train, y_train = shuffle(x_train, y_train)
x_original = np.copy(x_train)
x_train = preprocess(x_train)
x_valid = preprocess(x_valid)
x_test = preprocess(x_test)
# Displaying processed and original images
%matplotlib inline
fig, axs = plt.subplots(2,5)
fig.subplots_adjust(hspace = .2, wspace=.001)
for i in range(5):
# Ploting original Image
index = random.randint(0, len(x_train))
image = x_original[index]
axs[0,i].axis('off')
axs[0,i].imshow(image)
axs[0,i].set_title(y_train[index])
# Plotting processed image
image = x_train[index]
axs[1,i].axis('off')
axs[1,i].imshow(image.squeeze())
axs[1,i].set_title(y_train[index])
Explanation: Step 2: Design and Test a Model Architecture
The approach followed to design the model is discussed in the writeup. In a nutshell, we have taken the LeNet architecture as the base, combined some elements from the model described in the paper titled Traffic Sign Recognition with Multi-Scale Convolutional Networks, and added some parts after experimentation.
Pre-process the Data Set
Preprocessing includes equalizing the histogram of the image to increase the overall contrast.
End of explanation
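Note that the preprocessing above only equalizes contrast; it does not rescale pixel values. A common additional step, shown here only as an optional sketch and not applied in this notebook, is to scale the images to the [0, 1] range before feeding them to the network.
# Optional sketch (not applied in this pipeline): scale pixel values to [0, 1]
def normalize(x_data):
    return x_data.astype(np.float32) / 255.0
# Example usage: x_train_norm = normalize(x_train)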
import tensorflow as tf
# Defining number of epochs and batch_size
EPOCHS = 100
BATCH_SIZE = 100
from tensorflow.contrib.layers import flatten
conv1 = None
# Modified LeNet Architecture for training the data
def LeNet(x):
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# Weights for each layer
weights = {'wc1': tf.Variable(tf.truncated_normal((5, 5, 3, 6), mean=mu, stddev=sigma, dtype=tf.float32), name='ConvolutionalWeight1'),
'wc2': tf.Variable(tf.truncated_normal((5, 5, 6, 16), mean=mu, stddev=sigma, dtype=tf.float32), name='ConvolutionalWeight2'),
'wc3': tf.Variable(tf.truncated_normal((5, 5, 16, 400), mean=mu, stddev=sigma, dtype=tf.float32), name='ConvolutionalWeight3'),
'wfc1': tf.Variable(tf.truncated_normal((800, 400), mean=mu, stddev=sigma, dtype=tf.float32), name='FullyConnectedLayerWeight1'),
'wfc2': tf.Variable(tf.truncated_normal((400, 120), mean=mu, stddev=sigma, dtype=tf.float32), name='FullyConnectedLayerWeight2'),
'wfc3': tf.Variable(tf.truncated_normal((120, 43), mean=mu, stddev=sigma, dtype=tf.float32), name='FullyConnectedLayerWeight3')}
# Biases for each layer
biases = {'bc1':tf.zeros(6, name='ConvolutionalBias1'),
'bc2':tf.zeros(16, name='ConvolutionalBias2'),
'bc3':tf.zeros(400, name='ConvolutionalBias3'),
'bfc1': tf.zeros(400, name='FullyConnectedLayerBias1'),
'bfc2':tf.zeros(120, name='FullyConnectedLayerBias2'),
'bfc3':tf.zeros(43, name='FullyConnectedLayerBias3')}
# Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x6.
conv1 = tf.nn.conv2d(x, weights['wc1'], [1, 1, 1, 1], padding='VALID')
conv1 = tf.nn.bias_add(conv1, biases['bc1'])
# Activation.
conv1 = tf.nn.relu(conv1)
# Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, [1, 2, 2, 1], [1, 2, 2, 1], padding='VALID')
# Layer 2: Convolutional. Output = 10x10x16.
conv2 = tf.nn.conv2d(conv1, weights['wc2'], [1, 1, 1, 1], padding='VALID')
conv2 = tf.nn.bias_add(conv2, biases['bc2'])
# Activation.
conv2 = tf.nn.relu(conv2)
# Pooling. Input = 10x10x16. Output = 5x5x16.
conv2 = tf.nn.max_pool(conv2, [1, 2, 2, 1], [1, 2, 2, 1], padding='VALID')
# Convolution Layer 3 Input = 5x5x16 Output = 1x1x400
conv3 = tf.nn.conv2d(conv2, weights['wc3'], [1, 1, 1, 1], padding='VALID')
conv3 = tf.nn.bias_add(conv3, biases['bc3'])
# Activation
conv3 = tf.nn.relu(conv3)
# Flatten. conv3: 1x1x400 -> 400; conv2: 5x5x16 -> 400 (concatenated below into 800).
conv3 = flatten(conv3)
conv2 = flatten(conv2)
res = tf.concat(1, [conv3, conv2])
res = tf.nn.dropout(res, keep_prob)
# Fully Connected. Input = 800. Output = 400.
fc1 = tf.add(tf.matmul(res, weights['wfc1']), biases['bfc1'])
# Activation.
fc1 = tf.nn.relu(fc1)
# Layer 4: Fully Connected. Input = 400. Output = 120.
fc2 = tf.add(tf.matmul(fc1, weights['wfc2']), biases['bfc2'])
# Activation.
fc2 = tf.nn.relu(fc2)
# Layer 5: Fully Connected. Input = 120. Output = 43.
logits = tf.add(tf.matmul(fc2, weights['wfc3']), biases['bfc3'])
return logits
tf.reset_default_graph()
x = tf.placeholder(tf.float32, (None, 32, 32, 3), name='X-input')
y = tf.placeholder(tf.int32, (None), name='Y-input')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
one_hot_y = tf.one_hot(y, 43)
# Learning Rate
rate = 0.0009
logits = LeNet(x)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits, one_hot_y)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate)
training_operation = optimizer.minimize(loss_operation)
correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
saver = tf.train.Saver()
# Evaluation Function
def evaluate(X_data, y_data):
num_examples = len(X_data)
total_accuracy = 0
sess = tf.get_default_session()
for offset in range(0, num_examples, BATCH_SIZE):
batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y, keep_prob: 1.0})  # no dropout during evaluation
total_accuracy += (accuracy * len(batch_x))
return total_accuracy / num_examples
Explanation: Model Architecture
End of explanation
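As a quick sanity check on the size of the network defined above, the trainable parameters can be counted once logits = LeNet(x) has been built; this is a small sketch using the same TF1-style API as the rest of the notebook.
# Sketch: count the trainable parameters of the model built above
total_params = sum(int(np.prod(v.get_shape().as_list())) for v in tf.trainable_variables())
print("Trainable parameters:", total_params)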
# Training Data
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
num_examples = len(x_train)
print("Training...")
print()
for i in range(EPOCHS):
x_train, y_train = shuffle(x_train, y_train)
for offset in range(0, num_examples, BATCH_SIZE):
end = offset + BATCH_SIZE
batch_x, batch_y = x_train[offset:end], y_train[offset:end]
sess.run(training_operation, feed_dict={x: batch_x, y: batch_y, keep_prob:0.5})
validation_accuracy = evaluate(x_valid, y_valid)
if (i+1)%10 == 0 or i == 0:
print("EPOCH {} ...".format(i+1))
print("Validation Accuracy = {:.3f}".format(validation_accuracy))
print()
saver.save(sess, './lenet')
print("Model trained")
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected,
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.
# Testing Data
print('')
print('-----------------------------------------------------------------------')
with tf.Session() as sess:
print("Testing...")
saver.restore(sess, tf.train.latest_checkpoint('.'))
test_accuracy = evaluate(x_test, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
Explanation: Train, Validate and Test the Model
A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation
sets implies underfitting. A high accuracy on the training set but low accuracy on the validation set implies overfitting.
End of explanation
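Beyond a single accuracy number, a per-class breakdown shows where the model confuses one sign for another. The sketch below is illustrative and was not part of the original notebook; it reuses the checkpoint saved above and scikit-learn's confusion_matrix (scikit-learn is already a dependency via the shuffle import).
# Sketch: confusion matrix over the test set using the saved checkpoint
from sklearn.metrics import confusion_matrix
prediction_op = tf.argmax(logits, 1)
test_predictions = []
with tf.Session() as sess:
    saver.restore(sess, tf.train.latest_checkpoint('.'))
    for offset in range(0, len(x_test), BATCH_SIZE):
        batch_x = x_test[offset:offset + BATCH_SIZE]
        test_predictions.extend(sess.run(prediction_op, feed_dict={x: batch_x, keep_prob: 1.0}))
print(confusion_matrix(y_test, test_predictions))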
### Load the images and plot them here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import os
%matplotlib inline
# Loading New Images
images = os.listdir('extra_images/')
fig, axs = plt.subplots(1,6, figsize=(15, 2))
i = 0
prediction_images = []
# Plotting Test Images
for image in images:
img = mpimg.imread('extra_images/' + image)
axs[i].imshow(img)
prediction_images.append(img)
i=i+1
x_predict = np.asarray(prediction_images)
# Storing new images in x_predict array
x_predict = preprocess(x_predict)
# Storing labels for the y_predict
y_predict = [31, 38, 22, 12, 29, 1]
Explanation: Step 3: Test a Model on New Images
To give yourself more insight into how the model is working, we have downloaded six pictures of German traffic signs from the web and have used the model to predict the traffic sign type.
The signnames.csv file contains mappings from the class id (integer) to the actual sign name.
Load and Output the Images
End of explanation
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
softmax_logits = tf.nn.softmax(logits)
top_k = tf.nn.top_k(softmax_logits, k=1)
# Predicting image with the stored wights
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
saver = tf.train.import_meta_graph('./lenet.meta')
saver.restore(sess, "./lenet")
prediction = sess.run(top_k, feed_dict={x: x_predict, keep_prob:1.0})
result = (prediction.indices).reshape((1,-1))[0]
# Displaying Result
print('Prediction:',result)
print('Original labels:', y_predict)
Explanation: Selection Criteria for Test Images
The test images are selected with the difficulty of recognizing each sign in mind. The numerical 30 km/h speed-limit sign is chosen to test whether the network can recognize the number amongst various other similar speed-limit signs.
Complex designs, such as the bicycle sign and the wildlife sign board featuring a deer, are selected to see whether the network can identify complex shapes.
Along with these complex designs, three simpler signs, namely keep right, bumpy road and priority road, are included for a comparative study of how well the network handles complex sign boards compared to simpler ones.
Predict the Sign Type for Each Image
End of explanation
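The integer predictions above can be mapped back to readable sign names. This is a small sketch that assumes the signnames.csv file mentioned in the introduction is present in the working directory with ClassId and SignName columns (the column names are an assumption here).
# Sketch: map predicted class ids to sign names (assumes signnames.csv with ClassId,SignName columns)
import csv
with open('signnames.csv', 'r') as f:
    sign_names = {int(row['ClassId']): row['SignName'] for row in csv.DictReader(f)}
for predicted, actual in zip(result, y_predict):
    print('Predicted: {:40s} Actual: {}'.format(sign_names[int(predicted)], sign_names[actual]))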
### Calculate the accuracy for these 5 new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.
# Calculating Accuracy of the Network Over New Images
count = 0
for i in range(0,6):
if y_predict[i] == result[i]:
count += 1
accuracy = count/6 * 100
print('The accuracy of the network is:', accuracy, '%')
Explanation: Analyze Performance
End of explanation
# Printing top 5 softmax probabilities
softmax_logits = tf.nn.softmax(logits)
top_k = tf.nn.top_k(softmax_logits, k=5)
# Predicting image with the stored wights
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
saver = tf.train.import_meta_graph('./lenet.meta')
saver.restore(sess, "./lenet")
top_probs =sess.run(top_k, feed_dict={x: x_predict, keep_prob:1})
# Fetching values and the labels
top_values = top_probs[0]
top_labels = top_probs[1]
N = 5
ind = np.arange(N)
width = 0.35
for i in range(6):
print("Image ", i+1)
print("Top Labels:\n", top_labels[i])
print("Top Probabilties:\n", top_values[i])
for i in range(6):
plt.figure(i)
values = top_values[i]
plt.ylabel('Probabilities')
plt.xlabel('Labels')
plt.title('Top 5 Softmax Probabilities Image {}'.format(str(i+1)))
plt.xticks(ind+width, tuple(top_labels[i]))
plt.bar(ind, top_values[i], width=width)
Explanation: Output Top 5 Softmax Probabilities For Each Image Found on the Web
For each of the new images, we have printed out the model's softmax probabilities to show the certainty of the model's predictions.
End of explanation
### Visualize your network's feature maps here.
### Feel free to use as many code cells as needed.
# image_input: the test image being fed into the network to produce the feature maps
# tf_activation: should be a tf variable name used during your training procedure that represents the calculated state of a specific weight layer
# activation_min/max: can be used to view the activation contrast in more detail, by default matplot sets min and max to the actual min and max values of the output
# plt_num: used to plot out multiple different weight feature map sets on the same block, just extend the plt number for each new feature map entry
def outputFeatureMap(image_input, tf_activation, activation_min=-1, activation_max=-1 ,plt_num=1):
# Here make sure to preprocess your image_input in a way your network expects
# with size, normalization, ect if needed
# image_input =
# Note: x should be the same name as your network's tensorflow data placeholder variable
# If you get an error tf_activation is not defined it maybe having trouble accessing the variable from inside a function
activation = tf_activation.eval(session=sess,feed_dict={x : image_input})
featuremaps = activation.shape[3]
plt.figure(plt_num, figsize=(15,15))
for featuremap in range(featuremaps):
plt.subplot(6,8, featuremap+1) # sets the number of feature maps to show on each row and column
plt.title('FeatureMap ' + str(featuremap)) # displays the feature map number
if activation_min != -1 & activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin =activation_min, vmax=activation_max, cmap="gray")
elif activation_max != -1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmax=activation_max, cmap="gray")
elif activation_min !=-1:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", vmin=activation_min, cmap="gray")
else:
plt.imshow(activation[0,:,:, featuremap], interpolation="nearest", cmap="gray")
# Constructing a layer in the above model to observe the feature map
# Arguments used for tf.truncated_normal, randomly defines variables for the weights and biases for each layer
mu = 0
sigma = 0.1
# Weights for each layer
weights = {'wc1': tf.Variable(tf.truncated_normal((5, 5, 3, 6), mean=mu, stddev=sigma, dtype=tf.float32)),
'wc2': tf.Variable(tf.truncated_normal((5, 5, 6, 16), mean=mu, stddev=sigma, dtype=tf.float32)),
'wc3': tf.Variable(tf.truncated_normal((5, 5, 16, 400), mean=mu, stddev=sigma, dtype=tf.float32)),
'wfc1': tf.Variable(tf.truncated_normal((800, 400), mean=mu, stddev=sigma, dtype=tf.float32)),
'wfc2': tf.Variable(tf.truncated_normal((400, 120), mean=mu, stddev=sigma, dtype=tf.float32)),
'wfc3': tf.Variable(tf.truncated_normal((120, 43), mean=mu, stddev=sigma, dtype=tf.float32))}
# Biases for each layer
biases = {'bc1':tf.zeros(6), 'bc2':tf.zeros(16), 'bc3':tf.zeros(400), 'bfc1': tf.zeros(400), 'bfc2':tf.zeros(120), 'bfc3':tf.zeros(43)}
# Layer 1: Convolutional. Input = 32x32x3. Output = 28x28x6.
conv1 = tf.nn.conv2d(x, weights['wc1'], [1, 1, 1, 1], padding='VALID')
conv1 = tf.nn.bias_add(conv1, biases['bc1'])
# Activation.
conv1 = tf.nn.relu(conv1)
with tf.Session() as sess:
print("Convolutional Layer 1")
sess.run(tf.global_variables_initializer())
outputFeatureMap([x_predict[1]],conv1)
# Pooling. Input = 28x28x6. Output = 14x14x6.
conv1 = tf.nn.max_pool(conv1, [1, 2, 2, 1], [1, 2, 2, 1], padding='VALID')
# Layer 2: Convolutional. Output = 10x10x16.
conv2 = tf.nn.conv2d(conv1, weights['wc2'], [1, 1, 1, 1], padding='VALID')
conv2 = tf.nn.bias_add(conv2, biases['bc2'])
with tf.Session() as sess:
print("Convolutional Layer 2")
sess.run(tf.global_variables_initializer())
outputFeatureMap([x_predict[1]],conv2, plt_num=2)
Explanation: Step 4: Visualize the Neural Network's State with Test Images
This section is not required, but it acts as an additional exercise for understanding the output of a neural network's weights. While neural networks can be a great learning device, they are often referred to as a black box. We can better understand what the weights of a neural network look like by plotting their feature maps. After successfully training your neural network, you can see what its feature maps look like by plotting the output of the network's weight layers in response to a test stimulus image. From these plotted feature maps, it's possible to see what characteristics of an image the network finds interesting. For a sign, maybe the inner network feature maps react with high activation to the sign's boundary outline or to the contrast in the sign's painted symbol.
Provided below is the function code that allows you to get the visualization output of any TensorFlow weight layer you want. The inputs to the function should be a stimulus image, one used during training or a new one you provide, and the TensorFlow variable name that represents the layer's state during the training process; for instance, if you wanted to see what the LeNet lab's feature maps looked like for its second convolutional layer, you could enter conv2 as the tf_activation variable.
For an example of what feature map outputs look like, check out NVIDIA's results in their paper End-to-End Deep Learning for Self-Driving Cars in the section Visualization of internal CNN State. NVIDIA was able to show that their network's inner weights had high activations to road boundary lines by comparing feature maps from an image with a clear path to one without. Try experimenting with a similar test to show that your trained network's weights are looking for interesting features, whether it's looking at differences in feature maps from images with or without a sign, or even what feature maps look like in a trained network vs a completely untrained one on the same sign image.
End of explanation |
15,718 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
CI/CD for TFX pipelines
Learning Objectives
Develop a CI/CD workflow with Cloud Build to build and deploy TFX pipeline code.
Integrate with Github to automatically trigger pipeline deployment with source code repository changes.
In this lab, you will walk through authoring a Cloud Build CI/CD workflow that automatically builds and deploys a TFX pipeline. You will also integrate your workflow with GitHub by setting up a trigger that starts the workflow when a new tag is applied to the GitHub repo hosting the pipeline's code.
Understanding the Cloud Build workflow
Review the cloudbuild_vertex.yaml file to understand how the CI/CD workflow is implemented and how environment specific settings are abstracted using Cloud Build variables.
The Cloud Build CI/CD workflow automates the steps you walked through manually during the second lab of this series
Step1: Creating the TFX CLI builder
Review the Dockerfile for the TFX CLI builder
Step2: Build the image and push it to your project's Container Registry
Hint
Step3: Note | Python Code:
REGION = "us-central1"
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
ARTIFACT_STORE = f"gs://{PROJECT_ID}"
Explanation: CI/CD for TFX pipelines
Learning Objectives
Develop a CI/CD workflow with Cloud Build to build and deploy TFX pipeline code.
Integrate with Github to automatically trigger pipeline deployment with source code repository changes.
In this lab, you will walk through authoring a Cloud Build CI/CD workflow that automatically builds and deploys a TFX pipeline. You will also integrate your workflow with GitHub by setting up a trigger that starts the workflow when a new tag is applied to the GitHub repo hosting the pipeline's code.
Understanding the Cloud Build workflow
Review the cloudbuild_vertex.yaml file to understand how the CI/CD workflow is implemented and how environment specific settings are abstracted using Cloud Build variables.
The Cloud Build CI/CD workflow automates the steps you walked through manually during the second lab of this series:
1. Builds the custom TFX image to be used as a runtime execution environment for TFX components
1. Pushes the custom TFX image to your project's Container Registry
1. Compiles and run the TFX pipeline on Vertex pipelines
The Cloud Build workflow configuration uses both standard and custom Cloud Build builders. The custom builder encapsulates TFX CLI.
Configuring environment settings
You may need to open CloudShell and run the following command to allow CloudBuild to deploy a pipeline on Vertex:
```bash
export PROJECT_ID=$(gcloud config get-value core/project)
PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")
CLOUD_BUILD_SERVICE_ACCOUNT="${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com"
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:$CLOUD_BUILD_SERVICE_ACCOUNT \
--role roles/editor
```
Configure environment settings
End of explanation
!cat tfx-cli_vertex/Dockerfile
Explanation: Creating the TFX CLI builder
Review the Dockerfile for the TFX CLI builder
End of explanation
IMAGE_NAME = "tfx-cli_vertex"
IMAGE_URI = f"gcr.io/{PROJECT_ID}/{IMAGE_NAME}"
IMAGE_URI
# TODO: Your gcloud command here to build tfx-cli and submit to Container Registry.
!gcloud builds submit --timeout=15m --tag {IMAGE_URI} {IMAGE_NAME}
Explanation: Build the image and push it to your project's Container Registry
Hint: Review the Cloud Build gcloud command line reference for builds submit. Your image should follow the format gcr.io/[PROJECT_ID]/[IMAGE_NAME]. Note that the source code for the tfx-cli builder is in the directory ./tfx-cli_vertex. It includes a helper script, tfx_pipeline_run.py, to run a compiled pipeline on Vertex.
End of explanation
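The tfx_pipeline_run.py helper mentioned above is not reproduced in this notebook. Below is a minimal sketch of what such a helper might do, assuming the google-cloud-aiplatform client library and a pipeline already compiled to pipeline.json; the actual script in ./tfx-cli_vertex may differ.
# Hypothetical sketch of a pipeline-run helper (the real tfx_pipeline_run.py may differ)
from google.cloud import aiplatform

def run_compiled_pipeline(project, region, pipeline_root, template_path="pipeline.json"):
    # Assumes the pipeline has already been compiled by the TFX CLI into template_path
    aiplatform.init(project=project, location=region)
    job = aiplatform.PipelineJob(
        display_name="tfx-pipeline",
        template_path=template_path,
        pipeline_root=pipeline_root,
    )
    job.run(sync=False)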
SUBSTITUTIONS = f"_REGION={REGION}"
# TODO: write gcloud builds submit command to trigger manual pipeline run.
!gcloud builds submit . --timeout=2h --config cloudbuild_vertex.yaml --substitutions {SUBSTITUTIONS}
Explanation: Note: building and deploying the container below is expected to take 10-15 min.
Exercise: manually trigger CI/CD pipeline run with Cloud Build
You can manually trigger Cloud Build runs using the gcloud builds submit command.
See the documentation for how to pass the cloudbuild_vertex.yaml file and the substitutions.
End of explanation |
15,719 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Step1: カスタム訓練:ウォークスルー
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https
Step2: 次に、Colab メニューから Runtime > Restart Runtime を選択して、Colab ランタイムを再起動します。
ランタイムを再起動せずに、チュートリアルを先に進めないでください。
TensorFlow と他に必要な Python モジュールをインポートします。
Step3: データセットをインポートする
デフォルトの penguins/processed TensorFlow Dataset はすでにクリーニングされて正規化が済んでおり、モデルを構築できる準備が整っています。processed データをダウンロードする前に、簡易バージョンをプレビューして、元のペンギン調査データを理解しておきましょう。
データをプレビューする
TensorFlow Datasets tdfs.load メソッドを使用して、penguins データセットの簡易バージョン(penguins/simple)をダウンロードします。このデータセットには 344 件のデータレコードが存在します。最初の 5 件のレコードを DataFrame オブジェクトに抽出し、このデータセットのサンプルの値を調べます。
Step4: 番号付きの行がデータレコードで、行ごとに 1 つのサンプルが含まれます。
最初の 6 つのフィールドは、サンプルの特徴づける特徴量です。ここでは、ペンギンの測定値を表す数字が含まれています。
最後の列はラベルです。予測しようとしている値がこれです。このデータセットでは、ペンギンの種名に対応する 0、1、または 2 の整数が示されます。
このデータセットでは、ペンギンの種のラベルを数値で表現することにより、構築するモデルで扱いやすくしています。これらの数値は、次のペンギンの種に対応しています。
0
Step5: 特徴量とラベルについての詳細は、機械学習クラッシュコースの ML 用語セクションをご覧ください。
前処理済みのデータセットをダウンロードする
次に、tfds.load メソッドを使用して、前処理済みの penguins データセット(penguins/processed)をダウンロードします。すると、tf.data.Dataset オブジェクトのリストが返されます。penguins/processed データセットには独自のテストセットは用意されていないため、80
Step6: このバージョンのデータセットは処理済みであるため、データが 4 つの正規化された特徴量と種ラベルに縮小されていることに注意してください。このフォーマットでは、データを素早く使用してモデルをトレーニングできるようになっているため、移行の処理は必要ありません。
Step7: バッチのいくつかの特徴量をプロットして、クラスターを可視化できます。
Step8: 単純な線形モデルを構築する
なぜモデルか?
モデルは、特徴量とラベルの関係です。ペンギンの分類問題においては、このモデルは体重、フリッパー、および嘴峰の測定値、および予測されるペンギンの種の関係を定義しています。単純なモデルは、数行の代数で記述することは可能ですが、複雑な機械学習モデルにはパラメータの数も多く、要約が困難です。
機械学習を使用せずに、4 つの特徴量とペンギンの種の関係を判定することはできるのでしょうか。つまり、従来のプログラミング手法(多数の条件ステートメントを使用するなど)を使って、モデルを作成できるのでしょうか。おそらく、体重と嘴峰の測定値の関係を特定できるだけの長い時間を費やしてデータセットを分析すれば、特定の種に絞ることは可能かもしれません。これでは、複雑なデータセットでは不可能でなくとも困難極まりないことでしょう。適した機械学習アプローチであれば、ユーザーに代わってモデルを判定することができます。代表的なサンプルを適確な機械学習モデルタイプに十分にフィードすれば、プログラムによって関係を見つけ出すことができます。
モデルの選択
次に、トレーニングするモデルの種類を選択する必要があります。選択できる種類は多数あり、最適な種類を 1 つ選ぶにはそれなりの経験が必要となります。このチュートリアルでは、ニューラルネットワークを使用して、ペンギンの分類問題を解決することにします。ニューラルネットワークは、特徴量とラベルの複雑な関係を見つけ出すことができます。非常に構造化されたグラフで、1 つ以上の非表示レイヤーで編成されており、各非表示レイヤーは 1 つ以上のニューロンで構成されています。ニューラルネットワークにはいくつかのカテゴリがありますが、このプログラムでは、Dense または全結合のニューラルネットワークを使用します。このネットワークでは、1 つのレイヤーのニューロンが前のレイヤーのすべてのユーロんから入力接続を受け取ります。たとえば、図 2 では、1 つの入力レイヤー、2 つの非表示レイヤー、および 1 つの出力レイヤーで構成される Dense ニューラルネットワークが示されています。
<table>
<tr><td> <img src="https
Step9: 活性化関数(activation function) は、そのレイヤーの各ノードの出力の形を決定します。この関数の非線形性は重要であり、それがなければモデルは 1層しかないものと等価になってしまいます。利用可能な活性化関数 はたくさんありますが、隠れ層では ReLU が一般的です。
理想的な隠れ層の数やニューロンの数は問題やデータセットによって異なります。機械学習のさまざまな側面と同様に、ニューラルネットワークの最良の形を選択するには、知識と経験の両方が必要です。経験則から、一般的には隠れ層やニューロンの数を増やすとより強力なモデルを作ることができますが、効果的に訓練を行うためにより多くのデータを必要とします。
モデルを使用する
それでは、このモデルが特徴量のバッチに対して何を行うかを見てみましょう。
Step10: ご覧のように、サンプルのそれぞれは、各クラスの ロジット(logit) 値を返します。
これらのロジット値を各クラスの確率に変換するためには、 softmax 関数を使用します。
Step11: クラスに渡って tf.math.argmax を取ると、クラスのインデックスの予測を得られますが、モデルはまだトレーニングされていないため、これは良い予測ではありません。
Step12: モデルの訓練
訓練(Training) は、機械学習において、モデルが徐々に最適化されていく、あるいはモデルがデータセットを学習する段階です。目的は、見たことのないデータについて予測を行うため、訓練用データセットの構造を十分に学習することです。訓練用データセットを学習しすぎると、予測は見たことのあるデータに対してしか有効ではなく、一般化できません。この問題は 過学習(overfitting) と呼ばれ、問題の解き方を理解するのではなく答えを丸暗記するようなものです。
ペンギンの分類問題は、教師あり機械学習の例であり、モデルはラベルを含むサンプルからトレーニングされています。サンプルにラベルを含まない場合は、教師なし機械学習と呼ばれ、モデルは通常、特徴量からパターンを見つけ出します。
損失と勾配関数を定義する
トレーニングと評価の段階では、モデルの損失を計算する必要があります。これは、モデルの予測がどれくらい目標から外れているかを測定するものです。言い換えると、モデルのパフォーマンスがどれくらい劣っているかを示します。この値を最小化または最適化することが望まれます。
モデルは、モデルのクラスの確率予測と目標のラベルを取り、サンプル間の平均的な損失を返す tf.keras.losses.SparseCategoricalCrossentropy 関数を使用して損失を計算します。
Step13: tf.GradientTape コンテキストを使って、モデルを最適化する際に使われる 勾配(gradients) を計算しましょう。
Step14: オプティマイザの作成
オプティマイザは、loss 関数を最小化するために、計算された勾配をモデルのパラメータに適用します。損失関数は、曲面(図 3 を参照)として考えることができ、その周辺を探りながら最低ポイントを見つけることができます。勾配は最も急な上昇に向かってポイントするため、逆方向に進んで曲面を下方向に移動します。バッチごとに損失と勾配を対話的に計算することで、トレーニング中にモデルの調整を行うことができます。モデルは徐々に、重みとバイアスの最適な組み合わせを見つけて損失を最小化できるようになります。損失が低いほど、モデルの予測が最適化されます。
<table>
<tr><td> <img src="https
Step15: 次に、このオブジェクトを使用して、1 つの最適化ステップを計算します。
Step16: 訓練ループ
すべての部品が揃ったので、モデルの訓練ができるようになりました。訓練ループは、モデルにデータセットのサンプルを供給し、モデルがよりよい予測を行えるようにします。下記のコードブロックは、この訓練のステップを構成します。
epoch(エポック) をひとつずつ繰り返します。エポックとは、データセットをひととおり処理するということです。
エポック内では、訓練用の Dataset(データセット) のサンプルひとつずつから、その features(特徴量) (x) と label(ラベル) (y) を取り出して繰り返し処理します。
サンプルの特徴量を使って予測を行い、ラベルと比較します。予測の不正確度を測定し、それを使ってモデルの損失と勾配を計算します。
optimizer を使って、モデルのパラメータを更新します。
可視化のためにいくつかの統計量を記録します。
これをエポックごとに繰り返します。
num_epochs 変数は、データセットコレクションをループする回数です。以下のコードでは、num_epochs は 201 に設定されているため、このトレーニングループは 201 回実行します。直感に反し、モデルをより長くトレーニングしても、モデルがさらに最適化されることは保証されません。num_epochs は、ユーザーが調整できるハイパーパラメータです。通常、適切な数値を選択するには、経験と実験の両方が必要です。
Step17: または、組み込みの Keras Model.fit(ds_train_batch) メソッドを使用して、モデルをトレーニングすることもできます。
時間の経過に対する損失関数の可視化
モデルのトレーニングの進行状況を出力することは役立ちますが、TensorFlow に同梱された TensorBoard という可視化とメトリクスツールを使って進行状況を可視化することもできます。この単純な例では、matplotlib モジュールを使用して基本的なグラフを作成できます。
これらのグラフを解釈するには経験が必要ですが、一般的に、損失の減少と精度の上昇に注目できます。
Step18: モデルの有効性評価
モデルがトレーニングが完了したため、パフォーマンスの統計を取得できるようになりました。
評価とは、モデルがどれくらい効果的に予測を立てられるかを判定することです。ペンギンの分類においてモデルの有効性を判定するには、測定値をモデルに渡し、それが表すペンギンの種をモデルに問います。次に、モデルの予測を実際のラベルと比較します。たとえば、入力サンプルの半数で正しい種を選択したモデルであれば、その精度は 0.5 となります。図 4 には、わずかに有効性の高いモデルが示されており、80% の精度で、5 回の予測の内 4 回が正解となっています。
<table cellpadding="8" border="0">
<colgroup>
<col span="4">
<col span="1" bgcolor="lightblue">
<col span="1" bgcolor="lightgreen">
</colgroup>
<tr bgcolor="lightgray">
<th colspan="4">サンプルの特徴量</th>
<th colspan="1">ラベル</th>
<th colspan="1">モデルの予測値</th>
</tr>
<tr>
<td>5.9</td>
<td>3.0</td>
<td>4.3</td>
<td>1.5</td>
<td align="center">1</td>
<td align="center">1</td>
</tr>
<tr>
<td>6.9</td>
<td>3.1</td>
<td>5.4</td>
<td>2.1</td>
<td align="center">2</td>
<td align="center">2</td>
</tr>
<tr>
<td>5.1</td>
<td>3.3</td>
<td>1.7</td>
<td>0.5</td>
<td align="center">0</td>
<td align="center">0</td>
</tr>
<tr>
<td>6.0</td> <td>3.4</td> <td>4.5</td> <td>1.6</td> <td align="center">1</td>
<td align="center" bgcolor="red">2</td>
</tr>
<tr>
<td>5.5</td>
<td>2.5</td>
<td>4.0</td>
<td>1.3</td>
<td align="center">1</td>
<td align="center">1</td>
</tr>
<tr><td align="center" colspan="6"> <b>図 4.</b> 80% 正確なペンギンの分類器<br>
</td></tr>
</table>
テストセットをセットアップする
モデルの評価はモデルの訓練と同様です。もっとも大きな違いは、サンプルが訓練用データセットではなくテスト用データセット(test set) からのものであるという点です。モデルの有効性を正しく評価するには、モデルの評価に使うサンプルは訓練用データセットのものとは違うものでなければなりません。
penguin データセットには、別途テストデータセットが用意されていないため、当然、前述のデータセットのダウンロードセクションのデータセットにもテストデータセットはありません。そこで、元のデータセットをテストデータセットとトレーニングデータセットに分割します。評価には、ds_test_batch データセットを使用してください。
テスト用データセットでのモデルの評価
トレーニングの段階とは異なり、このモデルはテストデータの 1 つのエポックしか評価しません。次のコードはテストセットの各サンプルを反復し、モデルの予測を実際のラベルに比較します。この比較は、テストセット全体におけるモデルの精度を測定するために使用されます。
Step19: また、model.evaluate(ds_test, return_dict=True) Keras 関数を使用して、テストデータセットの精度情報を取得することもできます。
たとえば、最後のバッチを調べて、モデルの予測が通常正しい予測であることを観察することができます。
Step20: 訓練済みモデルを使った予測
モデルをトレーニングし、ペンギンの種を分類する上でモデルが良好であることを「証明」しました(ただし、完璧ではありません)。では、トレーニング済みのモデルを使用して、ラベルなしのサンプル、つまりラベルのない特徴量を含むサンプルで予測を立ててみましょう。
実際には、ラベルなしのサンプルは、アプリ、CSV ファイル、データフィードといったさまざまなソースから取得される場合がありますが、このチュートリアルでは、ラベルなしのサンプルを手動で提供して、それぞれのラベルを予測することにします。ラベル番号は、次のように指定されていることを思い出してください。
0 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
!pip install -q tfds-nightly
Explanation: カスタム訓練:ウォークスルー
<table class="tfo-notebook-buttons" align="left">
<td> <a target="_blank" href="https://www.tensorflow.org/tutorials/customization/custom_training_walkthrough"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png"> TensorFlow.org で表示</a>
</td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/customization/custom_training_walkthrough.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Google Colab で実行</a>
</td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/customization/custom_training_walkthrough.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png"> GitHub でソースを表示</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/customization/custom_training_walkthrough.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a></td>
</table>
このチュートリアルでは、カスタムトレーニングループを使って機械学習モデルをトレーニングし、ペンギンを種類別に分類する方法を説明します。このノートブックでは、TensorFlow を使用して、次の項目を達成します。
データセットをインポートする
単純な線形モデルを構築する
モデルをトレーニングする
モデルの有効性を評価する
トレーニングされたモデルを使用して予測を立てる
TensorFlow プログラミング
このチュートリアルでは、次の TensorFlow プログラミングタスクを実演しています。
TensorFlow Datasets API を使ってデータをインポートする
Keras API を使ってモデルとレイヤーを構築する
ペンギンの分類の問題
鳥類学者が、発見したペンギンを自動的に分類する方法を探していると仮定しましょう。機械学習では、ペンギンを静的に分類するためのアルゴリズムが多数用意されています。たとえば、高度な機械学習プログラムでは、写真を基にペンギンを分類できるものもあります。このチュートリアルで作成するモデルは、これよりも少しシンプルで、体重、フリッパーの長さ、くちばし、特に 嘴峰(しほう)の長さと幅に基づいてペンギンを分類します。
ペンギンには 18 種ありますが、このチュートリアルでは次の 3 種のみを分類してみることにしましょう。
ヒゲペンギン
ジェンツーペンギン
アデリーペンギン
<table>
<tr><td> <img src="https://www.tensorflow.org/images/iris_three_species.jpg" alt="Petal geometry compared for three iris species: Iris setosa, Iris virginica, and Iris versicolor"> </td></tr>
<tr><td align="center">
<b>図 1.</b> <a href="https://en.wikipedia.org/wiki/Chinstrap_penguin">ヒゲペンギン</a>、<a href="https://en.wikipedia.org/wiki/Gentoo_penguin">ジェンツー</a>、および<a href="https://en.wikipedia.org/wiki/Ad%C3%A9lie_penguin">アデリー</a>ペンギン(イラスト: @allison_horst, CC BY-SA 2.0)。<br>
</td></tr>
</table>
幸いにも、体重、フリッパーの長さ、くちばしの測定値とその他のデータで含む334 羽のペンギンのデータセットが調査チームによって既に作成されて共有されています。このデータセットは、penguins TensorFlow Dataset としても提供されています。
セットアップ
penguis データセットに使用する tfds-nightly パッケージをインストールします。tfds-nightly パッケージは毎晩リリースされる TensorFlow Datasets(TFDS)のバージョンです。TFDS の詳細については、TensorFlow Datasets の概要をご覧ください。
End of explanation
import os
import tensorflow as tf
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt
print("TensorFlow version: {}".format(tf.__version__))
print("TensorFlow Datasets version: ",tfds.__version__)
Explanation: 次に、Colab メニューから Runtime > Restart Runtime を選択して、Colab ランタイムを再起動します。
ランタイムを再起動せずに、チュートリアルを先に進めないでください。
TensorFlow と他に必要な Python モジュールをインポートします。
End of explanation
ds_preview, info = tfds.load('penguins/simple', split='train', with_info=True)
df = tfds.as_dataframe(ds_preview.take(5), info)
print(df)
print(info.features)
Explanation: データセットをインポートする
デフォルトの penguins/processed TensorFlow Dataset はすでにクリーニングされて正規化が済んでおり、モデルを構築できる準備が整っています。processed データをダウンロードする前に、簡易バージョンをプレビューして、元のペンギン調査データを理解しておきましょう。
データをプレビューする
TensorFlow Datasets tdfs.load メソッドを使用して、penguins データセットの簡易バージョン(penguins/simple)をダウンロードします。このデータセットには 344 件のデータレコードが存在します。最初の 5 件のレコードを DataFrame オブジェクトに抽出し、このデータセットのサンプルの値を調べます。
End of explanation
class_names = ['Adélie', 'Chinstrap', 'Gentoo']
Explanation: 番号付きの行がデータレコードで、行ごとに 1 つのサンプルが含まれます。
最初の 6 つのフィールドは、サンプルの特徴づける特徴量です。ここでは、ペンギンの測定値を表す数字が含まれています。
最後の列はラベルです。予測しようとしている値がこれです。このデータセットでは、ペンギンの種名に対応する 0、1、または 2 の整数が示されます。
このデータセットでは、ペンギンの種のラベルを数値で表現することにより、構築するモデルで扱いやすくしています。これらの数値は、次のペンギンの種に対応しています。
0: アデリーペンギン
1: ヒゲペンギン
2: ジェンツーペンギン
この順序で、ペンギンの種名を含むリストを作成します。このリストは、分類モデルの出力を解釈するために使用します。
End of explanation
ds_split, info = tfds.load("penguins/processed", split=['train[:20%]', 'train[20%:]'], as_supervised=True, with_info=True)
ds_test = ds_split[0]
ds_train = ds_split[1]
assert isinstance(ds_test, tf.data.Dataset)
print(info.features)
df_test = tfds.as_dataframe(ds_test.take(5), info)
print("Test dataset sample: ")
print(df_test)
df_train = tfds.as_dataframe(ds_train.take(5), info)
print("Train dataset sample: ")
print(df_train)
ds_train_batch = ds_train.batch(32)
Explanation: 特徴量とラベルについての詳細は、機械学習クラッシュコースの ML 用語セクションをご覧ください。
前処理済みのデータセットをダウンロードする
次に、tfds.load メソッドを使用して、前処理済みの penguins データセット(penguins/processed)をダウンロードします。すると、tf.data.Dataset オブジェクトのリストが返されます。penguins/processed データセットには独自のテストセットは用意されていないため、80:20 分割で、トレーニングセットとテストセットにデータセットをスライスします。テストデータセットは、後でモデルを検証する際に使用します。
End of explanation
features, labels = next(iter(ds_train_batch))
print(features)
print(labels)
Explanation: このバージョンのデータセットは処理済みであるため、データが 4 つの正規化された特徴量と種ラベルに縮小されていることに注意してください。このフォーマットでは、データを素早く使用してモデルをトレーニングできるようになっているため、移行の処理は必要ありません。
End of explanation
plt.scatter(features[:,0],
features[:,2],
c=labels,
cmap='viridis')
plt.xlabel("Body Mass")
plt.ylabel("Culmen Length")
plt.show()
Explanation: バッチのいくつかの特徴量をプロットして、クラスターを可視化できます。
End of explanation
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation=tf.nn.relu, input_shape=(4,)), # input shape required
tf.keras.layers.Dense(10, activation=tf.nn.relu),
tf.keras.layers.Dense(3)
])
Explanation: 単純な線形モデルを構築する
なぜモデルか?
モデルは、特徴量とラベルの関係です。ペンギンの分類問題においては、このモデルは体重、フリッパー、および嘴峰の測定値、および予測されるペンギンの種の関係を定義しています。単純なモデルは、数行の代数で記述することは可能ですが、複雑な機械学習モデルにはパラメータの数も多く、要約が困難です。
機械学習を使用せずに、4 つの特徴量とペンギンの種の関係を判定することはできるのでしょうか。つまり、従来のプログラミング手法(多数の条件ステートメントを使用するなど)を使って、モデルを作成できるのでしょうか。おそらく、体重と嘴峰の測定値の関係を特定できるだけの長い時間を費やしてデータセットを分析すれば、特定の種に絞ることは可能かもしれません。これでは、複雑なデータセットでは不可能でなくとも困難極まりないことでしょう。適した機械学習アプローチであれば、ユーザーに代わってモデルを判定することができます。代表的なサンプルを適確な機械学習モデルタイプに十分にフィードすれば、プログラムによって関係を見つけ出すことができます。
モデルの選択
次に、トレーニングするモデルの種類を選択する必要があります。選択できる種類は多数あり、最適な種類を 1 つ選ぶにはそれなりの経験が必要となります。このチュートリアルでは、ニューラルネットワークを使用して、ペンギンの分類問題を解決することにします。ニューラルネットワークは、特徴量とラベルの複雑な関係を見つけ出すことができます。非常に構造化されたグラフで、1 つ以上の非表示レイヤーで編成されており、各非表示レイヤーは 1 つ以上のニューロンで構成されています。ニューラルネットワークにはいくつかのカテゴリがありますが、このプログラムでは、Dense または全結合のニューラルネットワークを使用します。このネットワークでは、1 つのレイヤーのニューロンが前のレイヤーのすべてのユーロんから入力接続を受け取ります。たとえば、図 2 では、1 つの入力レイヤー、2 つの非表示レイヤー、および 1 つの出力レイヤーで構成される Dense ニューラルネットワークが示されています。
<table>
<tr><td> <img src="https://www.tensorflow.org/images/custom_estimators/full_network.png" alt="A diagram of the network architecture: Inputs, 2 hidden layers, and outputs"> </td></tr>
<tr><td align="center"> <b>図2.</b> 特徴量と隠れ層、予測をもつニューラルネットワーク<br>{nbsp} </td></tr>
</table>
図 2 のモデルをトレーニングし、ラベルなしのサンプルをフィードすると、このペンギンが特定のペンギン種であるという尤度によって 3 つの予測が生成されます。この予測は推論と呼ばれます。この例では、出力予測の和は 1.0 です。図 2 の場合、この予測は、アデリーは 0.02、ヒゲペンギンは 0.95、ジェンツーは 0.03 となります。つまり、モデルは、95% の確率で、ラベル無しのサンプルペンギンはヒゲペンギンであると予測していることになります。
Keras を使ったモデル構築
TensorFlow の tf.keras API は、モデルと層を作成するためのおすすめの方法です。Keras がすべてを結びつけるという複雑さを引き受けてくれるため、モデルや実験の構築がかんたんになります。
tf.keras.Sequential モデルは、レイヤーの線形スタックです。コンストラクタはレイヤーインスタンスのリスト(この場合は 2 つの tf.keras.layers.Dense レイヤー、各レイヤーの 10 個のノード、ラベルの予測である 3 つのノードを持つ出力レイヤー)を取ります。最初のレイヤーの input_shape パラメータはデータセットの特徴量の数に対応しており、必須です。
End of explanation
predictions = model(features)
predictions[:5]
Explanation: 活性化関数(activation function) は、そのレイヤーの各ノードの出力の形を決定します。この関数の非線形性は重要であり、それがなければモデルは 1層しかないものと等価になってしまいます。利用可能な活性化関数 はたくさんありますが、隠れ層では ReLU が一般的です。
理想的な隠れ層の数やニューロンの数は問題やデータセットによって異なります。機械学習のさまざまな側面と同様に、ニューラルネットワークの最良の形を選択するには、知識と経験の両方が必要です。経験則から、一般的には隠れ層やニューロンの数を増やすとより強力なモデルを作ることができますが、効果的に訓練を行うためにより多くのデータを必要とします。
モデルを使用する
それでは、このモデルが特徴量のバッチに対して何を行うかを見てみましょう。
End of explanation
tf.nn.softmax(predictions[:5])
Explanation: ご覧のように、サンプルのそれぞれは、各クラスの ロジット(logit) 値を返します。
これらのロジット値を各クラスの確率に変換するためには、 softmax 関数を使用します。
End of explanation
print("Prediction: {}".format(tf.argmax(predictions, axis=1)))
print(" Labels: {}".format(labels))
Explanation: クラスに渡って tf.math.argmax を取ると、クラスのインデックスの予測を得られますが、モデルはまだトレーニングされていないため、これは良い予測ではありません。
End of explanation
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
def loss(model, x, y, training):
# training=training is needed only if there are layers with different
# behavior during training versus inference (e.g. Dropout).
y_ = model(x, training=training)
return loss_object(y_true=y, y_pred=y_)
l = loss(model, features, labels, training=False)
print("Loss test: {}".format(l))
Explanation: Train the model
Training is the stage of machine learning when the model is gradually optimized, or when the model learns the dataset. The goal is to learn enough about the structure of the training dataset to make predictions about unseen data. If you learn too much about the training dataset, the predictions only work for the data the model has already seen and will not generalize. This problem is called overfitting; it is like memorizing the answers instead of understanding how to solve a problem.
The penguin classification problem is an example of supervised machine learning: the model is trained from examples that contain labels. In unsupervised machine learning, the examples don't contain labels; instead, the model typically finds patterns among the features.
Define the loss and gradients function
Both the training and evaluation stages need to calculate the model's loss. This measures how far off a model's predictions are from the desired label; in other words, how badly the model is performing. You want to minimize, or optimize, this value.
The model calculates its loss using the tf.keras.losses.SparseCategoricalCrossentropy function, which takes the model's class probability predictions and the desired label, and returns the average loss across the examples.
End of explanation
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets, training=True)
return loss_value, tape.gradient(loss_value, model.trainable_variables)
Explanation: Use the tf.GradientTape context to calculate the gradients used to optimize your model:
End of explanation
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)
Explanation: Create an optimizer
An optimizer applies the computed gradients to the model's parameters to minimize the loss function. You can think of the loss function as a curved surface (see Figure 3) whose lowest point you want to find by walking around. The gradients point in the direction of steepest ascent, so you travel the opposite way and move down the hill. By iteratively calculating the loss and gradient for each batch, you adjust the model during training. Gradually, the model finds the best combination of weights and bias to minimize the loss, and the lower the loss, the better the model's predictions.
<table>
<tr><td> <img src="https://cs231n.github.io/assets/nn3/opt1.gif" width="70%" alt="Optimization algorithms visualized over time in 3D space."> </td></tr>
<tr><td align="center"> <b>Figure 3.</b> Optimization algorithms visualized over time in 3D space.<br>(Source: <a href="http://cs231n.github.io/neural-networks-3/">Stanford class CS231n</a>, MIT License, Image credit: <a href="https://twitter.com/alecrad">Alec Radford</a>) </td></tr>
</table>
TensorFlow has many optimization algorithms available for training. This tutorial uses tf.keras.optimizers.SGD, which implements the stochastic gradient descent (SGD) algorithm. The learning_rate parameter sets the step size to take for each iteration down the hill; it is a hyperparameter that you will commonly tune to achieve better results.
Instantiate the optimizer with a learning rate of 0.01, the scalar value that the gradients are multiplied by at each training iteration:
End of explanation
loss_value, grads = grad(model, features, labels)
print("Step: {}, Initial Loss: {}".format(optimizer.iterations.numpy(),
loss_value.numpy()))
optimizer.apply_gradients(zip(grads, model.trainable_variables))
print("Step: {}, Loss: {}".format(optimizer.iterations.numpy(),
loss(model, features, labels, training=True).numpy()))
Explanation: Then use this object to calculate a single optimization step:
End of explanation
## Note: Rerunning this cell uses the same model parameters
# Keep results for plotting
train_loss_results = []
train_accuracy_results = []
num_epochs = 201
for epoch in range(num_epochs):
epoch_loss_avg = tf.keras.metrics.Mean()
epoch_accuracy = tf.keras.metrics.SparseCategoricalAccuracy()
# Training loop - using batches of 32
for x, y in ds_train_batch:
# Optimize the model
loss_value, grads = grad(model, x, y)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
# Track progress
epoch_loss_avg.update_state(loss_value) # Add current batch loss
# Compare predicted label to actual label
# training=True is needed only if there are layers with different
# behavior during training versus inference (e.g. Dropout).
epoch_accuracy.update_state(y, model(x, training=True))
# End epoch
train_loss_results.append(epoch_loss_avg.result())
train_accuracy_results.append(epoch_accuracy.result())
if epoch % 50 == 0:
print("Epoch {:03d}: Loss: {:.3f}, Accuracy: {:.3%}".format(epoch,
epoch_loss_avg.result(),
epoch_accuracy.result()))
Explanation: Training loop
With all the pieces in place, the model is ready for training. A training loop feeds the dataset examples into the model to help it make better predictions. The following code block sets up these training steps:
Iterate over each epoch. An epoch is one pass through the dataset.
Within an epoch, iterate over each example in the training Dataset, grabbing its features (x) and label (y).
Using the example's features, make a prediction and compare it with the label. Measure the inaccuracy of the prediction and use that to calculate the model's loss and gradients.
Use the optimizer to update the model's parameters.
Keep track of some statistics for visualization.
Repeat for each epoch.
The num_epochs variable is the number of times to loop over the dataset collection. In the code below, num_epochs is set to 201, so this training loop runs 201 times. Counter-intuitively, training a model longer does not guarantee a better model. num_epochs is a hyperparameter that you can tune; choosing the right number usually requires both experience and experimentation.
End of explanation
fig, axes = plt.subplots(2, sharex=True, figsize=(12, 8))
fig.suptitle('Training Metrics')
axes[0].set_ylabel("Loss", fontsize=14)
axes[0].plot(train_loss_results)
axes[1].set_ylabel("Accuracy", fontsize=14)
axes[1].set_xlabel("Epoch", fontsize=14)
axes[1].plot(train_accuracy_results)
plt.show()
Explanation: Alternatively, you could use the built-in Keras Model.fit(ds_train_batch) method to train your model.
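A minimal sketch of that alternative (this block is not part of the original notebook; it assumes the ds_train_batch dataset, the loss, and the optimizer settings defined earlier):
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.fit(ds_train_batch, epochs=num_epochs)  # replaces the manual training loop above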
Visualize the loss function over time
While it is helpful to print out the model's training progress, you can also visualize it with TensorBoard, a visualization and metrics tool that is packaged with TensorFlow. For this simple example, you will create basic charts using the matplotlib module.
Interpreting these charts takes some experience, but in general you want to see the loss decrease and the accuracy increase.
End of explanation
test_accuracy = tf.keras.metrics.Accuracy()
ds_test_batch = ds_test.batch(10)
for (x, y) in ds_test_batch:
# training=False is needed only if there are layers with different
# behavior during training versus inference (e.g. Dropout).
logits = model(x, training=False)
prediction = tf.argmax(logits, axis=1, output_type=tf.int64)
test_accuracy(prediction, y)
print("Test set accuracy: {:.3%}".format(test_accuracy.result()))
Explanation: Evaluate the model's effectiveness
Now that the model is trained, you can get some statistics on its performance.
Evaluating means determining how effectively the model makes predictions. To determine the model's effectiveness at penguin classification, pass some measurements to the model and ask it to predict which penguin species they represent, then compare the predictions against the actual labels. For example, a model that picked the correct species on half the input examples has an accuracy of 0.5. Figure 4 shows a slightly more effective model, getting 4 out of 5 predictions correct, for 80% accuracy:
<table cellpadding="8" border="0">
<colgroup>
<col span="4">
<col span="1" bgcolor="lightblue">
<col span="1" bgcolor="lightgreen">
</colgroup>
<tr bgcolor="lightgray">
<th colspan="4">サンプルの特徴量</th>
<th colspan="1">ラベル</th>
<th colspan="1">モデルの予測値</th>
</tr>
<tr>
<td>5.9</td>
<td>3.0</td>
<td>4.3</td>
<td>1.5</td>
<td align="center">1</td>
<td align="center">1</td>
</tr>
<tr>
<td>6.9</td>
<td>3.1</td>
<td>5.4</td>
<td>2.1</td>
<td align="center">2</td>
<td align="center">2</td>
</tr>
<tr>
<td>5.1</td>
<td>3.3</td>
<td>1.7</td>
<td>0.5</td>
<td align="center">0</td>
<td align="center">0</td>
</tr>
<tr>
<td>6.0</td> <td>3.4</td> <td>4.5</td> <td>1.6</td> <td align="center">1</td>
<td align="center" bgcolor="red">2</td>
</tr>
<tr>
<td>5.5</td>
<td>2.5</td>
<td>4.0</td>
<td>1.3</td>
<td align="center">1</td>
<td align="center">1</td>
</tr>
<tr><td align="center" colspan="6"> <b>図 4.</b> 80% 正確なペンギンの分類器<br>
</td></tr>
</table>
Set up the test set
Evaluating the model is similar to training the model. The biggest difference is that the examples come from a separate test set rather than the training set. To fairly assess a model's effectiveness, the examples used to evaluate it must be different from the examples used to train it.
The penguin dataset doesn't provide a separate test dataset, so the dataset downloaded earlier doesn't include one either. Instead, the original dataset was split into test and training datasets. Use the ds_test_batch dataset for the evaluation.
Evaluate the model on the test dataset
Unlike the training stage, the model only evaluates a single epoch of the test data. The following code iterates over each example in the test set and compares the model's prediction against the actual label. This comparison is used to measure the model's accuracy across the entire test set:
End of explanation
tf.stack([y,prediction],axis=1)
Explanation: You can also use the model.evaluate(ds_test, return_dict=True) Keras function to get accuracy information on your test dataset.
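A short hedged sketch of that call (not in the original notebook; it assumes the model has been compiled with a loss, for example as in the Model.fit alternative sketched earlier, and uses the batched ds_test_batch defined above):
metrics = model.evaluate(ds_test_batch, return_dict=True)
print(metrics)  # a dict of the compiled loss and metrics measured on the test set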
For example, you can inspect the last batch and observe that the model's predictions are usually correct:
End of explanation
predict_dataset = tf.convert_to_tensor([
[0.3, 0.8, 0.4, 0.5,],
[0.4, 0.1, 0.8, 0.5,],
[0.7, 0.9, 0.8, 0.4]
])
# training=False is needed only if there are layers with different
# behavior during training versus inference (e.g. Dropout).
predictions = model(predict_dataset, training=False)
for i, logits in enumerate(predictions):
class_idx = tf.argmax(logits).numpy()
p = tf.nn.softmax(logits)[class_idx]
name = class_names[class_idx]
print("Example {} prediction: {} ({:4.1f}%)".format(i, name, 100*p))
Explanation: Use the trained model to make predictions
You've trained a model and "proven" that it is good, but not perfect, at classifying penguin species. Now let's use the trained model to make some predictions on unlabeled examples; that is, on examples that contain features but no labels.
In real life, the unlabeled examples could come from many different sources, including apps, CSV files, and data feeds. For this tutorial, manually provide a few unlabeled examples and predict their labels. Recall that the label numbers are mapped as follows:
0: Adelie penguin
1: Chinstrap penguin
2: Gentoo penguin
End of explanation |
15,720 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AI Explanations
Step1: Run the following cell to create your Cloud Storage bucket if it does not already exist.
Step2: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, we create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
Step3: Import libraries
Import the libraries for this tutorial. This tutorial has been tested with TensorFlow versions 2.3.
Step4: Download and preprocess the data
In this section you'll download the data to train your model from a public GCS bucket. The original data is from the BigQuery datasets linked above. For your convenience, we've joined the London bike and NOAA weather tables, done some preprocessing, and provided a subset of that dataset here.
Step5: Read the data with Pandas
You'll use Pandas to read the data into a DataFrame and then do some additional pre-processing.
Step6: Next, you will separate the data into features ('data') and labels ('labels').
Step7: Split data into train and test sets
You'll split your data into train and test sets using an 80 / 20 train / test split.
Step8: Build, train, and evaluate our model with Keras
This section shows how to build, train, evaluate, and get local predictions from a model by using the Keras Sequential API. The model takes your 10 features as input and predicts the trip duration in minutes.
Step9: Create an input data pipeline with tf.data
Per best practices, we will use tf.Data to create our input data pipeline. Our data is all in an in-memory dataframe, so we will use tf.data.Dataset.from_tensor_slices to create our pipeline.
Step10: Train the model
Now we train the model. We will specify the number of epochs for which to train the model and tell the model how many steps to expect per epoch.
Step11: Evaluate the trained model locally
Step12: Export the model as a TF 2.x SavedModel
When using TensorFlow 2.x, you export the model as a SavedModel and load it into Cloud Storage.
Step13: Use TensorFlow's saved_model_cli to inspect the model's SignatureDef. We'll use this information when we deploy our model to AI Explanations in the next section.
Step14: Deploy the model to AI Explanations
In order to deploy the model to Explanations, you need to generate an explanations_metadata.json file and upload this to the Cloud Storage bucket with your SavedModel. Then you'll deploy the model using gcloud.
Prepare explanation metadata
In order to deploy this model to AI Explanations, you need to create an explanation_metadata.json file with information about your model inputs, outputs, and baseline. You can use the Explainable AI SDK to generate most of the fields.
The value for input_baselines tells the explanations service what the baseline input should be for your model. Here you're using the median for all of your input features. That means the baseline prediction for this model will be the trip duration your model predicts for the median of each feature in your dataset.
Since this model accepts a single numpy array with all numerical features, you can optionally pass an index_feature_mapping list to AI Explanations to make the API response easier to parse. When you provide a list of feature names via this parameter, the service will return a key / value mapping of each feature with its corresponding attribution value.
Step15: Since this is a regression model (predicting a numerical value), the baseline prediction will be the same for every example we send to the model. If this were instead a classification model, each class would have a different baseline prediction.
Create the model
Step16: Create the model version
Creating the version will take ~5-10 minutes. Note that your first deploy could take longer.
Step17: Get predictions and explanations
Now that your model is deployed, you can use the AI Platform Prediction API to get feature attributions. You'll pass it a single test example here and see which features were most important in the model's prediction. Here you'll use the Explainable AI SDK to get your prediction and explanation. You can also use gcloud.
Format your explanation request
To make your AI Explanations request, you need to create a JSON object with your test data for prediction.
Step18: Send the explain request
You can use the Explainable AI SDK to send explanation requests to your deployed model.
Step19: Understanding the explanations response
First, let's look at the trip duration your model predicted and compare it to the actual value.
Step20: Next let's look at the feature attributions for this particular example. Positive attribution values mean a particular feature pushed your model prediction up by that amount, and vice versa for negative attribution values.
Step21: Check your explanations and baselines
To better make sense of the feature attributions you're getting, you should compare them with your model's baseline. In most cases, the sum of your attribution values + the baseline should be very close to your model's predicted value for each input. Also note that for regression models, the baseline_score returned from AI Explanations will be the same for each example sent to your model. For classification models, each class will have its own baseline.
In this section you'll send 10 test examples to your model for prediction in order to compare the feature attributions with the baseline. Then you'll run each test example's attributions through two sanity checks in the sanity_check_explanations method.
Step22: In the function below you perform two sanity checks for models using Integrated Gradient (IG) explanations and one sanity check for models using Sampled Shapley.
Step23: Understanding AI Explanations with the What-If Tool
In this section you'll use the What-If Tool to better understand how your model is making predictions. See the cell below the What-if Tool for visualization ideas.
The What-If-Tool expects data with keys for each feature name, but your model expects a flat list. The functions below convert data to the format required by the What-If Tool.
Step24: What-If Tool visualization ideas
On the x-axis, you'll see the predicted trip duration for the test inputs you passed to the What-If Tool. Each circle represents one of your test examples. If you click on a circle, you'll be able to see the feature values for that example along with the attribution values for each feature.
You can edit individual feature values and re-run prediction directly within the What-If Tool. Try changing distance, click Run inference and see how that affects the model's prediction
You can sort features for an individual example by their attribution value, try changing the sort from the attributions dropdown
The What-If Tool also lets you create custom visualizations. You can do this by changing the values in the dropdown menus above the scatter plot visualization. For example, you can sort data points by inference error, or by their similarity to a single datapoint.
Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Alternatively, you can clean up individual resources by running the following
commands | Python Code:
import os
PROJECT_ID = "" # TODO: your PROJECT_ID here.
os.environ["PROJECT_ID"] = PROJECT_ID
BUCKET_NAME = "" # TODO: your BUCKET_NAME here.
REGION = "us-central1"
os.environ['BUCKET_NAME'] = BUCKET_NAME
os.environ['REGION'] = REGION
Explanation: AI Explanations: Explaining a tabular data model
Overview
In this tutorial we will perform the following steps:
Build and train a Keras model.
Export the Keras model as a TF 2.x SavedModel and deploy the model on Cloud AI Platform.
Compute explanations for our model's predictions using Explainable AI on Cloud AI Platform.
Dataset
The dataset used for this tutorial was created from BigQuery Public Datasets: the London bike rentals data joined with NOAA weather data.
Objective
The goal is to train a model using the Keras Sequential API that predicts the duration of a bike trip given features such as the day of the week, the distance covered, and the weather conditions at the start of the trip.
This tutorial focuses more on deploying the model to AI Explanations than on the design of the model itself. We will be using preprocessed data for this lab. If you wish to know more about the data and how it was preprocessed please see this notebook.
Setup
End of explanation
%%bash
exists=$(gsutil ls -d | grep -w gs://${BUCKET_NAME}/)
if [ -n "$exists" ]; then
echo -e "Bucket gs://${BUCKET_NAME} already exists."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET_NAME}
echo -e "\nHere are your current buckets:"
gsutil ls
fi
Explanation: Run the following cell to create your Cloud Storage bucket if it does not already exist.
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, we create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
import tensorflow as tf
import pandas as pd
# should be >= 2.1
print("Tensorflow version " + tf.__version__)
if tf.__version__ < "2.1":
raise Exception("TF 2.1 or greater is required")
!pip install explainable-ai-sdk
import explainable_ai_sdk
Explanation: Import libraries
Import the libraries for this tutorial. This tutorial has been tested with TensorFlow versions 2.3.
End of explanation
# Copy the data to your notebook instance
! gsutil cp 'gs://explanations_sample_data/bike-data.csv' ./
Explanation: Download and preprocess the data
In this section you'll download the data to train your model from a public GCS bucket. The original data is from the BigQuery datasets linked above. For your convenience, we've joined the London bike and NOAA weather tables, done some preprocessing, and provided a subset of that dataset here.
End of explanation
data = pd.read_csv('bike-data.csv')
# Shuffle the data
data = data.sample(frac=1, random_state=2)
# Drop rows with null values
data = data[data['wdsp'] != 999.9]
data = data[data['dewp'] != 9999.9]
# Rename some columns for readability
data = data.rename(columns={'day_of_week': 'weekday'})
data = data.rename(columns={'max': 'max_temp'})
data = data.rename(columns={'dewp': 'dew_point'})
# Drop columns you won't use to train this model
data = data.drop(columns=['start_station_name', 'end_station_name', 'bike_id', 'snow_ice_pellets'])
# Convert trip duration from seconds to minutes so it's easier to understand
data['duration'] = data['duration'].apply(lambda x: float(x / 60))
# Preview the first 5 rows of training data
data.head()
Explanation: Read the data with Pandas
You'll use Pandas to read the data into a DataFrame and then do some additional pre-processing.
End of explanation
# Save duration to its own DataFrame and remove it from the original DataFrame
labels = data['duration']
data = data.drop(columns=['duration'])
Explanation: Next, you will separate the data into features ('data') and labels ('labels').
End of explanation
# Use 80/20 train/test split
train_size = int(len(data) * .8)
print("Train size: %d" % train_size)
print("Test size: %d" % (len(data) - train_size))
# Split your data into train and test sets
train_data = data[:train_size]
train_labels = labels[:train_size]
test_data = data[train_size:]
test_labels = labels[train_size:]
Explanation: Split data into train and test sets
You'll split your data into train and test sets using an 80 / 20 train / test split.
End of explanation
# Build your model
model = tf.keras.Sequential(name="bike_predict")
model.add(tf.keras.layers.Dense(64, input_dim=len(train_data.iloc[0]), activation='relu'))
model.add(tf.keras.layers.Dense(32, activation='relu'))
model.add(tf.keras.layers.Dense(1))
# Compile the model and see a summary
optimizer = tf.keras.optimizers.Adam(0.001)
model.compile(loss='mean_squared_logarithmic_error', optimizer=optimizer)
model.summary()
Explanation: Build, train, and evaluate our model with Keras
This section shows how to build, train, evaluate, and get local predictions from a model by using the Keras Sequential API. The model takes your 10 features as input and predicts the trip duration in minutes.
End of explanation
batch_size = 256
epochs = 3
input_train = tf.data.Dataset.from_tensor_slices(train_data)
output_train = tf.data.Dataset.from_tensor_slices(train_labels)
input_train = input_train.batch(batch_size).repeat()
output_train = output_train.batch(batch_size).repeat()
train_dataset = tf.data.Dataset.zip((input_train, output_train))
Explanation: Create an input data pipeline with tf.data
Per best practices, we will use tf.Data to create our input data pipeline. Our data is all in an in-memory dataframe, so we will use tf.data.Dataset.from_tensor_slices to create our pipeline.
End of explanation
# This will take about a minute to run
# To keep training time short, you're not using the full dataset
model.fit(train_dataset, steps_per_epoch=train_size // batch_size, epochs=epochs)
Explanation: Train the model
Now we train the model. We will specify the number of epochs for which to train the model and tell the model how many steps to expect per epoch.
End of explanation
# Run evaluation
results = model.evaluate(test_data, test_labels)
print(results)
# Send test instances to model for prediction
predict = model.predict(test_data[:5])
# Preview predictions on the first 5 examples from your test dataset
for i, val in enumerate(predict):
print('Predicted duration: {}'.format(round(val[0])))
print('Actual duration: {} \n'.format(test_labels.iloc[i]))
Explanation: Evaluate the trained model locally
End of explanation
export_path = 'gs://' + BUCKET_NAME + '/explanations/mymodel'
model.save(export_path)
print(export_path)
Explanation: Export the model as a TF 2.x SavedModel
When using TensorFlow 2.x, you export the model as a SavedModel and load it into Cloud Storage.
End of explanation
! saved_model_cli show --dir $export_path --all
Explanation: Use TensorFlow's saved_model_cli to inspect the model's SignatureDef. We'll use this information when we deploy our model to AI Explanations in the next section.
End of explanation
# Print the names of your tensors
print('Model input tensor: ', model.input.name)
print('Model output tensor: ', model.output.name)
from explainable_ai_sdk.metadata.tf.v2 import SavedModelMetadataBuilder
builder = SavedModelMetadataBuilder(export_path)
builder.set_numeric_metadata(
model.input.name.split(':')[0],
input_baselines=[train_data.median().values.tolist()],
index_feature_mapping=train_data.columns.tolist()
)
builder.save_metadata(export_path)
Explanation: Deploy the model to AI Explanations
In order to deploy the model to Explanations, you need to generate an explanations_metadata.json file and upload this to the Cloud Storage bucket with your SavedModel. Then you'll deploy the model using gcloud.
Prepare explanation metadata
In order to deploy this model to AI Explanations, you need to create an explanation_metadata.json file with information about your model inputs, outputs, and baseline. You can use the Explainable AI SDK to generate most of the fields.
The value for input_baselines tells the explanations service what the baseline input should be for your model. Here you're using the median for all of your input features. That means the baseline prediction for this model will be the trip duration your model predicts for the median of each feature in your dataset.
Since this model accepts a single numpy array with all numerical features, you can optionally pass an index_feature_mapping list to AI Explanations to make the API response easier to parse. When you provide a list of feature names via this parameter, the service will return a key / value mapping of each feature with its corresponding attribution value.
End of explanation
import datetime
MODEL = 'bike' + datetime.datetime.now().strftime("%d%m%Y%H%M%S")
# Create the model if it doesn't exist yet (you only need to run this once)
! gcloud ai-platform models create $MODEL --enable-logging --region=$REGION
Explanation: Since this is a regression model (predicting a numerical value), the baseline prediction will be the same for every example we send to the model. If this were instead a classification model, each class would have a different baseline prediction.
Create the model
End of explanation
# Each time you create a version the name should be unique
VERSION = 'v1'
# Create the version with gcloud
explain_method = 'integrated-gradients'
! gcloud beta ai-platform versions create $VERSION \
--model $MODEL \
--origin $export_path \
--runtime-version 2.1 \
--framework TENSORFLOW \
--python-version 3.7 \
--machine-type n1-standard-4 \
--explanation-method $explain_method \
--num-integral-steps 25 \
--region $REGION
# Make sure the model deployed correctly. State should be `READY` in the following log
! gcloud ai-platform versions describe $VERSION --model $MODEL --region $REGION
Explanation: Create the model version
Creating the version will take ~5-10 minutes. Note that your first deploy could take longer.
End of explanation
# Format data for prediction to your model
prediction_json = {model.input.name.split(':')[0]: test_data.iloc[0].values.tolist()}
Explanation: Get predictions and explanations
Now that your model is deployed, you can use the AI Platform Prediction API to get feature attributions. You'll pass it a single test example here and see which features were most important in the model's prediction. Here you'll use the Explainable AI SDK to get your prediction and explanation. You can also use gcloud.
Format your explanation request
To make your AI Explanations request, you need to create a JSON object with your test data for prediction.
End of explanation
remote_ig_model = explainable_ai_sdk.load_model_from_ai_platform(project=PROJECT_ID,
model=MODEL,
version=VERSION,
region=REGION)
ig_response = remote_ig_model.explain([prediction_json])
Explanation: Send the explain request
You can use the Explainable AI SDK to send explanation requests to your deployed model.
End of explanation
attr = ig_response[0].get_attribution()
predicted = round(attr.example_score, 2)
print('Predicted duration: ' + str(predicted) + ' minutes')
print('Actual duration: ' + str(test_labels.iloc[0]) + ' minutes')
Explanation: Understanding the explanations response
First, let's look at the trip duration your model predicted and compare it to the actual value.
End of explanation
ig_response[0].visualize_attributions()
Explanation: Next let's look at the feature attributions for this particular example. Positive attribution values mean a particular feature pushed your model prediction up by that amount, and vice versa for negative attribution values.
End of explanation
# Prepare 10 test examples to your model for prediction
pred_batch = []
for i in range(10):
pred_batch.append({model.input.name.split(':')[0]: test_data.iloc[i].values.tolist()})
test_response = remote_ig_model.explain(pred_batch)
Explanation: Check your explanations and baselines
To better make sense of the feature attributions you're getting, you should compare them with your model's baseline. In most cases, the sum of your attribution values + the baseline should be very close to your model's predicted value for each input. Also note that for regression models, the baseline_score returned from AI Explanations will be the same for each example sent to your model. For classification models, each class will have its own baseline.
In this section you'll send 10 test examples to your model for prediction in order to compare the feature attributions with the baseline. Then you'll run each test example's attributions through two sanity checks in the sanity_check_explanations method.
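As a quick hedged illustration of that consistency check (this snippet is not part of the original notebook; it reuses the test_response list and the same attribution fields used in the sanity-check function below):
attr = test_response[0].get_attribution()
approx = attr.baseline_score + sum(attr.post_processed_attributions.values())
print(approx, attr.example_score)  # the two values should be close for a well-behaved explanation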
End of explanation
def sanity_check_explanations(example, mean_tgt_value=None, variance_tgt_value=None):
passed_test = 0
total_test = 1
# `attributions` is a dict where keys are the feature names
# and values are the feature attributions for each feature
attr = example.get_attribution()
baseline_score = attr.baseline_score
# sum_with_baseline = np.sum(attribution_vals) + baseline_score
predicted_val = attr.example_score
# Sanity check 1
# The prediction at the input is equal to that at the baseline.
# Please use a different baseline. Some suggestions are: random input, training
# set mean.
if abs(predicted_val - baseline_score) <= 0.05:
print('Warning: example score and baseline score are too close.')
print('You might not get attributions.')
else:
passed_test += 1
# Sanity check 2 (only for models using Integrated Gradient explanations)
# Ideally, the sum of the integrated gradients must be equal to the difference
    # in the prediction probability at the input and baseline. Any discrepancy in
# these two values is due to the errors in approximating the integral.
if explain_method == 'integrated-gradients':
total_test += 1
want_integral = predicted_val - baseline_score
got_integral = sum(attr.post_processed_attributions.values())
if abs(want_integral - got_integral) / abs(want_integral) > 0.05:
print('Warning: Integral approximation error exceeds 5%.')
print('Please try increasing the number of integrated gradient steps.')
else:
passed_test += 1
print(passed_test, ' out of ', total_test, ' sanity checks passed.')
for response in test_response:
sanity_check_explanations(response)
Explanation: In the function below you perform two sanity checks for models using Integrated Gradient (IG) explanations and one sanity check for models using Sampled Shapley.
End of explanation
# This is the number of data points you'll send to the What-if Tool
WHAT_IF_TOOL_SIZE = 500
from witwidget.notebook.visualization import WitWidget, WitConfigBuilder
# `feature_names` orders the values for the model input; it was not defined in this
# snippet, so derive it from the training data (same column order used for training).
feature_names = train_data.columns.tolist()
def create_list(ex_dict):
new_list = []
for i in feature_names:
new_list.append(ex_dict[i])
return new_list
def example_dict_to_input(example_dict):
return {'dense_input': create_list(example_dict)}
from collections import OrderedDict
wit_data = test_data.iloc[:WHAT_IF_TOOL_SIZE].copy()
wit_data['duration'] = test_labels[:WHAT_IF_TOOL_SIZE]
wit_data_dict = wit_data.to_dict(orient='records', into=OrderedDict)
config_builder = WitConfigBuilder(
wit_data_dict
).set_ai_platform_model(
PROJECT_ID,
MODEL,
VERSION,
adjust_example=example_dict_to_input
).set_target_feature('duration').set_model_type('regression')
WitWidget(config_builder)
Explanation: Understanding AI Explanations with the What-If Tool
In this section you'll use the What-If Tool to better understand how your model is making predictions. See the cell below the What-if Tool for visualization ideas.
The What-If-Tool expects data with keys for each feature name, but your model expects a flat list. The functions below convert data to the format required by the What-If Tool.
End of explanation
# Delete model version resource
! gcloud ai-platform versions delete $VERSION --quiet --model $MODEL
# Delete model resource
! gcloud ai-platform models delete $MODEL --quiet
# Delete Cloud Storage objects that were created
! gsutil -m rm -r gs://$BUCKET_NAME
Explanation: What-If Tool visualization ideas
On the x-axis, you'll see the predicted trip duration for the test inputs you passed to the What-If Tool. Each circle represents one of your test examples. If you click on a circle, you'll be able to see the feature values for that example along with the attribution values for each feature.
You can edit individual feature values and re-run prediction directly within the What-If Tool. Try changing distance, click Run inference and see how that affects the model's prediction
You can sort features for an individual example by their attribution value, try changing the sort from the attributions dropdown
The What-If Tool also lets you create custom visualizations. You can do this by changing the values in the dropdown menus above the scatter plot visualization. For example, you can sort data points by inference error, or by their similarity to a single datapoint.
Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Alternatively, you can clean up individual resources by running the following
commands:
End of explanation |
15,721 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<h1 align="center">Oncolist Server API Examples</h1>
<h3 align="center">Author
Step1: <font color='blue'> Notice
Step2: After you have verified the project information, you can execute the pipeline. When the job is done, you will see the log information returned from the cluster.
Checking the disease names
Step3: Run the pipeline with the specific operation.
Step4: To check the processing status
Step5: To delete the cluster, you just need to set the cluster name and call the below function. | Python Code:
import os
import sys
sys.path.append(os.getcwd().replace("notebooks", "cfncluster"))
## S3 input and output address.
s3_input_files_address = "s3://path/to/input folder"
s3_output_files_address = "s3://path/to/output folder"
## CFNCluster name
your_cluster_name = "cluster_name"
## The private key pair for accessing cluster.
private_key = "/path/to/private_key.pem"
## If delete cfncluster after job is done.
delete_cfncluster = False
Explanation: <h1 align="center">Oncolist Server API Examples</h1>
<h3 align="center">Author: Guorong Xu</h3>
<h3 align="center">2016-09-19</h3>
The notebook is an example that tells you how to calculate correlation, annotate gene clusters and generate JSON files on AWS.
<font color='red'>Notice: Please open the notebook under /notebooks/BasicCFNClusterSetup.ipynb to install CFNCluster package on your Jupyter-notebook server before running the notebook.</font>
1. Configure AWS key pair, data location on S3 and the project information
End of explanation
import CFNClusterManager, ConnectionManager
## Create a new cluster
master_ip_address = CFNClusterManager.create_cfn_cluster(cluster_name=your_cluster_name)
ssh_client = ConnectionManager.connect_master(hostname=master_ip_address,
username="ec2-user",
private_key_file=private_key)
Explanation: <font color='blue'> Notice: </font>
The file name of the expression file should follow the rule below if you want the annotation in the output JSON file to be correct:
"GSE number_Author name_Disease name_Number of Arrays_Institute name.txt".
For example: GSE65216_Maire_Breast_Tumor_159_Arrays_Paris.txt
2. Create CFNCluster
Notice: The CFNCluster package can only be installed on a Linux box that supports pip installation.
End of explanation
import PipelineManager
## You can call this function to check the disease names included in the annotation.
PipelineManager.check_disease_name()
## Define the disease name from the below list of disease names.
disease_name = "BreastCancer"
Explanation: After you have verified the project information, you can execute the pipeline. When the job is done, you will see the log information returned from the cluster.
Checking the disease names
End of explanation
import PipelineManager
## define operation
## calculate: calculate correlation;
## oslom_cluster: cluster the gene modules;
## print_oslom_cluster_json: print json files;
## all: run all operations;
operation = "all"
## run the pipeline
PipelineManager.run_analysis(ssh_client, disease_name, operation, s3_input_files_address, s3_output_files_address)
Explanation: Run the pipeline with the specific operation.
End of explanation
import PipelineManager
PipelineManager.check_processing_status(ssh_client)
Explanation: To check the processing status
End of explanation
import CFNClusterManager
if delete_cfncluster == True:
CFNClusterManager.delete_cfn_cluster(cluster_name=your_cluster_name)
Explanation: To delete the cluster, you just need to set the cluster name and call the below function.
End of explanation |
15,722 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Review of EuroSciPy2015
0. IPython Notebooks
1. Tutorial 1
Step1: Create a local host for the notebook in the directory of interest by running the command
Step2: Images, data files, etc. should be stored in a subdirectory of the Notebook host.
1. Tutorial #1
Step3: Accessing Python's Source Code
cf. http
Step4: ...and we can read the source code of built-in functions by downloading the source from the Python.org Mercurial repositories)
Step5: We can view the source code for a particular function, such as range, in the Python.org Mercurial Repository
Step6: Sneak-peek @ Advanced Topics
NumPy & SciPy
Pandas
Cython
Translate Python scripts into C code, and compile to machine code.
Step7: 2. Tutorial 2
Step8: Matrix Tricks
Step9: Memmapping
@ https
Step10: A look into the future
Blaze will supersede NumPy for Big Data
@ https | Python Code:
pip install ipython-notebook
Explanation: A Review of EuroSciPy2015
0. IPython Notebooks
1. Tutorial 1: An Introduction to Python (Joris Vankerschaver)
2. Tutorial 2: Never get in a data battle without Numpy arrays (Valerio Maggio)
3. numexpr
4. Interesting talks
0. Jupyter aka IPython Notebook
Interactive programming interface
Simultaneous development & documentation
All tutorials and most lectures at EuroSciPy2015 were given using IPython Notebooks.
We have adopted the Notebook for this presentation as a practical exercise.
End of explanation
ipython notebook
Explanation: Create a local host for the notebook in the directory of interest by running the command:
End of explanation
# Mutable objects can be changed in place (e.g. lists),
# Immutable objects can NOT (e.g. ints, strings, tuples)
tup = ('a','0','@')
tup
tup[0] = 8
record = {}
record['first'] = 'Alan'
record['last'] = 'Turing'
record
record.update({'workplace':'Bletchley Park'})
record
# Comma after print statement removes implicit \n, prints to same line
for x in range(0,4):
print x,
# xrange(#) more efficient than range(#), because:
# range() creates the whole sequence of numbers,
# while xrange() creates them as needed!
%timeit range(1000000)
%timeit xrange(1000000)
# Namespaces are evil
pi = 3.14
from numpy import pi
pi
Explanation: Images, data files, etc. should be stored in a subdirectory of the Notebook host.
1. Tutorial #1: An Introduction to Python (Joris Vankerschaver)
IPython Notebook: https://github.com/jvkersch/python-tutorial-files
An overview of basic Python syntax and data structures, including:
- lists, tuples, dictionaries
- mutable vs immutable objects
- set, enumerate
- read from / write to files
- namespaces
End of explanation
range?
Explanation: Accessing Python's Source Code
cf. http://stackoverflow.com/questions/8608587/finding-the-source-code-for-built-in-python-functions
We can get help for a built-in Python function, such as range, with a single question mark:
End of explanation
# import inspect
# inspect.getsourcefile(range) # doesn't work for built-in functions
# A user-defined "Counter" class implements the iterator protocol that drives
# constructs such as "for i in range(0,10) ..."
class Counter(object):
def __init__(self, low, high):
self.current = low
self.high = high
def __iter__(self):
'Returns itself as an iterator object'
return self
def __next__(self):
'Returns the next value till current is lower than high'
if self.current > self.high:
raise StopIteration
else:
self.current += 1
return self.current - 1
Explanation: ...and we can read the source code of built-in functions by downloading the source from the Python.org Mercurial repositories): https://hg.python.org/
End of explanation
for i in range(0,10):
print i*i
# List comprehension (with filter)
[a*2 for a in range(0,10) if a>3]
Explanation: We can view the source code for a particular function, such as range, in the Python.org Mercurial Repository: https://hg.python.org/cpython/file/c6880edaf6f3/Objects/rangeobject.c
End of explanation
# The Zen of Python
import this
Explanation: Sneak-peek @ Advanced Topics
NumPy & SciPy
Pandas
Cython
Translate Python scripts into C code, and compile to machine code.
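A rough, hedged sketch of what that looks like in a notebook (illustrative only; it assumes the Cython extension is installed and loaded via its cell magic):
%load_ext Cython
%%cython
def csum(int n):
    cdef int i, s = 0
    for i in range(n):
        s += i       # typed C variables let Cython emit plain C for this loop
    return s
The typed cdef declarations are what allow Cython to generate C code and compile it to machine code, which is where the speed-up comes from.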
End of explanation
# We can infer the data type of an array structure (but not of int, list, etc.)
import numpy as np
a = np.array([1, 2, 3], dtype=np.int16)
a.dtype
arev = a[::-1]
arev
# Typecast variables into float, complex numbers,
b = np.float64(64)
c = np.complex(b)
print "R(c) = ", c.real
print "I(c) = ", c.imag
# Specify type of array elements
x = np.ones(4, 'int8')
x
# Wrap-around
x[0] = 256
x
# Define a new record and create an array of corresponding data types
rt = np.dtype([('artist', np.str_, 40),('title', np.str_, 40), ('year', np.int16)])
music = np.array([('John Cage','4\'33\'\'',1952)], dtype=rt)
music
Explanation: 2. Tutorial 2: Never get in a data battle without Numpy arrays (Valerio Maggio)
IPython Notebook: https://github.com/leriomaggio/numpy_euroscipy2015
Arrays and Data Types
End of explanation
# Flatten a matrix into a 1-D array
r = np.array([[1, 2, 3], [4, 5, 6]])
r.ravel()
# Save a .csv file using arbitrary precision
M = np.random.rand(3,3)
np.savetxt("data/random-matrix2.csv", M, fmt='%.5f')
# Create a matrix using list comprehension
coolmx = np.array([[10*j+i for i in range(6)] for j in range(6)])
coolmx
Explanation: Matrix Tricks
End of explanation
# Machine Learning in Python is a one-liner!
centroids, variance = vq.kmeans(data, 3)
#... after preparing the data and importing vq from scipy.cluster (see the linked notebook)
Explanation: Memmapping
@ https://github.com/leriomaggio/numpy_euroscipy2015/blob/master/05_Memmapping.ipynb
Machine Learning with SciKit-Learn
Applying the K-means algorithm to the Iris dataset
@ https://github.com/leriomaggio/numpy_euroscipy2015/blob/master/07_0_MachineLearning_Data.ipynb
End of explanation
import numexpr as ne
import numpy as np
a = np.arange(1e8)
b = np.arange(1e8)
print "NumPy >> "
%timeit a**2 + b**2 + 2*a*b
print "NumExpr >> "
%timeit ne.evaluate('a**2 + b**2 + 2*a*b')
Explanation: A look into the future
Blaze will supersede NumPy for Big Data
@ https://github.com/leriomaggio/numpy_euroscipy2015/blob/master/08_A_look_at_the_future.ipynb
numpy-100
"...a quick reference for new and old users and to provide also a set of exercices for those who teach."
https://github.com/rougier/numpy-100
3. numexpr
@ https://github.com/pydata/numexpr (previously @ https://code.google.com/p/numexpr/)
JIT (Just-in-time) compilation for significant speed-up of numerical calculations
numexpr evaluates multiple-operator array expressions many times faster than NumPy can. It accepts the expression as a string, analyzes it, rewrites it more efficiently, and compiles it on the fly into code for its internal virtual machine (VM). Due to its integrated just-in-time (JIT) compiler, it does not require a compiler at runtime.
Multithreading to make use of multiple CPU cores
numexpr implements support for multi-threading computations straight into its internal virtual machine, written in C. This allows to bypass the GIL in Python, and allows near-optimal parallel performance in your vector expressions, most specially on CPU-bounded operations (memory-bounded ones were already the strong point of numexpr).
Can be used to evaluate expressions in NumPy and Pandas
cf. https://github.com/leriomaggio/numpy_euroscipy2015/blob/master/06_Numexpr.ipynb
The Speed of NumExpr
The speed advantage of NumExpr is due to using fewer temporary variables to store data. Instead of using temp variables, the results are stored successively in the output argument.
For this reason, NumExpr outperforms standard Python when array sizes are larger than the processor cache. For small computations, it is actually slower...
...so use only when necessary!
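A hedged illustration of both points (not from the original talk notes; it reuses the ne and np imports from the cell above):
small = np.arange(10**3)
%timeit small**2 + 2*small                 # plain NumPy usually wins for tiny arrays
%timeit ne.evaluate('small**2 + 2*small')  # NumExpr pays parsing/VM overhead here
ne.set_num_threads(2)                      # the internal thread pool can also be limited explicitly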
More info about how it is done: https://github.com/pydata/numexpr#how-numexpr-can-achieve-such-a-high-performance
End of explanation |
15,723 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step7: Building Custom Plugins
The 3ML instrument/data philosophy focuses on the abstraction of the data to likelihood interface. Rather than placing square pegs in round holes by enforcing a common, restrictive data format, 3ML provides an interface that takes a model and returns the likelihood value of that model with its current parameter values.
This way, the data format, likelihood formula, internal computations, etc. that are required for the given instrument can be utilized in their native, most sensitive, and expertly handled form.
While many general and instrument specific plugins are already provided in 3ML, the ability to build a custom plugin from scratch allows for the quick interfacing of a new instrument, experiment, or idea into the powerful 3ML framework on the fly. Let's take a look at the basic components of a plugin and construct one for ourselves.
The PluginPrototype class
The basic functionality of any plugin is prototyped in the PluginPrototype class. This is under the main directory in the 3ML source code, but let's examine it here
Step8: Basic Properties
The basic properties of a plugin are its name and nuisance parameters. These are mostly handled by 3ML natively, but can be manipulated internally as needed.
name
All plugins must be given an instance name. Since it is possible that many instances of a particular plugin may be used in an analysis (many different x-ray instruments with FITS PHA data?), 3ML must be able to distinguish them from one another.
nuisance parameters
Nuisance parameters are parameters that are plugin instance dependent and not part of the inter-plugin shared likelihood model. An effective area correction for a given detector that scales its internal effective area or an internal parameter in an instrument's software-dependent inner fit are examples of nuisance parameters.
Unique Properties
The properties that abstract the model-data-likelihood interface are the set_model, get_log_like, and inner_fit members of the plugin. These must be implemented or an error will be returned when trying to define the class.
set_model
This member is responsible for translating the astromodels Model object shared by all the plugins during an analysis to this plugin's data. For example, the DispersionSpectrumLike plugin translates the likelihood model by setting up the convolution of the model through its energy dispersion matrix. There are no restrictions on how this interface occurs allowing for freedom in the data format and/or software that are used to calculate the model.
get_log_like
This is the member that is called by 3ML when performing parameter estimation to assess the likelihood of the plugin. It simply returns a number, the log likelihood. No restrictions are placed on how this number is calculated allowing for it to be the product of complex instrument software, mathematical formulas, etc.
inner_fit
This is used for the profile likelihood. Keeping fixed all parameters in the LikelihoodModel, this method minimizes the log likelihood over the remaining nuisance parameters, i.e., the parameters belonging only to the model for this particular detector. If there are no nuisance parameters, simply return the logLike value.
Making a custom plugin
Let's build a simple (and useless) plugin to see how the process works. First, we import the PluginPrototype class from 3ML.
Step9: If we try to create a plugin without implementing all the needed members, we will run into an error.
Step10: So, let's instead build a proper plugin | Python Code:
class PluginPrototype(object):
__metaclass__ = abc.ABCMeta
def __init__(self, name, nuisance_parameters):
assert is_valid_variable_name(name), "The name %s cannot be used as a name. You need to use a valid " \
"python identifier: no spaces, cannot start with numbers, cannot contain " \
"operators symbols such as -, +, *, /" % name
# Make sure total is not used as a name (need to use it for other things, like the total value of the statistic)
assert name.lower() != "total", "Sorry, you cannot use 'total' as name for a plugin."
self._name = name
# This is just to make sure that the plugin is legal
assert isinstance(nuisance_parameters, dict)
self._nuisance_parameters = nuisance_parameters
# These are the external properties (time, polarization, etc.)
# self._external_properties = []
self._tag = None
def get_name(self):
warnings.warn("Do not use get_name() for plugins, use the .name property", DeprecationWarning)
return self.name
@property
def name(self):
        """
        Returns the name of this instance
        :return: a string (this is enforced to be a valid python identifier)
        """
return self._name
@property
def nuisance_parameters(self):
        """
        Returns a dictionary containing the nuisance parameters for this dataset
        :return: a dictionary
        """
return self._nuisance_parameters
def update_nuisance_parameters(self, new_nuisance_parameters):
assert isinstance(new_nuisance_parameters, dict)
self._nuisance_parameters = new_nuisance_parameters
def get_number_of_data_points(self):
        """
        This returns the number of data points that are used to evaluate the likelihood.
        For binned measurements, this is the number of active bins used in the fit. For
        unbinned measurements, this would be the number of photons/particles that are
        evaluated on the likelihood
        """
warnings.warn(
"get_number_of_data_points not implemented, values for statistical measurements such as AIC or BIC are "
"unreliable", )
return 1.
def _get_tag(self):
return self._tag
def _set_tag(self, spec):
        """
        Tag this plugin with the provided independent variable and a start and end value.
        This can be used for example to fit a time-varying model. In this case the independent variable will be the
        time and the start and end will be the start and stop time of the exposure for this plugin. These values will
        be used to average the model over the provided time interval when fitting.
        :param independent_variable: an IndependentVariable instance
        :param start: start value for this plugin
        :param end: end value for this plugin. If this is not provided, instead of integrating the model between
        start and end, the model will be evaluated at start. Default: None (i.e., not provided)
        :return: none
        """
if len(spec) == 2:
independent_variable, start = spec
end = None
elif len(spec) == 3:
independent_variable, start, end = spec
else:
raise ValueError("Tag specification should be (independent_variable, start[, end])")
# Let's do a lazy check
if not isinstance(independent_variable, IndependentVariable):
warnings.warn("When tagging a plugin, you should use an IndependentVariable instance. You used instead "
"an instance of a %s object. This might lead to crashes or "
"other problems." % type(independent_variable))
self._tag = (independent_variable, start, end)
tag = property(_get_tag, _set_tag, doc="Gets/sets the tag for this instance, as (independent variable, start, "
"[end])")
######################################################################
# The following methods must be implemented by each plugin
######################################################################
@abc.abstractmethod
def set_model(self, likelihood_model_instance):
        """
        Set the model to be used in the joint minimization. Must be a LikelihoodModel instance.
        """
pass
@abc.abstractmethod
def get_log_like(self):
        """
        Return the value of the log-likelihood with the current values for the
        parameters
        """
pass
@abc.abstractmethod
    def inner_fit(self):
        """
        This is used for the profile likelihood. Keeping fixed all parameters in the
        LikelihoodModel, this method minimizes the logLike over the remaining nuisance
        parameters, i.e., the parameters belonging only to the model for this
        particular detector. If there are no nuisance parameters, simply return the
        logLike value.
        """
pass
Explanation: Building Custom Plugins
The 3ML instrument/data philosophy focuses on the abstraction of the data to likelihood interface. Rather than placing square pegs in round holes by enforcing a common, restrictive data format, 3ML provides an interface that takes a model and returns the likelihood value of that model with its current parameter values.
This way, the data format, likelihood formula, internal computations, etc. that are required for the given instrument can be utilized in their native, most sensitive, and expertly handled form.
While many general and instrument specific plugins are already provided in 3ML, the ability to build a custom plugin from scratch allows for the quick interfacing of a new instrument, experiment, or idea into the powerful 3ML framework on the fly. Let's take a look at the basic components of a plugin and construct one for ourselves.
The PluginPrototype class
The basic functionality of any plugin is prototyped in the PluginPrototype class. This is under the main directory in the 3ML source code, but let's examine it here:
End of explanation
from threeML import PluginPrototype
Explanation: Basic Properties
The basic properties of a plugin are its name and nuisance parameters. These are mostly handled by 3ML natively, but can be manipulated internally as needed.
name
All plugins must be given an instance name. Since it is possible that many instances of a particular plugin may be used in an analysis (many different x-ray instruments with FITS PHA data?), 3ML must be able to distinguish them from one another.
nuisance parameters
Nuisance parameters are parameters that are plugin instance dependent and not part of the inter-plugin shared likelihood model. An effective area correction for a given detector that scales its internal effective area or an internal parameter in an instrument's software-dependent inner fit are examples of nuisance parameters.
Unique Properties
The properties that abstract the model-data-likelihood interface are the set_model, get_log_like, and inner_fit members of the plugin. These must be implemented or an error will be returned when trying to define the class.
set_model
This member is responsible for translating the astromodels Model object shared by all the plugins during an analysis to this plugin's data. For example, the DispersionSpectrumLike plugin translates the likelihood model by setting up the convolution of the model through its energy dispersion matrix. There are no restrictions on how this interface occurs allowing for freedom in the data format and/or software that are used to calculate the model.
get_log_like
This is the member that is called by 3ML when performing parameter estimation to assess the likelihood of the plugin. It simply returns a number, the log likelihood. No restrictions are placed on how this number is calculated allowing for it to be the product of complex instrument software, mathematical formulas, etc.
inner_fit
This is used for the profile likelihood. Keeping fixed all parameters in the LikelihoodModel, this method minimizes the log likelihood over the remaining nuisance parameters, i.e., the parameters belonging only to the model for this particular detector. If there are no nuisance parameters, simply return the logLike value.
Making a custom plugin
Let's build a simple (and useless) plugin to see how the process works. First, we import the PluginPrototype class from 3ML.
End of explanation
class BadPlugin(PluginPrototype):
pass
bad_plugin = BadPlugin('name',{})
Explanation: If we try to create a plugin without implementing all the needed members, we will run into an error.
End of explanation
from astromodels import Parameter
import collections
class GoodPlugin(PluginPrototype):
def __init__(self, name):
# create the hash for the nuisance parameters
nuisance_parameters = collections.OrderedDict()
# create a dummy parameter
par = Parameter("dummy_%s" % name, 1.0, min_value=0.8, max_value=1.2, delta=0.05,
free=False, desc="A dummy parameter for %s" % name)
nuisance_parameters[par.name] = par
# call the prototype constructor
super(GoodPlugin, self).__init__(name,nuisance_parameters)
def set_model(self, model):
# attach the model to the object
self._model = model
def get_log_like(self):
        # this isn't going to be very useful
return -99.
def inner_fit(self):
return self.get_log_like()
good_plugin = GoodPlugin('name')
good_plugin.name
good_plugin.get_log_like()
good_plugin.nuisance_parameters
Explanation: So, let's instead build a proper plugin
End of explanation |
15,724 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<b>Exercise</b>
Step1: The grammar that should be used to parse this program is given in the file
Examples/simple.g. It is very similar to the grammar that we have developed previously for our interpreter. I have simplified this grammar at various places to make it more suitable
for the current task.
Step2: Exercise 1
Step3: Exercise 2
Step4: The cell below tests your tokenizer. Your task is to compare the output with the output shown above.
Step5: The function parse(self, TL) is called with two arguments
Step7: Exercise 3
Step8: Exercise 4
Step9: Exercise 5
Step10: Testing
The notebook ../AST-2-Dot.ipynb implements the function tuple2dot(nt) that displays the nested tuple nt as a tree via graphviz.
Step11: Calling the function test below should produce the following nested tuple as parse tree | Python Code:
cat Examples/sum-for.sl
Explanation: <b>Exercise</b>: Extending a Shift-Reduce Parser
In this exercise your task is to extend the shift-reduce parser
that has been discussed in the lecture so that it returns an abstract syntax tree. You should test it with the program sum-for.sl that is given the directory Examples.
End of explanation
cat Examples/simple.g
Explanation: The grammar that should be used to parse this program is given in the file
Examples/simple.g. It is very similar to the grammar that we have developed previously for our interpreter. I have simplified this grammar at various places to make it more suitable
for the current task.
End of explanation
import re
Explanation: Exercise 1: Generate both the action-table and the goto table for this grammar using the notebook SLR-Table-Generator.ipynb.
Implementing a Scanner
End of explanation
def tokenize(s):
'''Transform the string s into a list of tokens. The string s
is supposed to represent an arithmetic expression.
'''
"Edit the code below!"
lexSpec = r'''([ \t\n]+) | # blanks and tabs
([1-9][0-9]*|0) | # number
([()]) | # parentheses
([-+*/]) | # arithmetical operators
(.) # unrecognized character
'''
tokenList = re.findall(lexSpec, s, re.VERBOSE)
result = []
for ws, number, parenthesis, operator, error in tokenList:
if ws: # skip blanks and tabs
continue
elif number:
result += [ 'NUMBER' ]
elif parenthesis:
result += [ parenthesis ]
elif operator:
result += [ operator ]
elif error:
result += [ f'ERROR({error})']
return result
Explanation: Exercise 2: The function tokenize(s) transforms the string s into a list of tokens.
Given the program sum-for.sl it should produce the list of tokens shown further below. Note that a number n is stored as a pairs of the form
('NUMBER', n)
and an identifier v is stored as the pair
('ID', v).
You have to take care of keywords like for or while: Syntactically, they are equal to identifiers, but the scanner should <u>not</u> turn them into pairs but rather return them as strings so that the parser does not mistake them for identifiers.
Below is the token list that should be produced from scanning the file sum-for.sl:
['function',
('ID', 'sum'),
'(',
('ID', 'n'),
')',
'{',
('ID', 's'),
':=',
('NUMBER', 0),
';',
'for',
'(',
('ID', 'i'),
':=',
('NUMBER', 1),
';',
('ID', 'i'),
'≤',
('ID', 'n'),
'*',
('ID', 'n'),
';',
('ID', 'i'),
':=',
('ID', 'i'),
'+',
('NUMBER', 1),
')',
'{',
('ID', 's'),
':=',
('ID', 's'),
'+',
('ID', 'i'),
';',
'}',
'return',
('ID', 's'),
';',
'}',
('ID', 'print'),
'(',
('ID', 'sum'),
'(',
('NUMBER', 6),
')',
')',
';']
For reference, I have given the old implementation of the function tokenize that has been used in the notebook Shift-Reduce-Parser-Pure.ipynb. You have to edit this function so that it works with the grammar simple.g.
End of explanation
with open('Examples/sum-for.sl', 'r', encoding='utf-8') as file:
program = file.read()
tokenize(program)
class ShiftReduceParser():
def __init__(self, actionTable, gotoTable):
self.mActionTable = actionTable
self.mGotoTable = gotoTable
Explanation: The cell below tests your tokenizer. Your task is to compare the output with the output shown above.
End of explanation
%run parse-table.py
Explanation: The function parse(self, TL) is called with two arguments:
- self ia an object of class ShiftReduceParser that maintain both an action table
and a goto table.
- TL is a list of tokens. Tokens are either
- literals, i.e. strings enclosed in single quote characters,
- pairs of the form ('NUMBER', n) where n is a natural number, or
- the symbol $ denoting the end of input.
Below, it is assumed that parse-table.py is the file that you have created in
Exercise 1.
End of explanation
def parse(self, TL):
Edit this code so that it returns a parse tree.
Make use of the auxiliary function combine_trees that you have to
implement in Exercise 4.
index = 0 # points to next token
Symbols = [] # stack of symbols
States = ['s0'] # stack of states, s0 is start state
TL += ['$']
while True:
q = States[-1]
t = TL[index]
print('Symbols:', ' '.join(Symbols + ['|'] + TL[index:]).strip())
p = self.mActionTable.get((q, t), 'error')
if p == 'error':
return False
elif p == 'accept':
return True
elif p[0] == 'shift':
s = p[1]
Symbols += [t]
States += [s]
index += 1
elif p[0] == 'reduce':
head, body = p[1]
n = len(body)
if n > 0:
Symbols = Symbols[:-n]
States = States [:-n]
Symbols = Symbols + [head]
state = States[-1]
States += [ self.mGotoTable[state, head] ]
ShiftReduceParser.parse = parse
del parse
Explanation: Exercise 3:
The function parse given below is the from the notebook Shift-Reduce-Parser.ipynb. Adapt this function so that it does not just return Trueor False
but rather returns a parse tree as a nested list. The key idea is that the list Symbols
should now be a list of parse trees and tokens instead of just syntactical variables and tokens, i.e. the syntactical variables should be replaced by their parse trees.
It might be useful to implement an auxilliary function combine_trees that takes a
list of parse trees and combines the into a new parse tree.
End of explanation
def combine_trees(TL):
if len(TL) == 0:
return ()
if isinstance(TL, str):
return (str(TL),)
Literals = [t for t in TL if isinstance(t, str)]
Trees = [t for t in TL if not isinstance(t, str)]
if len(Literals) > 0:
label = Literals[0]
else:
label = ''
result = (label,) + tuple(Trees)
return result
VoidKeys = { '', '(', ';', 'NUMBER', 'ID' }
Explanation: Exercise 4:
Given a list of tokens and parse trees TL the function combine_trees combines these trees into a new parse tree. The parse trees are represented as nested tuples. The data type of a nested tuple is defined recursively:
- A nested tuple is a tuple of the form (Head,) + Body where
* Head is a string and
* Body is a tuple of strings, integers, and nested tuples.
When the nested tuple (Head,) + Body is displayed as a tree, Head is used as the label at the root of the tree. If len(Body) = n, then the root has n children. These n children are obtained by displaying Body[0], $\cdots$, Body[n-1] as trees.
In order to convert the list of tokens and parse trees into a nested tuple we need a string that can serve as the Head of the parse tree. The easiest way to to this is to take the first element of TL that is a string because the strings in TL are keywords like for or while or they are operator symbols. The remaining strings after the first in TL can be discarded.
If there is no string in TL, you can define Head as the empty string.
I suggest a recursive implementation of this function.
The file sum-st.pdf shows the parse tree of the program that is stored in the file sum-for.sl.
End of explanation
def simplify_tree(tree):
if isinstance(tree, int) or isinstance(tree, str):
return tree
head, *body = tree
if body == []:
return tree
if head == '' and len(body) == 2 and body[0] == ('',):
return simplify_tree(body[1])
if head in VoidKeys and len(body) == 1:
return simplify_tree(body[0])
body_simplified = simplify_tree_list(body)
if head == '(' and len(body) == 2:
return (body_simplified[0],) + body_simplified[1:]
if head == '':
head = '.'
return (head,) + body_simplified
def simplify_tree_list(TL):
if TL == []:
return ()
tree, *Rest = TL
return (simplify_tree(tree),) + simplify_tree_list(Rest)
Explanation: Exercise 5:
The function simplfy_tree(tree) transforms the parse tree tree into an abstract syntax tree. The parse tree tree is represented as a nested tuple of the form
tree = (head,) + body
The function should simplify the tree as follows:
- If head == '' and body is a tuple of length 2 that starts with an empty string,
then this tree should be simplified to body[1].
- If head does not contain useful information, for example if head is the empty string
or an opening parenthesis and, furthermore, body is a tuple of length 1,
then this tree should be simplified to body[0].
- By convention, remaining empty Head labels should be replaced by the label '.'
as this label is traditionally used to construct lists.
I suggest a recursive implementation of this function.
The file sum-ast.pdf shows the abstract syntax tree of the program that is stored in the file sum-for.sl.
End of explanation
%run ../AST-2-Dot.ipynb
cat -n Examples/sum-for.sl
def test(file):
with open(file, 'r', encoding='utf-8') as file:
program = file.read()
parser = ShiftReduceParser(actionTable, gotoTable)
TL = tokenize(program)
st = parser.parse(TL)
ast = simplify_tree(st)
return st, ast
Explanation: Testing
The notebook ../AST-2-Dot.ipynb implements the function tuple2dot(nt) that displays the nested tuple nt as a tree via graphvis.
End of explanation
st, ast = test('Examples/sum-for.sl')
print(st)
print(ast)
display(tuple2dot(st))
display(tuple2dot(ast))
Explanation: Calling the function test below should produce the following nested tuple as parse tree:
('', ('', ('', ('function', ('ID', 'sum'), ('', ('ID', 'n')), ('', ('', ('', ('',), (';', (':=', ('ID', 's'), ('', ('', ('', ('NUMBER', 0))))))), ('for', (':=', ('ID', 'i'), ('', ('', ('', ('NUMBER', 1))))), ('', ('', ('', ('≤', ('', ('', ('', ('ID', 'i')))), ('', ('*', ('', ('', ('ID', 'n'))), ('', ('ID', 'n')))))))), (':=', ('ID', 'i'), ('+', ('', ('', ('', ('ID', 'i')))), ('', ('', ('NUMBER', 1))))), ('', ('',), (';', (':=', ('ID', 's'), ('+', ('', ('', ('', ('ID', 's')))), ('', ('', ('ID', 'i'))))))))), ('return', ('', ('', ('', ('ID', 's')))))))), (';', ('', ('', ('(', ('ID', 'print'), ('', ('', ('', ('(', ('ID', 'sum'), ('', ('', ('', ('', ('NUMBER', 6)))))))))))))))
The file sum-st.pdf shows this nested tuple as a tree.
Transforming the parse tree into an abstract syntax tree should yield the following nested tuple:
('.', ('function', 'sum', 'n', ('.', ('.', (':=', 's', 0), ('for', (':=', 'i', 1), ('≤', 'i', ('*', 'n', 'n')), (':=', 'i', ('+', 'i', 1)), (':=', 's', ('+', 's', 'i')))), ('return', 's'))), ('print', ('sum', 6)))
The file sum-ast.pdf shows this nested tuple as a tree.
End of explanation |
15,725 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div class="alert alert-success">
This IPython notebook depends on the `vis_int` module, which is illustrated in the [Visualización e Interacción](vis_int.ipybn) notebook.
</div>
Step1: Técnicas Numéricas
Para el desarrollo de los modelos expuestos en los cursos de mecánica cuántica y física moderna, se recurre frecuentemente a funciones especiales y técnicas de solución matemáticas que en su operatividad pueden distraer el objetivo del curso y el adecuado entendimiento de los conceptos físicos en medio de las herramientas matemáticas.
Con esto en mente, el uso de técnicas numéricas simples puede apoyar significativamente el desarrollo de los cursos, teniendo como ventaja la reducción a formas matemáticas simples (los métodos numéricos llevan a aproximaciones con operaciones aritmeticas) y funciones simples (las funciones se aproximan a funciones mucho más simples, generalmente polinomios). Esta reducción facilita además reducir a una sola técnica multiples desarrollos, ya que las diferencias no van tanto en los detalles del modelo original (como si dependen las soluciones análiticas) sino en los detalles del tipo de aproximación general.
Se exponen las soluciones numéricas de los siguientes problemas, útiles para el desarrollo de problemas 1D de mecánica cuántica.
1. Búsqueda de raíces.
+ Bisección.
+ Incremental.
1. Ecuaciones diferenciales con valores de frontera.
+ Método del disparo con algoritmo Numerov.
1. Adimensionalización.
+ Unidades atomicas de Rydberg.
Búsqueda de raíces
Los problemas de búsquedas de raíces corresponden a encontrar valores que al evaluarse en la función de interes generan como evaluación el valor cero. En la mecánica cuántica nos encontramos con la particularidad de requerir el calculo de raíces para determinar los autovalores de energía de un sistema en su planteamiento continuo (representación en el espacio directo). En estos sistemas de interes, de estados ligados, la energía del sistema se encuentra entre el mínimo y el máximo de la energía potencial a la que se encuentra sometido en el espacio, $$ V_{min} \leq E_n \leq V_{max}.$$
En caso de ser el máximo $V_{max} \rightarrow \infty$, el sistema posee infinitos autovalores que se encuentran con la condición $V_{min} \leq E_n$.
Para cualquiera de los casos, se presenta un interes en encontrar estos autovalores de manera ordenada, y esto lleva seleccionar los métodos de búsqueda cerrados por encima de los métodos de búsquedas abiertos, ya que en estos últimos la selección de un valor inicial no asegura la búsqueda en cercanías de este o en una dirección dada, por el contrario en los métodos cerrados se puede limitar la búsqueda a una región de la cual tenemos conocimiento que se presenta la raíz (autovalor de energía).
El uso combinado entre el método de búsqueda incremental y el método de bisección, con un paso adecuado de energía, permite cumplir con el objetivo de hallar todos los autovalores (cuando los límites de energía son finitos) del sistema de forma ordenada, y con precisión arbitraria (limitada solo por la precisión de máquina). Para ello se inicia en el intervalo de búsqueda con el método de búsqueda incremental, el cual al encontrar un intervalo candidato a raíz (un intervalo que presenta cambio de signo entre sus extremos), refina el resultado mediante la aplicación del método de bisección en el intervalo candidato.
Forma iterativa de búsqueda incremental $E_{i+1} = E_i + \Delta E$.
Forma iterativa de bisección $E_{i+1} = \frac{E_i + E_{i-1}}{2}$.
Step2: Se observa en la implementación del método de bisección, que se considera una revisión extra a los códigos tradicionales, con el fin de validad si el candidato a raíz realmente lo es. Esto se requiere ya que es posible que la función asociada a la discretización de la energía posea discontinuidades alrededor de las cuales presente cambio de signo.
Notese que la teoría clásica de métodos numéricos indica que estos métodos se aplican para funciones continuas. En este caso que esperamos discontinuidades dadas por cambios de signo a causa de divergencias al infinito, se pueden remover sistematicamente notando que a medida que se converge al candidato a raíz (tamaño de intervalo menor que la tolerancia), la evaluación de la función en este valor es significativamente mayor a la tolerancia, y cada vez su evaluación es mayor a la anterior.
\begin{equation}
E \in [E_i, E_{i+1}] \wedge \Delta E \leq tol \wedge \begin{cases}
f(E) > tol, & \qquad\text{Discontinuidad}\\
f(E) \leq tol, & \qquad\text{Raíz (autovalor)}
\end{cases}
\end{equation}
Una vez se obtiene una raíz, el método de búsqueda incremental continua nuevamente avanzando hasta encontrar un próximo intervalo candidato, al cual vuelve a aplicarle el método de bisección para distinguir si es raíz o discontinuidad. Este proceso se continua hasta el límite superior para la energía, $V_{max}$.
Para la busqueda de un autovalor especifico, se requiere buscar todos los autovalores anteriores. De manera que se requiere de una función auxiliar que medie este progreso dada un modo. El caracter progresivo sobre las energías ofrece la ventaja sobre técnicas de autovalores, de la posibilidad de obtener los autovalores ordenados de manera natural.
Step3: A continuación se ilustra el uso de la técnica con la función trascendental del problema del pozo finito simetrico con paridad par, que en la forma adimensional corresponde a
Step4: Ecuaciones diferenciales con problemas de frontera
La ecuación de Schrödinger, ya sea dependiente o independiente del tiempo, es una ecuación diferencial de segundo orden. Al remover la dependencia del tiempo y disponer de problemas 1D, se tiene que la ecuación diferencial es ordinaria. Para este tipo de ecuaciones (segundo orden, una variable) es posible aplicar métodos especificos que solución con bajo costo computacional aproximaciones de orden alto. Ejemplo de esto son los métodos de Verlet y de Numerov, con método del disparo.
De la ecuación de Schrödinger se observa que si se reemplazan los valores por cantidades conocidas estimadas, el valor de la energía $E$ que cumple con ser autovalor, es aquel que haga satisfacer las condiciones de frontera del problema, y por ende una forma de solucionar el problema es mediante la aplicación de un problema de busqueda de raices. De esta forma, el método del disparo lo que hace es el ajuste de $E$ para que partiendo de una frontera, con la condicón respectiva, al propagarse hasta la otra frontera llegue con el valor de la otra condición. De no hacerlo, se cambio el valor de $E$ y se repite el proceso.
El esquema de Numerov para la propagación es, dada una ecuación diferencial ordinaria de segundo orden sin termino lineal, $$ \frac{d^2y(x)}{dx^2} + K(x)y(x) = 0, $$ su esquema discreto se plantea como $$ y_{i+2} = \frac{\left(2-\frac{5h^2 K_{i+2}}{6} \right)y_{i+1} - \left(1+\frac{h^2 K_{i}}{12} \right) y_i}{ \left(1+\frac{h^2 K_{i+2}}{12} \right)}. $$
Para nuestro caso, la función $K(x)$ posee dependencia de la energía, y todos los demás elementos son conocidos (la función solución, de onda en este caso, se construye iterativamente dado un valor de energía), por lo cual se puede definir una función que dada una energía como argumento, genere el valor de la función de onda en la frontera opuesta. Este valor en la frontera, por las condiciones establecidas por los potenciales y la condición de integrabilidad, debe ser $\psi(x_{izq}) = \psi(x_{der}) = 0$.
Tambien es usual usar como definición de la función, la diferencia de las derivadas logaritmicas en un punto intermedio, realizando la propagación desde ambos extremos. Por facilidad, se optará por la metodología clásica del disparo, que ofrece menor cantidad de operaciones y no presenta ambigüedades de definición, que consta de comparar con el valor esperado en la frontera opuesta, $Numerov(E) = 0$.
Este problema de búsqueda de raíces requiere conocer dos condiciones iniciales, la segunda debe tomarse acorde a la paridad de la energía buscada. Para paridad par, el segundo valor es positivo, mientras que para paridad impar el segundo valor es negativo.
La función estacionario se define para buscar el punto de empate adecuado para el análisis de la continuidad de la función de onda y su derivada. Como criterio, se buscan los turning points clásicos, donde $E=V(x)$.
Step5: Para la ecuación de Schrödinger, $K(x) = E - V(x)$.
Step6: Para ilustrar el método del disparo, se presenta el siguiente control. La idea es ajustar para una configuración de potencial $V_0$, ancho $a$, longitud total $L$ y numero de elementos de discretización $n$, la energía $E$ adecuada para observar continuidad en la función de onda y su derivada en todo el intervalo. Dada la implementación del método se verifica dicha continuidad en el limite de la primera pared. Más adelante se define de manera general como seleccionar el punto de comparación.
Step7: La anterior ilustración tambien permite observar los efectos del potencial sobre un paquete de onda cuando la energía es menor o mayor que el potencial. Se puede observar como para $E>V_0$, se obtiene una función de onda oscilante en todo el intervalo, equivalente a una particula libre.
Se define la función E_N para el calculo de las autoenergías, que pueden incluirse en la función Phi para generar las autofunciones. Esta función puede explorarse en el notebook de Estados ligados.
Step8: El siguiente bloque define la base de los controles, fun_contenedor_base, para el notebook conceptual, Estados ligados, donde los parametros de máxima energía de búsqueda, longitud de interes, número de estado y particiones son comunes. | Python Code:
from vis_int import *
import vis_int
print(dir(vis_int))
Explanation: <div class="alert alert-success">
This IPython notebook depends on the `vis_int` module, which is illustrated in the [Visualización e Interacción](vis_int.ipybn) notebook.
</div>
End of explanation
def biseccion(funcion, a, b, tol_x = 1e-6, factor_ty = 1e2):
f0 = funcion(a)
f1 = funcion(b)
if abs(f0) < tol_x: # Se verifica que los extremos sean raices
return a
elif abs(f1) < tol_x:
return b
else: # Si los extremos no son raices, se bisecta.
c = (a + b) / 2.0
f2 = funcion(c)
while abs(f2) >= tol_x and abs(c - b) >= tol_x:
if f2 * f0 < 0 :
b = c
f1 = f2
else:
a = c
f0 = f2
c = (a + b) / 2.0
f2 = funcion(c)
if abs(f2) < tol_x * factor_ty: # Se verifica que efectivamente sea raiz
return c
else: # En caso de ser asintota vertical con cambio de signo
return None
def incremental(funcion, a, b, delta_x = 1e-4, tol_x = 1e-6):
c0 = a
f0 = funcion(c0)
c1 = c0 + delta_x
c = None
while c == None and c1 <=b: # Si no se ha hallado raíz y se esta en el intervalo, avance
f1 = funcion(c1)
while f0*f1 > 0 and c1 <= b:
c0 = c1
f0 = f1
c1 = c1 + delta_x
f1 = funcion(c1)
if c1 > b: # Final del intervalo, equivalente f0*f1 > 0
return None
else: # Sub-intervalo con cambio de signo
c = biseccion(funcion, c0, c1, tol_x) # Se invoca bisección para mejorar aproximación
if c == None: # Si el candidato era discontinuidad, incremental avanza
c0 = c1
f0 = f1
c1 = c1 + delta_x
return c
Explanation: Numerical Techniques
The models presented in quantum mechanics and modern physics courses frequently rely on special functions and mathematical solution techniques whose mechanics can distract from the goal of the course and from a proper understanding of the physical concepts hidden among the mathematical tools.
With this in mind, the use of simple numerical techniques can significantly support such courses, with the advantage of reducing everything to simple mathematical forms (numerical methods lead to approximations built from arithmetic operations) and simple functions (functions are approximated by much simpler ones, generally polynomials). This reduction also allows many different developments to be handled with a single technique, since the differences lie not so much in the details of the original model (on which the analytical solutions do depend) but in the details of the general type of approximation.
Numerical solutions are presented for the following problems, useful for 1D quantum mechanics problems.
1. Root finding.
+ Bisection.
+ Incremental search.
1. Differential equations with boundary values.
+ Shooting method with the Numerov algorithm.
1. Nondimensionalization.
+ Rydberg atomic units.
Root finding
Root-finding problems consist of finding the values at which the function of interest evaluates to zero. In quantum mechanics we have the particular need of computing roots to determine the energy eigenvalues of a system in its continuous formulation (direct-space representation). In these systems of interest, bound states, the energy of the system lies between the minimum and the maximum of the potential energy to which it is subjected in space, $$ V_{min} \leq E_n \leq V_{max}.$$
If the maximum is $V_{max} \rightarrow \infty$, the system has infinitely many eigenvalues, which are found with the condition $V_{min} \leq E_n$.
In either case, we want to find these eigenvalues in order, which leads us to prefer bracketing (closed) search methods over open ones: in the latter, the choice of an initial value guarantees neither a search near that value nor in a given direction, whereas in bracketing methods the search can be restricted to a region known to contain the root (energy eigenvalue).
The combined use of the incremental search method and the bisection method, with a suitable energy step, achieves the goal of finding all the eigenvalues of the system (when the energy limits are finite) in order, and with arbitrary precision (limited only by machine precision). The search starts over the interval with the incremental method; when a candidate interval for a root is found (an interval whose endpoints have opposite signs), the result is refined by applying bisection to that candidate interval.
Iterative form of the incremental search: $E_{i+1} = E_i + \Delta E$.
Iterative form of bisection: $E_{i+1} = \frac{E_i + E_{i-1}}{2}$.
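As a quick sanity check of the two routines above (not part of the original notebook), they can be applied to a simple function with a known root:
# incremental(lambda x: x**2 - 2.0, 0.0, 2.0)   # ≈ 1.41421 (sqrt(2)): the sweep brackets the sign change, bisection refines it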
End of explanation
def raiz_n(funcion, a, b, N, delta_x = 1e-4, tol_x = 1e-6):
c0 = a
cont_raiz = 0
while c0 < b and cont_raiz < N:
c = incremental(funcion, c0, b, delta_x, tol_x)
if c == None: # Si incremental termina en 'None', no hay más raíces
return None
cont_raiz = cont_raiz + 1
c0 = c + delta_x
if cont_raiz == N:
return c
else:
return None
Explanation: Note that the bisection implementation includes an extra check beyond the traditional codes, in order to validate whether the root candidate really is a root. This is required because the function associated with the energy discretization may have discontinuities around which it changes sign.
Recall that classical numerical-methods theory states that these methods apply to continuous functions. In our case, where we expect sign-changing discontinuities caused by divergences to infinity, they can be removed systematically by noting that, as the candidate is approached (interval size below the tolerance), the function value there remains much larger than the tolerance and keeps growing from one evaluation to the next.
\begin{equation}
E \in [E_i, E_{i+1}] \wedge \Delta E \leq tol \wedge \begin{cases}
f(E) > tol, & \qquad\text{discontinuity}\\
f(E) \leq tol, & \qquad\text{root (eigenvalue)}
\end{cases}
\end{equation}
Once a root is obtained, the incremental search continues advancing until it finds the next candidate interval, to which bisection is applied again to decide whether it is a root or a discontinuity. This process continues up to the upper energy limit, $V_{max}$.
To find a specific eigenvalue, all the previous eigenvalues must be found first, so an auxiliary function is needed to manage this progression for a given mode. The progressive march over the energies has the advantage, compared with matrix eigenvalue techniques, of naturally producing the eigenvalues in order.
End of explanation
def trascendental(E, V_0, a):
k2 = sqrt(V_0 - E)
return sqrt(E) - k2*tan(k2*a/2)
def int_raiz_trasc(V_0:(.1,20.,.1), a:(.1,15.,.1), N:(1, 6, 1), n:(1, 100, 1)):
f = lambda E: trascendental(E, V_0, a)
try:
r = raiz_n(f, 0, V_0, N)
E, tr = discretizar(f, 0, V_0, n)
graficar_funcion(E, tr)
graficar_punto_texto(r, 0, 'Autovalor ' + str(N))
display(Latex('\(E_' + str(N) + '=' + str(r) + '\)'))
plt.grid(True)
plt.show()
display(HTML('<div class="alert alert-warning">'+\
'<strong>Advertencia</strong> Alrededor de las discontinuidades'+\
' el gráfico no es representado fielmente. </div>'))
except ValueError:
display(HTML('<div class="alert alert-danger">'+\
'<strong>Error</strong> Se evaluo la función en una discontinuidad.'+\
'</div>'))
interact(int_raiz_trasc)
Explanation: The technique is illustrated below with the transcendental function of the symmetric finite-well problem for even parity, which in dimensionless form reads:
$$ \sqrt{E} - \sqrt{V_0 - E} \tan\left( \frac{\sqrt{V_0 - E}\,a}{2} \right) = 0, $$
where $a$ is the width of the well and $V_0$ is the depth of the well (referenced from zero by convention).
End of explanation
def estacionario(K, L, h):
x = -L/2
while x < L/2 and K(x) <= 0:
x = x + h
if x >= L/2:
return L/2
elif x == -L/2:
return -L/2
else:
return x - h
def numerov(K_ex, L, E, N, n):
h = L / n
K = lambda x: K_ex(E, x)
p_est = estacionario(K, L, h)
x = -L/2
phi0 = 0.0
x = x + h
phi1 = 1e-10
x = x + h
while x <= p_est :
term0 = 1 + h**2 * K(x - h) / 12
term1 = 2 - 5 * h**2 * K( x) / 6
term2 = 1 + h**2 * K(x + h) / 12
aux = phi1
phi1 = (term1 * phi1 - term0 * phi0) / term2
phi0 = aux
x = x + h
phi_i_1 = phi1
phi_i_0 = phi0
x = L/2
phi0 = 0.0
x = x - h
phi1 = 1e-10 * (-1)**(N%2 + 1)
x = x - h
while x > p_est :
term0 = 1 + h**2 * K(x + h) / 12
term1 = 2 - 5 * h**2 * K(x) / 6
term2 = 1 + h**2 * K(x - h) / 12
aux = phi1
phi1 = (term1 * phi1 - term0 * phi0) / term2
phi0 = aux
x = x - h
phi_d_1 = phi_i_1
phi_d_0 = phi0 * phi_i_1 / phi1
return (2*phi_d_1 - (phi_i_0+phi_d_0)) / (phi_d_0 - phi_i_0)
def Phi(K_ex, L, E, N, n):
h = L / n
K = lambda x: K_ex(E, x)
p_est = estacionario(K, L, h)
x = -L/2
x_g = [x]
phi0 = 0.0
phi_g = [phi0]
x = x + h
phi1 = 1e-10
x_g.append(x)
phi_g.append(phi1)
x = x + h
while x <= p_est:
term0 = 1 + h**2 * K(x - h) / 12
term1 = 2 - 5 * h**2 * K(x) / 6
term2 = 1 + h**2 * K(x + h) / 12
aux = phi1
phi1 = (term1 * phi1 - term0 * phi0) / term2
x_g.append(x)
phi_g.append(phi1)
phi0 = aux
x = x + h
x = L/2
phi0 = 0.0
x_gd = [x]
phi_gd = [phi0]
x = x - h
phi1 = 1e-10 * (-1)**(N%2 + 1)
x_gd.insert(0, x)
phi_gd.insert(0, phi1)
x = x - h
while x > p_est:
term0 = 1 + h**2 * K(x + h) / 12
term1 = 2 - 5 * h**2 * K(x) / 6
term2 = 1 + h**2 * K(x - h) / 12
aux = phi1
phi1 = (term1 * phi1 - term0 * phi0) / term2
x_gd.insert(0, x)
phi_gd.insert(0, phi1)
phi0 = aux
x = x - h
n_d = len(phi_gd)
phi_gd = [phi_gd[i] * phi_g[-1] / phi1 for i in range(n_d)]
x_g.extend(x_gd)
phi_g.extend(phi_gd)
return x_g, phi_g
Explanation: Differential equations with boundary-value problems
The Schrödinger equation, whether time-dependent or time-independent, is a second-order differential equation. Removing the time dependence and restricting to 1D problems, the differential equation becomes ordinary. For this type of equation (second order, one variable) specific methods can be applied that provide high-order approximations at low computational cost; examples are the Verlet and Numerov methods combined with the shooting method.
From the Schrödinger equation one sees that, if the known quantities are filled in with estimates, the energy value $E$ that qualifies as an eigenvalue is the one that makes the boundary conditions of the problem hold, and therefore one way to solve the problem is as a root-finding problem. The shooting method thus adjusts $E$ so that, starting from one boundary with its condition, the solution propagated to the other boundary arrives with the value required by the other condition. If it does not, the value of $E$ is changed and the process is repeated.
The Numerov propagation scheme for a second-order ordinary differential equation with no first-derivative term, $$ \frac{d^2y(x)}{dx^2} + K(x)y(x) = 0, $$ has the discrete form $$ y_{i+2} = \frac{\left(2-\frac{5h^2 K_{i+1}}{6} \right)y_{i+1} - \left(1+\frac{h^2 K_{i}}{12} \right) y_i}{ \left(1+\frac{h^2 K_{i+2}}{12} \right)}. $$
In our case the function $K(x)$ depends on the energy, and all the other elements are known (the solution, here the wave function, is built iteratively for a given energy value), so we can define a function that, given an energy as argument, produces the value of the wave function at the opposite boundary. Because of the potentials and the integrability condition, this boundary value must be $\psi(x_{left}) = \psi(x_{right}) = 0$.
It is also common to define the function as the difference of the logarithmic derivatives at an intermediate point, propagating from both ends. For simplicity we adopt the classical shooting methodology, which requires fewer operations and has no ambiguity of definition, and consists of comparing with the expected value at the opposite boundary, $Numerov(E) = 0$.
This root-finding problem requires two initial conditions; the second one must be chosen according to the parity of the sought energy: for even parity the second value is positive, while for odd parity it is negative.
The function estacionario is defined to find a suitable matching point for analyzing the continuity of the wave function and its derivative. As a criterion, the classical turning points, where $E=V(x)$, are used.
End of explanation
def K_Schr(V_0, a):
return lambda e, x: e - potencial(V_0, a, x)
Explanation: For the Schrödinger equation, $K(x) = E - V(x)$.
End of explanation
def disparo(V_0, a, L, n, N, E):
x, phi = Phi(K_Schr(V_0, a), L, E, N, n)
V = [potencial(V_0, a, i) for i in x]
graficar_potencial(x, V)
graficar_autofuncion(x, phi, V_0)
graficar_autovalor(L, E)
plt.show()
def presion_disparo(boton):
disparo(V_0, a.value, L, n.value, N, E.value)
interact(disparo, V_0=(0., 20., .5), a=(.5, 10., .1), L=(10., 50., 5.), n=(100, 500, 50), N=fixed(1), E=(.0, 5., .01))
Explanation: To illustrate the shooting method, the following interactive control is provided. The idea is, for a given configuration of potential $V_0$, width $a$, total length $L$ and number of discretization elements $n$, to adjust the energy $E$ until the wave function and its derivative are continuous over the whole interval. Given the implementation of the method, this continuity is checked at the edge of the first wall. Further below, the choice of the comparison point is defined in a general way.
End of explanation
def E_N(K, E_max, L, N, n, delta_e = 1e-4, tol_e = 1e-6):
Numerov = lambda e: numerov(K, L, e, N, n)
return raiz_n(Numerov, tol_e, E_max, N, delta_e, tol_e)
def Solve_Schr(Vx, E_max, L, N, n):
x_vec, V_vec = discretizar(Vx, -L/2, L/2, n)
V_min = min(V_vec)
K = lambda e, x : e - Vx(x) + V_min
E = E_N(K, E_max - V_min, L, N, n)
if E != None:
x_vec, phi = Phi(K, L, E, N, n)
E = E + V_min
display(Latex('\(E_{' + str(N) + '} = ' + str(E) + '\)'))
V_vec = [Vx(i) for i in x_vec]
graficar_potencial(x_vec, V_vec)
V_max = max(V_vec)
V_ref = max(abs(V_min), V_max)
graficar_autofuncion(x_vec, phi, V_ref)
graficar_autovalor(L, E)
plt.show()
return E, x_vec, phi
else:
display(HTML('<div class="alert alert-danger">'+\
'<strong>Error</strong> Se evaluo la función en una discontinuidad.'+\
'</div>'))
Explanation: The illustration above also shows the effect of the potential on a wave packet when the energy is below or above the potential. One can see that for $E>V_0$ the wave function oscillates over the whole interval, equivalent to a free particle.
The function E_N is defined to compute the eigenenergies, which can be fed into the function Phi to generate the eigenfunctions. This function can be explored in the Estados ligados (bound states) notebook.
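A hypothetical usage sketch (assuming `potencial` from vis_int provides the finite-well potential used above; the numbers are illustrative):
# Vx = lambda x: potencial(10., 2., x)                              # depth 10, width 2, dimensionless units
# E1, x_vec, phi = Solve_Schr(Vx, E_max=10., L=30., N=1, n=300)     # ground-state energy and wave function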
End of explanation
def fun_contenedor_base():
E_max = FloatSlider(value=10., min = 1., max=20., step=1., description= '$E_{max}$')
L = FloatSlider(value = 30., min = 10., max = 100., step= 1., description='L')
N = IntSlider(value=1, min=1, max= 6, step=1, description='N')
n = IntSlider(value= 300, min= 100, max= 500, step=20, description='n')
return Box(children=[E_max, L, N, n])
Contenedor_base = fun_contenedor_base()
display(Contenedor_base)
def agregar_control(base, control):
controles = list(base.children)
controles.append(control)
base.children = tuple(controles)
control_prueba = fun_contenedor_base()
agregar_control(control_prueba, Text(description='Casilla de texto para prueba'))
display(control_prueba)
Explanation: The next block defines the base widget container, fun_contenedor_base, for the conceptual notebook Estados ligados, where the maximum search energy, the length of interest, the state number and the number of partitions are shared parameters.
End of explanation |
15,726 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step2: Imports
Step3: tf.data.Dataset
Step4: Let's have a look at the data
Step5: Keras model
If you are not sure what cross-entropy, dropout, softmax or batch-normalization mean, head here for a crash-course
Step6: Train and validate the model
Step7: Visualize predictions | Python Code:
BATCH_SIZE = 128
EPOCHS = 10
training_images_file = 'gs://mnist-public/train-images-idx3-ubyte'
training_labels_file = 'gs://mnist-public/train-labels-idx1-ubyte'
validation_images_file = 'gs://mnist-public/t10k-images-idx3-ubyte'
validation_labels_file = 'gs://mnist-public/t10k-labels-idx1-ubyte'
Explanation: <a href="https://colab.research.google.com/github/GoogleCloudPlatform/tensorflow-without-a-phd/blob/master/tensorflow-mnist-tutorial/keras_01_mnist.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Parameters
End of explanation
import os, re, math, json, shutil, pprint
import PIL.Image, PIL.ImageFont, PIL.ImageDraw
import IPython.display as display
import numpy as np
import tensorflow as tf
from matplotlib import pyplot as plt
print("Tensorflow version " + tf.__version__)
#@title visualization utilities [RUN ME]
'''This cell contains helper functions used for visualization
and downloads only. You can skip reading it. There is very
little useful Keras/Tensorflow code here.'''
# Matplotlib config
plt.ioff()
plt.rc('image', cmap='gray_r')
plt.rc('grid', linewidth=1)
plt.rc('xtick', top=False, bottom=False, labelsize='large')
plt.rc('ytick', left=False, right=False, labelsize='large')
plt.rc('axes', facecolor='F8F8F8', titlesize="large", edgecolor='white')
plt.rc('text', color='a8151a')
plt.rc('figure', facecolor='F0F0F0', figsize=(16,9))
# Matplotlib fonts
MATPLOTLIB_FONT_DIR = os.path.join(os.path.dirname(plt.__file__), "mpl-data/fonts/ttf")
# pull a batch from the datasets. This code is not very nice, it gets much better in eager mode (TODO)
def dataset_to_numpy_util(training_dataset, validation_dataset, N):
# get one batch from each: 10000 validation digits, N training digits
batch_train_ds = training_dataset.unbatch().batch(N)
# eager execution: loop through datasets normally
if tf.executing_eagerly():
for validation_digits, validation_labels in validation_dataset:
validation_digits = validation_digits.numpy()
validation_labels = validation_labels.numpy()
break
for training_digits, training_labels in batch_train_ds:
training_digits = training_digits.numpy()
training_labels = training_labels.numpy()
break
else:
v_images, v_labels = validation_dataset.make_one_shot_iterator().get_next()
t_images, t_labels = batch_train_ds.make_one_shot_iterator().get_next()
# Run once, get one batch. Session.run returns numpy results
with tf.Session() as ses:
(validation_digits, validation_labels,
training_digits, training_labels) = ses.run([v_images, v_labels, t_images, t_labels])
# these were one-hot encoded in the dataset
validation_labels = np.argmax(validation_labels, axis=1)
training_labels = np.argmax(training_labels, axis=1)
return (training_digits, training_labels,
validation_digits, validation_labels)
# create digits from local fonts for testing
def create_digits_from_local_fonts(n):
font_labels = []
img = PIL.Image.new('LA', (28*n, 28), color = (0,255)) # format 'LA': black in channel 0, alpha in channel 1
font1 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'DejaVuSansMono-Oblique.ttf'), 25)
font2 = PIL.ImageFont.truetype(os.path.join(MATPLOTLIB_FONT_DIR, 'STIXGeneral.ttf'), 25)
d = PIL.ImageDraw.Draw(img)
for i in range(n):
font_labels.append(i%10)
d.text((7+i*28,0 if i<10 else -4), str(i%10), fill=(255,255), font=font1 if i<10 else font2)
font_digits = np.array(img.getdata(), np.float32)[:,0] / 255.0 # black in channel 0, alpha in channel 1 (discarded)
font_digits = np.reshape(np.stack(np.split(np.reshape(font_digits, [28, 28*n]), n, axis=1), axis=0), [n, 28*28])
return font_digits, font_labels
# utility to display a row of digits with their predictions
def display_digits(digits, predictions, labels, title, n):
fig = plt.figure(figsize=(13,3))
digits = np.reshape(digits, [n, 28, 28])
digits = np.swapaxes(digits, 0, 1)
digits = np.reshape(digits, [28, 28*n])
plt.yticks([])
plt.xticks([28*x+14 for x in range(n)], predictions)
plt.grid(b=None)
for i,t in enumerate(plt.gca().xaxis.get_ticklabels()):
if predictions[i] != labels[i]: t.set_color('red') # bad predictions in red
plt.imshow(digits)
plt.grid(None)
plt.title(title)
display.display(fig)
# utility to display multiple rows of digits, sorted by unrecognized/recognized status
def display_top_unrecognized(digits, predictions, labels, n, lines):
idx = np.argsort(predictions==labels) # sort order: unrecognized first
for i in range(lines):
display_digits(digits[idx][i*n:(i+1)*n], predictions[idx][i*n:(i+1)*n], labels[idx][i*n:(i+1)*n],
"{} sample validation digits out of {} with bad predictions in red and sorted first".format(n*lines, len(digits)) if i==0 else "", n)
def plot_learning_rate(lr_func, epochs):
xx = np.arange(epochs+1, dtype=float)  # plain float; np.float is removed in newer NumPy versions
y = [lr_decay(x) for x in xx]
fig, ax = plt.subplots(figsize=(9, 6))
ax.set_xlabel('epochs')
ax.set_title('Learning rate\ndecays from {:0.3g} to {:0.3g}'.format(y[0], y[-2]))
ax.minorticks_on()
ax.grid(True, which='major', axis='both', linestyle='-', linewidth=1)
ax.grid(True, which='minor', axis='both', linestyle=':', linewidth=0.5)
ax.step(xx,y, linewidth=3, where='post')
display.display(fig)
class PlotTraining(tf.keras.callbacks.Callback):
def __init__(self, sample_rate=1, zoom=1):
self.sample_rate = sample_rate
self.step = 0
self.zoom = zoom
self.steps_per_epoch = 60000//BATCH_SIZE
def on_train_begin(self, logs={}):
self.batch_history = {}
self.batch_step = []
self.epoch_history = {}
self.epoch_step = []
self.fig, self.axes = plt.subplots(1, 2, figsize=(16, 7))
plt.ioff()
def on_batch_end(self, batch, logs={}):
if (batch % self.sample_rate) == 0:
self.batch_step.append(self.step)
for k,v in logs.items():
# do not log "batch" and "size" metrics that do not change
# do not log training accuracy "acc"
if k=='batch' or k=='size':# or k=='acc':
continue
self.batch_history.setdefault(k, []).append(v)
self.step += 1
def on_epoch_end(self, epoch, logs={}):
plt.close(self.fig)
self.axes[0].cla()
self.axes[1].cla()
self.axes[0].set_ylim(0, 1.2/self.zoom)
self.axes[1].set_ylim(1-1/self.zoom/2, 1+0.1/self.zoom/2)
self.epoch_step.append(self.step)
for k,v in logs.items():
# only log validation metrics
if not k.startswith('val_'):
continue
self.epoch_history.setdefault(k, []).append(v)
display.clear_output(wait=True)
for k,v in self.batch_history.items():
self.axes[0 if k.endswith('loss') else 1].plot(np.array(self.batch_step) / self.steps_per_epoch, v, label=k)
for k,v in self.epoch_history.items():
self.axes[0 if k.endswith('loss') else 1].plot(np.array(self.epoch_step) / self.steps_per_epoch, v, label=k, linewidth=3)
self.axes[0].legend()
self.axes[1].legend()
self.axes[0].set_xlabel('epochs')
self.axes[1].set_xlabel('epochs')
self.axes[0].minorticks_on()
self.axes[0].grid(True, which='major', axis='both', linestyle='-', linewidth=1)
self.axes[0].grid(True, which='minor', axis='both', linestyle=':', linewidth=0.5)
self.axes[1].minorticks_on()
self.axes[1].grid(True, which='major', axis='both', linestyle='-', linewidth=1)
self.axes[1].grid(True, which='minor', axis='both', linestyle=':', linewidth=0.5)
display.display(self.fig)
Explanation: Imports
End of explanation
AUTO = tf.data.experimental.AUTOTUNE
def read_label(tf_bytestring):
label = tf.io.decode_raw(tf_bytestring, tf.uint8)
label = tf.reshape(label, [])
label = tf.one_hot(label, 10)
return label
def read_image(tf_bytestring):
image = tf.io.decode_raw(tf_bytestring, tf.uint8)
image = tf.cast(image, tf.float32)/256.0
image = tf.reshape(image, [28*28])
return image
def load_dataset(image_file, label_file):
imagedataset = tf.data.FixedLengthRecordDataset(image_file, 28*28, header_bytes=16)
imagedataset = imagedataset.map(read_image, num_parallel_calls=16)
labelsdataset = tf.data.FixedLengthRecordDataset(label_file, 1, header_bytes=8)
labelsdataset = labelsdataset.map(read_label, num_parallel_calls=16)
dataset = tf.data.Dataset.zip((imagedataset, labelsdataset))
return dataset
def get_training_dataset(image_file, label_file, batch_size):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
dataset = dataset.shuffle(5000, reshuffle_each_iteration=True)
dataset = dataset.repeat() # Mandatory for Keras for now
dataset = dataset.batch(batch_size, drop_remainder=True) # drop_remainder is important on TPU, batch size must be fixed
dataset = dataset.prefetch(AUTO) # fetch next batches while training on the current one (-1: autotune prefetch buffer size)
return dataset
def get_validation_dataset(image_file, label_file):
dataset = load_dataset(image_file, label_file)
dataset = dataset.cache() # this small dataset can be entirely cached in RAM, for TPU this is important to get good performance from such a small dataset
dataset = dataset.batch(10000, drop_remainder=True) # 10000 items in eval dataset, all in one batch
dataset = dataset.repeat() # Mandatory for Keras for now
return dataset
# instantiate the datasets
training_dataset = get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
validation_dataset = get_validation_dataset(validation_images_file, validation_labels_file)
# For TPU, we will need a function that returns the dataset
training_input_fn = lambda: get_training_dataset(training_images_file, training_labels_file, BATCH_SIZE)
validation_input_fn = lambda: get_validation_dataset(validation_images_file, validation_labels_file)
Explanation: tf.data.Dataset: parse files and prepare training and validation datasets
Please read the best practices for building input pipelines with tf.data.Dataset
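A quick way to sanity-check the pipeline (a small sketch, assuming eager execution) is to pull one batch and look at its shapes:
for images, labels in training_dataset.take(1):
    print(images.shape, labels.shape)   # expect (128, 784) flattened images and (128, 10) one-hot labels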
End of explanation
N = 24
(training_digits, training_labels,
validation_digits, validation_labels) = dataset_to_numpy_util(training_dataset, validation_dataset, N)
display_digits(training_digits, training_labels, training_labels, "training digits and their labels", N)
display_digits(validation_digits[:N], validation_labels[:N], validation_labels[:N], "validation digits and their labels", N)
font_digits, font_labels = create_digits_from_local_fonts(N)
Explanation: Let's have a look at the data
End of explanation
model = tf.keras.Sequential(
[
tf.keras.layers.Input(shape=(28*28,)),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='sgd',
loss='categorical_crossentropy',
metrics=['accuracy'])
# print model layers
model.summary()
# utility callback that displays training curves
plot_training = PlotTraining(sample_rate=10, zoom=1)
Explanation: Keras model
If you are not sure what cross-entropy, dropout, softmax or batch-normalization mean, head here for a crash-course: Tensorflow and deep learning without a PhD
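As a purely illustrative variant (not part of this tutorial), those ideas would slot into the same Sequential API like this:
# a deeper, regularized sketch: Dense -> BatchNorm -> ReLU -> Dropout -> softmax readout
deeper_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28*28,)),
    tf.keras.layers.Dense(200, use_bias=False),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Activation('relu'),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(10, activation='softmax')
])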
End of explanation
steps_per_epoch = 60000//BATCH_SIZE # 60,000 items in this dataset
print("Steps per epoch: ", steps_per_epoch)
history = model.fit(training_dataset, steps_per_epoch=steps_per_epoch, epochs=EPOCHS,
validation_data=validation_dataset, validation_steps=1, callbacks=[plot_training])
Explanation: Train and validate the model
End of explanation
# recognize digits from local fonts
probabilities = model.predict(font_digits, steps=1)
predicted_labels = np.argmax(probabilities, axis=1)
display_digits(font_digits, predicted_labels, font_labels, "predictions from local fonts (bad predictions in red)", N)
# recognize validation digits
probabilities = model.predict(validation_digits, steps=1)
predicted_labels = np.argmax(probabilities, axis=1)
display_top_unrecognized(validation_digits, predicted_labels, validation_labels, N, 7)
Explanation: Visualize predictions
End of explanation |
15,727 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Feature
Step1: Config
Automatically discover the paths to various data folders and compose the project structure.
Step2: Identifier for storing these features on disk and referring to them later.
Step3: Load Data
Original question datasets.
Step4: Build features
Generate a graph of questions and their neighbors.
Step5: Compute PageRank.
Step6: Extract final features
Step7: Save features | Python Code:
from pygoose import *
import hashlib
Explanation: Feature: PageRank on Question Co-Occurrence Graph
This is a "magic" (leaky) feature that exploits the patterns in question co-occurrence graph (based on the kernel by @zfturbo).
Imports
This utility package imports numpy, pandas, matplotlib and a helper kg module into the root namespace.
End of explanation
project = kg.Project.discover()
Explanation: Config
Automatically discover the paths to various data folders and compose the project structure.
End of explanation
feature_list_id = 'magic_pagerank'
Explanation: Identifier for storing these features on disk and referring to them later.
End of explanation
df_train = pd.read_csv(project.data_dir + 'train.csv').fillna('')
df_test = pd.read_csv(project.data_dir + 'test.csv').fillna('')
Explanation: Load Data
Original question datasets.
End of explanation
def generate_qid_graph_table(row):
hash_key1 = hashlib.md5(row['question1'].encode('utf-8')).hexdigest()
hash_key2 = hashlib.md5(row['question2'].encode('utf-8')).hexdigest()
qid_graph.setdefault(hash_key1, []).append(hash_key2)
qid_graph.setdefault(hash_key2, []).append(hash_key1)
qid_graph = {}
_ = df_train.apply(generate_qid_graph_table, axis=1)
_ = df_test.apply(generate_qid_graph_table, axis=1)
Explanation: Build features
Generate a graph of questions and their neighbors.
End of explanation
def pagerank():
MAX_ITER = 20
d = 0.85
# Initializing: every node gets a uniform value!
pagerank_dict = {i: 1 / len(qid_graph) for i in qid_graph}
num_nodes = len(pagerank_dict)
for iter in range(0, MAX_ITER):
for node in qid_graph:
local_pr = 0
for neighbor in qid_graph[node]:
local_pr += pagerank_dict[neighbor] / len(qid_graph[neighbor])
pagerank_dict[node] = (1 - d) / num_nodes + d * local_pr
return pagerank_dict
pagerank_dict = pagerank()
Explanation: Compute PageRank.
End of explanation
def get_pagerank_value(pair):
q1 = hashlib.md5(pair[0].encode('utf-8')).hexdigest()
q2 = hashlib.md5(pair[1].encode('utf-8')).hexdigest()
return [pagerank_dict[q1], pagerank_dict[q2]]
pagerank_train = kg.jobs.map_batch_parallel(
df_train[['question1', 'question2']].as_matrix(),
item_mapper = get_pagerank_value,
batch_size=1000,
)
pagerank_test = kg.jobs.map_batch_parallel(
df_test[['question1', 'question2']].as_matrix(),
item_mapper = get_pagerank_value,
batch_size=1000,
)
X_train = np.array(pagerank_train) * 1000
X_test = np.array(pagerank_test) * 1000
print('X train:', X_train.shape)
print('X test: ', X_test.shape)
Explanation: Extract final features
End of explanation
feature_names = [
'pagerank_q1',
'pagerank_q2',
]
project.save_features(X_train, X_test, feature_names, feature_list_id)
Explanation: Save features
End of explanation |
15,728 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
12d - AgriPV
Step1: General Parameters and Variables
Step2: <a id='step1'></a>
1. Loop to Raytrace and sample irradiance at where Three would be located
Step3: <a id='step2'></a>
2. Calculate GHI for Comparisons
<a id='step2a'></a>
Option 1
Step4: <a id='step2b'></a>
Option 2
Step5: <a id='step3'></a>
3. Compile Results
Step6: Let's calculate some relevant metrics for irradiance
Step7: <a id='step4'></a>
4. Plot results
Step8: <a id='step5'></a>
5. Raytrace with Tree Geometry
<a id='step5a'></a>
Tree parameters
Step9: <a id='step5b'></a>
Loop to Raytrace and Sample Irradiance at Each side of the Tree (N, S, E, W)
Step10: <a id='step5c'></a>
Single simulation until MakeOct for Getting a PRETTY IMAGE
Step11: Now you can view the Geometry by navigating on the terminal to the testfolder, and using the octfile name generated above
rvu -vf views\front.vp -e .0265652 -vp 2 -21 2.5 -vd 0 1 0 Coffee_ch_1.8_xgap_1.2_tilt_18_pitch_2.2.oct
<a id='step6'></a>
6. Compile Results Trees
Step12: <a id='step7'></a>
7. Plot | Python Code:
import bifacial_radiance
import os
from pathlib import Path
import numpy as np
import pandas as pd
testfolder = str(Path().resolve().parent.parent / 'bifacial_radiance' / 'TEMP' / 'Tutorial_18')
if not os.path.exists(testfolder):
os.makedirs(testfolder)
resultsfolder = os.path.join(testfolder, 'results')
Explanation: 12d - AgriPV: Designing for adequate crop shading
This journal supports the process of designing a solar panel configuration to appropriately represent ideal shading conditions for coffee production underneath elevated solar panels.
The coffee trees would be under and/or in between elevated solar panels (panels would be elevated 6, 8, or 10 ft tall). The light/shade analysis helps determine appropriate panel heights and spacings to achieve appropriate shading. The desired level of shading is a maximum of 30% (i.e., 70% of normal, unshaded light).
Details:
* The coffee plants are expected to be \~5 ft tall. (5-6 ft tall and 3 ft wide (<a href="https://realgoodcoffeeco.com/blogs/realgoodblog/how-to-grow-a-coffee-plant-at-home#:~:text=However%2C%20you%20must%20keep%20in,tall%20and%203%20feet%20wide">Reference</a>)
* Location: 18.202142, -66.759187; (18°12'07.7"N 66°45'33.1"W)
* Desired area of initial analysis: 400-600 ft2 (37-55 m2)
* Racking: Fixed-tilt panels
* Panel size: 3.3 feet x 5.4 feet (1m x 1.64m)
* Analysis variations:
<ul> <li> a. Panel height: would like to examine heights of 6 ft, 8 ft, and 10 ft hub height.
<li> b. Panel spacing (N/W): would like to look at multiple distances (e.g., 2 ft, 3 ft, 4 ft) </li>
<li> c. Inter-Row spacing (E/W): would like to look at multiple distances (e.g., 2 ft, 3 ft, 4 ft)! </li>
Steps on this Journal:
<ol>
<li> <a href='#step1'> <u><b>Loop to Raytrace and sample irradiance at where the tree would be located </b></u></li>
<li> <a href='#step2'> Calculate GHI for Comparisons </li>
<ul><li> <a href='#step2a'> Option 1: Raytrace of Empty Field </li></ul>
<ul><li> <a href='#step2b'> Option 2: Weather File </li></ul>
<li> <a href='#step3'> Compile Results </li>
<li> <a href='#step4'> Plot Results</li>
<li> <a href='#step5'> <u><b> Raytrace with Tree Geometry </b></u></li>
<ul><li> <a href='#step5a'>Tree Parameters</li></ul>
<ul><li> <a href='#step5b'>Loop to Raytrace and Sample Irradiance at Each side of the Tree (N, S, E, W)</li></ul>
<ul><li> <a href='#step5c'>Single simulation until MakeOct for Getting a PRETTY IMAGE </li></ul>
<li> <a href='#step6'> Compile Results</li>
<li> <a href='#step7'> Plot </li>
</ol>
![AgriPV Coffee Trees Simulation](../images_wiki/AdvancedJournals/AgriPV_CoffeeTrees.PNG)
While we have HPC scripts to do the below simulation, this journal runs all of the above, so it might take some time, as there are 109 combinations of parameters explored.
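(For reference, and inferred from the loops below: 3 clearance heights × 3 x-gaps × 2 tilts × 3 row spacings = 54 scenes, each sampled twice (a module scan and a ground scan), plus one empty-field scan used for GHI, which accounts for 109 raytrace results.)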
End of explanation
lat = 18.202142
lon = -66.759187
albedo = 0.25 # Grass value from Torres Molina, "Measuring UHI in Puerto Rico" 18th LACCEI
# International Multi-Conference for Engineering, Education, and Technology
ft2m = 0.3048
# Loops
clearance_heights = np.array([6.0, 8.0, 10.0])* ft2m
xgaps = np.array([2, 3, 4]) * ft2m
Ds = np.array([2, 3, 4]) * ft2m # D is a variable that represents the spacing between rows, not-considering the collector areas.
tilts = [round(lat), 10]
x = 1.64
y = 1
azimuth = 180
nMods = 20
nRows = 7
numpanels = 1
moduletype = 'test-module'
hpc = False
sim_general_name = 'tutorial_18'
if not os.path.exists(os.path.join(testfolder, 'EPWs')):
demo = bifacial_radiance.RadianceObj('test',testfolder)
epwfile = demo.getEPW(lat,lon)
else:
epwfile = r'EPWs\PRI_Mercedita.AP.785203_TMY3.epw'
Explanation: General Parameters and Variables
End of explanation
demo = bifacial_radiance.RadianceObj(sim_general_name,str(testfolder))
demo.setGround(albedo)
demo.readWeatherFile(epwfile)
demo.genCumSky()
for ch in range (0, len(clearance_heights)):
clearance_height = clearance_heights[ch]
for xx in range (0, len(xgaps)):
xgap = xgaps[xx]
for tt in range (0, len(tilts)):
tilt = tilts[tt]
for dd in range (0, len(Ds)):
pitch = y * np.cos(np.radians(tilt))+Ds[dd]
sim_name = (sim_general_name+'_ch_'+str(round(clearance_height,1))+
'_xgap_'+str(round(xgap,1))+\
'_tilt_'+str(round(tilt,1))+
'_pitch_'+str(round(pitch,1)))
# Coffe plant location at:
coffeeplant_x = (x+xgap)/2
coffeeplant_y = pitch/2
demo.makeModule(name=moduletype, x=x, y=y, xgap = xgap)
sceneDict = {'tilt':tilt,'pitch':pitch,'clearance_height':clearance_height,'azimuth':azimuth, 'nMods': nMods, 'nRows': nRows}
scene = demo.makeScene(moduletype=moduletype,sceneDict=sceneDict, hpc=hpc, radname = sim_name)
octfile = demo.makeOct(octname = demo.basename , hpc=hpc)
analysis = bifacial_radiance.AnalysisObj(octfile=octfile, name=sim_name)
# Modify sensor position to coffee plant location
frontscan, backscan = analysis.moduleAnalysis(scene=scene, sensorsy=1)
groundscan = frontscan.copy()
groundscan['xstart'] = coffeeplant_x
groundscan['ystart'] = coffeeplant_y
groundscan['zstart'] = 0.05
groundscan['orient'] = '0 0 -1'
analysis.analysis(octfile, name=sim_name+'_Front&Back', frontscan=frontscan, backscan=backscan)
analysis.analysis(octfile, name=sim_name+'_Ground&Back', frontscan=groundscan, backscan=backscan)
Explanation: <a id='step1'></a>
1. Loop to Raytrace and sample irradiance at where the Tree would be located
End of explanation
sim_name = 'EMPTY'
demo.makeModule(name=moduletype, x=0.001, y=0.001, xgap = 0)
sceneDict = {'tilt':0,'pitch':2,'clearance_height':0.005,'azimuth':180, 'nMods': 1, 'nRows': 1}
scene = demo.makeScene(moduletype=moduletype,sceneDict=sceneDict, hpc=hpc, radname = sim_name)
octfile = demo.makeOct(octname = demo.basename , hpc=hpc)
analysis = bifacial_radiance.AnalysisObj(octfile=octfile, name=sim_name)
frontscan, backscan = analysis.moduleAnalysis(scene=scene, sensorsy=1)
emptyscan = frontscan.copy()
emptyscan['xstart'] = 3
emptyscan['ystart'] = 3
emptyscan['zstart'] = 0.05
emptyscan['orient'] = '0 0 -1'
emptybackscan = emptyscan.copy()
emptybackscan['orient'] = '0 0 1'
analysis.analysis(octfile, name='_EMPTYSCAN', frontscan=emptyscan, backscan=emptybackscan)
resname = os.path.join(resultsfolder, 'irr__EMPTYSCAN.csv')
data = pd.read_csv(resname)
puerto_rico_Year = data['Wm2Front'][0]
print("YEARLY TOTAL Wh/m2:", puerto_rico_Year)
Explanation: <a id='step2'></a>
2. Calculate GHI for Comparisons
<a id='step2a'></a>
Option 1: Raytrace of Empty Field
End of explanation
# Indexes for start of each month of interest in TMY3 8760 hours file
#starts = [2881, 3626, 4346, 5090, 5835]
#ends = [3621, 4341, 5085, 5829, 6550]
metdata = demo.metdata  # populated by demo.readWeatherFile(epwfile) above
starts = [metdata.datetime.index(pd.to_datetime('2021-05-01 6:0:0 -7')),
metdata.datetime.index(pd.to_datetime('2021-06-01 6:0:0 -7')),
metdata.datetime.index(pd.to_datetime('2021-07-01 6:0:0 -7')),
metdata.datetime.index(pd.to_datetime('2021-08-01 6:0:0 -7')),
metdata.datetime.index(pd.to_datetime('2021-09-01 6:0:0 -7'))]
ends = [metdata.datetime.index(pd.to_datetime('2021-05-31 18:0:0 -7')),
metdata.datetime.index(pd.to_datetime('2021-06-30 18:0:0 -7')),
metdata.datetime.index(pd.to_datetime('2021-07-31 18:0:0 -7')),
metdata.datetime.index(pd.to_datetime('2021-08-31 18:0:0 -7')),
metdata.datetime.index(pd.to_datetime('2021-09-30 18:0:0 -7'))]
ghi_PR=[]
for ii in range(0, len(starts)):
start = starts[ii]
end = ends[ii]
ghi_PR.append(demo.metdata.ghi[start:end].sum())
puerto_Rico_Monthly = ghi_PR # Wh/m2
puerto_Rico_YEAR = demo.metdata.ghi.sum() # Wh/m2
print("Monthly Values May-Sept:", puerto_Rico_Monthly, "Wh/m2")
print("Year Values", puerto_Rico_YEAR, "Wh/m2")
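# Illustrative comparison (my own addition, not part of the original journal): how close
# is the raytraced empty-field yearly total to the cumulative GHI from the weather file?
print("Raytraced empty field / EPW GHI:", round(puerto_rico_Year / puerto_Rico_YEAR, 3))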
Explanation: <a id='step2b'></a>
Option 2: Weather File
End of explanation
ch_all = []
xgap_all = []
tilt_all = []
pitch_all = []
FrontIrrad = []
RearIrrad = []
GroundIrrad = []
for ch in range (0, len(clearance_heights)):
clearance_height = clearance_heights[ch]
for xx in range (0, len(xgaps)):
xgap = xgaps[xx]
for tt in range (0, len(tilts)):
tilt = tilts[tt]
for dd in range (0, len(Ds)):
pitch = y * np.cos(np.radians(tilt))+Ds[dd]
# irr_Coffee_ch_1.8_xgap_0.6_tilt_18_pitch_1.6_Front&Back.csv
sim_name = ('irr_Coffee'+'_ch_'+str(round(clearance_height,1))+
'_xgap_'+str(round(xgap,1))+\
'_tilt_'+str(round(tilt,1))+
'_pitch_'+str(round(pitch,1))+'_Front&Back.csv')
sim_name2 = ('irr_Coffee'+'_ch_'+str(round(clearance_height,1))+
'_xgap_'+str(round(xgap,1))+\
'_tilt_'+str(round(tilt,1))+
'_pitch_'+str(round(pitch,1))+'_Ground&Back.csv')
ch_all.append(clearance_height)
xgap_all.append(xgap)
tilt_all.append(tilt)
pitch_all.append(pitch)
data = pd.read_csv(os.path.join(resultsfolder, sim_name))
FrontIrrad.append(data['Wm2Front'].item())
RearIrrad.append(data['Wm2Back'].item())
data = pd.read_csv(os.path.join(resultsfolder, sim_name2))
GroundIrrad.append(data['Wm2Front'].item())
ch_all = pd.Series(ch_all, name='clearance_height')
xgap_all = pd.Series(xgap_all, name='xgap')
tilt_all = pd.Series(tilt_all, name='tilt')
pitch_all = pd.Series(pitch_all, name='pitch')
FrontIrrad = pd.Series(FrontIrrad, name='FrontIrrad')
RearIrrad = pd.Series(RearIrrad, name='RearIrrad')
GroundIrrad = pd.Series(GroundIrrad, name='GroundIrrad')
df = pd.concat([ch_all, xgap_all, tilt_all, pitch_all, FrontIrrad, RearIrrad, GroundIrrad], axis=1)
df
Explanation: <a id='step3'></a>
3. Compile Results
End of explanation
df[['GroundIrrad_percent_GHI']] = df[['GroundIrrad']]*100/puerto_Rico_YEAR
df['FrontIrrad_percent_GHI'] = df['FrontIrrad']*100/puerto_Rico_YEAR
df['RearIrrad_percent_GHI'] = df['RearIrrad']*100/puerto_Rico_YEAR
df['BifacialGain'] = df['RearIrrad']*0.65*100/df['FrontIrrad']
print(df['GroundIrrad_percent_GHI'].min())
print(df['GroundIrrad_percent_GHI'].max())
Explanation: Let's calculate some relevant metrics for irradiance. (The 0.65 factor in the bifacial gain below is the assumed module bifaciality.)
End of explanation
import seaborn as sns
import matplotlib.pyplot as plt
tilts_l = list(df['tilt'].unique())
ch_l = list(df['clearance_height'].unique())
print(tilts_l)
print(ch_l)
for tilt in tilts_l:
for clearance_height in ch_l:
        df2=df.loc[df['tilt']==tilt]
        df3 = df2.loc[df2['clearance_height']==clearance_height]
df3['pitch']=df3['pitch'].round(1)
df3['xgap']=df3['xgap'].round(1)
sns.set(font_scale=2)
table = df3.pivot('pitch', 'xgap', 'GroundIrrad_percent_GHI')
ax = sns.heatmap(table, cmap='hot', vmin = 50, vmax= 100, annot=True)
ax.invert_yaxis()
figtitle = 'Clearance Height ' + str(clearance_height/ft2m)+' ft, Tilt ' + str(tilt) + '$^\circ$'
plt.title(figtitle)
print(table)
plt.show()
Explanation: <a id='step4'></a>
4. Plot results
End of explanation
tree_albedo = 0.165 # Wikipedia [0.15-0.18]
trunk_x = 0.8 * ft2m
trunk_y = trunk_x
trunk_z = 1 * ft2m
tree_x = 3 * ft2m
tree_y = tree_x
tree_z = 4 * ft2m
Explanation: <a id='step5'></a>
5. Raytrace with Tree Geometry
<a id='step5a'></a>
Tree parameters
End of explanation
for ch in range (0, len(clearance_heights)):
clearance_height = clearance_heights[ch]
for xx in range (0, len(xgaps)):
xgap = xgaps[xx]
for tt in range (0, len(tilts)):
tilt = tilts[tt]
for dd in range (0, len(Ds)):
pitch = y * np.cos(np.radians(tilt))+Ds[dd]
sim_name = (sim_general_name+'_ch_'+str(round(clearance_height,1))+
'_xgap_'+str(round(xgap,1))+\
'_tilt_'+str(round(tilt,1))+
'_pitch_'+str(round(pitch,1)))
coffeeplant_x = (x+xgap)/2
coffeeplant_y = pitch
demo.makeModule(name=moduletype, x=x, y=y, xgap = xgap)
sceneDict = {'tilt':tilt,'pitch':pitch,'clearance_height':clearance_height,'azimuth':azimuth, 'nMods': nMods, 'nRows': nRows}
scene = demo.makeScene(moduletype=moduletype,sceneDict=sceneDict, hpc=hpc, radname = sim_name)
# Appending the Trees here
text = ''
for ii in range(0,3):
coffeeplant_x = (x+xgap)/2 + (x+xgap)*ii
for jj in range(0,3):
coffeeplant_y = pitch/2 + pitch*jj
name = 'tree'+str(ii)+str(jj)
text += '\r\n! genrev Metal_Grey tube{}tree t*{} {} 32 | xform -t {} {} {}'.format('head'+str(ii)+str(jj),tree_z, tree_x/2.0,
-trunk_x/2.0 + coffeeplant_x,
-trunk_x/2.0 + coffeeplant_y, trunk_z)
text += '\r\n! genrev Metal_Grey tube{}tree t*{} {} 32 | xform -t {} {} 0'.format('trunk'+str(ii)+str(jj),trunk_z, trunk_x/2.0,
-trunk_x/2.0 + coffeeplant_x,
-trunk_x/2.0 + coffeeplant_y)
customObject = demo.makeCustomObject(name,text)
demo.appendtoScene(radfile=scene.radfiles, customObject=customObject, text="!xform -rz 0")
octfile = demo.makeOct(octname = demo.basename , hpc=hpc)
analysis = bifacial_radiance.AnalysisObj(octfile=octfile, name=sim_name)
ii = 1
jj = 1
coffeeplant_x = (x+xgap)/2 + (x+xgap)*ii
coffeeplant_y = pitch/2 + pitch*jj
frontscan, backscan = analysis.moduleAnalysis(scene=scene, sensorsy=1)
treescan_south = frontscan.copy()
treescan_north = frontscan.copy()
treescan_east = frontscan.copy()
treescan_west = frontscan.copy()
treescan_south['xstart'] = coffeeplant_x
treescan_south['ystart'] = coffeeplant_y - tree_x/2.0 - 0.05
treescan_south['zstart'] = tree_z
treescan_south['orient'] = '0 1 0'
treescan_north['xstart'] = coffeeplant_x
treescan_north['ystart'] = coffeeplant_y + tree_x/2.0 + 0.05
treescan_north['zstart'] = tree_z
treescan_north['orient'] = '0 -1 0'
treescan_east['xstart'] = coffeeplant_x + tree_x/2.0 + 0.05
treescan_east['ystart'] = coffeeplant_y
treescan_east['zstart'] = tree_z
treescan_east['orient'] = '-1 0 0'
treescan_west['xstart'] = coffeeplant_x - tree_x/2.0 - 0.05
treescan_west['ystart'] = coffeeplant_y
treescan_west['zstart'] = tree_z
treescan_west['orient'] = '1 0 0'
groundscan = frontscan.copy()
groundscan['xstart'] = coffeeplant_x
groundscan['ystart'] = coffeeplant_y
groundscan['zstart'] = 0.05
groundscan['orient'] = '0 0 -1'
analysis.analysis(octfile, name=sim_name+'_North&South', frontscan=treescan_north, backscan=treescan_south)
analysis.analysis(octfile, name=sim_name+'_East&West', frontscan=treescan_east, backscan=treescan_west)
Explanation: <a id='step5b'></a>
Loop to Raytrace and Sample Irradiance at Each side of the Tree (N, S, E, W)
End of explanation
tree_albedo = 0.165 # Wikipedia [0.15-0.18]
trunk_x = 0.8 * ft2m
trunk_y = trunk_x
trunk_z = 1 * ft2m
tree_x = 3 * ft2m
tree_y = tree_x
tree_z = 4 * ft2m
clearance_height = clearance_heights[0]
xgap = xgaps[-1]
tilt = tilts[0]
pitch = y * np.cos(np.radians(tilt))+Ds[-1]
sim_name = (sim_general_name+'_ch_'+str(round(clearance_height,1))+
'_xgap_'+str(round(xgap,1))+\
'_tilt_'+str(round(tilt,1))+
'_pitch_'+str(round(pitch,1)))
demo = bifacial_radiance.RadianceObj(sim_name,str(testfolder))
demo.setGround(albedo)
demo.readWeatherFile(epwfile)
coffeeplant_x = (x+xgap)/2
coffeeplant_y = pitch
demo.gendaylit(4020)
demo.makeModule(name=moduletype, x=x, y=y, xgap = xgap)
sceneDict = {'tilt':tilt,'pitch':pitch,'clearance_height':clearance_height,'azimuth':azimuth, 'nMods': nMods, 'nRows': nRows}
scene = demo.makeScene(moduletype=moduletype,sceneDict=sceneDict, hpc=hpc, radname = sim_name)
for ii in range(0,3):
coffeeplant_x = (x+xgap)/2 + (x+xgap)*ii
for jj in range(0,3):
coffeeplant_y = pitch/2 + pitch*jj
name = 'tree'+str(ii)+str(jj)
text = '! genrev litesoil tube{}tree t*{} {} 32 | xform -t {} {} {}'.format('head'+str(ii)+str(jj),tree_z, tree_x/2.0,
-trunk_x/2.0 + coffeeplant_x,
-trunk_x/2.0 + coffeeplant_y, trunk_z)
text += '\r\n! genrev litesoil tube{}tree t*{} {} 32 | xform -t {} {} 0'.format('trunk'+str(ii)+str(jj),trunk_z, trunk_x/2.0,
-trunk_x/2.0 + coffeeplant_x,
-trunk_x/2.0 + coffeeplant_y)
customObject = demo.makeCustomObject(name,text)
demo.appendtoScene(radfile=scene.radfiles, customObject=customObject, text="!xform -rz 0")
octfile = demo.makeOct(octname = demo.basename , hpc=hpc)
Explanation: <a id='step5c'></a>
Single simulation until MakeOct for Getting a PRETTY IMAGE
End of explanation
# irr_Coffee_ch_1.8_xgap_0.6_tilt_18_pitch_1.6_Front&Back.csv
ch_all = []
xgap_all = []
tilt_all = []
pitch_all = []
NorthIrrad = []
SouthIrrad = []
EastIrrad = []
WestIrrad = []
ft2m = 0.3048
clearance_heights = np.array([6.0, 8.0, 10.0])* ft2m
xgaps = np.array([2, 3, 4]) * ft2m
Ds = np.array([2, 3, 4]) * ft2m # D is a variable that represents the spacing between rows, not-considering the collector areas.
tilts = [18, 10]
y = 1
for ch in range (0, len(clearance_heights)):
clearance_height = clearance_heights[ch]
for xx in range (0, len(xgaps)):
xgap = xgaps[xx]
for tt in range (0, len(tilts)):
tilt = tilts[tt]
for dd in range (0, len(Ds)):
pitch = y * np.cos(np.radians(tilt))+Ds[dd]
sim_name = ('irr_Coffee'+'_ch_'+str(round(clearance_height,1))+
'_xgap_'+str(round(xgap,1))+\
'_tilt_'+str(round(tilt,1))+
'_pitch_'+str(round(pitch,1))+'_North&South.csv')
sim_name2 = ('irr_Coffee'+'_ch_'+str(round(clearance_height,1))+
'_xgap_'+str(round(xgap,1))+\
'_tilt_'+str(round(tilt,1))+
'_pitch_'+str(round(pitch,1))+'_East&West.csv')
ch_all.append(clearance_height)
xgap_all.append(xgap)
tilt_all.append(tilt)
pitch_all.append(pitch)
data = pd.read_csv(os.path.join(resultsfolder, sim_name))
NorthIrrad.append(data['Wm2Front'].item())
SouthIrrad.append(data['Wm2Back'].item())
data = pd.read_csv(os.path.join(resultsfolder, sim_name2))
EastIrrad.append(data['Wm2Front'].item())
WestIrrad.append(data['Wm2Back'].item())
ch_all = pd.Series(ch_all, name='clearance_height')
xgap_all = pd.Series(xgap_all, name='xgap')
tilt_all = pd.Series(tilt_all, name='tilt')
pitch_all = pd.Series(pitch_all, name='pitch')
NorthIrrad = pd.Series(NorthIrrad, name='NorthIrrad')
SouthIrrad = pd.Series(SouthIrrad, name='SouthIrrad')
EastIrrad = pd.Series(EastIrrad, name='EastIrrad')
WestIrrad = pd.Series(WestIrrad, name='WestIrrad')
df = pd.concat([ch_all, xgap_all, tilt_all, pitch_all, NorthIrrad, SouthIrrad, EastIrrad, WestIrrad], axis=1)
df.to_csv(os.path.join(resultsfolder,'TREES.csv'))
trees = pd.read_csv(os.path.join(resultsfolder, 'TREES.csv'))
trees.tail()
trees['TreeIrrad_percent_GHI'] = trees[['NorthIrrad','SouthIrrad','EastIrrad','WestIrrad']].mean(axis=1)*100/puerto_Rico_YEAR
print(trees['TreeIrrad_percent_GHI'].min())
print(trees['TreeIrrad_percent_GHI'].max())
Explanation: Now you can view the geometry by navigating in a terminal to the testfolder and using the octfile name generated above:
rvu -vf views\front.vp -e .0265652 -vp 2 -21 2.5 -vd 0 1 0 Coffee_ch_1.8_xgap_1.2_tilt_18_pitch_2.2.oct
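If you prefer to launch the viewer from Python rather than a terminal, a minimal sketch (my own addition; it assumes rvu is on your PATH and that the .oct file sits inside testfolder) is:
import subprocess
subprocess.run('rvu -vf views/front.vp -e .0265652 -vp 2 -21 2.5 -vd 0 1 0 ' + demo.basename + '.oct', shell=True, cwd=str(testfolder))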
<a id='step6'></a>
6. Compile Results Trees
End of explanation
tilts_l = list(trees['tilt'].unique())
ch_l = list(trees['clearance_height'].unique())
print(tilts_l)
print(ch_l)
for tilt in tilts_l:
for clearance_height in ch_l:
        df2=trees.loc[trees['tilt']==tilt]
        df3 = df2.loc[df2['clearance_height']==clearance_height]
df3['pitch']=df3['pitch'].round(1)
df3['xgap']=df3['xgap'].round(1)
sns.set(font_scale=2)
table = df3.pivot('pitch', 'xgap', 'TreeIrrad_percent_GHI')
ax = sns.heatmap(table, cmap='hot', vmin = 22, vmax= 35, annot=True)
ax.invert_yaxis()
figtitle = 'Clearance Height ' + str(clearance_height/ft2m)+' ft, Tilt ' + str(tilt) + '$^\circ$'
plt.title(figtitle)
print(table)
plt.show()
Explanation: <a id='step7'></a>
7. Plot
End of explanation |
15,729 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Querying the GitHub API for repositories and organizations
By Stuart Geiger and Jamie Whitacre, made at a SciPy 2016 sprint. See the rendered, interactive, embedable map here.
Step1: With this Github object, you can get all kinds of Github objects, which you can then futher explore.
Step2: We can plot points on a map using ipyleaflets and ipywidgets. We first set up a map object, which is created with various parameters. Then we create Marker objects, which are then appended to the map. We then display the map inline in this notebook.
Step5: Querying GitHub for location data
For our mapping script, we want to get profiles for everyone who has made a commit to any of the repositories in the Jupyter organization, find their location (if any), then add it to a list. The API has a get_contributors function for repo objects, which returns a list of contributors ordered by number of commits, but not one that works across all repos in an org. So we have to iterate through all the repos in the org, and run the get_contributors method for We also want to make sure we don't add any duplicates to our list to over-represent any areas, so we keep track of people in a dictionary.
I've written a few functions to make it easy to retreive and map an organization's contributors.
Step6: Mapping multiple organizations
Sometimes you have multiple organizations within a group of interest. Because these are functions, they can be combined with some loops.
Step7: Plotting the map
Step13: Saving to file | Python Code:
!pip install pygithub
!pip install geopy
!pip install ipywidgets
from github import Github
#this is my private login credentials, stored in ghlogin.py
import ghlogin
g = Github(login_or_token=ghlogin.gh_user, password=ghlogin.gh_passwd)
Explanation: Querying the GitHub API for repositories and organizations
By Stuart Geiger and Jamie Whitacre, made at a SciPy 2016 sprint. See the rendered, interactive, embeddable map here.
End of explanation
user = g.get_user("staeiou")
from geopy.geocoders import Nominatim
Explanation: With this Github object, you can get all kinds of Github objects, which you can then further explore.
End of explanation
import ipywidgets
from ipyleaflet import (
Map,
Marker,
TileLayer, ImageOverlay,
Polyline, Polygon, Rectangle, Circle, CircleMarker,
GeoJSON,
DrawControl
)
Explanation: We can plot points on a map using ipyleaflets and ipywidgets. We first set up a map object, which is created with various parameters. Then we create Marker objects, which are then appended to the map. We then display the map inline in this notebook.
End of explanation
def get_org_contributor_locations(github, org_name):
For a GitHub organization, get location for contributors to any repo in the org.
Returns a dictionary of {username URLS : geopy Locations}, then a dictionary of various metadata.
# Set up empty dictionaries and metadata variables
contributor_locs = {}
locations = []
none_count = 0
error_count = 0
user_loc_count = 0
duplicate_count = 0
geolocator = Nominatim()
# For each repo in the organization
for repo in github.get_organization(org_name).get_repos():
#print(repo.name)
# For each contributor in the repo
for contributor in repo.get_contributors():
print('.', end="")
# If the contributor_locs dictionary doesn't have an entry for this user
if contributor_locs.get(contributor.url) is None:
# Try-Except block to handle API errors
try:
# If the contributor has no location in profile
if(contributor.location is None):
#print("No Location")
none_count += 1
else:
# Get coordinates for location string from Nominatim API
location=geolocator.geocode(contributor.location)
#print(contributor.location, " | ", location)
# Add a new entry to the dictionary. Value is user's URL, key is geocoded location object
contributor_locs[contributor.url] = location
user_loc_count += 1
except Exception:
print('!', end="")
error_count += 1
else:
duplicate_count += 1
return contributor_locs,{'no_loc_count':none_count, 'user_loc_count':user_loc_count,
'duplicate_count':duplicate_count, 'error_count':error_count}
def map_location_dict(map_obj,org_location_dict):
Maps the locations in a dictionary of {ids : geoPy Locations}.
Must be passed a map object, then the dictionary. Returns the map object.
for username, location in org_location_dict.items():
if(location is not None):
mark = Marker(location=[location.latitude,location.longitude])
mark.visible
map_obj += mark
return map_obj
Explanation: Querying GitHub for location data
For our mapping script, we want to get profiles for everyone who has made a commit to any of the repositories in the Jupyter organization, find their location (if any), then add it to a list. The API has a get_contributors function for repo objects, which returns a list of contributors ordered by number of commits, but not one that works across all repos in an org. So we have to iterate through all the repos in the org and run the get_contributors method for each one. We also want to make sure we don't add any duplicates to our list that would over-represent some areas, so we keep track of people in a dictionary.
I've written a few functions to make it easy to retrieve and map an organization's contributors.
End of explanation
jupyter_orgs = ['jupyter', 'ipython', 'jupyter-attic','jupyterhub']
orgs_location_dict = {}
orgs_metadata_dict = {}
for org in jupyter_orgs:
# For a status update, print when we get to a new org in the list
print(org)
orgs_location_dict[org], orgs_metadata_dict[org] = get_org_contributor_locations(g,org)
orgs_metadata_dict
Explanation: Mapping multiple organizations
Sometimes you have multiple organizations within a group of interest. Because these are functions, they can be combined with some loops.
End of explanation
center = [30, 5]
zoom = 2
jupyter_orgs_maps = Map(default_tiles=TileLayer(opacity=1.0), center=center, zoom=zoom,
layout=ipywidgets.Layout(height="600px"))
for org_name,org_location_dict in orgs_location_dict.items():
jupyter_orgs_maps += map_location_dict(jupyter_orgs_maps,org_location_dict)
jupyter_orgs_maps
Explanation: Plotting the map
End of explanation
def org_dict_to_csv(org_location_dict, filename, hashed_usernames = True):
Outputs a dict of users : locations to a CSV file.
Requires org_location_dict and filename, optional hashed_usernames parameter.
Uses hashes of usernames by default for privacy reasons. Think carefully
about publishing location data about uniquely identifiable users. Hashing
allows you to check unique users without revealing personal information.
try:
import hashlib
with open(filename, 'w') as f:
f.write("user, longitude, latitude\n")
for user, location in org_location_dict.items():
if location is not None:
if hashed_usernames:
user_output = hashlib.sha1(user.encode('utf-8')).hexdigest()
else:
user_output = user
line = user_output + ", " + str(location.longitude) + ", " \
+ str(location.latitude) + "\n"
f.write(line)
f.close()
except Exception as e:
return e
def csv_to_js_var(input_file, output_file):
import pandas as pd
import json
df = pd.read_csv(input_file)
dct = df.to_dict()
with open(output_file,'w') as f:
f.write('var addressPoints = '+json.dumps([[ll,l,u] for u,l,ll in zip(dct['user'].values(),dct[' longitude'].values(), dct[' latitude'].values())], indent=2)+';')
def org_dict_to_geojson(org_location_dict, filename, hashed_usernames = True):
CURRENTLY BROKEN!
Outputs a dict of users : locations to a CSV file.
Requires org_location_dict and filename, optional hashed_usernames parameter.
Uses hashes of usernames by default for privacy reasons. Think carefully
about publishing location data about uniquely identifiable users. Hashing
allows you to check unique users without revealing personal information.
import hashlib
with open(filename, 'w') as f:
header =
{ "type": "FeatureCollection",
"features": [
f.write(header)
for user, location in org_location_dict.items():
if location is not None:
if hashed_usernames:
user_output = hashlib.sha1(user.encode('utf-8')).hexdigest()
else:
user_output = user
line =
{
"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [%s, %s]
},
"properties": {
"name": "%s"
}
},
% (location.longitude, location.latitude, user_output)
f.write(line)
f.write("]}")
f.close()
org_dict_to_csv(orgs_location_dict['ipython'], "org_data/ipython.csv")
for org_name, org_location_dict in orgs_location_dict.items():
org_dict_to_csv(org_location_dict, "org_data/" + org_name + ".csv")
csv_to_js_var("org_data/" + org_name + ".csv", "org_data/" + org_name + ".js")
def csv_to_org_dict(filename):
TODO: Write function to read an outputted CSV file back to an org_dict.
Should convert lon/lat pairs to geopy Location objects for full compatibility.
Also, think about a general class object for org_dicts.
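# A hedged sketch of the TODO above (my assumption of the intended behaviour): read a
# CSV written by org_dict_to_csv back into a dictionary. geopy Locations cannot be fully
# rebuilt from lon/lat alone, so this sketch stores (latitude, longitude) tuples instead.
def csv_to_org_dict_sketch(filename):
    org_dict = {}
    with open(filename, 'r') as f:
        next(f)  # skip the "user, longitude, latitude" header row
        for line in f:
            user, lon, lat = [part.strip() for part in line.split(',')]
            org_dict[user] = (float(lat), float(lon))
    return org_dict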
Explanation: Saving to file
End of explanation |
15,730 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hyperparamètres, LassoRandomForestRregressor et grid_search (correction)
Le notebook explore l'optimisation des hyper paramaètres du modèle LassoRandomForestRegressor, et fait varier le nombre d'arbre et le paramètres alpha.
Step1: Données
Step2: Premiers modèles
Step3: Pour le modèle, il suffit de copier coller le code écrit dans ce fichier lasso_random_forest_regressor.py.
Step4: Le modèle a réduit le nombre d'arbres.
Step5: Grid Search
On veut trouver la meilleure paire de paramètres (n_estimators, alpha). scikit-learn implémente l'objet GridSearchCV qui effectue de nombreux apprentissage avec toutes les valeurs de paramètres qu'il reçoit. Voici tous les paramètres qu'on peut changer
Step6: Les meilleurs paramètres sont les suivants
Step7: Et le modèle a gardé un nombre réduit d'arbres
Step8: Evolution de la performance en fonction des paramètres | Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
Explanation: Hyperparamètres, LassoRandomForestRregressor et grid_search (correction)
Le notebook explore l'optimisation des hyper paramaètres du modèle LassoRandomForestRegressor, et fait varier le nombre d'arbre et le paramètres alpha.
End of explanation
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
data = load_boston()
X, y = data.data, data.target
X_train, X_test, y_train, y_test = train_test_split(X, y)
Explanation: Données
End of explanation
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
rf = RandomForestRegressor()
rf.fit(X_train, y_train)
r2_score(y_test, rf.predict(X_test))
Explanation: Premiers modèles
End of explanation
from ensae_teaching_cs.ml.lasso_random_forest_regressor import LassoRandomForestRegressor
lrf = LassoRandomForestRegressor()
lrf.fit(X_train, y_train)
r2_score(y_test, lrf.predict(X_test))
Explanation: Pour le modèle, il suffit de copier coller le code écrit dans ce fichier lasso_random_forest_regressor.py.
End of explanation
len(lrf.estimators_)
Explanation: Le modèle a réduit le nombre d'arbres.
End of explanation
lrf.get_params()
params = {
'lasso_estimator__alpha': [0.25, 0.5, 0.75, 1., 1.25, 1.5],
'rf_estimator__n_estimators': [20, 40, 60, 80, 100, 120]
}
from sklearn.exceptions import ConvergenceWarning
from sklearn.model_selection import GridSearchCV
import warnings
warnings.filterwarnings("ignore", category=ConvergenceWarning)
grid = GridSearchCV(estimator=LassoRandomForestRegressor(),
param_grid=params, verbose=1)
grid.fit(X_train, y_train)
Explanation: Grid Search
On veut trouver la meilleure paire de paramètres (n_estimators, alpha). scikit-learn implémente l'objet GridSearchCV qui effectue de nombreux apprentissage avec toutes les valeurs de paramètres qu'il reçoit. Voici tous les paramètres qu'on peut changer :
End of explanation
grid.best_params_
Explanation: Les meilleurs paramètres sont les suivants :
End of explanation
len(grid.best_estimator_.estimators_)
r2_score(y_test, grid.predict(X_test))
Explanation: Et le modèle a gardé un nombre réduit d'arbres :
End of explanation
grid.cv_results_
import numpy
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(14, 6))
ax = fig.add_subplot(131, projection='3d')
xs = numpy.array([el['lasso_estimator__alpha'] for el in grid.cv_results_['params']])
ys = numpy.array([el['rf_estimator__n_estimators'] for el in grid.cv_results_['params']])
zs = numpy.array(grid.cv_results_['mean_test_score'])
ax.scatter(xs, ys, zs)
ax.set_title("3D...")
ax = fig.add_subplot(132)
for x in sorted(set(xs)):
y2 = ys[xs == x]
z2 = zs[xs == x]
ax.plot(y2, z2, label="alpha=%1.2f" % x, lw=x*2)
ax.legend();
ax = fig.add_subplot(133)
for y in sorted(set(ys)):
x2 = xs[ys == y]
z2 = zs[ys == y]
ax.plot(x2, z2, label="n_estimators=%d" % y, lw=y/40)
ax.legend();
Explanation: Evolution de la performance en fonction des paramètres
End of explanation |
15,731 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Let's admit it, bayesian modeling on time series is slow. In pymc3, it typically implies using theano scan operation. Here, we will show how to profile one step of the kalman filter, as well as the scan operation over the time series.
First, load the required packages
Step1: We will use the same data as in the 01_RandomWalkPlusObservation notebook.
Step2: Next, we create all the tensors required to describe our model
Step3: We will also create some actual values for them
Step4: Let's calculate the likelihood of the observed values, given the parameters above
Step5: Time required for the log-likelihood calculation
Step6: Profiling a non-scan operation is relatively simple. As an example, let's create a function to calculate the first time step of the Kalman filter
Step7: Repeating the procedure with a scan procedure, we can see that the code inside it is not profiled. It took me a while to make it work (not even stackoverflow helped!!!). In the end, this is how I made it work | Python Code:
import numpy as np
import theano
import theano.tensor as tt
import kalman
Explanation: Let's admit it, Bayesian modeling on time series is slow. In pymc3, it typically implies using theano's scan operation. Here, we will show how to profile one step of the Kalman filter, as well as the scan operation over the time series.
First, load the required packages:
End of explanation
# True values
T = 500 # Time steps
sigma2_eps0 = 3 # Variance of the observation noise
sigma2_eta0 = 10 # Variance in the update of the mean
# Simulate data
np.random.seed(12345)
eps = np.random.normal(scale=sigma2_eps0**0.5, size=T)
eta = np.random.normal(scale=sigma2_eta0**0.5, size=T)
mu = np.cumsum(eta)
y = mu + eps
Explanation: We will use the same data as in the 01_RandomWalkPlusObservation notebook.
End of explanation
# Upon using pymc3, the following theano configuration flag is changed,
# leading to tensors being required to have test values
#theano.config.compute_test_value = 'ignore'
# Tensors for the measurement equation
Z = tt.dmatrix(name='Z')
d = tt.dvector(name='d')
H = tt.dmatrix(name='H')
# Tensors for the transition equation
T = tt.dmatrix(name='T')
c = tt.dvector(name='c')
R = tt.dmatrix(name='R')
Q = tt.dmatrix(name='Q')
# Initial position and uncertainty
a0 = tt.dvector(name='a0')
P0 = tt.dmatrix(name='P0')
Explanation: Next, we create all the tensors required to describe our model:
End of explanation
ɛ_σ2 = 3.
η_σ2 = 10.
args = dict(Z = np.array([[1.]]),
d = np.array([0.]),
H = np.array([[ɛ_σ2]]),
T = np.array([[1.]]),
c = np.array([0.]),
R = np.array([[1.]]),
Q = np.array([[η_σ2]]),
a0 = np.array([0.]),
P0 = np.array([[1e6]]))
Explanation: We will also create some actual values for them:
End of explanation
kalmanTheano = kalman.KalmanTheano(Z, d, H, T, c, R, Q, a0, P0)
(at, Pt, lliks), updates = kalmanTheano.filter(y[:,None])
f = theano.function([Z, d, H, T, c, R, Q, a0, P0], lliks)
llik = f(**args)
llik[1:].sum()
Explanation: Let's calculate the likelihood of the observed values, given the parameters above:
End of explanation
print('Measuring time...')
%timeit f(**args)
Explanation: Time required for the log-likelihood calculation:
End of explanation
Y0 = tt.dvector(name='Y0')
_,_,llik = kalman.core._oneStep(Y0, Z, d, H, T, c, R, Q, a0, P0)
profiler = theano.compile.ScanProfileStats()
f = theano.function([Y0, Z, d, H, T, c, R, Q, a0, P0], llik, profile=profiler)
f(y[0,None], **args);
profiler.summary()
Explanation: Profiling a non-scan operation is relatively simple. As an example, let's create a function to calculate the first time step of the Kalman filter:
End of explanation
profiler = theano.compile.ScanProfileStats()
(_,_,llik),_ = kalmanTheano.filter(y[:,None], profile=profiler)
f = theano.function([Z, d, H, T, c, R, Q, a0, P0], llik, profile=profiler)
f(**args);
# Select the node corresponding to the scan operation
scan_op = next(k for k in profiler.op_nodes()
if isinstance(k, theano.scan_module.scan_op.Scan))
scan_op.profile.summary()
Explanation: Repeating the procedure with a scan operation, we can see that the code inside it is not profiled. It took me a while to make it work (not even Stack Overflow helped!!!). In the end, this is how I made it work
End of explanation |
15,732 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Excercise - Working With CSV
Using the CSV module
A CSV file is often used exchange format for spreadsheets and databases.
Each line is called a record and each field within a record is seperated by a delimiter such as comma, tab etc.
We use the module "CSV" which is not included in the standard library of Python.
Note
Step1: Step 2
Step2: Step 3
Step3: The full code
Let us package all of this into a nice function which
- reads the word_sentiment.csv file
- searches for a particualr given word
- returns the sentiment value of the word given to it. If the word is not found it returns 0 .
Step6: Now let us update this code so that we ask the user to enter a sentence. We then break the sentence into words and find the sentiment of each word. We then aggregate the sentiments across all the words to calcuate the sentiment of the sentence and tell if the sentence entered is positive or negative. Hint
Step7: Can you improve this code to handle double like "not" ? eg. "poonacha is not good" should return a negative sentiment rather than positive . | Python Code:
import csv
Explanation: Exercise - Working With CSV
Using the CSV module
A CSV file is an often-used exchange format for spreadsheets and databases.
Each line is called a record, and each field within a record is separated by a delimiter such as a comma or a tab.
We use the module csv, which is included in the standard library of Python.
Note: Keep in mind that Mac uses a different delimiter to determine the end of a row in a CSV file than Microsoft. Since the CSV python module we will use works well with Windows CSV files, we will save and use a Windows CSV file in our program. So in MAC, you have to save the CSV file as "windows csv" file rather than just csv file.
Let us write a program to read a CSV file (word_sentiment.csv). This file contains a list of 2000+ words and their sentiment, ranging from -5 to +5.
Write a function "word_sentiment" which checks if the entered word is found in the sentiment_csv file and returns the corresponding sentiment. If the word is not found it returns 0.
Step 1: Import the module csv.
Since csv ships with Python, no installation is needed; for third-party modules you would run "pip install <module>" in the terminal (on Mac) or the command prompt (on Windows).
End of explanation
SENTIMENT_CSV = "C:\\Users\kmpoo\Dropbox\HEC\Teaching\Python for PhD May 2019\python4phd\Session 1\Sent\word_sentiment.csv"
Explanation: Step 2: Assign the path of the file to a global variable "SENTIMENT_CSV"
End of explanation
with open(SENTIMENT_CSV, 'rt',encoding = 'utf-8') as senti_data:
sentiment = csv.reader(senti_data)
for data_row in sentiment:
print(data_row)
Explanation: Step 3: Open the file using the "with open()" command and read the file
Before we read a file, we need to open it. The "with open()" command is very handy since it can open the file and give you a handler with which you can read the file. One of the benefits of the "with"command is that (unlike the simple open() command) it can automaticaly close the file, allowing write operations to be completed. The syntax is :
with open('filename', 'mode', 'encoding') as fileobj* *
Where fileobj is the file object returned by open(); filename is the string name of the file. mode indicates what you want to do with the file and ecoding defines the type of encoding with which you want to open the file.
Mode could be:
* w -> write. if the file exists it is overwritten
* r -> read
* a -> append. Write at the end of the file
* x - > write. Only if the file does not exist. It does not allow a file to be re-written
For each, adding a subfix 't' refers to read/write as text and the subfix 'b' refers to read/write as bytes.
Encoding could be:
* 'ascii'
* 'utf-8'
* 'latin-1'
* 'cp-1252'
* 'unicode-escape'
After opening the file, we call the csv.reader() function to read the data. It assigns a data structure (similar to a multidimentional list) which we can use to read any cell in the csv file.
End of explanation
import csv
SENTIMENT_CSV = "C:/Users/kmpoo/Dropbox/HEC/Teaching/Python for PhD Mar 2018/python4phd/Session 1/Sent/word_sentiment.csv"
'''Updated the path to point to your file. The path provided changes based on your operating system. '''
def word_sentiment(word):
'''This function uses the word_sentiment.csv file to find the sentiment of the word
entered'''
with open(SENTIMENT_CSV, 'rt',encoding = 'utf-8') as senti_data:
sentiment = csv.reader(senti_data)
for data_row in sentiment:
if data_row[0] == word:
sentiment_val = data_row[1]
return sentiment_val
return 0
def main():
word_in = input("enter the word: ").lower()
return_val = word_sentiment(word_in)
print("the sentiment of the word ",word_in ," is: ",return_val)
main()
Explanation: The full code
Let us package all of this into a nice function which
- reads the word_sentiment.csv file
- searches for a particualr given word
- returns the sentiment value of the word given to it. If the word is not found it returns 0 .
End of explanation
import csv
SENTIMENT_CSV = "C:/Users/kmpoo/Dropbox/HEC/Teaching/Python for PhD Mar 2018/python4phd/Session 1/Sent/word_sentiment.csv"#Updated the path to point to your file.
'''The path provided changes based on your operating system. For a windows system the format
of the path will be "C:/Users/User/Desktop/word_sentiment.csv" '''
def word_sentiment(word):
This function uses the word_sentiment.csv file to find the sentiment of the word
entered
with open(SENTIMENT_CSV, 'rt',encoding = 'utf-8') as senti_data:
sentiment = csv.reader(senti_data)
for data_row in sentiment:
if data_row[0] == word:
sentiment_val = data_row[1]
return sentiment_val
return 0
def main():
This function asks the user to input a sentence and tries to calculate the sentiment
of the sentence
sentiment = 0
sentence_in = input("enter the sentence: ").lower()
words_list = sentence_in.split()
for word in words_list:
sentiment = sentiment + int(word_sentiment(word))
if sentiment > 0:
print("The entered sentence has a positive sentiment")
elif sentiment == 0:
print("The entered sentence has a neutral sentiment")
else:
print("The entered sentence has a negative sentiment")
main()
Explanation: Now let us update this code so that we ask the user to enter a sentence. We then break the sentence into words and find the sentiment of each word. We then aggregate the sentiments across all the words to calcuate the sentiment of the sentence and tell if the sentence entered is positive or negative. Hint: Use the split() command we saw in lesson 1.
End of explanation
# Enter code here
Explanation: Can you improve this code to handle negations like "not"? E.g. "poonacha is not good" should return a negative sentiment rather than a positive one.
End of explanation |
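A minimal sketch of one way to handle negation (my own illustration, not part of the original exercise): flip the sign of a word's sentiment when the preceding word is a negation such as "not".
def sentence_sentiment_with_negation(sentence):
    negations = {"not", "no", "never"}
    words = sentence.lower().split()
    total = 0
    for i, word in enumerate(words):
        value = int(word_sentiment(word))
        # flip the contribution if the previous word negates it
        if i > 0 and words[i - 1] in negations:
            value = -value
        total += value
    return total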
15,733 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Ordinary Differential Equations Exercise 1
Imports
Step2: Lorenz system
The Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read
Step4: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
Step6: Write a function plot_lorentz that
Step7: Use interact to explore your plot_lorenz function with | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
from IPython.html.widgets import interact, interactive, fixed
Explanation: Ordinary Differential Equations Exercise 1
Imports
End of explanation
def lorentz_derivs(yvec, t, sigma, rho, beta):
    Compute the derivatives for the Lorenz system at yvec(t).
# YOUR CODE HERE
# raise NotImplementedError()
x = yvec[0]
y = yvec[1]
z = yvec[2]
dx = sigma*(y-x)
dy = x*(rho-z)-y
dz = x*y - beta*z
return np.array([dx,dy,dz])
assert np.allclose(lorentz_derivs((1,1,1),0, 1.0, 1.0, 2.0),[0.0,-1.0,-1.0])
Explanation: Lorenz system
The Lorenz system is one of the earliest studied examples of a system of differential equations that exhibits chaotic behavior, such as bifurcations, attractors, and sensitive dependence on initial conditions. The differential equations read:
$$ \frac{dx}{dt} = \sigma(y-x) $$
$$ \frac{dy}{dt} = x(\rho-z) - y $$
$$ \frac{dz}{dt} = xy - \beta z $$
The solution vector is $[x(t),y(t),z(t)]$ and $\sigma$, $\rho$, and $\beta$ are parameters that govern the behavior of the solutions.
Write a function lorenz_derivs that works with scipy.integrate.odeint and computes the derivatives for this system.
End of explanation
def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
Solve the Lorenz system for a single initial condition.
Parameters
----------
ic : array, list, tuple
Initial conditions [x,y,z].
max_time: float
The max time to use. Integrate with 250 points per time unit.
sigma, rho, beta: float
Parameters of the differential equation.
Returns
-------
soln : np.ndarray
The array of the solution. Each row will be the solution vector at that time.
t : np.ndarray
The array of time points used.
# YOUR CODE HERE
# raise NotImplementedError()
t = np.linspace(0, max_time, 250*max_time)
soln = odeint(lorentz_derivs, ic, t, args=(sigma, rho,beta), atol=1e-9, rtol=1e-9)
return np.array(soln), np.array(t)
assert True # leave this to grade solve_lorenz
Explanation: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
End of explanation
N = 5
colors = plt.cm.hot(np.linspace(0,1,N))
for i in range(N):
# To use these colors with plt.plot, pass them as the color argument
print(colors[i])
def plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
Plot [x(t),z(t)] for the Lorenz system.
Parameters
----------
N : int
Number of initial conditions and trajectories to plot.
max_time: float
Maximum time to use.
sigma, rho, beta: float
Parameters of the differential equation.
# YOUR CODE HERE
# raise NotImplementedError()
plt.figure(figsize=(15,8))
np.random.seed(1)
r = []
    for i in range(N):
data = (np.random.random(3)-0.5)*30.0
r.append(solve_lorentz(data, max_time, sigma, rho, beta))
for j in r:
x = [p[0] for p in j[0]]
z = [p[2] for p in j[0]]
color = plt.cm.summer((x[0] + z[0]/60.0 - 0.5))
plt.plot(x, z, color=color)
plt.xlabel('$x(t)$')
plt.ylabel('$z(t)$')
plt.title('Lorentz system')
plot_lorentz()
assert True # leave this to grade the plot_lorenz function
Explanation: Write a function plot_lorentz that:
Solves the Lorenz system for N different initial conditions. To generate your initial conditions, draw uniform random samples for x, y and z in the range $[-15,15]$. Call np.random.seed(1) a single time at the top of your function to use the same seed each time.
Plot $[x(t),z(t)]$ using a line to show each trajectory.
Color each line using the hot colormap from Matplotlib.
Label your plot and choose an appropriate x and y limit.
The following cell shows how to generate colors that can be used for the lines:
End of explanation
# YOUR CODE HERE
# raise NotImplementedError()
w = interactive(plot_lorentz, max_time = (1,10,1), N = (1,50,1), sigma = (0.0,50.0,0.1), rho = (0.0, 50.0, 0.1), bata = fixed(8.0/3.0));
w
Explanation: Use interact to explore your plot_lorenz function with:
max_time an integer slider over the interval $[1,10]$.
N an integer slider over the interval $[1,50]$.
sigma a float slider over the interval $[0.0,50.0]$.
rho a float slider over the interval $[0.0,50.0]$.
beta fixed at a value of $8/3$.
End of explanation |
15,734 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: MyHDL Function (module)
An introductory MyHDL tutorial presents a small example towards the begining of the post. A MyHDL anatomy graphic (see below) is used to describe the parts of a MyHDL module. Note, the nomenclature is a little odd here, in Python a module is a file and in MyHDL a module (typically sometimes called a component) is a Python function that describes a set of hardware behavior. Hardware module is commonly used to name an HDL component in a digital circuit - the use has been propagated forward.
<center><figure>
<a href="https
Step2: The following function will stimulate the above MyHDL module. The stimulator all exercise the module in the same way whereas the verification (test) will use random values for testing and test numerous cycles. The cell after the stimulator is a cell that plots the waveform of the stimulator. Waring, the embedded VCD waveform plotter is beta and very limited. It is useful for very simple waveforms. For full waveform viewing use an external tool such as gtkwave.
Step3: After the above shifty implementation has been coded, run the next cell to test and verify the behavior of the described digital circuit. If the test fails it will print out a number of simuilation steps and some values. The VCD file can be displayed via the vcd.parse_and_plot('vcd/01_mex_test.vcd') function (same as above and the same basic waveforms warning) for debug or use an eternal waveform viewer (e.g. gtkwave) to view the simulation waveform and debug. | Python Code:
def shifty(clock, reset, load, load_value, output_bit, initial_value=0):
Ports:
load: input, load strobe, load the `load_value`
load_value: input, the value to be loaded
output_bit: output, The most significant
initial_value: internal shift registers initial value (value after reset)
assert isinstance(load_value.val, intbv)
# the internal shift register will be the same sizes as the `load_value`
shiftreg = Signal(intbv(initial_value,
min=load_value.min, max=load_value.max))
mask = shiftreg.max-1
# non-working template
@always_seq(clock.posedge, reset=reset)
def beh():
output_bit.next = shiftreg[0]
# for monitoring, access outside this function
shifty.shiftreg = shiftreg
return beh
Explanation: MyHDL Function (module)
An introductory MyHDL tutorial presents a small example towards the begining of the post. A MyHDL anatomy graphic (see below) is used to describe the parts of a MyHDL module. Note, the nomenclature is a little odd here, in Python a module is a file and in MyHDL a module (typically sometimes called a component) is a Python function that describes a set of hardware behavior. Hardware module is commonly used to name an HDL component in a digital circuit - the use has been propagated forward.
<center><figure>
<a href="https://www.flickr.com/photos/79765478@N08/14230879911" title="myhdl_module_anatomy by cfelton*, on Flickr"><img src="https://farm3.staticflickr.com/2932/14230879911_03ce54dcde_z.jpg" width="640" height="322" alt="myhdl_module_anatomy"></a>
<caption> MyHDL Module Anatomy </caption>
</figure></center>
A Shift Register
<!-- there is an assumption the user will know what a shift register is, these exercises are for people that know Verilog/VHDL. Not teaching digital logic from scratch !! -->
What exactly does a shift register do? In the exercise description section there is a link to a short video describing a shift register. Basically, to generate a shift register all we really need is a description of what the expected behavior is. In this case we have a parallel value, load_value, that will be serialized to a single bit, the following table shows the temporal behavior. If we have an constrained integer with a maximum value of 256, the following will be the behavior:
Time | load | ival (d) | shift (b) | obit
-----+------+----------+-----------+-----
T0 | 1 | 32 | 0000_0000 | 0
T1 | 0 | X | 0010_0000 | 0
T2 | 0 | X | 0100_0000 | 0
T3 | 0 | X | 1000_0000 | 1
T4 | 0 | X | 0000_0001 | 0
T5 | 0 | X | 0000_0010 | 0
In the above table abbreviations are used for the Signals listed in the module.
ival: initial_value
shift: shiftreg
obit: output_bit
Exercise Description
This exercise is to implement the shift register shown with the following additions:
Make the shift register circular
Add an inital condition parameter initial_value
To make the the shift register(YouTube) circular connect the most-significant-bit (msb) to the least-significant-bit (lsb).
Sections from the MyHDL manual that may be useful:
Bit indexing and slicing
Signals, Why Signal Assignments
The concat function
Fill in the body of the following and then run the test cell.
Hints
An internal signal will be used to represent the shift register. The width (max value) of the register is determined by the type of load_value.
End of explanation
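A minimal sketch of one possible completion of the sequential block (my own assumption, not the reference solution; it presumes concat and the other MyHDL names are imported, and that output_bit exposes the most-significant bit, as the port description states):
    @always_seq(clock.posedge, reset=reset)
    def beh():
        if load:
            shiftreg.next = load_value
        else:
            # circular left shift: the msb wraps back around into the lsb
            shiftreg.next = concat(shiftreg[len(shiftreg)-1:0],
                                   shiftreg[len(shiftreg)-1])
        output_bit.next = shiftreg[len(shiftreg)-1]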
stimulator(shifty)
# Note, the following waveform plotter is experimental. Using
# an external waveform viewer, like gtkwave, would be useful.
vcd.parse_and_plot('vcd/01_mex_stim.vcd')
Explanation: The following function will stimulate the above MyHDL module. The stimulators all exercise the module in the same way, whereas the verification (test) will use random values and test numerous cycles. The cell after the stimulator plots the stimulator's waveform. Warning: the embedded VCD waveform plotter is beta and very limited. It is useful for very simple waveforms. For full waveform viewing use an external tool such as gtkwave.
End of explanation
test(shifty)
# View the generated VHDL
%less output/shifty.vhd
# View the generated Verilog
%less output/shifty.v
Explanation: After the above shifty implementation has been coded, run the next cell to test and verify the behavior of the described digital circuit. If the test fails it will print out a number of simulation steps and some values. The VCD file can be displayed via the vcd.parse_and_plot('vcd/01_mex_test.vcd') function (same as above, with the same basic-waveforms warning) for debug, or use an external waveform viewer (e.g. gtkwave) to view the simulation waveform and debug.
End of explanation |
15,735 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
1D Wasserstein barycenter comparison between exact LP and entropic regularization
This example illustrates the computation of regularized Wasserstein Barycenter
as proposed in [3] and exact LP barycenters using standard LP solver.
It reproduces approximately Figure 3.1 and 3.2 from the following paper
Step1: Gaussian Data
Step2: Dirac Data
Step3: Final figure | Python Code:
# Author: Remi Flamary <remi.flamary@unice.fr>
#
# License: MIT License
import numpy as np
import matplotlib.pylab as pl
import ot
# necessary for 3d plot even if not used
from mpl_toolkits.mplot3d import Axes3D # noqa
from matplotlib.collections import PolyCollection # noqa
#import ot.lp.cvx as cvx
Explanation: 1D Wasserstein barycenter comparison between exact LP and entropic regularization
This example illustrates the computation of regularized Wasserstein Barycenter
as proposed in [3] and exact LP barycenters using standard LP solver.
It reproduces approximately Figure 3.1 and 3.2 from the following paper:
Cuturi, M., & Peyré, G. (2016). A smoothed dual approach for variational
Wasserstein problems. SIAM Journal on Imaging Sciences, 9(1), 320-343.
[3] Benamou, J. D., Carlier, G., Cuturi, M., Nenna, L., & Peyré, G. (2015).
Iterative Bregman projections for regularized transportation problems
SIAM Journal on Scientific Computing, 37(2), A1111-A1138.
End of explanation
#%% parameters
problems = []
n = 100 # nb bins
# bin positions
x = np.arange(n, dtype=np.float64)
# Gaussian distributions
# Gaussian distributions
a1 = ot.datasets.make_1D_gauss(n, m=20, s=5) # m= mean, s= std
a2 = ot.datasets.make_1D_gauss(n, m=60, s=8)
# creating matrix A containing all distributions
A = np.vstack((a1, a2)).T
n_distributions = A.shape[1]
# loss matrix + normalization
M = ot.utils.dist0(n)
M /= M.max()
#%% plot the distributions
pl.figure(1, figsize=(6.4, 3))
for i in range(n_distributions):
pl.plot(x, A[:, i])
pl.title('Distributions')
pl.tight_layout()
#%% barycenter computation
alpha = 0.5 # 0<=alpha<=1
weights = np.array([1 - alpha, alpha])
# l2bary
bary_l2 = A.dot(weights)
# wasserstein
reg = 1e-3
ot.tic()
bary_wass = ot.bregman.barycenter(A, M, reg, weights)
ot.toc()
ot.tic()
bary_wass2 = ot.lp.barycenter(A, M, weights, solver='interior-point', verbose=True)
ot.toc()
pl.figure(2)
pl.clf()
pl.subplot(2, 1, 1)
for i in range(n_distributions):
pl.plot(x, A[:, i])
pl.title('Distributions')
pl.subplot(2, 1, 2)
pl.plot(x, bary_l2, 'r', label='l2')
pl.plot(x, bary_wass, 'g', label='Reg Wasserstein')
pl.plot(x, bary_wass2, 'b', label='LP Wasserstein')
pl.legend()
pl.title('Barycenters')
pl.tight_layout()
problems.append([A, [bary_l2, bary_wass, bary_wass2]])
Explanation: Gaussian Data
End of explanation
#%% parameters
a1 = 1.0 * (x > 10) * (x < 50)
a2 = 1.0 * (x > 60) * (x < 80)
a1 /= a1.sum()
a2 /= a2.sum()
# creating matrix A containing all distributions
A = np.vstack((a1, a2)).T
n_distributions = A.shape[1]
# loss matrix + normalization
M = ot.utils.dist0(n)
M /= M.max()
#%% plot the distributions
pl.figure(1, figsize=(6.4, 3))
for i in range(n_distributions):
pl.plot(x, A[:, i])
pl.title('Distributions')
pl.tight_layout()
#%% barycenter computation
alpha = 0.5 # 0<=alpha<=1
weights = np.array([1 - alpha, alpha])
# l2bary
bary_l2 = A.dot(weights)
# wasserstein
reg = 1e-3
ot.tic()
bary_wass = ot.bregman.barycenter(A, M, reg, weights)
ot.toc()
ot.tic()
bary_wass2 = ot.lp.barycenter(A, M, weights, solver='interior-point', verbose=True)
ot.toc()
problems.append([A, [bary_l2, bary_wass, bary_wass2]])
pl.figure(2)
pl.clf()
pl.subplot(2, 1, 1)
for i in range(n_distributions):
pl.plot(x, A[:, i])
pl.title('Distributions')
pl.subplot(2, 1, 2)
pl.plot(x, bary_l2, 'r', label='l2')
pl.plot(x, bary_wass, 'g', label='Reg Wasserstein')
pl.plot(x, bary_wass2, 'b', label='LP Wasserstein')
pl.legend()
pl.title('Barycenters')
pl.tight_layout()
#%% parameters
a1 = np.zeros(n)
a2 = np.zeros(n)
a1[10] = .25
a1[20] = .5
a1[30] = .25
a2[80] = 1
a1 /= a1.sum()
a2 /= a2.sum()
# creating matrix A containing all distributions
A = np.vstack((a1, a2)).T
n_distributions = A.shape[1]
# loss matrix + normalization
M = ot.utils.dist0(n)
M /= M.max()
#%% plot the distributions
pl.figure(1, figsize=(6.4, 3))
for i in range(n_distributions):
pl.plot(x, A[:, i])
pl.title('Distributions')
pl.tight_layout()
#%% barycenter computation
alpha = 0.5 # 0<=alpha<=1
weights = np.array([1 - alpha, alpha])
# l2bary
bary_l2 = A.dot(weights)
# wasserstein
reg = 1e-3
ot.tic()
bary_wass = ot.bregman.barycenter(A, M, reg, weights)
ot.toc()
ot.tic()
bary_wass2 = ot.lp.barycenter(A, M, weights, solver='interior-point', verbose=True)
ot.toc()
problems.append([A, [bary_l2, bary_wass, bary_wass2]])
pl.figure(2)
pl.clf()
pl.subplot(2, 1, 1)
for i in range(n_distributions):
pl.plot(x, A[:, i])
pl.title('Distributions')
pl.subplot(2, 1, 2)
pl.plot(x, bary_l2, 'r', label='l2')
pl.plot(x, bary_wass, 'g', label='Reg Wasserstein')
pl.plot(x, bary_wass2, 'b', label='LP Wasserstein')
pl.legend()
pl.title('Barycenters')
pl.tight_layout()
Explanation: Dirac Data
End of explanation
#%% plot
nbm = len(problems)
nbm2 = (nbm // 2)
pl.figure(2, (20, 6))
pl.clf()
for i in range(nbm):
A = problems[i][0]
bary_l2 = problems[i][1][0]
bary_wass = problems[i][1][1]
bary_wass2 = problems[i][1][2]
pl.subplot(2, nbm, 1 + i)
for j in range(n_distributions):
pl.plot(x, A[:, j])
if i == nbm2:
pl.title('Distributions')
pl.xticks(())
pl.yticks(())
pl.subplot(2, nbm, 1 + i + nbm)
pl.plot(x, bary_l2, 'r', label='L2 (Euclidean)')
pl.plot(x, bary_wass, 'g', label='Reg Wasserstein')
pl.plot(x, bary_wass2, 'b', label='LP Wasserstein')
if i == nbm - 1:
pl.legend()
if i == nbm2:
pl.title('Barycenters')
pl.xticks(())
pl.yticks(())
Explanation: Final figure
End of explanation |
15,736 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Train Network for feature extraction
Feed MFCC values for each song to and encoder-decoder network.
Step1: Read data
Pad items with max length of 150
X.shape = (N, 150, 20)
Step2: Train
Reconstruct sequences from a dense vector of size 20
Step3: Load previous model | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn
seaborn.set()
import cPickle
import numpy as np
from keras import backend as K
from keras.models import Sequential, model_from_yaml
from keras.layers.recurrent import LSTM
from keras.layers.core import Activation, Dense, Dropout, RepeatVector
from keras.layers.wrappers import TimeDistributed
from keras.preprocessing import sequence
import yaml
import os
Explanation: Train Network for feature extraction
Feed the MFCC values for each song to an encoder-decoder network.
End of explanation
# Read data
config = yaml.load(open(os.path.join(os.path.expanduser("~"), ".blackbird", "config.yaml")).read())
seq_features = cPickle.load(open(config["data"]["features"], "rb"))
weights_file = config["data"]["model"]["weights"]
arch_file = config["data"]["model"]["arch"]
output_layer = int(config["data"]["model"]["output"])
maxlen = 150
X = np.empty((len(seq_features), maxlen, 20))
for idx, key in enumerate(seq_features):
X[idx, :, :] = sequence.pad_sequences(seq_features[key], maxlen=maxlen, dtype="float32").T
Explanation: Read data
Pad items with max length of 150
X.shape = (N, 150, 20)
End of explanation
# Create model
model = Sequential()
model.add(LSTM(64, return_sequences=False, input_shape=(maxlen, 20), go_backwards=True))
model.add(Dropout(0.5))
model.add(Dense(20))
model.add(Activation("tanh"))
model.add(RepeatVector(maxlen))
model.add(Dropout(0.5))
model.add(LSTM(64, return_sequences=True, go_backwards=True))
model.add(TimeDistributed(Dense(20)))
model.compile(loss="mse", optimizer="adam")
# Train
history = model.fit(X, X, batch_size=128, nb_epoch=500, validation_split=0.2, verbose=1)
# Use the validation loss curve to stop at a good solution
plt.figure(figsize=(14, 5))
plt.plot(history.history["loss"], label="Training loss")
plt.plot(history.history["val_loss"], label="Validation loss")
plt.legend()
plt.show()
# Save architecture and weights
if os.path.isfile(weights_file):
os.rename(weights_file, weights_file + ".backup")
if os.path.isfile(arch_file):
os.rename(arch_file, arch_file + ".backup")
# Save things
open(arch_file, "w").write(model.to_yaml())
model.save_weights(weights_file)
Explanation: Train
Reconstruct sequences from a dense vector of size 20
End of explanation
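As a hedged aside (assuming the same Keras version used in this notebook), the run above could also be stopped automatically with an EarlyStopping callback instead of reading the loss curve by eye; the patience value below is an arbitrary choice:
# Optional sketch: stop when validation loss stops improving
from keras.callbacks import EarlyStopping
early_stop = EarlyStopping(monitor="val_loss", patience=10)
history = model.fit(X, X, batch_size=128, nb_epoch=500, validation_split=0.2,
                    verbose=1, callbacks=[early_stop])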
# Load model
model = model_from_yaml(open(arch_file).read())
model.load_weights(weights_file)
# Function to predict output
predict = K.function([model.layers[0].input, K.learning_phase()],
model.layers[output_layer].output)
# Predict output
test_X = predict([X, 0])
Explanation: Load previous model
End of explanation |
15,737 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Alpine skiing
Data processing
Step1: First I scraped data about the skiers and their FIS ids from the FIS website. I needed the ids to build the url addresses of the individual athletes. I then saved the collected data into the file smucarji.csv.
Step2: The table looks like this
Step3: Then, for each competitor, I collected data about every race from their results page (e.g. Eva-Maria Brem)
Step4: The table for Eva-Maria Brem
Step5: A problem appears in the later analysis: the placement data can be either numbers or text (e.g. DNQ1, DNF1, DSQ2 and DNS1), which marks withdrawals, disqualifications and similar anomalies.
So we add a new column mesto1 to the table, in which the textual values are mapped to 0. Here we do not care why the competitor did not score any points.
Step6: The difference between the columns 'mesto' and 'mesto1' is visible in rows 2 and 13.
If we want to analyse the overall standings, we also have to convert the placement into points. We define a list 'tocke', whose i-th entry (i running from 0 to 30) stores how many points a competitor receives for finishing in i-th place.
Step7: Note
Step8: Let us look at which disciplines Eva-Maria Brem competes in most often
Step9: Eva-Maria Brem therefore competes most often in slalom and giant slalom. Let us also illustrate this with a chart
Step10: Although she competes most often in slalom and giant slalom, these are not necessarily the disciplines in which she achieves her best results. Let us first look at her results in slalom and then in giant slalom
Step11: The tables show that her slalom results mostly sit at the tail end of the top thirty, while in giant slalom she places among the top 5. This is even clearer from the charts
Step12: Analysis of nationalities
We want to know how many skiers there are of each nationality. First we count them, then we illustrate this with a chart
Step13: Fix the chart so that the smallest categories are displayed nicely!!!
Analysis of skis
First let us look at which ski brands are the most common in the World Cup
Step14: Let us see which countries' athletes use Head skis (and how many of them there are)
Step15: Similarly, we can look at which ski manufacturers the Austrian skiers trust the most | Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as py
#import scipy
# Make the graphs a bit prettier, and bigger
#pd.set_option('display.mpl_style', 'default')
#plt.rcParams['figure.figsize'] = (15, 5)
# This is necessary to show lots of columns in pandas 0.12.
# Not necessary in pandas 0.13.
pd.set_option('display.width', 5000)
pd.set_option('display.max_columns', 60)
Explanation: Alpine skiing
Data processing
End of explanation
pot="C://Users/Ziva/Documents/AlpineSkiing/csv-datoteke/smucarji.csv"
smucarji = pd.read_csv(pot, parse_dates=['rojstvo'])
Explanation: First I scraped data about the skiers and their FIS ids from the FIS website. I needed the ids to build the url addresses of the individual athletes. I then saved the collected data into the file smucarji.csv.
End of explanation
smucarji[:10]
Explanation: The table looks like this:
End of explanation
pot_brem = "C:/Users/Ziva/Documents/AlpineSkiing/csv-datoteke/BREM Eva-Maria.csv"
brem = pd.read_csv(pot_brem, parse_dates=['datum'])
Explanation: Then, for each competitor, I collected data about every race from their results page (e.g. Eva-Maria Brem): date, venue, discipline, placement, time behind.
End of explanation
brem[:15]
Explanation: The table for Eva-Maria Brem:
End of explanation
def pretvori(bes):
if bes in ['DNQ1', 'DNF1', 'DSQ2', 'DNS1','DNF2']:
return 0
else:
return int(bes)
brem['mesto1'] = brem['mesto'].map(pretvori)
brem[:15]
Explanation: A problem appears in the later analysis: the placement data can be either numbers or text (e.g. DNQ1, DNF1, DSQ2 and DNS1), which marks withdrawals, disqualifications and similar anomalies.
So we add a new column mesto1 to the table, in which the textual values are mapped to 0. Here we do not care why the competitor did not score any points.
End of explanation
tocke=[0,100,80,60,50,45,40,36,32,29,26,24,22,20,18,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1]
def pretvori_2(bes):
if bes in ["DNQ1", "DNF1", "DSQ2", "DNS1", "DNF2"]:
return 0
else:
if int(bes) > 30:
return 0
else:
return tocke[int(bes)];
Explanation: The difference between the columns 'mesto' and 'mesto1' is visible in rows 2 and 13.
If we want to analyse the overall standings, we also have to convert the placement into points. We define a list 'tocke', whose i-th entry (i running from 0 to 30) stores how many points a competitor receives for finishing in i-th place.
End of explanation
brem['tocke'] = brem['mesto'].map(pretvori_2)
brem[:15]
Explanation: Note: it would make more sense to add mesto1 and tocke to the original csv!!!!
End of explanation
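A small illustrative sketch (assuming the 'datum' and 'tocke' columns built above) of summing the points per calendar year:
# Sketch: total World Cup points per calendar year
brem.groupby(brem['datum'].dt.year)['tocke'].sum()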
brem['disciplina'].value_counts()
Explanation: Let us look at which disciplines Eva-Maria Brem competes in most often:
End of explanation
brem['disciplina'].value_counts().plot(kind='pie', figsize=(6,6))
Explanation: Eva-Maria Brem therefore competes most often in slalom and giant slalom. Let us also illustrate this with a chart:
End of explanation
slalom = brem['disciplina'] == 'Slalom'
brem[slalom][:15]
veleslalom = brem['disciplina'] == 'Giant Slalom'
brem[veleslalom][:15]
Explanation: Although she competes most often in slalom and giant slalom, these are not necessarily the disciplines in which she achieves her best results. Let us first look at her results in slalom and then in giant slalom:
End of explanation
brem[slalom]['mesto1'].value_counts().plot(kind='bar')
urejen = brem[veleslalom].sort_values(['mesto1'], ascending=True)
#urejen['mesto1'].value_counts()
#urejen['mesto1'].value_counts().plot(kind='bar')
#cannot be sorted properly, because DNQ1, DNF1, DSQ2 and DNS1 are not valid placements.
brem[veleslalom]['mesto1'].value_counts().plot(kind='bar')
Explanation: The tables show that her slalom results mostly sit at the tail end of the top thirty, while in giant slalom she places among the top 5. This is even clearer from the charts:
End of explanation
smucarji['drzava'].value_counts()
smucarji['drzava'].value_counts().plot(kind='pie', figsize = (6,6))
Explanation: Analysis of nationalities
We want to know how many skiers there are of each nationality. First we count them, then we illustrate this with a chart:
End of explanation
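One possible way (a sketch, not the author's own fix for the chart-readability note in this notebook) to make the smallest countries readable is a horizontal bar chart instead of a pie:
# Sketch: horizontal bars are easier to read for small categories
smucarji['drzava'].value_counts().plot(kind='barh', figsize=(6, 8))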
smucarji['smuci'].value_counts()
smucarji['smuci'].value_counts().plot(kind='pie', figsize=(6,6))
Explanation: Fix the chart so that the smallest categories are displayed nicely!!!
Analysis of skis
First let us look at which ski brands are the most common in the World Cup:
End of explanation
smucarji[smucarji['smuci'] == "Head"]['drzava'].value_counts().plot(kind='bar')
Explanation: Let us see which countries' athletes use Head skis (and how many of them there are):
To do: make a chart that shows this for all ski brands.
End of explanation
smucarji[smucarji['drzava'] == "AUT"]['smuci'].value_counts().plot(kind='bar')
Explanation: Similarly, we can look at which ski manufacturers the Austrian skiers trust the most:
To do: make a chart like this that shows it for all countries!
End of explanation |
15,738 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
Step1: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
Step2: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
Step3: And we can see the characters encoded as integers.
Step4: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
Step5: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this
Step6: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
Step7: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for y.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise
Step8: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow will create different weight matrices for all cell objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Exercise
Step9: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise
Step10: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise
Step11: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
Step12: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise
Step13: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular
Step14: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise
Step15: Saved checkpoints
Read up on saving and loading checkpoints here
Step16: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
Step17: Here, pass in the path to a checkpoint and sample from the network. | Python Code:
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
Explanation: Anna KaRNNa
In this notebook, we'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
End of explanation
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
encoded = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
Explanation: First we'll load the text file and convert it into integers for our network to use. Here I'm creating a couple dictionaries to convert the characters to and from integers. Encoding the characters as integers makes it easier to use as input in the network.
End of explanation
text[:100]
Explanation: Let's check out the first 100 characters, make sure everything is peachy. According to the American Book Review, this is the 6th best first line of a book ever.
End of explanation
encoded[:100]
Explanation: And we can see the characters encoded as integers.
End of explanation
len(vocab)
Explanation: Since the network is working with individual characters, it's similar to a classification problem in which we are trying to predict the next character from the previous text. Here's how many 'classes' our network has to pick from.
End of explanation
def get_batches(arr, n_seqs, n_steps):
'''Create a generator that returns batches of size
n_seqs x n_steps from arr.
Arguments
---------
arr: Array you want to make batches from
n_seqs: Batch size, the number of sequences per batch
n_steps: Number of sequence steps per batch
'''
# Get the batch size and number of batches we can make
batch_size =
n_batches =
# Keep only enough characters to make full batches
arr =
# Reshape into n_seqs rows
arr =
for n in range(0, arr.shape[1], n_steps):
# The features
x =
# The targets, shifted by one
y =
yield x, y
Explanation: Making training mini-batches
Here is where we'll make our mini-batches for training. Remember that we want our batches to be multiple sequences of some desired number of sequence steps. Considering a simple example, our batches would look like this:
<img src="assets/sequence_batching@1x.png" width=500px>
<br>
We have our text encoded as integers as one long array in encoded. Let's create a function that will give us an iterator for our batches. I like using generator functions to do this. Then we can pass encoded into this function and get our batch generator.
The first thing we need to do is discard some of the text so we only have completely full batches. Each batch contains $N \times M$ characters, where $N$ is the batch size (the number of sequences) and $M$ is the number of steps. Then, to get the number of batches we can make from some array arr, you divide the length of arr by the batch size. Once you know the number of batches and the batch size, you can get the total number of characters to keep.
After that, we need to split arr into $N$ sequences. You can do this using arr.reshape(size) where size is a tuple containing the dimensions sizes of the reshaped array. We know we want $N$ sequences (n_seqs below), let's make that the size of the first dimension. For the second dimension, you can use -1 as a placeholder in the size, it'll fill up the array with the appropriate data for you. After this, you should have an array that is $N \times (M * K)$ where $K$ is the number of batches.
Now that we have this array, we can iterate through it to get our batches. The idea is each batch is a $N \times M$ window on the array. For each subsequent batch, the window moves over by n_steps. We also want to create both the input and target arrays. Remember that the targets are the inputs shifted over one character. You'll usually see the first input character used as the last target character, so something like this:
python
y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
where x is the input batch and y is the target batch.
The way I like to do this window is use range to take steps of size n_steps from $0$ to arr.shape[1], the total number of steps in each sequence. That way, the integers you get from range always point to the start of a batch, and each window is n_steps wide.
Exercise: Write the code for creating batches in the function below. The exercises in this notebook will not be easy. I've provided a notebook with solutions alongside this notebook. If you get stuck, checkout the solutions. The most important thing is that you don't copy and paste the code into here, type out the solution code yourself.
End of explanation
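For reference, one possible completion of the get_batches skeleton above (an illustrative sketch, not the official solution notebook):
def get_batches(arr, n_seqs, n_steps):
    # Total characters per batch and the number of full batches we can make
    characters_per_batch = n_seqs * n_steps
    n_batches = len(arr) // characters_per_batch
    # Keep only enough characters to make full batches
    arr = arr[:n_batches * characters_per_batch]
    # Reshape into n_seqs rows
    arr = arr.reshape((n_seqs, -1))
    for n in range(0, arr.shape[1], n_steps):
        # The features
        x = arr[:, n:n + n_steps]
        # The targets, shifted by one (the last target wraps around to the first input)
        y = np.zeros_like(x)
        y[:, :-1], y[:, -1] = x[:, 1:], x[:, 0]
        yield x, y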
batches = get_batches(encoded, 10, 50)
x, y = next(batches)
print('x\n', x[:10, :10])
print('\ny\n', y[:10, :10])
Explanation: Now I'll make my data sets and we can check out what's going on here. Here I'm going to use a batch size of 10 and 50 sequence steps.
End of explanation
def build_inputs(batch_size, num_steps):
''' Define placeholders for inputs, targets, and dropout
Arguments
---------
batch_size: Batch size, number of sequences per batch
num_steps: Number of sequence steps in a batch
'''
# Declare placeholders we'll feed into the graph
inputs =
targets =
# Keep probability placeholder for drop out layers
keep_prob =
return inputs, targets, keep_prob
Explanation: If you implemented get_batches correctly, the above output should look something like
```
x
[[55 63 69 22 6 76 45 5 16 35]
[ 5 69 1 5 12 52 6 5 56 52]
[48 29 12 61 35 35 8 64 76 78]
[12 5 24 39 45 29 12 56 5 63]
[ 5 29 6 5 29 78 28 5 78 29]
[ 5 13 6 5 36 69 78 35 52 12]
[63 76 12 5 18 52 1 76 5 58]
[34 5 73 39 6 5 12 52 36 5]
[ 6 5 29 78 12 79 6 61 5 59]
[ 5 78 69 29 24 5 6 52 5 63]]
y
[[63 69 22 6 76 45 5 16 35 35]
[69 1 5 12 52 6 5 56 52 29]
[29 12 61 35 35 8 64 76 78 28]
[ 5 24 39 45 29 12 56 5 63 29]
[29 6 5 29 78 28 5 78 29 45]
[13 6 5 36 69 78 35 52 12 43]
[76 12 5 18 52 1 76 5 58 52]
[ 5 73 39 6 5 12 52 36 5 78]
[ 5 29 78 12 79 6 61 5 59 63]
[78 69 29 24 5 6 52 5 63 76]]
```
although the exact numbers will be different. Check to make sure the data is shifted over one step for y.
Building the model
Below is where you'll build the network. We'll break it up into parts so it's easier to reason about each bit. Then we can connect them up into the whole network.
<img src="assets/charRNN.png" width=500px>
Inputs
First off we'll create our input placeholders. As usual we need placeholders for the training data and the targets. We'll also create a placeholder for dropout layers called keep_prob. This will be a scalar, that is a 0-D tensor. To make a scalar, you create a placeholder without giving it a size.
Exercise: Create the input placeholders in the function below.
End of explanation
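For reference, a possible completion of build_inputs (a sketch, assuming the TensorFlow 1.x API used throughout this notebook):
def build_inputs(batch_size, num_steps):
    # Placeholders for the input and target sequences
    inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
    targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
    # Scalar (0-D) placeholder for the dropout keep probability
    keep_prob = tf.placeholder(tf.float32, name='keep_prob')
    return inputs, targets, keep_prob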
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
''' Build LSTM cell.
Arguments
---------
keep_prob: Scalar tensor (tf.placeholder) for the dropout keep probability
lstm_size: Size of the hidden layers in the LSTM cells
num_layers: Number of LSTM layers
batch_size: Batch size
'''
### Build the LSTM Cell
# Use a basic LSTM cell
lstm =
# Add dropout to the cell outputs
drop =
# Stack up multiple LSTM layers, for deep learning
cell =
initial_state =
return cell, initial_state
Explanation: LSTM Cell
Here we will create the LSTM cell we'll use in the hidden layer. We'll use this cell as a building block for the RNN. So we aren't actually defining the RNN here, just the type of cell we'll use in the hidden layer.
We first create a basic LSTM cell with
python
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
where num_units is the number of units in the hidden layers in the cell. Then we can add dropout by wrapping it with
python
tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
You pass in a cell and it will automatically add dropout to the inputs or outputs. Finally, we can stack up the LSTM cells into layers with tf.contrib.rnn.MultiRNNCell. With this, you pass in a list of cells and it will send the output of one cell into the next cell. For example,
python
tf.contrib.rnn.MultiRNNCell([cell]*num_layers)
This might look a little weird if you know Python well because this will create a list of the same cell object. However, TensorFlow will create different weight matrices for all cell objects. Even though this is actually multiple LSTM cells stacked on each other, you can treat the multiple layers as one cell.
We also need to create an initial cell state of all zeros. This can be done like so
python
initial_state = cell.zero_state(batch_size, tf.float32)
Exercise: Below, implement the build_lstm function to create these LSTM cells and the initial state.
End of explanation
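For reference, a possible completion of build_lstm (a sketch built from the tf.contrib.rnn calls quoted above; creating one cell object per layer, rather than repeating the same object, also works on later TensorFlow 1.x releases):
def build_lstm(lstm_size, num_layers, batch_size, keep_prob):
    def build_cell():
        # Basic LSTM cell with dropout on the outputs
        lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
        return tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
    # Stack up multiple LSTM layers
    cell = tf.contrib.rnn.MultiRNNCell([build_cell() for _ in range(num_layers)])
    initial_state = cell.zero_state(batch_size, tf.float32)
    return cell, initial_state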
def build_output(lstm_output, in_size, out_size):
''' Build a softmax layer, return the softmax output and logits.
Arguments
---------
lstm_output: List of output tensors from the LSTM layer
in_size: Size of the input tensor, for example, size of the LSTM cells
out_size: Size of this softmax layer
'''
# Reshape output so it's a bunch of rows, one row for each step for each sequence.
# Concatenate lstm_output over axis 1 (the columns)
seq_output =
# Reshape seq_output to a 2D tensor with lstm_size columns
x =
# Connect the RNN outputs to a softmax layer
with tf.variable_scope('softmax'):
# Create the weight and bias variables here
softmax_w =
softmax_b =
# Since output is a bunch of rows of RNN cell outputs, logits will be a bunch
# of rows of logit outputs, one for each step and sequence
logits =
# Use softmax to get the probabilities for predicted characters
out =
return out, logits
Explanation: RNN Output
Here we'll create the output layer. We need to connect the output of the RNN cells to a fully connected layer with a softmax output. The softmax output gives us a probability distribution we can use to predict the next character, so we want this layer to have size $C$, the number of classes/characters we have in our text.
If our input has batch size $N$, number of steps $M$, and the hidden layer has $L$ hidden units, then the output is a 3D tensor with size $N \times M \times L$. The output of each LSTM cell has size $L$, we have $M$ of them, one for each sequence step, and we have $N$ sequences. So the total size is $N \times M \times L$.
We are using the same fully connected layer, the same weights, for each of the outputs. Then, to make things easier, we should reshape the outputs into a 2D tensor with shape $(M * N) \times L$. That is, one row for each sequence and step, where the values of each row are the output from the LSTM cells. We get the LSTM output as a list, lstm_output. First we need to concatenate this whole list into one array with tf.concat. Then, reshape it (with tf.reshape) to size $(M * N) \times L$.
Once we have the outputs reshaped, we can do the matrix multiplication with the weights. We need to wrap the weight and bias variables in a variable scope with tf.variable_scope(scope_name) because there are weights being created in the LSTM cells. TensorFlow will throw an error if the weights created here have the same names as the weights created in the LSTM cells, which they will be by default. To avoid this, we wrap the variables in a variable scope so we can give them unique names.
Exercise: Implement the output layer in the function below.
End of explanation
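For reference, a possible completion of build_output (an illustrative sketch, not the official solution):
def build_output(lstm_output, in_size, out_size):
    # Concatenate the per-step outputs and flatten to one row per step per sequence
    seq_output = tf.concat(lstm_output, axis=1)
    x = tf.reshape(seq_output, [-1, in_size])
    # Weights and bias for the softmax layer, in their own variable scope
    with tf.variable_scope('softmax'):
        softmax_w = tf.Variable(tf.truncated_normal((in_size, out_size), stddev=0.1))
        softmax_b = tf.Variable(tf.zeros(out_size))
    # Logits and softmax probabilities for every step of every sequence
    logits = tf.matmul(x, softmax_w) + softmax_b
    out = tf.nn.softmax(logits, name='predictions')
    return out, logits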
def build_loss(logits, targets, lstm_size, num_classes):
''' Calculate the loss from the logits and the targets.
Arguments
---------
logits: Logits from final fully connected layer
targets: Targets for supervised learning
lstm_size: Number of LSTM hidden units
num_classes: Number of classes in targets
'''
# One-hot encode targets and reshape to match logits, one row per sequence per step
y_one_hot =
y_reshaped =
# Softmax cross entropy loss
loss =
return loss
Explanation: Training loss
Next up is the training loss. We get the logits and targets and calculate the softmax cross-entropy loss. First we need to one-hot encode the targets, we're getting them as encoded characters. Then, reshape the one-hot targets so it's a 2D tensor with size $(MN) \times C$ where $C$ is the number of classes/characters we have. Remember that we reshaped the LSTM outputs and ran them through a fully connected layer with $C$ units. So our logits will also have size $(MN) \times C$.
Then we run the logits and targets through tf.nn.softmax_cross_entropy_with_logits and find the mean to get the loss.
Exercise: Implement the loss calculation in the function below.
End of explanation
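For reference, a possible completion of build_loss (an illustrative sketch):
def build_loss(logits, targets, lstm_size, num_classes):
    # One-hot encode the targets and reshape them to match the logits
    y_one_hot = tf.one_hot(targets, num_classes)
    y_reshaped = tf.reshape(y_one_hot, logits.get_shape())
    # Softmax cross-entropy, averaged over all steps and sequences
    loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped)
    loss = tf.reduce_mean(loss)
    return loss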
def build_optimizer(loss, learning_rate, grad_clip):
''' Build optimizer for training, using gradient clipping.
Arguments:
loss: Network loss
learning_rate: Learning rate for optimizer
'''
# Optimizer for training, using gradient clipping to control exploding gradients
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(loss, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
return optimizer
Explanation: Optimizer
Here we build the optimizer. Normal RNNs have issues with gradients exploding and disappearing. LSTMs fix the disappearance problem, but the gradients can still grow without bound. To fix this, we can clip the gradients above some threshold. That is, if a gradient is larger than that threshold, we set it to the threshold. This will ensure the gradients never grow overly large. Then we use an AdamOptimizer for the learning step.
End of explanation
class CharRNN:
def __init__(self, num_classes, batch_size=64, num_steps=50,
lstm_size=128, num_layers=2, learning_rate=0.001,
grad_clip=5, sampling=False):
# When we're using this network for sampling later, we'll be passing in
# one character at a time, so providing an option for that
if sampling == True:
batch_size, num_steps = 1, 1
else:
batch_size, num_steps = batch_size, num_steps
tf.reset_default_graph()
# Build the input placeholder tensors
self.inputs, self.targets, self.keep_prob =
# Build the LSTM cell
cell, self.initial_state =
### Run the data through the RNN layers
# First, one-hot encode the input tokens
x_one_hot =
# Run each sequence step through the RNN with tf.nn.dynamic_rnn
outputs, state =
self.final_state = state
# Get softmax predictions and logits
self.prediction, self.logits =
# Loss and optimizer (with gradient clipping)
self.loss =
self.optimizer =
Explanation: Build the network
Now we can put all the pieces together and build a class for the network. To actually run data through the LSTM cells, we will use tf.nn.dynamic_rnn. This function will pass the hidden and cell states across LSTM cells appropriately for us. It returns the outputs for each LSTM cell at each step for each sequence in the mini-batch. It also gives us the final LSTM state. We want to save this state as final_state so we can pass it to the first LSTM cell in the the next mini-batch run. For tf.nn.dynamic_rnn, we pass in the cell and initial state we get from build_lstm, as well as our input sequences. Also, we need to one-hot encode the inputs before going into the RNN.
Exercise: Use the functions you've implemented previously and tf.nn.dynamic_rnn to build the network.
End of explanation
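For reference, one way (a sketch, not the official solution) to fill in the blanks of CharRNN.__init__ above, wiring together the helper functions defined earlier:
# self.inputs, self.targets, self.keep_prob = build_inputs(batch_size, num_steps)
# cell, self.initial_state = build_lstm(lstm_size, num_layers, batch_size, self.keep_prob)
# x_one_hot = tf.one_hot(self.inputs, num_classes)
# outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=self.initial_state)
# self.prediction, self.logits = build_output(outputs, lstm_size, num_classes)
# self.loss = build_loss(self.logits, self.targets, lstm_size, num_classes)
# self.optimizer = build_optimizer(self.loss, learning_rate, grad_clip)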
batch_size = 10 # Sequences per batch
num_steps = 50 # Number of sequence steps per batch
lstm_size = 128 # Size of hidden layers in LSTMs
num_layers = 2 # Number of LSTM layers
learning_rate = 0.01 # Learning rate
keep_prob = 0.5 # Dropout keep probability
Explanation: Hyperparameters
Here are the hyperparameters for the network.
batch_size - Number of sequences running through the network in one pass.
num_steps - Number of characters in the sequence the network is trained on. Larger is better typically, the network will learn more long range dependencies. But it takes longer to train. 100 is typically a good number here.
lstm_size - The number of units in the hidden layers.
num_layers - Number of hidden LSTM layers to use
learning_rate - Learning rate for training
keep_prob - The dropout keep probability when training. If your network is overfitting, try decreasing this.
Here's some good advice from Andrej Karpathy on training the network. I'm going to copy it in here for your benefit, but also link to where it originally came from.
Tips and Tricks
Monitoring Validation Loss vs. Training Loss
If you're somewhat new to Machine Learning or Neural Networks it can take a bit of expertise to get good models. The most important quantity to keep track of is the difference between your training loss (printed during training) and the validation loss (printed once in a while when the RNN is run on the validation data (by default every 1000 iterations)). In particular:
If your training loss is much lower than validation loss then this means the network might be overfitting. Solutions to this are to decrease your network size, or to increase dropout. For example you could try dropout of 0.5 and so on.
If your training/validation loss are about equal then your model is underfitting. Increase the size of your model (either number of layers or the raw number of neurons per layer)
Approximate number of parameters
The two most important parameters that control the model are lstm_size and num_layers. I would advise that you always use num_layers of either 2/3. The lstm_size can be adjusted based on how much data you have. The two important quantities to keep track of here are:
The number of parameters in your model. This is printed when you start training.
The size of your dataset. 1MB file is approximately 1 million characters.
These two should be about the same order of magnitude. It's a little tricky to tell. Here are some examples:
I have a 100MB dataset and I'm using the default parameter settings (which currently print 150K parameters). My data size is significantly larger (100 mil >> 0.15 mil), so I expect to heavily underfit. I am thinking I can comfortably afford to make lstm_size larger.
I have a 10MB dataset and running a 10 million parameter model. I'm slightly nervous and I'm carefully monitoring my validation loss. If it's larger than my training loss then I may want to try to increase dropout a bit and see if that helps the validation loss.
Best models strategy
The winning strategy to obtaining very good models (if you have the compute time) is to always err on making the network larger (as large as you're willing to wait for it to compute) and then try different dropout values (between 0,1). Whatever model has the best validation performance (the loss, written in the checkpoint filename, low is good) is the one you should use in the end.
It is very common in deep learning to run many different models with many different hyperparameter settings, and in the end take whatever checkpoint gave the best validation performance.
By the way, the size of your training and validation splits are also parameters. Make sure you have a decent amount of data in your validation set or otherwise the validation performance will be noisy and not very informative.
End of explanation
epochs = 20
# Save every N iterations
save_every_n = 200
model = CharRNN(len(vocab), batch_size=batch_size, num_steps=num_steps,
lstm_size=lstm_size, num_layers=num_layers,
learning_rate=learning_rate)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/______.ckpt')
counter = 0
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for x, y in get_batches(encoded, batch_size, num_steps):
counter += 1
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: keep_prob,
model.initial_state: new_state}
batch_loss, new_state, _ = sess.run([model.loss,
model.final_state,
model.optimizer],
feed_dict=feed)
end = time.time()
print('Epoch: {}/{}... '.format(e+1, epochs),
'Training Step: {}... '.format(counter),
'Training loss: {:.4f}... '.format(batch_loss),
'{:.4f} sec/batch'.format((end-start)))
if (counter % save_every_n == 0):
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
saver.save(sess, "checkpoints/i{}_l{}.ckpt".format(counter, lstm_size))
Explanation: Time for training
This is typical training code, passing inputs and targets into the network, then running the optimizer. Here we also get back the final LSTM state for the mini-batch. Then, we pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I save a checkpoint.
Here I'm saving checkpoints with the format
i{iteration number}_l{# hidden layer units}.ckpt
Exercise: Set the hyperparameters above to train the network. Watch the training loss, it should be consistently dropping. Also, I highly advise running this on a GPU.
End of explanation
tf.train.get_checkpoint_state('checkpoints')
Explanation: Saved checkpoints
Read up on saving and loading checkpoints here: https://www.tensorflow.org/programmers_guide/variables
End of explanation
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = CharRNN(len(vocab), lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.prediction, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
Explanation: Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
End of explanation
tf.train.latest_checkpoint('checkpoints')
checkpoint = tf.train.latest_checkpoint('checkpoints')
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i600_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = 'checkpoints/i1200_l512.ckpt'
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
Explanation: Here, pass in the path to a checkpoint and sample from the network.
End of explanation |
15,739 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab 8 - Simple Harmonic Oscillator states
Problems from Chapter 12
Step1: Define the standard operators
Step2: Problem 12.1
Step3: Problem 12.2
Step4: Problem 12.3 (use n=2 as a test-case)
Step5: Problem 12.5 and 12.6
Step6: Problem 12.7
Step7: Alternatively, we can find the indeterminacy bound for ΔX and ΔP (the unitless operators)
Step8: Which is also satisfied by the calculated value (1.41 > 0.25)
Problem 12.8 | Python Code:
from numpy import sqrt
from qutip import *
Explanation: Lab 8 - Simple Harmonic Oscillator states
Problems from Chapter 12
End of explanation
N = 10 # pick a size for our state-space
a = destroy(N)
n = a.dag()*a
Explanation: Define the standard operators
End of explanation
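As a quick sanity check (a sketch, not part of the original lab), the truncated number operator defined above should have eigenvalues 0 through N-1:
# eigenvalues of the number operator on the truncated state-space
n.eigenenergies()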
a*a.dag() - a.dag()*a
Explanation: Problem 12.1:
End of explanation
n*a.dag() - a.dag()*n
n*a.dag() - a.dag()*n == a.dag()
Explanation: Problem 12.2:
End of explanation
psi = basis(N,2)
psi
a.dag()*psi
a.dag()*basis(N,2) == sqrt(3)*basis(N,3)
Explanation: Problem 12.3 (use n=2 as a test-case):
To define $|2\rangle$ use the basis(N,n) command where N is the dimension of the vector, and n is the quantum number.
End of explanation
a
a.dag()
Explanation: Problem 12.5 and 12.6:
These are simple, just view the matrix representation of the operators
End of explanation
X = 1/2 * (a + a.dag())
P = 1/2j * (a - a.dag())
psi = 1/sqrt(2)*(basis(N,1)+basis(N,2))
ex = psi.dag()*X*psi
exq = psi.dag()*X*X*psi
ep = psi.dag()*P*psi
epq = psi.dag()*P*P*psi
deltaX = sqrt(exq[0][0][0] - ex[0][0][0]**2)
deltaP = sqrt(epq[0][0][0] - ep[0][0][0]**2)
deltaX * deltaP * 2 # compare to uncertainty relation (ΔxΔp >= 1/2)
# the factor of two is to convert from the unitless version of the operator
Explanation: Problem 12.7:
First, define $\hat{X}$ and $\hat{P}$ operators
End of explanation
1/2*(psi.dag()*commutator(X,P)*psi).norm()
Explanation: Alternatively, we can find the indeterminacy bound for ΔX and ΔP (the unitless operators): $$\Delta X \Delta P \geq \frac{1}{2}\left|\left\langle \left[\hat{X},\hat{P}\right] \right\rangle\right|$$
End of explanation
psi = 1/sqrt(2)*(basis(N,2)+basis(N,4))
ex = psi.dag()*X*psi
exq = psi.dag()*X*X*psi
ep = psi.dag()*P*psi
epq = psi.dag()*P*P*psi
deltaX = sqrt(exq[0][0][0] - ex[0][0][0]**2)
deltaP = sqrt(epq[0][0][0] - ep[0][0][0]**2)
deltaX * deltaP * 2 # to compare to book solution which uses the full x and p operators with units
Explanation: Which is also satisfied by the calculated value (1.41 > 0.25)
Problem 12.8:
End of explanation |
15,740 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training Ensemble on MNIST Dataset
On the function points branch of nengo
On the vision branch of nengo_extras
Step1: Load the MNIST training and testing images
Step2: Create array of images and rotated pairs and list of structural similarities
Each set of images contains an upright image and an image rotated by a random amount
Step3: The Network
The network parameters must be the same here as when the weight matrices are used later on
The network is made up of an ensemble and a node
The connection (to v) computes the weights from the activities of the images to their similarities
Network is the same as was used for training rotation so that it can be used later on.
Step4: Evaluating the network statically
Functions for computing representation of the image at different levels of encoding/decoding
get_outs returns the output of the network
able to evaluate on many images
no need to run the simulator
Step5: Simulator
Generate the weight matrices between
activities of image pairs and structural similarities
activities of image pairs and the dot product of their activities
Step6: Testing the outputs
Step7: Saving weight matrices | Python Code:
import matplotlib.pyplot as plt
%matplotlib inline
import nengo
import numpy as np
import scipy.ndimage
from scipy.ndimage.interpolation import rotate
import matplotlib.animation as animation
from matplotlib import pylab
from PIL import Image
import nengo.spa as spa
import cPickle
import random
from nengo_extras.data import load_mnist
from nengo_extras.vision import Gabor, Mask
from skimage.measure import compare_ssim as ssim
Explanation: Training Ensemble on MNIST Dataset
On the function points branch of nengo
On the vision branch of nengo_extras
End of explanation
# --- load the data
img_rows, img_cols = 28, 28
(X_train, y_train), (X_test, y_test) = load_mnist()
X_train = 2 * X_train - 1 # normalize to -1 to 1
X_test = 2 * X_test - 1 # normalize to -1 to 1
Explanation: Load the MNIST training and testing images
End of explanation
random.seed(1)
'''Didn't work
n_imgs = len(X_train)
imgs = np.ndarray((n_imgs+1000,784*2))
for i in range(n_imgs):
imgs[i] = np.append(X_train[i],scipy.ndimage.interpolation.rotate(np.reshape(X_train[i],(28,28)),
random.randint(1,360),reshape=False,mode="nearest").ravel())
#Add some examples with no rotation
for i in range(1000):
imgs[n_imgs+i] = np.append(X_train[i],X_train[i])
#List of calculated similarities
similarities = np.ndarray((len(imgs),1))
for i in range(len(imgs)):
similarities[i] = ssim(imgs[i][:28**2].reshape(28,28),imgs[i][28**2:].reshape(28,28))
'''
#List of images
imgs = X_train.copy()
#Rotated images
rot_imgs = X_train.copy()
for img in rot_imgs:
img[:] = scipy.ndimage.interpolation.rotate(np.reshape(img,(28,28)),
random.randint(1,360),reshape=False,mode="nearest").ravel()
#List of calculated similarities
similarities = np.ndarray((len(imgs),1))
for i in range(len(imgs)):
similarities[i] = ssim(imgs[i].reshape(28,28),rot_imgs[i].reshape(28,28))
#Remove negative values, doesn't really change output
#similarities[similarities<0]=0
#Check to see if images and similarity generated correctly
index = np.random.randint(1,60000)
plt.subplot(121)
plt.imshow(np.reshape(imgs[index],(28,28)),cmap="gray")
plt.subplot(122)
plt.imshow(np.reshape(rot_imgs[index],(28,28)),cmap="gray")
#plt.imshow(np.reshape(imgs[index],(28*2,28)),cmap="gray")
#similarity = ssim(imgs[index][:28**2].reshape(28,28),imgs[index][28**2:].reshape(28,28))
similarity = similarities[index]
print(similarity)
Explanation: Create array of images and rotated pairs and list of structural similarities
Each set of images contains an upright image and an image rotated by a random amount
End of explanation
rng = np.random.RandomState(9)
# --- set up network parameters
#Want to map from images to similarity
n_vis = X_train.shape[1] #imgs.shape[1]
n_out = similarities.shape[1]
#number of neurons/dimensions of semantic pointer
n_hid = 1000 #Try with more neurons for more accuracy
#Want the encoding/decoding done on the training images
ens_params = dict(
eval_points=X_train, #imgs,
neuron_type=nengo.LIF(), #originally used LIFRate()
intercepts=nengo.dists.Choice([-0.5]),
max_rates=nengo.dists.Choice([100]),
)
#Least-squares solver with L2 regularization.
solver = nengo.solvers.LstsqL2(reg=0.01)
#solver = nengo.solvers.LstsqL2(reg=0.0001)
#network that generates the weight matrices between neuron activity and images and the labels
with nengo.Network(seed=3) as model:
a = nengo.Ensemble(n_hid, n_vis, seed=3, **ens_params)
v = nengo.Node(size_in=n_out)
conn = nengo.Connection(
a, v, synapse=None,
eval_points=imgs, function=similarities,#want the similarities out
solver=solver)
# linear filter used for edge detection as encoders, more plausible for human visual system
encoders = Gabor().generate(n_hid, (11, 11), rng=rng)
encoders = Mask((28, 28)).populate(encoders, rng=rng, flatten=True)
#Set the ensembles encoders to this
a.encoders = encoders
#Check the encoders were correctly made
plt.imshow(encoders[0].reshape(28, 28), vmin=encoders[0].min(), vmax=encoders[0].max(), cmap='gray')
Explanation: The Network
The network parameters must be the same here as when the weight matrices are used later on
The network is made up of an ensemble and a node
The connection (to v) computes the weights from the activities of the images to their similarities
Network is the same as was used for training rotation so that it can be used later on.
End of explanation
#Get the neuron activity of an image or group of images (this is the semantic pointer in this case)
def get_activities(sim, images):
_, acts = nengo.utils.ensemble.tuning_curves(a, sim, inputs=images)
return acts
#Get similarity of activity using dot product
def get_dots(imgs):
dots = np.ndarray((60000,1))
for i in range(len(imgs)):
dots[i] = np.dot(imgs[i][:1000],imgs[i][1000:])
return dots
Explanation: Evaluating the network statically
Functions for computing representation of the image at different levels of encoding/decoding
get_outs returns the output of the network
able to evaluate on many images
no need to run the simulator
End of explanation
with nengo.Simulator(model) as sim:
#Neuron activities of different mnist image pairs
orig_acts = get_activities(sim,imgs)
rot_acts = get_activities(sim,rot_imgs)
acts = np.ndarray((orig_acts.shape[0],orig_acts.shape[1]*2))
for i in range(len(acts)):
acts[i] = np.append(orig_acts[i],rot_acts[i])
dot_similarities = get_dots(acts)
#solvers for a learning rule
solver = nengo.solvers.LstsqL2(reg=1e-8)
solver_ssim = nengo.solvers.LstsqL2(reg=1e-8)
#find weight matrix between neuron activity of the original image pair and the dot product of activities
#weights returns a tuple including information about learning process, just want the weight matrix
weights,_ = solver(acts, dot_similarities)
weights_ssim,_ = solver_ssim(acts,similarities)
Explanation: Simulator
Generate the weight matrices between
activities of image pairs and structural similarities
activities of image pairs and the dot product of their activities
End of explanation
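A hedged variation (an illustrative sketch, not part of the original notebook): normalising the dot product of the two activity vectors gives a cosine similarity that is less sensitive to overall firing rates. It assumes the same stacked-activity layout used by get_dots above.
#Sketch: cosine similarity of the paired activity vectors
def get_cosines(imgs):
    cosines = np.ndarray((len(imgs), 1))
    for i in range(len(imgs)):
        a_vec, b_vec = imgs[i][:n_hid], imgs[i][n_hid:]
        cosines[i] = np.dot(a_vec, b_vec) / (np.linalg.norm(a_vec) * np.linalg.norm(b_vec) + 1e-12)
    return cosines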
test1 = X_test[random.randint(1,10000)]
test2 = scipy.ndimage.interpolation.rotate(np.reshape(test1,(28,28)),
random.randint(0,0),reshape=False,mode="nearest").ravel()
pylab.subplot(121)
pylab.imshow(test1.reshape(28,28),cmap='gray')
pylab.subplot(122)
pylab.imshow(test2.reshape(28,28),cmap='gray')
_,act1 = nengo.utils.ensemble.tuning_curves(a, sim, inputs=test1)
_,act2 = nengo.utils.ensemble.tuning_curves(a, sim, inputs=test2)
act = np.append(act1,act2)
print(np.dot(act,weights))
print(np.dot(act,weights_ssim))
Explanation: Testing the outputs
End of explanation
#filename = "two_img_similarity_dot_weights" + str(n_hid*2) +".p"
#cPickle.dump(weights.T, open( filename, "wb" ) )
filename = "two_img_similarity_ssim_weights2" + str(n_hid*2) +".p"
cPickle.dump(weights_ssim.T, open( filename, "wb" ) )
Explanation: Saving weight matrices
End of explanation |
15,741 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
KubeFlow Pipelines
Step1: import the necessary packages
Step2: Enter your gateway and the auth token
Use this extension on chrome to get token
Update values for the ingress gateway and auth session
Step3: Set the Log bucket and Tensorboard Image
Step4: Set the client and create the experiment
Step5: Set the Inference parameters
Step6: Load the components yaml files for setting up the components
Step11: Define the pipeline
Step12: Compile the pipeline
Step13: Execute the pipeline
Step14: Wait for inference service below to go to READY True state
Step15: Get the Inference service name
Step16: Use the deployed model for prediction request and save the output into a json
Step17: Use the deployed model for explain request and save the output into a json
Step18: Clean up
Delete Viewers, Inference Services and Completed pods | Python Code:
! pip uninstall -y kfp
! pip install --no-cache-dir kfp
Explanation: KubeFlow Pipelines : Pytorch Cifar10 Image classification
This notebook shows PyTorch CIFAR10 end-to-end classification example using Kubeflow Pipelines.
An example notebook that demonstrates how to:
Get different tasks needed for the pipeline
Create a Kubeflow pipeline
Include Pytorch KFP components to preprocess, train, visualize and deploy the model in the pipeline
Submit a job for execution
Query(prediction and explain) the final deployed model
End of explanation
import kfp
import json
import os
from kfp.onprem import use_k8s_secret
from kfp import components
from kfp.components import load_component_from_file, load_component_from_url
from kfp import dsl
from kfp import compiler
import numpy as np
import logging
kfp.__version__
Explanation: import the necessary packages
End of explanation
INGRESS_GATEWAY='http://istio-ingressgateway.istio-system.svc.cluster.local'
AUTH="<enter your auth token>"
NAMESPACE="kubeflow-user-example-com"
COOKIE="authservice_session="+AUTH
EXPERIMENT="Default"
Explanation: Enter your gateway and the auth token
Use this extension on chrome to get token
Update values for the ingress gateway and auth session
End of explanation
MINIO_ENDPOINT="http://minio-service.kubeflow:9000"
LOG_BUCKET="mlpipeline"
TENSORBOARD_IMAGE="public.ecr.aws/pytorch-samples/tboard:latest"
Explanation: Set the Log bucket and Tensorboard Image
End of explanation
client = kfp.Client(host=INGRESS_GATEWAY+"/pipeline", cookies=COOKIE)
client.create_experiment(EXPERIMENT)
experiments = client.list_experiments(namespace=NAMESPACE)
my_experiment = experiments.experiments[0]
my_experiment
Explanation: Set the client and create the experiment
End of explanation
DEPLOY_NAME="torchserve"
MODEL_NAME="cifar10"
ISVC_NAME=DEPLOY_NAME+"."+NAMESPACE+"."+"example.com"
INPUT_REQUEST="https://raw.githubusercontent.com/kubeflow/pipelines/master/samples/contrib/pytorch-samples/cifar10/input.json"
Explanation: Set the Inference parameters
End of explanation
! python utils/generate_templates.py cifar10/template_mapping.json
prepare_tensorboard_op = load_component_from_file("yaml/tensorboard_component.yaml")
prep_op = components.load_component_from_file(
"yaml/preprocess_component.yaml"
)
train_op = components.load_component_from_file(
"yaml/train_component.yaml"
)
deploy_op = load_component_from_file("yaml/deploy_component.yaml")
pred_op = load_component_from_file("yaml/prediction_component.yaml")
minio_op = components.load_component_from_file(
"yaml/minio_component.yaml"
)
Explanation: Load the components yaml files for setting up the components
End of explanation
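For completeness: components can also be pulled straight from a URL with the load_component_from_url helper imported above (it is not used in this notebook). A rough sketch, where the URL is only a placeholder for wherever the component YAML is hosted:

```python
# Hypothetical alternative: load a component definition from a hosted YAML file.
prep_op_remote = load_component_from_url(
    "https://raw.githubusercontent.com/<org>/<repo>/master/yaml/preprocess_component.yaml"
)
```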
@dsl.pipeline(
name="Training Cifar10 pipeline", description="Cifar 10 dataset pipeline"
)
def pytorch_cifar10( # pylint: disable=too-many-arguments
minio_endpoint=MINIO_ENDPOINT,
log_bucket=LOG_BUCKET,
log_dir=f"tensorboard/logs/{dsl.RUN_ID_PLACEHOLDER}",
mar_path=f"mar/{dsl.RUN_ID_PLACEHOLDER}/model-store",
config_prop_path=f"mar/{dsl.RUN_ID_PLACEHOLDER}/config",
model_uri=f"s3://mlpipeline/mar/{dsl.RUN_ID_PLACEHOLDER}",
tf_image=TENSORBOARD_IMAGE,
deploy=DEPLOY_NAME,
isvc_name=ISVC_NAME,
model=MODEL_NAME,
namespace=NAMESPACE,
confusion_matrix_log_dir=f"confusion_matrix/{dsl.RUN_ID_PLACEHOLDER}/",
checkpoint_dir="checkpoint_dir/cifar10",
input_req=INPUT_REQUEST,
cookie=COOKIE,
ingress_gateway=INGRESS_GATEWAY,
):
def sleep_op(seconds):
        """Sleep for a while."""
return dsl.ContainerOp(
name="Sleep " + str(seconds) + " seconds",
image="python:alpine3.6",
command=["sh", "-c"],
arguments=[
'python -c "import time; time.sleep($0)"',
str(seconds)
],
)
    """This method defines the pipeline tasks and operations"""
pod_template_spec = json.dumps({
"spec": {
"containers": [{
"env": [
{
"name": "AWS_ACCESS_KEY_ID",
"valueFrom": {
"secretKeyRef": {
"name": "mlpipeline-minio-artifact",
"key": "accesskey",
}
},
},
{
"name": "AWS_SECRET_ACCESS_KEY",
"valueFrom": {
"secretKeyRef": {
"name": "mlpipeline-minio-artifact",
"key": "secretkey",
}
},
},
{
"name": "AWS_REGION",
"value": "minio"
},
{
"name": "S3_ENDPOINT",
"value": f"{minio_endpoint}",
},
{
"name": "S3_USE_HTTPS",
"value": "0"
},
{
"name": "S3_VERIFY_SSL",
"value": "0"
},
]
}]
}
})
prepare_tb_task = prepare_tensorboard_op(
log_dir_uri=f"s3://{log_bucket}/{log_dir}",
image=tf_image,
pod_template_spec=pod_template_spec,
).set_display_name("Visualization")
prep_task = (
prep_op().after(prepare_tb_task
).set_display_name("Preprocess & Transform")
)
confusion_matrix_url = f"minio://{log_bucket}/{confusion_matrix_log_dir}"
script_args = f"model_name=resnet.pth," \
f"confusion_matrix_url={confusion_matrix_url}"
# For GPU, set number of gpus and accelerator type
ptl_args = f"max_epochs=1, gpus=0, accelerator=None, profiler=pytorch"
train_task = (
train_op(
input_data=prep_task.outputs["output_data"],
script_args=script_args,
ptl_arguments=ptl_args
).after(prep_task).set_display_name("Training")
)
# For GPU uncomment below line and set GPU limit and node selector
# ).set_gpu_limit(1).add_node_selector_constraint
# ('cloud.google.com/gke-accelerator','nvidia-tesla-p4')
(
minio_op(
bucket_name="mlpipeline",
folder_name=log_dir,
input_path=train_task.outputs["tensorboard_root"],
filename="",
).after(train_task).set_display_name("Tensorboard Events Pusher")
)
(
minio_op(
bucket_name="mlpipeline",
folder_name=checkpoint_dir,
input_path=train_task.outputs["checkpoint_dir"],
filename="",
).after(train_task).set_display_name("checkpoint_dir Pusher")
)
minio_mar_upload = (
minio_op(
bucket_name="mlpipeline",
folder_name=mar_path,
input_path=train_task.outputs["checkpoint_dir"],
filename="cifar10_test.mar",
).after(train_task).set_display_name("Mar Pusher")
)
(
minio_op(
bucket_name="mlpipeline",
folder_name=config_prop_path,
input_path=train_task.outputs["checkpoint_dir"],
filename="config.properties",
).after(train_task).set_display_name("Conifg Pusher")
)
model_uri = str(model_uri)
# pylint: disable=unused-variable
    isvc_yaml = """
apiVersion: "serving.kubeflow.org/v1beta1"
kind: "InferenceService"
metadata:
name: {}
namespace: {}
spec:
predictor:
serviceAccountName: sa
pytorch:
storageUri: {}
resources:
requests:
cpu: 4
memory: 8Gi
limits:
cpu: 4
memory: 8Gi
    """.format(deploy, namespace, model_uri)
# For GPU inference use below yaml with gpu count and accelerator
gpu_count = "1"
accelerator = "nvidia-tesla-p4"
    isvc_gpu_yaml = """# pylint: disable=unused-variable
apiVersion: "serving.kubeflow.org/v1beta1"
kind: "InferenceService"
metadata:
name: {}
namespace: {}
spec:
predictor:
serviceAccountName: sa
pytorch:
storageUri: {}
resources:
requests:
cpu: 4
memory: 8Gi
limits:
cpu: 4
memory: 8Gi
nvidia.com/gpu: {}
nodeSelector:
cloud.google.com/gke-accelerator: {}
    """.format(deploy, namespace, model_uri, gpu_count, accelerator)
# Update inferenceservice_yaml for GPU inference
deploy_task = (
deploy_op(action="apply", inferenceservice_yaml=isvc_yaml
).after(minio_mar_upload).set_display_name("Deployer")
)
# Wait here for model to be loaded in torchserve for inference
sleep_task = sleep_op(5).after(deploy_task).set_display_name("Sleep")
# Make Inference request
pred_task = (
pred_op(
host_name=isvc_name,
input_request=input_req,
cookie=cookie,
url=ingress_gateway,
model=model,
inference_type="predict",
).after(sleep_task).set_display_name("Prediction")
)
(
pred_op(
host_name=isvc_name,
input_request=input_req,
cookie=cookie,
url=ingress_gateway,
model=model,
inference_type="explain",
).after(pred_task).set_display_name("Explanation")
)
dsl.get_pipeline_conf().add_op_transformer(
use_k8s_secret(
secret_name="mlpipeline-minio-artifact",
k8s_secret_key_to_env={
"secretkey": "MINIO_SECRET_KEY",
"accesskey": "MINIO_ACCESS_KEY",
},
)
)
Explanation: Define the pipeline
End of explanation
compiler.Compiler().compile(pytorch_cifar10, 'pytorch.tar.gz', type_check=True)
Explanation: Compile the pipeline
End of explanation
run = client.run_pipeline(my_experiment.id, 'pytorch-cifar10', 'pytorch.tar.gz')
Explanation: Execute the pipeline
End of explanation
!kubectl get isvc $DEPLOY_NAME -n $NAMESPACE
Explanation: Wait for inference service below to go to READY True state
End of explanation
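If you would rather block in the notebook until the InferenceService reports READY instead of re-running the cell above, a small polling loop along these lines can be used (a sketch: it shells out to kubectl and assumes the jsonpath filter below matches the status layout of your KFServing/KServe version):

```python
import subprocess, time

# Poll the InferenceService Ready condition until it turns True (or give up after ~5 minutes).
for _ in range(30):
    status = subprocess.run(
        ["kubectl", "get", "isvc", DEPLOY_NAME, "-n", NAMESPACE, "-o",
         'jsonpath={.status.conditions[?(@.type=="Ready")].status}'],
        capture_output=True, text=True,
    ).stdout.strip()
    if status == "True":
        print("InferenceService is ready")
        break
    time.sleep(10)
```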
INFERENCE_SERVICE_LIST = ! kubectl get isvc {DEPLOY_NAME} -n {NAMESPACE} -o json | python3 -c "import sys, json; print(json.load(sys.stdin)['status']['url'])"| tr -d '"' | cut -d "/" -f 3
INFERENCE_SERVICE_NAME = INFERENCE_SERVICE_LIST[0]
INFERENCE_SERVICE_NAME
Explanation: Get the Inference service name
End of explanation
!curl -v -H "Host: $INFERENCE_SERVICE_NAME" -H "Cookie: $COOKIE" "$INGRESS_GATEWAY/v1/models/$MODEL_NAME:predict" -d @./cifar10/input.json > cifar10_prediction_output.json
! cat cifar10_prediction_output.json
Explanation: Use the deployed model for prediction request and save the output into a json
End of explanation
!curl -v -H "Host: $INFERENCE_SERVICE_NAME" -H "Cookie: $COOKIE" "$INGRESS_GATEWAY/v1/models/$MODEL_NAME:explain" -d @./cifar10/input.json > cifar10_explanation_output.json
Explanation: Use the deployed model for explain request and save the output into a json
End of explanation
! kubectl delete --all isvc -n $NAMESPACE
! kubectl delete pod --field-selector=status.phase==Succeeded -n $NAMESPACE
Explanation: Clean up
Delete Viewers, Inference Services and Completed pods
End of explanation |
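The two commands above cover the inference services and completed pods. If TensorBoard viewer instances were also spawned, they live in a separate Viewer custom resource; assuming that CRD is installed under its usual name, something like the following removes them as well:

```python
! kubectl delete viewers --all -n $NAMESPACE
```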
15,742 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
AVA dataset explorer
This study aims to explore and select a chunk of the AVA dataset to be used in preliminary tests of an aesthetic classifier.
Step1: First of all the dataset must be loaded. The headers of the dataset will also be loaded according to the definitions of the AVA dataset.
Step2: Note that the challenge with the most photos has only 1108 instances and it might be too small to be used in the preliminary tests. Let's group the challenges by semantic tags, which are a good way to grab pictures with the same category type.
Step3: 0 is the absence of a tag, but 15 stands for nature while 14 for landscapes and 1 for abstract. Let's focus on these 3 tags and ignore the rest of the instances. | Python Code:
import pandas as pd
import numpy as np
import seaborn
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: AVA dataset explorer
This study aims to explore and select a chunk of the AVA dataset to be used in preliminary tests of an aesthetic classifier.
End of explanation
ava_header = ["row_number",
"image_id",
"1", "2", "3", "4", "5", "6", "7", "8", "9", "10",
"Semantic Tag 1", "Semantic Tag 2",
"Challenge ID"]
ava_dataset = pd.read_table("AVA.txt", sep = " ", header=None, names = ava_header)
ava_dataset.head()
weights = [1,2,3,4,5,6,7,8,9,10]
ones = [1,1,1,1,1,1,1,1,1,1]
ava_dataset["mean"] = ava_dataset.loc[:, '1':'10'].dot(weights) / ava_dataset.loc[:, '1':'10'].dot(ones)
ava_dataset["mean > 5"] = ava_dataset["mean"] >= 5.0
ava_dataset["mean > 6"] = ava_dataset["mean"] >= 6.5
ava_dataset["mean < 4"] = ava_dataset["mean"] <= 4.5
ava_dataset["mean_2houses"] = ava_dataset["mean"].round(1)
ava_dataset.loc[:,'1':'10'].head()
ava_dataset.head()
ava_challenge_counts = ava_dataset.groupby(["Challenge ID"]).size()
ava_challenge_counts.sort_values(ascending=False).head().reset_index()
Explanation: First of all the dataset must be loaded. The headers of the dataset will also be loaded according to the definitions of the AVA dataset.
End of explanation
ava_challenge_counts = ava_dataset.groupby(["Semantic Tag 1"]).size().rename('Count')
ava_challenge_counts.sort_values(ascending=False).head().reset_index()
Explanation: Note that the challenge with the most photos has only 1108 instances and it might be too small to be used in the preliminary tests. Let's group the challenges by semantic tags, which are a good way to grab pictures with the same category type.
End of explanation
ava_nature = ava_dataset[ava_dataset["Semantic Tag 1"] == 15]
ava_landscapes = ava_dataset[ava_dataset["Semantic Tag 1"] == 14]
ava_abstract = ava_dataset[ava_dataset["Semantic Tag 1"] == 1]
pd.DataFrame(ava_challenge_counts.rename('Count').reset_index())
ordered_counts = ava_challenge_counts.rename('Count').reset_index().sort_values(by="Count", ascending=False).head(n=20)
ax = seaborn.barplot(ordered_counts["Semantic Tag 1"], ordered_counts["Count"], order=ordered_counts["Semantic Tag 1"])
ax.set(xlabel='Grupo Semântico', ylabel='Contagem')
ax.set_title("Distribuição das imagens por grupo semântico")
ordered_counts.head()
ava_abstract.head()
fig, axs = plt.subplots(ncols=3)
plot_nature = seaborn.countplot(x="mean_2houses", data=ava_nature, ax=axs[0])
plot_landscapes = seaborn.countplot(x="mean_2houses", data=ava_landscapes, ax=axs[1])
plot_abstract = seaborn.countplot(x="mean_2houses", data=ava_abstract, ax=axs[2])
fig.set_size_inches(15.5, 4.5)
def reduce_ticks(plot):
for ind, label in enumerate(plot.get_xticklabels()):
if ind % 10 == 9: # every 10th label is kept
label.set_visible(True)
else:
label.set_visible(False)
reduce_ticks(plot_nature)
reduce_ticks(plot_landscapes)
reduce_ticks(plot_abstract)
plot_nature.set(xlabel = "Média", ylabel="Contagem")
plot_landscapes.set(xlabel = "Média", ylabel="Contagem")
plot_abstract.set(xlabel = "Média", ylabel="Contagem")
plot_nature.set_title("Natureza")
plot_landscapes.set_title("Paisagens")
plot_abstract.set_title("Abstrato")
fig.savefig(filename="Médias")
plot_landscapes = seaborn.countplot(x="mean_2houses", data=ava_landscapes)
plot_abstract = seaborn.countplot(x="mean_2houses", data=ava_abstract)
ava_nature["mean_2houses"].mean()
ava_nature["mean_2houses"].std()
Explanation: 0 is the absence of a tag, but 15 stands for nature while 14 for landscapes and 1 for abstract. Let's focus on these 3 tags and ignore the rest of the instances.
End of explanation |
15,743 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Keras 모델 저장 및 로드
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: 전체 모델 저장 및 로딩
전체 모델을 단일 아티팩트로 저장할 수 있습니다. 다음을 포함합니다.
모델의 아키텍처 및 구성
훈련 중에 학습된 모델의 가중치 값
모델의 컴파일 정보(compile()이 호출된 경우)
존재하는 옵티마이저와 그 상태(훈련을 중단한 곳에서 다시 시작할 수 있게 해줌)
APIs
model.save() 또는 tf.keras.models.save_model()
tf.keras.models.load_model()
전체 모델을 디스크에 저장하는 데 사용할 수 있는 두 형식은 TensorFlow SavedModel 형식과 이전 Keras H5 형식입니다. 권장하는 형식은 SavedModel입니다. 이는 model.save()를 사용할 때의 기본값입니다.
다음을 통해 H5 형식으로 전환할 수 있습니다.
format='h5'를 save()로 전달합니다.
.h5 또는 .keras로 끝나는 파일명을 save()로 전달합니다.
SavedModel 형식
예제
Step3: SavedModel이 포함하는 것
model.save('my_model')을 호출하면 다음을 포함하는 my_model 폴더를 생성합니다.
Step4: 모델 아키텍처 및 훈련 구성(옵티마이저, 손실 및 메트릭 포함)은 saved_model.pb에 저장됩니다. 가중치는 variables/ 디렉토리에 저장됩니다.
SavedModel 형식에 대한 자세한 내용은 SavedModel 가이드(디스크의 SavedModel 형식)를 참조하세요.
SavedModel이 사용자 정의 객체를 처리하는 방법
모델과 모델의 레이어를 저장할 때 SavedModel 형식은 클래스명, 호출 함수, 손실 및 가중치(구현된 경우에는 구성도 포함)를 저장합니다. 호출 함수는 모델/레이어의 계산 그래프를 정의합니다.
모델/레이어 구성이 없는 경우 호출 함수는 훈련, 평가 및 추론에 사용될 수 있는 기존 모델과 같은 모델을 만드는 데 사용됩니다.
그럼에도 불구하고 사용자 정의 모델 또는 레이어 클래스를 작성할 때 항상 get_config 및 from_config 메서드를 정의하는 것이 좋습니다. 이를 통해 필요한 경우 나중에 계산을 쉽게 업데이트할 수 있습니다. 자세한 내용은 사용자 정의 객체에 대한 섹션을 참조하세요.
다음은 구성 메서드를 덮어쓰지않고 SavedModel 형식에서 사용자 정의 레이어를 로딩할 때 발생하는 현상에 대한 예제입니다.
Step5: 위 예제에서 볼 수 있듯이 로더는 기존 모델처럼 작동하는 새 모델 클래스를 동적으로 만듭니다.
Keras H5 형식
Keras는 또한 모델의 아키텍처, 가중치 값 및 compile() 정보가 포함된 단일 HDF5 파일 저장을 지원합니다. SavedModel에 대한 가벼운 대안입니다.
예제
Step6: 제한 사항
SavedModel 형식과 비교하여 H5 파일에 포함되지 않은 두 가지가 있습니다.
model.add_loss() 및 model.add_metric()을 통해 추가된 외부 손실 및 메트릭은 SavedModel과 달리 저장되지 않습니다. 모델에 이러한 손실 및 메트릭이 있고 훈련을 재개하려는 경우 모델을 로드한 후 이러한 손실을 다시 추가해야 합니다. self.add_loss() 및 self.add_metric()을 통해 레이어 내부에 생성된 손실/메트릭에는 적용되지 않습니다. 레이어가 로드되는 한 이러한 손실 및 메트릭은 레이어의 call 메서드의 일부이므로 유지됩니다.
사용자 정의 레이어와 같은 사용자 정의 객체의 계산 그래프는 저장된 파일을 포함하지 않습니다. 로딩 시 Keras는 모델을 재구성하기 위해 이러한 객체의 Python 클래스/함수에 접근해야 합니다. 사용자 정의 객체를 참조하세요.
아키텍처 저장
모델의 구성(또는 아키텍처)은 모델에 포함된 레이어와 이러한 레이어의 연결 방법*을 지정합니다. 모델 구성이 있는 경우 가중치에 대해 새로 초기화된 상태로 컴파일 정보 없이 모델을 작성할 수 있습니다.
*이 기능은 서브 클래스 모델이 아닌 함수형 또는 Sequential API를 사용하여 정의된 모델에만 적용됩니다.
Sequential 모델 또는 Functional API 모델의 구성
이러한 유형의 모델은 레이어의 명시적 그래프입니다. 구성은 항상 구조화된 형식으로 제공됩니다.
APIs
get_config() 및 from_config()
tf.keras.models.model_to_json() 및 tf.keras.models.model_from_json()
get_config() 및 from_config()
config = model.get_config()을 호출하면 모델 구성이 포함된 Python dict가 반환됩니다. 그런 다음 Sequential.from_config(config)(Sequential 모델) 또는 Model.from_config(config)(Functional API 모델)를 통해 동일한 모델을 재구성할 수 있습니다.
모든 직렬화 가능 레이어에 대해서도 같은 워크플로가 작동합니다.
레이어 예제
Step7: Sequential 모델 예제
Step8: Functional 모델 예제
Step9: to_json() 및 tf.keras.models.model_from_json()
이것은 get_config / from_config와 비슷하지만, 모델을 JSON 문자열로 변환한 다음 기존 모델 클래스 없이 로드할 수 있습니다. 또한, 모델에만 해당하며 레이어용이 아닙니다.
예제
Step10: 사용자 정의 객체
모델과 레이어
서브 클래스 모델과 레이어의 아키텍처는 __init__ 및 call 메서드에 정의되어 있습니다. 그것들은 Python 바이트 코드로 간주하며 JSON 호환 구성으로 직렬화할 수 없습니다 -- 바이트 코드 직렬화를 시도할 수는 있지만(예
Step11: 이 메서드에는 몇 가지 단점이 있습니다.
추적 가능성을 위해 사용된 사용자 정의 객체에 항상 접근할 수 있어야 합니다. 다시 만들 수 없는 모델을 제품에 넣고 싶지 않을 것입니다.
tf.saved_model.load에 의해 반환된 객체는 Keras 모델이 아닙니다. 따라서 사용하기가 쉽지 않습니다. 예를 들면, .predict() 또는 .fit()에 접근할 수 없습니다.
사용을 권장하지는 않지만, 사용자 정의 객체의 코드를 잃어버렸거나 tf.keras.models.load_model() 모델을 로드하는 데 문제가 있는 경우와 같이 곤란한 상황에서는 도움이 될 수 있습니다.
tf.saved_model.load와 관련된 페이지에서 자세한 내용을 확인할 수 있습니다.
구성 메서드 정의하기
명세
Step12: 사용자 정의 객체 등록하기
Keras는 구성을 생성한 클래스를 기록합니다. 위의 예에서 tf.keras.layers.serialize는 사용자 정의 레이어의 직렬화된 형식을 생성합니다.
{'class_name'
Step13: 인메모리 모델 복제
tf.keras.models.clone_model()을 통해 모델의 인메모리 복제를 수행할 수도 있습니다. 이는 구성을 가져온 다음 구성에서 모델을 다시 생성하는 것과 같습니다(따라서 컴파일 정보 또는 레이어 가중치 값을 유지하지 않습니다).
예제
Step14: 모델의 가중치 값만 저장 및 로딩
모델의 가중치 값만 저장하고 로드하도록 선택할 수 있습니다. 다음과 같은 경우에 유용할 수 있습니다.
추론을 위한 모델만 필요합니다. 이 경우 훈련을 다시 시작할 필요가 없으므로 컴파일 정보나 옵티마이저 상태가 필요하지 않습니다.
전이 학습을 수행하고 있습니다. 이 경우 이전 모델의 상태를 재사용하는 새 모델을 훈련하므로 이전 모델의 컴파일 정보가 필요하지 않습니다.
인메모리 가중치 전이를 위한 API
get_weights 및 set_weights를 사용하여 다른 객체 간에 가중치를 복사할 수 있습니다.
tf.keras.layers.Layer.get_weights()
Step15: 메모리에서 호환 가능한 아키텍처를 사용하여 모델 간 가중치 전이하기
Step16: 상태 비저장 레이어의 경우
상태 비저장 레이어는 순서 또는 가중치 수를 변경하지 않기 때문에 상태 비저장 레이어가 남거나 없더라도 모델은 호환 가능한 아키텍처를 가질 수 있습니다.
Step17: 디스크에 가중치를 저장하고 다시 로딩하기 위한 API
다음 형식으로 model.save_weights를 호출하여 디스크에 가중치를 저장할 수 있습니다.
TensorFlow Checkpoint
HDF5
model.save_weights의 기본 형식은 TensorFlow 체크포인트입니다. 저장 형식을 지정하는 두 가지 방법이 있습니다.
save_format 인수
Step18: 형식 세부 사항
TensorFlow Checkpoint 형식은 객체 속성명을 사용하여 가중치를 저장하고 복원합니다. 예를 들어, tf.keras.layers.Dense 레이어를 고려해 봅시다. 레이어에는 dense.kernel과 dense.bias 두 가지 가중치가 있습니다. 레이어가 tf 형식으로 저장되면 결과 체크포인트에는 "kernel" 및 "bias"와 해당 가중치 값이 포함됩니다. 자세한 정보는 TF Checkpoint 가이드의 "로딩 메커니즘"을 참조하세요.
속성/그래프 에지는 변수명이 아니라 부모 객체에서 사용된 이름에 따라 이름이 지정됩니다. 아래 예제의 CustomLayer를 고려해 봅시다. 변수 CustomLayer.var는 "var_a"가 아니라, 키의 일부로서 "var"로 저장됩니다.
Step19: 전이 학습 예제
기본적으로 두 모델이 동일한 아키텍처를 갖는 한 동일한 검사점을 공유할 수 있습니다.
예제
Step20: 일반적으로 모델을 빌드할 때 동일한 API를 사용하는 것이 좋습니다. Sequential 및 Functional 또는 Functional 및 서브 클래스 등 간에 전환하는 경우, 항상 사전 훈련된 모델을 다시 빌드하고 사전 훈련된 가중치를 해당 모델에 로드합니다.
다음 질문은 모델 아키텍처가 상당히 다른 경우 어떻게 다른 모델에 가중치를 저장하고 로드하는가입니다. 해결책은 tf.train.Checkpoint를 사용하여 정확한 레이어/변수를 저장하고 복원하는 것입니다.
예제
Step21: HDF5 format
HDF5 형식에는 레이어 이름별로 그룹화된 가중치가 포함됩니다. 가중치는 훈련 가능한 가중치 목록을 훈련 불가능한 가중치 목록(layer.weights와 동일)에 연결하여 정렬된 목록입니다. 따라서 모델이 체크포인트에 저장된 것과 동일한 레이어 및 훈련 가능한 상태를 갖는 경우 hdf5 체크포인트을 사용할 수 있습니다.
예제
Step22: 모델에 중첩된 레이어가 포함된 경우 layer.trainable을 변경하면 layer.weights의 순서가 다르게 나타날 수 있습니다.
Step23: 전이 학습 예제
HDF5에서 사전 훈련된 가중치를 로딩할 때는 가중치를 기존 체크포인트 모델에 로드한 다음 원하는 가중치/레이어를 새 모델로 추출하는 것이 좋습니다.
예제 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
import numpy as np
import tensorflow as tf
from tensorflow import keras
Explanation: Keras 모델 저장 및 로드
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/guide/keras/save_and_serialize"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org에서 보기</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/guide/keras/save_and_serialize.ipynb" class=""><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab에서 실행</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/guide/keras/save_and_serialize.ipynb" class=""><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub에서 소스 보기</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/guide/keras/save_and_serialize.ipynb" class=""><img src="https://www.tensorflow.org/images/download_logo_32px.png">노트북 다운로드</a></td>
</table>
소개
Keras 모델은 다중 구성 요소로 이루어집니다.
모델에 포함된 레이어 및 레이어의 연결 방법을 지정하는 아키텍처 또는 구성
가중치 값의 집합("모델의 상태")
옵티마이저(모델을 컴파일하여 정의)
모델을 컴파일링하거나 add_loss() 또는 add_metric()을 호출하여 정의된 손실 및 메트릭의 집합
Keras API를 사용하면 이러한 조각을 한 번에 디스크에 저장하거나 선택적으로 일부만 저장할 수 있습니다.
TensorFlow SavedModel 형식(또는 이전 Keras H5 형식)으로 모든 것을 단일 아카이브에 저장합니다. 이것이 표준 관행입니다.
일반적으로 JSON 파일로 아키텍처 및 구성만 저장합니다.
가중치 값만 저장합니다. 이것은 일반적으로 모델을 훈련할 때 사용됩니다.
언제 사용해야 하는지, 어떻게 동작하는 것인지 각각 살펴봅시다.
저장 및 로딩에 대한 짧은 답변
다음은 이 가이드를 읽는데 10초 밖에 없는 경우 알아야 할 사항입니다.
Keras 모델 저장하기
python
model = ... # Get model (Sequential, Functional Model, or Model subclass) model.save('path/to/location')
모델을 다시 로딩하기
python
from tensorflow import keras model = keras.models.load_model('path/to/location')
이제 세부 사항을 확인해봅시다.
설정
End of explanation
def get_model():
# Create a simple model.
inputs = keras.Input(shape=(32,))
outputs = keras.layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mean_squared_error")
return model
model = get_model()
# Train the model.
test_input = np.random.random((128, 32))
test_target = np.random.random((128, 1))
model.fit(test_input, test_target)
# Calling `save('my_model')` creates a SavedModel folder `my_model`.
model.save("my_model")
# It can be used to reconstruct the model identically.
reconstructed_model = keras.models.load_model("my_model")
# Let's check:
np.testing.assert_allclose(
model.predict(test_input), reconstructed_model.predict(test_input)
)
# The reconstructed model is already compiled and has retained the optimizer
# state, so training can resume:
reconstructed_model.fit(test_input, test_target)
Explanation: 전체 모델 저장 및 로딩
전체 모델을 단일 아티팩트로 저장할 수 있습니다. 다음을 포함합니다.
모델의 아키텍처 및 구성
훈련 중에 학습된 모델의 가중치 값
모델의 컴파일 정보(compile()이 호출된 경우)
존재하는 옵티마이저와 그 상태(훈련을 중단한 곳에서 다시 시작할 수 있게 해줌)
APIs
model.save() 또는 tf.keras.models.save_model()
tf.keras.models.load_model()
전체 모델을 디스크에 저장하는 데 사용할 수 있는 두 형식은 TensorFlow SavedModel 형식과 이전 Keras H5 형식입니다. 권장하는 형식은 SavedModel입니다. 이는 model.save()를 사용할 때의 기본값입니다.
다음을 통해 H5 형식으로 전환할 수 있습니다.
format='h5'를 save()로 전달합니다.
.h5 또는 .keras로 끝나는 파일명을 save()로 전달합니다.
SavedModel 형식
예제:
End of explanation
!ls my_model
Explanation: SavedModel이 포함하는 것
model.save('my_model')을 호출하면 다음을 포함하는 my_model 폴더를 생성합니다.
End of explanation
class CustomModel(keras.Model):
def __init__(self, hidden_units):
super(CustomModel, self).__init__()
self.dense_layers = [keras.layers.Dense(u) for u in hidden_units]
def call(self, inputs):
x = inputs
for layer in self.dense_layers:
x = layer(x)
return x
model = CustomModel([16, 16, 10])
# Build the model by calling it
input_arr = tf.random.uniform((1, 5))
outputs = model(input_arr)
model.save("my_model")
# Delete the custom-defined model class to ensure that the loader does not have
# access to it.
del CustomModel
loaded = keras.models.load_model("my_model")
np.testing.assert_allclose(loaded(input_arr), outputs)
print("Original model:", model)
print("Loaded model:", loaded)
Explanation: 모델 아키텍처 및 훈련 구성(옵티마이저, 손실 및 메트릭 포함)은 saved_model.pb에 저장됩니다. 가중치는 variables/ 디렉토리에 저장됩니다.
SavedModel 형식에 대한 자세한 내용은 SavedModel 가이드(디스크의 SavedModel 형식)를 참조하세요.
SavedModel이 사용자 정의 객체를 처리하는 방법
모델과 모델의 레이어를 저장할 때 SavedModel 형식은 클래스명, 호출 함수, 손실 및 가중치(구현된 경우에는 구성도 포함)를 저장합니다. 호출 함수는 모델/레이어의 계산 그래프를 정의합니다.
모델/레이어 구성이 없는 경우 호출 함수는 훈련, 평가 및 추론에 사용될 수 있는 기존 모델과 같은 모델을 만드는 데 사용됩니다.
그럼에도 불구하고 사용자 정의 모델 또는 레이어 클래스를 작성할 때 항상 get_config 및 from_config 메서드를 정의하는 것이 좋습니다. 이를 통해 필요한 경우 나중에 계산을 쉽게 업데이트할 수 있습니다. 자세한 내용은 사용자 정의 객체에 대한 섹션을 참조하세요.
다음은 구성 메서드를 덮어쓰지않고 SavedModel 형식에서 사용자 정의 레이어를 로딩할 때 발생하는 현상에 대한 예제입니다.
End of explanation
model = get_model()
# Train the model.
test_input = np.random.random((128, 32))
test_target = np.random.random((128, 1))
model.fit(test_input, test_target)
# Calling `save('my_model.h5')` creates a h5 file `my_model.h5`.
model.save("my_h5_model.h5")
# It can be used to reconstruct the model identically.
reconstructed_model = keras.models.load_model("my_h5_model.h5")
# Let's check:
np.testing.assert_allclose(
model.predict(test_input), reconstructed_model.predict(test_input)
)
# The reconstructed model is already compiled and has retained the optimizer
# state, so training can resume:
reconstructed_model.fit(test_input, test_target)
Explanation: 위 예제에서 볼 수 있듯이 로더는 기존 모델처럼 작동하는 새 모델 클래스를 동적으로 만듭니다.
Keras H5 형식
Keras는 또한 모델의 아키텍처, 가중치 값 및 compile() 정보가 포함된 단일 HDF5 파일 저장을 지원합니다. SavedModel에 대한 가벼운 대안입니다.
예제:
End of explanation
layer = keras.layers.Dense(3, activation="relu")
layer_config = layer.get_config()
new_layer = keras.layers.Dense.from_config(layer_config)
Explanation: 제한 사항
SavedModel 형식과 비교하여 H5 파일에 포함되지 않은 두 가지가 있습니다.
model.add_loss() 및 model.add_metric()을 통해 추가된 외부 손실 및 메트릭은 SavedModel과 달리 저장되지 않습니다. 모델에 이러한 손실 및 메트릭이 있고 훈련을 재개하려는 경우 모델을 로드한 후 이러한 손실을 다시 추가해야 합니다. self.add_loss() 및 self.add_metric()을 통해 레이어 내부에 생성된 손실/메트릭에는 적용되지 않습니다. 레이어가 로드되는 한 이러한 손실 및 메트릭은 레이어의 call 메서드의 일부이므로 유지됩니다.
사용자 정의 레이어와 같은 사용자 정의 객체의 계산 그래프는 저장된 파일을 포함하지 않습니다. 로딩 시 Keras는 모델을 재구성하기 위해 이러한 객체의 Python 클래스/함수에 접근해야 합니다. 사용자 정의 객체를 참조하세요.
아키텍처 저장
모델의 구성(또는 아키텍처)은 모델에 포함된 레이어와 이러한 레이어의 연결 방법*을 지정합니다. 모델 구성이 있는 경우 가중치에 대해 새로 초기화된 상태로 컴파일 정보 없이 모델을 작성할 수 있습니다.
*이 기능은 서브 클래스 모델이 아닌 함수형 또는 Sequential API를 사용하여 정의된 모델에만 적용됩니다.
Sequential 모델 또는 Functional API 모델의 구성
이러한 유형의 모델은 레이어의 명시적 그래프입니다. 구성은 항상 구조화된 형식으로 제공됩니다.
APIs
get_config() 및 from_config()
tf.keras.models.model_to_json() 및 tf.keras.models.model_from_json()
get_config() 및 from_config()
config = model.get_config()을 호출하면 모델 구성이 포함된 Python dict가 반환됩니다. 그런 다음 Sequential.from_config(config)(Sequential 모델) 또는 Model.from_config(config)(Functional API 모델)를 통해 동일한 모델을 재구성할 수 있습니다.
모든 직렬화 가능 레이어에 대해서도 같은 워크플로가 작동합니다.
레이어 예제:
End of explanation
model = keras.Sequential([keras.Input((32,)), keras.layers.Dense(1)])
config = model.get_config()
new_model = keras.Sequential.from_config(config)
Explanation: Sequential 모델 예제:
End of explanation
inputs = keras.Input((32,))
outputs = keras.layers.Dense(1)(inputs)
model = keras.Model(inputs, outputs)
config = model.get_config()
new_model = keras.Model.from_config(config)
Explanation: Functional 모델 예제:
End of explanation
model = keras.Sequential([keras.Input((32,)), keras.layers.Dense(1)])
json_config = model.to_json()
new_model = keras.models.model_from_json(json_config)
Explanation: to_json() 및 tf.keras.models.model_from_json()
이것은 get_config / from_config와 비슷하지만, 모델을 JSON 문자열로 변환한 다음 기존 모델 클래스 없이 로드할 수 있습니다. 또한, 모델에만 해당하며 레이어용이 아닙니다.
예제:
End of explanation
model.save("my_model")
tensorflow_graph = tf.saved_model.load("my_model")
x = np.random.uniform(size=(4, 32)).astype(np.float32)
predicted = tensorflow_graph(x).numpy()
Explanation: 사용자 정의 객체
모델과 레이어
서브 클래스 모델과 레이어의 아키텍처는 __init__ 및 call 메서드에 정의되어 있습니다. 그것들은 Python 바이트 코드로 간주하며 JSON 호환 구성으로 직렬화할 수 없습니다 -- 바이트 코드 직렬화를 시도할 수는 있지만(예: pickle을 통해) 완전히 불안전하므로 모델을 다른 시스템에 로드할 수 없습니다.
사용자 정의 레이어를 사용하는 모델 또는 서브 클래스 모델을 저장/로드하려면 get_config 및 선택적으로 from_config 메서드를 덮어써야 합니다. 또한 Keras가 인식할 수 있도록 사용자 정의 객체를 등록해야 합니다.
사용자 정의 함수
사용자 정의 함수(예: 활성화 손실 또는 초기화)에는 get_config 메서드가 필요하지 않습니다. 함수명은 사용자 정의 객체로 등록되어 있는 한 로드하기에 충분합니다.
TensorFlow 그래프만 로딩하기
Keras가 생성한 TensorFlow 그래프를 로드할 수 있습니다. 그렇게 하면 custom_objects를 제공할 필요가 없습니다. 다음과 같이 해볼 수 있습니다.
End of explanation
class CustomLayer(keras.layers.Layer):
def __init__(self, a):
self.var = tf.Variable(a, name="var_a")
def call(self, inputs, training=False):
if training:
return inputs * self.var
else:
return inputs
def get_config(self):
return {"a": self.var.numpy()}
# There's actually no need to define `from_config` here, since returning
# `cls(**config)` is the default behavior.
@classmethod
def from_config(cls, config):
return cls(**config)
layer = CustomLayer(5)
layer.var.assign(2)
serialized_layer = keras.layers.serialize(layer)
new_layer = keras.layers.deserialize(
serialized_layer, custom_objects={"CustomLayer": CustomLayer}
)
Explanation: 이 메서드에는 몇 가지 단점이 있습니다.
추적 가능성을 위해 사용된 사용자 정의 객체에 항상 접근할 수 있어야 합니다. 다시 만들 수 없는 모델을 제품에 넣고 싶지 않을 것입니다.
tf.saved_model.load에 의해 반환된 객체는 Keras 모델이 아닙니다. 따라서 사용하기가 쉽지 않습니다. 예를 들면, .predict() 또는 .fit()에 접근할 수 없습니다.
사용을 권장하지는 않지만, 사용자 정의 객체의 코드를 잃어버렸거나 tf.keras.models.load_model() 모델을 로드하는 데 문제가 있는 경우와 같이 곤란한 상황에서는 도움이 될 수 있습니다.
tf.saved_model.load와 관련된 페이지에서 자세한 내용을 확인할 수 있습니다.
구성 메서드 정의하기
명세:
get_config는 Keras 아키텍처 및 모델 저장 API와 호환되도록 JSON 직렬화 가능 사전을 반환해야 합니다.
from_config(config)(classmethod)는 구성에서 생성된 새 레이어 또는 모델 객체를 반환해야 합니다. 기본 구현은 cls(**config)를 반환합니다.
예제:
End of explanation
class CustomLayer(keras.layers.Layer):
def __init__(self, units=32, **kwargs):
super(CustomLayer, self).__init__(**kwargs)
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
def get_config(self):
config = super(CustomLayer, self).get_config()
config.update({"units": self.units})
return config
def custom_activation(x):
return tf.nn.tanh(x) ** 2
# Make a model with the CustomLayer and custom_activation
inputs = keras.Input((32,))
x = CustomLayer(32)(inputs)
outputs = keras.layers.Activation(custom_activation)(x)
model = keras.Model(inputs, outputs)
# Retrieve the config
config = model.get_config()
# At loading time, register the custom objects with a `custom_object_scope`:
custom_objects = {"CustomLayer": CustomLayer, "custom_activation": custom_activation}
with keras.utils.custom_object_scope(custom_objects):
new_model = keras.Model.from_config(config)
Explanation: 사용자 정의 객체 등록하기
Keras는 구성을 생성한 클래스를 기록합니다. 위의 예에서 tf.keras.layers.serialize는 사용자 정의 레이어의 직렬화된 형식을 생성합니다.
{'class_name': 'CustomLayer', 'config': {'a': 2}}
from_config를 호출할 올바른 클래스를 찾는 데 사용되는 모든 내장 레이어, 모델, 옵티마이저 및 메트릭 클래스의 마스터 목록을 유지합니다. 클래스를 찾을 수 없으면 오류가 발생합니다(Value Error: Unknown layer). 다음 목록은 사용자 정의 클래스를 등록하는 몇 가지 방법입니다.
로딩 함수에서 custom_objects 인수 설정(위의 "구성 메서드 정의하기" 섹션의 예 참조).
tf.keras.utils.custom_object_scope 또는 tf.keras.utils.CustomObjectScope
tf.keras.utils.register_keras_serializable
사용자 정의 레이어 및 함수 예제
End of explanation
with keras.utils.custom_object_scope(custom_objects):
new_model = keras.models.clone_model(model)
Explanation: 인메모리 모델 복제
tf.keras.models.clone_model()을 통해 모델의 인메모리 복제를 수행할 수도 있습니다. 이는 구성을 가져온 다음 구성에서 모델을 다시 생성하는 것과 같습니다(따라서 컴파일 정보 또는 레이어 가중치 값을 유지하지 않습니다).
예제:
End of explanation
def create_layer():
layer = keras.layers.Dense(64, activation="relu", name="dense_2")
layer.build((None, 784))
return layer
layer_1 = create_layer()
layer_2 = create_layer()
# Copy weights from layer 2 to layer 1
layer_2.set_weights(layer_1.get_weights())
Explanation: 모델의 가중치 값만 저장 및 로딩
모델의 가중치 값만 저장하고 로드하도록 선택할 수 있습니다. 다음과 같은 경우에 유용할 수 있습니다.
추론을 위한 모델만 필요합니다. 이 경우 훈련을 다시 시작할 필요가 없으므로 컴파일 정보나 옵티마이저 상태가 필요하지 않습니다.
전이 학습을 수행하고 있습니다. 이 경우 이전 모델의 상태를 재사용하는 새 모델을 훈련하므로 이전 모델의 컴파일 정보가 필요하지 않습니다.
인메모리 가중치 전이를 위한 API
get_weights 및 set_weights를 사용하여 다른 객체 간에 가중치를 복사할 수 있습니다.
tf.keras.layers.Layer.get_weights(): numpy 배열의 리스트를 반환합니다.
tf.keras.layers.Layer.set_weights(): weights 인수 내 값으로 모델의 가중치를 설정합니다.
다음은 예제입니다.
메모리에서 레이어 간 가중치 전이하기
End of explanation
# Create a simple functional model
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
functional_model = keras.Model(inputs=inputs, outputs=outputs, name="3_layer_mlp")
# Define a subclassed model with the same architecture
class SubclassedModel(keras.Model):
def __init__(self, output_dim, name=None):
super(SubclassedModel, self).__init__(name=name)
self.output_dim = output_dim
self.dense_1 = keras.layers.Dense(64, activation="relu", name="dense_1")
self.dense_2 = keras.layers.Dense(64, activation="relu", name="dense_2")
self.dense_3 = keras.layers.Dense(output_dim, name="predictions")
def call(self, inputs):
x = self.dense_1(inputs)
x = self.dense_2(x)
x = self.dense_3(x)
return x
def get_config(self):
return {"output_dim": self.output_dim, "name": self.name}
subclassed_model = SubclassedModel(10)
# Call the subclassed model once to create the weights.
subclassed_model(tf.ones((1, 784)))
# Copy weights from functional_model to subclassed_model.
subclassed_model.set_weights(functional_model.get_weights())
assert len(functional_model.weights) == len(subclassed_model.weights)
for a, b in zip(functional_model.weights, subclassed_model.weights):
np.testing.assert_allclose(a.numpy(), b.numpy())
Explanation: 메모리에서 호환 가능한 아키텍처를 사용하여 모델 간 가중치 전이하기
End of explanation
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
functional_model = keras.Model(inputs=inputs, outputs=outputs, name="3_layer_mlp")
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
# Add a dropout layer, which does not contain any weights.
x = keras.layers.Dropout(0.5)(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
functional_model_with_dropout = keras.Model(
inputs=inputs, outputs=outputs, name="3_layer_mlp"
)
functional_model_with_dropout.set_weights(functional_model.get_weights())
Explanation: 상태 비저장 레이어의 경우
상태 비저장 레이어는 순서 또는 가중치 수를 변경하지 않기 때문에 상태 비저장 레이어가 남거나 없더라도 모델은 호환 가능한 아키텍처를 가질 수 있습니다.
End of explanation
# Runnable example
sequential_model = keras.Sequential(
[
keras.Input(shape=(784,), name="digits"),
keras.layers.Dense(64, activation="relu", name="dense_1"),
keras.layers.Dense(64, activation="relu", name="dense_2"),
keras.layers.Dense(10, name="predictions"),
]
)
sequential_model.save_weights("ckpt")
load_status = sequential_model.load_weights("ckpt")
# `assert_consumed` can be used as validation that all variable values have been
# restored from the checkpoint. See `tf.train.Checkpoint.restore` for other
# methods in the Status object.
load_status.assert_consumed()
Explanation: 디스크에 가중치를 저장하고 다시 로딩하기 위한 API
다음 형식으로 model.save_weights를 호출하여 디스크에 가중치를 저장할 수 있습니다.
TensorFlow Checkpoint
HDF5
model.save_weights의 기본 형식은 TensorFlow 체크포인트입니다. 저장 형식을 지정하는 두 가지 방법이 있습니다.
save_format 인수: save_format="tf" 또는 save_format="h5"에 값을 설정합니다.
path 인수: 경로가 .h5 또는 .hdf5로 끝나면 HDF5 형식이 사용됩니다. save_format을 설정하지 않으면 다른 접미어의 경우 TensorFlow 체크포인트로 결과가 발생합니다.
인메모리 numpy 배열로 가중치를 검색하는 옵션도 있습니다. 각 API에는 장단점이 있으며 아래에서 자세히 설명합니다.
TF Checkpoint 형식
예제:
End of explanation
class CustomLayer(keras.layers.Layer):
def __init__(self, a):
self.var = tf.Variable(a, name="var_a")
layer = CustomLayer(5)
layer_ckpt = tf.train.Checkpoint(layer=layer).save("custom_layer")
ckpt_reader = tf.train.load_checkpoint(layer_ckpt)
ckpt_reader.get_variable_to_dtype_map()
Explanation: 형식 세부 사항
TensorFlow Checkpoint 형식은 객체 속성명을 사용하여 가중치를 저장하고 복원합니다. 예를 들어, tf.keras.layers.Dense 레이어를 고려해 봅시다. 레이어에는 dense.kernel과 dense.bias 두 가지 가중치가 있습니다. 레이어가 tf 형식으로 저장되면 결과 체크포인트에는 "kernel" 및 "bias"와 해당 가중치 값이 포함됩니다. 자세한 정보는 TF Checkpoint 가이드의 "로딩 메커니즘"을 참조하세요.
속성/그래프 에지는 변수명이 아니라 부모 객체에서 사용된 이름에 따라 이름이 지정됩니다. 아래 예제의 CustomLayer를 고려해 봅시다. 변수 CustomLayer.var는 "var_a"가 아니라, 키의 일부로서 "var"로 저장됩니다.
End of explanation
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
functional_model = keras.Model(inputs=inputs, outputs=outputs, name="3_layer_mlp")
# Extract a portion of the functional model defined in the Setup section.
# The following lines produce a new model that excludes the final output
# layer of the functional model.
pretrained = keras.Model(
functional_model.inputs, functional_model.layers[-1].input, name="pretrained_model"
)
# Randomly assign "trained" weights.
for w in pretrained.weights:
w.assign(tf.random.normal(w.shape))
pretrained.save_weights("pretrained_ckpt")
pretrained.summary()
# Assume this is a separate program where only 'pretrained_ckpt' exists.
# Create a new functional model with a different output dimension.
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = keras.layers.Dense(5, name="predictions")(x)
model = keras.Model(inputs=inputs, outputs=outputs, name="new_model")
# Load the weights from pretrained_ckpt into model.
model.load_weights("pretrained_ckpt")
# Check that all of the pretrained weights have been loaded.
for a, b in zip(pretrained.weights, model.weights):
np.testing.assert_allclose(a.numpy(), b.numpy())
print("\n", "-" * 50)
model.summary()
# Example 2: Sequential model
# Recreate the pretrained model, and load the saved weights.
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
pretrained_model = keras.Model(inputs=inputs, outputs=x, name="pretrained")
# Sequential example:
model = keras.Sequential([pretrained_model, keras.layers.Dense(5, name="predictions")])
model.summary()
pretrained_model.load_weights("pretrained_ckpt")
# Warning! Calling `model.load_weights('pretrained_ckpt')` won't throw an error,
# but will *not* work as expected. If you inspect the weights, you'll see that
# none of the weights will have loaded. `pretrained_model.load_weights()` is the
# correct method to call.
Explanation: 전이 학습 예제
기본적으로 두 모델이 동일한 아키텍처를 갖는 한 동일한 검사점을 공유할 수 있습니다.
예제:
End of explanation
# Create a subclassed model that essentially uses functional_model's first
# and last layers.
# First, save the weights of functional_model's first and last dense layers.
first_dense = functional_model.layers[1]
last_dense = functional_model.layers[-1]
ckpt_path = tf.train.Checkpoint(
dense=first_dense, kernel=last_dense.kernel, bias=last_dense.bias
).save("ckpt")
# Define the subclassed model.
class ContrivedModel(keras.Model):
def __init__(self):
super(ContrivedModel, self).__init__()
self.first_dense = keras.layers.Dense(64)
self.kernel = self.add_variable("kernel", shape=(64, 10))
self.bias = self.add_variable("bias", shape=(10,))
def call(self, inputs):
x = self.first_dense(inputs)
return tf.matmul(x, self.kernel) + self.bias
model = ContrivedModel()
# Call model on inputs to create the variables of the dense layer.
_ = model(tf.ones((1, 784)))
# Create a Checkpoint with the same structure as before, and load the weights.
tf.train.Checkpoint(
dense=model.first_dense, kernel=model.kernel, bias=model.bias
).restore(ckpt_path).assert_consumed()
Explanation: 일반적으로 모델을 빌드할 때 동일한 API를 사용하는 것이 좋습니다. Sequential 및 Functional 또는 Functional 및 서브 클래스 등 간에 전환하는 경우, 항상 사전 훈련된 모델을 다시 빌드하고 사전 훈련된 가중치를 해당 모델에 로드합니다.
다음 질문은 모델 아키텍처가 상당히 다른 경우 어떻게 다른 모델에 가중치를 저장하고 로드하는가입니다. 해결책은 tf.train.Checkpoint를 사용하여 정확한 레이어/변수를 저장하고 복원하는 것입니다.
예제:
End of explanation
# Runnable example
sequential_model = keras.Sequential(
[
keras.Input(shape=(784,), name="digits"),
keras.layers.Dense(64, activation="relu", name="dense_1"),
keras.layers.Dense(64, activation="relu", name="dense_2"),
keras.layers.Dense(10, name="predictions"),
]
)
sequential_model.save_weights("weights.h5")
sequential_model.load_weights("weights.h5")
Explanation: HDF5 format
HDF5 형식에는 레이어 이름별로 그룹화된 가중치가 포함됩니다. 가중치는 훈련 가능한 가중치 목록을 훈련 불가능한 가중치 목록(layer.weights와 동일)에 연결하여 정렬된 목록입니다. 따라서 모델이 체크포인트에 저장된 것과 동일한 레이어 및 훈련 가능한 상태를 갖는 경우 hdf5 체크포인트을 사용할 수 있습니다.
예제:
End of explanation
class NestedDenseLayer(keras.layers.Layer):
def __init__(self, units, name=None):
super(NestedDenseLayer, self).__init__(name=name)
self.dense_1 = keras.layers.Dense(units, name="dense_1")
self.dense_2 = keras.layers.Dense(units, name="dense_2")
def call(self, inputs):
return self.dense_2(self.dense_1(inputs))
nested_model = keras.Sequential([keras.Input((784,)), NestedDenseLayer(10, "nested")])
variable_names = [v.name for v in nested_model.weights]
print("variables: {}".format(variable_names))
print("\nChanging trainable status of one of the nested layers...")
nested_model.get_layer("nested").dense_1.trainable = False
variable_names_2 = [v.name for v in nested_model.weights]
print("\nvariables: {}".format(variable_names_2))
print("variable ordering changed:", variable_names != variable_names_2)
Explanation: 모델에 중첩된 레이어가 포함된 경우 layer.trainable을 변경하면 layer.weights의 순서가 다르게 나타날 수 있습니다.
End of explanation
def create_functional_model():
inputs = keras.Input(shape=(784,), name="digits")
x = keras.layers.Dense(64, activation="relu", name="dense_1")(inputs)
x = keras.layers.Dense(64, activation="relu", name="dense_2")(x)
outputs = keras.layers.Dense(10, name="predictions")(x)
return keras.Model(inputs=inputs, outputs=outputs, name="3_layer_mlp")
functional_model = create_functional_model()
functional_model.save_weights("pretrained_weights.h5")
# In a separate program:
pretrained_model = create_functional_model()
pretrained_model.load_weights("pretrained_weights.h5")
# Create a new model by extracting layers from the original model:
extracted_layers = pretrained_model.layers[:-1]
extracted_layers.append(keras.layers.Dense(5, name="dense_3"))
model = keras.Sequential(extracted_layers)
model.summary()
Explanation: 전이 학습 예제
HDF5에서 사전 훈련된 가중치를 로딩할 때는 가중치를 기존 체크포인트 모델에 로드한 다음 원하는 가중치/레이어를 새 모델로 추출하는 것이 좋습니다.
예제:
End of explanation |
15,744 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Remote Data
Yahoo
St. Louis Fed (FRED)
Google
documentation
Step1: Yahoo finance
Step2: FRED
source
Step3: Google | Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import datetime
from pandas_datareader import data, wb
Explanation: Remote Data
Yahoo
St. Louis Fed (FRED)
Google
documentation: http://pandas.pydata.org/pandas-docs/stable/remote_data.html
Installation Requirement
pandas-datareader is required; not included with default Anaconda installation (Summer 2016)
from command prompt: conda install -c anaconda pandas-datareader
documentation: https://anaconda.org/anaconda/pandas-datareader
End of explanation
start = datetime.datetime(2010, 1, 1)
end = datetime.datetime(2016, 7, 15)
yahoo_df = data.DataReader("F", 'yahoo', start, end)
yahoo_df.plot()
Explanation: Yahoo finance
End of explanation
start = datetime.datetime(2010, 1, 1)
end = datetime.datetime(2016, 7, 15)
unrate_df = data.DataReader('UNRATE', 'fred', start, end)
unrate_df.plot()
Explanation: FRED
source: http://quant-econ.net/py/pandas.html
End of explanation
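DataReader also accepts a list of FRED series codes, which is convenient when several indicators are wanted in one DataFrame. A small sketch (the extra series codes are just common examples):

```python
# Fetch several FRED series at once; each series becomes a column.
indicators = data.DataReader(['UNRATE', 'CPIAUCSL', 'GDP'], 'fred', start, end)
indicators.plot(subplots=True)
```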
start = datetime.datetime(2010, 1, 1)
end = datetime.datetime(2016, 7, 15)
google_df = data.DataReader("F", 'google', start, end)
google_df.plot()
Explanation: Google
End of explanation |
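These remote sources change over time (the Yahoo and Google endpoints in particular have been known to break in later pandas-datareader releases), so in practice it can be worth guarding the calls. A minimal sketch:

```python
def safe_reader(symbol, source, start, end):
    # Return None instead of raising if the remote source is unavailable.
    try:
        return data.DataReader(symbol, source, start, end)
    except Exception as e:
        print('Could not fetch {} from {}: {}'.format(symbol, source, e))
        return None

ford = safe_reader("F", 'yahoo', start, end)
```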
15,745 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Training - Lesson 5 - Python idioms and Pythonic code
Python guidelines - how to code?
Style and readability of code - PEP8
PEP8 is a set of common sense practices and rules on how to format the text of code, how to name variables, when to make newline breaks, etc. You should familiarize yourself with this standard, as most of work environments will use this to some extent.
https
Step1: Observe the differences. In the first case, we always need to know what exactly we should check before we perform our operation (so we "ask for permission"). What if we don't know?
Imagine - you open a file, but first you "ask for permission" - you check if the file exists. It exists, you open it, but then an exception is raised, like "You do not have sufficient rights to read this file". Our program fails.
If you first perform an operation, and "ask for forgiveness", then you have a much greater control, and you communicate something with your code - that it should work most of the time, except some times it does not.
If you always "ask for permission", then you are wasting computation.
Some recommendations for the EAFP rule
Step2: Loop over a range of numbers
Step3: Loop forwards and backwards through a list
Step4: Loop over a list AND the indexes at the same time
Step5: Loop over two lists at the same time
Step6: Calling a function until something happens
Step7: Looping over dictionary keys and values at the same time
Step8: Unpacking sequences
Step9: Unpacking with wildcard "*"
Step10: Updating multiple variables at once
Step11: Basics of itertools
These tools provide some nice operations you can do on collections. | Python Code:
# Example of what that means:
# Some dictionary which we obtained and have no control of.
dictionary = {"a":1, "b":2, "c":3}
# List of keys we always check.
some_keys = ["a", "b", "c", "d"]
# Old-style way - Look before you leap (LBYL)
for k in some_keys:
if k not in dictionary:
print("Expected to find key: " + str(k) + " but did not find it.")
continue
else:
print(dictionary[k])
# Pythonic way - ask for forgiveness, not permission
for k in some_keys:
try:
print(dictionary[k])
except KeyError:
print("Expected to find key: " + str(k) + " but did not find it.")
continue
except Exception as e:
print("Something terrible happened. Details: " + str(e))
continue
Explanation: Python Training - Lesson 5 - Python idioms and Pythonic code
Python guidelines - how to code?
Style and readability of code - PEP8
PEP8 is a set of common sense practices and rules on how to format the text of code, how to name variables, when to make newline breaks, etc. You should familiarize yourself with this standard, as most of work environments will use this to some extent.
https://www.python.org/dev/peps/pep-0008/
Zen of Python code - PEP20
PEP20 is a set of short recommendations on how to write code.
https://www.python.org/dev/peps/pep-0020/
It's called Zen, because it's more of a spiritual guide, than a concrete ruleset. I recommend reading it, and a few examples on how this changes the look of code can be found here:
https://gist.github.com/evandrix/2030615
Hitch-hiker's guide to Python
The code style article can be found here:
http://docs.python-guide.org/en/latest/writing/style/
Glossary
This page has explanations for many words and acronyms used in Python world.
https://docs.python.org/2/glossary.html
I would expand here on these examples:
EAFP
Easier to ask for forgiveness than permission.
End of explanation
# This is bad:
s = ["a",1,(2,2), 20.00]
for elem in s:
if isinstance(elem, str):
print("This is string")
elif isinstance(elem, int):
print("This is an integer")
elif isinstance(elem, tuple):
print("This is a tuple")
else:
print("This is something else. Details:" + str(type(elem)))
# This is good:
s = ["a", 1, (2,2), 20.00]
helper_dict = {
str: "This is string",
int: "This is integer",
tuple: "This is a tuple"}
for elem in s:
# Notice "asking for forgiveness" and not "permission"
try:
print(helper_dict[type(elem)])
except Exception as e:
print("This is something else. Details: " + str(e))
# Another example, but to store FUNCTIONS instead of VARIABLES
from datetime import datetime
helper_dict = {"amount": float, "counter": int, "date": datetime.strptime}
# Types references are also functions that convert variables between types.
some_dict = {"currency": "USD", "amount": "10000", "source": "Poland", "target": "Poland", "counter": "9298", "date": "20171102"}
for key, value in some_dict.items():
try:
converted = helper_dict[key](value)
except Exception:
converted = str(value)
print(converted)
print(type(converted))
Explanation: Observe the differences. In the first case, we always need to know what exactly we should check before we perform our operation (so we "ask for permission"). What if we don't know?
Imagine - you open a file, but first you "ask for permission" - you check if the file exists. It exists, you open it, but then an exception is raised, like "You do not have sufficient rights to read this file". Our program fails.
If you first perform an operation, and "ask for forgiveness", then you have a much greater control, and you communicate something with your code - that it should work most of the time, except some times it does not.
If you always "ask for permission", then you are wasting computation.
Some recommendations for the EAFP rule:
EAFP (Easier to Ask for Forgiveness than Permission)
IO operations (Hard drive and Networking)
Actions that will almost always be successful
Database operations (when dealing with transactions and can rollback)
Fast prototyping in a throw away environment
LBYL (Look Before You Leap):
Irrevocable actions, or anything that may have a side effect
Operation that may fail more times than succeed
When an exception that needs special attention could be easily caught beforehand
Idiomatic Python
First, I recommend watching this video:
https://www.youtube.com/watch?v=OSGv2VnC0go
"Transforming Code into Beautiful, Idiomatic Python"
Most of these examples come from that video.
Pythonic, idiomatic Python
It just means that the code uses Python idioms, the Python features that make this programming language unique. The code will be more readable and expressive, and will do more things than you thought it could. Let's go through some examples.
Change many "if" into a dictionary
To avoid the infamous "if" ladders, it is much easier to change this into a dictionary.
The first example shows how to change the argument of the "print" function with this approach. Try to count how many fewer "checks" are performed by the system.
End of explanation
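To make the file-opening point above concrete, here is a minimal contrast between the two styles (the filename is just a placeholder):

```python
import os

# LBYL: check first, then act. The file can still vanish or be unreadable in between.
if os.path.exists('some_file.txt'):
    with open('some_file.txt') as f:
        data = f.read()

# EAFP: just act, and handle the failure if it happens.
try:
    with open('some_file.txt') as f:
        data = f.read()
except (FileNotFoundError, PermissionError) as e:
    print("Could not read the file. Details: " + str(e))
```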
# This is not productive
for i in [0,1,2,3,4,5]:
print(i)
# This is much better
for i in range(6):
print(i)
# The 'range' function does not return a simple list.
# It returns an "iterable" - which gives you elements one at a time,
# so the actual big list is not held there inside the statement.
Explanation: Loop over a range of numbers
End of explanation
cars = ['ford', 'volvo', 'chevrolet']
# This is bad
for i in range(len(cars)): print(cars[i])
# This is better
for car in cars: print(car)
# Reversed
for car in reversed(cars): print(car)
Explanation: Loop forwards and backwards through a list
End of explanation
# I want to know the index of an item inside iteration
# This is bad
for i in range(len(cars)):
print(str(i) + " " + cars[i])
# This is better
for i, car in enumerate(cars): print(str(i) + " " + car)
Explanation: Loop over a list AND the indexes at the same time
End of explanation
numbers = [1,2,3,3,4]
letters = ["a", "b", "c", "d", "e"]
# This is bad
for i in range(len(numbers)):
print(str(numbers[i]) + " " + letters[i])
# This is better
for number, letter in zip(numbers,letters): print(number,letter)
Explanation: Loop over two lists at the same time
End of explanation
# Lets write a simple file
import os
filename = 'example.txt'
try:
os.remove(filename)
except OSError:
pass
with open('example.txt', 'w+') as f:
[f.write(str(x) + "\n") for x in range(0,20)]
# Bad way
with open('example.txt', 'r') as f:
while True:
line = f.readline()
if line == '':
break
print(line)
# Better way
with open('example.txt', 'r') as f:
for line in iter(f.readline, ''):
print(line)
Explanation: Calling a function until something happens
End of explanation
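The same iter(callable, sentinel) trick works for any zero-argument callable, not just readline. For example, reading the same file in fixed-size chunks, with functools.partial used to bind the chunk size:

```python
from functools import partial

with open('example.txt', 'r') as f:
    # Call f.read(16) repeatedly until it returns the empty-string sentinel.
    for chunk in iter(partial(f.read, 16), ''):
        print(repr(chunk))
```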
dictionary = {k:v for k,v in zip(range(0,3), range(0,3))}
# Bad Way
for k in dictionary.keys():
print(k, dictionary[k])
# Much better way
for k, v in dictionary.items():
print(k, v)
Explanation: Looping over dictionary keys and values at the same time
End of explanation
seq = ["a", "b", "c", "d"]
# Bad way
first = seq[0]
second = seq[1]
third = seq[2]
fourth = seq[3]
print(first, second, third, fourth)
# Better way
first, second, third, fourth = seq
print(first, second, third, fourth)
Explanation: Unpacking sequences
End of explanation
seq = ["a", "b", "c", "d", "e", "d"]
start, *middle, end = seq
print(start)
print(middle)
print(end)
Explanation: Unpacking with wildcard "*"
End of explanation
# Bad fibonacci implementation
def fibonacci(n):
x = 0
y = 1
for i in range(n):
print(x)
t = y
y = x + y
x = t
fibonacci(8)
# Simpler implementation
def fibonacci(n):
x, y = 0, 1
for i in range(n):
print(x)
x, y = y, x + y
fibonacci(8)
# Multiple updates at the same time
x, y, z, u = range(0,4)
print(x, y, z, u)
x, y, z, u = x + 1, y + z, u - x, z**2
print(x, y, z, u)
Explanation: Updating multiple variables at once
End of explanation
import itertools
# List all the different sequences of a starting list
permutations = itertools.permutations([1,2,3])
print(list(permutations))
# Cycle constantly through a short sequence
from itertools import cycle
counter = 20
for item in cycle('Adamek'):
if counter > 0:
print(item)
        counter -= 1
    else:
        break
Explanation: Basics of itertools
These tools provide some nice operations you can do on collections.
End of explanation |
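Two more itertools helpers that come up a lot are chain (concatenate iterables lazily) and islice (take a slice of any iterator, which is also a safe way to sample an infinite one such as cycle):

```python
from itertools import chain, islice, cycle

# chain: iterate over several collections as if they were one.
print(list(chain([1, 2, 3], "abc")))

# islice: take only the first 8 items of an otherwise infinite iterator.
print(list(islice(cycle('Adamek'), 8)))
```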
15,746 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Compute EBTEL Results
Run the single- and two-fluid EBTEL models for a variety of inputs. This will be the basis for the rest of our analysis.
First, import any needed modules.
Step1: Setup the base dictionary for all of the runs. We'll read in the base dictionary from the ebtel++ example configuration file.
Step2: Next, construct a function that will make it easy to run all of the different EBTEL configurations.
Step3: Configure instances of the XML output handler for printing files.
Step4: Finally, run the model for varying pulse duration.
Step5: And then run the models for varying flux-limiter, $f$.
Step6: Save both data structures to serialized files. | Python Code:
import sys
import os
import subprocess
import pickle
import numpy as np
sys.path.append(os.path.join(os.environ['EXP_DIR'],'ebtelPlusPlus/rsp_toolkit/python'))
from xml_io import InputHandler,OutputHandler
Explanation: Compute EBTEL Results
Run the single- and two-fluid EBTEL models for a variety of inputs. This will be the basis for the rest of our analysis.
First, import any needed modules.
End of explanation
ih = InputHandler(os.path.join(os.environ['EXP_DIR'],'ebtelPlusPlus','config','ebtel.example.cfg.xml'))
config_dict = ih.lookup_vars()
config_dict['use_adaptive_solver'] = False
config_dict['loop_length'] = 40.0e+8
config_dict['adaptive_solver_error'] = 1e-8
config_dict['calculate_dem'] = False
config_dict['total_time'] = 5000.0
config_dict['tau'] = 0.1
config_dict['use_c1_grav_correction'] = True
config_dict['use_c1_loss_correction'] = True
config_dict['c1_cond0'] = 6.0
config_dict['c1_rad0'] = 0.6
config_dict['heating']['background'] = 3.5e-5
config_dict['output_filename'] = '../results/_tmp_'
Explanation: Setup the base dictionary for all of the runs. We'll read in the base dictionary from the ebtel++ example configuration file.
End of explanation
def run_and_print(tau,h0,f,flux_opt,oh_inst):
#create heating event
oh_inst.output_dict['heating']['events'] = [
{'event':{'magnitude':h0,'rise_start':0.0,'rise_end':tau/2.0,'decay_start':tau/2.0,'decay_end':tau}}
]
#set heat flux options
oh_inst.output_dict['saturation_limit'] = f
oh_inst.output_dict['use_flux_limiting'] = flux_opt
#single-fluid
oh_inst.output_dict['force_single_fluid'] = True
oh_inst.output_dict['heating']['partition'] = 0.5
oh_inst.print_to_xml()
subprocess.call([os.path.join(os.environ['EXP_DIR'],'ebtelPlusPlus','bin','ebtel++.run'),
'-c',oh_inst.output_filename])
#save parameters to list
temp = np.loadtxt(oh_inst.output_dict['output_filename'])
t,T,n = temp[:,0],temp[:,1],temp[:,3]
#two-fluid
#--electron heating
oh_inst.output_dict['force_single_fluid'] = False
oh_inst.output_dict['heating']['partition'] = 1.0
oh_inst.print_to_xml()
subprocess.call([os.path.join(os.environ['EXP_DIR'],'ebtelPlusPlus','bin','ebtel++.run'),
'-c',oh_inst.output_filename])
temp = np.loadtxt(oh_inst.output_dict['output_filename'])
te,Tee,Tei,ne= temp[:,0],temp[:,1],temp[:,2],temp[:,3]
#--ion heating
oh_inst.output_dict['force_single_fluid'] = False
oh_inst.output_dict['heating']['partition'] = 0.0
oh_inst.print_to_xml()
subprocess.call([os.path.join(os.environ['EXP_DIR'],'ebtelPlusPlus','bin','ebtel++.run'),
'-c',oh_inst.output_filename])
temp = np.loadtxt(oh_inst.output_dict['output_filename'])
ti,Tie,Tii,ni = temp[:,0],temp[:,1],temp[:,2],temp[:,3]
#return dictionary
return {'t':t,'te':te,'ti':ti,'T':T,'Tee':Tee,'Tei':Tei,'Tie':Tie,'Tii':Tii,'n':n,'ne':ne,'ni':ni,
'heat_flux_option':flux_opt}
Explanation: Next, construct a function that will make it easy to run all of the different EBTEL configurations.
End of explanation
oh = OutputHandler(config_dict['output_filename']+'.xml',config_dict)
Explanation: Configure instances of the XML output handler for printing files.
End of explanation
tau_h = [20,40,200,500]
tau_h_results = []
for t in tau_h:
results = run_and_print(t,20.0/t,1.0,True,oh)
results['loop_length'] = config_dict['loop_length']
tau_h_results.append(results)
Explanation: Finally, run the model for varying pulse duration.
End of explanation
flux_lim = [{'f':1.0,'opt':True},{'f':0.53,'opt':True},{'f':1.0/6.0,'opt':True},{'f':0.1,'opt':True},
{'f':1.0/30.0,'opt':True},{'f':1.0,'opt':False}]
flux_lim_results = []
for i in range(len(flux_lim)):
results = run_and_print(200.0,0.1,flux_lim[i]['f'],flux_lim[i]['opt'],oh)
results['loop_length'] = config_dict['loop_length']
flux_lim_results.append(results)
Explanation: And then run the models for varying flux-limiter, $f$.
End of explanation
with open(__dest__[0],'wb') as f:
pickle.dump(tau_h_results,f)
with open(__dest__[1],'wb') as f:
pickle.dump(flux_lim_results,f)
Explanation: Save both data structures to serialized files.
End of explanation |
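As a quick sanity check (assuming __dest__ still points at the two files written above), the pickled results can be read back the same way before moving on to the analysis:

```python
with open(__dest__[0], 'rb') as f:
    tau_h_check = pickle.load(f)
print(len(tau_h_check), 'pulse-duration runs reloaded')
```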
15,747 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Linear Algebra Review
Step1: Matrices
$$ A = \begin{bmatrix} 123 & 343 & 100\\
33 & 0 & -50 \end{bmatrix} $$
Step2: $$ A = \begin{bmatrix} 123 & 343 & 100\\
33 & 0 & -50 \end{bmatrix} =
\begin{bmatrix} a_{0,0} & a_{0,1} & a_{0,2}\\
a_{1,0} & a_{1,1} & a_{1,2} \end{bmatrix} $$
$$ a_{i,j} $$ is the element in the i-th row and j-th column.
In NumPy, for a two-dimensional matrix, the first dimension is the number of rows, shape[0], and
the second dimension is the number of columns, shape[1].
The first index i of A[i,j] is the row index and the second index j is the
column index.
Step3: Column vector
A column vector is a two-dimensional matrix with a single column; it has shape (n,1), that is, n rows and 1 column.
Step4: Matrix addition
$$ C = A + B $$
$$ c_{i,j} = a_{i,j} + b_{i,j} $$ for all elements of $A$, $B$ and $C$.
It is important that the dimensions of these three matrices are equal.
Step5: Matrix multiplication
Multiplication of a matrix by a scalar
$$ \beta A = \begin{bmatrix} \beta a_{0,0} & \beta a_{0,1} & \ldots & \beta a_{0,m-1}\
\beta a_{1,0} & \beta a_{1,1} & \ldots & \beta a_{1,m-1} \
\vdots & \vdots & \ddots & \vdots \
\beta a_{n-1,0} & \beta a_{n-1,1} & \ldots & \beta a_{n-1,m-1}
\end{bmatrix} $$
import numpy as np
from numpy.random import randn
Explanation: Linear Algebra Review
End of explanation
A = np.array([[123, 343, 100],
[ 33, 0, -50]])
print (A )
print (A.shape )
print (A.shape[0] )
print (A.shape[1] )
B = np.array([[5, 3, 2, 1, 4],
[0, 2, 1, 3, 8]])
print (B )
print (B.shape )
print (B.shape[0] )
print (B.shape[1] )
Explanation: Matrices
$$ A = \begin{bmatrix} 123, & 343, & 100\
33, & 0, & -50 \end{bmatrix} $$
End of explanation
print ('A=\n', A )
for i in range(A.shape[0]):
for j in range(A.shape[1]):
print ('A[%d,%d] = %d' % (i,j, A[i,j]) )
Explanation: $$ A = \begin{bmatrix} 123, & 343, & 100\
33, & 0, & -50 \end{bmatrix} =
\begin{bmatrix} a_{0,0}, & a_{0,1}, & a_{0,2}\
a_{1,0}, & a_{1,1}, & a_{1,2} \end{bmatrix} $$
$$ a_{i,j} $$ is the element in the i-th row and j-th column.
In NumPy, for a two-dimensional matrix, the first dimension is the number of rows, shape[0], and
the second dimension is the number of columns, shape[1].
The first index i of A[i,j] is the row index and the second index j
is the column index.
End of explanation
B = np.array([[3],
[5]])
print ('B=\n', B )
print ('B.shape:', B.shape )
Explanation: Column vector
A column vector is a two-dimensional matrix with a single column; it has shape (n,1), that is, n rows and 1 column.
End of explanation
A = (10*randn(2,3)).astype(int)
B = randn(2,3)
C = A + B
print ('A=\n',A )
print ('B=\n',B )
print ('C=\n',C )
Explanation: Matrix addition
$$ C = A + B $$
$$ c_{i,j} = a_{i,j} + b_{i,j} $$ for all elements of $A$, $B$ and $C$.
It is important that the dimensions of these three matrices are equal.
End of explanation
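A quick check of the shape requirement described above (an added, illustrative example — not part of the original notebook): adding arrays with incompatible shapes raises an error.
```python
import numpy as np

A = np.ones((2, 3))
B = np.ones((2, 3))
C = np.ones((3, 2))

print((A + B).shape)  # element-wise addition works: shapes match -> (2, 3)

try:
    A + C  # shapes (2, 3) and (3, 2) are incompatible
except ValueError as err:
    print('shapes do not match:', err)
```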
print ('A=\n', A )
print()
print ('4 * A=\n', 4 * A )
Explanation: Matrix multiplication
Multiplication of a matrix by a scalar
$$ \beta A = \begin{bmatrix} \beta a_{0,0} & \beta a_{0,1} & \ldots & \beta a_{0,m-1}\
\beta a_{1,0} & \beta a_{1,1} & \ldots & \beta a_{1,m-1} \
\vdots & \vdots & \ddots & \vdots \
\beta a_{n-1,0} & \beta a_{n-1,1} & \ldots & \beta a_{n-1,m-1}
\end{bmatrix} $$
End of explanation |
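To confirm the element-wise definition of scalar multiplication given above, a small illustrative sketch:
```python
import numpy as np

beta = 4
A = np.array([[1, 2, 3],
              [4, 5, 6]])
B = beta * A

# Every element of B equals beta times the corresponding element of A.
for i in range(A.shape[0]):
    for j in range(A.shape[1]):
        assert B[i, j] == beta * A[i, j]
print(B)
```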
15,748 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interface caching
This section details the interface-caching mechanism, exposed in the nipype.caching module.
Interface caching
Step1: Note that the caching directory is a subdirectory called nipype_mem of the given base_dir. This is done to avoid polluting the base directory.
In the corresponding execution context, nipype interfaces can be turned into callables that can be used as functions using the Memory.cache method. For instance, if we want to run the fslMerge command on a set of files
Step2: Note that the Memory.cache method takes interfaces classes, and not instances.
The resulting fsl_merge object can be applied as a function to parameters, that will form the inputs of the merge fsl commands. Those inputs are given as keyword arguments, bearing the same name as the name in the inputs specs of the interface. In IPython, you can also get the argument list by using the fsl_merge? syntax to inspect the docs
Step3: The results are standard nipype nodes results. In particular, they expose an outputs attribute that carries all the outputs of the process, as specified by the docs.
Step4: Finally, and most important, if the node is applied to the same input parameters, it is not computed, and the results are reloaded from the disk | Python Code:
from nipype.caching import Memory
mem = Memory(base_dir='.')
Explanation: Interface caching
This section details the interface-caching mechanism, exposed in the nipype.caching module.
Interface caching: why and how
Pipelines (also called workflows) specify processing by an execution graph. This is useful because it opens the door to dependency checking and enables
to minimize recomputations,
to have the execution engine transparently deal with intermediate file manipulations.
They, however, do not blend in well with arbitrary Python code, as they must rely on their own execution engine.
Interfaces give fine control of the execution of each step with a thin wrapper on the underlying software. As a result that can easily be inserted in Python code.
However, they force the user to specify explicit input and output file names and cannot do any caching.
This is why nipype exposes an intermediate mechanism, caching that provides transparent output file management and caching within imperative Python code rather than a workflow.
A big picture view: using the Memory object
nipype caching relies on the Memory class: it creates an
execution context that is bound to a disk cache:
End of explanation
from nipype.interfaces import fsl
fsl_merge = mem.cache(fsl.Merge)
Explanation: Note that the caching directory is a subdirectory called nipype_mem of the given base_dir. This is done to avoid polluting the base directory.
In the corresponding execution context, nipype interfaces can be turned into callables that can be used as functions using the Memory.cache method. For instance, if we want to run the fslMerge command on a set of files:
End of explanation
filepath = '/data/ds000114/sub-01/ses-test/func/sub-01_ses-test_task-fingerfootlips_bold.nii.gz'
results = fsl_merge(dimension='t', in_files=[filepath, filepath])
Explanation: Note that the Memory.cache method takes interfaces classes, and not instances.
The resulting fsl_merge object can be applied as a function to parameters, that will form the inputs of the merge fsl commands. Those inputs are given as keyword arguments, bearing the same name as the name in the inputs specs of the interface. In IPython, you can also get the argument list by using the fsl_merge? syntax to inspect the docs:
```python
In [3]: fsl_merge?
String Form:PipeFunc(nipype.interfaces.fsl.utils.Merge,
base_dir=/home/varoquau/dev/nipype/nipype/caching/nipype_mem)
Namespace: Interactive
File: /home/varoquau/dev/nipype/nipype/caching/memory.py
Definition: fsl_merge(self, **kwargs)
Docstring: Use fslmerge to concatenate images
Inputs
Mandatory:
dimension: dimension along which the file will be merged
in_files: None
Optional:
args: Additional parameters to the command
environ: Environment variables (default={})
ignore_exception: Print an error message instead of throwing an exception in case the interface fails to run (default=False)
merged_file: None
output_type: FSL output type
Outputs
merged_file: None
Class Docstring:
...
```
Thus fsl_merge is applied to parameters as such:
End of explanation
results.outputs.merged_file
Explanation: The results are standard nipype nodes results. In particular, they expose an outputs attribute that carries all the outputs of the process, as specified by the docs.
End of explanation
results = fsl_merge(dimension='t', in_files=[filepath, filepath])
Explanation: Finally, and most important, if the node is applied to the same input parameters, it is not computed, and the results are reloaded from the disk:
End of explanation |
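One rough way to convince yourself of the caching behaviour is to time a repeated call — a sketch reusing the fsl_merge callable, filepath and results objects defined above:
```python
import time

start = time.time()
# Identical inputs: the result should be reloaded from the disk cache
# almost instantly instead of re-running fslmerge.
results_again = fsl_merge(dimension='t', in_files=[filepath, filepath])
print('re-run took %.2f s' % (time.time() - start))

# The cached run points at the same output file as the first run.
print(results_again.outputs.merged_file == results.outputs.merged_file)
```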
15,749 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Training a neural network on MNIST with Keras
This simple example demonstrates how to plug TensorFlow Datasets (TFDS) into a Keras model.
Copyright 2020 The TensorFlow Datasets Authors, Licensed under the Apache License, Version 2.0
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step1: Step 1
Step3: Build a training pipeline
Apply the following transformations
Step4: Build an evaluation pipeline
Your testing pipeline is similar to the training pipeline with small differences
Step5: Step 2 | Python Code:
import tensorflow as tf
import tensorflow_datasets as tfds
Explanation: Training a neural network on MNIST with Keras
This simple example demonstrates how to plug TensorFlow Datasets (TFDS) into a Keras model.
Copyright 2020 The TensorFlow Datasets Authors, Licensed under the Apache License, Version 2.0
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/datasets/keras_example"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/datasets/blob/master/docs/keras_example.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/datasets/blob/master/docs/keras_example.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/datasets/docs/keras_example.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
End of explanation
(ds_train, ds_test), ds_info = tfds.load(
'mnist',
split=['train', 'test'],
shuffle_files=True,
as_supervised=True,
with_info=True,
)
Explanation: Step 1: Create your input pipeline
Start by building an efficient input pipeline using advices from:
* The Performance tips guide
* The Better performance with the tf.data API guide
Load a dataset
Load the MNIST dataset with the following arguments:
shuffle_files=True: The MNIST data is only stored in a single file, but for larger datasets with multiple files on disk, it's good practice to shuffle them when training.
as_supervised=True: Returns a tuple (img, label) instead of a dictionary {'image': img, 'label': label}.
End of explanation
def normalize_img(image, label):
  """Normalizes images: `uint8` -> `float32`."""
return tf.cast(image, tf.float32) / 255., label
ds_train = ds_train.map(
normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
ds_train = ds_train.cache()
ds_train = ds_train.shuffle(ds_info.splits['train'].num_examples)
ds_train = ds_train.batch(128)
ds_train = ds_train.prefetch(tf.data.AUTOTUNE)
Explanation: Build a training pipeline
Apply the following transformations:
tf.data.Dataset.map: TFDS provide images of type tf.uint8, while the model expects tf.float32. Therefore, you need to normalize images.
tf.data.Dataset.cache As you fit the dataset in memory, cache it before shuffling for a better performance.<br/>
Note: Random transformations should be applied after caching.
tf.data.Dataset.shuffle: For true randomness, set the shuffle buffer to the full dataset size.<br/>
Note: For large datasets that can't fit in memory, use buffer_size=1000 if your system allows it.
tf.data.Dataset.batch: Batch elements of the dataset after shuffling to get unique batches at each epoch.
tf.data.Dataset.prefetch: It is good practice to end the pipeline by prefetching for performance.
End of explanation
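As a quick sanity check of the pipeline (an added sketch, not part of the original tutorial), one batch can be pulled and inspected:
```python
for images, labels in ds_train.take(1):
  # With the settings above this should print (128, 28, 28, 1) float32 and (128,)
  print(images.shape, images.dtype, labels.shape, labels.dtype)
```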
ds_test = ds_test.map(
normalize_img, num_parallel_calls=tf.data.AUTOTUNE)
ds_test = ds_test.batch(128)
ds_test = ds_test.cache()
ds_test = ds_test.prefetch(tf.data.AUTOTUNE)
Explanation: Build an evaluation pipeline
Your testing pipeline is similar to the training pipeline with small differences:
You don't need to call tf.data.Dataset.shuffle.
Caching is done after batching because batches can be the same between epochs.
End of explanation
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(
optimizer=tf.keras.optimizers.Adam(0.001),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
)
model.fit(
ds_train,
epochs=6,
validation_data=ds_test,
)
Explanation: Step 2: Create and train the model
Plug the TFDS input pipeline into a simple Keras model, compile the model, and train it.
End of explanation |
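After training, the evaluation pipeline built earlier can be reused to report the final test metrics — a small follow-up sketch:
```python
loss, accuracy = model.evaluate(ds_test)
print(f'test loss: {loss:.4f} - test accuracy: {accuracy:.4f}')
```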
15,750 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Interpolation Exercise 1
Step1: 2D trajectory interpolation
The file trajectory.npz contains 3 Numpy arrays that describe a 2d trajectory of a particle as a function of time
Step2: Use these arrays to create interpolated functions $x(t)$ and $y(t)$. Then use those functions to create the following arrays
Step3: Make a parametric plot of ${x(t),y(t)}$ that shows the interpolated values and the original points | Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
from scipy.interpolate import interp1d
Explanation: Interpolation Exercise 1
End of explanation
data=np.load('trajectory.npz')
t=data['t']
x=data['x']
y=data['y']
assert isinstance(x, np.ndarray) and len(x)==40
assert isinstance(y, np.ndarray) and len(y)==40
assert isinstance(t, np.ndarray) and len(t)==40
Explanation: 2D trajectory interpolation
The file trajectory.npz contains 3 Numpy arrays that describe a 2d trajectory of a particle as a function of time:
t which has discrete values of time t[i].
x which has values of the x position at those times: x[i] = x(t[i]).
y which has values of the y position at those times: y[i] = y(t[i]).
Load those arrays into this notebook and save them as variables x, y and t:
End of explanation
newt=np.linspace(t[0],t[len(t)-1],200)
nx=interp1d(t,x)
ny=interp1d(t,y)
newx=nx(newt)
newy=ny(newt)
assert newt[0]==t.min()
assert newt[-1]==t.max()
assert len(newt)==200
assert len(newx)==200
assert len(newy)==200
Explanation: Use these arrays to create interpolated functions $x(t)$ and $y(t)$. Then use those functions to create the following arrays:
newt which has 200 points between ${t_{min},t_{max}}$.
newx which has the interpolated values of $x(t)$ at those times.
newy which has the interpolated values of $y(t)$ at those times.
End of explanation
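interp1d defaults to linear interpolation; if a smoother trajectory is wanted, a cubic interpolant can be built the same way (an optional sketch using the t, x and y arrays above):
```python
nx_cubic = interp1d(t, x, kind='cubic')
ny_cubic = interp1d(t, y, kind='cubic')
newx_cubic = nx_cubic(newt)
newy_cubic = ny_cubic(newt)
print(newx_cubic.shape, newy_cubic.shape)  # both (200,)
```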
plt.plot(newx,newy,'g-')
plt.plot(x,y,'bo')
plt.xlabel('x')
plt.ylabel('y')
plt.title('Interpolation')
plt.xlim(-1,1.1)
plt.ylim(-1,1.2)
assert True # leave this to grade the trajectory plot
Explanation: Make a parametric plot of ${x(t),y(t)}$ that shows the interpolated values and the original points:
For the interpolated points, use a solid line.
For the original points, use circles of a different color and no line.
Customize your plot to make it effective and beautiful.
End of explanation |
15,751 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
EfficientNetV2 with tf-hub
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step1: 3.Inference with ImageNet 1k/21k checkpoints
3.1 ImageNet1k checkpoint
Step2: 3.2 ImageNet21k checkpoint
Step3: 4.Finetune with Flowers dataset.
Get hub_url and image_size
Step4: Get dataset
Step5: Training the model
Step6: Finally, the trained model can be saved for deployment to TF Serving or TF Lite (on mobile) as follows.
Step7: Optional | Python Code:
import itertools
import os
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
print('TF version:', tf.__version__)
print('Hub version:', hub.__version__)
print('Physical devices:', tf.config.list_physical_devices())
def get_hub_url_and_isize(model_name, ckpt_type, hub_type):
if ckpt_type == '1k':
ckpt_type = '' # json doesn't support empty string
else:
ckpt_type = '-' + ckpt_type # add '-' as prefix
hub_url_map = {
'efficientnetv2-b0': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-b0/{hub_type}',
'efficientnetv2-b1': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-b1/{hub_type}',
'efficientnetv2-b2': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-b2/{hub_type}',
'efficientnetv2-b3': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-b3/{hub_type}',
'efficientnetv2-s': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-s/{hub_type}',
'efficientnetv2-m': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-m/{hub_type}',
'efficientnetv2-l': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-l/{hub_type}',
'efficientnetv2-b0-21k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-b0-21k/{hub_type}',
'efficientnetv2-b1-21k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-b1-21k/{hub_type}',
'efficientnetv2-b2-21k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-b2-21k/{hub_type}',
'efficientnetv2-b3-21k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-b3-21k/{hub_type}',
'efficientnetv2-s-21k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-s-21k/{hub_type}',
'efficientnetv2-m-21k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-m-21k/{hub_type}',
'efficientnetv2-l-21k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-l-21k/{hub_type}',
'efficientnetv2-xl-21k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-xl-21k/{hub_type}',
'efficientnetv2-b0-21k-ft1k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-b0-21k-ft1k/{hub_type}',
'efficientnetv2-b1-21k-ft1k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-b1-21k-ft1k/{hub_type}',
'efficientnetv2-b2-21k-ft1k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-b2-21k-ft1k/{hub_type}',
'efficientnetv2-b3-21k-ft1k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-b3-21k-ft1k/{hub_type}',
'efficientnetv2-s-21k-ft1k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-s-21k-ft1k/{hub_type}',
'efficientnetv2-m-21k-ft1k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-m-21k-ft1k/{hub_type}',
'efficientnetv2-l-21k-ft1k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-l-21k-ft1k/{hub_type}',
'efficientnetv2-xl-21k-ft1k': f'gs://cloud-tpu-checkpoints/efficientnet/v2/hub/efficientnetv2-xl-21k-ft1k/{hub_type}',
# efficientnetv1
'efficientnet_b0': f'https://tfhub.dev/tensorflow/efficientnet/b0/{hub_type}/1',
'efficientnet_b1': f'https://tfhub.dev/tensorflow/efficientnet/b1/{hub_type}/1',
'efficientnet_b2': f'https://tfhub.dev/tensorflow/efficientnet/b2/{hub_type}/1',
'efficientnet_b3': f'https://tfhub.dev/tensorflow/efficientnet/b3/{hub_type}/1',
'efficientnet_b4': f'https://tfhub.dev/tensorflow/efficientnet/b4/{hub_type}/1',
'efficientnet_b5': f'https://tfhub.dev/tensorflow/efficientnet/b5/{hub_type}/1',
'efficientnet_b6': f'https://tfhub.dev/tensorflow/efficientnet/b6/{hub_type}/1',
'efficientnet_b7': f'https://tfhub.dev/tensorflow/efficientnet/b7/{hub_type}/1',
}
image_size_map = {
'efficientnetv2-b0': 224,
'efficientnetv2-b1': 240,
'efficientnetv2-b2': 260,
'efficientnetv2-b3': 300,
'efficientnetv2-s': 384,
'efficientnetv2-m': 480,
'efficientnetv2-l': 480,
'efficientnetv2-xl': 512,
'efficientnet_b0': 224,
'efficientnet_b1': 240,
'efficientnet_b2': 260,
'efficientnet_b3': 300,
'efficientnet_b4': 380,
'efficientnet_b5': 456,
'efficientnet_b6': 528,
'efficientnet_b7': 600,
}
hub_url = hub_url_map.get(model_name + ckpt_type)
image_size = image_size_map.get(model_name, 224)
return hub_url, image_size
def get_imagenet_labels(filename):
labels = []
with open(filename, 'r') as f:
for line in f:
labels.append(line.split('\t')[1][:-1]) # split and remove line break.
return labels
Explanation: EfficientNetV2 with tf-hub
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://github.com/google/automl/blob/master/efficientnetv2/tfhub.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on github
</a>
</td><td>
<a target="_blank" href="https://colab.sandbox.google.com/github/google/automl/blob/master/efficientnetv2/tfhub.ipynb">
<img width=32px src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td><td>
<!----<a href="https://tfhub.dev/google/collections/image/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png" />TF Hub models</a>--->
</td>
</table>
1.Introduction
EfficientNetV2 is a family of classification models, with better accuracy, smaller size, and faster speed than previous models.
This doc describes some examples with EfficientNetV2 tfhub. For more details, please visit the official code: https://github.com/google/automl/tree/master/efficientnetv2
2.Select the TF2 SavedModel module to use
End of explanation
# Build model
import tensorflow_hub as hub
model_name = 'efficientnetv2-s' #@param {type:'string'}
ckpt_type = '1k' # @param ['21k-ft1k', '1k']
hub_type = 'classification' # @param ['classification', 'feature-vector']
hub_url, image_size = get_hub_url_and_isize(model_name, ckpt_type, hub_type)
tf.keras.backend.clear_session()
m = hub.KerasLayer(hub_url, trainable=False)
m.build([None, 224, 224, 3]) # Batch input shape.
# Download label map file and image
labels_map = '/tmp/imagenet1k_labels.txt'
image_file = '/tmp/panda.jpg'
tf.keras.utils.get_file(image_file, 'https://upload.wikimedia.org/wikipedia/commons/f/fe/Giant_Panda_in_Beijing_Zoo_1.JPG')
tf.keras.utils.get_file(labels_map, 'https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/v2/imagenet1k_labels.txt')
# preprocess image.
image = tf.keras.preprocessing.image.load_img(image_file, target_size=(224, 224))
image = tf.keras.preprocessing.image.img_to_array(image)
image = (image - 128.) / 128.
logits = m(tf.expand_dims(image, 0), False)
# Output classes and probability
pred = tf.keras.layers.Softmax()(logits)
idx = tf.argsort(logits[0])[::-1][:5].numpy()
classes = get_imagenet_labels(labels_map)
for i, id in enumerate(idx):
print(f'top {i+1} ({pred[0][id]*100:.1f}%): {classes[id]} ')
from IPython import display
display.display(display.Image(image_file))
Explanation: 3.Inference with ImageNet 1k/21k checkpoints
3.1 ImageNet1k checkpoint
End of explanation
# Build model
import tensorflow_hub as hub
model_name = 'efficientnetv2-s' #@param {type:'string'}
ckpt_type = '21k' # @param ['21k']
hub_type = 'classification' # @param ['classification', 'feature-vector']
hub_url, image_size = get_hub_url_and_isize(model_name, ckpt_type, hub_type)
tf.keras.backend.clear_session()
m = hub.KerasLayer(hub_url, trainable=False)
m.build([None, 224, 224, 3]) # Batch input shape.
# Download label map file and image
labels_map = '/tmp/imagenet21k_labels.txt'
image_file = '/tmp/panda2.jpeg'
tf.keras.utils.get_file(image_file, 'https://upload.wikimedia.org/wikipedia/commons/f/fe/Giant_Panda_in_Beijing_Zoo_1.JPG')
tf.keras.utils.get_file(labels_map, 'https://storage.googleapis.com/cloud-tpu-checkpoints/efficientnet/v2/imagenet21k_labels.txt')
# preprocess image.
image = tf.keras.preprocessing.image.load_img(image_file, target_size=(224, 224))
image = tf.keras.preprocessing.image.img_to_array(image)
image = (image - 128.) / 128.
logits = m(tf.expand_dims(image, 0), False)
# Output classes and probability
pred = tf.keras.activations.sigmoid(logits) # 21k uses sigmoid for multi-label
idx = tf.argsort(logits[0])[::-1][:20].numpy()
classes = get_imagenet_labels(labels_map)
for i, id in enumerate(idx):
print(f'top {i+1} ({pred[0][id]*100:.1f}%): {classes[id]} ')
if pred[0][id] < 0.5:
break
from IPython import display
display.display(display.Image(image_file))
Explanation: 3.2 ImageNet21k checkpoint
End of explanation
# Build model
import tensorflow_hub as hub
model_name = 'efficientnetv2-b0' #@param {type:'string'}
ckpt_type = '1k' # @param ['21k', '21k-ft1k', '1k']
hub_type = 'feature-vector' # @param ['feature-vector']
batch_size = 32#@param {type:"integer"}
hub_url, image_size = get_hub_url_and_isize(model_name, ckpt_type, hub_type)
Explanation: 4.Finetune with Flowers dataset.
Get hub_url and image_size
End of explanation
data_dir = tf.keras.utils.get_file(
'flower_photos',
'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
untar=True)
datagen_kwargs = dict(rescale=1./255, validation_split=.20)
dataflow_kwargs = dict(target_size=(image_size, image_size),
batch_size=batch_size,
interpolation="bilinear")
valid_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
**datagen_kwargs)
valid_generator = valid_datagen.flow_from_directory(
data_dir, subset="validation", shuffle=False, **dataflow_kwargs)
do_data_augmentation = False #@param {type:"boolean"}
if do_data_augmentation:
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
rotation_range=40,
horizontal_flip=True,
width_shift_range=0.2, height_shift_range=0.2,
shear_range=0.2, zoom_range=0.2,
**datagen_kwargs)
else:
train_datagen = valid_datagen
train_generator = train_datagen.flow_from_directory(
data_dir, subset="training", shuffle=True, **dataflow_kwargs)
Explanation: Get dataset
End of explanation
# whether to finetune the whole model or just the top layer.
do_fine_tuning = True #@param {type:"boolean"}
num_epochs = 2 #@param {type:"integer"}
tf.keras.backend.clear_session()
model = tf.keras.Sequential([
# Explicitly define the input shape so the model can be properly
# loaded by the TFLiteConverter
tf.keras.layers.InputLayer(input_shape=[image_size, image_size, 3]),
hub.KerasLayer(hub_url, trainable=do_fine_tuning),
tf.keras.layers.Dropout(rate=0.2),
tf.keras.layers.Dense(train_generator.num_classes,
kernel_regularizer=tf.keras.regularizers.l2(0.0001))
])
model.build((None, image_size, image_size, 3))
model.summary()
model.compile(
optimizer=tf.keras.optimizers.SGD(learning_rate=0.005, momentum=0.9),
loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True, label_smoothing=0.1),
metrics=['accuracy'])
steps_per_epoch = train_generator.samples // train_generator.batch_size
validation_steps = valid_generator.samples // valid_generator.batch_size
hist = model.fit(
train_generator,
epochs=num_epochs, steps_per_epoch=steps_per_epoch,
validation_data=valid_generator,
validation_steps=validation_steps).history
def get_class_string_from_index(index):
for class_string, class_index in valid_generator.class_indices.items():
if class_index == index:
return class_string
x, y = next(valid_generator)
image = x[0, :, :, :]
true_index = np.argmax(y[0])
plt.imshow(image)
plt.axis('off')
plt.show()
# Expand the validation image to (1, 224, 224, 3) before predicting the label
prediction_scores = model.predict(np.expand_dims(image, axis=0))
predicted_index = np.argmax(prediction_scores)
print("True label: " + get_class_string_from_index(true_index))
print("Predicted label: " + get_class_string_from_index(predicted_index))
Explanation: Training the model
End of explanation
saved_model_path = f"/tmp/saved_flowers_model_{model_name}"
tf.saved_model.save(model, saved_model_path)
Explanation: Finally, the trained model can be saved for deployment to TF Serving or TF Lite (on mobile) as follows.
End of explanation
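As a sanity check (sketch), the exported SavedModel can be loaded back and its serving signature inspected before shipping it to TF Serving:
```python
reloaded = tf.saved_model.load(saved_model_path)
# A Keras model exported with tf.saved_model.save gets a default serving signature.
print(list(reloaded.signatures.keys()))
```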
optimize_lite_model = True #@param {type:"boolean"}
#@markdown Setting a value greater than zero enables quantization of neural network activations. A few dozen is already a useful amount.
num_calibration_examples = 81 #@param {type:"slider", min:0, max:1000, step:1}
representative_dataset = None
if optimize_lite_model and num_calibration_examples:
# Use a bounded number of training examples without labels for calibration.
# TFLiteConverter expects a list of input tensors, each with batch size 1.
representative_dataset = lambda: itertools.islice(
([image[None, ...]] for batch, _ in train_generator for image in batch),
num_calibration_examples)
converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path)
if optimize_lite_model:
converter.optimizations = [tf.lite.Optimize.DEFAULT]
if representative_dataset: # This is optional, see above.
converter.representative_dataset = representative_dataset
lite_model_content = converter.convert()
with open(f"/tmp/lite_flowers_model_{model_name}.tflite", "wb") as f:
f.write(lite_model_content)
print("Wrote %sTFLite model of %d bytes." %
("optimized " if optimize_lite_model else "", len(lite_model_content)))
interpreter = tf.lite.Interpreter(model_content=lite_model_content)
# This little helper wraps the TF Lite interpreter as a numpy-to-numpy function.
def lite_model(images):
interpreter.allocate_tensors()
interpreter.set_tensor(interpreter.get_input_details()[0]['index'], images)
interpreter.invoke()
return interpreter.get_tensor(interpreter.get_output_details()[0]['index'])
#@markdown For rapid experimentation, start with a moderate number of examples.
num_eval_examples = 50 #@param {type:"slider", min:0, max:700}
eval_dataset = ((image, label) # TFLite expects batch size 1.
for batch in train_generator
for (image, label) in zip(*batch))
count = 0
count_lite_tf_agree = 0
count_lite_correct = 0
for image, label in eval_dataset:
probs_lite = lite_model(image[None, ...])[0]
probs_tf = model(image[None, ...]).numpy()[0]
y_lite = np.argmax(probs_lite)
y_tf = np.argmax(probs_tf)
y_true = np.argmax(label)
count +=1
if y_lite == y_tf: count_lite_tf_agree += 1
if y_lite == y_true: count_lite_correct += 1
if count >= num_eval_examples: break
print("TF Lite model agrees with original model on %d of %d examples (%g%%)." %
(count_lite_tf_agree, count, 100.0 * count_lite_tf_agree / count))
print("TF Lite model is accurate on %d of %d examples (%g%%)." %
(count_lite_correct, count, 100.0 * count_lite_correct / count))
Explanation: Optional: Deployment to TensorFlow Lite
TensorFlow Lite for mobile. Here we also run the TFLite file in the TF Lite Interpreter to examine the resulting quality.
End of explanation |
15,752 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
This notebook covers using metrics to analyze the 'accuracy' of prophet models. In this notebook, we will extend the previous example (http
Step1: Read in the data
Read the data in from the retail sales CSV file in the examples folder then set the index to the 'date' column. We are also parsing dates in the data file.
Step2: Prepare for Prophet
As explained in previous prophet posts, for prophet to work, we need to change the names of these columns to 'ds' and 'y'.
Step3: Let's rename the columns as required by fbprophet. Additioinally, fbprophet doesn't like the index to be a datetime...it wants to see 'ds' as a non-index column, so we won't set an index differnetly than the integer index.
Step4: Now's a good time to take a look at your data. Plot the data using pandas' plot function
Step5: Running Prophet
Now, let's set prophet up to begin modeling our data using our promotions dataframe as part of the forecast
Note
Step6: We've instantiated the model, now we need to build some future dates to forecast into.
Step7: To forecast this future data, we need to run it through Prophet's model.
Step8: The resulting forecast dataframe contains quite a bit of data, but we really only care about a few columns. First, let's look at the full dataframe
Step9: We really only want to look at yhat, yhat_lower and yhat_upper, so we can do that with
Step10: Plotting Prophet results
Prophet has a plotting mechanism called plot. This plot functionality draws the original data (black dots), the model (blue line) and the error of the forecast (shaded blue area).
Step11: Personally, I'm not a fan of this visualization but I'm not going to build my own...you can see how I do that here
Step12: Now that we have our model, let's take a look at how it compares to our actual values using a few different metrics - R-Squared and Mean Squared Error (MSE).
To do this, we need to build a combined dataframe with yhat from the forecasts and the original 'y' values from the data.
Step13: You can see from the above, that the last part of the dataframe has "NaN" for 'y'...that's fine because we are only concerend about checking the forecast values versus the actual values so we can drop these "NaN" values.
Step14: Now let's take a look at our R-Squared value
Step15: An r-squared value of 0.99 is amazing (and probably too good to be true, which tells me this data is most likely overfit).
Step16: That's a large MSE value...and confirms my suspicion that this data is overfit and won't likely hold up well into the future. Remember...for MSE, closer to zero is better.
Now...let's see what the Mean Absolute Error (MAE) looks like. | Python Code:
import pandas as pd
import numpy as np
from fbprophet import Prophet
import matplotlib.pyplot as plt
from sklearn.metrics import mean_squared_error, r2_score, mean_absolute_error
%matplotlib inline
plt.rcParams['figure.figsize']=(20,10)
plt.style.use('ggplot')
Explanation: This notebook covers using metrics to analyze the 'accuracy' of prophet models. In this notebook, we will extend the previous example (http://pythondata.com/forecasting-time-series-data-prophet-part-3/).
Import necessary libraries
End of explanation
sales_df = pd.read_csv('../examples/retail_sales.csv', index_col='date', parse_dates=True)
sales_df.head()
Explanation: Read in the data
Read the data in from the retail sales CSV file in the examples folder then set the index to the 'date' column. We are also parsing dates in the data file.
End of explanation
df = sales_df.reset_index()
df.head()
Explanation: Prepare for Prophet
As explained in previous prophet posts, for prophet to work, we need to change the names of these columns to 'ds' and 'y'.
End of explanation
df=df.rename(columns={'date':'ds', 'sales':'y'})
df.head()
Explanation: Let's rename the columns as required by fbprophet. Additionally, fbprophet doesn't like the index to be a datetime...it wants to see 'ds' as a non-index column, so we won't set an index differently than the integer index.
End of explanation
df.set_index('ds').y.plot()
Explanation: Now's a good time to take a look at your data. Plot the data using pandas' plot function
End of explanation
model = Prophet(weekly_seasonality=True)
model.fit(df);
Explanation: Running Prophet
Now, let's set prophet up to begin modeling our data using our promotions dataframe as part of the forecast
Note: Since we are using monthly data, you'll see a message from Prophet saying Disabling weekly seasonality. Run prophet with weekly_seasonality=True to override this. This is OK since we are working with monthly data, but you can override it (i.e. keep weekly seasonality enabled) by passing weekly_seasonality=True in the instantiation of Prophet, as done above.
End of explanation
future = model.make_future_dataframe(periods=24, freq = 'm')
future.tail()
Explanation: We've instantiated the model, now we need to build some future dates to forecast into.
End of explanation
forecast = model.predict(future)
Explanation: To forecast this future data, we need to run it through Prophet's model.
End of explanation
forecast.tail()
Explanation: The resulting forecast dataframe contains quite a bit of data, but we really only care about a few columns. First, let's look at the full dataframe:
End of explanation
forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
Explanation: We really only want to look at yhat, yhat_lower and yhat_upper, so we can do that with:
End of explanation
model.plot(forecast);
Explanation: Plotting Prophet results
Prophet has a plotting mechanism called plot. This plot functionality draws the original data (black dots), the model (blue line) and the error of the forecast (shaded blue area).
End of explanation
model.plot_components(forecast);
Explanation: Personally, I'm not a fan of this visualization but I'm not going to build my own...you can see how I do that here:
https://github.com/urgedata/pythondata/blob/master/fbprophet/fbprophet_part_one.ipynb.
Additionally, Prophet lets us take a look at the components of our model, including the holidays. This component plot is an important plot as it lets you see the components of your model including the trend and seasonality (identified in the yearly pane).
End of explanation
metric_df = forecast.set_index('ds')[['yhat']].join(df.set_index('ds').y).reset_index()
metric_df.tail()
Explanation: Now that we have our model, let's take a look at how it compares to our actual values using a few different metrics - R-Squared and Mean Squared Error (MSE).
To do this, we need to build a combined dataframe with yhat from the forecasts and the original 'y' values from the data.
End of explanation
metric_df.dropna(inplace=True)
metric_df.tail()
Explanation: You can see from the above, that the last part of the dataframe has "NaN" for 'y'...that's fine because we are only concerned about checking the forecast values versus the actual values so we can drop these "NaN" values.
End of explanation
r2_score(metric_df.y, metric_df.yhat)
Explanation: Now let's take a look at our R-Squared value
End of explanation
mean_squared_error(metric_df.y, metric_df.yhat)
Explanation: An r-squared value of 0.99 is amazing (and probably too good to be true, which tells me this data is most likely overfit).
End of explanation
mean_absolute_error(metric_df.y, metric_df.yhat)
Explanation: That's a large MSE value...and confirms my suspicion that this data is overfit and won't likely hold up well into the future. Remember...for MSE, closer to zero is better.
Now...let's see what the Mean Absolute Error (MAE) looks like.
End of explanation |
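Because the MSE is in squared units it is hard to interpret on its own; taking its square root (RMSE) puts the error back on the scale of the sales figures. A small sketch using the dataframe built above:
```python
rmse = np.sqrt(mean_squared_error(metric_df.y, metric_df.yhat))
print(rmse)
```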
15,753 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<div class="jumbotron text-left"><b>
This tutorial describes how to perform a mixed optimization using the SMT toolbox. The idea is to use a Bayesian Optimization (EGO method) to solve an unconstrained optimization problem with mixed variables.
<div>
October 2020
Paul Saves and Nathalie BARTOLI (ONERA/DTIS/M2CI)
<p class="alert alert-success" style="padding
Step1: Definition of the plot function
Step2: Local minimum trap
Step3: On this 1D test case, 4 iterations are required to find the global minimum, evaluated at iteration 5.
## 1D function with noisy values
The 1D function to optimize is described by
Step4: On this noisy case, it took 7 iterations to understand the shape of the curve but then, it took time to explore the "random" noise around the minimum.
2D mixed branin function
The 2D function to optimize is described by
Step5: On the left, we have the real model in green.
In the middle we have the mean surrogate $+3\times \mbox{ standard deviation}$ (red) and the mean surrogate $-3\times \mbox{ standard deviation}$ (blue) in order to represent an approximation of the $99\%$ confidence interval.
On the right, the contour plot of the mean surrogate is given, where yellow points are the values at the evaluated points (DOE).
4D mixed test case
The 4D function to optimize is described by
Step6: Manipulate the DOE | Python Code:
%matplotlib inline
from math import exp
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import colors
from mpl_toolkits.mplot3d import Axes3D
from scipy.stats import norm
from scipy.optimize import minimize
import scipy
import six
from smt.applications import EGO
from smt.surrogate_models import KRG
from smt.sampling_methods import FullFactorial
from smt.sampling_methods import LHS
from sklearn import gaussian_process
from sklearn.gaussian_process.kernels import Matern, WhiteKernel, ConstantKernel
import matplotlib.font_manager
from smt.applications.mixed_integer import MixedIntegerSurrogateModel
import warnings
warnings.filterwarnings("ignore")
from smt.applications.mixed_integer import (
FLOAT,
INT,
ENUM,
MixedIntegerSamplingMethod,
cast_to_mixed_integer, unfold_with_enum_mask
)
Explanation: <div class="jumbotron text-left"><b>
This tutorial describes how to perform a mixed optimization using the SMT toolbox. The idea is to use a Bayesian Optimization (EGO method) to solve an unconstrained optimization problem with mixed variables.
<div>
October 2020
Paul Saves and Nathalie BARTOLI (ONERA/DTIS/M2CI)
<p class="alert alert-success" style="padding:1em">
To use SMT models, please follow this link : https://github.com/SMTorg/SMT/blob/master/README.md. The documentation is available here: http://smt.readthedocs.io/en/latest/
</p>
The reference paper is available
here https://www.sciencedirect.com/science/article/pii/S0965997818309360?via%3Dihub
or as a preprint: http://mdolab.engin.umich.edu/content/python-surrogate-modeling-framework-derivatives
For mixed integer with continuous relaxation, the reference paper is available here https://www.sciencedirect.com/science/article/pii/S0925231219315619
### Mixed Integer EGO
For mixed integer EGO, the surrogate model is a continuous one: the discrete variables are relaxed to continuous values.
End of explanation
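To make the relaxation idea concrete, here is a purely illustrative sketch (not the SMT internals): the surrogate treats an integer variable as continuous, and candidate points are mapped back to admissible integer values before the true function is evaluated.
```python
import numpy as np

x_relaxed = np.array([2.7, 0.2, 24.9])     # continuous values seen by the surrogate
x_mixed = np.round(x_relaxed).astype(int)  # nearest admissible integers
print(x_mixed)  # [ 3  0 25]
```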
def PlotEgo(criterion, xdoe, bounds,npt,n_iter=12,xtypes=None) :
ego = EGO(n_iter=n_iter, criterion=criterion, xdoe=xdoe,xtypes=xtypes, xlimits=bounds,n_start=20,n_max_optim=35,enable_tunneling=False, surrogate=KRG(print_global=False))
x_opt, y_opt, ind_best, x_data, y_data = ego.optimize(fun=f)
print("Minimum in x={:.0f} with f(x)={:.10f}".format(int(x_opt), float(y_opt)))
x_plot = np.atleast_2d(np.linspace(bounds[0][0], bounds[0][1], 9*(npt-1)+1)).T
fig = plt.figure(figsize=[15, 15])
for i in range(n_iter):
k = n_doe + i
x_data_k = x_data[0:k]
y_data_k = y_data[0:k]
#if check list, not already evaluated
y_data[k]=f(x_data[k][:, np.newaxis])
ego.gpr.set_training_values(x_data_k, y_data_k)
ego.gpr.train()
y_gp_plot = ego.gpr.predict_values(x_plot)
y_gp_plot_var = ego.gpr.predict_variances(x_plot)
y_ei_plot = ego.EI(x_plot,y_data_k)
ax = fig.add_subplot((n_iter + 1) // 2, 2, i + 1)
ax1 = ax.twinx()
ei, = ax1.plot(x_plot, y_ei_plot, color="red")
true_fun = ax.scatter(Xsol, Ysol,color='k',marker='d')
data, = ax.plot(
x_data_k, y_data_k, linestyle="", marker="o", color="orange"
)
if i < n_iter - 1:
opt, = ax.plot(
x_data[k], y_data[k], linestyle="", marker="*", color="r"
)
print(x_data[k], y_data[k])
gp, = ax.plot(x_plot, y_gp_plot, linestyle="--", color="g")
sig_plus = y_gp_plot + 3 * np.sqrt(y_gp_plot_var)
sig_moins = y_gp_plot - 3 * np.sqrt(y_gp_plot_var)
un_gp = ax.fill_between(
x_plot.T[0], sig_plus.T[0], sig_moins.T[0], alpha=0.3, color="g"
)
lines = [true_fun, data, gp, un_gp, opt, ei]
fig.suptitle("EGO optimization of a set of points")
fig.subplots_adjust(hspace=0.4, wspace=0.4, top=0.8)
ax.set_title("iteration {}".format(i + 1))
fig.legend(
lines,
[
"set of points",
"Given data points",
"Kriging prediction",
"Kriging 99% confidence interval",
"Next point to evaluate",
"Expected improvment function",
],
)
plt.show()
Explanation: Definition of the plot function
End of explanation
#definition of the 1D function
def f(X) :
x= X[:, 0]
if (np.abs(np.linalg.norm(np.floor(x))-np.linalg.norm(x))< 0.000001):
y = (x - 3.5) * np.sin((x - 3.5) / (np.pi))
else :
print("error")
return y
#to plot the function
bounds = np.array([[0, 25]])
npt=26
Xsol = np.linspace(bounds[0][0],bounds[0][1], npt)
Xs= Xsol[:, np.newaxis]
Ysol = f(Xs)
print("Min of the DOE: ",np.min(Ysol))
plt.scatter(Xs,Ysol,marker='d',color='k')
plt.show()
#to run the optimization process
n_iter = 8
xdoe = np.atleast_2d([0,10]).T
n_doe = xdoe.size
xtypes=[INT]
criterion = "EI" #'EI' or 'SBO' or 'UCB'
PlotEgo(criterion,xdoe,bounds,npt,n_iter,xtypes=xtypes)
Explanation: Local minimum trap: 1D function
The 1D function to optimize is described by:
- 1 discrete variable $\in [0, 25]$
End of explanation
def f(X) :
x= X[:, 0]
y = -np.square(x-25)/220+0.25*(np.sin((x - 3.5) * np.sin((x - 3.5) / (np.pi)))+np.cos(x**2))
return -y
#to plot the function
xlimits = np.array([[0, 60]])
npt=61
Xsol = np.linspace(xlimits[0][0],xlimits[0][1], npt)
Xs= Xsol[:, np.newaxis]
Ysol = f(Xs)
print("min of the DOE: ", np.min(Ysol))
plt.scatter(Xs,Ysol,marker='d',color='k')
plt.show()
#to run the optimization process
n_iter = 10
n_doe=2
sampling = MixedIntegerSamplingMethod(xtypes, xlimits, LHS, criterion="ese")
xdoe = sampling(2)
xtypes=[INT]
criterion = "EI" #'EI' or 'SBO' or 'UCB'
PlotEgo(criterion,xdoe,xlimits,npt,n_iter,xtypes)
Explanation: On this 1D test case, 4 iterations are required to find the global minimum, evaluated at iteration 5.
## 1D function with noisy values
The 1D function to optimize is described by:
- 1 discrete variable $\in [0, 60]$
End of explanation
#definition of the 2D function
#the first variable is a integer one and the second one is a continuous one
import math
def f(X) :
x1 = X[:,0]
x2 = X[:,1]
PI = math.pi #3.14159265358979323846
a = 1
b = 5.1/(4*np.power(PI,2))
c = 5/PI
r = 6
s = 10
t = 1/(8*PI)
y= a*(x2 - b*x1**2 + c*x1 -r)**2 + s*(1-t)*np.cos(x1) + s
return y
#to define and compute the doe
xtypes = [INT, FLOAT]
xlimits = np.array([[-5.0, 10.0],[0.0,15.0]])
n_doe=20
sampling = MixedIntegerSamplingMethod(xtypes, xlimits, LHS, criterion="ese")
xt = sampling(n_doe)
yt = f(xt)
#to build the mixed surrogate model
sm = MixedIntegerSurrogateModel(xtypes=xtypes, xlimits=xlimits, surrogate=KRG())
sm.set_training_values(xt, yt)
sm.train()
num = 100
x = np.linspace(-5.0,10., 100)
y = np.linspace(0,15., 100)
xv, yv = np.meshgrid(x, y)
x_plot= np.array([np.ravel(xv), np.ravel(yv)]).T
y_plot = f(np.floor(x_plot))
fig = plt.figure(figsize=[14, 7])
y_gp_plot = sm.predict_values(x_plot)
y_gp_plot_sd = np.sqrt(sm.predict_variances(x_plot))
l=y_gp_plot-3*y_gp_plot_sd
h=y_gp_plot+3*y_gp_plot_sd
ax = fig.add_subplot(1, 3, 1, projection='3d')
ax1 = fig.add_subplot(1, 3, 2, projection='3d')
ax2 = fig.add_subplot(1, 3,3)
ii=-100
ax.view_init(elev=15., azim=ii)
ax1.view_init(elev=15., azim=ii)
true_fun = ax.plot_surface(xv, yv, y_plot.reshape((100, 100)), label ='true_function',color='g')
data3 = ax2.scatter(xt.T[0],xt.T[1],s=60,marker="o",color="orange")
gp1 = ax1.plot_surface(xv, yv, l.reshape((100, 100)), color="b")
gp2 = ax1.plot_surface(xv, yv, h.reshape((100, 100)), color="r")
gp3 = ax2.contour(xv, yv, y_gp_plot.reshape((100, 100)), color="k", levels=[0,1,2,5,10,20,30,40,50,60])
fig.suptitle("Mixed Branin function surrogate")
ax.set_title("True model")
ax1.set_title("surrogate model, DOE of size {}".format(n_doe))
ax2.set_title("surrogate mean response")
Explanation: On this noisy case, it took 7 iterations to understand the shape of the curve but then, it took time to explore the "random" noise around the minimum.
2D mixed branin function
The 2D function to optimize is described by:
- 1 discrete variable $\in [-5, 10]$
- 1 continuous variable $\in [0., 15.]$
End of explanation
#to define the 4D function
def function_test_mixed_integer(X):
import numpy as np
# float
x1 = X[:, 0]
# enum 1
c1 = X[:, 1]
x2 = c1 == 0
x3 = c1 == 1
x4 = c1 == 2
# enum 2
c2 = X[:, 2]
x5 = c2 == 0
x6 = c2 == 1
# int
i = X[:, 3]
y = (
(x2 + 2 * x3 + 3 * x4) * x5 * x1
+ (x2 + 2 * x3 + 3 * x4) * x6 * 0.95 * x1
+ i
)
return y
#to run the optimization process
n_iter = 15
xtypes = [FLOAT, (ENUM, 3), (ENUM, 2), INT]
xlimits = np.array([[-5, 5], ["blue", "red", "green"], ["large", "small"], [0, 2]])
criterion = "EI" #'EI' or 'SBO' or 'UCB'
qEI = "KB"
sm = KRG(print_global=False)
n_doe = 2
sampling = MixedIntegerSamplingMethod(xtypes, xlimits, LHS, criterion="ese")
xdoe = sampling(n_doe)
ydoe = function_test_mixed_integer(xdoe)
print('Initial DOE: \n', 'xdoe = ',xdoe, '\n ydoe = ',ydoe)
ego = EGO(
n_iter=n_iter,
criterion=criterion,
xdoe=xdoe,
ydoe=ydoe,
xtypes=xtypes,
xlimits=xlimits,
surrogate=sm,
qEI=qEI,
)
x_opt,y_opt, _, _, y_data = ego.optimize(fun=function_test_mixed_integer)
#to plot the objective function during the optimization process
min_ref = -15
mini = np.zeros(n_iter)
for k in range(n_iter):
mini[k] = np.log(np.abs(np.min(y_data[0 : k + n_doe - 1]) - min_ref))
x_plot = np.linspace(1, n_iter + 0.5, n_iter)
u = max(np.floor(max(mini)) + 1, -100)
l = max(np.floor(min(mini)) - 0.2, -10)
fig = plt.figure()
axes = fig.add_axes([0.1, 0.1, 0.8, 0.8])
axes.plot(x_plot, mini, color="r")
axes.set_ylim([l, u])
plt.title("minimum convergence plot", loc="center")
plt.xlabel("number of iterations")
plt.ylabel("log of the difference w.r.t the best")
plt.show()
print(" 4D EGO Optimization: Minimum in x=",cast_to_mixed_integer(xtypes, xlimits, x_opt), "with y value =",y_opt)
Explanation: On the left, we have the real model in green.
In the middle we have the mean surrogate $+3\times \mbox{ standard deviation}$ (red) and the mean surrogate $-3\times \mbox{ standard deviation}$ (blue) in order to represent an approximation of the $99\%$ confidence interval.
On the right, the contour plot of the mean surrogate is given, where yellow points are the values at the evaluated points (DOE).
4D mixed test case
The 4D function to optimize is described by:
- 1 continuous variable $\in [-5, 5]$
- 1 categorical variable with 3 labels $["blue", "red", "green"]$
- 1 categorical variable with 2 labels $ ["large", "small"]$
- 1 discrete variable $\in [0, 2]$
End of explanation
#to give the initial doe in the initial space
print('Initial DOE in the initial space: ')
for i in range(n_doe):
print("Doe point i={} ={}".format((i), (cast_to_mixed_integer(xtypes, xlimits, xdoe[i]))),'\n')
#to give the initial doe in the relaxed space
print('Initial DOE in the unfold space (or relaxed space): ')
for i in range(n_doe):
print("Doe point i={} ={}".format((i), (unfold_with_enum_mask(xtypes, xdoe[i]))),'\n')
#to print the used DOE
print('Initial DOE in the fold space: ')
for i in range(n_doe):
print("Doe point i={} ={}".format((i), xdoe[i]),'\n')
Explanation: Manipulate the DOE
End of explanation |
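A quick check (sketch) of how the fold/unfold helpers change the dimensionality: with one float, a 3-level categorical, a 2-level categorical and one integer, the unfolded (one-hot) point should have 1 + 3 + 2 + 1 = 7 entries.
```python
x_fold = xdoe[0]                                  # 4 entries in the fold space
x_unfold = unfold_with_enum_mask(xtypes, x_fold)  # categorical levels one-hot encoded
print(len(x_fold), '->', np.size(x_unfold))
```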
15,754 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Work with Philadelphia crime rate data
The dataset has information about house prices in Philadelphia and, additionally, about the crime rates in various neighborhoods, so we can make some interesting observations from this dataset, as follows
Load data and do initial analysis
Step1: Fit the regression model using crime rate as the feature
Step2: Look at the fit of the (initial) model
Step3: We can see that there is an outlier in the data, where the crime rate is high, but still, the house price is higher, hence not following the trend. This point is the center of the city (Center City data point)
Remove the Center City value and redo the analysis
Center City is one observation with an extremely high crime rate and high house prices. This is an outlier in some sense, so we can remove it and refit the model
Step4: Notice the difference in the previous scatter plot and this one after removing the outlier (city center)
Step5: Look at the fit of the model with outlier removed
Step6: Compare coefficients for full data fit Vs. data with CenterCity removed
Step7: Remove high-value outlier neighborhoods and redo analysis
Step8: How much do the coefficients change? | Python Code:
import graphlab
crime_rate_data = graphlab.SFrame.read_csv('Philadelphia_Crime_Rate_noNA.csv')
crime_rate_data
graphlab.canvas.set_target('ipynb')
crime_rate_data.show(view='Scatter Plot', x = "CrimeRate", y = "HousePrice")
Explanation: Work with Philadelphia crime rate data
The dataset has information about house prices in Philadelphia and, additionally, about the crime rates in various neighborhoods, so we can make some interesting observations from this dataset, as follows
Load data and do initial analysis
End of explanation
crime_model = graphlab.linear_regression.create(crime_rate_data,
target = 'HousePrice',
features = ['CrimeRate'],
validation_set = None,
verbose = False)
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Fit the regression model using crime rate as the feature
End of explanation
plt.plot(crime_rate_data['CrimeRate'], crime_rate_data['HousePrice'],
'.', crime_rate_data['CrimeRate'],
crime_model.predict(crime_rate_data), '-')
Explanation: Look at the fit of the (initial) model
End of explanation
crime_rate_data_noCC = crime_rate_data[crime_rate_data['MilesPhila'] != 0.0]
crime_rate_data_noCC.show(view='Scatter Plot', x = "CrimeRate", y = "HousePrice")
Explanation: We can see that there is an outlier in the data, where the crime rate is high, but still, the house price is higher, hence not following the trend. This point is the center of the city (Center City data point)
Remove the Center City value and redo the analysis
Center City is one observation with an extremely high crime rate and high house prices. This is an outlier in some sense, so we can remove it and refit the model
End of explanation
crime_model_withNoCC = graphlab.linear_regression.create(crime_rate_data_noCC,
target = 'HousePrice',
features = ['CrimeRate'],
validation_set = None,
verbose = False)
Explanation: Notice the difference in the previous scatter plot and this one after removing the outlier (city center)
End of explanation
plt.plot(crime_rate_data_noCC['CrimeRate'], crime_rate_data_noCC['HousePrice'], '.',
crime_rate_data_noCC['CrimeRate'], crime_model_withNoCC.predict(crime_rate_data_noCC), '-')
Explanation: Look at the fit of the model with outlier removed
End of explanation
crime_model.get('coefficients')
crime_model_withNoCC.get('coefficients')
Explanation: Compare coefficients for full data fit Vs. data with CenterCity removed
End of explanation
crime_rate_data_noHighEnd = crime_rate_data_noCC[crime_rate_data_noCC['HousePrice'] < 350000]
crime_model_noHighEnd = graphlab.linear_regression.create(crime_rate_data_noHighEnd,
target = 'HousePrice',
features = ['CrimeRate'],
validation_set = None,
verbose = False)
Explanation: Remove high-value outlier neighborhoods and redo analysis
End of explanation
crime_model_withNoCC.get('coefficients')
crime_model_noHighEnd.get('coefficients')
Explanation: How much do the coefficients change?
End of explanation |
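One way to answer that question numerically (a sketch, assuming the coefficients table exposes 'name' and 'value' columns as in GraphLab Create) is to compare the CrimeRate rows of the two models side by side:
```python
coefs_noCC = crime_model_withNoCC.get('coefficients')
coefs_noHighEnd = crime_model_noHighEnd.get('coefficients')

# Slope on CrimeRate with only Center City removed vs. high-end homes also removed
print(coefs_noCC[coefs_noCC['name'] == 'CrimeRate'])
print(coefs_noHighEnd[coefs_noHighEnd['name'] == 'CrimeRate'])
```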
15,755 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
exportByFeat(img, fc, prop, folder, name, scale, dataType, **kwargs)
Step1: FeatureCollection
Step2: Image
Step3: Execute | Python Code:
import ee
ee.Initialize()
from geetools import batch
Explanation: exportByFeat(img, fc, prop, folder, name, scale, dataType, **kwargs):
Export an image clipped by features (Polygons). You can use the same arguments as the original function ee.batch.export.image.toDrive
Parameters
img: image to clip
fc: feature collection
prop: name of the property of the features to paste in the image
folder: same as ee.Export
name: name of the resulting image. If None uses image's ID
scale: same as ee.Export. Default to 1000
dataType: as downloaded images must have the same data type in all
bands, you have to set it here. Can be one of: "float", "double", "int",
"Uint8", "Int8" or a casting function like ee.Image.toFloat
kwargs: keyword arguments that will be passed to ee.batch.export.image.toDrive
Return a list of all tasks (for further processing/checking)
End of explanation
p1 = ee.Geometry.Point([-71,-42])
p2 = ee.Geometry.Point([-71,-43])
p3 = ee.Geometry.Point([-71,-44])
feat1 = ee.Feature(p1.buffer(1000), {'site': 1})
feat2 = ee.Feature(p2.buffer(1000), {'site': 2})
feat3 = ee.Feature(p3.buffer(1000), {'site': 3})
fc = ee.FeatureCollection([feat1, feat2, feat3])
Explanation: FeatureCollection
End of explanation
collection = ee.ImageCollection('COPERNICUS/S2').filterBounds(fc.geometry())
image = collection.mosaic()
Explanation: Image
End of explanation
task = batch.Export.image.toDriveByFeature(
image,
collection=fc,
folder='tools_exportbyfeat',
name='test {site}',
scale=10,
dataType='float',
verbose=True
)
Explanation: Execute
End of explanation |
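Since the call returns the export tasks, they can be polled afterwards — a sketch, assuming the returned objects behave like standard ee.batch.Task instances:
```python
for t in task:
    # status() returns a dict; 'state' is e.g. 'READY', 'RUNNING' or 'COMPLETED'
    print(t.status().get('state'))
```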
15,756 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Title
Step1: Load Iris Flower Dataset
Step2: Standardize Features
Step3: Conduct Meanshift Clustering
MeanShift has two important parameters we should be aware of. First, bandwidth sets the radius of the area (i.e. kernel) an observation uses to determine the direction to shift. In our analogy, bandwidth was how far a person could see through the fog. We can set this parameter manually; however, by default a reasonable bandwidth is estimated automatically (with a significant increase in computational cost). Second, sometimes in meanshift there are no other observations within an observation's kernel. That is, a person on our football field cannot see a single other person. By default, MeanShift assigns all these "orphan" observations to the kernel of the nearest observation. However, if we want to leave out these orphans, we can set cluster_all=False, wherein orphan observations are given the label -1.
# Load libraries
from sklearn import datasets
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import MeanShift
Explanation: Title: Meanshift Clustering
Slug: meanshift_clustering
Summary: How to conduct meanshift clustering in scikit-learn.
Date: 2017-09-22 12:00
Category: Machine Learning
Tags: Clustering
Authors: Chris Albon
<a alt="Meanshift Clustering" href="https://machinelearningflashcards.com">
<img src="meanshift_clustering/Meanshift_Clustering_By_Analogy_print.png" class="flashcard center-block">
</a>
Preliminaries
End of explanation
# Load data
iris = datasets.load_iris()
X = iris.data
Explanation: Load Iris Flower Dataset
End of explanation
# Standarize features
scaler = StandardScaler()
X_std = scaler.fit_transform(X)
Explanation: Standardize Features
End of explanation
# Create meanshift object
clt = MeanShift(n_jobs=-1)
# Train model
model = clt.fit(X_std)
Explanation: Conduct Meanshift Clustering
MeanShift has two important parameters we should be aware of. First, bandwidth sets the radius of the area (i.e. kernel) an observation uses to determine the direction to shift. In our analogy, bandwidth was how far a person could see through the fog. We can set this parameter manually, however by default a reasonable bandwidth is estimated automatically (with a significant increase in computational cost). Second, sometimes in meanshift there are no other observations within an observation's kernel. That is, a person on our football field cannot see a single other person. By default, MeanShift assigns all these "orphan" observations to the kernel of the nearest observation. However, if we want to leave out these orphans, we can set cluster_all=False, wherein orphan observations are given the label of -1.
End of explanation |
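A minimal sketch of the two options discussed above — a manually chosen bandwidth (the value 2.0 here is just an illustrative assumption) and leaving orphan observations out:
# Create meanshift object with a manual bandwidth and without assigning orphans
clt_manual = MeanShift(bandwidth=2.0, cluster_all=False, n_jobs=-1)
# Train model; orphan observations receive the label -1
model_manual = clt_manual.fit(X_std)
print(model_manual.labels_[:10])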
15,757 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Create geological model and drillhole from sections
This is to extract points from a geological model defined in sections (dxf files). We also generate drillhole data. The sections must be defined as dxf sections with the layers properly defined.
Step1: Here we extract data from the dxf sections. We basically extract points from the lines and we asign it to its corresponding object, defined by the dxf 'layer'.
Step2: Generate working region
For this to work we need to define a bounding box. We will generate surfaces that have the same size or extend outside the box, otherwise we may get non expected results when we work with open surfaces
Step3: Generate solids
To generate solids, assuming we have stratigraphic units with contact surfaces that may be modeled individually, we define surfaces by interpolating in a 2D grid and optionally snapping points with known coordinates in 3D.
The surfaces are then used to cut the region to obtain individual closed solids.
Step4: By now we have surfaces that can be used for modeling, but you will get the best performance and number of octions with closed surfaces
<img src = 'fig2.JPG' height="50%" width="50%">
To generate solids we define implicit surfaces from surfaces and we cut the region using implisit surfaces. Note that implisit surfaces cluould be used to select points (drillholes and block centroids) but will not allow you to easily calculate proportion of blocks inside a given domain.
Implisit surfaces require consistent normals to determine inside or outside and sign of distances. The function pygslib.vtktools.implicit_surface() will update/calculate the normals. You can use the function pygslib.vtktools.calculate_normals() to invert the inside or outside direction of solids.
The behaviour of implisit surfaces can be changed by manipulating the normals manually and setting update_normals==False in the function pygslib.vtktools.implicit_surface()
Step5: by now we have a geological model defined with solids
<img src='fig3.JPG' height="50%" width="50%" >
Generate drillhole data
Now we extract drillhole traces from dxf files to
Step6: <img src='fig4.JPG' height="50%" width="50%" >
Tag drillholes with domain code
There are two main ways here
Step7: Modeling block model
Step8: when blocks are too large cmpared to solid resolution this algorithm fails, and require refinement
<img src='fig6.JPG' height="50%" width="50%">
Step9: working with smaller blocks fix this problem
<img src='fig7.JPG' height="50%" width="50%">
Simulated Au grades in fine grid
Step10: Assigning grade to drillholes
Step11: <img src = 'fig8.JPG'> | Python Code:
# import modules
import pygslib
import ezdxf
import pandas as pd
import numpy as np
Explanation: Create geological model and drillhole from sections
This is to extract points from a geological model defined in sections (dxf files). We also generate drillhole data. The sections must be defined as dxf sections with the layers properly defined:
<img src='fig1.JPG' height = '50%' width = '50%'>
Note that we could do the geological interpretation in 2D with snapping if we add points to the interpretation lines at the same coordinates as the drillhole intersects.
End of explanation
# get sections in dxf format, the sufix is the N coordinate (for now only for EW sections)
N = [-10, 0,50,100,150,200, 210]
s = {'x':[], # noth coordinates of the sections
'y':[],
'z':[],
'layer':[],
'id':[]}
pl_id = -1
for y in N:
dwg = ezdxf.readfile('S_{}.dxf'.format(y))
msp= dwg.modelspace()
for e in msp.query('LWPOLYLINE'):
p = e.get_rstrip_points()
if e.dxfattribs()['layer']=='dhole':
pl_id=pl_id+1
aid = int(pl_id)
else:
aid = None
for j in p:
s['x'].append(j[0])
s['y'].append(y)
s['z'].append(j[1])
s['layer'].append(e.dxfattribs()['layer'])
s['id'].append(aid)
S=pd.DataFrame(s)
S['id']=S['id'].values.astype(int)
print (S['layer'].unique())
Explanation: Here we extract data from the dxf sections. We basically extract points from the lines and assign them to their corresponding object, defined by the dxf 'layer'.
End of explanation
# define working region
xorg = -10.
yorg = -10.
zorg = -10.
dx = 5.
dy = 5.
dz = 5.
nx = 40
ny = 44
nz = 36
Explanation: Generate working region
For this to work we need to define a bounding box. We will generate surfaces that have the same size as the box or extend outside it, otherwise we may get unexpected results when we work with open surfaces.
End of explanation
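As a quick sanity check (simple arithmetic on the parameters defined above), the upper corner of the working region is:
# upper corner of the working region
print('xmax =', xorg + nx*dx)   # 190.0
print('ymax =', yorg + ny*dy)   # 210.0
print('zmax =', zorg + nz*dz)   # 170.0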
# get points defining each surface
hw_p = S.loc[S['layer']=='hw',['x','y','z']]
fw_p = S.loc[S['layer']=='fw',['x','y','z']]
topo_p = S.loc[S['layer']=='topo',['x','y','z']]
# generate vtk open surfaces
topo,x_topo,y_topo,z_topo = pygslib.vtktools.rbfinterpolate(x=topo_p['x'].values.astype('float'),
y=topo_p['y'].values.astype('float'),
z=topo_p['z'].values.astype('float'),
xorg=xorg, yorg=yorg,dx=dx,dy=dy,nx=nx,ny=ny,
snap = False)
hw,x_hw,y_hw,z_hw = pygslib.vtktools.rbfinterpolate( x=hw_p['x'].values.astype('float'),
y=hw_p['y'].values.astype('float'),
z=hw_p['z'].values.astype('float'),
xorg=xorg, yorg=yorg,dx=dx,dy=dy,nx=nx,ny=ny,
snap = False)
fw,x_fw,y_fw,z_fw = pygslib.vtktools.rbfinterpolate( x=fw_p['x'].values.astype('float'),
y=fw_p['y'].values.astype('float'),
z=fw_p['z'].values.astype('float'),
xorg=xorg, yorg=yorg,dx=dx,dy=dy,nx=nx,ny=ny,
snap = False)
# save the open surfaces
pygslib.vtktools.SavePolydata(topo, 'topo')
pygslib.vtktools.SavePolydata(hw, 'hw')
pygslib.vtktools.SavePolydata(fw, 'fw')
Explanation: Generate solids
To generate solids, assuming we have stratigraphic units with contact surfaces that may be modeled individually, we define surfaces by interpolating in a 2D grid and optionally snapping points with known coordinates in 3D.
The surfaces are then used to cut the region to obtain individual closed solids.
End of explanation
# create implicit surfaces
impl_topo = pygslib.vtktools.implicit_surface(topo)
impl_hw = pygslib.vtktools.implicit_surface(hw)
impl_fw = pygslib.vtktools.implicit_surface(fw)
# this is a grid (a box, we cut to generate geology). We can generate a grid ot tetras with surface point included to emulate snapping
region = pygslib.vtktools.define_region_grid(xorg, yorg, zorg, dx/2, dy/2, dz/4, nx*2, ny*2, nz*4) #, snapping_points = [topo,hw,fw])
pygslib.vtktools.SaveUnstructuredGrid(region, "region")
# evaluate surfaces
#below topo
region,topo_d = pygslib.vtktools.evaluate_region(region, implicit_func = impl_topo, func_name='topo_d', invert=False, capt = -10000)
#above hanging wall
region, hw_u = pygslib.vtktools.evaluate_region(region, implicit_func = impl_hw, func_name='hw_u', invert=True, capt = -10000)
#below hanging wall
region, hw_d = pygslib.vtktools.evaluate_region(region, implicit_func = impl_hw, func_name='hw_d', invert=False, capt = -10000)
#above footwall
region, fw_u = pygslib.vtktools.evaluate_region(region, implicit_func = impl_fw, func_name='fw_u', invert=True, capt = -10000)
#below footwall
region, fw_d = pygslib.vtktools.evaluate_region(region, implicit_func = impl_fw, func_name='fw_d', invert=False, capt = -10000)
# create intersection between hanging wall and foot wall
dom1= np.minimum(hw_d, fw_u)
region = pygslib.vtktools.set_region_field(region, dom1, 'dom1')
# extract surface
dom1_poly = pygslib.vtktools.extract_surface(region,'dom1')
# Save surface
pygslib.vtktools.SavePolydata(dom1_poly, 'dom1')
# create intersection between topo and hanging wall
dom_topo= np.minimum(topo_d, hw_u)
region = pygslib.vtktools.set_region_field(region, dom_topo, 'dom_topo')
# extract surface
dom_topo_poly = pygslib.vtktools.extract_surface(region,'dom_topo')
# Save surface
pygslib.vtktools.SavePolydata(dom_topo_poly, 'dom_topo')
# not boolean required below fw
# extract surface
dom_fw_poly = pygslib.vtktools.extract_surface(region,'fw_d')
# Save surface
pygslib.vtktools.SavePolydata(dom_fw_poly, 'dom_fw')
Explanation: By now we have surfaces that can be used for modeling, but you will get the best performance and the most options with closed surfaces.
<img src = 'fig2.JPG' height="50%" width="50%">
To generate solids we define implicit surfaces from the surfaces and we cut the region using these implicit surfaces. Note that implicit surfaces could be used to select points (drillholes and block centroids) but will not allow you to easily calculate the proportion of blocks inside a given domain.
Implicit surfaces require consistent normals to determine inside or outside and the sign of distances. The function pygslib.vtktools.implicit_surface() will update/calculate the normals. You can use the function pygslib.vtktools.calculate_normals() to invert the inside or outside direction of solids.
The behaviour of implicit surfaces can be changed by manipulating the normals manually and setting update_normals=False in the function pygslib.vtktools.implicit_surface().
End of explanation
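A quick way to check the sign convention of an implicit surface is to evaluate it at a single point; the call below mirrors the usage later in this notebook, and the point coordinates are arbitrary assumptions for illustration:
# negative -> inside/above, positive -> outside/below, zero -> on the surface
d = pygslib.vtktools.evaluate_implicit_points(implicit_mesh=impl_topo,
                                              x=np.array([50.]), y=np.array([50.]), z=np.array([0.]),
                                              cap_dist=1, normalize=False)
print(d)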
# generate table collar from dxf traces
tcollar = {}
tcollar['BHID'] = S.loc[S['layer']=='dhole','id'].unique()
tcollar['XCOLLAR'] = S.loc[S['layer']=='dhole',['x','id']].groupby('id').first().values.ravel().astype(float)
tcollar['YCOLLAR'] = S.loc[S['layer']=='dhole',['y','id']].groupby('id').first().values.ravel().astype(float)
tcollar['ZCOLLAR'] = S.loc[S['layer']=='dhole',['z','id']].groupby('id').first().values.ravel().astype(float)
collar = pd.DataFrame(tcollar)
# generate table survey from dxf traces
tsurvey = {'BHID':[], 'AT':[], 'DIP':[], 'AZ':[]}
for i in collar['BHID']:
h = S.loc[(S['layer']=='dhole') & (S['id']==i),['x','y','z']].values
x= h[1][0]-h[0][0]
y= h[1][1]-h[0][1]
z= h[1][2]-h[0][2]
d0=np.sqrt(x**2+y**2+z**2)
az,dip = pygslib.drillhole.cart2ang(x/d0,y/d0,z/d0)
# add first interval
tsurvey['BHID'].append(i)
tsurvey['AT'].append(0)
tsurvey['AZ'].append(az)
tsurvey['DIP'].append(dip)
for j in range(1,h.shape[0]):
x= h[j][0]-h[j-1][0]
y= h[j][1]-h[j-1][1]
z= h[j][2]-h[j-1][2]
d=np.sqrt(x**2+y**2+z**2)
az,dip = pygslib.drillhole.cart2ang(x/d,y/d,z/d)
tsurvey['BHID'].append(i)
tsurvey['AT'].append(d+d0)
tsurvey['AZ'].append(az)
tsurvey['DIP'].append(dip)
d0 = d+d0
survey = pd.DataFrame(tsurvey)
# generate 'LENGTH' field of collar from table of surveys
collar['LENGTH'] = 0
for i in collar['BHID']:
collar.loc[collar['BHID']==i, 'LENGTH'] = survey.groupby('BHID')['AT'].max()[i]
# generate a dum assay table
assay = pd.DataFrame({'BHID':collar['BHID'],'TO':collar['LENGTH']})
assay['FROM'] = 0
# generate drillhole object
collar['BHID'] = collar['BHID'].values.astype('str')
survey['BHID'] = survey['BHID'].values.astype('str')
assay['BHID'] = assay['BHID'].values.astype('str')
assay['DUM'] = 0.
dhole = pygslib.drillhole.Drillhole(collar,survey)
# add assay table
dhole.addtable(assay, 'assay')
# validate results
dhole.validate()
dhole.validate_table('assay')
# composite. This is normaly completed after tagging but we need small intervals here to emulate real assay table
dhole.downh_composite(table_name='assay',variable_name='DUM', new_table_name='cmp',cint = 1)
# desurvey and export
dhole.desurvey(table_name='cmp', endpoints=True, warns=True)
dhole.intervals2vtk('cmp','cmp')
Explanation: by now we have a geological model defined with solids
<img src='fig3.JPG' height="50%" width="50%" >
Generate drillhole data
Now we extract drillhole traces from dxf files to:
- generate drillhole tables
- from tables generate drillhole object
- then we split drillhole data ('composite') and label drillhole intervals with Domain 1 (between hanging and footwall surfaces)
End of explanation
# tag using surfaces
dhole.table['cmp']['dist_hw'] = pygslib.vtktools.evaluate_implicit_points(implicit_mesh=impl_hw,
x=dhole.table['cmp']['xm'].values,
y=dhole.table['cmp']['ym'].values,
z=dhole.table['cmp']['zm'].values,
cap_dist=1,
normalize=False)
dhole.table['cmp']['dist_fw'] = pygslib.vtktools.evaluate_implicit_points(implicit_mesh=impl_fw,
x=dhole.table['cmp']['xm'].values,
y=dhole.table['cmp']['ym'].values,
z=dhole.table['cmp']['zm'].values,
cap_dist=1,
normalize=False)
dhole.table['cmp']['D1_surf'] = np.round((dhole.table['cmp']['dist_fw']+dhole.table['cmp']['dist_hw'])/2)
dhole.intervals2vtk('cmp','cmp')
# tag using solid dom1
inside1 = pygslib.vtktools.pointinsolid(dom1_poly,
x=dhole.table['cmp']['xm'].values,
y=dhole.table['cmp']['ym'].values,
z=dhole.table['cmp']['zm'].values)
dhole.table['cmp']['D1_solid'] = inside1.astype(int)
dhole.intervals2vtk('cmp','cmp')
Explanation: <img src='fig4.JPG' height="50%" width="50%" >
Tag drillholes with domain code
There are two main ways here:
- tagging samples using implicit functions with surfaces
- tagging samples using implicit functions with solids
The easiest is using solids.
In both cases we evaluate the distance between a point and a surface using the function pygslib.vtktools.evaluate_implicit_points(). The output of this function is a signed distance, where sign indicates:
- negative: the point is inside or above
- positive: the point is outside or below
- zero: the point is in the surface
The samples between two surfaces can be selected by evaluating the points with the two implicit surfaces and doing a boolean operation on the signed values.
End of explanation
mod = pygslib.blockmodel.Blockmodel(xorg=xorg, yorg=yorg, zorg=zorg, dx=dx*2,dy=dy*2, dz=dz*2, nx=nx/2, ny=ny/2, nz=nz/2)
mod.fillwireframe(surface=dom1_poly)
mod.blocks2vtkImageData(path='d1_mod')
Explanation: Modeling block model
End of explanation
mod = pygslib.blockmodel.Blockmodel(xorg=xorg, yorg=yorg, zorg=zorg, dx=dx,dy=dy, dz=dz/2, nx=nx, ny=ny, nz=nz*2)
mod.fillwireframe(surface=dom1_poly)
mod.blocks2vtkImageData(path='d1_mod')
Explanation: When blocks are too large compared to the solid resolution this algorithm fails and requires refinement.
<img src='fig6.JPG' height="50%" width="50%">
End of explanation
# block model definition of xorg is for the corner of the block. To aline this with GSLIB grids use xorg-dx/2
sim_grid = pygslib.blockmodel.Blockmodel(xorg=xorg, yorg=yorg, zorg=zorg,
dx=dx/5,dy=dy/5, dz=dz/5, nx=nx*5, ny=ny*5, nz=nz*5)
sim_grid.fillwireframe(surface=dom1_poly)
print (sim_grid.xorg, sim_grid.yorg, sim_grid.zorg)
print (sim_grid.nx, sim_grid.ny, sim_grid.nz)
print (sim_grid.dx, sim_grid.dy, sim_grid.dz)
import subprocess
subprocess.call('echo sgsim.par | c:\gslib\sgsim.exe',shell=True)
sim1=pygslib.gslib.read_gslib_file('sgsim.out')
sim_grid.bmtable['sim1'] = np.exp(sim1['value'].values)*sim_grid.bmtable['__in'].values # to emulate Au grade with lognormal distribution
sim_grid.blocks2vtkImageData(path='sim1')
Explanation: Working with smaller blocks fixes this problem.
<img src='fig7.JPG' height="50%" width="50%">
Simulated Au grades in fine grid
End of explanation
# we migrate block data to points within blocks
dhole.table['cmp']['Au']=sim_grid.block2point(dhole.table['cmp']['xm'].values,
dhole.table['cmp']['ym'].values,
dhole.table['cmp']['zm'].values,
'sim1')
dhole.intervals2vtk('cmp','cmp')
Explanation: Assigning grade to drillholes
End of explanation
dhole.collar.to_csv('collar.csv', index = False)
dhole.survey.to_csv('survey.csv', index = False)
dhole.table['cmp'].to_csv('assay.csv', index = False)
Explanation: <img src = 'fig8.JPG'>
End of explanation |
15,758 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Miniconda Installation
Install Miniconda from their Website.
All required python packages will be downloaded using the conda package and environment management system.
When Miniconda is installed on your system, open up your shell with the command <kbd>WINDOWS</kbd> + <kbd>R</kbd> -> cmd on windows or search for the
terminal on Linux systems.
Then type this to make sure you got the latest version.
Step1: A new environment capsule with preset libraries installed for one or more of your projects can be created. fauenv is the name of the new python 3.x environment in this example.
Step2: Check which environments are currently installed. root is the name of the default one.
Step3: Then, activate the desired environment.
Step4: Install Packages for Machine Learning / Pattern Recognition
With this command, all required packages as well as their dependencies will be installed in the latest version possible. Version conflicts between them are avoided automatically.
Note, those packages are only installed for the chosen environment.
If you want to install them on another environment in the same version,conda automatically creates the hardlinks to the library's directory, avoiding to have numerous copies of the same library on the filesystem.
Install
Use conda to install new packages.
Step5: Install packages not in the conda repository via pip.
Step6: Clean up
In order to free about 330MB of disc space after installation, delete the cached tar.bz archive files. Packages no longer needed as dependencies can be deleted as well.
Step7: Update
To update all packages of a specific environment call
Step8: Now, all is set up to get started.
Using Jupyter Notebook
The Jupyter Notebook is a web-based interactive computational environment to combine code execution, text, mathematics, plots and rich media into a single document.
Jupyter stands for Julia, Python and R, which are the main proramming languages supported.
There are several tutorials for this framework.
To start Jupyter Notebook call
Step9: The console will show an URL like http
Step10: Jupyter Notebook magic functions
Unlike traditional python, the Jupyter notebook offers some extended timing, profiling, aliasing and other functionalities via it's magic functions.
An example
Step11: In standard python this would be achieved by
Step12: Some useful magic functions are
- %time or %timeit for benchmarks
- %prun for profiling
- %magic returns a list of all magic functions
- %load or %loadpy to import a .py file into the notebook
- %quickref for a reference sheet
Step13: Further readings
There are some great websites where ready-made Jupyter Notebook files can be found and imported
Step14: Upgrading to a new version of python | Python Code:
conda update --all
Explanation: Miniconda Installation
Install Miniconda from their Website.
All required python packages will be downloaded using the conda package and environment management system.
When Miniconda is installed on your system, open up your shell with the command <kbd>WINDOWS</kbd> + <kbd>R</kbd> -> cmd on windows or search for the
terminal on Linux systems.
Then type this to make sure you got the latest version.
End of explanation
conda create -n fauenv python=3
Explanation: A new environment capsule with preset libraries installed for one or more of your projects can be created. fauenv is the name of the new python 3.x environment in this example.
End of explanation
conda info -e
Explanation: Check which environments are currently installed. root is the name of the default one.
End of explanation
activate fauenv
Explanation: Then, activate the desired environment.
End of explanation
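On Linux or macOS shells (with the conda versions current at the time of writing), the activation command is prefixed with source:
source activate fauenv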
conda install -n fauenv numpy scipy matplotlib scikit-learn scikit-image ipython ipython-notebook
conda install -n fauenv nose pip anaconda-client pillow ujson flask jinja2 natsort joblib numba pyside
Explanation: Install Packages for Machine Learning / Pattern Recognition
With this command, all required packages as well as their dependencies will be installed in the latest version possible. Version conflicts between them are avoided automatically.
Note, those packages are only installed for the chosen environment.
If you want to install them in another environment in the same version, conda automatically creates hardlinks to the library's directory, avoiding numerous copies of the same library on the filesystem.
Install
Use conda to install new packages.
End of explanation
activate fauenv # if not in fauenv environment already
pip install visvis tinydb nibabel pydicom medpy simpleITK pycuda numpy-stl websockets
Explanation: Install packages not in the conda repository via pip.
End of explanation
conda clean -tps # delete downloaded cached tarballs (t), orphaned packages (p), and cached sources (s)
Explanation: Clean up
In order to free about 330MB of disc space after installation, delete the cached tar.bz archive files. Packages no longer needed as dependencies can be deleted as well.
End of explanation
conda update --all -n fauenv
activate fauenv # if not in fauenv environment already
pip freeze --local | grep -v '^\-e' | cut -d = -f 1 | xargs pip install -U
Explanation: Update
To update all packages of a specific environment call
End of explanation
jupyter notebook # --port 8888
Explanation: Now, all is set up to get started.
Using Jupyter Notebook
The Jupyter Notebook is a web-based interactive computational environment to combine code execution, text, mathematics, plots and rich media into a single document.
Jupyter stands for Julia, Python and R, which are the main proramming languages supported.
There are several tutorials for this framework.
To start Jupyter Notebook call
End of explanation
%pylab inline
Explanation: The console will show an URL like http://localhost:8888/tree and open the operating system's default browser with this location.
Use it to navigate, open, create, delete, and run .ipynb files.
Note: For inline plots use this magic function at the beginning of your code. This also imports numpy as np, imports matplotlib.pyplot as plt, and others.
End of explanation
%time sum(range(int(1e7)))
%timeit sum(range(10000000))
Explanation: Jupyter Notebook magic functions
Unlike traditional Python, the Jupyter notebook offers extended timing, profiling, aliasing and other functionalities via its magic functions.
An example:
End of explanation
python -mtimeit -s"import test" "test.mytestFunction(42)"
Explanation: In standard python this would be achieved by
End of explanation
%quickref
Explanation: Some useful magic functions are
- %time or %timeit for benchmarks
- %prun for profiling
- %magic returns a list of all magic functions
- %load or %loadpy to import a .py file into the notebook
- %quickref for a reference sheet
End of explanation
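For example, profiling works the same way as the timing magics shown earlier (the expression below is just an arbitrary illustration):
%prun sum(range(int(1e6)))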
conda update --all
conda update --all -n fauenv
activate fauenv
conda clean -tps
pip freeze --local | grep -v '^\-e' | cut -d = -f 1 | xargs pip install -U
jupyter notebook
Explanation: Further readings
There are some great websites where ready-made Jupyter Notebook files can be found and imported:
1. Nbviewer: an online .ipynb rendering engine for notebooks on GitHub
2. Jupyter Notebook Tutorial
3. Good Scikit-learn Tutorial
4. Google: segmentation type:ipynb github
Day two
When you come back to working with scikit-learn and jupyter notebook, use those commands to get the fresh releases and start the server
End of explanation
activate fauenv
conda install python=3.6
Explanation: Upgrading to a new version of python
End of explanation |
15,759 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
How the Climate in Melbourne has Changed Over a Century
A strong and longer than usual heat wave has hit the southern states of Australia, so we decided to look into the weather data for Melbourne, its second most populous city, to see if there are noteworthy signs of climate change.
In this notebook we will analyze over 100 years of maximum temperature and precipitation data in Melbourne, Australia using Bureau Of Meteorology (BOM) Climate data.
BOM Climate dataset contains variables like maximum and minimum temperature, rain and vapour pressure.
So, we will do the following
Step1: <font color='red'>Please put your datahub API key into a file called APIKEY and place it to the notebook folder or assign your API key directly to the variable API_key!</font>
Step2: At first, we need to define the dataset name and variables we want to use.
Step3: For starters, using Basemap we created a map of Australia and marked the location of Melbourne city.
Step4: Download the data with package API
Create package objects
Send commands for the package creation
Download the package files
Note that this package has over a 100 years of data and downloading it might take some time
Step5: Work with downloaded files
We start by opening the files with xarray, then we work with maximum temperature data and after that we will look into rainfall data.
Step6: First, we evaluate maximum temperature during the past century by finding out overall average maximum temperature in Melbourne. We are also computing annual maximum temperature data.
Step7: Now it is time to plot mean annual maximum temperature in Melbourne. We also marked overall average for 1911-2019 20.18 $^o$C with a red dotted line. The green line marks a trend.
We can notice that the temperature is going up and down quite regularly, which can be explained by El Niño–Southern Oscillation (ENSO) as ENSO plays a big imapct to Australian climate. Strongest El Nino years have been 1982-83, 1997-98 and 2015-16. We can see that during all those years the temperature is above average (red line).
The other significant thing we can see from looking at the plot is that the temperatures have been rising over the years. The small anomalies are normal, while after 2004 the temperature hasn't dropped below average at all. That's an unusual pattern. For example, in 2007 and 2014 the temperature was 1.5 degrees above overall average, while in 2017 temperature have been almost 6 $^o$C degree over the average.
Step8: According to Australian Government Department of Environment and Energy the average annual number of days above 35 degrees Celsius is likely to increase from 9 days currently experienced in Melbourne to up to 26 days by 2070 without global action to reduce emissions.
We also found out that during a 100 years, on average of 9 days a year the temperature exceeds 35 $^o$C.
Step9: We saw that the maximum temperature tend to have similar pattern than ENSO.
Just as important as the change with the temperature is what has happened to the rainfall in Melbourne. To investigate that, we created an annual precipitation plot, where a red line marks the average (617.5 mm). From the plot we can see that the average amount of rainfall has been constantly decreasing over the past century.
Less rainfall increases the risk of bushfires, impacts the agriculture in Melbourne and threatens its water reservoirs.
Step10: Alongside the decreased rainfall, the number of completely dry days in a year has changed as well. From the plot above, you can see that the number of days when it was completely dry in Melbourne grew in the 2000s. However, during the last years it has returned to normal. Again, marked with a green line is the trend that means that Melbourne is getting more and more completely dry days. | Python Code:
%matplotlib inline
import numpy as np
from dh_py_access import package_api
import dh_py_access.lib.datahub as datahub
import xarray as xr
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
from po_data_process import make_comparison_plot, make_plot, make_anomalies_plot
import warnings
warnings.filterwarnings('ignore')
Explanation: How the Climate in Melbourne has Changed Over a Century
A strong and longer than usual heat wave has hit the southern states of Australia, so we decided to look into the weather data for Melbourne, its second most populous city, to see if there are noteworthy signs of climate change.
In this notebook we will analyze over 100 years of maximum temperature and precipitation data in Melbourne, Australia using Bureau Of Meteorology (BOM) Climate data.
BOM Climate dataset contains variables like maximum and minimum temperature, rain and vapour pressure.
So, we will do the following:
1) use the Planet OS package API to fetch data;
2) see mean annual maximum temperature in Melbourne
3) plot number of days in year when max temperature exceeds 35 $^o$C in Melbourne;
4) find out annual precipitation by year;
5) see if there are more completely dry days than before.
End of explanation
server = 'api.planetos.com'
API_key = open('APIKEY').readlines()[0].strip() #'<YOUR API KEY HERE>'
version = 'v1'
Explanation: <font color='red'>Please put your datahub API key into a file called APIKEY and place it to the notebook folder or assign your API key directly to the variable API_key!</font>
End of explanation
dh=datahub.datahub(server,version,API_key)
dataset='bom_clim_australia'
variable_names = 'tmax,tmin,precip'
time_start = '1911-01-01T00:00:00'
time_end = '2019-03-01T00:00:00'
area_name = 'Melbourne'
latitude = -37.81; longitude = 144.98
Explanation: At first, we need to define the dataset name and variables we want to use.
End of explanation
plt.figure(figsize=(10,8))
m = Basemap(projection='merc',llcrnrlat=-39.9,urcrnrlat=-10.,\
llcrnrlon=105.49,urcrnrlon=155.8,lat_ts=20,resolution='l')
x,y = m(longitude,latitude)
m.drawcoastlines()
m.drawcountries()
m.drawstates()
m.bluemarble()
m.scatter(x,y,50,marker='o',color='#00FF00',zorder=4)
plt.show()
Explanation: For starters, using Basemap we created a map of Australia and marked the location of Melbourne city.
End of explanation
package = package_api.package_api(dh,dataset,variable_names,longitude,longitude,latitude,latitude,time_start,time_end,area_name=area_name)
package.make_package()
package.download_package()
Explanation: Download the data with package API
Create package objects
Send commands for the package creation
Download the package files
Note that this package has over 100 years of data and downloading it might take some time.
End of explanation
dd1 = xr.open_dataset(package.local_file_name)
Explanation: Work with downloaded files
We start by opening the files with xarray, then we work with maximum temperature data and after that we will look into rainfall data.
End of explanation
yearly_tmax = dd1.tmax.resample(time="1AS").mean('time')[:,0,0]
tmax_mean = yearly_tmax.mean(axis=0)
print ('Overall mean for tmax is ' + str("%.2f" % tmax_mean.values))
Explanation: First, we evaluate maximum temperature during the past century by finding out overall average maximum temperature in Melbourne. We are also computing annual maximum temperature data.
End of explanation
make_plot(yearly_tmax.loc[yearly_tmax['time.year'] < 2019],dataset,'Mean annual maximum temperature in Melbourne',ylabel = 'Temp [' + dd1.tmax.units + ']',compare_line = tmax_mean.values,trend=True)
Explanation: Now it is time to plot the mean annual maximum temperature in Melbourne. We also marked the overall average for 1911-2019, 20.18 $^o$C, with a red dotted line. The green line marks the trend.
We can notice that the temperature goes up and down quite regularly, which can be explained by the El Niño–Southern Oscillation (ENSO), as ENSO has a big impact on the Australian climate. The strongest El Niño years have been 1982-83, 1997-98 and 2015-16. We can see that during all those years the temperature is above average (red line).
The other significant thing we can see from the plot is that the temperatures have been rising over the years. Small anomalies are normal, but after 2004 the temperature hasn't dropped below average at all. That's an unusual pattern. For example, in 2007 and 2014 the temperature was 1.5 degrees above the overall average, while in 2017 the temperature was almost 6 $^o$C above the average.
End of explanation
daily_data = dd1.tmax.resample(time="1D").mean('time')[:,0,0]
make_plot(daily_data[np.where(daily_data.values > 35)].groupby('time.year').count(),dataset,'Number of days in year when max temperature exceeds 35 $^o$C in Melbourne',ylabel = 'Days of year')
print ('Yearly average days when temperature exceeds 35 C is ' + str("%.1f" % daily_data[np.where(daily_data.values > 35)].groupby('time.year').count().mean().values))
Explanation: According to Australian Government Department of Environment and Energy the average annual number of days above 35 degrees Celsius is likely to increase from 9 days currently experienced in Melbourne to up to 26 days by 2070 without global action to reduce emissions.
We also found out that over the past 100 years the temperature has exceeded 35 $^o$C on an average of 9 days a year.
End of explanation
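The same pattern can be reused for other thresholds; for example, for 40 $^o$C (an arbitrary illustrative choice):
days_over_40 = daily_data[np.where(daily_data.values > 40)].groupby('time.year').count()
make_plot(days_over_40,dataset,'Number of days in year when max temperature exceeds 40 $^o$C in Melbourne',ylabel = 'Days of year')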
mean_annual_prec = dd1.precip.resample(time="1AS").sum('time').mean(axis=(0,1,2))
annual_prec = dd1.precip.resample(time="1AS").sum('time')[:,0,0]
annual_prec = annual_prec.loc[annual_prec['time.year'] < 2019]
make_plot(annual_prec,dataset,'Annual precipitation by year',ylabel='Precipitation [' + dd1.precip.units + ']',compare_line = mean_annual_prec)
print ('Mean overall rainfall (1911-2017) is ' + str("%.1f" % mean_annual_prec) + " mm")
Explanation: We saw that the maximum temperature tends to follow a pattern similar to ENSO.
Just as important as the change with the temperature is what has happened to the rainfall in Melbourne. To investigate that, we created an annual precipitation plot, where a red line marks the average (617.5 mm). From the plot we can see that the average amount of rainfall has been constantly decreasing over the past century.
Less rainfall increases the risk of bushfires, impacts the agriculture in Melbourne and threatens its water reservoirs.
End of explanation
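A small sketch (using only the annual_prec series computed above) to pull out the driest and wettest years in the record:
idx_min = int(annual_prec.argmin())
idx_max = int(annual_prec.argmax())
print ('Driest year : ' + str(int(annual_prec['time.year'][idx_min])) + ', ' + str("%.1f" % annual_prec[idx_min].values) + ' mm')
print ('Wettest year: ' + str(int(annual_prec['time.year'][idx_max])) + ', ' + str("%.1f" % annual_prec[idx_max].values) + ' mm')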
daily_rain = dd1.precip.resample(time="1D").mean('time') [:,0,0]
daily_rain = daily_rain.loc[daily_rain['time.year'] < 2019]
make_plot(daily_rain[np.where(daily_rain.values < 0.00001)].groupby('time.year').count(),dataset,'Completely dry days by year',ylabel = 'Days of year',compare_line = daily_rain[np.where(daily_rain.values < 0.00001)].groupby('time.year').count().mean(),trend=True)
Explanation: Alongside the decreased rainfall, the number of completely dry days in a year has changed as well. From the plot above, you can see that the number of days when it was completely dry in Melbourne grew in the 2000s. However, in recent years it has returned to normal. Again, the green line marks the trend, which shows that Melbourne is getting more and more completely dry days.
End of explanation |
15,760 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Quantum Electrodynamics with Geometric Algebra (WIP)
Theory overview
Quantum Electrodynamics (QED) describes electrons, positrons (anti-electrons) and photons in a 4-dimensional spacetime with fields defined for all spacetime positions $X$. The 4-dimensional spacetime can be described by the Spacetime Algebra (STA) with basis vectors $\gamma_0, \gamma_1, \gamma_2, \gamma_3$ and corresponding metric $[1, -1, -1, -1]$. It contains two fields. The electron-positron field is a bispinor-field $\psi(X)$ which in the context of Geometric Algebra (GA) is described by even-grade multivectors of the STA. The photon field $A(X)$ is a vector-field (ie. multivectors of degree 1, one basis for each dimension).
A field configuration, also known as a path, $P(X)$ contains values for the two fields at every spacetime position. Our goal is to calculate the QED action using GA which allows us to use algorithms that solve for field configurations . The action is the negative log-likelihood (NLL) of the field configuration, meaning it is a number which tells how likely a given field configuration is. It is not a probability as it is unnormalized. However even with only the NLL we can use sampling algorithms (eg. Markov-Chain Monte-Carlo, Variational Inference) to sample field configurations so that the sampled distribution matches the normalized distribution.
The Lagrangian is given in Hestenes' article Real Dirac Theory in equation (B.6) as
$\mathcal{L} = \langle \hbar (\nabla \psi(X)) i \gamma_3 \widetilde{\psi}(X) - e A(X) \psi(X) \gamma_0 \widetilde{\psi}(X) - m \psi(X) \widetilde{\psi}(X) \rangle$
where $\langle ... \rangle$ denotes getting the scalar part, $i = \gamma_2 \gamma_1$, $\nabla = \sum_{i=0}^{3} \gamma_i \delta^i$ and $\widetilde{\psi}(X)$ is the grade-reversal of $\psi$.
The action $S(P)$ for a field-configuration $P=(\psi, A)$ is calculated by integrating the Lagrangian $\mathcal{L}(P, X)$ over all space-time positions $X$.
$S(\psi, A) = \int_{X \in \mathcal{X}} \mathcal{L}(\psi, A, X) dx$
Finally as we are doing this numerically we need to discretize spacetime into a 4-dimensional grid. Integrals over spacetime then become sums over the grid. Derivatives become finite-differences or more complicated operations to avoid the aliasing which results in the fermion doubling problem.
Getting started
Let's start by defining the spacetime algebra as a geometric algebra in 1 time and 3 space dimensions with metric $[1, -1, -1, -1]$.
Step1: We can see our four basis vectors displayed here each with a different ... basis. Let's try squaring them.
Step2: Squaring the basis vectors gave us back another purely scalar multivector. The squared bases indeed return the correct metric.
We can create new multivectors of different kinds using the geometric algebra sta_ga object. Let's create some vectors such as the elements of the photon field and perform some operations to get a feel for them. We can use the methods on sta_ga, most of which take a batch_shape that says how many elements you want ([] meaning just a single one) and a kind that describes which elements it will set (eg. "even", "mv" (meaning all), "vector", "scalar", ...). Alternatively we can just build everything out of the basis vectors ourselves by adding and multiplying them.
Step3: Now let's do the same for the bispinors (elements of even degree).
Step4: Now we hopefully have some feel for how to operate with the geometric algebra numbers. So far we only worked with single numbers, but we can define a field (ie. a number for every grid point) by passing in a batch_shape that is the size of our grid. When printing the fields we won't see the actual numbers anymore, we will only see which blades are non-zero and the batch shape. However we can still access all of the numbers with the usual indexing rules.
Step5: By now you will probably believe me that we can do the same to create a bispinor field, so instead let's see how we can calculate derivatives.
As mentioned in the beginning, derivatives become finite differences. To calculate finite differences we can take a copy of the field, shift it back by one in a dimension and subtract it. For instance of we were to calculate the derivative
in the time direction we would shift the entire field by -1 along the time axis to get A(X + TimeDirection * GridSpacing) and subtract the actual field from this shifted field. All that is left then is to divide by the grid spacing.
d/dt A(X) = (A(X + TimeDirection * GridSpacing) - A(X)) / GridSpacing
To actually do the shifting we will use the with_changes method which allows copying of the multivector and overriding of its blade values so we will just shift the blade values themselves using tf.roll. A better abstraction that doesn't require using the internal blade values might be added later.
Step6: Maybe expectedly, as our field is just a constant value everywhere, we are left with a field that is zero everywhere. Now we have a finite differences operation that will work on fields of any kind.
Now we have all the tools we need to actually calculate the QED action given a field configuration. As a reminder, the QED Lagrangian is given by
$\mathcal{L} = \langle \hbar (\nabla \psi(X)) i \gamma_3 \widetilde{\psi}(X) - e A(X) \psi(X) \gamma_0 \widetilde{\psi}(X) - m \psi(X) \widetilde{\psi}(X) \rangle$
and the action $S(\psi, A)$ is the spacetime integral (now sum) over it.
Let's start with the mass term on the right $m \psi(X) \widetilde{\psi}(X)$.
Step7: Next the interaction term in the center that describes the scattering between the electron-positron field and the photon field $e A(X) \psi(X) \gamma_0 \widetilde{\psi}(X)$.
Step8: And finally the momentum term for which we needed the finite differences $\hbar (\nabla \psi(X)) i \gamma_3 \widetilde{\psi}(X)$.
Step9: Now that we have all the terms, we can add them up, sum over all grid points and take the scalar part to get the action.
Step11: Now that we can calculate the action for a given field configuration (ie. values for psi and a at every grid point) we could use a sampling algorithm
to sample fields and calculate quantities of interest such as the correlation function, vacuum energy and more. | Python Code:
import tensorflow as tf
from tfga import GeometricAlgebra  # assumed import path for the GeometricAlgebra class used below

sta = GeometricAlgebra([1, -1, -1, -1])
for basis in sta.basis_mvs:
sta.print(basis)
Explanation: Quantum Electrodynamics with Geometric Algebra (WIP)
Theory overview
Quantum Electrodynamics (QED) describes electrons, positrons (anti-electrons) and photons in a 4-dimensional spacetime with fields defined for all spacetime positions $X$. The 4-dimensional spacetime can be described by the Spacetime Algebra (STA) with basis vectors $\gamma_0, \gamma_1, \gamma_2, \gamma_3$ and corresponding metric $[1, -1, -1, -1]$. It contains two fields. The electron-positron field is a bispinor-field $\psi(X)$ which in the context of Geometric Algebra (GA) is described by even-grade multivectors of the STA. The photon field $A(X)$ is a vector-field (ie. multivectors of degree 1, one basis for each dimension).
A field configuration, also known as a path, $P(X)$ contains values for the two fields at every spacetime position. Our goal is to calculate the QED action using GA, which allows us to use algorithms that solve for field configurations. The action is the negative log-likelihood (NLL) of the field configuration, meaning it is a number which tells how likely a given field configuration is. It is not a probability as it is unnormalized. However, even with only the NLL we can use sampling algorithms (eg. Markov-Chain Monte-Carlo, Variational Inference) to sample field configurations so that the sampled distribution matches the normalized distribution.
The Lagrangian is given in Hestenes' article Real Dirac Theory in equation (B.6) as
$\mathcal{L} = \langle \hbar (\nabla \psi(X)) i \gamma_3 \widetilde{\psi}(X) - e A(X) \psi(X) \gamma_0 \widetilde{\psi}(X) - m \psi(X) \widetilde{\psi}(X) \rangle$
where $\langle ... \rangle$ denotes getting the scalar part, $i = \gamma_2 \gamma_1$, $\nabla = \sum_{i=0}^{3} \gamma_i \delta^i$ and $\widetilde{\psi}(X)$ is the grade-reversal of $\psi$.
The action $S(P)$ for a field-configuration $P=(\psi, A)$ is calculated by integrating the Lagrangian $\mathcal{L}(P, X)$ over all space-time positions $X$.
$S(\psi, A) = \int_{X \in \mathcal{X}} \mathcal{L}(\psi, A, X) dx$
Finally as we are doing this numerically we need to discretize spacetime into a 4-dimensional grid. Integrals over spacetime then become sums over the grid. Derivatives become finite-differences or more complicated operations to avoid the aliasing which results in the fermion doubling problem.
Getting started
Let's start by defining the spacetime algebra as a geometric algebra in 1 time and 3 space dimensions with metric $[1, -1, -1, -1]$.
End of explanation
print("e_0^2:", sta(sta.e0) ** 2)
print("e_1^2:", sta(sta.e1) ** 2)
print("e_2^2:", sta(sta.e2) ** 2)
print("e_3^2:", sta(sta.e3) ** 2)
Explanation: We can see our four basis vectors displayed here each with a different ... basis. Let's try squaring them.
End of explanation
v1 = sta.from_tensor_with_kind(tf.ones(4), kind="vector")
sta.print("v1:", v1)
v2 = sta.basis_mvs[0] + sta.basis_mvs[1]
sta.print("v2:", v2)
sta.print("v1 * v2 (Geometric product):", sta.geom_prod(v1, v2))
sta.print("v1 | v2 (Inner product):", sta.inner_prod(v1, v2))
sta.print("v1 ^ v2 (Exterior product):", sta.ext_prod(v1, v2))
v3 = v1 + v2
sta.print("v3 = v1 + v2:", v3)
sta.print("v1 | v3:", sta.inner_prod(v1, v3))
sta.print("v1 ^ v3:", sta.ext_prod(v1, v3))
v4 = sta.geom_prod(v1, v2)
sta.print("v4 = v1 * v2:", v3)
sta.print("v1^-1 * v4:", sta.geom_prod(sta.inverse(v1), v4), "should be", v2)
Explanation: Squaring the basis vectors gave us back another purely scalar multivector. The squared bases indeed return the correct metric.
We can create new multivectors of different kinds using the geometric algebra sta_ga object. Let's create some vectors such as the elements of the photon field and perform some operations to get a feel for them. We can use the methods on sta_ga, most of which take a batch_shape that says how many elements you want ([] meaning just a single one) and a kind that describes which elements it will set (eg. "even", "mv" (meaning all), "vector", "scalar", ...). Alternatively we can just build everything out of the basis vectors ourselves by adding and multiplying them.
End of explanation
b1 = sta.from_tensor_with_kind(tf.ones(8), kind="even")
sta.print("b1:", b1)
b2 = sta.from_scalar(4.0) + sta.geom_prod(sta.basis_mvs[0], sta.basis_mvs[1]) + sta.geom_prod(sta.basis_mvs[0], sta.basis_mvs[1])
sta.print("b2:", b2)
sta.print("b1 | b2:", sta.inner_prod(b1, b2))
sta.print("b1 ^ b2:", sta.ext_prod(b1, b2))
b3 = sta.geom_prod(b1, b2)
sta.print("b3 = b1 * b2:", b3)
sta.print("b3 * b2^-1:", sta.geom_prod(b3, sta.inverse(b2)), "should be", b1)
sta.print("~b2 (Grade reversal):", sta.reversion(b2))
sta.print("Scalar part of b2:", sta.keep_blades_with_name(b2, ""))
sta.print("e_01 part of b2:", sta.keep_blades_with_name(b2, "01"))
Explanation: Now let's do the same for the bispinors (elements of even degree).
End of explanation
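As a small aside, the bivector $i = \gamma_2 \gamma_1$ used in the Lagrangian can be built from the basis vectors, and a quick check (reusing only the calls shown above) confirms it squares to $-1$ with this metric:
i = sta.geom_prod(sta.basis_mvs[2], sta.basis_mvs[1])
sta.print("i = e_2 e_1:", i)
sta.print("i^2:", sta.geom_prod(i, i))  # expected: -1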
a = sta.from_tensor_with_kind(tf.ones((10, 10, 10, 10, 4)), kind="vector")
sta.print("A(X):", a)
sta.print("A(t=0, x=5, y=3, z=9):", a[0, 5, 3, 9])
sta.print("A(t=0, z=[3,4,5]):", a[0, :, :, 3:6])
sta.print("e_0 part of A(X):", sta.select_blades_with_name(a, "0").shape)
sta.print("A(0, 0, 0, 0) * ~A(0, 0, 0, 0):", sta.geom_prod(a, sta.reversion(a))[0, 0, 0, 0])
Explanation: Now we hopefully have some feel for how to operate with the geometric algebra numbers. So far we only worked with single numbers, but we can define a field (ie. a number for every grid point) by passing in a batch_shape that is the size of our grid. When printing the fields we won't see the actual numbers anymore, we will only see which blades are non-zero and the batch shape. However we can still access all of the numbers with the usual indexing rules.
End of explanation
def finite_differences(field, axis, spacing):
shifted_field = tf.roll(field, shift=-1, axis=axis)
return (shifted_field - field) / spacing
deriv_t_a = finite_differences(a, axis=0, spacing=0.1)
sta.print("d/dt A(X) = (A(X + TimeDirection * GridSpacing) - A(X)) / GridSpacing:", deriv_t_a)
sta.print("d/dt A(0, 0, 0, 0):", deriv_t_a[0, 0, 0, 0])
Explanation: By now you will probably believe me that we can do the same to create a bispinor field, so instead let's see how we can calculate derivatives.
As mentioned in the beginning, derivatives become finite differences. To calculate finite differences we can take a copy of the field, shift it back by one in a dimension and subtract it. For instance, if we were to calculate the derivative
in the time direction we would shift the entire field by -1 along the time axis to get A(X + TimeDirection * GridSpacing) and subtract the actual field from this shifted field. All that is left then is to divide by the grid spacing.
d/dt A(X) = (A(X + TimeDirection * GridSpacing) - A(X)) / GridSpacing
To actually do the shifting we will use the with_changes method which allows copying of the multivector and overriding of its blade values so we will just shift the blade values themselves using tf.roll. A better abstraction that doesn't require using the internal blade values might be added later.
End of explanation
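The same helper works along any axis; for example, the derivative along the first spatial direction (reusing the field and grid spacing from above):
deriv_x_a = finite_differences(a, axis=1, spacing=0.1)
sta.print("d/dx A(0, 0, 0, 0):", deriv_x_a[0, 0, 0, 0])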
def get_mass_term(psi, electron_mass):
return electron_mass * sta.geom_prod(psi, sta.reversion(psi))
# Define psi as some arbitrary even-graded field for now
psi = sta.from_tensor_with_kind(tf.ones([10, 10, 10, 10, 8]), kind="even") + sta.from_tensor_with_kind(tf.ones([10, 10, 10, 10, 1]), kind="scalar")
sta.print("Psi:", psi)
sta.print("Psi at (0, 0, 0, 0):", psi[0, 0, 0, 0])
# The electron mass in planck units (hbar=1, c=1) is actually not 1 but something tiny.
# However we won't bother with it for now.
mass_term = get_mass_term(psi=psi, electron_mass=1.0)
sta.print("Mass term:", mass_term)
sta.print("Mass term at (0, 0, 0, 0):", mass_term[0, 0, 0, 0])
Explanation: Maybe expectedly, as our field is just a constant value everywhere, we are left with a field that is zero everywhere. Now we have a finite differences operation that will work on fields of any kind.
Now we have all the tools we need to actually calculate the QED action given a field configuration. As a reminder, the QED Lagrangian is given by
$\mathcal{L} = \langle \hbar (\nabla \psi(X)) i \gamma_3 \widetilde{\psi}(X) - e A(X) \psi(X) \gamma_0 \widetilde{\psi}(X) - m \psi(X) \widetilde{\psi}(X) \rangle$
and the action $S(\psi, A)$ is the spacetime integral (now sum) over it.
Let's start with the mass term on the right $m \psi(X) \widetilde{\psi}(X)$.
End of explanation
def get_interaction_term(psi, a, electron_charge):
return sta.geom_prod(electron_charge * a, sta.geom_prod(psi, sta.geom_prod(sta.e("0"), sta.reversion(psi))))
interaction_term = get_interaction_term(psi=psi, a=a, electron_charge=1.0)
sta.print("Interaction term:", interaction_term)
sta.print("Interaction term at (0, 0, 0, 0):", interaction_term[0, 0, 0, 0])
Explanation: Next the interaction term in the center that describes the scattering between the electron-positron field and the photon field $e A(X) \psi(X) \gamma_0 \widetilde{\psi}(X)$.
End of explanation
def get_momentum_term(psi, spacing, hbar):
# Nabla Psi
dt_psi = finite_differences(psi, axis=0, spacing=spacing)
dx_psi = finite_differences(psi, axis=1, spacing=spacing)
dy_psi = finite_differences(psi, axis=2, spacing=spacing)
dz_psi = finite_differences(psi, axis=3, spacing=spacing)
d_psi = dt_psi + dx_psi + dy_psi + dz_psi
return sta.geom_prod(hbar * d_psi, sta.geom_prod(sta.e("213"), sta.reversion(psi)))
momentum_term = get_momentum_term(psi=psi, spacing=0.1, hbar=1.0)
sta.print("Momentum term:", momentum_term)
sta.print("Momentum term at (0, 0, 0, 0):", momentum_term[0, 0, 0, 0]) # Still zero ;(
Explanation: And finally the momentum term for which we needed the finite differences $\hbar (\nabla \psi(X)) i \gamma_3 \widetilde{\psi}(X)$.
End of explanation
def get_action(psi, a, spacing, electron_mass, electron_charge, hbar):
mass_term = get_mass_term(psi=psi, electron_mass=electron_mass)
interaction_term = get_interaction_term(psi=psi, a=a, electron_charge=electron_charge)
momentum_term = get_momentum_term(psi=psi, spacing=spacing, hbar=hbar)
# Sum terms and get scalar part
lagrangians = (momentum_term - mass_term - interaction_term)[..., 0]
# Sum lagrangians (one lagrangian for each spacetime point) over spacetime
# to get a single value, the action.
return tf.reduce_sum(lagrangians)
action = get_action(psi=psi, a=a, spacing=0.1, electron_mass=1.0, electron_charge=1.0, hbar=1.0)
print("Action:", action)
Explanation: Now that we have all the terms, we can add them up, sum over all grid points and take the scalar part to get the action.
End of explanation
import tensorflow_probability as tfp  # assumed: TensorFlow Probability provides the MCMC tooling used below

def joint_log_prob(psi_config, a_config):
mv_psi_config = sta.from_tensor_with_kind(psi_config, "even")
mv_a_config = sta.from_tensor_with_kind(a_config, "vector")
action = get_action(mv_psi_config, mv_a_config, spacing=0.0000001, electron_mass=0.00001,
electron_charge=0.0854245, hbar=1.0)
# Action is the negative log likelihood of the fields, and since
# the sampling function expects a (positive) log likelihood,
# we return the negation.
return -action
num_chains = 50
# Note: tfp.mcmc.NoUTurnSampler could be used as an alternative kernel to
# HamiltonianMonteCarlo inside sample() below.
@tf.function(experimental_compile=False)
def sample(initial_state, step_size):
return tfp.mcmc.sample_chain(
num_results=300,
num_burnin_steps=1000,
current_state=initial_state,
kernel=tfp.mcmc.HamiltonianMonteCarlo(
target_log_prob_fn=joint_log_prob,
step_size=step_size,
num_leapfrog_steps=3
),
trace_fn=None
)
gs = 6 # grid size
initial_state = [
# Psi (bispinor field, 8 components)
# A (vector field, 4 components)
tf.zeros((num_chains, gs, gs, gs, gs, 8), dtype=tf.float32),
tf.zeros((num_chains, gs, gs, gs, gs, 4), dtype=tf.float32)
]
variable_step_size = [0.001, 0.001]
chain_samples = sample(initial_state, variable_step_size)
print(chain_samples[0].shape)
print(chain_samples[1].shape)
print(tf.reduce_sum(tf.abs(chain_samples[0][0, 0] - chain_samples[0][1, 0])))
print(tf.reduce_sum(tf.abs(chain_samples[0][1, 0] - chain_samples[0][2, 0])))
import matplotlib.pyplot as plt
fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(5, 5))
for i in range(4):
ax = axes[i % 2][i // 2]
im = ax.imshow(chain_samples[1][0, 0, 0, 0, :, :, i])
fig.colorbar(im, ax=ax)
fig.show()
fig, axes = plt.subplots(2, 4, sharex=True, sharey=True, figsize=(10, 5))
for i in range(8):
ax = axes[i % 2][i // 2]
im = ax.imshow(chain_samples[0][0, 0, 0, 0, :, :, i])
fig.colorbar(im, ax=ax)
fig.show()
fig, axes = plt.subplots(2, 4, sharex=True, sharey=True, figsize=(10, 5))
for i in range(8):
ax = axes[i % 2][i // 2]
im = ax.imshow(chain_samples[0][0, 0, 0, 1, :, :, i])
fig.colorbar(im, ax=ax)
fig.show()
fig, axes = plt.subplots(2, 4, sharex=True, sharey=True, figsize=(10, 5))
for i in range(8):
ax = axes[i % 2][i // 2]
im = ax.imshow(chain_samples[0][0, 0, 0, 2, :, :, i])
fig.colorbar(im, ax=ax)
fig.show()
fig, axes = plt.subplots(2, 4, sharex=True, sharey=True, figsize=(10, 5))
for i in range(8):
ax = axes[i % 2][i // 2]
im = ax.imshow(chain_samples[0][0, 0, 0, 0, :, :, i])
fig.colorbar(im, ax=ax)
fig.show()
fig, axes = plt.subplots(2, 4, sharex=True, sharey=True, figsize=(10, 5))
for i in range(8):
ax = axes[i % 2][i // 2]
im = ax.imshow(chain_samples[0][100, 0, 0, 0, :, :, i])
fig.colorbar(im, ax=ax)
fig.show()
fig, axes = plt.subplots(2, 4, sharex=True, sharey=True, figsize=(10, 5))
for i in range(8):
ax = axes[i % 2][i // 2]
im = ax.imshow(chain_samples[0][200, 0, 0, 0, :, :, i])
fig.colorbar(im, ax=ax)
fig.show()
with plt.style.context("bmh"):
def plot_correlations(ax, samples, axis):
correlation_by_shift = []
correlation_std_by_shift = []
shifts = list(range(1, samples.shape[axis]))
#if samples.shape[-1] == 8:
# samples = sta.from_tensor_with_kind(samples, "even")
#elif samples.shape[-1] == 4:
# samples = sta.from_tensor_with_kind(samples, "vector")
for i in shifts:
shifted = tf.roll(samples, shift=-i, axis=axis)
correlations = tf.reduce_mean(samples * shifted, axis=[-1, -2, -3, -4, -5])
#correlations = tf.reduce_mean(sta.inner_prod(samples, shifted), axis=[-1, -2, -3, -4, -5])
correlation_by_shift.append(tf.reduce_mean(correlations))
correlation_std_by_shift.append(tf.math.reduce_std(correlations))
ax.errorbar(shifts, correlation_by_shift, correlation_std_by_shift, capsize=5)
fig, axes = plt.subplots(4, sharex=True, sharey=True, figsize=(14, 8))
plot_correlations(axes[0], chain_samples[0], axis=-2)
plot_correlations(axes[1], chain_samples[0], axis=-3)
plot_correlations(axes[2], chain_samples[0], axis=-4)
plot_correlations(axes[3], chain_samples[0], axis=-5)
fig, axes = plt.subplots(4, sharex=True, sharey=True, figsize=(14, 8))
plot_correlations(axes[0], chain_samples[1], axis=-2)
plot_correlations(axes[1], chain_samples[1], axis=-3)
plot_correlations(axes[2], chain_samples[1], axis=-4)
plot_correlations(axes[3], chain_samples[1], axis=-5)
Explanation: Now that we can calculate the action for a given field configuration (i.e. values for psi and a at every grid point), we could use a sampling algorithm
to sample fields and calculate quantities of interest such as the correlation function, vacuum energy and more.
End of explanation |
15,761 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Python Gateway Tutorial
The UnetStack Python gateway API is available via unet-contrib, or from PyPi.
Import unetpy
If you haven't installed unetpy, you need to do that first
Step1: Open a connection to the modem or real-time simulator
For now, we'll assume that we have a modem running on localhost port 1100 (default)
Step2: Work with modem parameters
If we are connected to the modem, we can now access the agents and services that the modem provides. Let us try this with the physical layer first. What you'll see here depends on the modem you are using (we are using the portaudio modem on a laptop for this example).
Step3: We can query individual parameters or change them
Step4: We can work with the CONTROL (1) or DATA (2) channels too...
Step5: You can also work with higher layers
Step6: Send and receive messages
The messages supported on the Python gateway are pretty much the same as the Java/Groovy messages. In Python, the named parameters for message initialization use equals (=) instead of colon (
Step7: And read the TxFrameNtf notification once the packet is sent out
Step8: Transmit and receive signals
For this part of the tutorial, we'll use numpy and arlpy. So if you don't have them installed, you'll need them
Step9: Generate a passband 100 ms 12 kHz pulse at a sampling rate of 96 kSa/s
Step10: and transmit it using the baseband service
Step11: By setting fc to 0, we told the modem that this was a passband signal. The sampling rate supported by passband signals will depend on the modem. In our case, the portaudio interface is set to accept 96 kSa/s passband signals.
Now let's ask the modem to record a signal for us
Step12: The notification has 4800 baseband (complex) samples as we had asked, and is sampled at a baseband rate of 12 kSa/s. The carrier frequency used by the modem is 12 kHz. We can convert our recorded signal to passband if we like
Step13: Clean up
Once we are done, we can clean up by closing the connection to the modem. | Python Code:
from unetpy import *
Explanation: Python Gateway Tutorial
The UnetStack Python gateway API is available via unet-contrib, or from PyPi.
Import unetpy
If you haven't installed unetpy, you need to do that first: pip install unetpy
End of explanation
sock = UnetSocket('localhost', 1100)
modem = sock.getGateway()
Explanation: Open a connection to the modem or real-time simulator
For now, we'll assume that we have a modem running on localhost port 1100 (default):
End of explanation
phy = modem.agentForService(Services.PHYSICAL)
phy
Explanation: Work with modem parameters
If we are connected to the modem, we can now access the agents and services that the modem provides. Let us try this with the physical layer first. What you'll see here depends on the modem you are using (we are using the portaudio modem on a laptop for this example).
End of explanation
phy.signalPowerLevel
phy.signalPowerLevel = -6
phy.signalPowerLevel
Explanation: We can query individual parameters or change them:
End of explanation
phy[1]
phy[1].frameLength = 12
phy[1].frameDuration
Explanation: We can work with the CONTROL (1) or DATA (2) channels too...
End of explanation
link = modem.agentForService(Services.LINK)
link
Explanation: You can also work with higher layers:
End of explanation
phy << TxFrameReq(to=2, data=[1,2,3,4])
Explanation: Send and receive messages
The messages supported on the Python gateway are pretty much the same as the Java/Groovy messages. In Python, the named parameters for message initialization use equals (=) instead of colon (:), and you don't need the new keyword. It's easy to get used to:
End of explanation
txntf = modem.receive(TxFrameNtf, timeout=2000)
Explanation: And read the TxFrameNtf notification once the packet is sent out:
End of explanation
import numpy as np
import arlpy.signal as asig
import arlpy.plot as plt
Explanation: Transmit and receive signals
For this part of the tutorial, we'll use numpy and arlpy. So if you don't have them installed, you'll need them: pip install arlpy (which will also install numpy).
End of explanation
fs = 96000
x = asig.cw(12000, 0.1, fs)
Explanation: Generate a passband 100 ms 12 kHz pulse at a sampling rate of 96 kSa/s:
End of explanation
bb = modem.agentForService(Services.BASEBAND)
bb << TxBasebandSignalReq(signal=x, fc=0, fs=fs)
txntf = modem.receive(TxFrameNtf, timeout=2000)
Explanation: and transmit it using the baseband service:
End of explanation
bb << RecordBasebandSignalReq(recLength=4800)
rec = modem.receive(RxBasebandSignalNtf, timeout=2000)
rec.fc
rec.fs
Explanation: By setting fc to 0, we told the modem that this was a passband signal. The sampling rate supported by passband signals will depend on the modem. In our case, the portaudio interface is set to accept 96 kSa/s passband signals.
Now let's ask the modem to record a signal for us:
End of explanation
y = asig.bb2pb(rec.signal, rec.fs, rec.fc, fs)
plt.plot(y, fs=fs)
plt.specgram(y, fs=fs)
Explanation: The notification has 4800 baseband (complex) samples as we had asked, and is sampled at a baseband rate of 12 kSa/s. The carrier frequency used by the modem is 12 kHz. We can convert our recorded signal to passband if we like:
End of explanation
modem.close()
Explanation: Clean up
Once we are done, we can clean up by closing the connection to the modem.
End of explanation |
15,762 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
Step1: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find in the TensorFlow API, with tf.nn.conv2d_transpose.
However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
Exercise
Step2: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
Step3: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise
Step4: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly good job of removing the noise, even though it's sometimes difficult to tell what the original number is. | Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[2]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
Explanation: Convolutional Autoencoder
Sticking with the MNIST dataset, let's improve our autoencoder's performance using convolutional layers. Again, loading modules and the data.
End of explanation
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x16
conv2 = tf.layers.conv2d(maxpool1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x8
conv3 = tf.layers.conv2d(maxpool2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x8
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x8
conv4 = tf.layers.conv2d(upsample1, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x8
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x8
conv5 = tf.layers.conv2d(upsample2, 8, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x8
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x8
conv6 = tf.layers.conv2d(upsample3, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x16
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
decoded = tf.nn.sigmoid(logits, name='decoded')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
Explanation: Network Architecture
The encoder part of the network will be a typical convolutional pyramid. Each convolutional layer will be followed by a max-pooling layer to reduce the dimensions of the layers. The decoder though might be something new to you. The decoder needs to convert from a narrow representation to a wide reconstructed image. For example, the representation could be a 4x4x8 max-pool layer. This is the output of the encoder, but also the input to the decoder. We want to get a 28x28x1 image out from the decoder so we need to work our way back up from the narrow decoder input layer. A schematic of the network is shown below.
Here our final encoder layer has size 4x4x8 = 128. The original images have size 28x28 = 784, so the encoded vector is roughly 16% the size of the original image. These are just suggested sizes for each of the layers. Feel free to change the depths and sizes, but remember our goal here is to find a small representation of the input data.
What's going on with the decoder
Okay, so the decoder has these "Upsample" layers that you might not have seen before. First off, I'll discuss a bit what these layers aren't. Usually, you'll see deconvolutional layers used to increase the width and height of the layers. They work almost exactly the same as convolutional layers, but in reverse. A stride in the input layer results in a larger stride in the deconvolutional layer. For example, if you have a 3x3 kernel, a 3x3 patch in the input layer will be reduced to one unit in a convolutional layer. Comparatively, one unit in the input layer will be expanded to a 3x3 patch in a deconvolutional layer. Deconvolution is often called "transpose convolution" which is what you'll find in the TensorFlow API, with tf.nn.conv2d_transpose.
However, deconvolutional layers can lead to artifacts in the final images, such as checkerboard patterns. This is due to overlap in the kernels which can be avoided by setting the stride and kernel size equal. In this Distill article from Augustus Odena, et al, the authors show that these checkerboard artifacts can be avoided by resizing the layers using nearest neighbor or bilinear interpolation (upsampling) followed by a convolutional layer. In TensorFlow, this is easily done with tf.image.resize_images, followed by a convolution. Be sure to read the Distill article to get a better understanding of deconvolutional layers and why we're using upsampling.
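As a quick side-by-side sketch (illustrative only, not part of the exercise below, and assuming some hypothetical 7x7x8 feature tensor x), the two upsampling options could look like this:
```python
# Option A: transpose convolution, stride 2 doubles height/width (7x7x8 -> 14x14x8),
# but can produce the checkerboard artifacts discussed above
up_a = tf.layers.conv2d_transpose(x, 8, (3,3), strides=(2,2), padding='same', activation=tf.nn.relu)
# Option B: nearest-neighbor resize followed by a regular convolution (the approach used in this notebook)
up_b = tf.image.resize_nearest_neighbor(x, (14,14))
up_b = tf.layers.conv2d(up_b, 8, (3,3), padding='same', activation=tf.nn.relu)
```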
Exercise: Build the network shown above. Remember that a convolutional layer with strides of 1 and 'same' padding won't reduce the height and width. That is, if the input is 28x28 and the convolution layer has stride = 1 and 'same' padding, the convolutional layer will also be 28x28. The max-pool layers are used to reduce the width and height. A stride of 2 will reduce the size by 2. Odena et al claim that nearest neighbor interpolation works best for the upsampling, so make sure to include that as a parameter in tf.image.resize_images or use tf.image.resize_nearest_neighbor.
End of explanation
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
imgs = batch[0].reshape((-1, 28, 28, 1))
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
reconstructed = sess.run(decoded, feed_dict={inputs_: in_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
Explanation: Training
As before, here we'll train the network. Instead of flattening the images though, we can pass them in as 28x28x1 arrays.
End of explanation
inputs_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='inputs')
targets_ = tf.placeholder(tf.float32, (None, 28, 28, 1), name='targets')
### Encoder
conv1 = tf.layers.conv2d(inputs_, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
maxpool1 = tf.layers.max_pooling2d(conv1, (2,2), (2,2), padding='same')
# Now 14x14x32
conv2 = tf.layers.conv2d(maxpool1, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
maxpool2 = tf.layers.max_pooling2d(conv2, (2,2), (2,2), padding='same')
# Now 7x7x32
conv3 = tf.layers.conv2d(maxpool2, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
encoded = tf.layers.max_pooling2d(conv3, (2,2), (2,2), padding='same')
# Now 4x4x16
### Decoder
upsample1 = tf.image.resize_nearest_neighbor(encoded, (7,7))
# Now 7x7x16
conv4 = tf.layers.conv2d(upsample1, 16, (3,3), padding='same', activation=tf.nn.relu)
# Now 7x7x16
upsample2 = tf.image.resize_nearest_neighbor(conv4, (14,14))
# Now 14x14x16
conv5 = tf.layers.conv2d(upsample2, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 14x14x32
upsample3 = tf.image.resize_nearest_neighbor(conv5, (28,28))
# Now 28x28x32
conv6 = tf.layers.conv2d(upsample3, 32, (3,3), padding='same', activation=tf.nn.relu)
# Now 28x28x32
logits = tf.layers.conv2d(conv6, 1, (3,3), padding='same', activation=None)
#Now 28x28x1
decoded = tf.nn.sigmoid(logits, name='decoded')
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)
cost = tf.reduce_mean(loss)
opt = tf.train.AdamOptimizer(0.001).minimize(cost)
sess = tf.Session()
epochs = 100
batch_size = 200
# Set's how much noise we're adding to the MNIST images
noise_factor = 0.5
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
# Get images from the batch
imgs = batch[0].reshape((-1, 28, 28, 1))
# Add random noise to the input images
noisy_imgs = imgs + noise_factor * np.random.randn(*imgs.shape)
# Clip the images to be between 0 and 1
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
# Noisy images as inputs, original images as targets
batch_cost, _ = sess.run([cost, opt], feed_dict={inputs_: noisy_imgs,
targets_: imgs})
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
Explanation: Denoising
As I've mentioned before, autoencoders like the ones you've built so far aren't too useful in practice. However, they can be used to denoise images quite successfully just by training the network on noisy images. We can create the noisy images ourselves by adding Gaussian noise to the training images, then clipping the values to be between 0 and 1. We'll use noisy images as input and the original, clean images as targets. Here's an example of the noisy images I generated and the denoised images.
Since this is a harder problem for the network, we'll want to use deeper convolutional layers here, more feature maps. I suggest something like 32-32-16 for the depths of the convolutional layers in the encoder, and the same depths going backward through the decoder. Otherwise the architecture is the same as before.
Exercise: Build the network for the denoising autoencoder. It's the same as before, but with deeper layers. I suggest 32-32-16 for the depths, but you can play with these numbers, or add more layers.
End of explanation
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))
in_imgs = mnist.test.images[:10]
noisy_imgs = in_imgs + noise_factor * np.random.randn(*in_imgs.shape)
noisy_imgs = np.clip(noisy_imgs, 0., 1.)
reconstructed = sess.run(decoded, feed_dict={inputs_: noisy_imgs.reshape((10, 28, 28, 1))})
for images, row in zip([noisy_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
Explanation: Checking out the performance
Here I'm adding noise to the test images and passing them through the autoencoder. It does a surprisingly good job of removing the noise, even though it's sometimes difficult to tell what the original number is.
End of explanation |
15,763 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sample Data from some function
Step1: A Gaussian process is a collection of random variables, any finite number of which have a joint Gaussian distribution. Below we outline a GP given some data and predict new values and the variance, assuming we know the parameters of the GP and its kernel function. You can experiment with these parameters, but we do not include an optimiser.
Polynomial basis function kernel
Step2: Squared Exponential kernel function
Step3: $p(f^\ast \mid X^\ast, X, Y) = \int_{-\infty}^{\infty} p(f^\ast \mid f, X^\ast)\, p(f \mid X, Y)\, df = N\!\left(f^\ast \mid K_{\ast x}(K_{xx} + \Sigma)^{-1} Y,\; K_{\ast\ast} - K_{\ast x}(K_{xx} + \Sigma)^{-1} K_{x\ast}\right)$
Step4: Fit a GP (we skip here the optimisation) | Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from numpy import linalg
from sklearn import gaussian_process
from functools import partial
# Define the function which you wish to estimate, adding noise
def func(x):
return x*np.sin(x)
def noise(x):
return np.random.randn(len(x)) * 0.1
# Sample random values from [0, 4 * pi) and plot them
x_obs = np.sort(np.random.random_sample([20]) * 4 * np.pi)
y_obs = np.asarray([func(x) for x in x_obs]) + noise(x_obs)
plt.scatter(x_obs,y_obs)
plt.show()
Explanation: Sample Data from some function
End of explanation
def polynomial(x_1, x_2, sigma_f=2, l=1, deg=1):
k = np.empty((len(x_1), len(x_2)))
scale = 2 * (l ** 2)
for i in range(len(x_1)):
for j in range(len(x_2)):
val = 0
for d in range(deg):
val += (x_1[i] * x_2[j]) ** (d + 1)
k[i][j] = val / scale
return k
Explanation: A Gaussian process is a collection of random variables, any finite number of which have a joint Gaussian distribution. Below we outline a GP given some data and predict new values and the variance, assuming we know the parameters of the GP and its kernel function. You can experiment with these parameters, but we do not include an optimiser.
Polynomial basis function kernel
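For reference, the cell above implements $k(x_1, x_2) = \frac{1}{2 l^2}\sum_{d=1}^{D}(x_1 x_2)^d$, where $D$ is the deg argument (note that the sigma_f argument is not actually used by this polynomial kernel).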
End of explanation
# Some helper functions we will use later
def squared_exponential(x_1, x_2, sigma_f=2, l=1):
k = np.empty((len(x_1), len(x_2)))
scale = 2 * (l ** 2)
for i in range(len(x_1)):
for j in range(len(x_2)):
k[i][j] = sigma_f * np.exp(-(x_1[i] - x_2[j]) ** 2 / scale)
return k
# A wrapper to make it easier to change covariance function
def get_cov_matrix(cov, x, y, sigma_f, l):
return cov(x, y, sigma_f, l)
def mean_of_gp(k_mat, k_s, k_ss, y, new):
temp = np.dot(k_s, linalg.inv(k_mat))
return np.dot(temp, y)
def var_of_gp(k_mat, k_s, k_ss):
temp = -np.dot(k_s, linalg.inv(k_mat))
temp = np.dot(temp, np.transpose(k_s))
return k_ss + temp
def get_diag(mat):
diag = np.empty(len(mat))
for i in range(len(mat)):
diag[i] = mat[i][i]
return diag
x_s = np.arange(0.5, 1.8 * np.pi, 0.1) # Data points for plotting GP, datapoints of interest
# This function is only for the visualisation
def log_p_y(sigma_f, sigma_n, l, deg, fun):
if fun == 'polynomial':
fun = partial(polynomial, deg=deg)
elif fun == 'squared_exponential':
fun = partial(squared_exponential)
k_mat = get_cov_matrix(fun, x_obs, x_obs, sigma_f, l)
k_s = get_cov_matrix(fun, x_s, x_obs, sigma_f, l)
k_ss = get_cov_matrix(fun, x_s, x_s, sigma_f, l)
k_mat += sigma_n * np.eye(len(x_obs))
k_ss += sigma_n * np.eye(len(x_s))
t_1 = -0.5 * np.dot(np.transpose(y_obs), linalg.inv(k_mat))
t_1 = np.dot(t_1, y_obs)
t_2 = -0.5 * np.log(linalg.det(k_mat))
t_3 = -0.5 * len(x_obs) * np.log(2 * np.pi)
mean = mean_of_gp(k_mat, k_s, k_ss, y_obs, x_s)
var = var_of_gp(k_mat, k_s, k_ss)
sigma = np.sqrt(get_diag(var))
plt.fill_between(x_s, mean - 1.96 * sigma, mean + 1.96 * sigma, alpha=0.2)
plt.plot(x_s, mean)
plt.scatter(x_obs, y_obs)
plt.plot(x_s, np.sin(x_s))
plt.show()
return -(t_1 + t_2 + t_3)
Explanation: Squared Exponential kernel function
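For reference, the squared exponential cell above implements $k(x_1, x_2) = \sigma_f \exp\!\left(-\frac{(x_1 - x_2)^2}{2 l^2}\right)$ (this code scales by $\sigma_f$ rather than the more conventional $\sigma_f^2$).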
End of explanation
# Search the space for some sensible parameter values
from ipywidgets import interact
visual = interact(log_p_y,
sigma_f=(0.001, 5.0,0.01),
sigma_n=(0.001, 5.0,0.01),
l=(0.1, 10.0, 0.01),
deg=(1,5,1),
fun=('polynomial', 'squared_exponential'))
Explanation: $p(f^\ast \mid X^\ast, X, Y) = \int_{-\infty}^{\infty} p(f^\ast \mid f, X^\ast)\, p(f \mid X, Y)\, df = N\!\left(f^\ast \mid K_{\ast x}(K_{xx} + \Sigma)^{-1} Y,\; K_{\ast\ast} - K_{\ast x}(K_{xx} + \Sigma)^{-1} K_{x\ast}\right)$
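This is exactly what the helper functions mean_of_gp and var_of_gp above compute. In plain NumPy, assuming the matrices k_mat ($K_{xx} + \Sigma$), k_s ($K_{\ast x}$) and k_ss ($K_{\ast\ast}$) have already been built:
```python
post_mean = np.dot(np.dot(k_s, linalg.inv(k_mat)), y_obs)                    # K_*x (K_xx + Sigma)^-1 Y
post_cov = k_ss - np.dot(np.dot(k_s, linalg.inv(k_mat)), np.transpose(k_s))  # K_** - K_*x (K_xx + Sigma)^-1 K_x*
```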
End of explanation
x_s = np.arange(0,x_obs.max(),0.01)
def build_gp(corr, nugget, lim):
# This limit is only for plotting purposes
lim = int(lim * len(x_s))
gp = gaussian_process.GaussianProcess(corr=corr,nugget=nugget)
gp.fit(x_obs.reshape(-1,1), y_obs.reshape(-1,1))
gp_pred, sigma2_pred = gp.predict(x_s[0:lim].reshape(-1,1), eval_MSE=True)
plt.scatter(x_obs, y_obs)
plt.plot(x_s[0:lim], gp_pred)
plt.plot(x_s[0:lim], [func(x) for x in x_s[0:lim]])
plt.fill_between(x_s[0:lim], gp_pred[:,0] - 1.96 * np.sqrt(sigma2_pred), gp_pred[:,0] + 1.96 * np.sqrt(sigma2_pred), alpha=0.2)
return gp
visual = interact(build_gp,
nugget=(0.0001,2.0001,0.0001),
lim=(0.1,1,0.1),
corr=('absolute_exponential',
'squared_exponential',
'generalized_exponential',
'cubic',
'linear'))
Explanation: Fit a GP (we skip here the optimisation)
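As an aside (not part of the original notebook): the gaussian_process.GaussianProcess estimator used above was removed in later scikit-learn releases. With a recent scikit-learn, a roughly equivalent fit would look like this:
```python
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1))
gpr.fit(x_obs.reshape(-1, 1), y_obs)
gp_pred, sigma_pred = gpr.predict(x_s.reshape(-1, 1), return_std=True)
```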
End of explanation |
15,764 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Introduction
In this tutorial, you'll learn how to create interactive maps with the folium package. Along the way, you'll apply your new skills to visualize Boston crime data.
Step1: Your first interactive map
We begin by creating a relatively simple map with folium.Map().
Step2: Several arguments customize the appearance of the map
Step3: Plotting points
To reduce the amount of data we need to fit on the map, we'll (temporarily) confine our attention to daytime robberies.
Step4: folium.Marker
We add markers to the map with folium.Marker(). Each marker below corresponds to a different robbery.
Step5: folium.plugins.MarkerCluster
If we have a lot of markers to add, folium.plugins.MarkerCluster() can help to declutter the map. Each marker is added to a MarkerCluster object.
Step6: Bubble maps
A bubble map uses circles instead of markers. By varying the size and color of each circle, we can also show the relationship between location and two other variables.
We create a bubble map by using folium.Circle() to iteratively add circles. In the code cell below, robberies that occurred in hours 9-12 are plotted in green, whereas robberies from hours 13-17 are plotted in red.
Step7: Note that folium.Circle() takes several arguments
Step8: As you can see in the code cell above, folium.plugins.HeatMap() takes a couple of arguments
Step9: We also create a Pandas Series called plot_dict that shows the number of crimes in each district.
Step10: It's very important that plot_dict has the same index as districts - this is how the code knows how to match the geographical boundaries with appropriate colors.
Using the folium.Choropleth() class, we can create a choropleth map. If the map below does not render for you, try viewing the page in a different web browser. | Python Code:
#$HIDE_INPUT$
import pandas as pd
import geopandas as gpd
import math
import folium
from folium import Choropleth, Circle, Marker
from folium.plugins import HeatMap, MarkerCluster
Explanation: Introduction
In this tutorial, you'll learn how to create interactive maps with the folium package. Along the way, you'll apply your new skills to visualize Boston crime data.
End of explanation
# Create a map
m_1 = folium.Map(location=[42.32,-71.0589], tiles='openstreetmap', zoom_start=10)
# Display the map
m_1
Explanation: Your first interactive map
We begin by creating a relatively simple map with folium.Map().
End of explanation
#$HIDE_INPUT$
# Load the data
crimes = pd.read_csv("../input/geospatial-learn-course-data/crimes-in-boston/crimes-in-boston/crime.csv", encoding='latin-1')
# Drop rows with missing locations
crimes.dropna(subset=['Lat', 'Long', 'DISTRICT'], inplace=True)
# Focus on major crimes in 2018
crimes = crimes[crimes.OFFENSE_CODE_GROUP.isin([
'Larceny', 'Auto Theft', 'Robbery', 'Larceny From Motor Vehicle', 'Residential Burglary',
'Simple Assault', 'Harassment', 'Ballistics', 'Aggravated Assault', 'Other Burglary',
'Arson', 'Commercial Burglary', 'HOME INVASION', 'Homicide', 'Criminal Harassment',
'Manslaughter'])]
crimes = crimes[crimes.YEAR>=2018]
# Print the first five rows of the table
crimes.head()
Explanation: Several arguments customize the appearance of the map:
- location sets the initial center of the map. We use the latitude (42.32° N) and longitude (-71.0589° E) of the city of Boston.
- tiles changes the styling of the map; in this case, we choose the OpenStreetMap style. If you're curious, you can find the other options listed here.
- zoom_start sets the initial level of zoom of the map, where higher values zoom in closer to the map.
Take the time now to explore by zooming in and out, or by dragging the map in different directions.
The data
Now, we'll add some crime data to the map!
We won't focus on the data loading step. Instead, you can imagine you are at a point where you already have the data in a pandas DataFrame crimes. The first five rows of the data are shown below.
End of explanation
daytime_robberies = crimes[((crimes.OFFENSE_CODE_GROUP == 'Robbery') & \
(crimes.HOUR.isin(range(9,18))))]
Explanation: Plotting points
To reduce the amount of data we need to fit on the map, we'll (temporarily) confine our attention to daytime robberies.
End of explanation
# Create a map
m_2 = folium.Map(location=[42.32,-71.0589], tiles='cartodbpositron', zoom_start=13)
# Add points to the map
for idx, row in daytime_robberies.iterrows():
Marker([row['Lat'], row['Long']]).add_to(m_2)
# Display the map
m_2
Explanation: folium.Marker
We add markers to the map with folium.Marker(). Each marker below corresponds to a different robbery.
End of explanation
# Create the map
m_3 = folium.Map(location=[42.32,-71.0589], tiles='cartodbpositron', zoom_start=13)
# Add points to the map
mc = MarkerCluster()
for idx, row in daytime_robberies.iterrows():
if not math.isnan(row['Long']) and not math.isnan(row['Lat']):
mc.add_child(Marker([row['Lat'], row['Long']]))
m_3.add_child(mc)
# Display the map
m_3
Explanation: folium.plugins.MarkerCluster
If we have a lot of markers to add, folium.plugins.MarkerCluster() can help to declutter the map. Each marker is added to a MarkerCluster object.
End of explanation
# Create a base map
m_4 = folium.Map(location=[42.32,-71.0589], tiles='cartodbpositron', zoom_start=13)
def color_producer(val):
if val <= 12:
return 'forestgreen'
else:
return 'darkred'
# Add a bubble map to the base map
for i in range(0,len(daytime_robberies)):
Circle(
location=[daytime_robberies.iloc[i]['Lat'], daytime_robberies.iloc[i]['Long']],
radius=20,
color=color_producer(daytime_robberies.iloc[i]['HOUR'])).add_to(m_4)
# Display the map
m_4
Explanation: Bubble maps
A bubble map uses circles instead of markers. By varying the size and color of each circle, we can also show the relationship between location and two other variables.
We create a bubble map by using folium.Circle() to iteratively add circles. In the code cell below, robberies that occurred in hours 9-12 are plotted in green, whereas robberies from hours 13-17 are plotted in red.
End of explanation
# Create a base map
m_5 = folium.Map(location=[42.32,-71.0589], tiles='cartodbpositron', zoom_start=12)
# Add a heatmap to the base map
HeatMap(data=crimes[['Lat', 'Long']], radius=10).add_to(m_5)
# Display the map
m_5
Explanation: Note that folium.Circle() takes several arguments:
- location is a list containing the center of the circle, in latitude and longitude.
- radius sets the radius of the circle.
- Note that in a traditional bubble map, the radius of each circle is allowed to vary. We can implement this by defining a function similar to the color_producer() function that is used to vary the color of each circle (see the short sketch after this list).
- color sets the color of each circle.
- The color_producer() function is used to visualize the effect of the hour on robbery location.
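As a quick illustration (a hypothetical sketch, not from the original tutorial), the radius could be varied in the same way as the colour, e.g. with a radius_producer() helper:
```python
def radius_producer(val):
    # purely illustrative scaling: larger circles for robberies later in the day
    return 10 + 2 * val

for i in range(0, len(daytime_robberies)):
    Circle(
        location=[daytime_robberies.iloc[i]['Lat'], daytime_robberies.iloc[i]['Long']],
        radius=radius_producer(daytime_robberies.iloc[i]['HOUR']),
        color=color_producer(daytime_robberies.iloc[i]['HOUR'])).add_to(m_4)
```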
Heatmaps
To create a heatmap, we use folium.plugins.HeatMap(). This shows the density of crime in different areas of the city, where red areas have relatively more criminal incidents.
As we'd expect for a big city, most of the crime happens near the center.
End of explanation
# GeoDataFrame with geographical boundaries of Boston police districts
districts_full = gpd.read_file('../input/geospatial-learn-course-data/Police_Districts/Police_Districts/Police_Districts.shp')
districts = districts_full[["DISTRICT", "geometry"]].set_index("DISTRICT")
districts.head()
Explanation: As you can see in the code cell above, folium.plugins.HeatMap() takes a couple of arguments:
- data is a DataFrame containing the locations that we'd like to plot.
- radius controls the smoothness of the heatmap. Higher values make the heatmap look smoother (i.e., with fewer gaps).
Choropleth maps
To understand how crime varies by police district, we'll create a choropleth map.
As a first step, we create a GeoDataFrame where each district is assigned a different row, and the "geometry" column contains the geographical boundaries.
End of explanation
# Number of crimes in each police district
plot_dict = crimes.DISTRICT.value_counts()
plot_dict.head()
Explanation: We also create a Pandas Series called plot_dict that shows the number of crimes in each district.
End of explanation
# Create a base map
m_6 = folium.Map(location=[42.32,-71.0589], tiles='cartodbpositron', zoom_start=12)
# Add a choropleth map to the base map
Choropleth(geo_data=districts.__geo_interface__,
data=plot_dict,
key_on="feature.id",
fill_color='YlGnBu',
legend_name='Major criminal incidents (Jan-Aug 2018)'
).add_to(m_6)
# Display the map
m_6
Explanation: It's very important that plot_dict has the same index as districts - this is how the code knows how to match the geographical boundaries with appropriate colors.
Using the folium.Choropleth() class, we can create a choropleth map. If the map below does not render for you, try viewing the page in a different web browser.
End of explanation |
15,765 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Measurement class basics
Objects of the measurement class are used to save all the information for one single measurement (in contrast to an object of the sample class that is used for multiple measurements).<br>
A measurement object is currently created for spectroscopy and transport measurements, with time domain measurement support coming in the near future. Besides all used instrument settings it records<br>
* UUID, run, and user
* measurement type (i.e. spectroscopy), used function (i.e. measure_2D()), and measurement axis
* used sample object (in its then form)
* git commit id of qkit
The measurement object is saved in the hdf5 file, viewable with qviewkit and saved in a separate .measurement file in the same folder as the hdf5 file. A saved measurement object can be loaded similarly to a data file.
Step1: In qkit's file information database there is a dictionary, mapping the UUID of the data file to the absolute path of the saved .measurement file. This can be loaded by creating a measurement object and passing the abspath.
Step2: The information in the file is JSON encoded (basically a large dict) and upon init gets converted into object attributes.
Step3: Besides this readout of parameters, it is also possible to set an instrument back to its settings during this measurement to recreate the measurement environment. For this to work, the specific instruments need to be initialized first.
Step4: The entries can be added or changed (be careful not to lose information) and saved again. | Python Code:
## start qkit and import the necessary classes; here we assume an already configured qkit environment
import qkit
qkit.start()
from qkit.measure.measurement_class import Measurement
Explanation: Measurement class basics
Objects of the measurement class are used to save all the information for one single measurement (in contrast to an object of the sample class that is used for multiple measurements).<br>
A measurement object is currently created for spectroscopy and transport measurements, with time domain measurement support coming in the near future. Besides all used instrument settings it records<br>
* UUID, run, and user
* measurement type (i.e. spectroscopy), used function (i.e. measure_2D()), and measurement axis
* used sample object (in its then form)
* git commit id of qkit
The measurement object is saved in the hdf5 file, viewable with qviewkit and saved in a separate .measurement file in the same folder as the hdf5 file. A saved measurement object can be loaded similarly to a data file.
End of explanation
m = Measurement(qkit.fid.measure_db['XXXXXX'])
Explanation: In qkit's file information database there is a dictionary, mapping the UUID of the data file to the absolute path of the saved .measurement file. This can be loaded by creating a measurement object and passing the abspath.
End of explanation
user = m.user
run_id = m.run_id
smpl = m.sample
Explanation: The information in the file is JSON encoded (basically a large dict) and upon init gets converted into object attributes.
End of explanation
m.update_instrument('vna')
## also possible for all measurements
## m.update_all_instruments()
Explanation: Besides this readout of parameters, it is also possible to set an instrument back to its settings during this measurement to recreate the measurement environment. For this to work, the specific instruments need to be initialized first.
End of explanation
m.analyzed = True
m.rating = 5
m.save(qkit.fid.measure_db['XXXXXX'])
Explanation: The entries can be added or changed (be careful not to lose information) and saved again.
End of explanation |
15,766 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Source of the materials
Step1: This code will print out a summary of the alignment
Step2: You’ll notice in the above output the sequences have been truncated. We
could instead write our own code to format this as we please by
iterating over the rows as SeqRecord objects
Step3: You could also use the alignment object’s format method to show it in
a particular file format – see Section [sec
Step4: To have a look at all the sequence annotation, try this
Step5: Sanger provide a nice web interface at
http
Step6: Multiple Alignments
The previous section focused on reading files containing a single
alignment. In general however, files can contain more than one
alignment, and to read these files we must use the Bio.AlignIO.parse()
function.
Suppose you have a small alignment in PHYLIP format
Step7: As with the function Bio.SeqIO.parse(), using Bio.AlignIO.parse()
returns an iterator. If you want to keep all the alignments in memory at
once, which will allow you to access them in any order, then turn the
iterator into a list
Step8: Ambiguous Alignments {#sec
Step9: Using Bio.AlignIO.read() or Bio.AlignIO.parse() without the
seq_count argument would give a single alignment containing all six
records for the first two examples. For the third example, an exception
would be raised because the lengths differ preventing them being turned
into a single alignment.
If the file format itself has a block structure allowing Bio.AlignIO
to determine the number of sequences in each alignment directly, then
the seq_count argument is not needed. If it is supplied, and doesn’t
agree with the file contents, an error is raised.
Note that this optional seq_count argument assumes each alignment in
the file has the same number of sequences. Hypothetically you may come
across stranger situations, for example a FASTA file containing several
alignments each with a different number of sequences – although I would
love to hear of a real world example of this. Assuming you cannot get
the data in a nicer file format, there is no straight forward way to
deal with this using Bio.AlignIO. In this case, you could consider
reading in the sequences themselves using Bio.SeqIO and batching them
together to create the alignments as appropriate.
Writing Alignments
We’ve talked about using Bio.AlignIO.read() and Bio.AlignIO.parse()
for alignment input (reading files), and now we’ll look at
Bio.AlignIO.write() which is for alignment output (writing files).
This is a function taking three arguments
Step10: Now we have a list of Alignment objects, we’ll write them to a PHYLIP
format file
Step11: And if you open this file in your favourite text editor it should look
like this
Step12: Or, using Bio.AlignIO.parse() and Bio.AlignIO.write()
Step13: The Bio.AlignIO.write() function expects to be given multiple
alignment objects. In the example above we gave it the alignment
iterator returned by Bio.AlignIO.parse().
In this case, we know there is only one alignment in the file so we
could have used Bio.AlignIO.read() instead, but notice we have to pass
this alignment to Bio.AlignIO.write() as a single element list
Step14: Either way, you should end up with the same new Clustal W format file
“PF05371_seed.aln” with the following content
Step15: This time the output looks like this
Step16: This time the output looks like this, using a longer indentation to
allow all the identifiers to be given in full
Step17: Here is the new (strict) PHYLIP format output
Step18: You can also use the list-like append and extend methods to add more
rows to the alignment (as SeqRecord objects). Keeping the list
metaphor in mind, simple slicing of the alignment should also make sense
- it selects some of the rows giving back another alignment object
Step19: What if you wanted to select by column? Those of you who have used the
NumPy matrix or array objects won’t be surprised at this - you use a
double index.
Step20: Using two integer indices pulls out a single letter, short hand for
this
Step21: You can pull out a single column as a string like this
Step22: You can also select a range of columns. For example, to pick out those
same three rows we extracted earlier, but take just their first six
columns
Step23: Leaving the first index as
Step24: This brings us to a neat way to remove a section. Notice columns 7, 8
and 9 which are gaps in three of the seven sequences
Step25: Again, you can slice to get everything after the ninth column
Step26: Now, the interesting thing is that addition of alignment objects works
by column. This lets you do this as a way to remove a block of columns
Step27: Another common use of alignment addition would be to combine alignments
for several different genes into a meta-alignment. Watch out though -
the identifiers need to match up (see Section [sec
Step28: Note that you can only add two alignments together if they have the same
number of rows.
Alignments as arrays
Depending on what you are doing, it can be more useful to turn the
alignment object into an array of letters – and you can do this with
NumPy
Step29: If you will be working heavily with the columns, you can tell NumPy to
store the array by column (as in Fortran) rather than its default of by
row (as in C)
Step30: Note that this leaves the original Biopython alignment object and the
NumPy array in memory as separate objects - editing one will not update
the other!
Alignment Tools {#sec
Step31: (Ignore the entries starting with an underscore – these have special
meaning in Python.) The module Bio.Emboss.Applications has wrappers
for some of the EMBOSS suite,
including needle and water, which are described below in
Section [seq
Step32: For the most basic usage, all you need is to have a FASTA input file,
such as
opuntia.fasta
(available online or in the Doc/examples subdirectory of the Biopython
source code). This is a small FASTA file containing seven prickly-pear
DNA sequences (from the cactus family Opuntia).
By default ClustalW will generate an alignment and guide tree file with
names based on the input FASTA file, in this case opuntia.aln and
opuntia.dnd, but you can override this or make it explicit
Step33: Notice here we have given the executable name as clustalw2, indicating
we have version two installed, which has a different filename to version
one (clustalw, the default). Fortunately both versions support the
same set of arguments at the command line (and indeed, should be
functionally identical).
You may find that even though you have ClustalW installed, the above
command doesn’t work – you may get a message about “command not found”
(especially on Windows). This indicates that the ClustalW executable is
not on your PATH (an environment variable, a list of directories to be
searched). You can either update your PATH setting to include the
location of your copy of ClustalW tools (how you do this will depend on
your OS), or simply type in the full path of the tool. For example
Step34: Remember, in Python strings \n and \t are by default interpreted as
a new line and a tab – which is why we've put a letter "r" at the start
for a raw string that isn’t translated in this way. This is generally
good practice when specifying a Windows style file name.
Internally this uses the subprocess module which is now the
recommended way to run another program in Python. This replaces older
options like the os.system() and the os.popen* functions.
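For orientation, here is a rough sketch of what that looks like under the hood (illustrative only; cmd stands for whatever command line string the wrapper renders to, e.g. "clustalw2 -infile=opuntia.fasta"):
```python
import subprocess
child = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE, universal_newlines=True)
stdout, stderr = child.communicate()  # wait for the tool and capture its two text streams
assert child.returncode == 0          # a zero return code conventionally means success
```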
Now, at this point it helps to know about how command line tools “work”.
When you run a tool at the command line, it will often print text output
directly to screen. This text can be captured or redirected, via two
“pipes”, called standard output (the normal results) and standard error
(for error messages and debug messages). There is also standard input,
which is any text fed into the tool. These names get shortened to stdin,
stdout and stderr. When the tool finishes, it has a return code (an
integer), which by convention is zero for success.
When you run the command line tool like this via the Biopython wrapper,
it will wait for it to finish, and check the return code. If this is non
zero (indicating an error), an exception is raised. The wrapper then
returns two strings, stdout and stderr.
In the case of ClustalW, when run at the command line all the important
output is written directly to the output files. Everything normally
printed to screen while you wait (via stdout or stderr) is boring and
can be ignored (assuming it worked).
What we care about are the two output files, the alignment and the guide
tree. We didn’t tell ClustalW what filenames to use, but it defaults to
picking names based on the input file. In this case the output should be
in the file opuntia.aln. You should be able to work out how to read in
the alignment using Bio.AlignIO by now
Step35: In case you are interested (and this is an aside from the main thrust of
this chapter), the opuntia.dnd file ClustalW creates is just a
standard Newick tree file, and Bio.Phylo can parse these
Step36: Chapter [sec
Step37: For the most basic usage, all you need is to have a FASTA input file,
such as
opuntia.fasta
(available online or in the Doc/examples subdirectory of the Biopython
source code). You can then tell MUSCLE to read in this FASTA file, and
write the alignment to an output file
Step38: Note that MUSCLE uses “-in” and “-out” but in Biopython we have to use
“input” and “out” as the keyword arguments or property names. This is
because “in” is a reserved word in Python.
By default MUSCLE will output the alignment as a FASTA file (using
gapped sequences). The Bio.AlignIO module should be able to read this
alignment using format=fasta. You can also ask for ClustalW-like
output
Step39: Or, strict ClustalW output where the original ClustalW header line is
used for maximum compatibility
Step40: The Bio.AlignIO module should be able to read these alignments using
format=clustal.
MUSCLE can also output in GCG MSF format (using the msf argument), but
Biopython can’t currently parse that, or using HTML which would give a
human readable web page (not suitable for parsing).
You can also set the other optional parameters, for example the maximum
number of iterations. See the built in help for details.
You would then run MUSCLE command line string as described above for
ClustalW, and parse the output using Bio.AlignIO to get an alignment
object.
MUSCLE using stdout
Using a MUSCLE command line as in the examples above will write the
alignment to a file. This means there will be no important information
written to the standard out (stdout) or standard error (stderr) handles.
However, by default MUSCLE will write the alignment to standard output
(stdout). We can take advantage of this to avoid having a temporary
output file! For example
Step41: If we run this via the wrapper, we get back the output as a string. In
order to parse this we can use StringIO to turn it into a handle.
Remember that MUSCLE defaults to using FASTA as the output format
Step42: The above approach is fairly simple, but if you are dealing with very
large output text the fact that all of stdout and stderr is loaded into
memory as a string can be a potential drawback. Using the subprocess
module we can work directly with handles instead
Step43: MUSCLE using stdin and stdout
We don’t actually need to have our FASTA input sequences prepared in a
file, because by default MUSCLE will read in the input sequence from
standard input! Note this is a bit more advanced and fiddly, so don’t
bother with this technique unless you need to.
First, we’ll need some unaligned sequences in memory as SeqRecord
objects. For this demonstration I’m going to use a filtered version of
the original FASTA file (using a generator expression), taking just six
of the seven sequences
Step44: Then we create the MUSCLE command line, leaving the input and output to
their defaults (stdin and stdout). I’m also going to ask for strict
ClustalW format as for the output.
Step45: Now for the fiddly bits using the subprocess module, stdin and stdout
Step46: That should start MUSCLE, but it will be sitting waiting for its FASTA
input sequences, which we must supply via its stdin handle
Step47: After writing the six sequences to the handle, MUSCLE will still be
waiting to see if that is all the FASTA sequences or not – so we must
signal that this is all the input data by closing the handle. At that
point MUSCLE should start to run, and we can ask for the output
Step48: Wow! There we are with a new alignment of just the six records, without
having created a temporary FASTA input file, or a temporary alignment
output file. However, a word of caution
Step49: You can then run the tool and parse the alignment as follows
Step50: You might find this easier, but it does require more memory (RAM) for
the strings used for the input FASTA and output Clustal formatted data.
EMBOSS needle and water {#seq
Step51: Why not try running this by hand at the command prompt? You should see
it does a pairwise comparison and records the output in the file
needle.txt (in the default EMBOSS alignment file format).
Even if you have EMBOSS installed, running this command may not work –
you might get a message about “command not found” (especially on
Windows). This probably means that the EMBOSS tools are not on your PATH
environment variable. You can either update your PATH setting, or simply
tell Biopython the full path to the tool, for example
Step52: Remember in Python that for a default string \n or \t means a new
line or a tab – which is why we've put a letter "r" at the start for a
raw string.
At this point it might help to try running the EMBOSS tools yourself by
hand at the command line, to familiarise yourself with the other options and
compare them to the Biopython help text
Step53: Note that you can also specify (or change or look at) the settings like
this
Step54: Next we want to use Python to run this command for us. As explained
above, for full control, we recommend you use the built in Python
subprocess module, but for simple usage the wrapper object usually
suffices
Step55: Next we can load the output file with Bio.AlignIO as discussed earlier
in this chapter, as the emboss format | Python Code:
from Bio import AlignIO
alignment = AlignIO.read("data/PF05371_seed.sth", "stockholm")
Explanation: Source of the materials: Biopython cookbook (adapted)
<font color='red'>Status: Draft</font>
Multiple Sequence Alignment objects {#chapter:Bio.AlignIO}
This chapter is about Multiple Sequence Alignments, by which we mean a
collection of multiple sequences which have been aligned together –
usually with the insertion of gap characters, and addition of leading or
trailing gaps – such that all the sequence strings are the same length.
Such an alignment can be regarded as a matrix of letters, where each row
is held as a SeqRecord object internally.
We will introduce the MultipleSeqAlignment object which holds this
kind of data, and the Bio.AlignIO module for reading and writing them
as various file formats (following the design of the Bio.SeqIO module
from the previous chapter). Note that both Bio.SeqIO and Bio.AlignIO
can read and write sequence alignment files. The appropriate choice will
depend largely on what you want to do with the data.
The final part of this chapter is about our command line wrappers for
common multiple sequence alignment tools like ClustalW and MUSCLE.
Parsing or Reading Sequence Alignments
We have two functions for reading in sequence alignments,
Bio.AlignIO.read() and Bio.AlignIO.parse() which following the
convention introduced in Bio.SeqIO are for files containing one or
multiple alignments respectively.
Using Bio.AlignIO.parse() will return an iterator which
gives MultipleSeqAlignment objects. Iterators are typically used in a
for loop. Examples of situations where you will have multiple different
alignments include resampled alignments from the PHYLIP tool seqboot,
or multiple pairwise alignments from the EMBOSS tools water or
needle, or Bill Pearson’s FASTA tools.
However, in many situations you will be dealing with files which contain
only a single alignment. In this case, you should use the
Bio.AlignIO.read() function which returns a single
MultipleSeqAlignment object.
Both functions expect two mandatory arguments:
The first argument is a handle to read the data from,
typically an open file (see Section [sec:appendix-handles]), or
a filename.
The second argument is a lower case string specifying the
alignment format. As in Bio.SeqIO we don’t try and guess the file
format for you! See http://biopython.org/wiki/AlignIO for a full
listing of supported formats.
There is also an optional seq_count argument which is discussed in
Section [sec:AlignIO-count-argument] below for dealing with ambiguous
file formats which may contain more than one alignment.
A further optional alphabet argument allowing you to specify the
expected alphabet. This can be useful as many alignment file formats do
not explicitly label the sequences as RNA, DNA or protein – which means
Bio.AlignIO will default to using a generic alphabet.
Single Alignments
As an example, consider the following annotation rich protein alignment
in the PFAM or Stockholm file format:
```
# STOCKHOLM 1.0
#=GS COATB_BPIKE/30-81 AC P03620.1
#=GS COATB_BPIKE/30-81 DR PDB; 1ifl ; 1-52;
#=GS Q9T0Q8_BPIKE/1-52 AC Q9T0Q8.1
#=GS COATB_BPI22/32-83 AC P15416.1
#=GS COATB_BPM13/24-72 AC P69541.1
#=GS COATB_BPM13/24-72 DR PDB; 2cpb ; 1-49;
#=GS COATB_BPM13/24-72 DR PDB; 2cps ; 1-49;
#=GS COATB_BPZJ2/1-49 AC P03618.1
#=GS Q9T0Q9_BPFD/1-49 AC Q9T0Q9.1
#=GS Q9T0Q9_BPFD/1-49 DR PDB; 1nh4 A; 1-49;
#=GS COATB_BPIF1/22-73 AC P03619.2
#=GS COATB_BPIF1/22-73 DR PDB; 1ifk ; 1-50;
COATB_BPIKE/30-81 AEPNAATNYATEAMDSLKTQAIDLISQTWPVVTTVVVAGLVIRLFKKFSSKA
#=GR COATB_BPIKE/30-81 SS -HHHHHHHHHHHHHH--HHHHHHHH--HHHHHHHHHHHHHHHHHHHHH----
Q9T0Q8_BPIKE/1-52 AEPNAATNYATEAMDSLKTQAIDLISQTWPVVTTVVVAGLVIKLFKKFVSRA
COATB_BPI22/32-83 DGTSTATSYATEAMNSLKTQATDLIDQTWPVVTSVAVAGLAIRLFKKFSSKA
COATB_BPM13/24-72 AEGDDP...AKAAFNSLQASATEYIGYAWAMVVVIVGATIGIKLFKKFTSKA
#=GR COATB_BPM13/24-72 SS ---S-T...CHCHHHHCCCCTCCCTTCHHHHHHHHHHHHHHHHHHHHCTT--
COATB_BPZJ2/1-49 AEGDDP...AKAAFDSLQASATEYIGYAWAMVVVIVGATIGIKLFKKFASKA
Q9T0Q9_BPFD/1-49 AEGDDP...AKAAFDSLQASATEYIGYAWAMVVVIVGATIGIKLFKKFTSKA
#=GR Q9T0Q9_BPFD/1-49 SS ------...-HHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHHH--
COATB_BPIF1/22-73 FAADDATSQAKAAFDSLTAQATEMSGYAWALVVLVVGATVGIKLFKKFVSRA
#=GR COATB_BPIF1/22-73 SS XX-HHHH--HHHHHH--HHHHHHH--HHHHHHHHHHHHHHHHHHHHHHH---
#=GC SS_cons XHHHHHHHHHHHHHHHCHHHHHHHHCHHHHHHHHHHHHHHHHHHHHHHHC--
#=GC seq_cons AEssss...AptAhDSLpspAT-hIu.sWshVsslVsAsluIKLFKKFsSKA
//
```
This is the seed alignment for the Phage_Coat_Gp8 (PF05371) PFAM
entry, downloaded from a now out of date release of PFAM from
http://pfam.sanger.ac.uk/. We can load this file as follows (assuming
it has been saved to disk as “PF05371_seed.sth” in the current working
directory):
End of explanation
print(alignment)
Explanation: This code will print out a summary of the alignment:
End of explanation
from Bio import AlignIO
alignment = AlignIO.read("data/PF05371_seed.sth", "stockholm")
print("Alignment length %i" % alignment.get_alignment_length())
for record in alignment:
print("%s - %s" % (record.seq, record.id))
Explanation: You’ll notice in the above output the sequences have been truncated. We
could instead write our own code to format this as we please by
iterating over the rows as SeqRecord objects:
End of explanation
for record in alignment:
if record.dbxrefs:
print("%s %s" % (record.id, record.dbxrefs))
Explanation: You could also use the alignment object’s format method to show it in
a particular file format – see Section [sec:alignment-format-method]
for details.
Did you notice in the raw file above that several of the sequences
include database cross-references to the PDB and the associated known
secondary structure? Try this:
End of explanation
for record in alignment:
print(record)
Explanation: To have a look at all the sequence annotation, try this:
End of explanation
from Bio import AlignIO
help(AlignIO)
Explanation: Sanger provide a nice web interface at
http://pfam.sanger.ac.uk/family?acc=PF05371 which will actually let
you download this alignment in several other formats. This is what the
file looks like in the FASTA file format:
```
>COATB_BPIKE/30-81
AEPNAATNYATEAMDSLKTQAIDLISQTWPVVTTVVVAGLVIRLFKKFSSKA
>Q9T0Q8_BPIKE/1-52
AEPNAATNYATEAMDSLKTQAIDLISQTWPVVTTVVVAGLVIKLFKKFVSRA
>COATB_BPI22/32-83
DGTSTATSYATEAMNSLKTQATDLIDQTWPVVTSVAVAGLAIRLFKKFSSKA
>COATB_BPM13/24-72
AEGDDP---AKAAFNSLQASATEYIGYAWAMVVVIVGATIGIKLFKKFTSKA
>COATB_BPZJ2/1-49
AEGDDP---AKAAFDSLQASATEYIGYAWAMVVVIVGATIGIKLFKKFASKA
>Q9T0Q9_BPFD/1-49
AEGDDP---AKAAFDSLQASATEYIGYAWAMVVVIVGATIGIKLFKKFTSKA
>COATB_BPIF1/22-73
FAADDATSQAKAAFDSLTAQATEMSGYAWALVVLVVGATVGIKLFKKFVSRA
```
Note the website should have an option about showing gaps as periods
(dots) or dashes, we’ve shown dashes above. Assuming you download and
save this as file “PF05371_seed.faa” then you can load it with almost
exactly the same code:
```
from Bio import AlignIO
alignment = AlignIO.read("PF05371_seed.faa", "fasta")
print(alignment)
```
All that has changed in this code is the filename and the format string.
You’ll get the same output as before, the sequences and record
identifiers are the same. However, as you should expect, if you check
each SeqRecord there is no annotation nor database cross-references
because these are not included in the FASTA file format.
Note that rather than using the Sanger website, you could have used
Bio.AlignIO to convert the original Stockholm format file into a FASTA
file yourself (see below).
With any supported file format, you can load an alignment in exactly the
same way just by changing the format string. For example, use “phylip”
for PHYLIP files, “nexus” for NEXUS files or “emboss” for the alignments
output by the EMBOSS tools. There is a full listing on the wiki page
(http://biopython.org/wiki/AlignIO) and in the built in documentation
(also
online):
End of explanation
from Bio import AlignIO
alignments = AlignIO.parse("data/resampled.phy", "phylip")
for alignment in alignments:
print(alignment)
print("")
Explanation: Multiple Alignments
The previous section focused on reading files containing a single
alignment. In general however, files can contain more than one
alignment, and to read these files we must use the Bio.AlignIO.parse()
function.
Suppose you have a small alignment in PHYLIP format:
```
5 6
Alpha AACAAC
Beta AACCCC
Gamma ACCAAC
Delta CCACCA
Epsilon CCAAAC
```
If you wanted to bootstrap a phylogenetic tree using the PHYLIP tools,
one of the steps would be to create a set of many resampled alignments
using the tool seqboot. This would give output something like this,
which has been abbreviated for conciseness:
```
5 6
Alpha AAACCA
Beta AAACCC
Gamma ACCCCA
Delta CCCAAC
Epsilon CCCAAA
5 6
Alpha AAACAA
Beta AAACCC
Gamma ACCCAA
Delta CCCACC
Epsilon CCCAAA
5 6
Alpha AAAAAC
Beta AAACCC
Gamma AACAAC
Delta CCCCCA
Epsilon CCCAAC
...
5 6
Alpha AAAACC
Beta ACCCCC
Gamma AAAACC
Delta CCCCAA
Epsilon CAAACC
```
If you wanted to read this in using Bio.AlignIO you could use:
End of explanation
from Bio import AlignIO
alignments = list(AlignIO.parse("data/resampled.phy", "phylip"))
last_align = alignments[-1]
first_align = alignments[0]
Explanation: As with the function Bio.SeqIO.parse(), using Bio.AlignIO.parse()
returns an iterator. If you want to keep all the alignments in memory at
once, which will allow you to access them in any order, then turn the
iterator into a list:
End of explanation
# 'handle' is assumed to already be a handle (for example an io.StringIO
# wrapping the FASTA text of the third example below, or an open file)
for alignment in AlignIO.parse(handle, "fasta", seq_count=2):
    print("Alignment length %i" % alignment.get_alignment_length())
    for record in alignment:
        print("%s - %s" % (record.seq, record.id))
    print("")
Explanation: Ambiguous Alignments {#sec:AlignIO-count-argument}
Many alignment file formats can explicitly store more than one
alignment, and the division between each alignment is clear. However,
when a general sequence file format has been used there is no such block
structure. The most common such situation is when alignments have been
saved in the FASTA file format. For example consider the following:
```
>Alpha
ACTACGACTAGCTCAG--G
>Beta
ACTACCGCTAGCTCAGAAG
>Gamma
ACTACGGCTAGCACAGAAG
>Alpha
ACTACGACTAGCTCAGG--
>Beta
ACTACCGCTAGCTCAGAAG
>Gamma
ACTACGGCTAGCACAGAAG
```
This could be a single alignment containing six sequences (with repeated
identifiers). Or, judging from the identifiers, this is probably two
different alignments each with three sequences, which happen to all have
the same length.
What about this next example?
```
>Alpha
ACTACGACTAGCTCAG--G
>Beta
ACTACCGCTAGCTCAGAAG
>Alpha
ACTACGACTAGCTCAGG--
>Gamma
ACTACGGCTAGCACAGAAG
>Alpha
ACTACGACTAGCTCAGG--
>Delta
ACTACGGCTAGCACAGAAG
```
Again, this could be a single alignment with six sequences. However this
time based on the identifiers we might guess this is three pairwise
alignments which by chance have all got the same lengths.
This final example is similar:
```
>Alpha
ACTACGACTAGCTCAG--G
>XXX
ACTACCGCTAGCTCAGAAG
>Alpha
ACTACGACTAGCTCAGG
>YYY
ACTACGGCAAGCACAGG
>Alpha
--ACTACGAC--TAGCTCAGG
>ZZZ
GGACTACGACAATAGCTCAGG
```
In this third example, because of the differing lengths, this cannot be
treated as a single alignment containing all six records. However, it
could be three pairwise alignments.
Clearly trying to store more than one alignment in a FASTA file is not
ideal. However, if you are forced to deal with these as input files
Bio.AlignIO can cope with the most common situation where all the
alignments have the same number of records. One example of this is a
collection of pairwise alignments, which can be produced by the EMBOSS
tools needle and water – although in this situation, Bio.AlignIO
should be able to understand their native output using “emboss” as the
format string.
To interpret these FASTA examples as several separate alignments, we can
use Bio.AlignIO.parse() with the optional seq_count argument which
specifies how many sequences are expected in each alignment (in these
examples, 3, 2 and 2 respectively). For example, using the third example
as the input data:
End of explanation
from Bio.Alphabet import generic_dna
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.Align import MultipleSeqAlignment
align1 = MultipleSeqAlignment([
SeqRecord(Seq("ACTGCTAGCTAG", generic_dna), id="Alpha"),
SeqRecord(Seq("ACT-CTAGCTAG", generic_dna), id="Beta"),
SeqRecord(Seq("ACTGCTAGDTAG", generic_dna), id="Gamma"),
])
align2 = MultipleSeqAlignment([
SeqRecord(Seq("GTCAGC-AG", generic_dna), id="Delta"),
SeqRecord(Seq("GACAGCTAG", generic_dna), id="Epsilon"),
SeqRecord(Seq("GTCAGCTAG", generic_dna), id="Zeta"),
])
align3 = MultipleSeqAlignment([
SeqRecord(Seq("ACTAGTACAGCTG", generic_dna), id="Eta"),
SeqRecord(Seq("ACTAGTACAGCT-", generic_dna), id="Theta"),
SeqRecord(Seq("-CTACTACAGGTG", generic_dna), id="Iota"),
])
my_alignments = [align1, align2, align3]
Explanation: Using Bio.AlignIO.read() or Bio.AlignIO.parse() without the
seq_count argument would give a single alignment containing all six
records for the first two examples. For the third example, an exception
would be raised because the lengths differ preventing them being turned
into a single alignment.
If the file format itself has a block structure allowing Bio.AlignIO
to determine the number of sequences in each alignment directly, then
the seq_count argument is not needed. If it is supplied, and doesn’t
agree with the file contents, an error is raised.
Note that this optional seq_count argument assumes each alignment in
the file has the same number of sequences. Hypothetically you may come
across stranger situations, for example a FASTA file containing several
alignments each with a different number of sequences – although I would
love to hear of a real world example of this. Assuming you cannot get
the data in a nicer file format, there is no straightforward way to
deal with this using Bio.AlignIO. In this case, you could consider
reading in the sequences themselves using Bio.SeqIO and batching them
together to create the alignments as appropriate.
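For example, if you knew each alignment contained a fixed number of records, a
rough sketch of that batching (the filename and block size here are only
placeholders) might be:
```
from Bio import SeqIO
from Bio.Align import MultipleSeqAlignment
records = list(SeqIO.parse("mixed_alignments.faa", "fasta"))
block = 3  # number of sequences per alignment
alignments = [MultipleSeqAlignment(records[i:i + block])
              for i in range(0, len(records), block)]
```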
Writing Alignments
We’ve talked about using Bio.AlignIO.read() and Bio.AlignIO.parse()
for alignment input (reading files), and now we’ll look at
Bio.AlignIO.write() which is for alignment output (writing files).
This is a function taking three arguments: some MultipleSeqAlignment
objects (or for backwards compatibility the obsolete Alignment
objects), a handle or filename to write to, and a sequence format.
Here is an example, where we start by creating a few
MultipleSeqAlignment objects the hard way (by hand, rather than by
loading them from a file). Note we create some SeqRecord objects to
construct the alignment from.
End of explanation
from Bio import AlignIO
AlignIO.write(my_alignments, "my_example.phy", "phylip")
Explanation: Now we have a list of MultipleSeqAlignment objects, we'll write them to a PHYLIP
format file:
End of explanation
from Bio import AlignIO
count = AlignIO.convert("data/PF05371_seed.sth", "stockholm", "PF05371_seed.aln", "clustal")
print("Converted %i alignments" % count)
Explanation: And if you open this file in your favourite text editor it should look
like this:
```
3 12
Alpha ACTGCTAGCT AG
Beta ACT-CTAGCT AG
Gamma ACTGCTAGDT AG
3 9
Delta GTCAGC-AG
Epsilon GACAGCTAG
Zeta GTCAGCTAG
3 13
Eta ACTAGTACAG CTG
Theta ACTAGTACAG CT-
Iota -CTACTACAG GTG
```
It's more common to want to load an existing alignment, and save that,
perhaps after some simple manipulation like removing certain rows or
columns.
Suppose you wanted to know how many alignments the Bio.AlignIO.write()
function wrote to the handle? If your alignments were in a list like the
example above, you could just use len(my_alignments), however you
can’t do that when your records come from a generator/iterator.
Therefore the Bio.AlignIO.write() function returns the number of
alignments written to the file.
Note - If you tell the Bio.AlignIO.write() function to write to a
file that already exists, the old file will be overwritten without any
warning.
Converting between sequence alignment file formats {#sec:converting-alignments}
Converting between sequence alignment file formats with Bio.AlignIO
works in the same way as converting between sequence file formats with
Bio.SeqIO (Section [sec:SeqIO-conversion]). Generally we load the
alignment(s) using Bio.AlignIO.parse() and then save them using
Bio.AlignIO.write() – or just use the Bio.AlignIO.convert() helper
function.
For this example, we’ll load the PFAM/Stockholm format file used earlier
and save it as a Clustal W format file:
End of explanation
from Bio import AlignIO
alignments = AlignIO.parse("data/PF05371_seed.sth", "stockholm")
count = AlignIO.write(alignments, "PF05371_seed.aln", "clustal")
print("Converted %i alignments" % count)
Explanation: Or, using Bio.AlignIO.parse() and Bio.AlignIO.write():
End of explanation
from Bio import AlignIO
alignment = AlignIO.read("data/PF05371_seed.sth", "stockholm")
AlignIO.write([alignment], "PF05371_seed.aln", "clustal")
Explanation: The Bio.AlignIO.write() function expects to be given multiple
alignment objects. In the example above we gave it the alignment
iterator returned by Bio.AlignIO.parse().
In this case, we know there is only one alignment in the file so we
could have used Bio.AlignIO.read() instead, but notice we have to pass
this alignment to Bio.AlignIO.write() as a single element list:
End of explanation
from Bio import AlignIO
AlignIO.convert("data/PF05371_seed.sth", "stockholm", "PF05371_seed.phy", "phylip")
Explanation: Either way, you should end up with the same new Clustal W format file
“PF05371_seed.aln” with the following content:
```
CLUSTAL X (1.81) multiple sequence alignment
COATB_BPIKE/30-81 AEPNAATNYATEAMDSLKTQAIDLISQTWPVVTTVVVAGLVIRLFKKFSS
Q9T0Q8_BPIKE/1-52 AEPNAATNYATEAMDSLKTQAIDLISQTWPVVTTVVVAGLVIKLFKKFVS
COATB_BPI22/32-83 DGTSTATSYATEAMNSLKTQATDLIDQTWPVVTSVAVAGLAIRLFKKFSS
COATB_BPM13/24-72 AEGDDP---AKAAFNSLQASATEYIGYAWAMVVVIVGATIGIKLFKKFTS
COATB_BPZJ2/1-49 AEGDDP---AKAAFDSLQASATEYIGYAWAMVVVIVGATIGIKLFKKFAS
Q9T0Q9_BPFD/1-49 AEGDDP---AKAAFDSLQASATEYIGYAWAMVVVIVGATIGIKLFKKFTS
COATB_BPIF1/22-73 FAADDATSQAKAAFDSLTAQATEMSGYAWALVVLVVGATVGIKLFKKFVS
COATB_BPIKE/30-81 KA
Q9T0Q8_BPIKE/1-52 RA
COATB_BPI22/32-83 KA
COATB_BPM13/24-72 KA
COATB_BPZJ2/1-49 KA
Q9T0Q9_BPFD/1-49 KA
COATB_BPIF1/22-73 RA
```
Alternatively, you could make a PHYLIP format file which we’ll name
“PF05371_seed.phy”:
End of explanation
from Bio import AlignIO
AlignIO.convert("data/PF05371_seed.sth", "stockholm", "PF05371_seed.phy", "phylip-relaxed")
Explanation: This time the output looks like this:
```
7 52
COATB_BPIK AEPNAATNYA TEAMDSLKTQ AIDLISQTWP VVTTVVVAGL VIRLFKKFSS
Q9T0Q8_BPI AEPNAATNYA TEAMDSLKTQ AIDLISQTWP VVTTVVVAGL VIKLFKKFVS
COATB_BPI2 DGTSTATSYA TEAMNSLKTQ ATDLIDQTWP VVTSVAVAGL AIRLFKKFSS
COATB_BPM1 AEGDDP---A KAAFNSLQAS ATEYIGYAWA MVVVIVGATI GIKLFKKFTS
COATB_BPZJ AEGDDP---A KAAFDSLQAS ATEYIGYAWA MVVVIVGATI GIKLFKKFAS
Q9T0Q9_BPF AEGDDP---A KAAFDSLQAS ATEYIGYAWA MVVVIVGATI GIKLFKKFTS
COATB_BPIF FAADDATSQA KAAFDSLTAQ ATEMSGYAWA LVVLVVGATV GIKLFKKFVS
KA
RA
KA
KA
KA
KA
RA
```
One of the big handicaps of the original PHYLIP alignment file format is
that the sequence identifiers are strictly truncated at ten characters.
In this example, as you can see the resulting names are still unique -
but they are not very readable. As a result, a more relaxed variant of
the original PHYLIP format is now quite widely used:
End of explanation
from Bio import AlignIO
alignment = AlignIO.read("data/PF05371_seed.sth", "stockholm")
name_mapping = {}
for i, record in enumerate(alignment):
name_mapping[i] = record.id
record.id = "seq%i" % i
print(name_mapping)
AlignIO.write([alignment], "PF05371_seed.phy", "phylip")
Explanation: This time the output looks like this, using a longer indentation to
allow all the identifiers to be given in full:
```
7 52
COATB_BPIKE/30-81 AEPNAATNYA TEAMDSLKTQ AIDLISQTWP VVTTVVVAGL VIRLFKKFSS
Q9T0Q8_BPIKE/1-52 AEPNAATNYA TEAMDSLKTQ AIDLISQTWP VVTTVVVAGL VIKLFKKFVS
COATB_BPI22/32-83 DGTSTATSYA TEAMNSLKTQ ATDLIDQTWP VVTSVAVAGL AIRLFKKFSS
COATB_BPM13/24-72 AEGDDP---A KAAFNSLQAS ATEYIGYAWA MVVVIVGATI GIKLFKKFTS
COATB_BPZJ2/1-49 AEGDDP---A KAAFDSLQAS ATEYIGYAWA MVVVIVGATI GIKLFKKFAS
Q9T0Q9_BPFD/1-49 AEGDDP---A KAAFDSLQAS ATEYIGYAWA MVVVIVGATI GIKLFKKFTS
COATB_BPIF1/22-73 FAADDATSQA KAAFDSLTAQ ATEMSGYAWA LVVLVVGATV GIKLFKKFVS
KA
RA
KA
KA
KA
KA
RA
```
If you have to work with the original strict PHYLIP format, then you may
need to compress the identifiers somehow – or assign your own names or
numbering system. The following bit of code manipulates the record
identifiers before saving the output:
End of explanation
from Bio import AlignIO
alignment = AlignIO.read("data/PF05371_seed.sth", "stockholm")
print("Number of rows: %i" % len(alignment))
for record in alignment:
print("%s - %s" % (record.seq, record.id))
Explanation: Here is the new (strict) PHYLIP format output:
```
7 52
seq0 AEPNAATNYA TEAMDSLKTQ AIDLISQTWP VVTTVVVAGL VIRLFKKFSS
seq1 AEPNAATNYA TEAMDSLKTQ AIDLISQTWP VVTTVVVAGL VIKLFKKFVS
seq2 DGTSTATSYA TEAMNSLKTQ ATDLIDQTWP VVTSVAVAGL AIRLFKKFSS
seq3 AEGDDP---A KAAFNSLQAS ATEYIGYAWA MVVVIVGATI GIKLFKKFTS
seq4 AEGDDP---A KAAFDSLQAS ATEYIGYAWA MVVVIVGATI GIKLFKKFAS
seq5 AEGDDP---A KAAFDSLQAS ATEYIGYAWA MVVVIVGATI GIKLFKKFTS
seq6 FAADDATSQA KAAFDSLTAQ ATEMSGYAWA LVVLVVGATV GIKLFKKFVS
KA
RA
KA
KA
KA
KA
RA
```
In general, because of the identifier limitation, working with strict
PHYLIP file formats shouldn’t be your first choice. Using the
PFAM/Stockholm format on the other hand allows you to record a lot of
additional annotation too.
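As a rough sketch, you could later read the strict PHYLIP file back in and
restore the original identifiers using the name_mapping dictionary built above:
```
from Bio import AlignIO
alignment2 = AlignIO.read("PF05371_seed.phy", "phylip")
for record in alignment2:
    record.id = name_mapping[int(record.id[3:])]  # "seq0" -> 0 -> original identifier
print(alignment2)
```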
Getting your alignment objects as formatted strings {#sec:alignment-format-method}
The Bio.AlignIO interface is based on handles, which means if you want
to get your alignment(s) into a string in a particular file format you
need to do a little bit more work (see below). However, you will
probably prefer to take advantage of the alignment object’s format()
method. This takes a single mandatory argument, a lower case string
which is supported by Bio.AlignIO as an output format. For example:
```
from Bio import AlignIO
alignment = AlignIO.read("PF05371_seed.sth", "stockholm")
print(alignment.format("clustal"))
```
As described in Section [sec:SeqRecord-format], the SeqRecord object
has a similar method using output formats supported by Bio.SeqIO.
Internally the format() method is using the StringIO string based
handle and calling Bio.AlignIO.write(). You can do this in your own
code if for example you are using an older version of Biopython:
```
from Bio import AlignIO
from StringIO import StringIO
alignments = AlignIO.parse("PF05371_seed.sth", "stockholm")
out_handle = StringIO()
AlignIO.write(alignments, out_handle, "clustal")
clustal_data = out_handle.getvalue()
print(clustal_data)
```
Manipulating Alignments {#sec:manipulating-alignments}
Now that we’ve covered loading and saving alignments, we’ll look at what
else you can do with them.
Slicing alignments
First of all, in some senses the alignment objects act like a Python
list of SeqRecord objects (the rows). With this model in mind
hopefully the actions of len() (the number of rows) and iteration
(each row as a SeqRecord) make sense:
End of explanation
print(alignment)
print(alignment[3:7])
Explanation: You can also use the list-like append and extend methods to add more
rows to the alignment (as SeqRecord objects). Keeping the list
metaphor in mind, simple slicing of the alignment should also make sense
- it selects some of the rows giving back another alignment object:
End of explanation
print(alignment[2, 6])
Explanation: What if you wanted to select by column? Those of you who have used the
NumPy matrix or array objects won’t be surprised at this - you use a
double index.
End of explanation
print(alignment[2].seq[6])
Explanation: Using two integer indices pulls out a single letter, short hand for
this:
End of explanation
print(alignment[:, 6])
Explanation: You can pull out a single column as a string like this:
End of explanation
print(alignment[3:6, :6])
Explanation: You can also select a range of columns. For example, to pick out those
same three rows we extracted earlier, but take just their first six
columns:
End of explanation
print(alignment[:, :6])
Explanation: Leaving the first index as : means take all the rows:
End of explanation
print(alignment[:, 6:9])
Explanation: This brings us to a neat way to remove a section. Notice columns 7, 8
and 9 which are gaps in three of the seven sequences:
End of explanation
print(alignment[:, 9:])
Explanation: Again, you can slice to get everything after the ninth column:
End of explanation
edited = alignment[:, :6] + alignment[:, 9:]
print(edited)
Explanation: Now, the interesting thing is that addition of alignment objects works
by column. This lets you do this as a way to remove a block of columns:
End of explanation
edited.sort()
print(edited)
Explanation: Another common use of alignment addition would be to combine alignments
for several different genes into a meta-alignment. Watch out though -
the identifiers need to match up (see Section [sec:SeqRecord-addition]
for how adding SeqRecord objects works). You may find it helpful to
first sort the alignment rows alphabetically by id:
End of explanation
import numpy as np
from Bio import AlignIO
alignment = AlignIO.read("data/PF05371_seed.sth", "stockholm")
align_array = np.array([list(rec) for rec in alignment], np.character)
print("Array shape %i by %i" % align_array.shape)
Explanation: Note that you can only add two alignments together if they have the same
number of rows.
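For example, a sketch of building a meta-alignment from two per-gene alignments
(the filenames here are made up) might look like this – sorting first so the
rows line up by identifier:
```
from Bio import AlignIO
gene1 = AlignIO.read("gene1.aln", "clustal")
gene2 = AlignIO.read("gene2.aln", "clustal")
gene1.sort()
gene2.sort()
combined = gene1 + gene2
```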
Alignments as arrays
Depending on what you are doing, it can be more useful to turn the
alignment object into an array of letters – and you can do this with
NumPy:
End of explanation
align_array = np.array([list(rec) for rec in alignment], np.character, order="F")
Explanation: If you will be working heavily with the columns, you can tell NumPy to
store the array by column (as in Fortran) rather than its default of by
row (as in C):
End of explanation
import Bio.Align.Applications
dir(Bio.Align.Applications)
Explanation: Note that this leaves the original Biopython alignment object and the
NumPy array in memory as separate objects - editing one will not update
the other!
Alignment Tools {#sec:alignment-tools}
There are lots of algorithms out there for aligning sequences, both
pairwise alignments and multiple sequence alignments. These calculations
are relatively slow, and you generally wouldn’t want to write such an
algorithm in Python. Instead, you can use Biopython to invoke a command
line tool on your behalf. Normally you would:
Prepare an input file of your unaligned sequences, typically this
will be a FASTA file which you might create using Bio.SeqIO
(see Chapter [chapter:Bio.SeqIO]).
Call the command line tool to process this input file, typically via
one of Biopython’s command line wrappers (which we’ll discuss here).
Read the output from the tool, i.e. your aligned sequences,
typically using Bio.AlignIO (see earlier in this chapter).
All the command line wrappers we’re going to talk about in this chapter
follow the same style. You create a command line object specifying the
options (e.g. the input filename and the output filename), then invoke
this command line via a Python operating system call (e.g. using the
subprocess module).
Most of these wrappers are defined in the Bio.Align.Applications
module:
End of explanation
from Bio.Align.Applications import ClustalwCommandline
help(ClustalwCommandline)
Explanation: (Ignore the entries starting with an underscore – these have special
meaning in Python.) The module Bio.Emboss.Applications has wrappers
for some of the EMBOSS suite,
including needle and water, which are described below in
Section [seq:emboss-needle-water], and wrappers for the EMBOSS
packaged versions of the PHYLIP tools (which EMBOSS refer to as one of
their EMBASSY packages - third party tools with an EMBOSS style
interface). We won’t explore all these alignment tools here in the
section, just a sample, but the same principles apply.
ClustalW {#sec:align_clustal}
ClustalW is a popular command line tool for multiple sequence alignment
(there is also a graphical interface called ClustalX). Biopython’s
Bio.Align.Applications module has a wrapper for this alignment tool
(and several others).
Before trying to use ClustalW from within Python, you should first try
running the ClustalW tool yourself by hand at the command line, to
familiarise yourself with the other options. You'll find the Biopython
wrapper is very faithful to the actual command line API:
End of explanation
from Bio.Align.Applications import ClustalwCommandline
cline = ClustalwCommandline("clustalw2", infile="data/opuntia.fasta")
print(cline)
Explanation: For the most basic usage, all you need is to have a FASTA input file,
such as
opuntia.fasta
(available online or in the Doc/examples subdirectory of the Biopython
source code). This is a small FASTA file containing seven prickly-pear
DNA sequences (from the cactus family Opuntia).
By default ClustalW will generate an alignment and guide tree file with
names based on the input FASTA file, in this case opuntia.aln and
opuntia.dnd, but you can override this or make it explicit:
End of explanation
import os
from Bio.Align.Applications import ClustalwCommandline
clustalw_exe = r"C:\Program Files\new clustal\clustalw2.exe"
clustalw_cline = ClustalwCommandline(clustalw_exe, infile="data/opuntia.fasta")
assert os.path.isfile(clustalw_exe), "Clustal W executable missing"
stdout, stderr = clustalw_cline()
Explanation: Notice here we have given the executable name as clustalw2, indicating
we have version two installed, which has a different filename to version
one (clustalw, the default). Fortunately both versions support the
same set of arguments at the command line (and indeed, should be
functionally identical).
You may find that even though you have ClustalW installed, the above
command doesn’t work – you may get a message about “command not found”
(especially on Windows). This indicates that the ClustalW executable is
not on your PATH (an environment variable, a list of directories to be
searched). You can either update your PATH setting to include the
location of your copy of ClustalW tools (how you do this will depend on
your OS), or simply type in the full path of the tool. For example:
End of explanation
from Bio import AlignIO
align = AlignIO.read("data/opuntia.aln", "clustal")
print(align)
Explanation: Remember, in Python strings \n and \t are by default interpreted as
a new line and a tab – which is why we've put a letter "r" at the start
for a raw string that isn’t translated in this way. This is generally
good practice when specifying a Windows style file name.
Internally this uses the subprocess module which is now the
recommended way to run another program in Python. This replaces older
options like the os.system() and the os.popen* functions.
Now, at this point it helps to know about how command line tools “work”.
When you run a tool at the command line, it will often print text output
directly to screen. This text can be captured or redirected, via two
“pipes”, called standard output (the normal results) and standard error
(for error messages and debug messages). There is also standard input,
which is any text fed into the tool. These names get shortened to stdin,
stdout and stderr. When the tool finishes, it has a return code (an
integer), which by convention is zero for success.
When you run the command line tool like this via the Biopython wrapper,
it will wait for it to finish, and check the return code. If this is non
zero (indicating an error), an exception is raised. The wrapper then
returns two strings, stdout and stderr.
In the case of ClustalW, when run at the command line all the important
output is written directly to the output files. Everything normally
printed to screen while you wait (via stdout or stderr) is boring and
can be ignored (assuming it worked).
What we care about are the two output files, the alignment and the guide
tree. We didn’t tell ClustalW what filenames to use, but it defaults to
picking names based on the input file. In this case the output should be
in the file opuntia.aln. You should be able to work out how to read in
the alignment using Bio.AlignIO by now:
End of explanation
from Bio import Phylo
tree = Phylo.read("data/opuntia.dnd", "newick")
Phylo.draw_ascii(tree)
Explanation: In case you are interested (and this is an aside from the main thrust of
this chapter), the opuntia.dnd file ClustalW creates is just a
standard Newick tree file, and Bio.Phylo can parse these:
End of explanation
from Bio.Align.Applications import MuscleCommandline
help(MuscleCommandline)
Explanation: Chapter [sec:Phylo] covers Biopython’s support for phylogenetic trees
in more depth.
MUSCLE
MUSCLE is a more recent multiple sequence alignment tool than ClustalW,
and Biopython also has a wrapper for it under the
Bio.Align.Applications module. As before, we recommend you try using
MUSCLE from the command line before trying it from within Python, as the
Biopython wrapper is very faithful to the actual command line API:
End of explanation
from Bio.Align.Applications import MuscleCommandline
cline = MuscleCommandline(input="data/opuntia.fasta", out="opuntia.txt")
print(cline)
Explanation: For the most basic usage, all you need is to have a FASTA input file,
such as
opuntia.fasta
(available online or in the Doc/examples subdirectory of the Biopython
source code). You can then tell MUSCLE to read in this FASTA file, and
write the alignment to an output file:
End of explanation
from Bio.Align.Applications import MuscleCommandline
cline = MuscleCommandline(input="data/opuntia.fasta", out="opuntia.aln", clw=True)
print(cline)
Explanation: Note that MUSCLE uses “-in” and “-out” but in Biopython we have to use
“input” and “out” as the keyword arguments or property names. This is
because “in” is a reserved word in Python.
By default MUSCLE will output the alignment as a FASTA file (using
gapped sequences). The Bio.AlignIO module should be able to read this
alignment using format=fasta. You can also ask for ClustalW-like
output:
End of explanation
from Bio.Align.Applications import MuscleCommandline
cline = MuscleCommandline(input="data/opuntia.fasta", out="opuntia.aln", clwstrict=True)
print(cline)
Explanation: Or, strict ClustalW output where the original ClustalW header line is
used for maximum compatibility:
End of explanation
from Bio.Align.Applications import MuscleCommandline
muscle_cline = MuscleCommandline(input="data/opuntia.fasta")
print(muscle_cline)
Explanation: The Bio.AlignIO module should be able to read these alignments using
format=clustal.
MUSCLE can also output in GCG MSF format (using the msf argument), but
Biopython can’t currently parse that, or using HTML which would give a
human readable web page (not suitable for parsing).
You can also set the other optional parameters, for example the maximum
number of iterations. See the built in help for details.
You would then run MUSCLE command line string as described above for
ClustalW, and parse the output using Bio.AlignIO to get an alignment
object.
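Putting those two steps together, one possible sketch (assuming MUSCLE is on
your PATH) is:
```
from Bio.Align.Applications import MuscleCommandline
from Bio import AlignIO
muscle_cline = MuscleCommandline(input="data/opuntia.fasta", out="opuntia.aln", clwstrict=True)
stdout, stderr = muscle_cline()
align = AlignIO.read("opuntia.aln", "clustal")
print(align)
```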
MUSCLE using stdout
Using a MUSCLE command line as in the examples above will write the
alignment to a file. This means there will be no important information
written to the standard out (stdout) or standard error (stderr) handles.
However, by default MUSCLE will write the alignment to standard output
(stdout). We can take advantage of this to avoid having a temporary
output file! For example:
End of explanation
from Bio.Align.Applications import MuscleCommandline
muscle_cline = MuscleCommandline(input="data/opuntia.fasta")
stdout, stderr = muscle_cline()
from io import StringIO
from Bio import AlignIO
align = AlignIO.read(StringIO(stdout), "fasta")
print(align)
Explanation: If we run this via the wrapper, we get back the output as a string. In
order to parse this we can use StringIO to turn it into a handle.
Remember that MUSCLE defaults to using FASTA as the output format:
End of explanation
import subprocess
import sys
from Bio.Align.Applications import MuscleCommandline
muscle_cline = MuscleCommandline(input="data/opuntia.fasta")
child = subprocess.Popen(str(muscle_cline),
stdout=subprocess.PIPE, stderr=subprocess.PIPE,
shell=(sys.platform != "win32"),
universal_newlines=True)
from Bio import AlignIO
align = AlignIO.read(child.stdout, "fasta")
print(align)
Explanation: The above approach is fairly simple, but if you are dealing with very
large output text the fact that all of stdout and stderr is loaded into
memory as a string can be a potential drawback. Using the subprocess
module we can work directly with handles instead:
End of explanation
from Bio import SeqIO
records = (r for r in SeqIO.parse("data/opuntia.fasta", "fasta") if len(r) < 900)
Explanation: MUSCLE using stdin and stdout
We don’t actually need to have our FASTA input sequences prepared in a
file, because by default MUSCLE will read in the input sequence from
standard input! Note this is a bit more advanced and fiddly, so don’t
bother with this technique unless you need to.
First, we’ll need some unaligned sequences in memory as SeqRecord
objects. For this demonstration I’m going to use a filtered version of
the original FASTA file (using a generator expression), taking just six
of the seven sequences:
End of explanation
from Bio.Align.Applications import MuscleCommandline
muscle_cline = MuscleCommandline(clwstrict=True)
print(muscle_cline)
Explanation: Then we create the MUSCLE command line, leaving the input and output to
their defaults (stdin and stdout). I’m also going to ask for strict
ClustalW format as for the output.
End of explanation
import subprocess
import sys
child = subprocess.Popen(str(muscle_cline),
stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.PIPE, universal_newlines=True,
shell=(sys.platform != "win32"))
Explanation: Now for the fiddly bits using the subprocess module, stdin and stdout:
End of explanation
SeqIO.write(records, child.stdin, "fasta")
child.stdin.close()
Explanation: That should start MUSCLE, but it will be sitting waiting for its FASTA
input sequences, which we must supply via its stdin handle:
End of explanation
from Bio import AlignIO
align = AlignIO.read(child.stdout, "clustal")
print(align)
Explanation: After writing the six sequences to the handle, MUSCLE will still be
waiting to see if that is all the FASTA sequences or not – so we must
signal that this is all the input data by closing the handle. At that
point MUSCLE should start to run, and we can ask for the output:
End of explanation
from io import StringIO
from Bio import SeqIO
records = (r for r in SeqIO.parse("data/opuntia.fasta", "fasta") if len(r) < 900)
handle = StringIO()
SeqIO.write(records, handle, "fasta")
data = handle.getvalue()
Explanation: Wow! There we are with a new alignment of just the six records, without
having created a temporary FASTA input file, or a temporary alignment
output file. However, a word of caution: Dealing with errors with this
style of calling external programs is much more complicated. It also
becomes far harder to diagnose problems, because you can’t try running
MUSCLE manually outside of Biopython (because you don’t have the input
file to supply). There can also be subtle cross platform issues (e.g.
Windows versus Linux, Python 2 versus Python 3), and how you run your
script can have an impact (e.g. at the command line, from IDLE or an
IDE, or as a GUI script). These are all generic Python issues though,
and not specific to Biopython.
If you find working directly with subprocess like this scary, there is
an alternative. If you execute the tool with muscle_cline() you can
supply any standard input as a big string, muscle_cline(stdin=...).
So, provided your data isn’t very big, you can prepare the FASTA input
in memory as a string using StringIO (see
Section [sec:appendix-handles]):
End of explanation
stdout, stderr = muscle_cline(stdin=data)
from Bio import AlignIO
align = AlignIO.read(StringIO(stdout), "clustal")
print(align)
Explanation: You can then run the tool and parse the alignment as follows:
End of explanation
from Bio.Emboss.Applications import NeedleCommandline
needle_cline = NeedleCommandline(asequence="data/alpha.faa", bsequence="data/beta.faa",
gapopen=10, gapextend=0.5, outfile="needle.txt")
print(needle_cline)
Explanation: You might find this easier, but it does require more memory (RAM) for
the strings used for the input FASTA and output Clustal formatted data.
EMBOSS needle and water {#seq:emboss-needle-water}
The EMBOSS suite includes the water
and needle tools for Smith-Waterman algorithm local alignment, and
Needleman-Wunsch global alignment. The tools share the same style
interface, so switching between the two is trivial – we’ll just use
needle here.
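For instance, the equivalent water command line (using the input files
introduced just below) is built in exactly the same way – this is only a sketch
of the call, not something run here:
```
from Bio.Emboss.Applications import WaterCommandline
water_cline = WaterCommandline(asequence="data/alpha.faa", bsequence="data/beta.faa",
                               gapopen=10, gapextend=0.5, outfile="water.txt")
print(water_cline)
```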
Suppose you want to do a global pairwise alignment between two
sequences, prepared in FASTA format as follows:
```
>HBA_HUMAN
MVLSPADKTNVKAAWGKVGAHAGEYGAEALERMFLSFPTTKTYFPHFDLSHGSAQVKGHG
KKVADALTNAVAHVDDMPNALSALSDLHAHKLRVDPVNFKLLSHCLLVTLAAHLPAEFTP
AVHASLDKFLASVSTVLTSKYR
```
in a file alpha.faa, and secondly in a file beta.faa:
```
>HBB_HUMAN
MVHLTPEEKSAVTALWGKVNVDEVGGEALGRLLVVYPWTQRFFESFGDLSTPDAVMGNPK
VKAHGKKVLGAFSDGLAHLDNLKGTFATLSELHCDKLHVDPENFRLLGNVLVCVLAHHFG
KEFTPPVQAAYQKVVAGVANALAHKYH
```
Let’s start by creating a complete needle command line object in one
go:
End of explanation
from Bio.Emboss.Applications import NeedleCommandline
needle_cline = NeedleCommandline(r"C:\EMBOSS\needle.exe",
asequence="data/alpha.faa", bsequence="data/beta.faa",
gapopen=10, gapextend=0.5, outfile="needle.txt")
Explanation: Why not try running this by hand at the command prompt? You should see
it does a pairwise comparison and records the output in the file
needle.txt (in the default EMBOSS alignment file format).
Even if you have EMBOSS installed, running this command may not work –
you might get a message about “command not found” (especially on
Windows). This probably means that the EMBOSS tools are not on your PATH
environment variable. You can either update your PATH setting, or simply
tell Biopython the full path to the tool, for example:
End of explanation
from Bio.Emboss.Applications import NeedleCommandline
help(NeedleCommandline)
Explanation: Remember in Python that for a default string \n or \t means a new
line or a tab – which is why we've put a letter "r" at the start for a
raw string.
At this point it might help to try running the EMBOSS tools yourself by
hand at the command line, to familiarise yourself with the other options and
compare them to the Biopython help text:
End of explanation
from Bio.Emboss.Applications import NeedleCommandline
needle_cline = NeedleCommandline()
needle_cline.asequence="data/alpha.faa"
needle_cline.bsequence="data/beta.faa"
needle_cline.gapopen=10
needle_cline.gapextend=0.5
needle_cline.outfile="needle.txt"
print(needle_cline)
print(needle_cline.outfile)
Explanation: Note that you can also specify (or change or look at) the settings like
this:
End of explanation
stdout, stderr = needle_cline()
print(stdout + stderr)
Explanation: Next we want to use Python to run this command for us. As explained
above, for full control, we recommend you use the built in Python
subprocess module, but for simple usage the wrapper object usually
suffices:
End of explanation
from Bio import AlignIO
align = AlignIO.read("needle.txt", "emboss")
print(align)
Explanation: Next we can load the output file with Bio.AlignIO as discussed earlier
in this chapter, as the emboss format:
End of explanation |
15,767 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Usage examples
Step1: The io module manages where data is stored and
This is my database root path, where all downloaded images are automatically stored.
Check if this is automatically set for you at a reasonable location. Otherwise, control it with the config.ini file.
Step2: OPUS API | Python Code:
from pyciss import io
Explanation: Usage examples
End of explanation
io.config
io.get_db_root()
Explanation: The io module manages where data is stored and
This is my database root path, where all downloaded images are automatically stored.
Check if this is automatically set for you at a reasonable location. Otherwise, control it with the config.ini file.
End of explanation
from pyciss import opusapi
opus = opusapi.OPUS()
opus.query_image_id('N1541711364')
opus.download_previews()
opus.show_images()
opus.download_results()
io.get_db_root()
Explanation: OPUS API
End of explanation |
15,768 | Given the following text description, write Python code to implement the functionality described.
Description:
Greatest odd factor of an even number
Python3 program for the above approach ; Function to print greatest odd factor ; Initialize i with 1 ; Iterate till i <= pow_2 ; find the pow ( 2 , i ) ; If factor is odd , then print the number and break ; Given Number ; Function Call
| Python Code:
import math

def greatestOddFactor(n):
    # highest power of 2 that could possibly divide n
    pow_2 = int(math.log(n, 2))
    i = 1
    while i <= pow_2:
        fac_2 = 2 ** i
        if n % fac_2 == 0:
            # once the quotient is odd, all factors of 2 have been removed
            if (n // fac_2) % 2 == 1:
                print(n // fac_2)
                break
        i += 1

# Given number
N = 8642

# Function call
greatestOddFactor(N)
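
# For comparison, an equivalent and more direct sketch (added here as an
# illustration, not part of the original approach): divide out factors of 2
# until the quotient becomes odd.
def greatestOddFactorSimple(n):
    while n % 2 == 0:
        n //= 2
    return n

print(greatestOddFactorSimple(N))  # 4321 for N = 8642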
|
15,769 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Bar data
Step1: Historical data
To get the earliest date of available bar data the "head timestamp" can be requested
Step2: To request hourly data of the last 60 trading days
Step3: Convert the list of bars to a data frame and print the first and last rows
Step4: Instruct the notebook to draw plot graphics inline
Step5: Plot the close data
Step6: There is also a utility function to plot bars as a candlestick plot. It can accept either a DataFrame or a list of bars. Here it will print the last 100 bars
Step7: Historical data with realtime updates
A new feature of the API is to get live updates for historical bars. This is done by setting endDateTime to an empty string and the keepUpToDate parameter to True.
Let's get some bars with an keepUpToDate subscription
Step8: Replot for every change of the last bar
Step9: Realtime bars
With reqRealTimeBars a subscription is started that sends a new bar every 5 seconds.
First we'll set up a event handler for bar updates
Step10: Then do the real request and connect the event handler,
Step11: let it run for half a minute and then cancel the realtime bars.
Step12: The advantage of reqRealTimeBars is that it behaves more robust when the connection to the IB server farms is interrupted. After the connection is restored, the bars from during the network outage will be backfilled and the live bars will resume.
reqHistoricalData + keepUpToDate will, at the moment of writing, leave the whole API inoperable after a network interruption. | Python Code:
from ib_insync import *
util.startLoop()
ib = IB()
ib.connect('127.0.0.1', 7497, clientId=14)
Explanation: Bar data
End of explanation
contract = Stock('TSLA', 'SMART', 'USD')
ib.reqHeadTimeStamp(contract, whatToShow='TRADES', useRTH=True)
Explanation: Historical data
To get the earliest date of available bar data the "head timestamp" can be requested:
End of explanation
bars = ib.reqHistoricalData(
contract,
endDateTime='',
durationStr='60 D',
barSizeSetting='1 hour',
whatToShow='TRADES',
useRTH=True,
formatDate=1)
bars[0]
Explanation: To request hourly data of the last 60 trading days:
End of explanation
df = util.df(bars)
display(df.head())
display(df.tail())
Explanation: Convert the list of bars to a data frame and print the first and last rows:
End of explanation
%matplotlib inline
Explanation: Instruct the notebook to draw plot graphics inline:
End of explanation
df.plot(y='close');
Explanation: Plot the close data
End of explanation
util.barplot(bars[-100:], title=contract.symbol);
Explanation: There is also a utility function to plot bars as a candlestick plot. It can accept either a DataFrame or a list of bars. Here it will print the last 100 bars:
End of explanation
contract = Forex('EURUSD')
bars = ib.reqHistoricalData(
contract,
endDateTime='',
durationStr='900 S',
barSizeSetting='10 secs',
whatToShow='MIDPOINT',
useRTH=True,
formatDate=1,
keepUpToDate=True)
Explanation: Historical data with realtime updates
A new feature of the API is to get live updates for historical bars. This is done by setting endDateTime to an empty string and the keepUpToDate parameter to True.
Let's get some bars with a keepUpToDate subscription:
End of explanation
from IPython.display import display, clear_output
import matplotlib.pyplot as plt
def onBarUpdate(bars, hasNewBar):
plt.close()
plot = util.barplot(bars)
clear_output(wait=True)
display(plot)
bars.updateEvent += onBarUpdate
ib.sleep(10)
ib.cancelHistoricalData(bars)
Explanation: Replot for every change of the last bar:
End of explanation
def onBarUpdate(bars, hasNewBar):
print(bars[-1])
Explanation: Realtime bars
With reqRealTimeBars a subscription is started that sends a new bar every 5 seconds.
First we'll set up a event handler for bar updates:
End of explanation
bars = ib.reqRealTimeBars(contract, 5, 'MIDPOINT', False)
bars.updateEvent += onBarUpdate
Explanation: Then do the real request and connect the event handler,
End of explanation
ib.sleep(30)
ib.cancelRealTimeBars(bars)
Explanation: let it run for half a minute and then cancel the realtime bars.
End of explanation
ib.disconnect()
Explanation: The advantage of reqRealTimeBars is that it behaves more robustly when the connection to the IB server farms is interrupted. After the connection is restored, the bars from during the network outage will be backfilled and the live bars will resume.
reqHistoricalData + keepUpToDate will, at the moment of writing, leave the whole API inoperable after a network interruption.
End of explanation |
15,770 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
To understand how Pandas works with time series, we need to understand the datetime module of Python. The main object types of this module are below
date - stores year, month, day using Gregorian calendar
time - stores hours, minutes, seconds, microseconds
datetime - both date and time
timedelta - difference between two datetime values - represented as days, seconds, microseconds
datetime module
Step1: Time delta
You can get the time difference between two dates as timedelta objects
Step2: Shifting time with timedelta
Step3: Working with time stamps
Step4: Printing time stamps
Step5: The table below gives most popular format strings to use with strftime.
%Y - 4 digit year
%y - 2 digit year
%m - 2 digit month [02,10]
%d - 2 digit day [01,22]
%H - hours in 24 hour clock, 2 digits
%I - hours in 12 hour clock, 2 digits
%M - minutes, 2 digits
%S - seconds, 2 digits
%w - weekday as integer [0 (sun) - 6 (sat)]
%U - week number, 2 digits, from 00-53. First Sunday of the year is the start of the week (1) and days before belong to week 0
%W - week number where Monday is the start of the week
%z - UTC time zone offset as +HHMM or -HHMM, empty is time zone is unknown (naive)
%F - shortcut for %Y-%m-%d (2020-03-12)
%D - shortcut for %m/%d/%y (03/12/20)
If you notice, year, month, date are lower while hour, min, sec are upper cases.
Step6: Reading timestamps into datetime objects using strptime()
Step7: Parsing with dateutil package
dateutil is a 3rd party package and allows you to parse common date formats without explicitly stating the format. | Python Code:
from datetime import datetime
now = datetime.now()
now
(now.year, now.month, now.day, now.hour, now.minute)
Explanation: To understand how Pandas works with time series, we need to understand the datetime module of Python. The main object types of this module are below
date - stores year, month, day using Gregorian calendar
time - stores hours, minutes, seconds, microseconds
datetime - both date and time
timedelta - difference between two datetime values - represented as days, seconds, microseconds
datetime module
End of explanation
delta = datetime(2020,3,12) - datetime(2020,9,25)
delta
(delta.days, delta.seconds)
Explanation: Time delta
You can get the time difference between two dates as timedelta objects
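A timedelta can also be collapsed to a single number of seconds with its
total_seconds() method, for example:
```
delta = datetime(2020, 9, 25) - datetime(2020, 3, 12)
delta.total_seconds()
```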
End of explanation
from datetime import timedelta
datetime.now() + timedelta(days=25)
datetime.now() - timedelta(days=20, hours=5)
Explanation: Shifting time with timedelta
End of explanation
stamp = datetime.now()
str(stamp)
Explanation: Working with time stamps
End of explanation
stamp.strftime('%Y-%m-%d-%H-%M-%S')
Explanation: Printing time stamps
End of explanation
datetime.now().strftime('Current hour is %I and week number is %U')
datetime.now().strftime('%z')
datetime.now().strftime('%D')
Explanation: The table below gives most popular format strings to use with strftime.
%Y - 4 digit year
%y - 2 digit year
%m - 2 digit month [02,10]
%d - 2 digit day [01,22]
%H - hours in 24 hour clock, 2 digits
%I - hours in 12 hour clock, 2 digits
%M - minutes, 2 digits
%S - seconds, 2 digits
%w - weekday as integer [0 (sun) - 6 (sat)]
%U - week number, 2 digits, from 00-53. First Sunday of the year is the start of the week (1) and days before belong to week 0
%W - week number where Monday is the start of the week
%z - UTC time zone offset as +HHMM or -HHMM, empty is time zone is unknown (naive)
%F - shortcut for %Y-%m-%d (2020-03-12)
%D - shortcut for %m/%d/%y (03/12/20)
If you notice, year, month, date are lower while hour, min, sec are upper cases.
End of explanation
stamp_str = '03/12/20'
datetime.strptime(stamp_str, '%m/%d/%y')
Explanation: Reading timestamps into datetime objects using strptime()
End of explanation
from dateutil.parser import parse
parse('03/12/20')
Explanation: Parsing with dateutil package
dateutil is a 3rd party package and allows you to parse common date formats without explicitly stating the format.
End of explanation |
15,771 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Excercises Electric Machinery Fundamentals
Chapter 9
Problem 9-9
Step1: Description
How many pulses per second must be supplied to the control unit of the motor in Problem 9-8 to achieve
a rotational speed of 600 r/min?
Step2: SOLUTION
From Equation (9-20),
$$n_m = \frac{1}{3p}n_\text{pulses}$$ | Python Code:
%pylab notebook
Explanation: Excercises Electric Machinery Fundamentals
Chapter 9
Problem 9-9
End of explanation
p = 12
n_m = 600 # [r/min]
Explanation: Description
How many pulses per second must be supplied to the control unit of the motor in Problem 9-8 to achieve
a rotational speed of 600 r/min?
End of explanation
n_pulses = 3*p*n_m
print('''
n_pulses = {:.0f} pulses/min = {:.0f} pulses/sec
============================================'''.format(n_pulses, n_pulses/60))
Explanation: SOLUTION
From Equation (9-20),
$$n_m = \frac{1}{3p}n_\text{pulses}$$
End of explanation |
15,772 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step7: Binary trees
The goal of this lab is to implement the usual functions such as exhaustive generation (building every element of the set), rank and unrank on the set of binary trees.
To represent binary trees in Python, we will use the following structure.
Run the following cells and look at the examples.
Step8: There are 5 binary trees of size 3. One of them is the one we have just built.
Explicitly build the 4 others
Step19: The goal of this lab is to implement the functions of the BinaryTrees class below (with an "s" at the end) which represents the set of binary trees of a given size. The structure of the class is given to you, along with the basic methods.
Complete the missing methods, then run the examples below.
Step20: The test suite we defined for permutations can also be applied to binary trees.
Run the following cell and check that the tests pass on the examples.
Step23: Here is a function that computes a random binary tree. We wonder whether each tree is obtained with uniform probability.
Run the cells below, then determine experimentally whether the probability distribution is uniform.
Step24: The height of a tree is computed recursively | Python Code:
class BinaryTree():
def __init__(self, children = None):
A binary tree is either a leaf or a node with two subtrees.
INPUT:
- children, either None (for a leaf), or a list of size excatly 2
of either two binary trees or 2 objects that can be made into binary trees
self._isleaf = (children is None)
if not self._isleaf:
if len(children) != 2:
raise ValueError("A binary tree needs exactly two children")
self._children = tuple(c if isinstance(c,BinaryTree) else BinaryTree(c) for c in children)
self._size = None
def __repr__(self):
if self.is_leaf():
return "leaf"
return str(self._children)
def __eq__(self, other):
Return true if other represents the same binary tree as self
if not isinstance(other, BinaryTree):
return False
if self.is_leaf():
return other.is_leaf()
return self.left() == other.left() and self.right() == other.right()
def left(self):
Return the left subtree of self
return self._children[0]
def right(self):
Return the right subtree of self
return self._children[1]
def is_leaf(self):
Return true is self is a leaf
return self._isleaf
def _compute_size(self):
Recursively computes the size of self
if self.is_leaf():
self._size = 0
else:
self._size = self.left().size() + self.right().size() + 1
def size(self):
Return the number of non leaf nodes in the binary tree
if self._size is None:
self._compute_size()
return self._size
leaf = BinaryTree()
t = BinaryTree()
t
t.size()
t = BinaryTree([[leaf,leaf], leaf]) # a tree of size 2
t
t.size()
t = BinaryTree([leaf, [leaf,leaf]]) # a different tree of size 2
t
t.size()
t = BinaryTree([[leaf, leaf], [leaf, leaf]]) # a tree of size 3
t
t.size()
Explanation: Binary trees
The goal of this lab is to implement the usual functions such as exhaustive generation (building every element of the set), rank and unrank on the set of binary trees.
To represent binary trees in Python, we will use the following structure.
Run the following cells and look at the examples.
End of explanation
# t1 = BinaryTree(...)
# t2 = BinaryTree(...)
# t3 = BinaryTree(...)
# t4 = BinaryTree(...)
Explanation: There are 5 binary trees of size 3. One of them is the one we have just built.
Explicitly build the 4 others
End of explanation
import math
import random
class BinaryTrees():
def __init__(self, size):
The combinatorial set of binary trees of size `size`
INPUT:
- size a non negative integers
self._size = size
def size(self):
Return the size of the binary trees of the set
return self._size
def __repr__(self):
Default string repr of ``self``
return "Binary Trees of size " + str(self._size)
def cardinality(self):
Return the cardinality of the set
# This is given to you
n = self._size
f = math.factorial(n)
return math.factorial(2*n)/(f*f*(n+1))
def __iter__(self):
Iterator on the elements of the set
# write code here
def first(self):
Return the first element of the set
for t in self:
return t
def rank(self,t):
Return the rank of the binary tree t in the generation order of the set (starting at 0)
INPUT:
- t, a binary tree
# write code here
def unrank(self,i):
Return the binary tree corresponding to the rank ``i``
INPUT:
- i, a integer between 0 and the cardinality minus 1
# write code here
def next(self,t):
Return the next element following t in self
INPUT :
- t a binary tree
OUPUT :
The next binary tree or None if t is the last permutation of self
# write code here
def random_element(self):
Return a random element of ``self`` with uniform probability
# write code here
BinaryTrees(0)
list(BinaryTrees(0))
BinaryTrees(1)
list(BinaryTrees(1))
BinaryTrees(2)
list(BinaryTrees(2))
BT3 = BinaryTrees(3)
BT3
list(BT3)
t = BinaryTree(((leaf, leaf), (leaf, leaf)))
BT3.rank(t)
BT3.unrank(2)
BT3.next(t)
BT3.random_element()
Explanation: The goal of this lab is to implement the functions of the BinaryTrees class below (with an "s" at the end) which represents the set of binary trees of a given size. The structure of the class is given to you, along with the basic methods.
Complete the missing methods, then run the examples below.
End of explanation
def test_cardinality_iter(S):
assert(len(list(S)) == S.cardinality())
def test_rank(S):
assert([S.rank(p) for p in S] == range(S.cardinality()))
def test_unrank(S):
assert(list(S) == [S.unrank(i) for i in xrange(S.cardinality())])
def test_next(S):
L = [S.first()]
while True:
p = S.next(L[-1])
if p == None:
break
L.append(p)
assert(L == list(S))
def all_tests(S):
tests = {"Cardinality / iter": test_cardinality_iter, "Rank": test_rank, "Unrank": test_unrank, "Next": test_next}
for k in tests:
print "Testsing: "+ k
try:
tests[k](S)
print "Passed"
except AssertionError:
print "Not passed"
all_tests(BinaryTrees(3))
all_tests(BinaryTrees(4))
all_tests(BinaryTrees(5))
all_tests(BinaryTrees(6))
Explanation: The test suite we defined for permutations can also be applied to binary trees.
Run the following cell and check that the tests pass on the examples.
End of explanation
import random
def random_grow(t):
    """
    Randomly grows a binary tree
    INPUT:
    - t, a binary tree of size n
    OUTPUT: a binary tree of size n+1
    """
if t.is_leaf():
return BinaryTree([leaf,leaf])
c = [t.left(),t.right()]
i = random.randint(0,1)
c[i] = random_grow(c[i])
return BinaryTree(c)
def random_binary_tree(n):
    """Return a random binary tree of size n"""
t = leaf
for i in xrange(n):
t = random_grow(t)
return t
random_binary_tree(10)
Explanation: Here is a function that computes a random binary tree. One may wonder whether each tree is obtained with uniform probability.
Run the cells below, then determine experimentally whether the probability distribution is uniform (a possible check is sketched after this cell).
End of explanation
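One possible empirical check (a sketch; it assumes that BinaryTree objects have a usable repr, so that equal shapes produce equal strings):
from collections import Counter
counts = Counter(repr(random_binary_tree(3)) for _ in xrange(10000))
for shape, c in counts.items():
    print shape, c / 10000.0
# compare the observed frequencies with the uniform value 1/5 = 0.2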
assert BinaryTree([[leaf,leaf], leaf]).height() == 2
assert BinaryTree([leaf,[leaf, leaf]]).height() == 2
assert BinaryTree([[leaf,leaf], [leaf,leaf]]).height() == 2
assert BinaryTree([[leaf,[leaf,leaf]], [leaf,leaf]]).height() == 3
Explanation: The height of a tree is computed recursively: for a leaf, the height is 0; otherwise it is the maximum of the heights of the children, plus 1.
Add a height method to the binary tree class and check that it works with the following tests (a possible sketch is given after this cell).
End of explanation |
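A possible sketch of the method (hedged: the accessors is_leaf(), left() and right() are the ones used by random_grow above, and leaf itself must answer is_leaf()):
# to be added inside the BinaryTree class definition
def height(self):
    if self.is_leaf():
        return 0
    return 1 + max(self.left().height(), self.right().height())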
15,773 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Contents and Objective
Describing several commands and methods that will be used throughout the simulations
<b>Note
Step1: Tuples
Similar to lists but "immutable", i.e., entries can be appended, but not be changed
Defined by tuple( ) or by brackets with entities being separated by comma
Referenced by index in square brackets; <b>Note</b>
Step2: Dictionaries
Container in which entries are of type
Step3: Sets
As characterized by the naming, sets are representing mathematical sets; no double occurences of elements
Defined by keyword "set" of by curly brackets with entities being separated by comma
<b>Note</b>
Step4: Flow Control
Standards commands as for, while, ...
Functions for specific purposes
<b>Note
Step5: While Loops
while loops in Python are (as usual) constructed by checking condition and exiting loop if condition becomes False
<b>Note
Step6: Functions
Defined by key-word "def" followed by list of arguments in brackets
Doc string defined directly after "def" by ''' TEXT '''
Values returned by key word "return"; <b>Note | Python Code:
# defining lists
sport_list = [ 'cycling', 'football', 'fitness' ]
first_prime_numbers = [ 2, 3, 5, 7, 11, 13, 17, 19 ]
# getting contents
sport = sport_list[ 2 ]
third_prime = first_prime_numbers[ 2 ]
# printing
print( 'All sports:', sport_list )
print( 'Sport to be done:', sport )
print( '\nFirst primes:', first_prime_numbers )
print( 'Third prime number:', third_prime )
# adapt entries and append new entries
sport_list[ 1 ] = 'swimming'
sport_list.append( 'running' )
first_prime_numbers.append( 23 )
# printing
print( 'All sports:', sport_list )
print( 'First primes:', first_prime_numbers )
Explanation: Contents and Objective
Describing several commands and methods that will be used throughout the simulations
<b>Note:</b> Basic knowledge of programming languages and concepts is assumed. Only specific concepts that are different from, e.g., C++ or Matlab, are provided.
<b>NOTE 2:</b> The following summary is by no means complete or exhaustive, but only provides a short and simplified overview of the commands used throughout the simulations in the lecture. For a detailed introduction please have a look at one of the numerous web-tutorials or books on Python, e.g.,
https://www.python-kurs.eu/
https://link.springer.com/book/10.1007%2F978-1-4842-4246-9
https://primo.bibliothek.kit.edu/primo_library/libweb/action/search.do?mode=Basic&vid=KIT&vl%28freeText0%29=python&vl%28freeText0%29=python&fn=search&tab=kit&srt=date
Cell Types
There are two types of cells:
Text cells (called 'Markdown'): containing text, allowing use of LaTeX
Math/code cells: where code is being executed
As long as you are just reading the simulations, there is no need to be concerned about this fact.
Data Structures
In the following sections the basic data structures used in upcoming simulations will be introduced.
Basic types as int, float, string are supposed to be well-known.
Lists
Container-type structure for collecting entities (which may even be of different type)
Defined by key word list( ) or by square brackets with entities being separated by comma
Referenced by index in square brackets; <b>Note</b>: indexing starting at 0
Entries may be changed, appended, sliced,...
End of explanation
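As an extra illustration (not covered by the cell above): slicing returns a sub-list; the start index is included, the stop index is excluded.
# slicing examples
print( 'Primes 2 to 4:', first_prime_numbers[ 1:4 ] )
print( 'Last two sports:', sport_list[ -2: ] )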
# defining tuple
sport_tuple = ( 'cycling', 'football', 'fitness' )
# getting contents
sport = sport_tuple[ 2 ]
# printing
print( 'All sports:', sport_tuple )
print( 'Sport to be done:', sport )
# append new entries
sport_tuple += ( 'running', )
# printing
print( 'All sports:', sport_tuple )
print()
# changing entries will fail
# --> ERROR is being generated on purpose
# --> NOTE: Error is handled by 'try: ... except: ...' statement
try:
sport_tuple[ 1 ] = 'swimming'
except:
print('ERROR: Entries within tuples cannot be adapted!')
Explanation: Tuples
Similar to lists but "immutable", i.e., entries can be appended, but not be changed
Defined by tuple( ) or by brackets with entities being separated by comma
Referenced by index in square brackets; <b>Note</b>: indexing starting at 0
End of explanation
# defining dictionaries
sports_days = { 'Monday': 'pause', 'Tuesday': 'fitness', 'Wednesday' : 'running',
'Thursday' : 'fitness', 'Friday' : 'swimming', 'Saturday' : 'cycling',
'Sunday' : 'cycling' }
print( 'Sport by day:', sports_days )
print( '\nOn Tuesday:', sports_days[ 'Tuesday' ])
# Changes are made by using the key as identifier
sports_days[ 'Tuesday' ] = 'running'
print( 'Sport by day:', sports_days )
Explanation: Dictionaries
Container in which entries are of type: ( key : value )
Defined by key word "dict" or by curly brackets with entities of shape "key : value" being separated by comma
Referenced by key in square brackets --> <b>Note</b>: Indexing by keys instead of indices might be a major advantage (at least sometimes)
End of explanation
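A small additional example: .get() looks up a key like the square brackets do, but returns a default value instead of raising a KeyError for missing keys.
print( 'Sport on a holiday:', sports_days.get( 'Holiday', 'no sport planned' ) )
print( 'All days:', list( sports_days.keys() ) )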
# defining sets
sports_set = { 'fitness', 'running', 'swimming', 'cycling'}
print( sports_set )
print()
# indexing will fail
# --> ERROR is being generated on purpose
try:
print( sports_set[0] )
except:
print('ERROR: No indexing of sets!')
# adding elements (or not)
sports_set.add( 'pause' )
print(sports_set)
sports_set.add( 'fitness' )
print(sports_set)
# union of sets (also: intersection, complement, ...)
all_stuff_set = set( sports_set )
union_of_sets = all_stuff_set.union( first_prime_numbers)
print( union_of_sets )
Explanation: Sets
As the name suggests, sets represent mathematical sets; there are no double occurrences of elements
Defined by keyword "set" or by curly brackets with entities being separated by comma
<b>Note</b>: As in maths, sets don't possess ordering, so there is no indexing of sets!
End of explanation
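The cell above shows the union; intersection and difference work the same way (a short extra illustration):
print( 'Intersection:', sports_set.intersection( { 'running', 'chess' } ) )
print( 'Difference:', sports_set.difference( { 'pause' } ) )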
# looping in lists simply parsing along the list
for s in sport_list:
print( s )
print()
# looping in dictionaries happens along keys
for s in sports_days:
print( '{}: \t{}'.format( s, sports_days[ s ] ) )
Explanation: Flow Control
Standard commands such as for, while, ...
Functions for specific purposes
<b>Note:</b> Since the commands and their concepts are largely self-explanatory, only a short description of the syntax is provided
For Loops
for loops in Python allow looping over any so-called iterable, e.g., a list, tuple, or dict. <b>Note</b>: Not necessarily an int range
Syntax: for i in iterable:
<b>Note:</b> Blocks are structured by indentation; sub-commands (e.g., inside a loop) are indented
End of explanation
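A common companion to the plain for loop (extra example): enumerate() yields the index together with the element.
for _n, s in enumerate( sport_list ):
    print( _n, s )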
# initialize variables
sum_primes = 0
_n = 0
# sum primes up to sum-value of 20
while sum_primes < 20:
# add prime of according index
sum_primes += first_prime_numbers[ _n ]
# increase index
_n += 1
print( 'Sum of first {} primes is {}.'.format( _n, sum_primes ) )
Explanation: While Loops
while loops in Python are (as usual) constructed by checking a condition and exiting the loop once the condition becomes False
<b>Note:</b> Blocks are structured by indentation; sub-commands (e.g., inside a loop) are indented
End of explanation
def get_n_th_prime( n, first_prime_numbers ):
'''
DOC String
IN: index of prime number, list of prime numbers
OUT: n-th prime number
'''
# do something smart as, e.g., checking that according index really exists
# "assert" does the job by checking first arg and--if not being TRUE--providing text given as second arg
try:
val = first_prime_numbers[ n - 1 ]
except:
return '"ERROR: Index not feasible!"'
# NOTE: since counting starts at 0, (n-1)st number is returned
# Furthermore, there is no need for a function here; a simple reference would have done the job!
return first_prime_numbers[ n - 1 ]
# show doc string
print( help( get_n_th_prime ) )
# apply functions
N = 3
print( '{}. prime number is {}.'.format( N, get_n_th_prime( N, first_prime_numbers ) ) )
print()
N = 30
print( '{}. prime number is {}.'.format( N, get_n_th_prime( N, first_prime_numbers ) ) )
Explanation: Functions
Defined by key-word "def" followed by list of arguments in brackets
Doc string defined directly after "def" by ''' TEXT '''
Values returned by key word "return"; <b>Note:</b> return "value" can be scalar, list, dict, vector, matrix,...
End of explanation |
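Arguments can also be passed by keyword, which often makes calls more readable (extra example using the function defined above):
print( get_n_th_prime( n=2, first_prime_numbers=first_prime_numbers ) )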
15,774 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Elementary Web Scraping
Joshua G. Mausolf
Preliminary Steps
Step2: Elementary Web Scraping Using NLTK and Beautiful Soup
*Suppose we want to write a utility function that takes a URL as its argument, and returns the contents of the URL, with all HTML markup removed. *
How would we accomplish this?
There are multiple ways of approaching this problem. One method is to follow the example as shown in the NLTK Book, Chapter 3. This method, however does not fully utilize BeautifulSoup, and as a result, the output is not exactly the desired content.
HTML is a complex syntax with not only written text of paragraphs but also menu items, drop-down fields, and links, among other facets. If we want to read a content of a given page, we generally are interested in the text content, rather than all content, headers, meta-data, and so forth.
Below, I first demonstrate the NLTK method, which can be used to return a webpage, remove HTML with BeautifulSoup, and tokenize the results.
Step3: Now that we have defined the function, let's look at the raw text for the NLTK website
Step5: We see that parts of the function are still included, such as "'var', 'VERSION', 'false', 'true'". Such results would be misleading and confusing if we wanted to discover the content of the page because we are getting both the text and additional text items embedded in the page that are not necessarily HTML but rather part of the page design.
Diving Into Beautiful Soup with HTML
Beautiful Soup offers another and better option. We can specify that we only want the text of a page, located within a particular HTML tag. While all pages differ, a typical setup is to find text within a paragraph <p> .... </p> set of tags. Typically, these are in the "body" of the HTML not the head. They are also typically nested under a hierarchy of <div> tags.
Example 1
Step6: Now that we have defined this function, let us try it out.
Step7: So the NLTK website does not happen to be compatible with the specified function. Let us try another website without inspecting the HTML syntax. How about a news article from the The Atlantic?
Step8: So this last example worked. Let's try another from the White House, Speeches and Remarks website.
Step11: This also works without specifying the HTML syntax. Although not perfect, the text is far more readable output than if we use the prior NLTK method. The improvement here, beyond readability is that we are targeting only text in paragraph tags rather than all text on the website which may have little to do with the content. Note that if we want to analyze the above text using NLTK we would simply set the text as raw text and tokenize.
Returning to NLTK for Some Analysis
Step12: Now that we have established this function's utility for url's, what about the NLTK website, which did not work initially. Well, we can specify the correct <div> tag and get the results, as shown below | Python Code:
#Import NLTK and Texts
import nltk
from nltk import *
from nltk.book import *
from nltk.corpus import stopwords
#Import Web Scraping Modules
from urllib import request
from bs4 import BeautifulSoup
#Command All Matplotlib Graphs to Appear in Inline in Notebook
%matplotlib inline
Explanation: Elementary Web Scraping
Joshua G. Mausolf
Preliminary Steps
End of explanation
def nltk_web_read(url, _raw=0, words=1):
    """
    -----------------------------------------------------
    This function returns the text of a website for a
    given url
    -----------------------------------------------------
    OPTIONS
    -----------------------------------------------------
    - _raw = option to return raw text from HTML
        - 0 = no (default)
        - 1 = yes, return raw text
    -----------------------------------------------------
    - words = option to return word tokens from HTML
        - 1 = return all words (default)
        - 2 = return only alphanumeric words
    -----------------------------------------------------
    """
#Import Modules
from urllib import request
from bs4 import BeautifulSoup
response = request.urlopen(url)
html = response.read().decode('utf-8')
#Get Text from HTML
raw = BeautifulSoup(html, "html5lib").get_text()
raw[:200]
#Options
#Raw Text Option
if _raw==0:
pass
else:
print (raw[:200])
#return raw
#Get Tokens
tokens = word_tokenize(raw)
#Word Options
#All Words
if words==1:
print(tokens[:200])
#return tokens
#Alphanumeric Words
elif words==2:
words = [w for w in tokens if w.isalnum()]
print (words[:200])
#return words
Explanation: Elementary Web Scraping Using NLTK and Beautiful Soup
*Suppose we want to write a utility function that takes a URL as its argument, and returns the contents of the URL, with all HTML markup removed. *
How would we accomplish this?
There are multiple ways of approaching this problem. One method is to follow the example as shown in the NLTK Book, Chapter 3. This method, however does not fully utilize BeautifulSoup, and as a result, the output is not exactly the desired content.
HTML is a complex syntax with not only written text of paragraphs but also menu items, drop-down fields, and links, among other facets. If we want to read a content of a given page, we generally are interested in the text content, rather than all content, headers, meta-data, and so forth.
Below, I first demonstrate the NLTK method, which can be used to return a webpage, remove HTML with BeautifulSoup, and tokenize the results.
End of explanation
#Get All Raw Content
url = "http://www.nltk.org"
nltk_web_read(url, 1)
#Get ONLY Raw Text
nltk_web_read(url, 0, 2)
Explanation: Now that we have defined the function, let's look at the raw text for the NLTK website:
End of explanation
def get_website_text(url, div_class=0, _return=0):
    """
    This function returns the text of a website for a
    given URL using Beautiful Soup. The URL must be specified.

    If you do not know the HTML layout, try running the function as is:
    the parser simply looks for a <div> tag. Depending on the webpage you
    may need to inspect the HTML first and pass a specific div class, e.g.
    get_website_text(url, "content-wrapper") looks for the tag
    <div class="content-wrapper">. After finding the content tag, the
    function returns the text inside the paragraph <p> tags.
    -----------------------------------------------------
    OPTIONS
    -----------------------------------------------------
    - div_class = a specified class of the <div> tag
        - 0 (default): looks for any div tag; works on some,
          but not all, websites.
        - any string: looks for that string as a div class.
    -----------------------------------------------------
    - _return = option to return the text for use in another function
        - 0 = do not return, print instead (default)
        - 1 = return text
    -----------------------------------------------------
    """
from urllib import request
from bs4 import BeautifulSoup
#Get HTML from URL
response = request.urlopen(url)
html = response.read().decode('utf-8')
#Get Soup for Beautiful Soup
soup = BeautifulSoup(html, "html5lib")
#Class Option (Default=0)
#Define Content
#Look for Any Div Tag
if div_class ==0:
pass
content = soup.find("div")
#Parser Content Error Message
if len(str(content)) < 1000:
print ("Your request may not be returning the desired results.", '\n' \
"Consider inspecting the webpage and trying a different div tag", '\n')
print ("CURRENT RESULTS:", '\n', content)
else:
pass
#Look for Specific Div Tag
else:
try:
content = soup.find("div", {"class":str(div_class)})
#Parser Content Error Message
if len(str(content)) < 1000:
print ("Your request may not be returning the desired results.", '\n' \
"Consider inspecting the webpage and trying a different div tag", '\n')
print ("CURRENT RESULTS:", '\n', content)
else:
pass
#Print Error Message For Failure
except:
print ("Error: Please check your div class='input'.", \
"A valid 'input' must be specified")
return
#Get Paragraph Body
paragraph = ["".join(x.findAll(text=True)) for x in content.findAll("p")]
paragraph_body = "\n\n%s" % ("\n\n".join(paragraph))
#Return Function Option
if _return==1:
return paragraph_body
else:
print (paragraph_body)
pass
Explanation: We see that parts of the function are still included, such as "'var', 'VERSION', 'false', 'true'". Such results would be misleading and confusing if we wanted to discover the content of the page because we are getting both the text and additional text items embedded in the page that are not necessarily HTML but rather part of the page design.
Diving Into Beautiful Soup with HTML
Beautiful Soup offers another and better option. We can specify that we only want the text of a page, located within a particular HTML tag. While all pages differ, a typical setup is to find text within a paragraph <p> .... </p> set of tags. Typically, these are in the "body" of the HTML not the head. They are also typically nested under a hierarchy of <div> tags.
Example 1: NLTK Website
http://www.nltk.org
```HTML
<div class="section" id="natural-language-toolkit">
<h1>Natural Language Toolkit<a class="headerlink" href="#natural-language-toolkit" title="Permalink to this headline">¶</a></h1>
<p>NLTK is a leading platform for building Python programs to work with human language data.
It provides easy-to-use interfaces to <a class="reference external" href="http://nltk.org/nltk_data/">over 50 corpora and lexical
resources</a> such as WordNet,
along with a suite of text processing libraries for classification, tokenization, stemming, tagging, parsing, and semantic reasoning,
wrappers for industrial-strength NLP libraries,
and an active <a class="reference external" href="http://groups.google.com/group/nltk-users">discussion forum</a>.</p>
<p>Thanks to a hands-on guide introducing programming fundamentals alongside topics in computational linguistics, plus comprehensive API documentation,
NLTK is suitable for linguists, engineers, students, educators, researchers, and industry users alike.</p></div>
```
Example 2: TheAtlantic Online
http://www.theatlantic.com/politics/archive/2016/02/berniebro-revisited/460212/
``` HTML
<div class="article-body" itemprop="articleBody">
<section id="article-section-1"><p>O reader, hear my plea: I am the victim of semantic drift.</p><p>Four months ago, I <a href="http://www.theatlantic.com/politics/archive/2015/10/here-comes-the-berniebro-bernie-sanders/411070/" data-omni-click="r'article',r'link',r'0',r'460212'">coined the term “Berniebro”</a> to describe a phenomenon I saw on Facebook: Men, mostly my age, mostly of my background, mostly with my political beliefs, were hectoring their friends about how great Bernie was even when their friends wanted to do something else, like talk about the NBA.</p> </section>
</div>
```
Each is only a small snippet of HTML. While we could specify that we only want content in a div class="section" tag, each website varies in terms of the classes provided. It is typically unique to the website design and CSS.
If we generalize to finding all text in paragraphs <p> subsumed under a <div> we can get the full text printed for most websites.
Writing a Beautiful Soup Function
Below I display this function followed by several examples.
End of explanation
#Example NLTK Website
url = "http://www.nltk.org"
get_website_text(url)
#get_website_text(url, "content-wrapper")
Explanation: Now that we have defined this function, let us try it out.
End of explanation
#The Atlantic Online
url = "http://www.theatlantic.com/politics/archive/2016/02/berniebro-revisited/460212/"
text = get_website_text(url, 0, 1)
#Print a Subset of the Text
print(text[60:1000])
Explanation: So the NLTK website does not happen to be compatible with the specified function. Let us try another website without inspecting the HTML syntax. How about a news article from The Atlantic?
End of explanation
#The White House
url = "https://www.whitehouse.gov/the-press-office/2016/01/27/remarks-president-righteous-among-nations-award-ceremony"
text = get_website_text(url, 0, 1)
#Print a Subset of the Text
print(text[0:1500])
#To Print All of It
#get_website_text(url)
Explanation: So this last example worked. Let's try another from the White House, Speeches and Remarks website.
End of explanation
raw = get_website_text(url, 0, 1)
tokens = word_tokenize(raw)
print (tokens[:100])
# %load word_freq_nltk.py
def words(text, k=10, r=0, sw=0):
    """
    This function returns all alphabetic words of
    a specified length for a given text.
    Defaults: k=10, r=0, sw=0.
    -------------------------------------------------
    - k = the length of the word.
    -------------------------------------------------
    - r = the evaluation option.
      It takes values 0 (the default), 1, or 2.
      0. "equals"       | len(word) == k
      1. "less than"    | len(word) < k
      2. "greater than" | len(word) > k
    -------------------------------------------------
    - sw = stop words (English).
      Stop words are high-frequency words like
      (the, to, and, also, is), among others.
      sw takes values 0 (the default) or 1; the function
      prints an exception statement for other values.
    -------------------------------------------------
    """
if sw == 0:
#Option to Return Words == K
if r == 0:
ucw = [w.lower() for w in text if w.isalpha() and len(w) == k ]
return ucw
#Option to Return Words < K
elif r == 1:
ucw = [w.lower() for w in text if w.isalpha() and len(w) < k ]
return ucw
#Option to Return Words > K
elif r == 2:
ucw = [w.lower() for w in text if w.isalpha() and len(w) > k ]
return ucw
else:
pass
elif sw == 1:
#Option to Return Words == K
if r == 0:
ucw = [w.lower() for w in text if w.lower() not in stopwords.words('english') \
and w.isalpha() and len(w) == k]
return ucw
#Option to Return Words < K
elif r == 1:
ucw = [w.lower() for w in text if w.lower() not in stopwords.words('english') \
and w.isalpha() and len(w) < k]
return ucw
#Option to Return Words > K
elif r == 2:
ucw = [w.lower() for w in text if w.lower() not in stopwords.words('english') \
and w.isalpha() and len(w) > k]
return ucw
else:
pass
else:
print ("Please input a valid stopwords option: 0 = no, 1 = yes")
def freq_words(text, k=10, r=0, n=20, sw=0):
    """
    This function uses the words function to generate a specified
    frequency distribution of the most frequent words and a related
    cumulative plot. You can specify the word length, an equality
    option (to look for words ==, <, or > a given length), how many
    words to return, and whether to exclude English stop words.
    Defaults: k=10, r=0, n=20, sw=0.
    -------------------------------------------------
    - k = the length of the word.
    -------------------------------------------------
    - r = the evaluation option.
      It takes values 0 (the default), 1, or 2.
      0. "equals"       | len(word) == k
      1. "less than"    | len(word) < k
      2. "greater than" | len(word) > k
    -------------------------------------------------
    - n = the number of most common results.
      The default value is 20; for the top 100 results, input 100.
    -------------------------------------------------
    - sw = stop words (English).
      Takes values 0 (the default) or 1; the function prints an
      exception statement for other values.
    -------------------------------------------------
    """
#Generate the Frequency Distribution for specified text, k, and r.
fdist = FreqDist(words(text, k, r, sw))
#Clean up the Title of the Text
clean_title0 = str(text).replace("<Text: ", "").replace(">", "").replace('[', '').replace(']', '')
clean_title1 = clean_title0.replace("'", '').replace('"', '').replace(',', '')[0:10]+"..."
try:
c2 = clean_title1.split(" by ")[0].title()
except:
c2 = clean_title0.title()
#Creating Possible Titles
figtitle1 = "Most Frequent "+str(k)+"-Letter Words in "+c2
figtitle2 = "Most Frequent Words Less Than "+str(k)+"-Letters in "+c2
figtitle3 = "Most Frequent Words Greater Than "+str(k)+"-Letters in "+c2
figtitle4 = "Most Frequent Words of Any Length "+c2
figelse = "Most Frequent Words in "+c2
#Setting the Title Based on Inputs
if r == 0:
figtitle = figtitle1
elif r == 1:
figtitle = figtitle2
elif r == 2 and k != 0:
figtitle = figtitle3
elif r == 2 and k == 0:
figtitle = figtitle4
else:
print ("else")
figtitle = figelse
#Print Plot and Most Common Words
fdist.plot(n, title=figtitle, cumulative=True)
print (figtitle+":", '\n', fdist.most_common(n))
if sw == 1:
print ("*NOTE: Excluding English Stopwords")
else:
pass
#Get Top 30 Words > 7 Letter's in President Obama's Embassy Speech
freq_words(tokens, 7, 2, 30, 1)
freq_words(text5, 0, 2, 50, 1)
Explanation: This also works without specifying the HTML syntax. Although not perfect, the text is far more readable output than if we use the prior NLTK method. The improvement here, beyond readability is that we are targeting only text in paragraph tags rather than all text on the website which may have little to do with the content. Note that if we want to analyze the above text using NLTK we would simply set the text as raw text and tokenize.
Returning to NLTK for Some Analysis
End of explanation
#Example NLTK Website, Specify the <div class = >
url = "http://www.nltk.org"
get_website_text(url, "content-wrapper")
Explanation: Now that we have established this function's utility for url's, what about the NLTK website, which did not work initially. Well, we can specify the correct <div> tag and get the results, as shown below:
End of explanation |
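Putting the pieces together (an illustrative sketch, not part of the original analysis): once the right div class is known, the returned text can be tokenized and analyzed exactly as before.
nltk_raw = get_website_text( "http://www.nltk.org", "content-wrapper", 1 )
nltk_tokens = word_tokenize( nltk_raw )
freq_words( nltk_tokens, 5, 2, 20, 1 )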
15,775 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on it's own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
Step1: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combined all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
Step2: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise
Step3: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise
Step4: If you built labels correctly, you should see the next output.
Step5: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Exercise
Step6: Exercise
Step7: If you build features correctly, it should look like that cell output below.
Step8: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise
Step9: With train, validation, and text fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step10: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise
Step11: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise
Step12: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation
Step13: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise
Step14: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[
Step15: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
Step16: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
Step17: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
Step18: Testing | Python Code:
import numpy as np
import tensorflow as tf
with open('../sentiment_network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment_network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
Explanation: Sentiment Analysis with an RNN
In this notebook, you'll implement a recurrent neural network that performs sentiment analysis. Using an RNN rather than a feedfoward network is more accurate since we can include information about the sequence of words. Here we'll use a dataset of movie reviews, accompanied by labels.
The architecture for this network is shown below.
<img src="assets/network_diagram.png" width=400px>
Here, we'll pass in words to an embedding layer. We need an embedding layer because we have tens of thousands of words, so we'll need a more efficient representation for our input data than one-hot encoded vectors. You should have seen this before from the word2vec lesson. You can actually train up an embedding with word2vec and use it here. But it's good enough to just have an embedding layer and let the network learn the embedding table on its own.
From the embedding layer, the new representations will be passed to LSTM cells. These will add recurrent connections to the network so we can include information about the sequence of words in the data. Finally, the LSTM cells will go to a sigmoid output layer here. We're using the sigmoid because we're trying to predict if this text has positive or negative sentiment. The output layer will just be a single unit then, with a sigmoid activation function.
We don't care about the sigmoid outputs except for the very last one, we can ignore the rest. We'll calculate the cost from the output of the last step and the training label.
End of explanation
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
Explanation: Data preprocessing
The first step when building a neural network model is getting your data into the proper form to feed into the network. Since we're using embedding layers, we'll need to encode each word with an integer. We'll also want to clean it up a bit.
You can see an example of the reviews data above. We'll want to get rid of those periods. Also, you might notice that the reviews are delimited with newlines \n. To deal with those, I'm going to split the text into each review using \n as the delimiter. Then I can combined all the reviews back together into one big string.
First, let's remove all punctuation. Then get all the text without the newlines and split it into individual words.
End of explanation
# Create your dictionary that maps vocab words to integers here
from collections import Counter
counts = Counter(words)
vocab = sorted(counts, key=counts.get, reverse=True)
vocab_to_int = {word: ii for ii, word in enumerate(vocab, 1)}
reviews_ints = []
for each in reviews:
reviews_ints.append([vocab_to_int[word] for word in each.split()])
Explanation: Encoding the words
The embedding lookup requires that we pass in integers to our network. The easiest way to do this is to create dictionaries that map the words in the vocabulary to integers. Then we can convert each of our reviews into integers so they can be passed into the network.
Exercise: Now you're going to encode the words with integers. Build a dictionary that maps words to integers. Later we're going to pad our input vectors with zeros, so make sure the integers start at 1, not 0.
Also, convert the reviews to integers and store the reviews in a new list called reviews_ints.
End of explanation
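A quick sanity check (illustrative only): the vocabulary is sorted by frequency and numbering starts at 1, so the most frequent words get the smallest integers.
print('Total words in vocab: {}'.format(len(vocab)))
print('Most frequent words: {}'.format(vocab[:5]))
print('First review as integers: {}'.format(reviews_ints[0][:10]))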
# Convert labels to 1s and 0s for 'positive' and 'negative'
labels = labels.split()
labels = np.array([1 if each == 'positive' else 0 for each in labels])
Explanation: Encoding the labels
Our labels are "positive" or "negative". To use these labels in our network, we need to convert them to 0 and 1.
Exercise: Convert labels from positive and negative to 1 and 0, respectively.
End of explanation
from collections import Counter
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
Explanation: If you built labels correctly, you should see the next output.
End of explanation
# Filter out that review with 0 length
reviews_ints = [review for review in reviews_ints if len(review)>0]
print(len(reviews_ints))
print(reviews_ints[1])
# print([review[0:200] if len(review)>200 else 0 for review in reviews_ints])
Explanation: Okay, a couple issues here. We seem to have one review with zero length. And, the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we can truncate them to the first 200 characters.
Exercise: First, remove the review with zero length from the reviews_ints list.
End of explanation
seq_len = 200
# features = np.array([review[0:200] if len(review)>200 else np.append(np.zeros((200 - len(review))),review) for review in reviews])
print(len(reviews))
features = np.zeros((len(reviews), seq_len), dtype=int)
for i, row in enumerate(reviews_ints):
features[i, -len(row):] = np.array(row)[:seq_len]
print(len(features))
Explanation: Exercise: Now, create an array features that contains the data we'll pass to the network. The data should come from review_ints, since we want to feed integers to the network. Each row should be 200 elements long. For reviews shorter than 200 words, left pad with 0s. That is, if the review is ['best', 'movie', 'ever'], [117, 18, 128] as integers, the row will look like [0, 0, 0, ..., 0, 117, 18, 128]. For reviews longer than 200, use on the first 200 words as the feature vector.
This isn't trivial and there are a bunch of ways to do this. But, if you're going to be building your own deep learning networks, you're going to have to get used to preparing your data.
End of explanation
features[:10,:100]
Explanation: If you build features correctly, it should look like that cell output below.
End of explanation
from sklearn.model_selection import train_test_split
split_frac = 0.8
# train_x, val_x, train_y, val_y = train_test_split(features, labels, train_size=split_frac)
# val_x, test_x, val_y, test_y = train_test_split(val_x, val_y, test_size=0.5)
split_idx = int(len(features)*0.8)
train_x, val_x = features[:split_idx], features[split_idx:]
train_y, val_y = labels[:split_idx], labels[split_idx:]
test_idx = int(len(val_x)*0.5)
val_x, test_x = val_x[:test_idx], val_x[test_idx:]
val_y, test_y = val_y[:test_idx], val_y[test_idx:]
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
Explanation: Training, Validation, Test
With our data in nice shape, we'll split it into training, validation, and test sets.
Exercise: Create the training, validation, and test sets here. You'll need to create sets for the features and the labels, train_x and train_y for example. Define a split fraction, split_frac as the fraction of data to keep in the training set. Usually this is set to 0.8 or 0.9. The rest of the data will be split in half to create the validation and testing data.
End of explanation
lstm_size = 256
lstm_layers = 1
batch_size = 500
learning_rate = 0.001
Explanation: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like:
Feature Shapes:
Train set: (20000, 200)
Validation set: (2500, 200)
Test set: (2501, 200)
Build the graph
Here, we'll build the graph. First up, defining the hyperparameters.
lstm_size: Number of units in the hidden layers in the LSTM cells. Usually larger is better performance wise. Common values are 128, 256, 512, etc.
lstm_layers: Number of LSTM layers in the network. I'd start with 1, then add more if I'm underfitting.
batch_size: The number of reviews to feed the network in one training pass. Typically this should be set as high as you can go without running out of memory.
learning_rate: Learning rate
End of explanation
n_words = len(vocab)
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, [None, None], name='inputs')
labels_ = tf.placeholder(tf.int32, [None, None], name='labels')
keep_prob = tf.placeholder(tf.float32, name='prob')
Explanation: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Exercise: Create the inputs_, labels_, and drop out keep_prob placeholders using tf.placeholder. labels_ needs to be two-dimensional to work with some functions later. Since keep_prob is a scalar (a 0-dimensional tensor), you shouldn't provide a size to tf.placeholder.
End of explanation
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
with graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_words, embed_size), -1, 1))
embed = tf.nn.embedding_lookup(embedding, inputs_)
Explanation: Embedding
Now we'll add an embedding layer. We need to do this because there are 74000 words in our vocabulary. It is massively inefficient to one-hot encode our classes here. You should remember dealing with this problem from the word2vec lesson. Instead of one-hot encoding, we can have an embedding layer and use that layer as a lookup table. You could train an embedding layer using word2vec, then load it here. But, it's fine to just make a new layer and let the network learn the weights.
Exercise: Create the embedding lookup matrix as a tf.Variable. Use that embedding matrix to get the embedded vectors to pass to the LSTM cell with tf.nn.embedding_lookup. This function takes the embedding matrix and an input tensor, such as the review vectors. Then, it'll return another tensor with the embedded vectors. So, if the embedding layer has 200 units, the function will return a tensor with size [batch_size, 200].
End of explanation
with graph.as_default():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
Explanation: LSTM cell
<img src="assets/network_diagram.png" width=400px>
Next, we'll create our LSTM cells to use in the recurrent network (TensorFlow documentation). Here we are just defining what the cells look like. This isn't actually building the graph, just defining the type of cells we want in our graph.
To create a basic LSTM cell for the graph, you'll want to use tf.contrib.rnn.BasicLSTMCell. Looking at the function documentation:
tf.contrib.rnn.BasicLSTMCell(num_units, forget_bias=1.0, input_size=None, state_is_tuple=True, activation=<function tanh at 0x109f1ef28>)
you can see it takes a parameter called num_units, the number of units in the cell, called lstm_size in this code. So then, you can write something like
lstm = tf.contrib.rnn.BasicLSTMCell(num_units)
to create an LSTM cell with num_units. Next, you can add dropout to the cell with tf.contrib.rnn.DropoutWrapper. This just wraps the cell in another cell, but with dropout added to the inputs and/or outputs. It's a really convenient way to make your network better with almost no effort! So you'd do something like
drop = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)
Most of the time, you're network will have better performance with more layers. That's sort of the magic of deep learning, adding more layers allows the network to learn really complex relationships. Again, there is a simple way to create multiple layers of LSTM cells with tf.contrib.rnn.MultiRNNCell:
cell = tf.contrib.rnn.MultiRNNCell([drop] * lstm_layers)
Here, [drop] * lstm_layers creates a list of cells (drop) that is lstm_layers long. The MultiRNNCell wrapper builds this into multiple layers of RNN cells, one for each cell in the list.
So the final cell you're using in the network is actually multiple (or just one) LSTM cells with dropout. But it all works the same from an achitectural viewpoint, just a more complicated graph in the cell.
Exercise: Below, use tf.contrib.rnn.BasicLSTMCell to create an LSTM cell. Then, add drop out to it with tf.contrib.rnn.DropoutWrapper. Finally, create multiple LSTM layers with tf.contrib.rnn.MultiRNNCell.
Here is a tutorial on building RNNs that will help you out.
End of explanation
with graph.as_default():
outputs, final_state = tf.nn.dynamic_rnn(cell, embed, initial_state=initial_state)
Explanation: RNN forward pass
<img src="assets/network_diagram.png" width=400px>
Now we need to actually run the data through the RNN nodes. You can use tf.nn.dynamic_rnn to do this. You'd pass in the RNN cell you created (our multiple layered LSTM cell for instance), and the inputs to the network.
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, initial_state=initial_state)
Above I created an initial state, initial_state, to pass to the RNN. This is the cell state that is passed between the hidden layers in successive time steps. tf.nn.dynamic_rnn takes care of most of the work for us. We pass in our cell and the input to the cell, then it does the unrolling and everything else for us. It returns outputs for each time step and the final_state of the hidden layer.
Exercise: Use tf.nn.dynamic_rnn to add the forward pass through the RNN. Remember that we're actually passing in vectors from the embedding layer, embed.
End of explanation
with graph.as_default():
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
Explanation: Output
We only care about the final output, we'll be using that as our sentiment prediction. So we need to grab the last output with outputs[:, -1], then calculate the cost from that and labels_.
End of explanation
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
Explanation: Validation accuracy
Here we can add a few nodes to calculate the accuracy which we'll use in the validation pass.
End of explanation
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
Explanation: Batching
This is a simple function for returning batches from our data. First it removes data such that we only have full batches. Then it iterates through the x and y arrays and returns slices out of those arrays with size [batch_size].
End of explanation
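A quick illustration of the generator (not part of the original notebook): each iteration yields one (x, y) pair with batch_size rows.
example_x, example_y = next(get_batches(train_x, train_y, batch_size))
print(example_x.shape, example_y.shape)   # expected: (500, 200) (500,)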
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
Explanation: Training
Below is the typical training code. If you want to do this yourself, feel free to delete all this code and implement it yourself. Before you run this, make sure the checkpoints directory exists.
End of explanation
test_acc = []
with tf.Session(graph=graph) as sess:
    saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
Explanation: Testing
End of explanation |
15,776 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step6: Copyright 2019 Google LLC
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
https
Step9: Search Algorithms
Here we implement both plain evolution and evolution + Progressive Dynamic Hurdles.
Step10: Experiments
Step11: Plain Evolution
Step12: Plain Evolution
Step13: Progressive Dynamic Hurdles
Finally, we run Progressive Dynamic Hurdles (PDH). By establishing a hurdle, PDH is able to utilize early observations to filter out flagrantly bad models, training only the most promising models for the high-cost maximum number of train steps. This perfect signal clarifies which of these promising models are truly the best and improves the search by generating candidates from the best parents.
Step14: Mean Fitness Comparison
To demonstrate the effectiveness of Progressive Dynamic Hurdles, we compare the mean top fitness of each algorithm, with each run 500 times. | Python Code:
DIM = 100 # Number of bits in the bit strings (i.e. the "models").
NOISE_STDEV = 0.01 # Standard deviation of the simulated training noise.
EARLY_SIGNAL_NOISE = 0.005 # Standard deviation of the noise added to earlier
# observations.
REDUCTION_FACTOR = 100.0 # The factor by which the number of train steps is
# reduced for earlier observations.
class Model(object):
  """A class representing a model.

  Attributes:
    arch: the architecture as an int representing a bit-string of length `DIM`.
        As a result, the integers are required to be less than `2**DIM`.
    observed_accuracy: the simulated validation accuracy observed for the model
        during the search. This may be either the accuracy after training for
        the maximum number of steps or the accuracy after training for 1/100 the
        maximum number of steps.
    true_accuracy: the simulated validation accuracy after the maximum train
        steps.
  """
def __init__(self):
self.arch = None
self.observed_accuracy = None
self.true_accuracy = None
def get_final_accuracy(arch):
  """Simulates training for the maximum number of steps and then evaluating.

  Args:
    arch: the architecture as an int representing a bit-string.
  """
accuracy = float(_sum_bits(arch)) / float(DIM)
accuracy += random.gauss(mu=0.0, sigma=NOISE_STDEV)
accuracy = 0.0 if accuracy < 0.0 else accuracy
accuracy = 1.0 if accuracy > 1.0 else accuracy
return accuracy
def get_early_accuracy(final_accuracy):
  """Simulates training for 1/100 the maximum steps and then evaluating.

  Args:
    final_accuracy: the accuracy of the model if trained for the maximum number
        of steps.
  """
observed_accuracy = final_accuracy/REDUCTION_FACTOR + random.gauss(mu=0,
sigma=EARLY_SIGNAL_NOISE)
observed_accuracy = 0.0 if observed_accuracy < 0.0 else observed_accuracy
observed_accuracy = 1.0 if observed_accuracy > 1.0 else observed_accuracy
return observed_accuracy
def _sum_bits(arch):
  """Returns the number of 1s in the bit string.

  Args:
    arch: an int representing the bit string.
  """
total = 0
for _ in range(DIM):
total += arch & 1
arch = (arch >> 1)
return total
import random
def random_architecture():
  """Returns a random architecture (bit-string) represented as an int."""
return random.randint(0, 2**DIM - 1)
def mutate_arch(parent_arch):
  """Computes the architecture for a child of the given parent architecture.

  Args:
    parent_arch: an int representing the architecture (bit-string) of the
        parent.

  Returns:
    An int representing the architecture (bit-string) of the child.
  """
position = random.randint(0, DIM - 1) # Index of the bit to flip.
# Flip the bit at position `position` in `child_arch`.
child_arch = parent_arch ^ (1 << position)
return child_arch
Explanation: Copyright 2019 Google LLC
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
Summary
This is an implementation of the Progressive Dynamic Hurdles (PDH) algorithm that was presented in the Evolved Transformer paper. A toy problem is used to compare the algorithm to plain evolution.
Toy Search Space
This is a toy problem meant to simulate the architecture search problem presented in the paper. It is adapted
from the toy problem presented by Real et al. (2018) with code based off of their Colab.
A key difference here is that there is not a single function used to compute all fitnesses. Instead there are two fitness functions, depending on how many steps a Model is allowed to "train" for:
1) get_final_accuracy(): simulates the fitness computation of a model that has been trained for a
hypothetical maximum number of train steps.
2) get_early_accuracy(): simulates the fitness computation of a model that has been trained for 1/100th
the hypothetical maximum number of train steps.
Having two different fitnesses for two different number of train steps is necessary to apply the PDH algorithm. In line with this, each Model also has two accuracies: an observed_accuracy, which is what we observe when the model is evaluated during the search, and a true_accuracy, which is the accuracy the model achieves when it is trained for the maximum number of train steps.
End of explanation
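A small illustration of the two fitness functions (a sketch, not part of the original Colab): the early observation is roughly the final accuracy divided by REDUCTION_FACTOR, plus noise.
arch = random_architecture()
final = get_final_accuracy(arch)
early = get_early_accuracy(final)
print("true accuracy: %.3f, early observation: %.3f" % (final, early))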
import collections
import random
import copy
def plain_evolution(cycles, population_size, sample_size, early_observation):
  """Plain evolution.

  Args:
    cycles: the number of cycles the search is run for.
    population_size: the size of the population.
    sample_size: the size of the sample for both parent selection and killing.
    early_observation: boolean. Whether or not we are observing the models early
        by evaluating them for 1/100th the maximum number of train steps.
  """
population = collections.deque()
history = [] # Not used by the algorithm, only used to report results.
# Initialize the population with random models.
while len(population) < population_size:
model = Model()
model.arch = random_architecture()
model.true_accuracy = get_final_accuracy(model.arch)
# If we are observing early, get the early accuracy that corresponds to the
# true_accuracy. Else, we are training each model for the maximum number of
# steps and so the observed_accuracy is the true_accuracy.
if early_observation:
model.observed_accuracy = get_early_accuracy(model.true_accuracy)
else:
model.observed_accuracy = model.true_accuracy
population.append(model)
history.append(model)
# Carry out evolution in cycles. Each cycle produces a model and removes
# another.
while len(history) < cycles:
# Sample randomly chosen models from the current population.
sample = random.sample(population, sample_size)
# The parent is the best model in the samples, according to their observed
# accuracy.
parent = max(sample, key=lambda i: i.observed_accuracy)
# Create the child model and store it.
child = Model()
child.arch = mutate_arch(parent.arch)
child.true_accuracy = get_final_accuracy(child.arch)
# If we are observing early, get the early accuracy that corresponds to the
# true_accuracy. Else, we are training each model for the maximum number of
# steps and so the observed_accuracy is the true_accuracy.
if early_observation:
child.observed_accuracy = get_early_accuracy(child.true_accuracy)
else:
child.observed_accuracy = child.true_accuracy
# Choose model to kill.
sample_indexes = random.sample(range(len(population)), sample_size)
min_fitness = float("inf")
kill_index = population_size
for sample_index in sample_indexes:
if population[sample_index].observed_accuracy < min_fitness:
min_fitness = population[sample_index].observed_accuracy
kill_index = sample_index
# Replace victim with child.
population[kill_index] = child
history.append(child)
return history, population
def pdh_evolution(train_resources, population_size, sample_size):
  """Evolution with PDH.

  Args:
    train_resources: the resources allotted for training. An early observation
      costs 1, while a maximum train step observation costs 100.
    population_size: the size of the population.
    sample_size: the size of the sample for both parent selection and killing.
  """
population = collections.deque()
history = [] # Not used by the algorithm, only used to report results.
resources_used = 0 # The number of resource units used.
# Initialize the population with random models.
while len(population) < population_size:
model = Model()
model.arch = random_architecture()
model.true_accuracy = get_final_accuracy(model.arch)
# Always initialize with the early observation, since no hurdle has been
# established.
model.observed_accuracy = get_early_accuracy(model.true_accuracy)
population.append(model)
history.append(model)
# Since we are only performing an early observation, we are only consuming
# 1 resource unit.
resources_used += 1
# Carry out evolution in cycles. Each cycle produces a model and removes
# another.
hurdle = None
while resources_used < train_resources:
# Sample randomly chosen models from the current population.
sample = random.sample(population, sample_size)
# The parent is the best model in the sample, according to the observed
# accuracy.
parent = max(sample, key=lambda i: i.observed_accuracy)
# Create the child model and store it.
child = Model()
child.arch = mutate_arch(parent.arch)
child.true_accuracy = get_final_accuracy(child.arch)
# Once the hurdle has been established, a model is trained for the maximum
# amount of train steps if it overcomes the hurdle value. Otherwise, it
# only trains for the lesser amount of train steps.
if hurdle:
child.observed_accuracy = get_early_accuracy(child.true_accuracy)
# Performing the early observation costs 1 resource unit.
resources_used += 1
if child.observed_accuracy > hurdle:
child.observed_accuracy = child.true_accuracy
# Now that the model has trained longer, we consume additional
# resource units.
resources_used += REDUCTION_FACTOR - 1
else:
child.observed_accuracy = get_early_accuracy(child.true_accuracy)
# Since we are only performing an early observation, we are only consuming
# 1 resource unit.
resources_used += 1
# Choose model to kill.
sample_indexes = random.sample(range(len(population)), sample_size)
min_fitness = float("inf")
kill_index = population_size
for sample_index in sample_indexes:
if population[sample_index].observed_accuracy < min_fitness:
min_fitness = population[sample_index].observed_accuracy
kill_index = sample_index
# Replace victim with child.
population[kill_index] = child
history.append(child)
# Create a hurdle, splitting resources such that the number of models
# trained before and after the hurdle are approximately even. Here, our
# approximation is assuming that every model after the hurdle trains for the
# maximum number of steps.
if not hurdle and resources_used >= int(train_resources/REDUCTION_FACTOR):
hurdle = 0
for model in population:
hurdle += model.observed_accuracy
hurdle /= len(population)
return history, population
Explanation: Search Algorithms
Here we implement both plain evolution and evolution + Progressive Dynamic Hurdles.
End of explanation
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
import seaborn as sns
TOTAL_RESOURCES = 10000 # Total number of resource units.
POPULATION_SIZE = 100 # The size of the population.
SAMPLE_SIZE = 10 # The size the subpopulation used for selecting parents and
# kill targets.
def graph_values(values, title, xlim, ylim):
plt.figure()
sns.set_style('white')
xvalues = range(len(values))
yvalues = values
ax = plt.gca()
dot_size = int(TOTAL_RESOURCES / xlim)
ax.scatter(
xvalues, yvalues, marker='.', facecolor=(0.0, 0.0, 0.0),
edgecolor=(0.0, 0.0, 0.0), linewidth=1, s=dot_size)
ax.xaxis.set_major_locator(ticker.LinearLocator(numticks=2))
ax.xaxis.set_major_formatter(ticker.ScalarFormatter())
ax.yaxis.set_major_locator(ticker.LinearLocator(numticks=2))
ax.yaxis.set_major_formatter(ticker.ScalarFormatter())
ax.set_title(title, fontsize=20)
fig = plt.gcf()
fig.set_size_inches(8, 6)
fig.tight_layout()
ax.tick_params(
axis='x', which='both', bottom=True, top=False, labelbottom=True,
labeltop=False, labelsize=14, pad=10)
ax.tick_params(
axis='y', which='both', left=True, right=False, labelleft=True,
labelright=False, labelsize=14, pad=5)
plt.xlabel('Number of Models Evaluated', labelpad=-16, fontsize=16)
plt.ylabel('Accuracy', labelpad=-30, fontsize=16)
plt.xlim(0, xlim + .05)
plt.ylim(0, ylim + .05)
sns.despine()
def graph_history(history):
observed_accuracies = [i.observed_accuracy for i in history]
true_accuracies = [i.true_accuracy for i in history]
graph_values(observed_accuracies, "Observed Accuracy",
xlim=len(history), ylim=max(observed_accuracies))
graph_values(true_accuracies, "True Accuracy",
xlim=len(history), ylim=max(true_accuracies))
Explanation: Experiments
End of explanation
history, _ = plain_evolution(
cycles=TOTAL_RESOURCES, population_size=POPULATION_SIZE,
sample_size=SAMPLE_SIZE,
early_observation=True)
graph_history(history)
Explanation: Plain Evolution: Early Observation
First, we run plain evolution with early observations. We get to observe many models (10K), but, while still useful, the accuracy signal is noisy, hurting the performance of the search.
End of explanation
history, _ = plain_evolution(
cycles=TOTAL_RESOURCES/REDUCTION_FACTOR, population_size=POPULATION_SIZE,
sample_size=SAMPLE_SIZE, early_observation=False)
graph_history(history)
Explanation: Plain Evolution: Maximum Step Observation
Next, we run plain evolution with observations after each model has been trained for the maximum number
of steps. The signal here is perfect, with the observed accuracy matching the true accuracy, but we see very few models since we are controlling for number of resources.
End of explanation
history, _ = pdh_evolution(train_resources=TOTAL_RESOURCES,
population_size=POPULATION_SIZE,
sample_size=SAMPLE_SIZE)
graph_history(history)
Explanation: Progressive Dynamic Hurdles
Finally, we run Progressive Dynamic Hurdles (PDH). By establishing a hurdle, PDH is able to utilize early observations to filter out flagrantly bad models, training only the most promising models for the high-cost maximum number of train steps. This perfect signal clarifies which of these promising models are truly the best and improves the search by generating candidates from the best parents.
End of explanation
import numpy as np
num_trials = 500
print("===========================")
print("Mean Top Fitness Comparison")
print("===========================")
max_fitnesses = []
for _ in range(num_trials):
_, population = plain_evolution(
cycles=TOTAL_RESOURCES, population_size=POPULATION_SIZE,
sample_size=SAMPLE_SIZE,
early_observation=True)
# Assume all models in the final population are fully evaluated
max_fitness = max([indiv.true_accuracy for indiv in population])
max_fitnesses.append(max_fitness)
max_fitnesses = np.array(max_fitnesses)
print("Early Observation Plain Evolution: %.4s ± %.4s" %
(np.mean(max_fitnesses), np.std(max_fitnesses)))
max_fitnesses = []
for _ in range(num_trials):
_, population = plain_evolution(
cycles=TOTAL_RESOURCES/REDUCTION_FACTOR, population_size=POPULATION_SIZE,
sample_size=SAMPLE_SIZE,
early_observation=False)
# Assume all models in the final population are fully evaluated
max_fitness = max([indiv.true_accuracy for indiv in population])
max_fitnesses.append(max_fitness)
max_fitnesses = np.array(max_fitnesses)
print("Max Step Observation Plain Evolution: %.4s ± %.4s" %
(np.mean(max_fitnesses), np.std(max_fitnesses)))
max_fitnesses = []
for _ in range(num_trials):
_, population = pdh_evolution(train_resources=TOTAL_RESOURCES,
population_size=POPULATION_SIZE,
sample_size=SAMPLE_SIZE)
# Assume all models in the final population are fully evaluated
max_fitness = max([indiv.true_accuracy for indiv in population])
max_fitnesses.append(max_fitness)
max_fitnesses = np.array(max_fitnesses)
print("Progressive Dynamic Hurdles: %.4s ± %.4s" %
(np.mean(max_fitnesses), np.std(max_fitnesses)))
Explanation: Mean Fitness Comparison
To demonstrate the effectiveness of Progressive Dynamic Hurdles, we compare the mean top fitness of each algorithm, with each run 500 times.
End of explanation |
15,777 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Plotly Alpha Shapes as Mesh3d instances
Starting with a finite set of 3D points, Plotly can generate a Mesh3d object, that depending on a key value can be the convex hull of that set, its Delaunay triangulation or an alpha set.
This notebook is devoted to the presentation of the alpha shape as a computational geometric object, its interpretation, and visualization with Plotly.
Alpha shape of a finite point set $S$ is a polytope whose structure depends only on the set $S$ and a parameter $\alpha$.
Although it is less known in comparison to other computational geometric objects, it has been used in many practical applications in pattern recognition, surface reconstruction, molecular structure modeling, porous media, astrophysics.
In order to understand how the algorithm underlying Mesh3d works, we present shortly a few notions of Computational Geometry.
Simplicial complexes and Delaunay triangulation
Let S be a finite set of 2D or 3D points. A point is called $0$-simplex or vertex. The convex hull of
Step1: If $T$ is the set of points defining a $k$-simplex, then any proper subset of $T$ defines an $\ell$-simplex, $\ell<k$.
These $\ell$-simplexes (or $\ell$-simplices) are called faces.
A 2-simplex has three $1$-simplexes, and three 0-simplexes as faces, whereas a tetrahedron has as faces three 2-simplexes, six 1-simplexes and four zero simplexes.
k-simplexes are building blocks for different structures in Computational Geometry, mainly for creating meshes from point clouds.
Let $S$ be a finite set in $\mathbb{R}^d$, $d=2,3$ (i.e. a set of 2D or 3D points). A collection $\mathcal{K}$ of k-simplexes, $0\leq k\leq d$, having as vertices the points of $S$,
is a simplicial complex if its simplexes have the following properties
Step2: Triangular meshes used in computer graphics are examples of simplicial complexes.
The underlying space of a simplicial complex, $\mathcal{K}$, denoted $|\mathcal{K}|$,
is the union of its simplexes, i.e. it is a region in plane or in the 3D space, depending on whether d=2 or 3.
A subcomplex of the simplicial complex $\mathcal{K}$ is a collection, $\mathcal{L}$, of simplexes in $\mathcal{K}$ that also form a simplicial complex.
The points of a finite set $S$ in $\mathbb{R}^2$ (respectively $\mathbb{R}^3$) are in general position if no $3$ (resp 4) points are collinear (coplanar), and no 4 (resp 5) points lie on the same circle (sphere).
A particular simplicial complex associated to a finite set of 2D or 3D points, in general position, is the Delaunay triangulation.
A triangulation of a finite point set $S \subset \mathbb{R}^2$ (or $\mathbb{R}^3$)
is a collection $\mathcal{T}$ of triangles (tetrahedra),
such that
Step3: Alpha shape of a finite set of points
The notion of Alpha Shape was introduced by Edelsbrunner with the aim to give a mathematical description of the shape of a point set.
In this notebook we give a constructive definition of this geometric structure. A more detailed approach of 3D alpha shapes can be found in the original paper.
An intuitive description of the alpha shape was given by Edelsbrunner and his coauthor
in a preprint of the last paper mentioned above
Step4: We notice that the Delaunay triangulation has as boundary a convex set (it is a triangulation of the convex hull
of the given point set).
Each $\alpha$-complex is obtained from the Delaunay triangulation, removing the triangles whose circumcircle has radius greater or equal to alpha.
In the last subplot the triangles of the $0.115$-complex are filled in with light blue. The filled in region is the underlying space of the $0.115$-complex.
The $0.115$-alpha shape of the given point set can be considered either the filled in region or its boundary.
This example illustrates that the underlying space of an $\alpha$-complex is neither convex nor necessarily connected. It can consist of many connected components (in our illustration above, $|\mathcal{C}_{0.115}|$ has three components).
In a family of alpha shapes, the parameter $\alpha$ controls the level of detail of the associated alpha shape. If $\alpha$ decreases to zero, the corresponding alpha shape degenerates to the point set, $S$, while if it tends to infinity the alpha shape tends to the convex hull of the set $S$.
Plotly Mesh3d
In order to generate the alpha shape of a given set of 3D points corresponding to a parameter $\alpha$,
the Delaunay triangulation or the convex hull, we define an instance of the go.Mesh3d class. The real value
of the key alphahull points out the mesh type to be generated
Step5: We notice in the subplots above that as alphahull increases, i.e. $\alpha$ decreases, some parts of the alpha shape shrink and
develop enclosed void regions. The last plotted alpha shape points out a polytope that contains faces of tetrahedra,
and patches of triangles.
In some cases as $\alpha$ varies it is also possible to develop components that are strings of edges and even isolated points.
Such experimental results suggested the use of alpha shapes in modeling molecular structure.
A search on WEB gives many results related to applications of alpha shapes in structural molecular biology.
Here
is an alpha shape illustrating a molecular-like structure associated to a point set of 5000 points.
Generating an alpha shape with Mesh3d
Step6: Load data
Step7: Define two traces
Step8: Generating the alpha shape of a set of 2D points
We construct the alpha shape of a set of 2D points from the Delaunay triangulation,
defined as a scipy.spatial.Delaunay object.
Step9: Compute the circumcenter and circumradius of a triangle (see their definitions here)
Step10: Filter out the Delaunay triangulation to get the $\alpha$-complex
Step11: Get data for the Plotly plot of a subcomplex of the Delaunay triangulation | Python Code:
from IPython.display import IFrame
IFrame('https://plot.ly/~empet/13475/', width=800, height=350)
Explanation: Plotly Alpha Shapes as Mesh3d instances
Starting with a finite set of 3D points, Plotly can generate a Mesh3d object, that depending on a key value can be the convex hull of that set, its Delaunay triangulation or an alpha set.
This notebook is devoted to the presentation of the alpha shape as a computational geometric object, its interpretation, and visualization with Plotly.
Alpha shape of a finite point set $S$ is a polytope whose structure depends only on the set $S$ and a parameter $\alpha$.
Although it is less known in comparison to other computational geometric objects, it has been used in many practical applications in pattern recognition, surface reconstruction, molecular structure modeling, porous media, astrophysics.
In order to understand how the algorithm underlying Mesh3d works, we present shortly a few notions of Computational Geometry.
Simplicial complexes and Delaunay triangulation
Let S be a finite set of 2D or 3D points. A point is called $0$-simplex or vertex. The convex hull of:
- two distinct points is a 1-simplex or edge;
- three non-collinear points is a 2-simplex or triangle;
- four non-coplanar points in $\mathbb{R}^3$ is a 3-simplex or tetrahedron;
End of explanation
IFrame('https://plot.ly/~empet/13503/', width=600, height=475)
Explanation: If $T$ is the set of points defining a $k$-simplex, then any proper subset of $T$ defines an $\ell$-simplex, $\ell<k$.
These $\ell$-simplexes (or $\ell$-simplices) are called faces.
A 2-simplex has three $1$-simplexes, and three 0-simplexes as faces, whereas a tetrahedron has as faces three 2-simplexes, six 1-simplexes and four zero simplexes.
k-simplexes are building blocks for different structures in Computational Geometry, mainly for creating meshes from point clouds.
Let $S$ be a finite set in $\mathbb{R}^d$, $d=2,3$ (i.e. a set of 2D or 3D points). A collection $\mathcal{K}$ of k-simplexes, $0\leq k\leq d$, having as vertices the points of $S$,
is a simplicial complex if its simplexes have the following properties:
1. If $\sigma$ is a simplex in $\mathcal{K}$, then all its faces are also simplexes in $\mathcal{K}$;
2. If $\sigma, \tau$ are two simplexes in $\mathcal{K}$, then their intersection is either empty or a face in both simplexes.
The next figure illustrates a simplicial complex(left), and a collection of $k$-simplexes (right), $0\leq k\leq 2$
that do not form a simplicial complex because the condition 2 in the definition above is violated.
End of explanation
IFrame('https://plot.ly/~empet/13497/', width=550, height=550)
Explanation: Triangular meshes used in computer graphics are examples of simplicial complexes.
The underlying space of a simplicial complex, $\mathcal{K}$, denoted $|\mathcal{K}|$,
is the union of its simplexes, i.e. it is a region in plane or in the 3D space, depending on whether d=2 or 3.
A subcomplex of the simplicial complex $\mathcal{K}$ is a collection, $\mathcal{L}$, of simplexes in $\mathcal{K}$ that also form a simplicial complex.
The points of a finite set $S$ in $\mathbb{R}^2$ (respectively $\mathbb{R}^3$) are in general position if no $3$ (resp 4) points are collinear (coplanar), and no 4 (resp 5) points lie on the same circle (sphere).
A particular simplicial complex associated to a finite set of 2D or 3D points, in general position, is the Delaunay triangulation.
A triangulation of a finite point set $S \subset \mathbb{R}^2$ (or $\mathbb{R}^3$)
is a collection $\mathcal{T}$ of triangles (tetrahedra),
such that:
1. The union of all triangles (tetrahedra) in $\mathcal{T}$ is the convex hull of $S$.
2. The union of all vertices of triangles (tetrahedra) in $\mathcal{T}$ is the set $S$.
3. For every distinct pair $\sigma, \tau \in \mathcal{T}$, the intersection $\sigma \cap \tau$ is either empty or a common face of $\sigma$ and $\tau$.
A Delaunay triangulation of the set $S\subset\mathbb{R}^2$ ($\mathbb{R}^3$) is a triangulation with the property
that the open balls bounded by the circumcircles (circumspheres) of the triangulation
triangles (tetrahedra) contain no point in $S$. One says that these balls are empty.
If the points of $S$ are in general position, then the Delaunay triangulation of $S$ is unique.
Here is an example of Delaunay triangulation of a set of ten 2D points. It illustrates the emptiness of two balls bounded by circumcircles.
End of explanation
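For readers who want to experiment with the construction directly, scipy can compute the Delaunay triangulation of a 2D point set. This small sketch is not part of the original notebook, but scipy.spatial.Delaunay is exactly what the 2D alpha-shape code further below builds on.
import numpy as np
from scipy.spatial import Delaunay
np.random.seed(0)
pts2d = np.random.rand(10, 2)   # ten random 2D points, in general position with probability one
tri2d = Delaunay(pts2d)
print(tri2d.simplices)          # each row lists the vertex indices of one Delaunay triangle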
IFrame('https://plot.ly/~empet/13479/', width=825, height=950)
Explanation: Alpha shape of a finite set of points
The notion of Alpha Shape was introduced by Edelsbrunner with the aim to give a mathematical description of the shape of a point set.
In this notebook we give a constructive definition of this geometric structure. A more detailed approach of 3D alpha shapes can be found in the original paper.
An intuitive description of the alpha shape was given by Edelsbrunner and his coauthor
in a preprint of the last paper mentioned above:
A huge mass of ice-cream fills a region in the 3D space,
and the point set $S$ consists of hard chocolate pieces spread in the ice-cream mass.
Using a sphere-formed ice-cream spoon we carve out the ice-cream so as to avoid bumping into
chocolate pieces. At the end of this operation the region containing the chocolate pieces
and the remaining ice cream is bounded by caps, arcs and points of chocolate. Straightening
all round faces to triangles and line segments we get the intuitive image of the
alpha shape of the point set $S$.
Now we give the steps of the computational alpha shape construction.
Let $S$ be a finite set of points from $\mathbb{R}^d$, in general position, $\mathcal{D}$ its Delaunay triangulation
and $\alpha$ a positive number.
Select the d-simplexes of $\mathcal{D}$ (i.e. triangles in the case d=2, respectively tetrahedra
for d=3) whose circumsphere has the radius less than $\alpha$. These simplexes and their faces form
a simplicial subcomplex of the Delaunay triangulation, $\mathcal{D}$.
It is denoted $\mathcal{C}_\alpha$, and called $\alpha$-complex.
The $\alpha$-shape of the set $S$ is defined by its authors, either as the underlying space of the $\alpha$-complex,
i.e. the union of all its simplexes or as the boundary of the $\alpha$-complex.
The boundary of the $\alpha$-complex is the subcomplex consisting in all k-simplexes, $0\leq k<d$, that are faces of a single $d$-simplex (these are called external faces).
In the ice-cream example the alpha shape was defined as the boundary of the alpha-complex.
The underlying space of the $\alpha$-complex is the region where the ice-cream spoon has no access, because its radius ($\alpha$) exceeds the radius of circumscribed spheres to tetrahedra formed by pieces of chocolate.
To get insight into the process of construction of an alpha shape we illustrate it first for a set of 2D points.
The following panel displays the Delaunay triangulation of a set of 2D points, and a sequence of $\alpha$-complexes (and alpha shapes):
End of explanation
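The construction above translates almost literally into code. As a rough sketch (not part of the original notebook, and mirroring the 2D filtering function defined later), a 3D alpha-complex can be extracted from scipy's Delaunay triangulation by keeping only the tetrahedra whose circumradius is smaller than alpha:
import numpy as np
from scipy.spatial import Delaunay
def tetra_circumradius(verts):
    # verts is a (4, 3) array holding the vertices of one tetrahedron
    A = 2.0 * (verts[1:] - verts[0])
    b = np.sum(verts[1:]**2, axis=1) - np.sum(verts[0]**2)
    center = np.linalg.solve(A, b)   # circumcenter of the tetrahedron
    return np.linalg.norm(center - verts[0])
def alpha_complex_3d(points, alpha):
    # keep the Delaunay tetrahedra with circumradius < alpha
    tri = Delaunay(points)
    return [s for s in tri.simplices if tetra_circumradius(points[s]) < alpha]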
IFrame('https://plot.ly/~empet/13481/', width=900, height=950)
Explanation: We notice that the Delaunay triangulation has as boundary a convex set (it is a triangulation of the convex hull
of the given point set).
Each $\alpha$-complex is obtained from the Delaunay triangulation, removing the triangles whose circumcircle has radius greater or equal to alpha.
In the last subplot the triangles of the $0.115$-complex are filled in with light blue. The filled in region is the underlying space of the $0.115$-complex.
The $0.115$-alpha shape of the given point set can be considered either the filled in region or its boundary.
This example illustrates that the underlying space of an $\alpha$-complex is neither convex nor necessarily connected. It can consist of many connected components (in our illustration above, $|\mathcal{C}_{0.115}|$ has three components).
In a family of alpha shapes, the parameter $\alpha$ controls the level of detail of the associated alpha shape. If $\alpha$ decreases to zero, the corresponding alpha shape degenerates to the point set, $S$, while if it tends to infinity the alpha shape tends to the convex hull of the set $S$.
Plotly Mesh3d
In order to generate the alpha shape of a given set of 3D points corresponding to a parameter $\alpha$,
the Delaunay triangulation or the convex hull, we define an instance of the go.Mesh3d class. The real value
of the key alphahull points out the mesh type to be generated:
alphahull=$1/\alpha$ generates the $\alpha$-shape, -1 corresponds to the Delaunay
triangulation and 0, to the convex hull of the point set.
The other parameters in the definition of a Mesh3d are given here.
Mesh3d generates and displays an $\alpha$-shape as the boundary of the $\alpha$-complex.
An intuitive idea on the topological structure modification, as $\alpha=1/$alphahull varies can be gained from the following three different alpha shapes of the same point set:
End of explanation
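As a quick illustration of the alphahull convention (assuming x, y, z are the coordinate lists loaded in the next cells), the same point cloud can be meshed in the three ways mentioned above:
import plotly.graph_objs as go
delaunay_mesh = go.Mesh3d(x=x, y=y, z=z, alphahull=-1)    # Delaunay triangulation
convex_hull = go.Mesh3d(x=x, y=y, z=z, alphahull=0)       # convex hull
alpha_shape = go.Mesh3d(x=x, y=y, z=z, alphahull=10.0)    # alpha shape with alpha = 1/10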
import numpy as np
import plotly.plotly as py
import plotly.graph_objs as go
from plotly import tools as tls
Explanation: We notice in the subplots above that as alphahull increases, i.e. $\alpha$ decreases, some parts of the alpha shape shrink and
develop enclosed void regions. The last plotted alpha shape points out a polytope that contains faces of tetrahedra,
and patches of triangles.
In some cases as $\alpha$ varies it is also possible to develop components that are strings of edges and even isolated points.
Such experimental results suggested the use of alpha shapes in modeling molecular structure.
A search on WEB gives many results related to applications of alpha shapes in structural molecular biology.
Here
is an alpha shape illustrating a molecular-like structure associated to a point set of 5000 points.
Generating an alpha shape with Mesh3d
End of explanation
pts = np.loadtxt('Data/data-file.txt')
x, y, z = zip(*pts)
Explanation: Load data:
End of explanation
points = go.Scatter3d(mode='markers',
name='',
x =x,
y= y,
z= z,
marker=dict(size=2, color='#458B00'))
simplexes = go.Mesh3d(alphahull =10.0,
name = '',
x =x,
y= y,
z= z,
color='#90EE90',
opacity=0.15)
axis = dict(showbackground=True,
backgroundcolor="rgb(245, 245, 245)",
gridcolor="rgb(255, 255, 255)",
gridwidth=2,
zerolinecolor="rgb(255, 255, 255)",
tickfont=dict(size=11),
titlefont =dict(size=12))
x_style = dict(axis, range=[-2.85, 4.25], tickvals=np.linspace(-2.85, 4.25, 5)[1:].round(1))
y_style = dict(axis, range=[-2.65, 1.32], tickvals=np.linspace(-2.65, 1.32, 4)[1:].round(1))
z_style = dict(axis, range=[-3.67,1.4], tickvals=np.linspace(-3.67, 1.4, 5).round(1))
layout = go.Layout(title='Alpha shape of a set of 3D points. Alpha=0.1',
width=500,
height=500,
scene=dict(xaxis=x_style,
yaxis=y_style,
zaxis=z_style))
fig = go.FigureWidget(data=[points, simplexes], layout=layout)
#fig
fig = go.FigureWidget(data=[points, simplexes], layout=layout)
#py.plot(fig, filename='3D-AlphaS-ex')
IFrame('https://plot.ly/~empet/13499/', width=550, height=550)
Explanation: Define two traces: one for plotting the point set and another for the alpha shape:
End of explanation
from scipy.spatial import Delaunay
def sq_norm(v): #squared norm
return np.linalg.norm(v)**2
Explanation: Generating the alpha shape of a set of 2D points
We construct the alpha shape of a set of 2D points from the Delaunay triangulation,
defined as a scipy.spatial.Delaunay object.
End of explanation
def circumcircle(points,simplex):
A = [points[simplex[k]] for k in range(3)]
M = [[1.0]*4]
M += [[sq_norm(A[k]), A[k][0], A[k][1], 1.0 ] for k in range(3)]
M = np.asarray(M, dtype=np.float32)
S = np.array([0.5*np.linalg.det(M[1:, [0,2,3]]), -0.5*np.linalg.det(M[1:, [0,1,3]])])
a = np.linalg.det(M[1:, 1:])
b = np.linalg.det(M[1:, [0,1,2]])
return S/a, np.sqrt(b/a + sq_norm(S)/a**2) #center=S/a, radius=np.sqrt(b/a+sq_norm(S)/a**2)
Explanation: Compute the circumcenter and circumradius of a triangle (see their definitions here):
End of explanation
def get_alpha_complex(alpha, points, simplexes):
#alpha is the parameter for the alpha shape
#points are given data points
#simplexes is the list of indices in the array of points
#that define 2-simplexes in the Delaunay triangulation
return filter(lambda simplex: circumcircle(points,simplex)[1] < alpha, simplexes)
pts = np.loadtxt('Data/data-ex-2d.txt')
tri = Delaunay(pts)
colors = ['#C0223B', '#404ca0', 'rgba(173,216,230, 0.5)']# colors for vertices, edges and 2-simplexes
Explanation: Filter out the Delaunay triangulation to get the $\alpha$-complex:
End of explanation
def Plotly_data(points, complex_s):
#points are the given data points,
#complex_s is the list of indices in the array of points defining 2-simplexes(triangles)
#in the simplicial complex to be plotted
X = []
Y = []
for s in complex_s:
X += [points[s[k]][0] for k in [0,1,2,0]] + [None]
Y += [points[s[k]][1] for k in [0,1,2,0]] + [None]
return X, Y
def make_trace(x, y, point_color=colors[0], line_color=colors[1]):# define the trace
#for an alpha complex
return go.Scatter(mode='markers+lines', #vertices and
#edges of the alpha-complex
name='',
x=x,
y=y,
marker=dict(size=6.5, color=point_color),
line=dict(width=1.25, color=line_color))
figure = tls.make_subplots(rows=1, cols=2,
subplot_titles=('Delaunay triangulation', 'Alpha shape, alpha=0.15'),
horizontal_spacing=0.1,
)
title = 'Delaunay triangulation and Alpha Complex/Shape for a Set of 2D Points'
figure.layout.update(title=title,
font=dict(family="Open Sans, sans-serif"),
showlegend=False,
hovermode='closest',
autosize=False,
width=800,
height=460,
margin=dict(l=65,
r=65,
b=85,
t=120));
axis_style = dict(showline=True,
mirror=True,
zeroline=False,
showgrid=False,
showticklabels=True,
range=[-0.1,1.1],
tickvals=[0, 0.2, 0.4, 0.6, 0.8, 1.0],
ticklen=5
)
for s in range(1,3):
figure.layout.update({'xaxis{}'.format(s): axis_style})
figure.layout.update({'yaxis{}'.format(s): axis_style})
alpha_complex = list(get_alpha_complex(0.15, pts, tri.simplices))
X, Y = Plotly_data(pts, tri.simplices)# get data for Delaunay triangulation
figure.append_trace(make_trace(X, Y), 1, 1)
X, Y = Plotly_data(pts, alpha_complex)# data for alpha complex
figure.append_trace(make_trace(X, Y), 1, 2)
shapes = []
for s in alpha_complex: #fill in the triangles of the alpha complex
A = pts[s[0]]
B = pts[s[1]]
C = pts[s[2]]
shapes.append(dict(path=f'M {A[0]}, {A[1]} L {B[0]}, {B[1]} L {C[0]}, {C[1]} Z',
fillcolor='rgba(173, 216, 230, 0.5)',
line=dict(color=colors[1], width=1.25),
xref='x2',
yref='y2'
))
figure.layout.shapes=shapes
py.plot(figure, filename='2D-AlphaS-ex', width=850)
IFrame('https://plot.ly/~empet/13501', width=800, height=460)
Explanation: Get data for the Plotly plot of a subcomplex of the Delaunay triangulation:
End of explanation |
15,778 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Dataset
Download the dataset and save it to a directory at your convenience. IMDB comments
Step1: Run the following lines when you run this notebook first time on your system.
Step2: Now let's see how to create text classifier using nltk and scikit learn.
Step3: Vader Sentiment Analysis
Step5: As we see above the accuracy is the range of 0.70. Vader model performed better for the positive sentiment compared to negative sentiment. Let's now use statistical model using TFIDF which generally perform better.
Sentiment Analysis using statistical model using TFIDF
Step7: Lets drop the following words from stopwords since they are likely good indicator of sentiment.
Step8: Let's estimate the memory requirment if the data is presented in dense matrix format
Step9: Byte size of the training doc sparse doc
Step10: Classification Model
Step11: Important terms for a document
Step13: Build Pipeline for classificaiton Model
Step14: Hashing Vectorizer
Convert a collection of text documents to a matrix of deterministic hash token (murmur3) occurrences
It turns a collection of text documents into a scipy.sparse matrix holding token occurrence counts (or binary occurrence information), possibly normalized as token frequencies if norm=’l1’ or projected on the euclidean unit sphere if norm=’l2’.
Advantages
- it is very low memory scalable to large datasets as there is no need to store a vocabulary dictionary in memory
- it is fast to pickle and un-pickle as it holds no state besides the constructor parameters
- it can be used in a streaming (partial fit) or parallel pipeline as there is no state computed during fit.
Disadvantages
- there is no way to compute the inverse transform (from feature indices to string feature names) which can be a problem when trying to introspect which features are most important to a model.
- there can be collisions | Python Code:
import pandas as pd # Used for dataframe functions
import json # parse json string
import nltk # Natural language toolkit for TFIDF etc.
from bs4 import BeautifulSoup # Parse html string .. to extract text
import re # Regex parser
import numpy as np # Linear algebra
from sklearn import * # machine learning
import matplotlib.pyplot as plt # Visualization
# Wordcloud does not work on Windows.
# Comment the below if you want to skip
from wordcloud import WordCloud # Word cloud visualization
import scipy #Sparse matrix
np.set_printoptions(precision=4)
pd.options.display.max_columns = 1000
pd.options.display.max_rows = 10
pd.options.display.float_format = lambda f: "%.4f" % f
%matplotlib inline
Explanation: Dataset
Download the dataset and save it to a directory at your convenience. IMDB comments
End of explanation
import nltk
nltk.download("punkt")
nltk.download("stopwords")
nltk.download("wordnet")
nltk.download('averaged_perceptron_tagger')
nltk.download("vader_lexicon")
print(nltk.__version__)
Explanation: Run the following lines when you run this notebook for the first time on your system.
End of explanation
# The following line does not work on Windows system
!head -n 1 /data/imdb-comments.json
data = []
with open("/data/imdb-comments.json", "r", encoding="utf8") as f:
for l in f.readlines():
data.append(json.loads(l))
comments = pd.DataFrame.from_dict(data)
comments.sample(10)
comments.info()
comments.label.value_counts()
comments.groupby(["label", "sentiment"]).content.count().unstack()
np.random.seed(1)
v = list(comments["content"].sample(1))[0]
v
comments.head()
comments["content"].values[0]
Explanation: Now let's see how to create a text classifier using nltk and scikit-learn.
End of explanation
from nltk.sentiment.vader import SentimentIntensityAnalyzer
sia = SentimentIntensityAnalyzer()
sia.polarity_scores(comments["content"].values[0])
def sentiment_score(text):
return sia.polarity_scores(text)["compound"]
sentiment_score(comments["content"].values[0])
%%time
comments["vader_score"] = comments["content"].apply(lambda text: sentiment_score(text))
comments["vader_sentiment"] = np.where(comments["vader_score"]>0, "pos", "neg")
comments.head()
comments.vader_sentiment.value_counts()
print(metrics.classification_report(comments["sentiment"], comments["vader_sentiment"]))
Explanation: Vader Sentiment Analysis
End of explanation
def preprocess(text):
# Remove html tags
text = BeautifulSoup(text.lower(), "html5lib").text
# Replace the occurrences of multiple consecutive non-word characters
# with a single space (" ")
text = re.sub(r"[\W]+", " ", text)
return text
preprocess(v)
%%time
# Apply the preprocessing logic to all comments
comments["content"] = comments["content"].apply(preprocess)
comments_train = comments[comments["label"] == "train"]
comments_train.sample(10)
comments_test = comments[comments["label"] == "test"]
comments_test.sample(10)
X_train = comments_train["content"].values
y_train = np.where(comments_train.sentiment == "pos", 1, 0)
X_test = comments_test["content"].values
y_test = np.where(comments_test.sentiment == "pos", 1, 0)
# http://snowball.tartarus.org/algorithms/porter/stemmer.html
# http://www.nltk.org/howto/stem.html
from nltk.stem.snowball import SnowballStemmer
from nltk.stem.porter import PorterStemmer
print(SnowballStemmer.languages)
porter = PorterStemmer()
snowball = SnowballStemmer("english")
lemmatizer = nltk.wordnet.WordNetLemmatizer()
values = []
for s in nltk.word_tokenize("""
revival
allowance
inference
relational
runner
runs
ran
has
having
generously
wasn't
leaves
swimming
relative
relating
"""):
values.append((s, porter.stem(s)
, snowball.stem(s), lemmatizer.lemmatize(s, "v")))
pd.DataFrame(values, columns = ["original", "porter", "snowball", "lemmatizer"])
stopwords = nltk.corpus.stopwords.words("english")
print(len(stopwords), stopwords)
Explanation: As we see above, the accuracy is in the range of 0.70. The Vader model performed better for positive sentiment than for negative sentiment. Let's now use a statistical model based on TFIDF, which generally performs better.
Sentiment Analysis using a statistical TFIDF model
End of explanation
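For reference, scikit-learn's default TF-IDF weighting uses a smoothed inverse document frequency, idf(t) = ln((1 + n) / (1 + df(t))) + 1, followed by l2 normalization of each row. A tiny sketch (not from the original notebook) of the idf part for one term of the toy corpus used below:
import numpy as np
def smooth_idf(n_docs, doc_freq):
    # scikit-learn's smooth_idf=True formula
    return np.log((1.0 + n_docs) / (1.0 + doc_freq)) + 1
print(smooth_idf(n_docs=3, doc_freq=2))   # e.g. "delhi" appears in 2 of the 3 toy documents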
stopwords.remove("no")
stopwords.remove("nor")
stopwords.remove("not")
sentence = """Financial Services revenues increased $0.5 billion, or 5%, primarily due to
lower impairments and volume growth, partially offset by lower gains."""
stemmer = SnowballStemmer("english")
#stemmer = PorterStemmer()
def my_tokenizer(s):
terms = nltk.word_tokenize(s.lower())
#terms = re.split("\s", s.lower())
#terms = [re.sub(r"[\.!]", "", v) for v in terms if len(v)>2]
#terms = [v for v in terms if len(v)>2]
terms = [v for v in terms if v not in stopwords]
terms = [stemmer.stem(w) for w in terms]
#terms = [term for term in terms if len(term) > 2]
return terms
print(my_tokenizer(sentence))
tfidf = feature_extraction.text.TfidfVectorizer(tokenizer=my_tokenizer, max_df = 0.95, min_df=0.0001
, ngram_range=(1, 2))
corpus = ["Today is Wednesday"
, "Delhi weather is hot today."
, "Delhi roads are not busy in the morning"]
doc_term_matrix = tfidf.fit_transform(corpus)
# returns term and index in the feature matrix
print("Vocabulary: ", tfidf.vocabulary_)
columns = [None] * len(tfidf.vocabulary_)
for term in tfidf.vocabulary_:
columns[tfidf.vocabulary_[term]] = term
columns
scores = pd.DataFrame(doc_term_matrix.toarray()
, columns= columns)
scores
X_train_tfidf = tfidf.fit_transform(X_train)
X_test_tfidf = tfidf.transform(X_test)
X_test_tfidf.shape, y_test.shape, X_train_tfidf.shape, y_train.shape
Explanation: Let's drop the following words from the stopwords list since they are likely good indicators of sentiment.
End of explanation
cell_count = np.product(X_train_tfidf.shape)
bytes = cell_count * 4
GBs = bytes / (1024 ** 3)
GBs
sparsity = 1 - X_train_tfidf.count_nonzero() / cell_count
sparsity
1 - X_train_tfidf.nnz / cell_count
print("Type of doc_term_matrix", type(X_train_tfidf))
Explanation: Let's estimate the memory requirement if the data were presented in dense matrix format
End of explanation
print(X_train_tfidf.data.nbytes / (1024.0 ** 3), "GB")
Explanation: Byte size of the sparse training document-term matrix
End of explanation
%%time
lr = linear_model.LogisticRegression(C = 0.6, random_state = 1
, n_jobs = 8, solver="saga")
lr.fit(X_train_tfidf, y_train)
y_train_pred = lr.predict(X_train_tfidf)
y_test_pred = lr.predict(X_test_tfidf)
print("Training accuracy: ", metrics.accuracy_score(y_train, y_train_pred))
print("Test accuracy: ", metrics.accuracy_score(y_test, y_test_pred))
fpr, tpr, thresholds = metrics.roc_curve(y_test,
lr.predict_proba(X_test_tfidf)[:, [1]])
auc = metrics.auc(fpr, tpr)
plt.plot(fpr, tpr)
plt.ylim(0, 1)
plt.xlim(0, 1)
plt.plot([0,1], [0,1], ls = "--", color = "k")
plt.xlabel("False Postive Rate")
plt.ylabel("True Positive Rate")
plt.title("ROC Curve, auc: %.4f" % auc);
%%time
from sklearn import naive_bayes, ensemble
bayes = naive_bayes.MultinomialNB(alpha=1)
bayes.fit(X_train_tfidf, y_train)
print("accuracy: ", bayes.score(X_test_tfidf, y_test))
%%time
est = tree.DecisionTreeClassifier()
est.fit(X_train_tfidf, y_train)
print("accuracy: ", est.score(X_test_tfidf, y_test))
columns = [None] * len(tfidf.vocabulary_)
for term in tfidf.vocabulary_:
columns[tfidf.vocabulary_[term]] = term
result = pd.DataFrame({"feature": columns
, "importance": est.feature_importances_})
result = result.sort_values("importance", ascending = False)
result = result[result.importance > 0.0]
print("Top 50 terms: ", list(result.feature[:50]))
Explanation: Classification Model
End of explanation
vocab_by_term = tfidf.vocabulary_
vocab_by_idx = dict({(vocab_by_term[term], term)
for term in vocab_by_term})
str(vocab_by_term)[:100]
str(vocab_by_idx)[:100]
idx = 5
print("Content:\n", X_train[idx])
row = X_train_tfidf[idx]
terms = [(vocab_by_idx[row.indices[i]], row.data[i])
for i, term in enumerate(row.indices)]
pd.Series(dict(terms)).sort_values(ascending = False)
idx = 50
row = X_train_tfidf[idx]
terms = [(vocab_by_idx[row.indices[i]], row.data[i])
for i, term in enumerate(row.indices)]
top_terms= list(pd.Series(dict(terms))\
.sort_values(ascending = False)[:50].index)
wc = WordCloud(background_color="white",
width=500, height=500, max_words=50).generate("+".join(top_terms))
plt.figure(figsize=(10, 10))
plt.imshow(wc)
plt.axis("off");
Explanation: Important terms for a document
End of explanation
%%time
tfidf =feature_extraction.text.TfidfVectorizer(
tokenizer=my_tokenizer
, stop_words = stopwords
, ngram_range=(1, 2)
)
pipe = pipeline.Pipeline([
("tfidf", tfidf),
("est", linear_model.LogisticRegression(C = 1.0, random_state = 1
, n_jobs = 8, solver="saga"))
])
pipe.fit(X_train, y_train)
import pickle
with open("/tmp/model.pkl", "wb") as f:
pickle.dump(pipe, f)
!ls -lh /tmp/model.pkl
with open("/tmp/model.pkl", "rb") as f:
model = pickle.load(f)
doc1 = """when we started watching this series on
cable i had no idea how addictive it would be
even when you hate a character you hold back because
they are so beautifully developed you can almost
understand why they react to frustration fear greed
or temptation the way they do it s almost as if the
viewer is experiencing one of christopher s learning
curves i can t understand why adriana would put up with
christopher s abuse of her verbally physically and
emotionally but i just have to read the newspaper to
see how many women can and do tolerate such behavior
carmella has a dream house endless supply of expensive
things but i m sure she would give it up for a loving
and faithful husband or maybe not that s why i watch
it doesn t matter how many times you watch an episode
you can find something you missed the first five times
we even watch episodes out of sequence watch season 1
on late night with commercials but all the language a e
with language censored reruns on the movie network whenever
they re on we re there we ve been totally spoiled now i also
love the malaprop s an albacore around my neck is my favorite of
johnny boy when these jewels have entered our family vocabulary
it is a sign that i should get a life i will when the series
ends and i have collected all the dvd s and put the collection
in my will"""
doc1 = preprocess(doc1)
model.predict_proba(np.array([doc1]))[:, 1]
Explanation: Build Pipeline for Classification Model
End of explanation
hashing_vectorizer = feature_extraction.text.HashingVectorizer(n_features=2 ** 3
, tokenizer=my_tokenizer, ngram_range=(1, 2))
corpus = ["Today is Wednesday"
, "Delhi weather is hot today."
, "Delhi roads are not busy in the morning"]
doc_term_matrix = hashing_vectorizer.fit_transform(corpus)
pd.DataFrame(doc_term_matrix.toarray()) # Each cell is normalized (l2) row-wise
%%time
n_features = int(X_train_tfidf.shape[1] * 0.8)
hashing_vectorizer = feature_extraction.text.HashingVectorizer(n_features=n_features
, tokenizer=my_tokenizer, ngram_range=(1, 2))
X_train_hash = hashing_vectorizer.fit_transform(X_train)
X_test_hash = hashing_vectorizer.transform(X_test)
X_train_hash
X_train_hash.shape, X_test_hash.shape
print(X_train_hash.data.nbytes / (1024.0 ** 3), "GB")
%%time
lr = linear_model.LogisticRegression(C = 1.0, random_state = 1,
solver = "liblinear")
lr.fit(X_train_hash, y_train)
y_train_pred = lr.predict(X_train_hash)
y_test_pred = lr.predict(X_test_hash)
print("Training accuracy: ", metrics.accuracy_score(y_train, y_train_pred))
print("Test accuracy: ", metrics.accuracy_score(y_test, y_test_pred))
print(metrics.classification_report(y_test, y_test_pred))
Explanation: Hashing Vectorizer
Convert a collection of text documents to a matrix of deterministic hash token (murmur3) occurrences
It turns a collection of text documents into a scipy.sparse matrix holding token occurrence counts (or binary occurrence information), possibly normalized as token frequencies if norm=’l1’ or projected on the euclidean unit sphere if norm=’l2’.
Advantages
- it is very low memory scalable to large datasets as there is no need to store a vocabulary dictionary in memory
- it is fast to pickle and un-pickle as it holds no state besides the constructor parameters
- it can be used in a streaming (partial fit) or parallel pipeline as there is no state computed during fit.
Disadvantages
- there is no way to compute the inverse transform (from feature indices to string feature names) which can be a problem when trying to introspect which features are most important to a model.
- there can be collisions: distinct tokens can be mapped to the same feature index. However in practice this is rarely an issue if n_features is large enough (e.g. 2 ** 18 for text classification problems).
- no IDF weighting as this would render the transformer stateful.
End of explanation |
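To see the collision caveat concretely, one can hash a tiny vocabulary into very few buckets. This sketch is not part of the original notebook and only illustrates the idea:
from sklearn.feature_extraction.text import HashingVectorizer
hv = HashingVectorizer(n_features=4)
X = hv.transform(["today is hot", "delhi roads are busy"])
print(X.toarray())   # with only 4 buckets, distinct tokens inevitably end up sharing columns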
15,779 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
Neural Machine Translation with Attention
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https
Step1: Download and prepare the dataset
We'll use a language dataset provided by http
Step2: Limit the size of the dataset to experiment faster (optional)
Training on the complete dataset of >100,000 sentences will take a long time. To train faster, we can limit the size of the dataset to 30,000 sentences (of course, translation quality degrades with less data)
Step3: Create a tf.data dataset
Step4: Write the encoder and decoder model
Here, we'll implement an encoder-decoder model with attention which you can read about in the TensorFlow Neural Machine Translation (seq2seq) tutorial. This example uses a more recent set of APIs. This notebook implements the attention equations from the seq2seq tutorial. The following diagram shows that each input word is assigned a weight by the attention mechanism, which is then used by the decoder to predict the next word in the sentence.
<img src="https
Step5: Define the optimizer and the loss function
Step6: Training
Pass the input through the encoder, which returns the encoder output and the encoder hidden state.
The encoder output, the encoder hidden state and the decoder input (which is the start token) are passed to the decoder.
The decoder returns the predictions and the decoder hidden state.
The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.
Use teacher forcing to decide the next input to the decoder.
Teacher forcing is the technique where the target word is passed as the next input to the decoder.
The final step is to calculate the gradients, apply them to the optimizer and backpropagate.
Step7: Translate
The evaluate function is similar to the training loop, except we don't use teacher forcing here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output.
Stop predicting when the model predicts the end token.
And store the attention weights for every time step.
Note | Python Code:
from __future__ import absolute_import, division, print_function
# Import TensorFlow >= 1.9 and enable eager execution
import tensorflow as tf
tf.enable_eager_execution()
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
import unicodedata
import re
import numpy as np
import os
import time
print(tf.__version__)
Explanation: Copyright 2018 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License").
Neural Machine Translation with Attention
<table class="tfo-notebook-buttons" align="left"><td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td><td>
<a target="_blank" href="https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/nmt_with_attention/nmt_with_attention.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a></td></table>
This notebook trains a sequence to sequence (seq2seq) model for Spanish to English translation using tf.keras and eager execution. This is an advanced example that assumes some knowledge of sequence to sequence models.
After training the model in this notebook, you will be able to input a Spanish sentence, such as "¿todavia estan en casa?", and return the English translation: "are you still at home?"
The translation quality is reasonable for a toy example, but the generated attention plot is perhaps more interesting. This shows which parts of the input sentence have the model's attention while translating:
<img src="https://tensorflow.org/images/spanish-english.png" alt="spanish-english attention plot">
Note: This example takes approximately 10 minutes to run on a single P100 GPU.
End of explanation
# Download the file
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = os.path.dirname(path_to_zip)+"/spa-eng/spa.txt"
# Converts the unicode file to ascii
def unicode_to_ascii(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
def preprocess_sentence(w):
w = unicode_to_ascii(w.lower().strip())
# creating a space between a word and the punctuation following it
# eg: "he is a boy." => "he is a boy ."
# Reference:- https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation
w = re.sub(r"([?.!,¿])", r" \1 ", w)
w = re.sub(r'[" "]+', " ", w)
# replacing everything with space except (a-z, A-Z, ".", "?", "!", ",")
w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w)
w = w.rstrip().strip()
# adding a start and an end token to the sentence
# so that the model knows when to start and stop predicting.
w = '<start> ' + w + ' <end>'
return w
# 1. Remove the accents
# 2. Clean the sentences
# 3. Return word pairs in the format: [ENGLISH, SPANISH]
def create_dataset(path, num_examples):
lines = open(path, encoding='UTF-8').read().strip().split('\n')
word_pairs = [[preprocess_sentence(w) for w in l.split('\t')] for l in lines[:num_examples]]
return word_pairs
# This class creates a word -> index mapping (e.g., "dad" -> 5) and vice-versa
# (e.g., 5 -> "dad") for each language,
class LanguageIndex():
def __init__(self, lang):
self.lang = lang
self.word2idx = {}
self.idx2word = {}
self.vocab = set()
self.create_index()
def create_index(self):
for phrase in self.lang:
self.vocab.update(phrase.split(' '))
self.vocab = sorted(self.vocab)
self.word2idx['<pad>'] = 0
for index, word in enumerate(self.vocab):
self.word2idx[word] = index + 1
for word, index in self.word2idx.items():
self.idx2word[index] = word
def max_length(tensor):
return max(len(t) for t in tensor)
def load_dataset(path, num_examples):
# creating cleaned input, output pairs
pairs = create_dataset(path, num_examples)
# index language using the class defined above
inp_lang = LanguageIndex(sp for en, sp in pairs)
targ_lang = LanguageIndex(en for en, sp in pairs)
# Vectorize the input and target languages
# Spanish sentences
input_tensor = [[inp_lang.word2idx[s] for s in sp.split(' ')] for en, sp in pairs]
# English sentences
target_tensor = [[targ_lang.word2idx[s] for s in en.split(' ')] for en, sp in pairs]
# Calculate max_length of input and output tensor
# Here, we'll set those to the longest sentence in the dataset
max_length_inp, max_length_tar = max_length(input_tensor), max_length(target_tensor)
# Padding the input and output tensor to the maximum length
input_tensor = tf.keras.preprocessing.sequence.pad_sequences(input_tensor,
maxlen=max_length_inp,
padding='post')
target_tensor = tf.keras.preprocessing.sequence.pad_sequences(target_tensor,
maxlen=max_length_tar,
padding='post')
return input_tensor, target_tensor, inp_lang, targ_lang, max_length_inp, max_length_tar
Explanation: Download and prepare the dataset
We'll use a language dataset provided by http://www.manythings.org/anki/. This dataset contains language translation pairs in the format:
May I borrow this book? ¿Puedo tomar prestado este libro?
There are a variety of languages available, but we'll use the English-Spanish dataset. For convenience, we've hosted a copy of this dataset on Google Cloud, but you can also download your own copy. After downloading the dataset, here are the steps we'll take to prepare the data:
Add a start and end token to each sentence.
Clean the sentences by removing special characters.
Create a word index and reverse word index (dictionaries mapping from word → id and id → word).
Pad each sentence to a maximum length.
End of explanation
# Try experimenting with the size of that dataset
num_examples = 30000
input_tensor, target_tensor, inp_lang, targ_lang, max_length_inp, max_length_targ = load_dataset(path_to_file, num_examples)
# Creating training and validation sets using an 80-20 split
input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2)
# Show length
len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val)
Explanation: Limit the size of the dataset to experiment faster (optional)
Training on the complete dataset of >100,000 sentences will take a long time. To train faster, we can limit the size of the dataset to 30,000 sentences (of course, translation quality degrades with less data):
End of explanation
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 64
embedding_dim = 256
units = 1024
vocab_inp_size = len(inp_lang.word2idx)
vocab_tar_size = len(targ_lang.word2idx)
dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE)
dataset = dataset.apply(tf.contrib.data.batch_and_drop_remainder(BATCH_SIZE))
Explanation: Create a tf.data dataset
End of explanation
def gru(units):
# If you have a GPU, we recommend using CuDNNGRU(provides a 3x speedup than GRU)
# the code automatically does that.
if tf.test.is_gpu_available():
return tf.keras.layers.CuDNNGRU(units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
else:
return tf.keras.layers.GRU(units,
return_sequences=True,
return_state=True,
recurrent_activation='sigmoid',
recurrent_initializer='glorot_uniform')
class Encoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):
super(Encoder, self).__init__()
self.batch_sz = batch_sz
self.enc_units = enc_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = gru(self.enc_units)
def call(self, x, hidden):
x = self.embedding(x)
output, state = self.gru(x, initial_state = hidden)
return output, state
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.enc_units))
class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):
super(Decoder, self).__init__()
self.batch_sz = batch_sz
self.dec_units = dec_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = gru(self.dec_units)
self.fc = tf.keras.layers.Dense(vocab_size)
# used for attention
self.W1 = tf.keras.layers.Dense(self.dec_units)
self.W2 = tf.keras.layers.Dense(self.dec_units)
self.V = tf.keras.layers.Dense(1)
def call(self, x, hidden, enc_output):
# enc_output shape == (batch_size, max_length, hidden_size)
# hidden shape == (batch_size, hidden size)
# hidden_with_time_axis shape == (batch_size, 1, hidden size)
# we are doing this to perform addition to calculate the score
hidden_with_time_axis = tf.expand_dims(hidden, 1)
# score shape == (batch_size, max_length, hidden_size)
score = tf.nn.tanh(self.W1(enc_output) + self.W2(hidden_with_time_axis))
# attention_weights shape == (batch_size, max_length, 1)
# we get 1 at the last axis because we are applying score to self.V
attention_weights = tf.nn.softmax(self.V(score), axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * enc_output
context_vector = tf.reduce_sum(context_vector, axis=1)
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x)
# output shape == (batch_size * 1, hidden_size) -- the decoder processes one time step at a time
output = tf.reshape(output, (-1, output.shape[2]))
# x shape after the dense layer == (batch_size * 1, vocab)
x = self.fc(output)
return x, state, attention_weights
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.dec_units))
encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)
Explanation: Write the encoder and decoder model
Here, we'll implement an encoder-decoder model with attention which you can read about in the TensorFlow Neural Machine Translation (seq2seq) tutorial. This example uses a more recent set of APIs. This notebook implements the attention equations from the seq2seq tutorial. The following diagram shows that each input word is assigned a weight by the attention mechanism, which is then used by the decoder to predict the next word in the sentence.
<img src="https://www.tensorflow.org/images/seq2seq/attention_mechanism.jpg" width="500" alt="attention mechanism">
The input is put through an encoder model which gives us the encoder output of shape (batch_size, max_length, hidden_size) and the encoder hidden state of shape (batch_size, hidden_size).
Here are the equations that are implemented:
<img src="https://www.tensorflow.org/images/seq2seq/attention_equation_0.jpg" alt="attention equation 0" width="800">
<img src="https://www.tensorflow.org/images/seq2seq/attention_equation_1.jpg" alt="attention equation 1" width="800">
We're using Bahdanau attention. Let's decide on notation before writing the simplified form:
FC = Fully connected (dense) layer
EO = Encoder output
H = hidden state
X = input to the decoder
And the pseudo-code:
score = FC(tanh(FC(EO) + FC(H)))
attention weights = softmax(score, axis = 1). Softmax by default is applied on the last axis but here we want to apply it on the 1st axis, since the shape of score is (batch_size, max_length, hidden_size). Max_length is the length of our input. Since we are trying to assign a weight to each input, softmax should be applied on that axis.
context vector = sum(attention weights * EO, axis = 1). Same reason as above for choosing axis as 1.
embedding output = The input to the decoder X is passed through an embedding layer.
merged vector = concat(embedding output, context vector)
This merged vector is then given to the GRU
The shapes of all the vectors at each step have been specified in the comments in the code:
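As an extra sanity check (not part of the original tutorial, and assuming the Encoder/Decoder classes and hyper-parameters defined in this notebook), you can run a single batch through both networks and print the shapes:
sample_hidden = encoder.initialize_hidden_state()
sample_inp, _ = next(iter(dataset))
sample_output, sample_hidden = encoder(sample_inp, sample_hidden)
print(sample_output.shape)  # (batch_size, max_length, units)
sample_dec_input = tf.expand_dims([targ_lang.word2idx['<start>']] * BATCH_SIZE, 1)
sample_logits, _, sample_attn = decoder(sample_dec_input, sample_hidden, sample_output)
print(sample_logits.shape, sample_attn.shape)  # (batch_size, vocab_tar_size) and (batch_size, max_length, 1)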
End of explanation
optimizer = tf.train.AdamOptimizer()
def loss_function(real, pred):
mask = 1 - np.equal(real, 0)
loss_ = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=real, logits=pred) * mask
return tf.reduce_mean(loss_)
Explanation: Define the optimizer and the loss function
End of explanation
EPOCHS = 10
for epoch in range(EPOCHS):
start = time.time()
hidden = encoder.initialize_hidden_state()
total_loss = 0
for (batch, (inp, targ)) in enumerate(dataset):
loss = 0
with tf.GradientTape() as tape:
enc_output, enc_hidden = encoder(inp, hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word2idx['<start>']] * BATCH_SIZE, 1)
# Teacher forcing - feeding the target as the next input
for t in range(1, targ.shape[1]):
# passing enc_output to the decoder
predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
loss += loss_function(targ[:, t], predictions)
# using teacher forcing
dec_input = tf.expand_dims(targ[:, t], 1)
total_loss += (loss / int(targ.shape[1]))
variables = encoder.variables + decoder.variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables), tf.train.get_or_create_global_step())
if batch % 100 == 0:
print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1,
batch,
loss.numpy() / int(targ.shape[1])))
print('Epoch {} Loss {:.4f}'.format(epoch + 1,
total_loss/len(input_tensor)))
print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
Explanation: Training
Pass the input through the encoder, which returns the encoder output and the encoder hidden state.
The encoder output, encoder hidden state and the decoder input (which is the start token) is passed to the decoder.
The decoder returns the predictions and the decoder hidden state.
The decoder hidden state is then passed back into the model and the predictions are used to calculate the loss.
Use teacher forcing to decide the next input to the decoder.
Teacher forcing is the technique where the target word is passed as the next input to the decoder.
The final step is to calculate the gradients and apply them to the model variables via the optimizer.
End of explanation
def evaluate(sentence, encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ):
attention_plot = np.zeros((max_length_targ, max_length_inp))
sentence = preprocess_sentence(sentence)
inputs = [inp_lang.word2idx[i] for i in sentence.split(' ')]
inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs], maxlen=max_length_inp, padding='post')
inputs = tf.convert_to_tensor(inputs)
result = ''
hidden = [tf.zeros((1, units))]
enc_out, enc_hidden = encoder(inputs, hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word2idx['<start>']], 0)
for t in range(max_length_targ):
predictions, dec_hidden, attention_weights = decoder(dec_input, dec_hidden, enc_out)
# storing the attention weights to plot later on
attention_weights = tf.reshape(attention_weights, (-1, ))
attention_plot[t] = attention_weights.numpy()
predicted_id = tf.multinomial(tf.exp(predictions), num_samples=1)[0][0].numpy()
result += targ_lang.idx2word[predicted_id] + ' '
if targ_lang.idx2word[predicted_id] == '<end>':
return result, sentence, attention_plot
# the predicted ID is fed back into the model
dec_input = tf.expand_dims([predicted_id], 0)
return result, sentence, attention_plot
# function for plotting the attention weights
def plot_attention(attention, sentence, predicted_sentence):
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(1, 1, 1)
ax.matshow(attention, cmap='viridis')
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
plt.show()
def translate(sentence, encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ):
result, sentence, attention_plot = evaluate(sentence, encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
print('Input: {}'.format(sentence))
print('Predicted translation: {}'.format(result))
attention_plot = attention_plot[:len(result.split(' ')), :len(sentence.split(' '))]
plot_attention(attention_plot, sentence.split(' '), result.split(' '))
translate('hace mucho frio aqui.', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
translate('esta es mi vida.', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
translate('¿todavia estan en casa?', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
# wrong translation
translate('trata de averiguarlo.', encoder, decoder, inp_lang, targ_lang, max_length_inp, max_length_targ)
Explanation: Translate
The evaluate function is similar to the training loop, except we don't use teacher forcing here. The input to the decoder at each time step is its previous predictions along with the hidden state and the encoder output.
Stop predicting when the model predicts the end token.
And store the attention weights for every time step.
Note: The encoder output is calculated only once for one input.
End of explanation |
15,780 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Homework 10 Key
CHE 116
Step1: 1. Conceptual Questions
Describe the general process for parametric hypothesis tests.
Why would you choose a non-parametric hypothesis test over a parametric one?
Why would you choose a parametric hypothesis test over a non-parametric one?
If you do not reject the null hypothesis, does that mean you've proved it?
1.1
You compute the end-points of an interval with values as extreme as or more extreme than your sample data. You integrate the area of this interval to obtain your p-value. If the p-value is less than your significance threshold, you reject the null hypothesis.
1.2
To avoid assuming normal or other distribution
1.3
A parametric test can show significance with small amounts of data.
1.4
No
2. Short Answer Questions
If your p-value is 0.4 and $\alpha = 0.1$, should you reject the null hypothesis?
What is your p-value if your $T$-value is -2 in the two-tailed/two-sided $t$-test with a DOF of 4?
For a one-sample $zM$ test, what is the minimum number of standard deviations away from the population mean a sample should be to reject the null hypothesis with $\alpha = 0.05$?
For an N-sample $zM$ test, what is the minimum number of standard deviations away from the population mean a sample should be to reject the null hypothesis with $\alpha = 0.05$ in terms of $N$?
In a Poisson hypothesis test, what is the p-value if $\mu = 4.3$ and the sample is 8?
What is the standard error for $\bar{x} = 4$, $\sigma_x = 0.4$ and $N = 11$?
2.1
No
2.2
Step2: 2.3
Step3: 2.4
$$
1.96 = \frac{\sqrt{N}\bar{x}}{\sigma}
$$
You should be $\frac{1.96}{\sqrt{N}}$ standard deviations away.
2.5
Step4: 2.6
Step5: 3. Choose the hypothesis test
State which hypothesis test best fits the example below and state the null hypothesis. You can justify your answer if you feel like multiple tests fit.
You know that coffee should be brewed at 186 $^\circ{}$F. You measure coffee from Starbucks 10 times over a week and want to know if they're brewing at the correct temperature.
You believe that the real estate market in SF is the same as NYC. You gather 100 home prices from both markets to compare them.
Australia banned most guns in 2002. You compare homicide rates before and after this date.
A number of states have recently legalized recreational marijuana. You gather teen drug use data for the year prior and two years after the legislation took effect.
You think your mail is being stolen. You know that you typically get five pieces of mail on Wednesdays, but this Wednesday you got no mail.
3.1
t-test
Null
Step6: 4.2
Poisson
The number of accidents is from the population distribution
0.032
Reject
Yes, there is a significant difference
Step7: 4.3
t-test
The new bills are from the population distribution of previous bills
0.09
Do not reject
No, the new bill is not significantly different
Step8: 5. Exponential Test (5 Bonus Points)
Your dog typically greets you within 10 seconds of coming home. Is it significant that your dog took 16 seconds? | Python Code:
import scipy.stats as ss
import numpy as np
Explanation: Homework 10 Key
CHE 116: Numerical Methods and Statistics
4/3/2019
End of explanation
import scipy.stats as ss
ss.t.cdf(-2, 4) * 2
Explanation: 1. Conceptual Questions
Describe the general process for parametric hypothesis tests.
Why would you choose a non-parametric hypothesis test over a parametric one?
Why would you choose a parametric hypothesis test over a non-parametric one?
If you do not reject the null hypothesis, does that mean you've proved it?
1.1
You compute the end-points of an interval with values as extreme as or more extreme than your sample data. You integrate the area of this interval to obtain your p-value. If the p-value is less than your significance threshold, you reject the null hypothesis.
1.2
To avoid assuming normal or other distribution
1.3
A parametric test can show significance with small amounts of data.
1.4
No
2. Short Answer Questions
If your p-value is 0.4 and $\alpha = 0.1$, should you reject the null hypothesis?
What is your p-value if your $T$-value is -2 in the two-tailed/two-sided $t$-test with a DOF of 4?
For a one-sample $zM$ test, what is the minimum number of standard deviations away from the population mean a sample should be to reject the null hypothesis with $\alpha = 0.05$?
For an N-sample $zM$ test, what is the minimum number of standard deviations away from the population mean a sample should be to reject the null hypothesis with $\alpha = 0.05$ in terms of $N$?
In a Poisson hypothesis test, what is the p-value if $\mu = 4.3$ and the sample is 8?
What is the standard error for $\bar{x} = 4$, $\sigma_x = 0.4$ and $N = 11$?
2.1
No
2.2
End of explanation
-ss.norm.ppf(0.025)
Explanation: 2.3
End of explanation
1 - ss.poisson.cdf(7, mu=4.3)
Explanation: 2.4
$$
1.96 = \frac{\sqrt{N}\bar{x}}{\sigma}
$$
You should be $\frac{1.96}{\sqrt{N}}$ standard deviations away.
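As a quick illustration (not part of the original key), plugging in $N = 4$:
1.96 / np.sqrt(4)  # = 0.98 standard deviations away from the population mean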
2.5
End of explanation
import math
0.4 / math.sqrt(11)
Explanation: 2.6
End of explanation
p = ss.wilcoxon([181, 182, 181, 182, 182, 183, 185], [180, 179, 184, 179, 180, 183, 180])
print(p[1])
Explanation: 3. Choose the hypothesis test
State which hypothesis test best fits the example below and state the null hypothesis. You can justify your answer if you feel like multiple tests fit.
You know that coffee should be brewed at 186 $^\circ{}$F. You measure coffee from Starbucks 10 times over a week and want to know if they're brewing at the correct temperature.
You believe that the real estate market in SF is the same as NYC. You gather 100 home prices from both markets to compare them.
Australia banned most guns in 2002. You compare homicide rates before and after this date.
A number of states have recently legalized recreational marijuana. You gather teen drug use data for the year prior and two years after the legislation took effect.
You think your mail is being stolen. You know that you typically get five pieces of mail on Wednesdays, but this Wednesday you got no mail.
3.1
t-test
Null: The coffee is brewed at the correct temperature.
3.2
Wilcoxon Sum of Ranks
The real estate prices in SF and NYC are from the same distribution.
3.3
Wilcoxon Sum of Ranks
The homicide rates before and after the date are from the same distribution
3.4
Wilcoxon Signed Ranks
The teen drug use data for the year prior and for two years after the legislation are from the same distribution
3.5
Poisson
Your mail is not being stolen
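As an illustration of how 3.5 could be computed (a sketch, not part of the original key), the lower-tail Poisson p-value for receiving zero pieces of mail when five are expected is:
ss.poisson.cdf(0, mu=5)  # ~0.0067, so at alpha = 0.05 you would reject the null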
4. Hypothesis Tests
Do the following:
[1 Point] State the test type
[1 Point] State the null hypothesis
[2 Points] State the p-value
[1 Point] State if you accept/reject the null hypothesis
[1 Point] Answer the question
You have heard an urban legend that you are taller in the morning. Using the height measurements in centimeters below, answer the question
|Morning | Evening|
|:---|----:|
| 181 | 180 |
| 182 | 179 |
| 181 | 184 |
| 182 | 179 |
| 182 | 180 |
| 183 | 183 |
| 185 | 180 |
On a typical day in Rochester, there are 11 major car accidents. On the Monday after daylight savings time in the Spring, there are 18 major car accidents. Is this significant?
Your cellphone bill is typically \$20. The last four have been \$21, \$30. \$25, \$23. Has it significantly changed?
4.1
Wilcoxon Signed Rank Test
The two heights are from the same distribution
0.17
Cannot reject
No evidence for a difference in heights
End of explanation
1 - ss.poisson.cdf(17, mu=11)
Explanation: 4.2
Poisson
The number of accidents is from the population distribution
0.032
Reject
Yes, there is a significant difference
End of explanation
import numpy as np
data = [21, 30, 25, 23]
se = np.std(data, ddof=1) / np.sqrt(len(data))
T = (np.mean(data) - 20) / se
ss.t.cdf(-abs(T), df=len(data) - 1) * 2
Explanation: 4.3
t-test
The new bills are from the population distribution of previous bills
0.09
Do not reject
No, the new bill is not significantly different
End of explanation
1 - ss.expon.cdf(16, scale=10)
Explanation: 5. Exponential Test (5 Bonus Points)
Your dog typically greets you within 10 seconds of coming home. Is it significant that your dog took 16 seconds?
End of explanation |
15,781 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
pyphysio tutorial
1. Signals
The class Signal together with the class Algorithm are the two main classes in pyphysio.
In this first tutorial we will see how the class Signal can be used to facilitate the management and processing of signals.
A signal is an ordered vector of timestamp-value pairs, where the timestamp is the instant at which the measured phenomenon had that value.
In pyphysio a signal is represented by the class Signal which extends the numpy.ndarray class.
In this part we will see the different types of signals that can be defined and their properties.
We start importing the packages needed for the tutorial
Step1: And then we import two classes of pyphysio
Step2: 1.1 EvenlySignal
When the signal is sampled with a fixed sampling frequency it is sufficient to know the timestamp at which the acquisition started and the sampling frequency (assumed to be constant) to reconstruct the timestamp of each sample. This type of signal is represented by the class EvenlySignal.
Therefore to create an instance of EvenlySignal these are the input attributes needed
Step3: 1.1.2 Plotting a signal
A shortcut is provided in pyphysio to plot a signal, using the matplotlib library
Step4: 1.3 Working with physiological signals
In this second example we import the sample data included in pyphysio to show how the EvenlySignal class can be used to represent physiological signals
Step5: The imported values can be used to create two new signals of the EvenlySignal class.
Note that we set different starting times for the ecg and the eda signal
Step6: In the following plot note that the EDA signal starts 10 seconds before the ECG signal.
Using the start_time parameter it is therefore possible to manually synchronize multiple signals.
Step7: 1.4 Managing the sampling frequency
The sampling frequency of a signal is defined before the acquisition. However it is possible to numerically change it in order to oversample or downsample the signal, according to the signal type and characteristics.
Note in the plot below the effect of downsampling the ECG.
Step8: 1.2 UnevenlySignal
Other types of signals, for instance triggers indicating occurrences of heartbeats or events, are series of samples which are not equally temporally spaced. Thus the sampling frequency is not fixed and it is necessary to store the timestamp of each sample. This type of signals is represented by the class UnevenlySignal.
Therefore to create an instance of UnevenlySignal additional input attributes are needed
Step9: Create an UnevenlySignal object providing the indices
Step10: Create an UnevenlySignal object providing the instants
Step11: Note in the following plot that the interval between the last two samples is different from all the others
Step12: 1.2.2 From UnevenlySignal to EvenlySignal
It is possible to obtain an EvenlySignal from an UnevenlySignal by interpolation, using the method to_evenly of the class UnevenlySignal
Step13: Note how the interval between the last two samples has been interpolated in the EvenlySignal version (blue) of the original signal (yellow)
Step14: 1.3 Segmentation of signals
Two general class functions are provided to segment a signal | Python Code:
# import packages
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: pyphysio tutorial
1. Signals
The class Signal together with the class Algorithm are the two main classes in pyphysio.
In this first tutorial we will see how the class Signal can be used to facilitate the management and processing of signals.
A signal is an ordered vector of timestamp-value pairs, where the timestamp is the instant at which the measured phenomenon had that value.
In pyphysio a signal is represented by the class Signal which extends the numpy.ndarray class.
In this part we will see the different types of signals that can be defined and their properties.
We start importing the packages needed for the tutorial:
End of explanation
# import the Signal classes
from pyphysio import EvenlySignal, UnevenlySignal
Explanation: And then we import two classes of pyphysio: EvenlySignal and UnevenlySignal, both subclasses of the abstract class Signal:
End of explanation
# create a signal
## create fake data
np.random.seed(4)
signal_values = np.random.uniform(0, 1, size = 1000)
## set the sampling frequency
fsamp = 100 # Hz
## set the starting time
tstart = 100 # s
## create the Evenly signal
s_fake = EvenlySignal(values = signal_values, sampling_freq = fsamp, signal_type = 'fake', start_time = tstart)
# check signal properties
print('Sampling frequency: {}'.format( s_fake.get_sampling_freq() ))
print('Start time: {}'.format( s_fake.get_start_time() ))
print('End time: {}'.format( s_fake.get_end_time() ))
print('Duration: {}'.format( s_fake.get_duration() ))
print('Signal type : {}'.format( s_fake.get_signal_type() ))
print('First ten instants: {}'.format( s_fake.get_times()[0:10] ))
Explanation: 1.1 EvenlySignal
When the signal is sampled with a fixed sampling frequency it is sufficient to know the timestamp at which the acquisition started and the sampling frequency (assumed to be constant) to reconstruct the timestamp of each sample. This type of signal is represented by the class EvenlySignal.
Therefore to create an instance of EvenlySignal these are the input attributes needed:
* values : (unidimensional numpy array) values of the signal;
* sampling_freq : (float>0) sampling frequency;
* start_time : (float) temporal reference of the start of the signal. This is optional; if omitted it will be set to 0;
* signal_type : (string) identifier of the type of the signal. In future releases of pyphysio it will be used to check the appropriateness of the algorithms applied to the signal. For now it is optional and if omitted it will be set to ''.
Class functions are provided to facilitate the management and processing of signals. For instance:
* get_...() and set_...() type functions can be used to check/set signal attributes;
* plot() will plot the signal using matplotlib;
* segment_time(t_start, t_stop) and segment_idx(idx_start, idx_stop) can be used to extract a portion of the signal;
* resample(fout) can be used to change the sampling frequency.
1.1.1 Creation of a EvenlySignal object
In the following we generate a fake EvenlySignal using random generated numbers. Then we will use the methods provided by the class to inspect the signal characteristics:
End of explanation
## plot
s_fake.plot()
Explanation: 1.1.2 Plotting a signal
A shortcut is provided in pyphysio to plot a signal, using the matplotlib library:
End of explanation
import pyphysio as ph
# import data from included examples
from pyphysio.tests import TestData
ecg_data = TestData.ecg()
eda_data = TestData.eda()
Explanation: 1.3 Working with physiological signals
In this second example we import the sample data included in pyphysio to show how the EvenlySignal class can be used to represent physiological signals:
End of explanation
# create two signals
fsamp = 2048
tstart_ecg = 15
tstart_eda = 5
ecg = EvenlySignal(values = ecg_data,
sampling_freq = fsamp,
signal_type = 'ecg',
start_time = tstart_ecg)
eda = EvenlySignal(values = eda_data,
sampling_freq = fsamp,
signal_type = 'eda',
start_time = tstart_eda)
Explanation: The imported values can be used to create two new signals of the EvenlySignal class.
Note that we set different starting times for the ecg and the eda signal:
End of explanation
# plot
ax1 = plt.subplot(211)
ecg.plot()
plt.subplot(212, sharex=ax1)
eda.plot()
# check signal properties
print('ECG')
print('Sampling frequency: {}'.format( ecg.get_sampling_freq() ))
print('Start time: {}'.format( ecg.get_start_time() ))
print('End time: {}'.format( ecg.get_end_time() ))
print('Duration: {}'.format( ecg.get_duration() ))
print('Signal type: {}'.format( ecg.get_signal_type() ))
print('First ten instants: {}'.format( ecg.get_times()[0:10] ))
print('')
print('EDA')
print('Sampling frequency: {}'.format( eda.get_sampling_freq() ))
print('Start time: {}'.format( eda.get_start_time() ))
print('End time: {}'.format( eda.get_end_time() ))
print('Duration: {}'.format( eda.get_duration() ))
print('Signal type : {}'.format( eda.get_signal_type() ))
print('First ten instants: {}'.format( eda.get_times()[0:10] ))
Explanation: In the following plot note that the EDA signal starts 10 seconds before the ECG signal.
Using the start_time parameter it is therefore possible to manually synchronize multiple signals.
End of explanation
# resampling
ecg_128 = ecg.resample(fout=128)
ecg.plot() # plotting the original signal
ecg_128.plot('.') # plotting the samples of the downsampled signal
plt.xlim((40,42)) # setting the range of the x axis between 40 and 42 seconds
Explanation: 1.4 Managing the sampling frequency
The sampling frequency of a signal is defined before the acquisition. However it is possible to numerically change it in order to oversample or downsample the signal, according to the signal type and characteristics.
Note in the plot below the effect of downsampling the ECG.
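Oversampling works in the same way through resample() (a sketch; depending on the pyphysio version an interpolation kind may need to be specified):
ecg_4096 = ecg.resample(fout=4096)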
End of explanation
## create fake data
signal_values = np.arange(100)
Explanation: 1.2 UnevenlySignal
Other types of signals, for instance triggers indicating occurrences of heartbeats or events, are series of samples which are not equally temporally spaced. Thus the sampling frequency is not fixed and it is necessary to store the timestamp of each sample. This type of signal is represented by the class UnevenlySignal.
Therefore to create an instance of UnevenlySignal additional input attributes are needed:
* x_values : (unidimensional numpy array) information about the temporal position of each sample. It should be of the same size as values;
* x_type : ('instants' or 'indices') indicate what type of x_values have been used.
Two ways are allowed to define an UnevenlySignal:
1. by defining the indexes (x_type='indices'): x_values are indices of an array and the instants are automatically computed using the information from the sampling_frequency and the start_time.
2. by defining the instants (x_type='instants'): x_values are instants and the indices are automatically computed using the information from the sampling_frequency and the start_time.
As a general rule, the start_time is always associated with the index 0.
An additional class function is provided to transform an UnevenlySignal to an EvenlySignal:
* to_evenly() creates an EvenlySignal by interpolating the signal at the given sampling frequency.
1.2.1 Creating an UnevenlySignal object
In the following we generate a fake UnevenlySignal using random generated numbers.
We will use two methods to provide the temporal information about each sample:
1. by providing the information about the indices;
2. by providing the information about the instants.
Then we will use the provided methods to inspect the signal characteristics:
End of explanation
## create fake indices
idx = np.arange(100)
idx[-1] = 125
## set the sampling frequency
fsamp = 10 # Hz
## set the starting time
tstart = 10 # s
## create an Unevenly signal defining the indices
x_values_idx = idx
s_fake_idx = UnevenlySignal(values = signal_values,
sampling_freq = fsamp,
signal_type = 'fake',
start_time = tstart,
x_values = x_values_idx,
x_type = 'indices')
Explanation: Create an UnevenlySignal object providing the indices:
End of explanation
## create an Unevenly signal defining the instants
x_values_time = np.arange(100)/fsamp
x_values_time[-1] = 12.5
x_values_time += 10
## set the starting time
tstart = 0
s_fake_time = UnevenlySignal(values = signal_values,
sampling_freq = fsamp,
signal_type = 'fake',
start_time = tstart,
x_values = x_values_time,
x_type = 'instants')
Explanation: Create an UnevenlySignal object providing the instants:
End of explanation
#plot
ax1=plt.subplot(211)
s_fake_idx.plot('.-')
plt.subplot(212, sharex=ax1)
s_fake_time.plot('.-')
# note that the times are the same but not the starting_time nor the indices:
# check samples instants
print('Instants:')
print(s_fake_idx.get_times())
print(s_fake_time.get_times())
# check samples indices
print('Indices:')
print(s_fake_idx.get_indices())
print(s_fake_time.get_indices())
# check start_time
print('Start time:')
print(s_fake_idx.get_start_time())
print(s_fake_time.get_start_time())
# check signal properties
print('Defined by Indices')
print('Sampling frequency: {}'.format( s_fake_idx.get_sampling_freq() ))
print('Start time: {}'.format( s_fake_idx.get_start_time() ))
print('End time: {}'.format( s_fake_idx.get_end_time() ))
print('Duration: {}'.format( s_fake_idx.get_duration() ))
print('Signal type: {}'.format( s_fake_idx.get_signal_type() ))
print('First ten instants: {}'.format( s_fake_idx.get_times()[0:10] ))
print('')
print('Defined by Instants')
print('Sampling frequency: {}'.format( s_fake_time.get_sampling_freq() ))
print('Start time: {}'.format( s_fake_time.get_start_time() ))
print('End time: {}'.format( s_fake_time.get_end_time() ))
print('Duration: {}'.format( s_fake_time.get_duration() ))
print('Signal type: {}'.format( s_fake_time.get_signal_type() ))
print('First ten instants: {}'.format( s_fake_time.get_times()[0:10] ))
Explanation: Note in the following plot that the interval between the last two samples is different from all the others:
End of explanation
# to_evenly
s_fake_time_evenly = s_fake_time.to_evenly(kind = 'linear')
Explanation: 1.2.2 From UnevenlySignal to EvenlySignal
It is possible to obtain an EvenlySignal from an UnevenlySignal by interpolation, using the method to_evenly of the class UnevenlySignal:
End of explanation
s_fake_time_evenly.plot('.-b')
s_fake_time.plot('.-y')
# check type
print(type(s_fake_time_evenly))
print(type(s_fake_time))
Explanation: Note how the interval between the last two samples has been interpolated in the EvenlySignal version (blue) of the original signal (yellow):
End of explanation
# segmentation of ES
ecg_segment = ecg.segment_time(45, 54)
eda_segment = eda.segment_time(45, 54)
# plot
ax1 = plt.subplot(211)
ecg.plot()
ecg_segment.plot('r')
plt.subplot(212, sharex=ax1)
eda.plot()
eda_segment.plot('r')
print(ecg_segment.get_start_time())
# segmentation of US
s_fake_idx_segment = s_fake_idx.segment_time(10.5, 18)
s_fake_time_segment = s_fake_time.segment_time(10.5, 18)
# plot
ax1 = plt.subplot(211)
s_fake_idx.plot('.-')
s_fake_idx_segment.plot('.-r')
plt.subplot(212, sharex=ax1)
s_fake_time.plot('.-')
s_fake_time_segment.plot('.-r')
print(s_fake_time_segment.get_start_time())
Explanation: 1.3 Segmentation of signals
Two general class functions are provided to segment a signal:
1. segment_time(t_start, t_stop) is used to extract a portion of the signal between the instants t_start and
t_stop;
2. segment_idx(idx_start, idx_stop) is used to extract a portion of the signal between the indices idx_start and idx_stop.
The output signal will inherit sampling_freq and signal_type, but the start_time will be set to t_start or to the instant corresponding to idx_start, according to the method used.
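For completeness, a short sketch of the index-based variant, using the ECG signal created above (sampled at 2048 Hz):
# extract the first second of the ECG by index: samples 0 to 2047
ecg_first_second = ecg.segment_idx(0, 2048)
ecg_first_second.plot()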
End of explanation |
15,782 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
<a href="https
Step1: KL and non overlapping distributions
non overlapping distributions (visual)
explain ratio will be infinity - integral
move the distributions closer and they will not have signal
Step2: Approximation of the ratio using the f-gan approach
Step3: Gradients
In order to see why the learned density ratio has useful properties for learning, we can plot the gradients of the learned density ratio across the input space
Step4: Wasserstein distance for the same two distributions
Computing the Wasserstein critic in 1 dimension. Reminder that the Wasserstein distance is defined as
Step5: MMD computation
The MMD is an IPM defined as | Python Code:
import jax
import random
import numpy as np
import jax.numpy as jnp
import seaborn as sns
import matplotlib.pyplot as plt
import scipy
!pip install -qq dm-haiku
!pip install -qq optax
try:
import haiku as hk
except ModuleNotFoundError:
%pip install -qq haiku
import haiku as hk
try:
import optax
except ModuleNotFoundError:
%pip install -qq optax
import optax
sns.set(rc={"lines.linewidth": 2.8}, font_scale=2)
sns.set_style("whitegrid")
Explanation: <a href="https://colab.research.google.com/github/probml/probml-notebooks/blob/main/notebooks/IPM_divergences.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Critics in IPMs variational bounds on $f$-divergences
Author: Mihaela Rosca
This colab uses a simple example (two 1-d distributions) to show what the critics of various IPMs (Wasserstein distance and MMD) look like. We also look at how smooth estimators (neural nets) can estimate density ratios which are not smooth, and how that can be useful in providing a good learning signal for a model.
End of explanation
import scipy.stats
from scipy.stats import truncnorm
from scipy.stats import beta
# We allow a displacement from 0 of the beta distribution.
class TranslatedBeta:
def __init__(self, a, b, expand_dims=False, displacement=0):
self._a = a
self._b = b
self.expand_dims = expand_dims
self.displacement = displacement
def rvs(self, size):
val = beta.rvs(self._a, self._b, size=size) + self.displacement
return np.expand_dims(val, axis=1) if self.expand_dims else val
def pdf(self, x):
return beta.pdf(x - self.displacement, self._a, self._b)
p_param1 = 3
p_param2 = 5
q_param1 = 2
q_param2 = 3
start_p = 0
start_r = 1
start_q = 2
p_dist = TranslatedBeta(p_param1, p_param2, displacement=start_p)
q_dist = TranslatedBeta(q_param1, q_param2, displacement=start_q)
r_dist = TranslatedBeta(q_param1, q_param2, displacement=start_r)
plt.figure(figsize=(14, 10))
p_x_samples = p_dist.rvs(size=15)
q_x_samples = q_dist.rvs(size=15)
p_linspace_x = np.linspace(start_p, start_p + 1, 100)
p_x_pdfs = p_dist.pdf(p_linspace_x)
q_linspace_x = np.linspace(start_q, start_q + 1, 100)
q_x_pdfs = q_dist.pdf(q_linspace_x)
plt.plot(p_linspace_x, p_x_pdfs, "b", label=r"$p_1(x)$")
plt.plot(p_x_samples, [0] * len(p_x_samples), "bo", ms=10)
plt.plot(q_linspace_x, q_x_pdfs, "r", label=r"$p_2(x)$")
plt.plot(q_x_samples, [0] * len(q_x_samples), "rd", ms=10)
plt.ylim(-0.5, 2.7)
plt.xlim(-0.2, 3.5)
plt.axis("off")
plt.legend()
plt.xticks([])
plt.yticks([])
plt.figure(figsize=(14, 8))
local_start_p = 0
local_start_r = 1.2
local_start_q = 2.4
local_p_dist = TranslatedBeta(p_param1, p_param2, displacement=local_start_p)
local_q_dist = TranslatedBeta(q_param1, q_param2, displacement=local_start_q)
local_r_dist = TranslatedBeta(q_param1, q_param2, displacement=local_start_r)
p_linspace_x = np.linspace(local_start_p, local_start_p + 1, 100)
q_linspace_x = np.linspace(local_start_q, local_start_q + 1, 100)
r_linspace_x = np.linspace(local_start_r, local_start_r + 1, 100)
p_x_pdfs = local_p_dist.pdf(p_linspace_x)
q_x_pdfs = local_q_dist.pdf(q_linspace_x)
r_x_pdfs = local_r_dist.pdf(r_linspace_x)
plt.plot(p_linspace_x, p_x_pdfs, "b")
plt.plot(q_linspace_x, q_x_pdfs, "r")
plt.plot(r_linspace_x, r_x_pdfs, "g")
num_samples = 15
plt.plot(local_p_dist.rvs(size=num_samples), [0] * num_samples, "bo", ms=10, label=r"$p^*$")
plt.plot(local_q_dist.rvs(size=num_samples), [0] * num_samples, "rd", ms=10, label=r"$q(\theta_1)$")
plt.plot(local_r_dist.rvs(size=num_samples), [0] * num_samples, "gd", ms=10, label=r"$q(\theta_2)$")
plt.ylim(-0.5, 2.7)
plt.xlim(-0.2, 3.5)
plt.axis("off")
plt.legend(framealpha=0)
plt.xticks([])
plt.yticks([])
Explanation: KL and non overlapping distributions
non-overlapping distributions (visualised in the plots)
the density ratio is infinite wherever the supports do not overlap, so the KL integral diverges
moving the distributions closer together (while they still do not overlap) does not change this, so there is still no learning signal
End of explanation
model_transform = hk.without_apply_rng(
hk.transform(
lambda *args, **kwargs: hk.Sequential(
[hk.Linear(10), jax.nn.relu, hk.Linear(10), jax.nn.tanh, hk.Linear(40), hk.Linear(1)]
)(*args, **kwargs)
)
)
BATCH_SIZE = 100
NUM_UPDATES = 1000
dist1 = TranslatedBeta(p_param1, p_param2, expand_dims=True, displacement=start_p)
dist2 = TranslatedBeta(q_param1, q_param2, expand_dims=True, displacement=start_q)
@jax.jit
def estimate_kl(params, dist1_batch, dist2_batch):
dist1_logits = model_transform.apply(params, dist1_batch)
dist2_logits = model_transform.apply(params, dist2_batch)
return jnp.mean(dist1_logits - jnp.exp(dist2_logits - 1))
def update(params, opt_state, dist1_batch, dist2_batch):
model_loss = lambda *args: -estimate_kl(*args)
loss, grads = jax.value_and_grad(model_loss, has_aux=False)(params, dist1_batch, dist2_batch)
params_update, new_opt_state = optim.update(grads, opt_state, params)
new_params = optax.apply_updates(params, params_update)
return loss, new_params, new_opt_state
NUM_UPDATES = 200
rng = jax.random.PRNGKey(1)
init_model_params = model_transform.init(rng, dist1.rvs(BATCH_SIZE))
params = init_model_params
optim = optax.adam(learning_rate=0.0005, b1=0.9, b2=0.999)
opt_state = optim.init(init_model_params)
for i in range(NUM_UPDATES):
# Get a new batch of data
x = dist1.rvs(BATCH_SIZE)
y = dist2.rvs(BATCH_SIZE)
loss, params, opt_state = update(params, opt_state, x, y)
if i % 50 == 0:
print("Loss at {}".format(i))
print(loss)
plotting_x = np.expand_dims(np.linspace(-1.0, 3.5, 100), axis=1)
# TODO: how do you get the ratio values from the estimate - need to check the fgan paper
ratio_values = model_transform.apply(params, plotting_x)
# ratio_values = 1 + np.log(model_transform.apply(params, plotting_x))
plt.figure(figsize=(14, 8))
p_linspace_x = np.linspace(start_p, start_p + 1, 100)
q_linspace_x = np.linspace(start_q, start_q + 1, 100)
plt.plot(p_linspace_x, p_x_pdfs, "b", label=r"$p^*$")
plt.plot(p_x_samples, [0] * len(p_x_samples), color="b", marker=10, linestyle="None", ms=18)
plt.plot(q_linspace_x, q_x_pdfs, "g", label=r"$q(\theta)$")
plt.plot(q_x_samples, [0] * len(q_x_samples), color="g", marker=11, linestyle="None", ms=18)
x = np.linspace(-1, 3.5, 200)
ratio = p_dist.pdf(x) / q_dist.pdf(x)
plt.hlines(6.1, -0.6, start_q, linestyles="--", color="r")
plt.hlines(6.1, start_q + 1, 3.5, linestyles="--", color="r")
plt.text(3.4, 5.6, r"$\infty$")
plt.plot(x, ratio, "r", label=r"$\frac{p^*}{q(\theta)}$", linewidth=4)
plt.plot(
plotting_x, ratio_values[:, 0].T, color="darkgray", label=r"MLP approx to $\frac{p^*}{q(\theta)}$", linewidth=4
)
plt.ylim(-2.5, 8)
plt.xlim(-0.2, 3.5)
plt.axis("off")
plt.legend(loc="upper center", bbox_to_anchor=(0.35, 0.0, 0.25, 1.0), ncol=4, framealpha=0)
plt.xticks([])
plt.yticks([])
Explanation: Approximation of the ratio using the f-gan approach
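For reference, the objective maximised by estimate_kl above is the f-gan variational lower bound on the KL divergence: for any critic $T$,
$$
KL(p_1 \| p_2) \ge E_{p_1(x)}[T(x)] - E_{p_2(x)}[e^{T(x)-1}],
$$
with equality at $T^{\star}(x) = 1 + \log \frac{p_1(x)}{p_2(x)}$. The learned $T_{\phi}(x)$ therefore carries the density-ratio information (the ratio itself would be recovered as $e^{T_{\phi}(x)-1}$).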
End of explanation
plt.figure(figsize=(14, 8))
grad_fn = jax.grad(lambda x: model_transform.apply(params, x)[0])
grad_values = jax.vmap(grad_fn)(plotting_x)
plt.figure(figsize=(14, 8))
p_linspace_x = np.linspace(start_p, start_p + 1, 100)
q_linspace_x = np.linspace(start_q, start_q + 1, 100)
plt.plot(p_linspace_x, p_x_pdfs, "b", label=r"$p^*$")
plt.plot(p_x_samples, [0] * len(p_x_samples), color="b", marker=10, linestyle="None", ms=18)
plt.plot(q_linspace_x, q_x_pdfs, "g", label=r"$q(\theta)$")
plt.plot(q_x_samples, [0] * len(q_x_samples), color="g", marker=11, linestyle="None", ms=18)
x = np.linspace(-1, 3.5, 200)
ratio = p_dist.pdf(x) / q_dist.pdf(x)
plt.hlines(5.8, -0.6, start_q, linestyles="--", color="r")
plt.hlines(5.8, start_q + 1, 3.5, linestyles="--", color="r")
plt.text(3.4, 5.4, r"$\infty$")
plt.plot(x, ratio, "r", label=r"$\frac{p^*}{q(\theta)}$", linewidth=4)
plt.plot(
plotting_x,
ratio_values[:, 0].T,
color="darkgray",
label=r"$f_{\phi}$ approximating $\frac{p^*}{q(\theta)}$",
linewidth=4,
)
plt.plot(plotting_x, grad_values[:, 0].T, color="orange", label=r"$\nabla_{x} f_{\phi}(x)$", linewidth=4, ls="-.")
plt.ylim(-2.5, 8)
plt.xlim(-0.2, 3.5)
plt.axis("off")
plt.legend(loc="upper center", bbox_to_anchor=(0.35, 0.0, 0.25, 1.0), ncol=4, framealpha=0)
plt.xticks([])
plt.yticks([])
Explanation: Gradients
In order to see why the learned density ratio has useful properties for learning, we can plot the gradients of the learned density ratio across the input space
End of explanation
from scipy.optimize import linprog
def get_W_witness_spectrum(p_samples, q_samples):
n = len(p_samples)
m = len(q_samples)
X = np.concatenate([p_samples, q_samples], axis=0)
## AG: repeat [-1/n] n times
c = np.array(n * [-1 / n] + m * [1 / m])
A_ub, b_ub = [], []
for i in range(n + m):
for j in range(n + m):
if i == j:
continue
z = np.zeros(n + m)
z[i] = 1
z[j] = -1
A_ub.append(z)
b_ub.append(np.abs(X[i] - X[j]))
## AG: Minimize: c^T * x
## Subject to: A_ub * x <= b_ub
res = linprog(c=c, A_ub=A_ub, b_ub=b_ub, method="simplex", options={"tol": 1e-5})
a = res["x"]
## AG: second argument xs to be passed into the internal
## function.
def witness_spectrum(x):
diff = np.abs(x - X[:, np.newaxis])
one = np.min(a[:, np.newaxis] + diff, axis=0)
two = np.max(a[:, np.newaxis] - diff, axis=0)
return one, two
return witness_spectrum
x = np.linspace(-1, 3.5, 100)
wass_estimate = get_W_witness_spectrum(p_x_samples + start_p, q_x_samples + start_q)(x)
wa, wb = wass_estimate
w = (wa + wb) / 2
w -= w.mean()
plt.figure(figsize=(14, 6))
display_offset = 0.8
plt.plot(p_linspace_x, display_offset + p_x_pdfs, "b", label=r"$p^*$")
plt.plot(p_x_samples, [display_offset] * len(p_x_samples), color="b", marker=10, linestyle="None", ms=18)
plt.plot(q_linspace_x, display_offset + q_x_pdfs, "g", label=r"$q(\theta)$")
plt.plot(q_x_samples, [display_offset] * len(q_x_samples), color="g", marker=11, linestyle="None", ms=18)
x = np.linspace(-1, 3.5, 100)
plt.plot(x, w + display_offset, "r", label=r"$f^{\star}$", linewidth=4)
plt.ylim(-2.5, 8)
plt.xlim(-0.2, 3.5)
plt.axis("off")
plt.legend(loc="upper center", bbox_to_anchor=(0.35, 0.0, 0.5, 1.34), ncol=3, framealpha=0)
plt.xticks([])
plt.yticks([])
Explanation: Wasserstein distance for the same two distributions
Computing the Wasserstein critic in 1 dimension. Reminder that the Wasserstein distance is defined as:
$$
W(p, q) = \sup_{\|f\|_{Lip} \le 1} E_p(x) f(x) - E_q(x) f(x)
$$
The code below finds the values of f evaluated at the samples of the two distributions. This vector is computed to maximise the empirical (Monte Carlo) estimate of the IPM:
$$
\frac{1}{n}\sum_{i=1}^n f(x_i) - \frac{1}{m}\sum_{j=1}^m f(y_j)
$$
where $x_i$ are samples from the first distribution, while $y_j$ are samples
from the second distribution. Since we want the function $f$ to be 1-Lipschitz,
an inequality constraint is added to ensure that for any two choices of samples
in the two distributions, $\forall x \in \{x_1, ..., x_n, y_1, ..., y_m\}, \forall y \in \{x_1, ..., x_n, y_1, ..., y_m\}$:
$$
f(x) - f(y) \le |x - y| \\
f(y) - f(x) \le |x - y|
$$
This maximisation needs to occur under the constraint that the function $f$
is 1-Lipschitz, which is ensured using the inequality constraints of the linear program.
Note: This approach does not scale to large datasets.
Thank you to Arthur Gretton and Dougal J Sutherland for this version of the code.
End of explanation
def covariance(kernel_fn, X, Y):
num_rows = len(X)
num_cols = len(Y)
K = np.zeros((num_rows, num_cols))
for i in range(num_rows):
for j in range(num_cols):
K[i, j] = kernel_fn(X[i], Y[j])
return K
def gaussian_kernel(x1, x2, gauss_var=0.1, height=2.2):
return height * np.exp(-np.linalg.norm(x1 - x2) ** 2 / gauss_var)
def evaluate_mmd_critic(p_samples, q_samples):
n = p_samples.shape[0]
m = q_samples.shape[0]
p_cov = covariance(gaussian_kernel, p_samples, p_samples)
print("indices")
print(np.diag_indices(n))
p_samples_norm = np.sum(p_cov) - np.sum(p_cov[np.diag_indices(n)])
p_samples_norm /= n * (n - 1)
q_cov = covariance(gaussian_kernel, q_samples, q_samples)
q_samples_norm = np.sum(q_cov) - np.sum(q_cov[np.diag_indices(m)])
q_samples_norm /= m * (m - 1)
p_q_cov = covariance(gaussian_kernel, p_samples, q_samples)
p_q_norm = np.sum(p_q_cov)
p_q_norm /= n * m
norm = p_samples_norm + q_samples_norm - 2 * p_q_norm
def critic(x):
p_val = np.mean([gaussian_kernel(x, y) for y in p_samples])
q_val = np.mean([gaussian_kernel(x, y) for y in q_samples])
return (p_val - q_val) / norm
return critic
critic_fn = evaluate_mmd_critic(p_x_samples, q_x_samples)
plt.figure(figsize=(14, 6))
display_offset = 0
plt.plot(p_linspace_x, display_offset + p_x_pdfs, "b", label=r"$p^*$")
plt.plot(p_x_samples, [display_offset] * len(p_x_samples), color="b", marker=10, linestyle="None", ms=18)
plt.plot(q_linspace_x, display_offset + q_x_pdfs, "g", label=r"$q(\theta)$")
plt.plot(q_x_samples, [display_offset] * len(q_x_samples), color="g", marker=11, linestyle="None", ms=18)
x = np.linspace(-1, 3.5, 100)
plt.plot(
start_p + x, np.array([critic_fn(x_val) for x_val in x]) + display_offset, "r", label=r"$f^{\star}$", linewidth=4
)
plt.ylim(-2.5, 8)
plt.xlim(-0.2, 3.5)
plt.axis("off")
plt.legend(loc="upper center", bbox_to_anchor=(0.35, 0.0, 0.5, 1.34), ncol=3, framealpha=0)
plt.xticks([])
plt.yticks([])
Explanation: MMD computation
The MMD is an IPM defined as:
$$
MMD(p, q) = \sup_{\|f\|_{\mathcal{H}} \le 1} E_p(x) f(x) - E_q(x) f(x)
$$
where $\mathcal{H}$ is a RKHS. Using the mean embedding operators in an RKHS, we can write:
$$
E_p(x) f(x) = \langle f, \mu_p \rangle \\
E_q(x) f(x) = \langle f, \mu_q \rangle
$$
replacing in the MMD:
$$
MMD(p, q) = \sup_{\|f\|_{\mathcal{H}} \le 1} \langle f, \mu_p - \mu_q \rangle
$$
which means that
$$
f = \frac{\mu_p - \mu_q}{\|\mu_p - \mu_q\|_{\mathcal{H}}}
$$
To obtain an estimate of $f$ evaluated at $x$ we use that:
$$
f(x) = \frac{\mathbb{E}_{p(y)} k(x, y) - \mathbb{E}_{q(y)} k(x, y)}{\|\mu_p - \mu_q\|_{\mathcal{H}}}
$$
to estimate $\|\mu_p - \mu_q\|^2_{\mathcal{H}}$ we use:
$$
\|\mu_p - \mu_q\|^2_{\mathcal{H}} = \langle \mu_p - \mu_q, \mu_p - \mu_q \rangle = \langle \mu_p, \mu_p \rangle + \langle \mu_q, \mu_q \rangle
- 2 \langle \mu_p, \mu_q \rangle
$$
To estimate the dot products, we use:
$$
\langle \mu_p, \mu_p \rangle = E_p(x) \mu_p(x) = E_p(x) \langle \mu_p, k(x, \cdot) \rangle = E_p(x) E_p(x') k(x, x')
$$
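Putting these together, the code above uses the standard unbiased estimator of the squared MMD,
$$
\widehat{MMD}^2(p, q) = \frac{1}{n(n-1)} \sum_{i \neq j} k(x_i, x_j) + \frac{1}{m(m-1)} \sum_{i \neq j} k(y_i, y_j) - \frac{2}{nm} \sum_{i,j} k(x_i, y_j),
$$
and evaluates the critic at a point $x$ as proportional to $\frac{1}{n}\sum_i k(x, x_i) - \frac{1}{m}\sum_j k(x, y_j)$.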
For more details see the slides here: http://www.gatsby.ucl.ac.uk/~gretton/coursefiles/lecture5_distribEmbed_1.pdf
End of explanation |
15,783 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
A Walk through PyStyl
Experiments in stylometry typically kick off with creating a corpus, or the collection of texts which we would like to compare. In pystyl, we use the Corpus class to represent such a text collection
Step1: Specifying a language such as 'en' (English) is optional. Adding texts from a directory to the corpus is easy
Step2: By default, this function assumes that all texts under this directory have been encoded in UTF-8 and that they have a .txt extension. Additionally, the syntax of the filename should be <category>_<title>.txt, where category is a label indicates e.g. a text's authorship, genre or date of composition. In our case, this directory looked like
Step3: Our corpus currently holds these 7 texts in their raw form
Step4: In stylometry, it is typical to preprocess your corpus and, let's say, remove punctuation and lowercase the texts. In pystyl, we achieve this via the preprocess() method, where the alpha_only parameter controls whether we only wish to keep alphabetic symbols
Step5: Now, the corpus is ready to be tokenized, which is helpful if we want to start counting words
Step6: The corpus now holds our texts in a tokenized form. Of course, the novels wildly vary in length. If we would like to split these into shorter segments of e.g. 10,000 words, we can use the segment() function.
Step7: In stylometry, it is common to manually remove certain words, such as personal pronouns, which are more strongly tied to narrative perspective than authorial writing style. To remove these from our English texts, we can do
Step8: As you can see, all personal pronouns have now been removed from our corpus segments. We are now ready to vectorize our corpus, which means that we will represent it as a large two-dimensional matrix in which each row represents one of our textual segments, and each individual feature (e.g. a function word frequency) is represented in a column.
Step9: As you can see, we have now included the 30 most common words in our corpus model (mfi stands for 'most frequent items'). These features are returned by the vectorize() method. Many other options are available; to extract the 50 most common character trigrams, for instance, you could run
Step10: A more fundamental issue is the vectorization model we select. By default, the vectorizer will create a simple term-frequency model, which means that we will record the relative frequencies of our most frequent items in each text. In stylometry, however, there exist many more models. PyStyl also supports the tf-idf model (term frequency-inverse document frequency), which is commonly used in information retrieval to assign more weight to lower-frequency items.
Step11: PyStyl also supports the std model which underpins Burrows's famous Delta method (and which is typically also a solid model for other applications)
Step12: Vectorization is a foundational issue in stylometry, since it very much controls how our analyses 'see' texts. Luckily, the vectorize() method comes with many options to control this process. With the following options, we can for install control the proportion of segments to control in how many segments a feature should minimally occur (a procedure also known as 'culling') | Python Code:
%load_ext autoreload
%autoreload 1
%matplotlib inline
from pystyl.corpus import Corpus
corpus = Corpus(language='en')
Explanation: A Walk through PyStyl
Experiments in stylometry typically kick off with creating a corpus, or the collection of texts which we would like to compare. In pystyl, we use the Corpus class to represent such a text collection:
End of explanation
corpus.add_directory(directory='data/dummy')
Explanation: Specifying a language such as 'en' (English) is optional. Adding texts from a directory to the corpus is easy:
End of explanation
ls data/dummy
Explanation: By default, this function assumes that all texts under this directory have been encoded in UTF-8 and that they have a .txt extension. Additionally, the syntax of the filename should be <category>_<title>.txt, where category is a label that indicates e.g. a text's authorship, genre or date of composition. In our case, this directory looked like:
End of explanation
print(corpus)
Explanation: Our corpus currently holds these 7 texts in their raw form:
End of explanation
corpus.preprocess(alpha_only=True, lowercase=True)
print(corpus)
Explanation: In stylometry, it is typical to preprocess your corpus and, let's say, remove punctuation and lowercase the texts. In pystyl, we achieve this via the preprocess() method, where the alpha_only parameter controls whether we only wish to keep alphabetic symbols:
End of explanation
corpus.tokenize()
print(corpus)
Explanation: Now, the corpus is ready to be tokenized, which is helpful if we want to start counting words:
End of explanation
corpus.segment(segment_size=20000)
print(corpus)
Explanation: The corpus now holds our texts in a tokenized form. Of course, the novels wildly vary in length. If we would like to split these into shorter segments of e.g. 20,000 words, we can use the segment() function.
End of explanation
corpus.remove_tokens(rm_pronouns=True)
print(corpus)
Explanation: In stylometry, it is common to manually remove certain words, such as personal pronouns, which are more strongly tied to narrative perspective than authorial writing style. To remove these from our English texts, we can do:
End of explanation
corpus.vectorize(mfi=100)
Explanation: As you can see, all personal pronouns have now been removed from our corpus segments. We are now ready to vectorize our corpus, which means that we will represent it as a large two-dimensional matrix in which each row represents one of our textual segments, and each individual feature (e.g. a function word frequency) is represented in a column.
End of explanation
corpus.vectorize(mfi=20, ngram_type='char', ngram_size=3)
Explanation: As you can see, we have now included the 100 most common words in our corpus model (mfi stands for 'most frequent items'). These features are returned by the vectorize() method. Many other options are available; to extract the 20 most common character trigrams, for instance, you could run:
End of explanation
corpus.vectorize(mfi=30, vector_space='tf_idf')
Explanation: A more fundamental issue is the vectorization model we select. By default, the vectorizer will create a simple term-frequency model, which means that we will record the relative frequencies of our most frequent items in each text. In stylometry, however, there exist many more models. PyStyl also supports the tf-idf model (term frequency-inverse document frequency), which is commonly used in information retrieval to assign more weight to lower-frequency items.
End of explanation
corpus.vectorize(mfi=30, vector_space='tf_std')
Explanation: PyStyl also supports the std model which underpins Burrows's famous Delta method (and which is typically also a solid model for other applications):
End of explanation
corpus.vectorize(mfi=30, min_df=0.80)
from pystyl.analysis import distance_matrix, hierarchical_clustering
from pystyl.visualization import scatterplot, scatterplot_3d
from pystyl.analysis import pca
pca_coor, pca_loadings = pca(corpus, nb_dimensions=2)
scatterplot(corpus, coor=pca_coor, nb_clusters=0, loadings=pca_loadings, plot_type='static',\
save=False, show=False, return_svg=False, outputfile="/Users/mike/Desktop/pca.pdf")
pca_coor, pca_loadings = pca(corpus, nb_dimensions=3)
scatterplot_3d(corpus, coor=pca_coor, outputfile="/Users/mike/Desktop/3d.pdf",\
save=True, show=False, return_svg=False)
from pystyl.analysis import distance_matrix
dm = distance_matrix(corpus, 'minmax')
from pystyl.visualization import clustermap
clustermap(corpus, distance_matrix=dm, fontsize=8, color_leafs=True,\
outputfile='/Users/mike/Desktop/cm.pdf',
show=False, save=False, return_svg=False)
from pystyl.analysis import hierarchical_clustering
cluster_tree = hierarchical_clustering(dm, linkage='ward')
from pystyl.visualization import scipy_dendrogram, ete_dendrogram
scipy_dendrogram(corpus=corpus, tree=cluster_tree, outputfile='~/Desktop/scipy_dendrogram.pdf',\
fontsize=5, color_leafs=True, show=False, save=False, return_svg=False)
ete_dendrogram(corpus=corpus, tree=cluster_tree, outputfile='~/Desktop/ete_dendrogram.png',\
fontsize=5, color_leafs=True, show=False, save=True, return_svg=False,
save_newick=False)
from IPython.display import Image
Image(filename='/Users/mike/Desktop/ete_dendrogram.png')
Explanation: Vectorization is a foundational issue in stylometry, since it very much controls how our analyses 'see' texts. Luckily, the vectorize() method comes with many options to control this process. With the following options, we can for instance control in how many segments a feature should minimally occur, expressed as a proportion of all segments (a procedure also known as 'culling'):
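As a final sketch (not part of the original walkthrough), these options can be combined freely, for instance Delta-style weighting together with culling:
corpus.vectorize(mfi=50, vector_space='tf_std', min_df=0.50)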
End of explanation |
15,784 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Let's Grow your Own Inner Core!
Choose a model in the list
Step1: Define the geodynamical model
Un-comment one of the models
Step2: Change the values of the parameters to get the model you want (here, parameters for .TranslationGrowthRotation())
Step3: Define a proxy type, and a proxy name (to be used in the figures to annotate the axes)
You can re-define it later if you want (or define another proxy_type2 if needed)
Step4: Parameters for the geodynamical model
This will input the different parameters in the model.
Step5: Different data set and visualisations
Perfect sampling at the equator (to visualise the flow lines)
You can add more points to get a better precision.
Step6: Perfect sampling in the first 100km (to visualise the depth evolution)
Step7: Random data set, in the first 100km - bottom turning point only
Calculate the data
Step8: Real Data set from Waszek paper | Python Code:
%matplotlib inline
# import statements
import numpy as np
import matplotlib.pyplot as plt #for figures
from mpl_toolkits.basemap import Basemap #to render maps
import math
import json #to write dict with parameters
from GrowYourIC import positions, geodyn, geodyn_trg, geodyn_static, plot_data, data
plt.rcParams['figure.figsize'] = (8.0, 3.0) #size of figures
cm = plt.cm.get_cmap('viridis')
cm2 = plt.cm.get_cmap('winter')
Explanation: Let's Grow your Own Inner Core!
Choose a model in the list:
- geodyn_trg.TranslationGrowthRotation()
- geodyn_static.Hemispheres()
Choose a proxy type:
- age
- position
- phi
- theta
- growth rate
set the parameters for the model : geodynModel.set_parameters(parameters)
set the units : geodynModel.define_units()
Choose a data set:
- data.SeismicFromFile(filename) # Lauren's data set
- data.RandomData(numbers_of_points)
- data.PerfectSamplingEquator(numbers_of_points)
organized on a cartesian grid. numbers_of_points is the number of points along the x or y axis. The total number of points is numbers_of_points**2*pi/4
- has a special plot function to show streamlines: plot_c_vec(self, modelgeodyn)
- data.PerfectSamplingEquatorRadial(Nr, Ntheta)
same as above, but organized on a polar grid, not a cartesian grid.
Extract the info:
- calculate the proxy value for all points of the data set: geodyn.evaluate_proxy(data_set, geodynModel)
- extract the positions as numpy arrays: extract_rtp or extract_xyz
- calculate other variables: positions.angular_distance_to_point(t,p, t_point, p_point)
End of explanation
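# Sketch of the basic workflow summarized above (kept as comments: the model object is
# only created in the next cell). The calls mirror the ones used further down.
# example_set = data.RandomData(30)
# example_set.method = "bt_point"
# example_proxy = geodyn.evaluate_proxy(example_set, geodynModel, proxy_type="age", verbose=False)
# r, t, p = example_set.extract_rtp("bottom_turning_point")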
## un-comment one of them
geodynModel = geodyn_trg.TranslationGrowthRotation() #can do all the models presented in the paper
# geodynModel = geodyn_static.Hemispheres() #this is a static model, only hemispheres.
Explanation: Define the geodynamical model
Un-comment one of the model
End of explanation
age_ic_dim = 1e9 #in years
rICB_dim = 1221. #in km
v_g_dim = rICB_dim/age_ic_dim # in km/years #growth rate
print("Growth rate is {:.2e} km/years".format(v_g_dim))
v_g_dim_seconds = v_g_dim*1e3/(np.pi*1e7)
translation_velocity_dim = 0.8*v_g_dim_seconds#4e-10 #0.8*v_g_dim_seconds#4e-10 #m.s, value for today's Earth with Q_cmb = 10TW (see Alboussiere et al. 2010)
time_translation = rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)
maxAge = 2.*time_translation/1e6
print("The translation recycles the inner core material in {0:.2e} million years".format(maxAge))
print("Translation velocity is {0:.2e} km/years".format(translation_velocity_dim*np.pi*1e7/1e3))
units = None #we give them already dimensionless parameters.
rICB = 1.
age_ic = 1.
omega = 0.#0.5*np.pi/200e6*age_ic_dim#0.5*np.pi #0. #0.5*np.pi/200e6*age_ic_dim# 0.#0.5*np.pi#0.#0.5*np.pi/200e6*age_ic_dim #0. #-0.5*np.pi # Rotation rates has to be in ]-np.pi, np.pi[
print("Rotation rate is {:.2e}".format(omega))
velocity_amplitude = translation_velocity_dim*age_ic_dim*np.pi*1e7/rICB_dim/1e3
velocity_center = [0., 100.]#center of the eastern hemisphere
velocity = geodyn_trg.translation_velocity(velocity_center, velocity_amplitude)
exponent_growth = 1.#0.1#1
print(v_g_dim, velocity_amplitude, omega/age_ic_dim*180/np.pi*1e6)
Explanation: Change the values of the parameters to get the model you want (here, parameters for .TranslationGrowthRotation())
End of explanation
proxy_type = "age"#"growth rate"
proxy_name = "age (Myears)" #growth rate (km/Myears)"
proxy_lim = [0, maxAge] #or None
#proxy_lim = None
fig_name = "figures/test_" #to name the figures
print(rICB, age_ic, velocity_amplitude, omega, exponent_growth, proxy_type)
print(velocity)
Explanation: Define a proxy type, and a proxy name (to be used in the figures to annotate the axes)
You can re-define it later if you want (or define another proxy_type2 if needed)
End of explanation
parameters = dict({'units': units,
'rICB': rICB,
'tau_ic':age_ic,
'vt': velocity,
'exponent_growth': exponent_growth,
'omega': omega,
'proxy_type': proxy_type})
geodynModel.set_parameters(parameters)
geodynModel.define_units()
param = parameters
param['vt'] = parameters['vt'].tolist() #for json serialization
# write file with parameters, readable with json, byt also human-readable
with open(fig_name+'parameters.json', 'w') as f:
json.dump(param, f)
print(parameters)
Explanation: Parameters for the geodynamical model
This will input the different parameters in the model.
End of explanation
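# Quick check (illustrative only): read back the parameter file written above.
with open(fig_name + 'parameters.json') as f:
    print(json.load(f))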
npoints = 10 #number of points in the x direction for the data set.
data_set = data.PerfectSamplingEquator(npoints, rICB = 1.)
data_set.method = "bt_point"
proxy = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type="age", verbose = False)
data_set.plot_c_vec(geodynModel, proxy=proxy, cm=cm, nameproxy="age (Myears)")
plt.savefig(fig_name+"equatorial_plot.pdf", bbox_inches='tight')
Explanation: Different data set and visualisations
Perfect sampling at the equator (to visualise the flow lines)
You can add more points to get a better precision.
End of explanation
data_meshgrid = data.Equator_upperpart(10,10)
data_meshgrid.method = "bt_point"
proxy_meshgrid = geodyn.evaluate_proxy(data_meshgrid, geodynModel, proxy_type=proxy_type, verbose = False)
#r, t, p = data_meshgrid.extract_rtp("bottom_turning_point")
fig3, ax3 = plt.subplots(figsize=(8, 2))
X, Y, Z = data_meshgrid.mesh_RPProxy(proxy_meshgrid)
sc = ax3.contourf(Y, rICB_dim*(1.-X), Z, 100, cmap=cm)
sc2 = ax3.contour(sc, levels=sc.levels[::15], colors = "k")
ax3.set_ylim(-0, 120)
fig3.gca().invert_yaxis()
ax3.set_xlim(-180,180)
cbar = fig3.colorbar(sc)
#cbar.set_clim(0, maxAge)
cbar.set_label(proxy_name)
ax3.set_xlabel("longitude")
ax3.set_ylabel("depth below ICB (km)")
plt.savefig(fig_name+"meshgrid.pdf", bbox_inches='tight')
npoints = 20 #number of points in the x direction for the data set.
data_set = data.PerfectSamplingSurface(npoints, rICB = 1., depth=0.01)
data_set.method = "bt_point"
proxy_surface = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type=proxy_type, verbose = False)
#r, t, p = data_set.extract_rtp("bottom_turning_point")
X, Y, Z = data_set.mesh_TPProxy(proxy_surface)
## map
m, fig = plot_data.setting_map()
y, x = m(Y, X)
sc = m.contourf(y, x, Z, 30, cmap=cm, zorder=2, edgecolors='none')
plt.title("Dataset: {},\n geodynamic model: {}".format(data_set.name, geodynModel.name))
cbar = plt.colorbar(sc)
cbar.set_label(proxy_name)
fig.savefig(fig_name+"map_surface.pdf", bbox_inches='tight')
Explanation: Perfect sampling in the first 100km (to visualise the depth evolution)
End of explanation
# random data set
data_set_random = data.RandomData(300)
data_set_random.method = "bt_point"
proxy_random = geodyn.evaluate_proxy(data_set_random, geodynModel, proxy_type=proxy_type, verbose=False)
data_path = "../GrowYourIC/data/"
geodynModel.data_path = data_path
if proxy_type == "age":
# ## domain size and Vp
proxy_random_size = geodyn.evaluate_proxy(data_set_random, geodynModel, proxy_type="domain_size", verbose=False)
proxy_random_dV = geodyn.evaluate_proxy(data_set_random, geodynModel, proxy_type="dV_V", verbose=False)
r, t, p = data_set_random.extract_rtp("bottom_turning_point")
dist = positions.angular_distance_to_point(t, p, *velocity_center)
## map
m, fig = plot_data.setting_map()
x, y = m(p, t)
sc = m.scatter(x, y, c=proxy_random,s=8, zorder=10, cmap=cm, edgecolors='none')
plt.title("Dataset: {},\n geodynamic model: {}".format(data_set_random.name, geodynModel.name))
cbar = plt.colorbar(sc)
cbar.set_label(proxy_name)
fig.savefig(fig_name+data_set_random.shortname+"_map.pdf", bbox_inches='tight')
## phi and distance plots
fig, ax = plt.subplots(2,2, figsize=(8.0, 5.0))
sc1 = ax[0,0].scatter(p, proxy_random, c=abs(t),s=3, cmap=cm2, vmin =-0, vmax =90, linewidth=0)
phi = np.linspace(-180,180, 50)
#analytic_equator = np.maximum(2*np.sin((phi-10)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)
#ax[0,0].plot(phi,analytic_equator, 'r', linewidth=2)
ax[0,0].set_xlabel("longitude")
ax[0,0].set_ylabel(proxy_name)
if proxy_lim is not None:
ax[0,0].set_ylim(proxy_lim)
sc2 = ax[0,1].scatter(dist, proxy_random, c=abs(t), cmap=cm2, vmin=-0, vmax =90, s=3, linewidth=0)
ax[0,1].set_xlabel("angular distance to ({}, {})".format(*velocity_center))
phi = np.linspace(-90,90, 100)
if proxy_type == "age":
analytic_equator = np.maximum(2*np.sin((phi-10)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)
ax[0,0].plot(phi,analytic_equator, 'r', linewidth=2)
analytic_equator = np.maximum(2*np.sin((-phi)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)
ax[0,1].plot(phi+90,analytic_equator, 'r', linewidth=2)
ax[0,1].set_xlim([0,180])
ax[0,0].set_xlim([-180,180])
cbar = fig.colorbar(sc1)
cbar.set_label("longitude: abs(theta)")
if proxy_lim is not None:
ax[0,1].set_ylim(proxy_lim)
## figure with domain size and Vp
if proxy_type == "age":
sc3 = ax[1,0].scatter(dist, proxy_random_size, c=abs(t), cmap=cm2, vmin =-0, vmax =90, s=3, linewidth=0)
ax[1,0].set_xlabel("angular distance to ({}, {})".format(*velocity_center))
ax[1,0].set_ylabel("domain size (m)")
ax[1,0].set_xlim([0,180])
ax[1,0].set_ylim([0, 2500.000])
sc4 = ax[1,1].scatter(dist, proxy_random_dV, c=abs(t), cmap=cm2, vmin=-0, vmax =90, s=3, linewidth=0)
ax[1,1].set_xlabel("angular distance to ({}, {})".format(*velocity_center))
ax[1,1].set_ylabel("dV/V")
ax[1,1].set_xlim([0,180])
ax[1,1].set_ylim([-0.017, -0.002])
fig.savefig(fig_name +data_set_random.shortname+ '_long_dist.pdf', bbox_inches='tight')
fig, ax = plt.subplots(figsize=(8, 2))
sc=ax.scatter(p,rICB_dim*(1.-r), c=proxy_random, s=10,cmap=cm, linewidth=0)
ax.set_ylim(-0,120)
fig.gca().invert_yaxis()
ax.set_xlim(-180,180)
cbar = fig.colorbar(sc)
if proxy_lim is not None:
cbar.set_clim(0, maxAge)
ax.set_xlabel("longitude")
ax.set_ylabel("depth below ICB (km)")
cbar.set_label(proxy_name)
fig.savefig(fig_name+data_set_random.shortname+"_depth.pdf", bbox_inches='tight')
Explanation: Random data set, in the first 100km - bottom turning point only
Calculate the data
End of explanation
## real data set
data_set = data.SeismicFromFile("../GrowYourIC/data/WD11.dat")
data_set.method = "bt_point"
proxy2 = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type=proxy_type, verbose=False)
if proxy_type == "age":
## domain size and DV/V
proxy_size = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type="domain_size", verbose=False)
proxy_dV = geodyn.evaluate_proxy(data_set, geodynModel, proxy_type="dV_V", verbose=False)
r, t, p = data_set.extract_rtp("bottom_turning_point")
dist = positions.angular_distance_to_point(t, p, *velocity_center)
## map
m, fig = plot_data.setting_map()
x, y = m(p, t)
sc = m.scatter(x, y, c=proxy2,s=8, zorder=10, cmap=cm, edgecolors='none')
plt.title("Dataset: {},\n geodynamic model: {}".format(data_set.name, geodynModel.name))
cbar = plt.colorbar(sc)
cbar.set_label(proxy_name)
fig.savefig(fig_name+data_set.shortname+"_map.pdf", bbox_inches='tight')
## phi and distance plots
fig, ax = plt.subplots(2,2, figsize=(8.0, 5.0))
sc1 = ax[0,0].scatter(p, proxy2, c=abs(t),s=3, cmap=cm2, vmin =-0, vmax =90, linewidth=0)
phi = np.linspace(-180,180, 50)
#analytic_equator = np.maximum(2*np.sin((phi-10)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)
#ax[0,0].plot(phi,analytic_equator, 'r', linewidth=2)
ax[0,0].set_xlabel("longitude")
ax[0,0].set_ylabel(proxy_name)
if proxy_lim is not None:
ax[0,0].set_ylim(proxy_lim)
sc2 = ax[0,1].scatter(dist, proxy2, c=abs(t), cmap=cm2, vmin=-0, vmax =90, s=3, linewidth=0)
ax[0,1].set_xlabel("angular distance to ({}, {})".format(*velocity_center))
phi = np.linspace(-90,90, 100)
if proxy_type == "age":
analytic_equator = np.maximum(2*np.sin((-phi)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)
ax[0,1].plot(phi+90,analytic_equator, 'r', linewidth=2)
analytic_equator = np.maximum(2*np.sin((phi-10)*np.pi/180.)*rICB_dim*1e3/translation_velocity_dim /(np.pi*1e7)/1e6,0.)
ax[0,0].plot(phi,analytic_equator, 'r', linewidth=2)
ax[0,1].set_xlim([0,180])
ax[0,0].set_xlim([-180,180])
cbar = fig.colorbar(sc1)
cbar.set_label("longitude: abs(theta)")
if proxy_lim is not None:
ax[0,1].set_ylim(proxy_lim)
## figure with domain size and Vp
if proxy_type == "age":
sc3 = ax[1,0].scatter(dist, proxy_size, c=abs(t), cmap=cm2, vmin =-0, vmax =90, s=3, linewidth=0)
ax[1,0].set_xlabel("angular distance to ({}, {})".format(*velocity_center))
ax[1,0].set_ylabel("domain size (m)")
ax[1,0].set_xlim([0,180])
ax[1,0].set_ylim([0, 2500.000])
sc4 = ax[1,1].scatter(dist, proxy_dV, c=abs(t), cmap=cm2, vmin=-0, vmax =90, s=3, linewidth=0)
ax[1,1].set_xlabel("angular distance to ({}, {})".format(*velocity_center))
ax[1,1].set_ylabel("dV/V")
ax[1,1].set_xlim([0,180])
ax[1,1].set_ylim([-0.017, -0.002])
fig.savefig(fig_name + data_set.shortname+'_long_dist.pdf', bbox_inches='tight')
fig, ax = plt.subplots(figsize=(8, 2))
sc=ax.scatter(p,rICB_dim*(1.-r), c=proxy2, s=10,cmap=cm, linewidth=0)
ax.set_ylim(-0,120)
fig.gca().invert_yaxis()
ax.set_xlim(-180,180)
cbar = fig.colorbar(sc)
if proxy_lim is not None:
cbar.set_clim(0, maxAge)
ax.set_xlabel("longitude")
ax.set_ylabel("depth below ICB (km)")
cbar.set_label(proxy_name)
fig.savefig(fig_name+data_set.shortname+"_depth.pdf", bbox_inches='tight')
Explanation: Real Data set from Waszek paper
End of explanation |
15,785 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Copyright 2020 The TensorFlow Authors.
Step1: 使用 Tensorflow Lattice 实现道德形状约束
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: 导入所需的软件包:
Step3: 本教程中使用的默认值:
Step4: 案例研究 1:法学院入学
在本教程的第一部分中,我们将考虑一个使用法学院招生委员会 (LSAC) 的 Law School Admissions 数据集的案例研究。我们将训练分类器利用以下两个特征来预测学生是否会通过考试:学生的 LSAT 分数和本科生的 GPA。
假设分类器的分数用于指导法学院的招生或奖学金评定。根据基于成绩的社会规范,我们预期具有更高 GPA 和更高 LSAT 分数的学生应当从分类器中获得更高的分数。但是,我们会观察到,模型很容易违反这些直观的规范,有时会惩罚 GPA 或 LSAT 分数较高的人员。
为了解决这种不公平的惩罚问题,我们可以施加单调性约束,这样在其他条件相同的情况下,模型永远不会惩罚更高的 GPA 或更高的 LSAT 分数。在本教程中,我们将展示如何使用 TFL 施加这些单调性约束。
加载法学院数据
Step5: 预处理数据集:
Step7: 将数据划分为训练/验证/测试集
Step8: 可视化数据分布
首先,我们可视化数据的分布。我们将为所有通过考试的学生以及所有未通过考试的学生绘制 GPA 和 LSAT 分数。
Step11: 训练校准线性模型以预测考试的通过情况
接下来,我们将通过 TFL 训练校准线性模型,以预测学生是否会通过考试。两个输入特征分别是 LSAT 分数和本科 GPA,而训练标签将是学生是否通过了考试。
我们首先在没有任何约束的情况下训练校准线性模型。然后,我们在具有单调性约束的情况下训练校准线性模型,并观察模型输出和准确率的差异。
用于训练 TFL 校准线性 Estimator 的辅助函数
下面这些函数将用于此法学院案例研究以及下面的信用违约案例研究。
Step14: 用于配置法学院数据集特征的辅助函数
下面这些辅助函数专用于法学院案例研究。
Step15: 用于可视化训练的模型输出的辅助函数
Step16: 训练无约束(非单调)的校准线性模型
Step17: 训练单调的校准线性模型
Step18: 训练其他无约束的模型
我们演示了可以将 TFL 校准线性模型训练成在 LSAT 分数和 GPA 上均单调,而不会牺牲过多的准确率。
但是,与其他类型的模型(如深度神经网络 (DNN) 或梯度提升树 (GBT))相比,校准线性模型表现如何?DNN 和 GBT 看起来会有公平合理的输出吗?为了解决这一问题,我们接下来将训练无约束的 DNN 和 GBT。实际上,我们将观察到 DNN 和 GBT 都很容易违反 LSAT 分数和本科生 GPA 中的单调性。
训练无约束的深度神经网络 (DNN) 模型
之前已对此架构进行了优化,可以实现较高的验证准确率。
Step19: 训练无约束的梯度提升树 (GBT) 模型
之前已对此树形结构进行了优化,可以实现较高的验证准确率。
Step20: 案例研究 2:信用违约
我们将在本教程中考虑的第二个案例研究是预测个人的信用违约概率。我们将使用 UCI 存储库中的 Default of Credit Card Clients 数据集。这些数据收集自 30,000 名中国台湾信用卡用户,并包含一个二元标签,用于标识用户是否在时间窗口内拖欠了付款。特征包括婚姻状况、性别、教育程度以及在 2005 年 4-9 月的每个月中,用户拖欠现有账单的时间有多长。
正如我们在第一个案例研究中所做的那样,我们再次阐明了使用单调性约束来避免不公平的惩罚:使用该模型来确定用户的信用评分时,在其他条件都相同的情况下,如果许多人因较早支付账单而受到惩罚,那么这对他们来说是不公平的。因此,我们应用了单调性约束,使模型不会惩罚提前付款。
加载信用违约数据
Step21: 将数据划分为训练/验证/测试集
Step22: 可视化数据分布
首先,我们可视化数据的分布。我们将为婚姻状况和还款状况不同的人绘制观察到的违约率的平均值和标准误差。还款状态表示一个人已偿还贷款的月数(截至 2005 年 4 月)。
Step25: 训练校准线性模型以预测信用违约率
接下来,我们将通过 TFL 训练校准线性模型,以预测某人是否会拖欠贷款。两个输入特征将是该人的婚姻状况以及该人截至 4 月已偿还贷款的月数(还款状态)。训练标签将是该人是否拖欠过贷款。
我们首先在没有任何约束的情况下训练校准线性模型。然后,我们在具有单调性约束的情况下训练校准线性模型,并观察模型输出和准确率的差异。
用于配置信用违约数据集特征的辅助函数
下面这些辅助函数专用于信用违约案例研究。
Step26: 用于可视化训练的模型输出的辅助函数
Step27: 训练无约束(非单调)的校准线性模型
Step28: 训练单调的校准线性模型 | Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
#@test {"skip": true}
!pip install tensorflow-lattice seaborn
Explanation: Shape Constraints for Ethics with Tensorflow Lattice
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://tensorflow.google.cn/lattice/tutorials/shape_constraints_for_ethics"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/lattice/tutorials/shape_constraints_for_ethics.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png"> Download notebook</a></td>
</table>
Overview
This tutorial demonstrates how to use the TensorFlow Lattice (TFL) library to train models that behave responsibly and do not violate certain ethical or fairness assumptions. In particular, we focus on using monotonicity constraints to avoid unfair penalization of certain attributes. This tutorial includes demonstrations of the experiments from the paper Deontological Ethics By Monotonicity Shape Constraints by Serena Wang and Maya Gupta, published at AISTATS 2020.
We will use TFL canned estimators on public datasets, but note that everything in this tutorial can also be done with models constructed from TFL Keras layers.
Before proceeding, make sure your runtime has all the required packages installed (as imported in the code cells below).
Setup
Install the TF Lattice package:
End of explanation
import tensorflow as tf
import logging
import matplotlib.pyplot as plt
import numpy as np
import os
import pandas as pd
import seaborn as sns
from sklearn.model_selection import train_test_split
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize)
Explanation: Import the required packages:
End of explanation
# List of learning rate hyperparameters to try.
# For a longer list of reasonable hyperparameters, try [0.001, 0.01, 0.1].
LEARNING_RATES = [0.01]
# Default number of training epochs and batch sizes.
NUM_EPOCHS = 1000
BATCH_SIZE = 1000
# Directory containing dataset files.
DATA_DIR = 'https://raw.githubusercontent.com/serenalwang/shape_constraints_for_ethics/master'
Explanation: Default values used in this tutorial:
End of explanation
# Load data file.
law_file_name = 'lsac.csv'
law_file_path = os.path.join(DATA_DIR, law_file_name)
raw_law_df = pd.read_csv(law_file_path, delimiter=',')
Explanation: Case study 1: Law school admissions
In the first part of this tutorial, we consider a case study that uses the Law School Admissions dataset from the Law School Admissions Council (LSAC). We train a classifier to predict whether a student will pass the bar exam using two features: the student's LSAT score and undergraduate GPA.
Suppose the classifier's score is used to guide law school admissions or scholarship decisions. According to merit-based social norms, we would expect students with higher GPAs and higher LSAT scores to receive higher scores from the classifier. However, we will observe that it is easy for models to violate these intuitive norms, and they sometimes penalize people with higher GPA or LSAT scores.
To address this unfair penalization, we can impose monotonicity constraints so that, all else being equal, the model never penalizes a higher GPA or a higher LSAT score. In this tutorial, we show how to impose these monotonicity constraints with TFL.
Load the law school data
End of explanation
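# Hedged sketch of the idea above: in TFL, monotonicity is declared per feature.
# The parameterized version actually used in this tutorial appears further below in
# get_feature_columns_and_configs_law(); the two lines here are illustrative only.
# tfl.configs.FeatureConfig(name='ugpa', monotonicity=1)  # never penalize a higher GPA
# tfl.configs.FeatureConfig(name='lsat', monotonicity=1)  # never penalize a higher LSAT score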
# Define label column name.
LAW_LABEL = 'pass_bar'
def preprocess_law_data(input_df):
# Drop rows where the label or features of interest are missing.
output_df = input_df[~input_df[LAW_LABEL].isna() & ~input_df['ugpa'].isna() &
(input_df['ugpa'] > 0) & ~input_df['lsat'].isna()]
return output_df
law_df = preprocess_law_data(raw_law_df)
Explanation: Preprocess the dataset:
End of explanation
def split_dataset(input_df, random_state=888):
Splits an input dataset into train, val, and test sets.
train_df, test_val_df = train_test_split(
input_df, test_size=0.3, random_state=random_state)
val_df, test_df = train_test_split(
test_val_df, test_size=0.66, random_state=random_state)
return train_df, val_df, test_df
law_train_df, law_val_df, law_test_df = split_dataset(law_df)
Explanation: Split the data into train/validation/test sets
End of explanation
def plot_dataset_contour(input_df, title):
plt.rcParams['font.family'] = ['serif']
g = sns.jointplot(
x='ugpa',
y='lsat',
data=input_df,
kind='kde',
xlim=[1.4, 4],
ylim=[0, 50])
g.plot_joint(plt.scatter, c='b', s=10, linewidth=1, marker='+')
g.ax_joint.collections[0].set_alpha(0)
g.set_axis_labels('Undergraduate GPA', 'LSAT score', fontsize=14)
g.fig.suptitle(title, fontsize=14)
# Adust plot so that the title fits.
plt.subplots_adjust(top=0.9)
plt.show()
law_df_pos = law_df[law_df[LAW_LABEL] == 1]
plot_dataset_contour(
law_df_pos, title='Distribution of students that passed the bar')
law_df_neg = law_df[law_df[LAW_LABEL] == 0]
plot_dataset_contour(
law_df_neg, title='Distribution of students that failed the bar')
Explanation: Visualize the data distribution
First, we visualize the distribution of the data. We plot the GPA and LSAT scores for all students who passed the bar exam and for all students who did not.
End of explanation
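# A quick complementary look at the label balance in the preprocessed data
# (illustrative only; the contour plots above show the joint feature distributions):
print(law_df[LAW_LABEL].value_counts(normalize=True))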
def train_tfl_estimator(train_df, monotonicity, learning_rate, num_epochs,
batch_size, get_input_fn,
get_feature_columns_and_configs):
Trains a TFL calibrated linear estimator.
Args:
train_df: pandas dataframe containing training data.
monotonicity: if 0, then no monotonicity constraints. If 1, then all
features are constrained to be monotonically increasing.
learning_rate: learning rate of Adam optimizer for gradient descent.
num_epochs: number of training epochs.
batch_size: batch size for each epoch. None means the batch size is the full
dataset size.
get_input_fn: function that returns the input_fn for a TF estimator.
get_feature_columns_and_configs: function that returns TFL feature columns
and configs.
Returns:
estimator: a trained TFL calibrated linear estimator.
feature_columns, feature_configs = get_feature_columns_and_configs(
monotonicity)
model_config = tfl.configs.CalibratedLinearConfig(
feature_configs=feature_configs, use_bias=False)
estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=get_input_fn(input_df=train_df, num_epochs=1),
optimizer=tf.keras.optimizers.Adam(learning_rate))
estimator.train(
input_fn=get_input_fn(
input_df=train_df, num_epochs=num_epochs, batch_size=batch_size))
return estimator
def optimize_learning_rates(
train_df,
val_df,
test_df,
monotonicity,
learning_rates,
num_epochs,
batch_size,
get_input_fn,
get_feature_columns_and_configs,
):
Optimizes learning rates for TFL estimators.
Args:
train_df: pandas dataframe containing training data.
val_df: pandas dataframe containing validation data.
test_df: pandas dataframe containing test data.
monotonicity: if 0, then no monotonicity constraints. If 1, then all
features are constrained to be monotonically increasing.
learning_rates: list of learning rates to try.
num_epochs: number of training epochs.
batch_size: batch size for each epoch. None means the batch size is the full
dataset size.
get_input_fn: function that returns the input_fn for a TF estimator.
get_feature_columns_and_configs: function that returns TFL feature columns
and configs.
Returns:
A single TFL estimator that achieved the best validation accuracy.
estimators = []
train_accuracies = []
val_accuracies = []
test_accuracies = []
for lr in learning_rates:
estimator = train_tfl_estimator(
train_df=train_df,
monotonicity=monotonicity,
learning_rate=lr,
num_epochs=num_epochs,
batch_size=batch_size,
get_input_fn=get_input_fn,
get_feature_columns_and_configs=get_feature_columns_and_configs)
estimators.append(estimator)
train_acc = estimator.evaluate(
input_fn=get_input_fn(train_df, num_epochs=1))['accuracy']
val_acc = estimator.evaluate(
input_fn=get_input_fn(val_df, num_epochs=1))['accuracy']
test_acc = estimator.evaluate(
input_fn=get_input_fn(test_df, num_epochs=1))['accuracy']
print('accuracies for learning rate %f: train: %f, val: %f, test: %f' %
(lr, train_acc, val_acc, test_acc))
train_accuracies.append(train_acc)
val_accuracies.append(val_acc)
test_accuracies.append(test_acc)
max_index = val_accuracies.index(max(val_accuracies))
return estimators[max_index]
Explanation: Train a calibrated linear model to predict bar exam passage
Next, we train a calibrated linear model with TFL to predict whether a student will pass the bar exam. The two input features are the LSAT score and undergraduate GPA, and the training label is whether the student passed the exam.
We first train a calibrated linear model without any constraints. Then we train a calibrated linear model with monotonicity constraints and observe the difference in model output and accuracy.
Helper functions for training a TFL calibrated linear estimator
These functions will be used for this law school case study as well as the credit default case study below.
End of explanation
def get_input_fn_law(input_df, num_epochs, batch_size=None):
Gets TF input_fn for law school models.
return tf.compat.v1.estimator.inputs.pandas_input_fn(
x=input_df[['ugpa', 'lsat']],
y=input_df['pass_bar'],
num_epochs=num_epochs,
batch_size=batch_size or len(input_df),
shuffle=False)
def get_feature_columns_and_configs_law(monotonicity):
Gets TFL feature configs for law school models.
feature_columns = [
tf.feature_column.numeric_column('ugpa'),
tf.feature_column.numeric_column('lsat'),
]
feature_configs = [
tfl.configs.FeatureConfig(
name='ugpa',
lattice_size=2,
pwl_calibration_num_keypoints=20,
monotonicity=monotonicity,
pwl_calibration_always_monotonic=False),
tfl.configs.FeatureConfig(
name='lsat',
lattice_size=2,
pwl_calibration_num_keypoints=20,
monotonicity=monotonicity,
pwl_calibration_always_monotonic=False),
]
return feature_columns, feature_configs
Explanation: Helper functions for configuring law school dataset features
These helper functions are specific to the law school case study.
End of explanation
def get_predicted_probabilities(estimator, input_df, get_input_fn):
predictions = estimator.predict(
input_fn=get_input_fn(input_df=input_df, num_epochs=1))
return [prediction['probabilities'][1] for prediction in predictions]
def plot_model_contour(estimator, input_df, num_keypoints=20):
x = np.linspace(min(input_df['ugpa']), max(input_df['ugpa']), num_keypoints)
y = np.linspace(min(input_df['lsat']), max(input_df['lsat']), num_keypoints)
x_grid, y_grid = np.meshgrid(x, y)
positions = np.vstack([x_grid.ravel(), y_grid.ravel()])
plot_df = pd.DataFrame(positions.T, columns=['ugpa', 'lsat'])
plot_df[LAW_LABEL] = np.ones(len(plot_df))
predictions = get_predicted_probabilities(
estimator=estimator, input_df=plot_df, get_input_fn=get_input_fn_law)
grid_predictions = np.reshape(predictions, x_grid.shape)
plt.rcParams['font.family'] = ['serif']
plt.contour(
x_grid,
y_grid,
grid_predictions,
colors=('k',),
levels=np.linspace(0, 1, 11))
plt.contourf(
x_grid,
y_grid,
grid_predictions,
cmap=plt.cm.bone,
levels=np.linspace(0, 1, 11)) # levels=np.linspace(0,1,8));
plt.xticks(fontsize=20)
plt.yticks(fontsize=20)
cbar = plt.colorbar()
cbar.ax.set_ylabel('Model score', fontsize=20)
cbar.ax.tick_params(labelsize=20)
plt.xlabel('Undergraduate GPA', fontsize=20)
plt.ylabel('LSAT score', fontsize=20)
Explanation: Helper functions for visualizing trained model outputs
End of explanation
nomon_linear_estimator = optimize_learning_rates(
train_df=law_train_df,
val_df=law_val_df,
test_df=law_test_df,
monotonicity=0,
learning_rates=LEARNING_RATES,
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
get_input_fn=get_input_fn_law,
get_feature_columns_and_configs=get_feature_columns_and_configs_law)
plot_model_contour(nomon_linear_estimator, input_df=law_df)
Explanation: Train an unconstrained (non-monotonic) calibrated linear model
End of explanation
mon_linear_estimator = optimize_learning_rates(
train_df=law_train_df,
val_df=law_val_df,
test_df=law_test_df,
monotonicity=1,
learning_rates=LEARNING_RATES,
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
get_input_fn=get_input_fn_law,
get_feature_columns_and_configs=get_feature_columns_and_configs_law)
plot_model_contour(mon_linear_estimator, input_df=law_df)
Explanation: Train a monotonic calibrated linear model
End of explanation
feature_names = ['ugpa', 'lsat']
dnn_estimator = tf.estimator.DNNClassifier(
feature_columns=[
tf.feature_column.numeric_column(feature) for feature in feature_names
],
hidden_units=[100, 100],
optimizer=tf.keras.optimizers.Adam(learning_rate=0.008),
activation_fn=tf.nn.relu)
dnn_estimator.train(
input_fn=get_input_fn_law(
law_train_df, batch_size=BATCH_SIZE, num_epochs=NUM_EPOCHS))
dnn_train_acc = dnn_estimator.evaluate(
input_fn=get_input_fn_law(law_train_df, num_epochs=1))['accuracy']
dnn_val_acc = dnn_estimator.evaluate(
input_fn=get_input_fn_law(law_val_df, num_epochs=1))['accuracy']
dnn_test_acc = dnn_estimator.evaluate(
input_fn=get_input_fn_law(law_test_df, num_epochs=1))['accuracy']
print('accuracies for DNN: train: %f, val: %f, test: %f' %
(dnn_train_acc, dnn_val_acc, dnn_test_acc))
plot_model_contour(dnn_estimator, input_df=law_df)
Explanation: Train other unconstrained models
We demonstrated that TFL calibrated linear models can be trained to be monotonic in both LSAT score and GPA without sacrificing too much accuracy.
But how does the calibrated linear model compare with other types of models, such as deep neural networks (DNNs) or gradient boosted trees (GBTs)? Do DNNs and GBTs appear to have reasonably fair outputs? To address this question, we next train an unconstrained DNN and GBT. In fact, we will observe that both the DNN and the GBT easily violate monotonicity in LSAT score and undergraduate GPA.
Train an unconstrained Deep Neural Network (DNN) model
This architecture was previously tuned to achieve high validation accuracy.
End of explanation
tree_estimator = tf.estimator.BoostedTreesClassifier(
feature_columns=[
tf.feature_column.numeric_column(feature) for feature in feature_names
],
n_batches_per_layer=2,
n_trees=20,
max_depth=4)
tree_estimator.train(
input_fn=get_input_fn_law(
law_train_df, num_epochs=NUM_EPOCHS, batch_size=BATCH_SIZE))
tree_train_acc = tree_estimator.evaluate(
input_fn=get_input_fn_law(law_train_df, num_epochs=1))['accuracy']
tree_val_acc = tree_estimator.evaluate(
input_fn=get_input_fn_law(law_val_df, num_epochs=1))['accuracy']
tree_test_acc = tree_estimator.evaluate(
input_fn=get_input_fn_law(law_test_df, num_epochs=1))['accuracy']
print('accuracies for GBT: train: %f, val: %f, test: %f' %
(tree_train_acc, tree_val_acc, tree_test_acc))
plot_model_contour(tree_estimator, input_df=law_df)
Explanation: Train an unconstrained Gradient Boosted Trees (GBT) model
This tree structure was previously tuned to achieve high validation accuracy.
End of explanation
# Load data file.
credit_file_name = 'credit_default.csv'
credit_file_path = os.path.join(DATA_DIR, credit_file_name)
credit_df = pd.read_csv(credit_file_path, delimiter=',')
# Define label column name.
CREDIT_LABEL = 'default'
Explanation: Case study 2: Credit default
The second case study we consider in this tutorial is predicting an individual's credit default probability. We use the Default of Credit Card Clients dataset from the UCI repository. The data were collected from 30,000 Taiwanese credit card users and contain a binary label indicating whether a user defaulted on a payment within a time window. Features include marital status, gender, education, and how far behind the user was on their existing bill payments in each month from April to September 2005.
As in the first case study, we again illustrate using monotonicity constraints to avoid unfair penalization: if the model is used to determine a user's credit score, it would feel unfair to many people if, all else being equal, they were penalized for paying their bills earlier. We therefore apply a monotonicity constraint so that the model does not penalize earlier payments.
Load the credit default data
End of explanation
credit_train_df, credit_val_df, credit_test_df = split_dataset(credit_df)
Explanation: Split the data into train/validation/test sets
End of explanation
def get_agg_data(df, x_col, y_col, bins=11):
xbins = pd.cut(df[x_col], bins=bins)
data = df[[x_col, y_col]].groupby(xbins).agg(['mean', 'sem'])
return data
def plot_2d_means_credit(input_df, x_col, y_col, x_label, y_label):
plt.rcParams['font.family'] = ['serif']
_, ax = plt.subplots(nrows=1, ncols=1)
plt.setp(ax.spines.values(), color='black', linewidth=1)
ax.tick_params(
direction='in', length=6, width=1, top=False, right=False, labelsize=18)
df_single = get_agg_data(input_df[input_df['MARRIAGE'] == 1], x_col, y_col)
df_married = get_agg_data(input_df[input_df['MARRIAGE'] == 2], x_col, y_col)
ax.errorbar(
df_single[(x_col, 'mean')],
df_single[(y_col, 'mean')],
xerr=df_single[(x_col, 'sem')],
yerr=df_single[(y_col, 'sem')],
color='orange',
marker='s',
capsize=3,
capthick=1,
label='Single',
markersize=10,
linestyle='')
ax.errorbar(
df_married[(x_col, 'mean')],
df_married[(y_col, 'mean')],
xerr=df_married[(x_col, 'sem')],
yerr=df_married[(y_col, 'sem')],
color='b',
marker='^',
capsize=3,
capthick=1,
label='Married',
markersize=10,
linestyle='')
leg = ax.legend(loc='upper left', fontsize=18, frameon=True, numpoints=1)
ax.set_xlabel(x_label, fontsize=18)
ax.set_ylabel(y_label, fontsize=18)
ax.set_ylim(0, 1.1)
ax.set_xlim(-2, 8.5)
ax.patch.set_facecolor('white')
leg.get_frame().set_edgecolor('black')
leg.get_frame().set_facecolor('white')
leg.get_frame().set_linewidth(1)
plt.show()
plot_2d_means_credit(credit_train_df, 'PAY_0', 'default',
'Repayment Status (April)', 'Observed default rate')
Explanation: Visualize the data distribution
First, we visualize the distribution of the data. We plot the mean and standard error of the observed default rate for people with different marital statuses and repayment statuses. Repayment status indicates how many months of their loan a person had repaid (as of April 2005).
End of explanation
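# A quick complementary look at the overall default rate (illustrative only):
print(credit_df[CREDIT_LABEL].value_counts(normalize=True))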
def get_input_fn_credit(input_df, num_epochs, batch_size=None):
Gets TF input_fn for credit default models.
return tf.compat.v1.estimator.inputs.pandas_input_fn(
x=input_df[['MARRIAGE', 'PAY_0']],
y=input_df['default'],
num_epochs=num_epochs,
batch_size=batch_size or len(input_df),
shuffle=False)
def get_feature_columns_and_configs_credit(monotonicity):
Gets TFL feature configs for credit default models.
feature_columns = [
tf.feature_column.numeric_column('MARRIAGE'),
tf.feature_column.numeric_column('PAY_0'),
]
feature_configs = [
tfl.configs.FeatureConfig(
name='MARRIAGE',
lattice_size=2,
pwl_calibration_num_keypoints=3,
monotonicity=monotonicity,
pwl_calibration_always_monotonic=False),
tfl.configs.FeatureConfig(
name='PAY_0',
lattice_size=2,
pwl_calibration_num_keypoints=10,
monotonicity=monotonicity,
pwl_calibration_always_monotonic=False),
]
return feature_columns, feature_configs
Explanation: Train a calibrated linear model to predict credit default rates
Next, we train a calibrated linear model with TFL to predict whether a person will default on a loan. The two input features are the person's marital status and how many months of their loan the person had repaid as of April (repayment status). The training label is whether the person has defaulted on a loan.
We first train a calibrated linear model without any constraints. Then we train a calibrated linear model with monotonicity constraints and observe the difference in model output and accuracy.
Helper functions for configuring credit default dataset features
These helper functions are specific to the credit default case study.
End of explanation
def plot_predictions_credit(input_df,
estimator,
x_col,
x_label='Repayment Status (April)',
y_label='Predicted default probability'):
predictions = get_predicted_probabilities(
estimator=estimator, input_df=input_df, get_input_fn=get_input_fn_credit)
new_df = input_df.copy()
new_df.loc[:, 'predictions'] = predictions
plot_2d_means_credit(new_df, x_col, 'predictions', x_label, y_label)
Explanation: Helper functions for visualizing trained model outputs
End of explanation
nomon_linear_estimator = optimize_learning_rates(
train_df=credit_train_df,
val_df=credit_val_df,
test_df=credit_test_df,
monotonicity=0,
learning_rates=LEARNING_RATES,
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
get_input_fn=get_input_fn_credit,
get_feature_columns_and_configs=get_feature_columns_and_configs_credit)
plot_predictions_credit(credit_train_df, nomon_linear_estimator, 'PAY_0')
Explanation: Train an unconstrained (non-monotonic) calibrated linear model
End of explanation
mon_linear_estimator = optimize_learning_rates(
train_df=credit_train_df,
val_df=credit_val_df,
test_df=credit_test_df,
monotonicity=1,
learning_rates=LEARNING_RATES,
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
get_input_fn=get_input_fn_credit,
get_feature_columns_and_configs=get_feature_columns_and_configs_credit)
plot_predictions_credit(credit_train_df, mon_linear_estimator, 'PAY_0')
Explanation: Train a monotonic calibrated linear model
End of explanation |
15,786 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Continuous training pipeline with Kubeflow Pipeline and AI Platform
Learning Objectives
Step6: The pipeline uses a mix of custom and pre-build components.
Pre-build components. The pipeline uses the following pre-build components that are included with the KFP distribution
Step7: The custom components execute in a container image defined in base_image/Dockerfile.
Step8: The training step in the pipeline employes the AI Platform Training component to schedule a AI Platform Training job in a custom training container. The custom training image is defined in trainer_image/Dockerfile.
Step9: Building and deploying the pipeline
Before deploying to AI Platform Pipelines, the pipeline DSL has to be compiled into a pipeline runtime format, also refered to as a pipeline package. The runtime format is based on Argo Workflow, which is expressed in YAML.
Configure environment settings
Update the below constants with the settings reflecting your lab environment.
REGION - the compute region for AI Platform Training and Prediction
ARTIFACT_STORE - the GCS bucket created during installation of AI Platform Pipelines. The bucket name will be similar to qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default.
ENDPOINT - set the ENDPOINT constant to the endpoint to your AI Platform Pipelines instance. Then endpoint to the AI Platform Pipelines instance can be found on the AI Platform Pipelines page in the Google Cloud Console.
Open the SETTINGS for your instance
Use the value of the host variable in the Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SKD section of the SETTINGS window.
Run gsutil ls without URLs to list all of the Cloud Storage buckets under your default project ID.
Step10: HINT
Step11: Build the trainer image
Step12: Note
Step13: Build the base image for custom components
Step14: Compile the pipeline
You can compile the DSL using an API from the KFP SDK or using the KFP compiler.
To compile the pipeline DSL using the KFP compiler.
Set the pipeline's compile time settings
The pipeline can run using a security context of the GKE default node pool's service account or the service account defined in the user-gcp-sa secret of the Kubernetes namespace hosting KFP. If you want to use the user-gcp-sa service account you change the value of USE_KFP_SA to True.
Note that the default AI Platform Pipelines configuration does not define the user-gcp-sa secret.
Step15: Use the CLI compiler to compile the pipeline
Step16: The result is the covertype_training_pipeline.yaml file.
Step17: Deploy the pipeline package
Step18: Submitting pipeline runs
You can trigger pipeline runs using an API from the KFP SDK or using KFP CLI. To submit the run using KFP CLI, execute the following commands. Notice how the pipeline's parameters are passed to the pipeline run.
List the pipelines in AI Platform Pipelines
Step19: Submit a run
Find the ID of the covertype_continuous_training pipeline you uploaded in the previous step and update the value of PIPELINE_ID .
Step20: Run the pipeline using the kfp command line by retrieving the variables from the environment to pass to the pipeline where | Python Code:
!grep 'BASE_IMAGE =' -A 5 pipeline/covertype_training_pipeline.py
Explanation: Continuous training pipeline with Kubeflow Pipeline and AI Platform
Learning Objectives:
1. Learn how to use Kubeflow Pipelines (KFP) pre-built components (BigQuery, AI Platform training and prediction)
1. Learn how to use KFP lightweight python components
1. Learn how to build a KFP with these components
1. Learn how to compile, upload, and run a KFP with the command line
In this lab, you will build, deploy, and run a KFP pipeline that orchestrates BigQuery and AI Platform services to train, tune, and deploy a scikit-learn model.
Understanding the pipeline design
The workflow implemented by the pipeline is defined using a Python based Domain Specific Language (DSL). The pipeline's DSL is in the covertype_training_pipeline.py file that we will generate below.
The pipeline's DSL has been designed to avoid hardcoding any environment specific settings like file paths or connection strings. These settings are provided to the pipeline code through a set of environment variables.
End of explanation
%%writefile ./pipeline/covertype_training_pipeline.py
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
KFP orchestrating BigQuery and Cloud AI Platform services.
import os
from helper_components import evaluate_model
from helper_components import retrieve_best_run
from jinja2 import Template
import kfp
from kfp.components import func_to_container_op
from kfp.dsl.types import Dict
from kfp.dsl.types import GCPProjectID
from kfp.dsl.types import GCPRegion
from kfp.dsl.types import GCSPath
from kfp.dsl.types import String
from kfp.gcp import use_gcp_secret
# Defaults and environment settings
BASE_IMAGE = os.getenv('BASE_IMAGE')
TRAINER_IMAGE = os.getenv('TRAINER_IMAGE')
RUNTIME_VERSION = os.getenv('RUNTIME_VERSION')
PYTHON_VERSION = os.getenv('PYTHON_VERSION')
COMPONENT_URL_SEARCH_PREFIX = os.getenv('COMPONENT_URL_SEARCH_PREFIX')
USE_KFP_SA = os.getenv('USE_KFP_SA')
TRAINING_FILE_PATH = 'datasets/training/data.csv'
VALIDATION_FILE_PATH = 'datasets/validation/data.csv'
TESTING_FILE_PATH = 'datasets/testing/data.csv'
# Parameter defaults
SPLITS_DATASET_ID = 'splits'
HYPERTUNE_SETTINGS =
{
"hyperparameters": {
"goal": "MAXIMIZE",
"maxTrials": 6,
"maxParallelTrials": 3,
"hyperparameterMetricTag": "accuracy",
"enableTrialEarlyStopping": True,
"params": [
{
"parameterName": "max_iter",
"type": "DISCRETE",
"discreteValues": [500, 1000]
},
{
"parameterName": "alpha",
"type": "DOUBLE",
"minValue": 0.0001,
"maxValue": 0.001,
"scaleType": "UNIT_LINEAR_SCALE"
}
]
}
}
# Helper functions
def generate_sampling_query(source_table_name, num_lots, lots):
Prepares the data sampling query.
sampling_query_template =
SELECT *
FROM
`{{ source_table }}` AS cover
WHERE
MOD(ABS(FARM_FINGERPRINT(TO_JSON_STRING(cover))), {{ num_lots }}) IN ({{ lots }})
query = Template(sampling_query_template).render(
source_table=source_table_name, num_lots=num_lots, lots=str(lots)[1:-1])
return query
# Create component factories
component_store = kfp.components.ComponentStore(
local_search_paths=None, url_search_prefixes=[COMPONENT_URL_SEARCH_PREFIX])
bigquery_query_op = component_store.load_component('bigquery/query')
mlengine_train_op = component_store.load_component('ml_engine/train')
mlengine_deploy_op = component_store.load_component('ml_engine/deploy')
retrieve_best_run_op = func_to_container_op(
retrieve_best_run, base_image=BASE_IMAGE)
evaluate_model_op = func_to_container_op(evaluate_model, base_image=BASE_IMAGE)
@kfp.dsl.pipeline(
name='Covertype Classifier Training',
description='The pipeline training and deploying the Covertype classifier'
)
def covertype_train(project_id,
region,
source_table_name,
gcs_root,
dataset_id,
evaluation_metric_name,
evaluation_metric_threshold,
model_id,
version_id,
replace_existing_version,
hypertune_settings=HYPERTUNE_SETTINGS,
dataset_location='US'):
Orchestrates training and deployment of an sklearn model.
# Create the training split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[1, 2, 3, 4])
training_file_path = '{}/{}'.format(gcs_root, TRAINING_FILE_PATH)
create_training_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=training_file_path,
dataset_location=dataset_location)
# Create the validation split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[8])
validation_file_path = '{}/{}'.format(gcs_root, VALIDATION_FILE_PATH)
create_validation_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=validation_file_path,
dataset_location=dataset_location)
# Create the testing split
query = generate_sampling_query(
source_table_name=source_table_name, num_lots=10, lots=[9])
testing_file_path = '{}/{}'.format(gcs_root, TESTING_FILE_PATH)
create_testing_split = bigquery_query_op(
query=query,
project_id=project_id,
dataset_id=dataset_id,
table_id='',
output_gcs_path=testing_file_path,
dataset_location=dataset_location)
# Tune hyperparameters
tune_args = [
'--training_dataset_path',
create_training_split.outputs['output_gcs_path'],
'--validation_dataset_path',
create_validation_split.outputs['output_gcs_path'], '--hptune', 'True'
]
job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir/hypertune',
kfp.dsl.RUN_ID_PLACEHOLDER)
hypertune = mlengine_train_op(
project_id=project_id,
region=region,
master_image_uri=TRAINER_IMAGE,
job_dir=job_dir,
args=tune_args,
training_input=hypertune_settings)
# Retrieve the best trial
get_best_trial = retrieve_best_run_op(
project_id, hypertune.outputs['job_id'])
# Train the model on a combined training and validation datasets
job_dir = '{}/{}/{}'.format(gcs_root, 'jobdir', kfp.dsl.RUN_ID_PLACEHOLDER)
train_args = [
'--training_dataset_path',
create_training_split.outputs['output_gcs_path'],
'--validation_dataset_path',
create_validation_split.outputs['output_gcs_path'], '--alpha',
get_best_trial.outputs['alpha'], '--max_iter',
get_best_trial.outputs['max_iter'], '--hptune', 'False'
]
train_model = mlengine_train_op(
project_id=project_id,
region=region,
master_image_uri=TRAINER_IMAGE,
job_dir=job_dir,
args=train_args)
# Evaluate the model on the testing split
eval_model = evaluate_model_op(
dataset_path=str(create_testing_split.outputs['output_gcs_path']),
model_path=str(train_model.outputs['job_dir']),
metric_name=evaluation_metric_name)
# Deploy the model if the primary metric is better than threshold
with kfp.dsl.Condition(eval_model.outputs['metric_value'] > evaluation_metric_threshold):
deploy_model = mlengine_deploy_op(
model_uri=train_model.outputs['job_dir'],
project_id=project_id,
model_id=model_id,
version_id=version_id,
runtime_version=RUNTIME_VERSION,
python_version=PYTHON_VERSION,
replace_existing_version=replace_existing_version)
# Configure the pipeline to run using the service account defined
# in the user-gcp-sa k8s secret
if USE_KFP_SA == 'True':
kfp.dsl.get_pipeline_conf().add_op_transformer(
use_gcp_secret('user-gcp-sa'))
Explanation: The pipeline uses a mix of custom and pre-built components.
Pre-built components. The pipeline uses the following pre-built components that are included with the KFP distribution:
BigQuery query component
AI Platform Training component
AI Platform Deploy component
Custom components. The pipeline uses two custom helper components that encapsulate functionality not available in any of the pre-built components. The components are implemented using the KFP SDK's Lightweight Python Components mechanism. The code for the components is in the helper_components.py file:
Retrieve Best Run. This component retrieves a tuning metric and hyperparameter values for the best run of an AI Platform Training hyperparameter tuning job.
Evaluate Model. This component evaluates a sklearn trained model using a provided metric and a testing dataset.
End of explanation
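# Hedged sketch of the Lightweight Python Components mechanism mentioned above
# (kept as comments because kfp is only imported inside the pipeline module below):
# import kfp
# def add_one(x: int) -> int:
#     return x + 1
# add_one_op = kfp.components.func_to_container_op(add_one, base_image='python:3.7')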
!cat base_image/Dockerfile
Explanation: The custom components execute in a container image defined in base_image/Dockerfile.
End of explanation
!cat trainer_image/Dockerfile
Explanation: The training step in the pipeline employs the AI Platform Training component to schedule an AI Platform Training job in a custom training container. The custom training image is defined in trainer_image/Dockerfile.
End of explanation
!gsutil ls
Explanation: Building and deploying the pipeline
Before deploying to AI Platform Pipelines, the pipeline DSL has to be compiled into a pipeline runtime format, also referred to as a pipeline package. The runtime format is based on Argo Workflow, which is expressed in YAML.
Configure environment settings
Update the below constants with the settings reflecting your lab environment.
REGION - the compute region for AI Platform Training and Prediction
ARTIFACT_STORE - the GCS bucket created during installation of AI Platform Pipelines. The bucket name will be similar to qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default.
ENDPOINT - set the ENDPOINT constant to the endpoint of your AI Platform Pipelines instance. The endpoint to the AI Platform Pipelines instance can be found on the AI Platform Pipelines page in the Google Cloud Console.
Open the SETTINGS for your instance
Use the value of the host variable in the Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK section of the SETTINGS window.
Run gsutil ls without URLs to list all of the Cloud Storage buckets under your default project ID.
End of explanation
REGION = 'us-central1'
ENDPOINT = '337dd39580cbcbd2-dot-us-central2.pipelines.googleusercontent.com' # TO DO: REPLACE WITH YOUR ENDPOINT
ARTIFACT_STORE_URI = 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default' # TO DO: REPLACE WITH YOUR ARTIFACT_STORE NAME
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
Explanation: HINT:
For ENDPOINT, use the value of the host variable in the Connect to this Kubeflow Pipelines instance from a Python client via Kubeflow Pipelines SDK section of the SETTINGS window.
For ARTIFACT_STORE_URI, copy the bucket name which starts with the qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default prefix from the previous cell output. Your copied value should look like 'gs://qwiklabs-gcp-xx-xxxxxxx-kubeflowpipelines-default'
End of explanation
IMAGE_NAME='trainer_image'
TAG='latest'
TRAINER_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)
Explanation: Build the trainer image
End of explanation
!gcloud builds submit --timeout 15m --tag $TRAINER_IMAGE trainer_image
Explanation: Note: Please ignore any incompatibility ERROR that may appear for the packages visions as it will not affect the lab's functionality.
End of explanation
IMAGE_NAME='base_image'
TAG='latest'
BASE_IMAGE='gcr.io/{}/{}:{}'.format(PROJECT_ID, IMAGE_NAME, TAG)
!gcloud builds submit --timeout 15m --tag $BASE_IMAGE base_image
Explanation: Build the base image for custom components
End of explanation
USE_KFP_SA = False
COMPONENT_URL_SEARCH_PREFIX = 'https://raw.githubusercontent.com/kubeflow/pipelines/0.2.5/components/gcp/'
RUNTIME_VERSION = '1.15'
PYTHON_VERSION = '3.7'
%env USE_KFP_SA={USE_KFP_SA}
%env BASE_IMAGE={BASE_IMAGE}
%env TRAINER_IMAGE={TRAINER_IMAGE}
%env COMPONENT_URL_SEARCH_PREFIX={COMPONENT_URL_SEARCH_PREFIX}
%env RUNTIME_VERSION={RUNTIME_VERSION}
%env PYTHON_VERSION={PYTHON_VERSION}
Explanation: Compile the pipeline
You can compile the DSL using an API from the KFP SDK or using the KFP compiler.
To compile the pipeline DSL using the KFP compiler.
Set the pipeline's compile time settings
The pipeline can run using a security context of the GKE default node pool's service account or the service account defined in the user-gcp-sa secret of the Kubernetes namespace hosting KFP. If you want to use the user-gcp-sa service account you change the value of USE_KFP_SA to True.
Note that the default AI Platform Pipelines configuration does not define the user-gcp-sa secret.
End of explanation
!dsl-compile --py pipeline/covertype_training_pipeline.py --output covertype_training_pipeline.yaml
Explanation: Use the CLI compiler to compile the pipeline
End of explanation
!head covertype_training_pipeline.yaml
Explanation: The result is the covertype_training_pipeline.yaml file.
End of explanation
PIPELINE_NAME='covertype_continuous_training'
!kfp --endpoint $ENDPOINT pipeline upload \
-p $PIPELINE_NAME \
covertype_training_pipeline.yaml
Explanation: Deploy the pipeline package
End of explanation
!kfp --endpoint $ENDPOINT pipeline list
Explanation: Submitting pipeline runs
You can trigger pipeline runs using an API from the KFP SDK or using KFP CLI. To submit the run using KFP CLI, execute the following commands. Notice how the pipeline's parameters are passed to the pipeline run.
List the pipelines in AI Platform Pipelines
End of explanation
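# Hedged sketch of the equivalent submission through the KFP SDK instead of the CLI
# used below; names and signatures follow the KFP v1 client and should be treated as
# assumptions to verify (the constants referenced here are defined in the next cell).
# import kfp
# client = kfp.Client(host=ENDPOINT)
# experiment = client.create_experiment(EXPERIMENT_NAME)
# run = client.run_pipeline(experiment.id, job_name=RUN_ID, pipeline_id=PIPELINE_ID,
#                           params={'project_id': PROJECT_ID, 'region': REGION})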
PIPELINE_ID='0918568d-758c-46cf-9752-e04a4403cd84' # TO DO: REPLACE WITH YOUR PIPELINE ID
EXPERIMENT_NAME = 'Covertype_Classifier_Training'
RUN_ID = 'Run_001'
SOURCE_TABLE = 'covertype_dataset.covertype'
DATASET_ID = 'splits'
EVALUATION_METRIC = 'accuracy'
EVALUATION_METRIC_THRESHOLD = '0.69'
MODEL_ID = 'covertype_classifier'
VERSION_ID = 'v01'
REPLACE_EXISTING_VERSION = 'True'
GCS_STAGING_PATH = '{}/staging'.format(ARTIFACT_STORE_URI)
Explanation: Submit a run
Find the ID of the covertype_continuous_training pipeline you uploaded in the previous step and update the value of PIPELINE_ID .
End of explanation
!kfp --endpoint $ENDPOINT run submit \
-e $EXPERIMENT_NAME \
-r $RUN_ID \
-p $PIPELINE_ID \
project_id=$PROJECT_ID \
gcs_root=$GCS_STAGING_PATH \
region=$REGION \
source_table_name=$SOURCE_TABLE \
dataset_id=$DATASET_ID \
evaluation_metric_name=$EVALUATION_METRIC \
evaluation_metric_threshold=$EVALUATION_METRIC_THRESHOLD \
model_id=$MODEL_ID \
version_id=$VERSION_ID \
replace_existing_version=$REPLACE_EXISTING_VERSION
Explanation: Run the pipeline using the kfp command line by retrieving the variables from the environment to pass to the pipeline where:
EXPERIMENT_NAME is set to the experiment used to run the pipeline. You can choose any name you want. If the experiment does not exist it will be created by the command
RUN_ID is the name of the run. You can use an arbitrary name
PIPELINE_ID is the id of your pipeline. Use the value retrieved by the kfp pipeline list command
GCS_STAGING_PATH is the URI to the Cloud Storage location used by the pipeline to store intermediate files. By default, it is set to the staging folder in your artifact store.
REGION is a compute region for AI Platform Training and Prediction.
You should be already familiar with these and other parameters passed to the command. If not go back and review the pipeline code.
End of explanation |
15,787 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Kernel Density Estimation
Kernel density estimation is the process of estimating an unknown probability density function using a kernel function $K(u)$. While a histogram counts the number of data points in somewhat arbitrary regions, a kernel density estimate is a function defined as the sum of a kernel function on every data point. The kernel function typically exhibits the following properties
Step1: A univariate example
Step2: We create a bimodal distribution
Step3: The simplest non-parametric technique for density estimation is the histogram.
Step4: Fitting with the default arguments
The histogram above is discontinuous. To compute a continuous probability density function,
we can use kernel density estimation.
We initialize a univariate kernel density estimator using KDEUnivariate.
Step5: We present a figure of the fit, as well as the true distribution.
Step6: In the code above, default arguments were used. We can also vary the bandwidth of the kernel, as we will now see.
Varying the bandwidth using the bw argument
The bandwidth of the kernel can be adjusted using the bw argument.
In the following example, a bandwidth of bw=0.2 seems to fit the data well.
Step7: Comparing kernel functions
In the example above, a Gaussian kernel was used. Several other kernels are also available.
Step8: The available kernel functions
Step9: The available kernel functions on three data points
We now examine how the kernel density estimate will fit to three equally spaced data points.
Step10: A more difficult case
The fit is not always perfect. See the example below for a harder case.
Step11: The KDE is a distribution
Since the KDE is a distribution, we can access attributes and methods such as
Step12: Cumulative distribution, its inverse, and the survival function
Step13: The Cumulative Hazard Function | Python Code:
%matplotlib inline
import numpy as np
from scipy import stats
import statsmodels.api as sm
import matplotlib.pyplot as plt
from statsmodels.distributions.mixture_rvs import mixture_rvs
Explanation: Kernel Density Estimation
Kernel density estimation is the process of estimating an unknown probability density function using a kernel function $K(u)$. While a histogram counts the number of data points in somewhat arbitrary regions, a kernel density estimate is a function defined as the sum of a kernel function on every data point. The kernel function typically exhibits the following properties:
Symmetry such that $K(u) = K(-u)$.
Normalization such that $\int_{-\infty}^{\infty} K(u) \ du = 1$ .
Monotonically decreasing such that $K'(u) < 0$ when $u > 0$.
Expected value equal to zero such that $\mathrm{E}[K] = 0$.
For more information about kernel density estimation, see for instance Wikipedia - Kernel density estimation.
A univariate kernel density estimator is implemented in sm.nonparametric.KDEUnivariate.
In this example we will show the following:
Basic usage, how to fit the estimator.
The effect of varying the bandwidth of the kernel using the bw argument.
The various kernel functions available using the kernel argument.
End of explanation
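# Quick numerical check of the kernel properties listed above, using the Gaussian
# kernel K(u) = exp(-u**2 / 2) / sqrt(2 * pi) as an example (illustrative only):
u = np.linspace(-8, 8, 2001)
K = np.exp(-u**2 / 2) / np.sqrt(2 * np.pi)
print(np.allclose(K, K[::-1]))          # symmetry: K(u) = K(-u)
print(np.round(np.trapz(K, u), 4))      # normalization: integrates to ~1
print(np.round(np.trapz(u * K, u), 4))  # expected value: ~0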
np.random.seed(12345) # Seed the random number generator for reproducible results
Explanation: A univariate example
End of explanation
# Location, scale and weight for the two distributions
dist1_loc, dist1_scale, weight1 = -1, 0.5, 0.25
dist2_loc, dist2_scale, weight2 = 1, 0.5, 0.75
# Sample from a mixture of distributions
obs_dist = mixture_rvs(
prob=[weight1, weight2],
size=250,
dist=[stats.norm, stats.norm],
kwargs=(
dict(loc=dist1_loc, scale=dist1_scale),
dict(loc=dist2_loc, scale=dist2_scale),
),
)
Explanation: We create a bimodal distribution: a mixture of two normal distributions with locations at -1 and 1.
End of explanation
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111)
# Scatter plot of data samples and histogram
ax.scatter(
obs_dist,
np.abs(np.random.randn(obs_dist.size)),
zorder=15,
color="red",
marker="x",
alpha=0.5,
label="Samples",
)
lines = ax.hist(obs_dist, bins=20, edgecolor="k", label="Histogram")
ax.legend(loc="best")
ax.grid(True, zorder=-5)
Explanation: The simplest non-parametric technique for density estimation is the histogram.
End of explanation
kde = sm.nonparametric.KDEUnivariate(obs_dist)
kde.fit() # Estimate the densities
Explanation: Fitting with the default arguments
The histogram above is discontinuous. To compute a continuous probability density function,
we can use kernel density estimation.
We initialize a univariate kernel density estimator using KDEUnivariate.
End of explanation
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111)
# Plot the histogram
ax.hist(
obs_dist,
bins=20,
density=True,
label="Histogram from samples",
zorder=5,
edgecolor="k",
alpha=0.5,
)
# Plot the KDE as fitted using the default arguments
ax.plot(kde.support, kde.density, lw=3, label="KDE from samples", zorder=10)
# Plot the true distribution
true_values = (
stats.norm.pdf(loc=dist1_loc, scale=dist1_scale, x=kde.support) * weight1
+ stats.norm.pdf(loc=dist2_loc, scale=dist2_scale, x=kde.support) * weight2
)
ax.plot(kde.support, true_values, lw=3, label="True distribution", zorder=15)
# Plot the samples
ax.scatter(
obs_dist,
np.abs(np.random.randn(obs_dist.size)) / 40,
marker="x",
color="red",
zorder=20,
label="Samples",
alpha=0.5,
)
ax.legend(loc="best")
ax.grid(True, zorder=-5)
Explanation: We present a figure of the fit, as well as the true distribution.
End of explanation
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111)
# Plot the histogram
ax.hist(
obs_dist,
bins=25,
label="Histogram from samples",
zorder=5,
edgecolor="k",
density=True,
alpha=0.5,
)
# Plot the KDE for various bandwidths
for bandwidth in [0.1, 0.2, 0.4]:
kde.fit(bw=bandwidth) # Estimate the densities
ax.plot(
kde.support,
kde.density,
"--",
lw=2,
color="k",
zorder=10,
label="KDE from samples, bw = {}".format(round(bandwidth, 2)),
)
# Plot the true distribution
ax.plot(kde.support, true_values, lw=3, label="True distribution", zorder=15)
# Plot the samples
ax.scatter(
obs_dist,
np.abs(np.random.randn(obs_dist.size)) / 50,
marker="x",
color="red",
zorder=20,
label="Data samples",
alpha=0.5,
)
ax.legend(loc="best")
ax.set_xlim([-3, 3])
ax.grid(True, zorder=-5)
Explanation: In the code above, default arguments were used. We can also vary the bandwidth of the kernel, as we will now see.
Varying the bandwidth using the bw argument
The bandwidth of the kernel can be adjusted using the bw argument.
In the following example, a bandwidth of bw=0.2 seems to fit the data well.
End of explanation
from statsmodels.nonparametric.kde import kernel_switch
list(kernel_switch.keys())
Explanation: Comparing kernel functions
In the example above, a Gaussian kernel was used. Several other kernels are also available.
End of explanation
# Create a figure
fig = plt.figure(figsize=(12, 5))
# Enumerate every option for the kernel
for i, (ker_name, ker_class) in enumerate(kernel_switch.items()):
# Initialize the kernel object
kernel = ker_class()
# Sample from the domain
domain = kernel.domain or [-3, 3]
x_vals = np.linspace(*domain, num=2 ** 10)
y_vals = kernel(x_vals)
# Create a subplot, set the title
ax = fig.add_subplot(3, 3, i + 1)
ax.set_title('Kernel function "{}"'.format(ker_name))
ax.plot(x_vals, y_vals, lw=3, label="{}".format(ker_name))
ax.scatter([0], [0], marker="x", color="red")
plt.grid(True, zorder=-5)
ax.set_xlim(domain)
plt.tight_layout()
Explanation: The available kernel functions
End of explanation
# Create three equidistant points
data = np.linspace(-1, 1, 3)
kde = sm.nonparametric.KDEUnivariate(data)
# Create a figure
fig = plt.figure(figsize=(12, 5))
# Enumerate every option for the kernel
for i, kernel in enumerate(kernel_switch.keys()):
# Create a subplot, set the title
ax = fig.add_subplot(3, 3, i + 1)
ax.set_title('Kernel function "{}"'.format(kernel))
# Fit the model (estimate densities)
kde.fit(kernel=kernel, fft=False, gridsize=2 ** 10)
# Create the plot
ax.plot(kde.support, kde.density, lw=3, label="KDE from samples", zorder=10)
ax.scatter(data, np.zeros_like(data), marker="x", color="red")
plt.grid(True, zorder=-5)
ax.set_xlim([-3, 3])
plt.tight_layout()
Explanation: The available kernel functions on three data points
We now examine how the kernel density estimate will fit to three equally spaced data points.
End of explanation
obs_dist = mixture_rvs(
[0.25, 0.75],
size=250,
dist=[stats.norm, stats.beta],
kwargs=(dict(loc=-1, scale=0.5), dict(loc=1, scale=1, args=(1, 0.5))),
)
kde = sm.nonparametric.KDEUnivariate(obs_dist)
kde.fit()
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111)
ax.hist(obs_dist, bins=20, density=True, edgecolor="k", zorder=4, alpha=0.5)
ax.plot(kde.support, kde.density, lw=3, zorder=7)
# Plot the samples
ax.scatter(
obs_dist,
np.abs(np.random.randn(obs_dist.size)) / 50,
marker="x",
color="red",
zorder=20,
label="Data samples",
alpha=0.5,
)
ax.grid(True, zorder=-5)
Explanation: A more difficult case
The fit is not always perfect. See the example below for a harder case.
End of explanation
obs_dist = mixture_rvs(
[0.25, 0.75],
size=1000,
dist=[stats.norm, stats.norm],
kwargs=(dict(loc=-1, scale=0.5), dict(loc=1, scale=0.5)),
)
kde = sm.nonparametric.KDEUnivariate(obs_dist)
kde.fit(gridsize=2 ** 10)
kde.entropy
kde.evaluate(-1)
Explanation: The KDE is a distribution
Since the KDE is a distribution, we can access attributes and methods such as:
entropy
evaluate
cdf
icdf
sf
cumhazard
End of explanation
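# Illustrative sketch (not part of the original notebook): kde.cdf is returned
# as an array evaluated on the grid kde.support rather than as a callable, so
# values at arbitrary points can be read off by interpolation.
print(np.interp(0.0, kde.support, kde.cdf))        # P(X <= 0); roughly 0.25 for this mixture
print(np.interp([-1.0, 1.0], kde.support, kde.cdf))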
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111)
ax.plot(kde.support, kde.cdf, lw=3, label="CDF")
ax.plot(np.linspace(0, 1, num=kde.icdf.size), kde.icdf, lw=3, label="Inverse CDF")
ax.plot(kde.support, kde.sf, lw=3, label="Survival function")
ax.legend(loc="best")
ax.grid(True, zorder=-5)
Explanation: Cumulative distribution, its inverse, and the survival function
End of explanation
fig = plt.figure(figsize=(12, 5))
ax = fig.add_subplot(111)
ax.plot(kde.support, kde.cumhazard, lw=3, label="Cumulative Hazard Function")
ax.legend(loc="best")
ax.grid(True, zorder=-5)
Explanation: The Cumulative Hazard Function
End of explanation |
15,788 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Categorical Ordinary Least-Squares
Unit 12, Lecture 3
Numerical Methods and Statistics
Prof. Andrew White, April 19 2018
Goals
Step1: Regression with discrete domains
Let's say we have a potential drug molecule that improves your test-taking abilities. We give 10 people the drug and 7 are given a placebo. Here are their results on an exam
Step2: Let's try regressing it! To regress, we need two dimensions. Right now we only have one. Let's create a category or dummy variable indicating if the exam score came from the drug or control group
Step3: It looks like we can indeed regress this data! The equation we're modeling is this
Step4: Well that's interesting. But can we get a $p$-value out of this? We can, by seeing if the slope is necessary. Let's take the null hypothesis to be that $\beta_1 = 0$
Step5: We get a $p$-value of $0.032$, which is quite close to what the sum of ranks test gave. However, unlike the sum of ranks test, this approach extends to more than one dimension.
Multiple Categories
Now let's say that I'm studying plant growth and I have two categories
Step6: You can see we have 10 examples of each possible combination of categories. Let's now regress it using multidimensional least squares. Our columns will be $[1, \delta_s, \delta_f, \delta_s \delta_f] $, so we'll be doing 4-dimensional regression.
Step7: That tells us about what you'd expect | Python Code:
%matplotlib inline
import random
import numpy as np
import matplotlib.pyplot as plt
from math import sqrt, pi, erf
import seaborn
seaborn.set_context("talk")
#seaborn.set_style("white")
import scipy.stats
Explanation: Categorical Ordinary Least-Squares
Unit 12, Lecture 3
Numerical Methods and Statistics
Prof. Andrew White, April 19 2018
Goals:
Regress on categorical data
Learn that unordered categories have to be split into dummy variables
Know when to test interaction variables
End of explanation
import scipy.stats as boop
drug = np.array([35, 38, 25 , 34, 42, 41 , 27, 32, 43, 36])
control = np.array([27, 29, 34 , 32, 35 , 22, 19])
boop.ranksums(drug, control)
Explanation: Regression with discrete domains
Let's say we have a potential drug molecule that improves your test-taking abilities. We give 10 people the drug and 7 are given a placebo. Here are their results on an exam:
Drug Group | Control Group
---------- | ---------:|
35 | 27
38 | 29
25 | 34
34 | 32
42 | 35
41 | 22
27 | 19
32 |
43 |
36 |
Did the drug make a difference?
We know how to solve this from hypothesis testing - Wilcoxon Sum of Ranks
End of explanation
drugged = np.concatenate( (np.ones(10), np.zeros(7)) )
exam = np.concatenate( (drug, control) )
print(np.column_stack( (drugged, exam) ) )
plt.plot(drugged, exam, 'o')
plt.xlim(-0.1, 1.1)
plt.xlabel('Drug Category')
plt.ylabel('Exam Score')
plt.show()
Explanation: Let's try regressing it! To regress, we need two dimensions. Right now we only have one. Let's create a category or dummy variable indicating if the exam score came from the drug or control group:
End of explanation
dof = len(drugged) - 2
cov = np.cov(drugged, exam, ddof=2)
slope = cov[0,1] / cov[0,0]
intercept = np.mean(exam - slope * drugged)
print(slope, intercept)
plt.plot(drugged, exam, 'o')
plt.plot(drugged, slope * drugged + intercept, '-')
plt.xlim(-0.1, 1.1)
plt.xlabel('Drug Category')
plt.ylabel('Exam Score')
plt.show()
Explanation: It looks like we can indeed regress this data! The equation we're modeling is this:
$$y = \beta_0 + \beta_1 \delta_d + \epsilon $$
End of explanation
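# Quick consistency check (an illustrative sketch, not part of the original
# lecture): a degree-1 least-squares polynomial fit should recover the same
# slope and intercept as the covariance-based estimates above.
check_slope, check_intercept = np.polyfit(drugged, exam, 1)
print(check_slope, check_intercept)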
s2_e = np.sum( (exam - slope * drugged - intercept)**2 ) / dof
s2_b = s2_e / np.sum( (drugged - np.mean(drugged)) ** 2 )
slope_se = np.sqrt(s2_b)
T = slope / slope_se
#The null hypothesis is that you have no slope, so DOF = N - 1
print('{:.2}'.format(2 * (1 - boop.t.cdf(T, len(drugged) - 1))))
Explanation: Well that's interesting. But can we get a $p$-value out of this? We can, by seeing if the slope is necessary. Let's take the null hypothesis to be that $\beta_1 = 0$
End of explanation
growth = [13.4,11.4, 11.9, 13.1, 14.8, 12.0, 14.3, 13.2, 13.4, 12.2, 1.9, 3.1, 4.4, 3.9, 4.2, 3.2, 2.1, 2.1, 3.4, 4.2, 5.6, 4.5, 4.7, 6.9, 5.1, 3.2, 2.7, 5.2, 5.4, 4.9, 3.5, 2.3, 3.4, 5.4, 4.1, 4.0, 3.2, 4.1, 3.3, 3.4]
fertilizer = [1.0,1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
sunlight = [1.0,1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
print(np.column_stack((fertilizer, sunlight, growth)))
Explanation: We get a $p$-value of $0.032$, which is quite close to what the sum of ranks test gave. However, unlike the sum of ranks test, this approach extends to more than one dimension.
Multiple Categories
Now let's say that I'm studying plant growth and I have two categories: I fertilized the plant and I put the plant in direct sunlight. Let's turn that into two variables: $\delta_s$ for sunlight and $\delta_f$ for fertilizer.
Now you might believe that there is an interaction between these two factors. That is, having sunlight and fertilizer has an effect beyond the sum of the two individually. I can write this as $\delta_{fs}$. This category is 1 when both fertilizer and direct sunlight is provided. It turns out:
$$\delta_{fs} = \delta_s \times \delta_f $$
Our model equation will then be:
$$ g = \beta_0 + \beta_s \delta_s + \beta_f \delta_f + \beta_{sf} \delta_s \delta_f + \epsilon$$
and here's some data
End of explanation
N = len(growth)
dof = N - 4
x_mat = np.column_stack( (np.ones(N), sunlight, fertilizer, np.array(sunlight) * np.array(fertilizer)) )
print(x_mat)
import scipy.linalg as linalg
#I'll use the lstsq convenience function
#It gives back a bunch of stuff I don't want though
#I'll throw it away by assigning it all to an underscore
beta,*_ = linalg.lstsq(x_mat, growth)
print(beta)
Explanation: You can see we have 10 examples of each possible combination of categories. Let's now regress it using multidimensional least squares. Our columns will be $[1, \delta_s, \delta_f, \delta_s \delta_f] $, so we'll be doing 4-dimensional regression.
End of explanation
s2_e = np.sum( (growth - x_mat.dot(beta))**2 ) / dof
s2_b = linalg.inv(x_mat.transpose().dot(x_mat)) * s2_e
s_b = np.sqrt(np.diag(s2_b))
names = ['beta_0', 'beta_s', 'beta_f', 'beta_sf']
for bi, si, ni in zip(beta, s_b, names):
print('{}: {:.3} +/- {:.2}'.format(ni, bi, si * boop.t.ppf(0.975, dof)))
Explanation: That tells us about what you'd expect: the combination is quite important! Let's get individual confidence intervals on the factors
End of explanation |
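# Optional cross-check with statsmodels (assumed to be installed; it is not
# used elsewhere in this lecture). x_mat already contains the intercept
# column, so no constant needs to be added.
import statsmodels.api as sm
ols_res = sm.OLS(growth, x_mat).fit()
print(ols_res.params)      # should agree with beta from lstsq
print(ols_res.conf_int())  # 95% confidence intervals for each coefficient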
15,789 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Step1: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
Step2: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following
Step5: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
Step8: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint
Step10: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
Step12: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step17: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note
Step20: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling
Step23: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option
Step26: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step29: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option
Step32: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model
Step35: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following
Step37: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
Step38: Hyperparameters
Tune the following parameters
Step40: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
Step42: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
Step45: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters. | Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/input/cifar-10/python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
Explanation: Image Classification
In this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.
Get the Data
Run the following cell to download the CIFAR-10 dataset for python.
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 5
sample_id = 29
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Explanation: Explore the Data
The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:
* airplane
* automobile
* bird
* cat
* deer
* dog
* frog
* horse
* ship
* truck
Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.
Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
End of explanation
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
return (x - x.min()) / (x.max()-x.min())
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
Explanation: Implement Preprocess Functions
Normalize
In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
End of explanation
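# Alternative sketch (assumes the raw CIFAR-10 pixel values are 8-bit, i.e. in
# the range 0-255): dividing by a fixed 255.0 also maps the data into [0, 1]
# and does not depend on the min/max of the particular batch.
def normalize_fixed(x):
    return x / 255.0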
from sklearn import preprocessing
rrange = np.arange(10)
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
lb = preprocessing.LabelBinarizer()
lb.fit(rrange)
return lb.transform(x)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
Explanation: One-hot encode
Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.
Hint: Don't reinvent the wheel.
End of explanation
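# Equivalent sketch without scikit-learn: index an identity matrix by the
# label values (illustrative only; the LabelBinarizer version above is the
# one used by the rest of the project).
def one_hot_encode_eye(x):
    return np.eye(10)[np.asarray(x)]
print(one_hot_encode_eye([0, 9, 3]))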
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
Explanation: Randomize Data
As you saw from exploring the data above, the order of the samples is randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.
Preprocess all the data and save it
Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
return tf.placeholder(tf.float32, (None, image_shape[0], image_shape[1], image_shape[2]), name="x")
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
return tf.placeholder(tf.float32, (None, n_classes), name="y")
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
return tf.placeholder(tf.float32, name="keep_prob")
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Explanation: Build the network
For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.
Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.
However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d.
Let's begin!
Input
The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions
* Implement neural_net_image_input
* Return a TF Placeholder
* Set the shape using image_shape with batch size set to None.
* Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_label_input
* Return a TF Placeholder
* Set the shape using n_classes with batch size set to None.
* Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder.
* Implement neural_net_keep_prob_input
* Return a TF Placeholder for dropout keep probability.
* Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder.
These names will be used at the end of the project to load your saved model.
Note: None for shapes in TensorFlow allow for a dynamic size.
End of explanation
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
input_feature_map = x_tensor.get_shape()[3].value
weight = tf.Variable(
tf.truncated_normal(
[conv_ksize[0], conv_ksize[1], x_tensor.get_shape().as_list()[3], conv_num_outputs],
mean=0,
stddev=0.01
),
name="conv2d_weight"
)
#bias = tf.Variable(tf.truncated_normal([conv_num_outputs], dtype=tf.float32))
bias = tf.Variable(tf.zeros([conv_num_outputs], dtype=tf.float32), name="conv2d_bias")
cstrides = [1, conv_strides[0], conv_strides[1], 1]
pstrides = [1, pool_strides[0], pool_strides[1], 1]
output = tf.nn.conv2d(
x_tensor,
weight,
strides=cstrides,
padding="SAME"
)
output = tf.nn.bias_add(output, bias)
output = tf.nn.relu(output)
output = tf.nn.max_pool(
output,
ksize=[1, pool_ksize[0], pool_ksize[1], 1],
strides=pstrides,
padding="SAME"
)
return output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
Explanation: Convolution and Max Pooling Layer
Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
* We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
* We recommend you use same padding, but you're welcome to use any padding.
Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
End of explanation
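# Quick shape sanity check (illustrative only, not required by the project):
# with SAME padding, a (1, 1) convolution stride and a (2, 2) pooling stride,
# a 32x32x3 input should come out as 16x16xconv_num_outputs.
shape_check_x = tf.placeholder(tf.float32, [None, 32, 32, 3])
shape_check_out = conv2d_maxpool(shape_check_x, 10, (3, 3), (1, 1), (2, 2), (2, 2))
print(shape_check_out.get_shape().as_list())  # expected: [None, 16, 16, 10]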
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
tensor_dims = x_tensor.get_shape().as_list()
return tf.reshape(x_tensor, [-1, tensor_dims[1]*tensor_dims[2]*tensor_dims[3]])
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
Explanation: Flatten Layer
Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
tensor_shape = x_tensor.get_shape().as_list()
weights = tf.Variable(tf.truncated_normal([tensor_shape[1], num_outputs], mean=0.0, stddev=0.03), name="weight_fc")
bias = tf.Variable(tf.zeros([num_outputs]), name="weight_bias")
output = tf.add(tf.matmul(x_tensor, weights), bias)
output = tf.nn.relu(output)
return output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
Explanation: Fully-Connected Layer
Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
End of explanation
def output(x_tensor, num_outputs):
Apply a output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
tensor_shape = x_tensor.get_shape().as_list()
weights = tf.Variable(tf.truncated_normal([tensor_shape[1], num_outputs], mean=0, stddev=0.01), name="output_weight")
bias = tf.Variable(tf.zeros([num_outputs]), name="output_bias")
output = tf.add(tf.matmul(x_tensor, weights), bias)
return output
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
Explanation: Output Layer
Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
Note: Activation, softmax, or cross entropy should not be applied to this.
End of explanation
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that hold dropout keep probability.
: return: Tensor that represents logits
conv_ksize = (3, 3)
conv_strides = (1, 1)
pool_ksize = (2, 2)
pool_strides = (2, 2)
num_outputs = 10
network = conv2d_maxpool(x, 16, conv_ksize, conv_strides, pool_ksize, pool_strides)
network = conv2d_maxpool(network, 32, conv_ksize, conv_strides, pool_ksize, pool_strides)
network = conv2d_maxpool(network, 64, conv_ksize, conv_strides, pool_ksize, pool_strides)
network = flatten(network)
network = fully_conn(network, 512)
network = tf.nn.dropout(network, keep_prob=keep_prob)
network = fully_conn(network, 1024)
network = output(network, num_outputs)
return network
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
Explanation: Create Convolutional Model
Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
Apply 1, 2, or 3 Convolution and Max Pool layers
Apply a Flatten Layer
Apply 1, 2, or 3 Fully Connected Layers
Apply an Output Layer
Return the output
Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
End of explanation
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
session.run(optimizer, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: keep_probability
})
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
Explanation: Train the Neural Network
Single Optimization
Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout
This function will be called for each batch, so tf.global_variables_initializer() has already been called.
Note: Nothing needs to be returned. This function is only optimizing the neural network.
End of explanation
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
loss = session.run(cost, feed_dict={
x: feature_batch,
y: label_batch,
keep_prob: 1.0
})
accuracies = np.zeros(5)
for i in [0, 1000, 2000, 3000, 4000]:
valid_acc = session.run(accuracy, feed_dict={
x: valid_features[i:i+1000],
y: valid_labels[i:i+1000],
keep_prob: 1.0
})
index = int(i/1000)
accuracies[index] = valid_acc
accuracy = np.mean(accuracies)
print("Loss: {loss} - Validation Accuracy: {valid_acc}".format(loss=loss, valid_acc=accuracy))
Explanation: Show Stats
Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
End of explanation
# TODO: Tune Parameters
epochs = 50
batch_size = 1024
keep_probability = .5
Explanation: Hyperparameters
Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or start overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
* 64
* 128
* 256
* ...
* Set keep_probability to the probability of keeping a node using dropout
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
Explanation: Train on a Single CIFAR-10 Batch
Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
Explanation: Fully Train the Model
Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
Explanation: Checkpoint
The model has been saved to disk.
Test Model
Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
End of explanation |
15,790 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nested rejection sampling
This example demonstrates how to use nested rejection sampling [1] to sample from the posterior distribution for a logistic model fitted to model-simulated data.
Nested sampling is the craziest way to calculate an integral that you'll ever come across, which has found widespread application in physics. The idea is based upon repeatedly partitioning the prior density to a given area of parameter space based on likelihood thresholds. These repeated partitions form sort of Matryoshka dolls of spaces, where the later surfaces are "nested" within the earlier ones. The space between the Matryoshka volumes constitutes "shells", whose volume can itself be approximated. By summing the volumes of these shells, the marginal likelihood can be calculated. It's bonkers, but it works. It works especially well for multimodal distributions, where traditional methods of calculating the marginal likelihood fail. As a very useful bi-product of nested sampling, posterior samples can be produced by importance sampling.
[1] "Nested Sampling for General Bayesian Computation", John Skilling, Bayesian Analysis (2006) https
Step1: Create the nested sampler that will be used to sample from the posterior.
Step2: Run the sampler!
Step3: Plot posterior samples versus true parameter values (dashed lines)
Step4: Plot posterior predictive simulations versus the observed data
Step5: Marginal likelihood estimate
Nested sampling calculates the denominator of Bayes' rule through applying the trapezium rule to the integral,
$$Z = \int_{0}^{1} \mathcal{L}(X) dX,$$
where $X$ is the prior probability mass.
Step6: With PINTS we can access the segments of the discretised integral, meaning we can plot the function being integrated.
Step7: Examine active and inactive points at end of sampling run
At each step of the nested sampling algorithm, the point with the lowest likelihood is discarded (and inactivated) and a new active point is drawn from the prior, with the restriction that its likelihood exceeds the discarded one. The likelihood of the inactivated point essentially defines the height of a segment of the discretised integral for $Z$. Its width is approximately given by $w_i = X_{i-1}-X_{i+1}$, where $X_i = \text{exp}(-i / N)$, $N$ is the number of active particles and $i$ is the iteration.
PINTS keeps track of active and inactive points at the end of the nested sampling run. The active points (orange) are concentrated in a region of high likelihood, and their likelihoods always exceed those of the discarded inactive points (blue).
Step8: Sample some other posterior samples from recent run
In nested sampling, we can apply importance sampling to the inactivated points to generate posterior samples. In this case, the weight of each inactive point is given by $w_i \mathcal{L}_i$, where $\mathcal{L}_i$ is its likelihood. Since we use importance sampling, we can always generate an alternative set of posterior samples by re-applying this method. | Python Code:
import pints
import pints.toy as toy
import numpy as np
import matplotlib.pyplot as plt
# Load a forward model
model = toy.LogisticModel()
# Create some toy data
r = 0.015
k = 500
real_parameters = [r, k]
times = np.linspace(0, 1000, 100)
signal_values = model.simulate(real_parameters, times)
# Add independent Gaussian noise
sigma = 10
observed_values = signal_values + pints.noise.independent(sigma, signal_values.shape)
# Plot
plt.plot(times,signal_values,label = 'signal')
plt.plot(times,observed_values,label = 'observed')
plt.xlabel('Time')
plt.ylabel('Values')
plt.legend()
plt.show()
Explanation: Nested rejection sampling
This example demonstrates how to use nested rejection sampling [1] to sample from the posterior distribution for a logistic model fitted to model-simulated data.
Nested sampling is the craziest way to calculate an integral that you'll ever come across, which has found widespread application in physics. The idea is based upon repeatedly partitioning the prior density to a given area of parameter space based on likelihood thresholds. These repeated partitions form sort of Matryoshka dolls of spaces, where the later surfaces are "nested" within the earlier ones. The space between the Matryoshka volumes constitutes "shells", whose volume can itself be approximated. By summing the volumes of these shells, the marginal likelihood can be calculated. It's bonkers, but it works. It works especially well for multimodal distributions, where traditional methods of calculating the marginal likelihood fail. As a very useful bi-product of nested sampling, posterior samples can be produced by importance sampling.
[1] "Nested Sampling for General Bayesian Computation", John Skilling, Bayesian Analysis (2006) https://projecteuclid.org/download/pdf_1/euclid.ba/1340370944.
First create fake data.
End of explanation
# Create an object with links to the model and time series
problem = pints.SingleOutputProblem(model, times, observed_values)
# Create a log-likelihood function (adds an extra parameter!)
log_likelihood = pints.GaussianLogLikelihood(problem)
# Create a uniform prior over both the parameters and the new noise variable
log_prior = pints.UniformLogPrior(
[0.01, 400, sigma * 0.5],
[0.02, 600, sigma * 1.5])
# Create a nested ellipsoidal rejectection sampler
sampler = pints.NestedController(log_likelihood, log_prior, method=pints.NestedRejectionSampler)
# Set number of iterations
sampler.set_iterations(3000)
# Set the number of posterior samples to generate
sampler.set_n_posterior_samples(300)
Explanation: Create the nested sampler that will be used to sample from the posterior.
End of explanation
samples = sampler.run()
print('Done!')
Explanation: Run the sampler!
End of explanation
# Plot output
import pints.plot
pints.plot.histogram([samples], ref_parameters=[r, k, sigma])
plt.show()
vTheta = samples[0]
pints.plot.pairwise(samples, kde=True)
plt.show()
Explanation: Plot posterior samples versus true parameter values (dashed lines)
End of explanation
pints.plot.series(samples[:100], problem)
plt.show()
Explanation: Plot posterior predictive simulations versus the observed data
End of explanation
print('marginal log-likelihood = ' + str(sampler.marginal_log_likelihood())
+ ' ± ' + str(sampler.marginal_log_likelihood_standard_deviation()))
Explanation: Marginal likelihood estimate
Nested sampling calculates the denominator of Bayes' rule through applying the trapezium rule to the integral,
$$Z = \int_{0}^{1} \mathcal{L}(X) dX,$$
where $X$ is the prior probability mass.
End of explanation
v_log_likelihood = sampler.log_likelihood_vector()
v_log_likelihood = v_log_likelihood[:-sampler._sampler.n_active_points()]
X = sampler.prior_space()
X = X[:-1]
plt.plot(X, v_log_likelihood)
plt.xlabel('prior volume enclosed by X(L) > L')
plt.ylabel('log likelihood')
plt.show()
Explanation: With PINTS we can access the segments of the discretised integral, meaning we can plot the function being integrated.
End of explanation
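# Rough numerical cross-check (a sketch): reassemble the evidence from the
# plotted segments with the trapezium rule. X decreases along the run, so the
# np.trapz result is negated, and the log-likelihoods are shifted before
# exponentiating to avoid overflow. The remaining active points are ignored
# here, so this slightly underestimates the value reported by the sampler.
shift = np.max(v_log_likelihood)
Z_shifted = -np.trapz(np.exp(v_log_likelihood - shift), X)
print(np.log(Z_shifted) + shift)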
m_active = sampler.active_points()
m_inactive = sampler.inactive_points()
f, axarr = plt.subplots(1,3,figsize=(15,6))
axarr[0].scatter(m_inactive[:,0],m_inactive[:,1])
axarr[0].scatter(m_active[:,0],m_active[:,1],alpha=0.1)
axarr[0].set_xlim([0.008,0.022])
axarr[0].set_xlabel('r')
axarr[0].set_ylabel('k')
axarr[1].scatter(m_inactive[:,0],m_inactive[:,2])
axarr[1].scatter(m_active[:,0],m_active[:,2],alpha=0.1)
axarr[1].set_xlim([0.008,0.022])
axarr[1].set_xlabel('r')
axarr[1].set_ylabel('sigma')
axarr[2].scatter(m_inactive[:,1],m_inactive[:,2])
axarr[2].scatter(m_active[:,1],m_active[:,2],alpha=0.1)
axarr[2].set_xlabel('k')
axarr[2].set_ylabel('sigma')
plt.show()
Explanation: Examine active and inactive points at end of sampling run
At each step of the nested sampling algorithm, the point with the lowest likelihood is discarded (and inactivated) and a new active point is drawn from the prior, with the restriction that its likelihood exceeds the discarded one. The likelihood of the inactivated point essentially defines the height of a segment of the discretised integral for $Z$. Its width is approximately given by $w_i = X_{i-1}-X_{i+1}$, where $X_i = \text{exp}(-i / N)$, $N$ is the number of active particles and $i$ is the iteration.
PINTS keeps track of active and inactive points at the end of the nested sampling run. The active points (orange) are concentrated in a region of high likelihood, and their likelihoods always exceed those of the discarded inactive points (blue).
End of explanation
samples_new = sampler.sample_from_posterior(1000)
pints.plot.pairwise(samples_new, kde=True)
plt.show()
Explanation: Sample some other posterior samples from recent run
In nested sampling, we can apply importance sampling to the inactivated points to generate posterior samples. In this case, the weight of each inactive point is given by $w_i \mathcal{L}_i$, where $\mathcal{L}_i$ is its likelihood. Since we use importance sampling, we can always generate an alternative set of posterior samples by re-applying this method.
End of explanation |
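# Manual importance-resampling sketch, mirroring what sample_from_posterior
# does internally. This assumes the rows of m_inactive are stored in the same
# order as the entries of v_log_likelihood and X computed earlier; the min()
# guard keeps the sketch runnable even if the bookkeeping lengths differ.
widths = -np.diff(np.concatenate(([1.0], X)))            # shrinking prior-volume slices
log_w = np.log(np.abs(widths) + 1e-300) + v_log_likelihood
n_dead = min(len(log_w), len(m_inactive))
w = np.exp(log_w[:n_dead] - np.max(log_w[:n_dead]))
w /= w.sum()
idx = np.random.choice(n_dead, size=1000, p=w)
print(m_inactive[:n_dead][idx].mean(axis=0))             # posterior means from the resample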
15,791 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Nash equilibrium of a zero-sum game
Computing it with quantecon.game_theory and scipy.optimize.linprog
Step1: Let's compute it for the rock-paper-scissors game as an example.
Since it will be convenient later, define player 0's payoff matrix as a NumPy array
Step2: Find the Nash equilibrium with quantecon.game_theory
Step3: Note that player 1's payoff matrix is the transpose (.T) of -U.
Step4: Find the Nash equilibrium by solving a linear program with scipy.optimize.linprog
The primal problem
Step5: Define each input
Step6: Pass the inputs to scipy.optimize.linprog and solve
Step7: Results
Step8: Player 1's equilibrium strategy
Step9: The value of the game
Step10: Since scipy.optimize.linprog does not seem to return the dual solution, formulate and solve the dual problem as well
Step11: Player 0's equilibrium strategy
Step12: The value of the game
Step13: There is a slight numerical discrepancy
Step14: Wrap everything up as functions
Step15: Generate a random payoff matrix and compute the equilibrium | Python Code:
import numpy as np
from scipy.optimize import linprog
import quantecon.game_theory as gt
Explanation: Nash equilibrium of a zero-sum game
Computing it with quantecon.game_theory and scipy.optimize.linprog
End of explanation
U = np.array(
[[0, -1, 1],
[1, 0, -1],
[-1, 1, 0]]
)
Explanation: Let's compute the equilibrium for the rock-paper-scissors game as an example.
Since it will be convenient later, define player 0's payoff matrix as a NumPy array:
End of explanation
p0 = gt.Player(U)
p1 = gt.Player(-U.T)
Explanation: Finding the Nash equilibrium with quantecon.game_theory
End of explanation
g = gt.NormalFormGame((p0, p1))
print(g)
gt.lemke_howson(g)
gt.support_enumeration(g)
Explanation: Note that player 1's payoff matrix is the transpose (.T) of -U.
End of explanation
m, n = U.shape
Explanation: Finding the Nash equilibrium by solving a linear program with scipy.optimize.linprog
The primal problem:
$$
\min u
$$
subject to
$$
U y - \mathbf{1} u \leq \mathbf{0},\quad
\mathbf{1}' y = 1,\quad
y \geq \mathbf{0}
$$
To cast this into the form accepted by
scipy.optimize.linprog,
namely
$$
\min c' x
$$
subject to
$$
A_{\mathit{ub}} x \leq b_{\mathit{ub}},\quad
A_{\mathit{eq}} x = b_{\mathit{eq}},\quad
l \leq x \leq u
$$
we make the identifications
$$
x =
\begin{pmatrix}
y \ u
\end{pmatrix},\ %
c =
\begin{pmatrix}
\mathbf{0} \ 1
\end{pmatrix},\ %
A_{\mathit{ub}} =
\begin{pmatrix}
U & -\mathbf{1}
\end{pmatrix},\ %
b_{\mathit{ub}} = \mathbf{0},\ %
A_{\mathit{eq}} =
\begin{pmatrix}
\mathbf{1}' & 0
\end{pmatrix},\ %
b_{\mathit{eq}} = \begin{pmatrix}1\end{pmatrix}
$$
$$
l =
\begin{pmatrix}
0 & \cdots & 0 & -\infty
\end{pmatrix}',\ %
u =
\begin{pmatrix}
\infty & \cdots & \infty
\end{pmatrix}'
$$
Let $m$ and $n$ denote the numbers of strategies of the two players:
End of explanation
c = np.zeros(n+1)
c[-1] = 1
c
A_ub = np.hstack((U, np.full((m, 1), -1)))
A_ub
b_ub = np.zeros(m)
b_ub
A_eq = np.ones((1, n+1))
A_eq[0, -1] = 0
A_eq
b_eq = np.ones(1)
b_eq
bounds = [(0, None)] * n + [(None, None)]
bounds
Explanation: Define each input:
End of explanation
res_p = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds)
Explanation: Pass the inputs to scipy.optimize.linprog and solve:
End of explanation
res_p
Explanation: Results:
End of explanation
res_p.x[:-1]
Explanation: Player 1's equilibrium strategy:
End of explanation
res_p.x[-1]
Explanation: The value of the game:
End of explanation
c = np.zeros(m+1)
c[-1] = -1
A_ub = np.hstack((-U.T, np.full((n, 1), 1)))
b_ub = np.zeros(n)
A_eq = np.ones((1, m+1))
A_eq[0, -1] = 0
b_eq = np.ones(1)
bounds = [(0, None)] * m + [(None, None)]
res_d = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds)
res_d
Explanation: Since scipy.optimize.linprog does not appear to return the dual solution, we formulate and solve the dual problem as well:
$$
\min -v
$$
subject to
$$
-U' x + \mathbf{1} v \leq \mathbf{0},\quad
\mathbf{1}' x = 1,\quad
x \geq \mathbf{0}
$$
End of explanation
res_d.x[:-1]
Explanation: Player 0's equilibrium strategy:
End of explanation
res_d.x[-1]
Explanation: The value of the game:
End of explanation
res_p.x[-1] - res_d.x[-1]
Explanation: There is a slight numerical discrepancy between the two:
End of explanation
def solve_zerosum_lemke_howson(U):
g = gt.NormalFormGame((gt.Player(U), gt.Player(-U.T)))
NE = gt.lemke_howson(g)
return NE
def solve_zerosum_linprog(U, method='revised simplex'):
U = np.asarray(U)
m, n = U.shape
# Primal problem
c = np.zeros(n+1)
c[-1] = 1
A_ub = np.hstack((U, np.full((m, 1), -1)))
b_ub = np.zeros(m)
A_eq = np.ones((1, n+1))
A_eq[0, -1] = 0
b_eq = np.ones(1)
bounds = [(0, None)] * n + [(None, None)]
res_p = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds, method=method)
# Dual problem
c = np.zeros(m+1)
c[-1] = -1
A_ub = np.hstack((-U.T, np.full((n, 1), 1)))
b_ub = np.zeros(n)
A_eq = np.ones((1, m+1))
A_eq[0, -1] = 0
b_eq = np.ones(1)
bounds = [(0, None)] * m + [(None, None)]
res_d = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds, method=method)
NE = (res_d.x[:-1], res_p.x[:-1])
return NE
solve_zerosum_lemke_howson(U)
solve_zerosum_linprog(U)
Explanation: Wrapping everything up as functions
End of explanation
m, n = 4, 3
U = np.random.randn(m, n)
U
solve_zerosum_lemke_howson(U)
solve_zerosum_linprog(U)
Explanation: Generating a random payoff matrix and computing the equilibrium
End of explanation |
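# Quick sanity check (a sketch): a zero-sum game has a unique equilibrium
# value, so both solvers should report (numerically) the same x' U y.
x_lh, y_lh = solve_zerosum_lemke_howson(U)
x_lp, y_lp = solve_zerosum_linprog(U)
print(x_lh.dot(U).dot(y_lh))
print(x_lp.dot(U).dot(y_lp))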
15,792 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Encoder-Decoder Analysis
Model Architecture
Step1: Perplexity on Each Dataset
Step2: Loss vs. Epoch
Step3: Perplexity vs. Epoch
Step4: Generations
Step5: BLEU Analysis
Step6: N-pairs BLEU Analysis
This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We expect very low scores for the ground truth, while high scores can expose hyper-common generations
Step7: Alignment Analysis
This analysis computes the average Smith-Waterman alignment score for generations, with the same intuition as N-pairs BLEU: we expect low scores for the ground truth, and hyper-common generations will raise the scores | Python Code:
report_file = '/Users/bking/IdeaProjects/LanguageModelRNN/reports/encdec_noing_250_512_025dr.json'
log_file = '/Users/bking/IdeaProjects/LanguageModelRNN/logs/encdec_noing_250_512_025dr_logs.json'
import json
import matplotlib.pyplot as plt
with open(report_file) as f:
report = json.loads(f.read())
with open(log_file) as f:
logs = json.loads(f.read())
print 'Encoder: \n\n', report['architecture']['encoder']
print 'Decoder: \n\n', report['architecture']['decoder']
Explanation: Encoder-Decoder Analysis
Model Architecture
End of explanation
print('Train Perplexity: ', report['train_perplexity'])
print('Valid Perplexity: ', report['valid_perplexity'])
print('Test Perplexity: ', report['test_perplexity'])
Explanation: Perplexity on Each Dataset
End of explanation
%matplotlib inline
for k in logs.keys():
plt.plot(logs[k][0], logs[k][1], label=str(k) + ' (train)')
plt.plot(logs[k][0], logs[k][2], label=str(k) + ' (valid)')
plt.title('Loss v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
Explanation: Loss vs. Epoch
End of explanation
%matplotlib inline
for k in logs.keys():
plt.plot(logs[k][0], logs[k][3], label=str(k) + ' (train)')
plt.plot(logs[k][0], logs[k][4], label=str(k) + ' (valid)')
plt.title('Perplexity v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Perplexity')
plt.legend()
plt.show()
Explanation: Perplexity vs. Epoch
End of explanation
def print_sample(sample):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
print('Input: '+ enc_input + '\n')
print('Gend: ' + sample['generated'] + '\n')
print('True: ' + gold + '\n')
print('\n')
for sample in report['train_samples']:
print_sample(sample)
for sample in report['valid_samples']:
print_sample(sample)
for sample in report['test_samples']:
print_sample(sample)
Explanation: Generations
End of explanation
print 'Overall Score: ', report['bleu']['score'], '\n'
print '1-gram Score: ', report['bleu']['components']['1']
print '2-gram Score: ', report['bleu']['components']['2']
print '3-gram Score: ', report['bleu']['components']['3']
print '4-gram Score: ', report['bleu']['components']['4']
Explanation: BLEU Analysis
End of explanation
npairs_generated = report['n_pairs_bleu_generated']
npairs_gold = report['n_pairs_bleu_gold']
print 'Overall Score (Generated): ', npairs_generated['score'], '\n'
print '1-gram Score: ', npairs_generated['components']['1']
print '2-gram Score: ', npairs_generated['components']['2']
print '3-gram Score: ', npairs_generated['components']['3']
print '4-gram Score: ', npairs_generated['components']['4']
print '\n'
print 'Overall Score: (Gold)', npairs_gold['score'], '\n'
print '1-gram Score: ', npairs_gold['components']['1']
print '2-gram Score: ', npairs_gold['components']['2']
print '3-gram Score: ', npairs_gold['components']['3']
print '4-gram Score: ', npairs_gold['components']['4']
Explanation: N-pairs BLEU Analysis
This analysis randomly samples 1000 pairs of generations/ground truths and treats them as translations, giving their BLEU score. We expect very low scores for the ground truth, while high scores can expose hyper-common generations
End of explanation
print 'Average Generated Score: ', report['average_alignment_generated']
print 'Average Gold Score: ', report['average_alignment_gold']
Explanation: Alignment Analysis
This analysis computes the average Smith-Waterman alignment score for the generations, with the same intuition as N-pairs BLEU: we expect low scores for the ground truth, while hyper-common generations raise the scores.
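A minimal token-level Smith-Waterman scorer, shown only as a sketch (the match/mismatch/gap values are assumptions, not the report's settings):
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-1):
    # Local alignment score between two token sequences, score-only, O(len(a)*len(b)) time.
    prev = [0] * (len(b) + 1)
    best = 0
    for i in range(1, len(a) + 1):
        curr = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            curr[j] = max(0, diag, prev[j] + gap, curr[j - 1] + gap)
            best = max(best, curr[j])
        prev = curr
    return best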
End of explanation |
15,793 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Path Management
Goal
Normalize paths on different platforms
Create, copy and remove folders
Handle errors
Modules
Step1: See also
Step2: Multiplatform Path Management
The os.path module seems verbose but it's the best way to manage paths. It's
Step3: Manage trees
Python modules can
Step4: Encoding
Goals
A string is more than a sequence of bytes
A string is a couple (bytes, encoding)
Use unicode_literals in python2
Manage differently encoded filenames
A string is not a sequence of bytes | Python Code:
import os
import os.path
import shutil
import errno
import glob
import sys
Explanation: Path Management
Goal
Normalize paths on different platforms
Create, copy and remove folders
Handle errors
Modules
End of explanation
# Be python3 ready
from __future__ import unicode_literals, print_function
Explanation: See also:
pathlib on Python 3.4+
End of explanation
import os
import sys
basedir, hosts = "/", "etc/hosts"
# sys.platform shows the current operating system
if sys.platform.startswith('win'):
basedir = 'c:/windows/system32/drivers'
print(basedir)
# Join removes redundant "/"
hosts = os.path.join(basedir, hosts)
print(hosts)
# normpath fixes "/" orientation
# and redundant ".."
hosts = os.path.normpath(hosts)
print("Normalized path is", hosts)
# realpath resolves symlinks (on unix)
! mkdir -p /tmp/course
! ln -sf /etc/hosts /tmp/course/hosts
realfile = os.path.realpath("/tmp/course/hosts")
print(realfile)
# Exercise: given the following path
base, path = "/usr", "bin/foo"
# Which is the expected output of result?
result = os.path.join(base, path)
print(result)
Explanation: Multiplatform Path Management
The os.path module seems verbose but it's the best way to manage paths. It's:
safe
multiplatform
Here we check the operating system and prepend the right path
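For comparison, a small sketch of the same steps with pathlib (Python 3.4+), reusing the basedir logic above:
from pathlib import Path
base = Path('c:/windows/system32/drivers') if sys.platform.startswith('win') else Path('/')
hosts_path = (base / 'etc/hosts').resolve()   # joins, normalizes and resolves symlinks in one go
print(hosts_path)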
End of explanation
# os and shutil support basic file operations
# like recursive copy and tree creation.
!rm -rf /tmp/course/foo
from os import makedirs
makedirs("/tmp/course/foo/bar")
# while os.path can be used to test file existence
from os.path import isdir
assert isdir("/tmp/course/foo/bar")
# Check the directory content with either one of
!tree /tmp/course || find /tmp/course
# We can use exception handlers to check
# what happened.
try:
# python2 does not allow to ignore
# already existing directories
# and raises an OSError
makedirs("/tmp/course/foo/bar")
except OSError as e:
# Just use the errno module to
# check the error value
print(e)
import errno
assert e.errno == errno.EEXIST
from shutil import copytree, rmtree
# Now copy recursively two directories
# and check the result
copytree("/tmp/course/foo", "/tmp/course/foo2")
assert isdir("/tmp/course/foo2/bar")
#This command should work on both unix and windows
!ls /tmp/course/foo2/
# Now remove it and check the outcome
rmtree("/tmp/course/foo")
assert not isdir("/tmp/course/foo/bar")
#This command should work on both unix and windows
!ls /tmp/course/
# Cleanup created files
rmtree("/tmp/course")
Explanation: Manage trees
Python modules can:
- manage directory trees
- and handle basic errors (see the sketch below)
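On Python 3.2+ the try/except around makedirs can also be avoided entirely; a small sketch (not part of the original notebook):
from os import makedirs
# exist_ok=True silently accepts an already existing directory instead of raising OSError
makedirs("/tmp/course/foo/bar", exist_ok=True)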
End of explanation
import os
import os.path
import glob
from os.path import isdir
basedir = "/tmp/course"
if not isdir(basedir):
os.makedirs(basedir)
# Py3 doesn't need the 'u' prefix before the string.
the_string = u"S\u00fcd" # Sued
print(the_string)
print(type(the_string))
# the_string Sued can be encoded in different...
in_utf8 = the_string.encode('utf-8')
in_win = the_string.encode('cp1252')
# ...byte-sequences
assert type(in_utf8) == bytes
print(type(in_utf8))
# Now you can see the differences between
print(repr(in_utf8))
# and
print(repr(in_win))
# Decoding bytes using the wrong map...
# ...gives Sad results
print(in_utf8.decode('cp1252'))
print(in_utf8.decode('utf-8'))
Explanation: Encoding
Goals
A string is more than a sequence of bytes
A string is a couple (bytes, encoding)
Use unicode_literals in python2
Manage differently encoded filenames
A string is not a sequence of bytes
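As a tiny sketch of the rule of thumb: decoding must use the same codec that produced the bytes, otherwise the text is garbled.
s = u"S\u00fcd"
assert s == s.encode('utf-8').decode('utf-8')     # round-trip with matching codecs
assert s == s.encode('cp1252').decode('cp1252')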
End of explanation |
15,794 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Lab
Step1: Create Cloud Storage bucket for storing Vertex Pipeline artifacts
Step2: Create BigQuery dataset
Step3: Exploratory Data Analysis in BigQuery
Step4: Create BigQuery dataset for ML classification task
Step5: Verify data split proportions
Step6: Create
Import libraries
Step8: Create and run an AutoML Tabular classification pipeline using Kubeflow Pipelines SDK
Create a custom KFP evaluation component
Step9: Define the pipeline
Step10: Compile and run the pipeline
Step11: Run the pipeline
Step12: Query your deployed model to retrieve online predictions and explanations | Python Code:
# Add installed dependencies to Python PATH variable.
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
REGION = 'us-central1'
BQ_DATASET_NAME = 'chicago_taxi'
BQ_TABLE_NAME = 'chicago_taxi_tips_raw'
BQ_LOCATION = 'US'
!echo {PROJECT_ID}
!echo {REGION}
Explanation: Lab: Chicago taxifare tip prediction with AutoML Tables on Vertex Pipelines using Kubeflow Pipelines SDK
Learning objectives
Perform exploratory data analysis (EDA) on tabular data using BigQuery.
Create a BigQuery dataset for a ML classification task.
Define an AutoML tables pipeline using the Kubeflow Pipelines (KFP) SDK for model training, evaluation, and conditional deployment.
Create a custom model evaluation component using the KFP SDK.
Incorporate pre-built KFP components into your pipeline from google_cloud_components.
Query your model for online predictions and explanations.
Setup
Define constants
End of explanation
BUCKET_NAME = f"gs://{PROJECT_ID}-bucket"
print(BUCKET_NAME)
!gsutil ls -al $BUCKET_NAME
USER = "dougkelly" # <---CHANGE THIS
PIPELINE_ROOT = "{}/pipeline_root/{}".format(BUCKET_NAME, USER)
PIPELINE_ROOT
Explanation: Create Cloud Storage bucket for storing Vertex Pipeline artifacts
End of explanation
!bq --location=US mk -d \
$PROJECT_ID:$BQ_DATASET_NAME
Explanation: Create BigQuery dataset
End of explanation
%%bigquery data
SELECT
CAST(EXTRACT(DAYOFWEEK FROM trip_start_timestamp) AS string) AS trip_dayofweek,
FORMAT_DATE('%A',cast(trip_start_timestamp as date)) AS trip_dayname,
COUNT(*) as trip_count,
FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
WHERE
EXTRACT(YEAR FROM trip_start_timestamp) = 2015
GROUP BY
trip_dayofweek,
trip_dayname
ORDER BY
trip_dayofweek
;
data.plot(kind='bar', x='trip_dayname', y='trip_count');
Explanation: Exploratory Data Analysis in BigQuery
End of explanation
SAMPLE_SIZE = 100000
YEAR = 2020
sql_script = '''
CREATE OR REPLACE TABLE `@PROJECT_ID.@DATASET.@TABLE`
AS (
WITH
taxitrips AS (
SELECT
trip_start_timestamp,
trip_seconds,
trip_miles,
payment_type,
pickup_longitude,
pickup_latitude,
dropoff_longitude,
dropoff_latitude,
tips,
fare
FROM
`bigquery-public-data.chicago_taxi_trips.taxi_trips`
WHERE 1=1
AND pickup_longitude IS NOT NULL
AND pickup_latitude IS NOT NULL
AND dropoff_longitude IS NOT NULL
AND dropoff_latitude IS NOT NULL
AND trip_miles > 0
AND trip_seconds > 0
AND fare > 0
AND EXTRACT(YEAR FROM trip_start_timestamp) = @YEAR
)
SELECT
trip_start_timestamp,
EXTRACT(MONTH from trip_start_timestamp) as trip_month,
EXTRACT(DAY from trip_start_timestamp) as trip_day,
EXTRACT(DAYOFWEEK from trip_start_timestamp) as trip_day_of_week,
EXTRACT(HOUR from trip_start_timestamp) as trip_hour,
trip_seconds,
trip_miles,
payment_type,
ST_AsText(
ST_SnapToGrid(ST_GeogPoint(pickup_longitude, pickup_latitude), 0.1)
) AS pickup_grid,
ST_AsText(
ST_SnapToGrid(ST_GeogPoint(dropoff_longitude, dropoff_latitude), 0.1)
) AS dropoff_grid,
ST_Distance(
ST_GeogPoint(pickup_longitude, pickup_latitude),
ST_GeogPoint(dropoff_longitude, dropoff_latitude)
) AS euclidean,
CONCAT(
ST_AsText(ST_SnapToGrid(ST_GeogPoint(pickup_longitude,
pickup_latitude), 0.1)),
ST_AsText(ST_SnapToGrid(ST_GeogPoint(dropoff_longitude,
dropoff_latitude), 0.1))
) AS loc_cross,
IF((tips/fare >= 0.2), 1, 0) AS tip_bin,
IF(ABS(MOD(FARM_FINGERPRINT(STRING(trip_start_timestamp)), 10)) < 9, 'UNASSIGNED', 'TEST') AS data_split
FROM
taxitrips
LIMIT @LIMIT
)
'''
sql_script = sql_script.replace(
'@PROJECT_ID', PROJECT_ID).replace(
'@DATASET', BQ_DATASET_NAME).replace(
'@TABLE', BQ_TABLE_NAME).replace(
'@YEAR', str(YEAR)).replace(
'@LIMIT', str(SAMPLE_SIZE))
# print(sql_script)
from google.cloud import bigquery
bq_client = bigquery.Client(project=PROJECT_ID, location=BQ_LOCATION)
job = bq_client.query(sql_script)
_ = job.result()
Explanation: Create BigQuery dataset for ML classification task
End of explanation
%%bigquery
SELECT data_split, COUNT(*)
FROM `dougkelly-vertex-demos.chicago_taxi.chicago_taxi_tips_raw`
GROUP BY data_split
Explanation: Verify data split proportions
End of explanation
import json
import logging
from typing import NamedTuple
import kfp
# from google.cloud import aiplatform
from google_cloud_pipeline_components import aiplatform as gcc_aip
from kfp.v2 import dsl
from kfp.v2.dsl import (ClassificationMetrics, Input, Metrics, Model, Output,
component)
from kfp.v2.google.client import AIPlatformClient
Explanation: Create
Import libraries
End of explanation
@component(
base_image="gcr.io/deeplearning-platform-release/tf2-cpu.2-3:latest",
output_component_file="components/tables_eval_component.yaml", # Optional: you can use this to load the component later
packages_to_install=["google-cloud-aiplatform==1.0.0"],
)
def classif_model_eval_metrics(
project: str,
location: str,
api_endpoint: str,
thresholds_dict_str: str,
model: Input[Model],
metrics: Output[Metrics],
metricsc: Output[ClassificationMetrics],
) -> NamedTuple("Outputs", [("dep_decision", str)]): # Return parameter.
"""This function renders evaluation metrics for an AutoML Tabular classification model.
It retrieves the classification model evaluation generated by the AutoML Tabular training
process, does some parsing, and uses that info to render the ROC curve and confusion matrix
for the model. It also uses given metrics threshold information and compares that to the
evaluation results to determine whether the model is sufficiently accurate to deploy.
"""
import json
import logging
from google.cloud import aiplatform
# Fetch model eval info
def get_eval_info(client, model_name):
from google.protobuf.json_format import MessageToDict
response = client.list_model_evaluations(parent=model_name)
metrics_list = []
metrics_string_list = []
for evaluation in response:
print("model_evaluation")
print(" name:", evaluation.name)
print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
metrics = MessageToDict(evaluation._pb.metrics)
for metric in metrics.keys():
logging.info("metric: %s, value: %s", metric, metrics[metric])
metrics_str = json.dumps(metrics)
metrics_list.append(metrics)
metrics_string_list.append(metrics_str)
return (
evaluation.name,
metrics_list,
metrics_string_list,
)
# Use the given metrics threshold(s) to determine whether the model is
# accurate enough to deploy.
def classification_thresholds_check(metrics_dict, thresholds_dict):
for k, v in thresholds_dict.items():
logging.info("k {}, v {}".format(k, v))
if k in ["auRoc", "auPrc"]: # higher is better
if metrics_dict[k] < v: # if under threshold, don't deploy
logging.info(
"{} < {}; returning False".format(metrics_dict[k], v)
)
return False
logging.info("threshold checks passed.")
return True
def log_metrics(metrics_list, metricsc):
test_confusion_matrix = metrics_list[0]["confusionMatrix"]
logging.info("rows: %s", test_confusion_matrix["rows"])
# log the ROC curve
fpr = []
tpr = []
thresholds = []
for item in metrics_list[0]["confidenceMetrics"]:
fpr.append(item.get("falsePositiveRate", 0.0))
tpr.append(item.get("recall", 0.0))
thresholds.append(item.get("confidenceThreshold", 0.0))
print(f"fpr: {fpr}")
print(f"tpr: {tpr}")
print(f"thresholds: {thresholds}")
metricsc.log_roc_curve(fpr, tpr, thresholds)
# log the confusion matrix
annotations = []
for item in test_confusion_matrix["annotationSpecs"]:
annotations.append(item["displayName"])
logging.info("confusion matrix annotations: %s", annotations)
metricsc.log_confusion_matrix(
annotations,
test_confusion_matrix["rows"],
)
# log textual metrics info as well
for metric in metrics_list[0].keys():
if metric != "confidenceMetrics":
val_string = json.dumps(metrics_list[0][metric])
metrics.log_metric(metric, val_string)
# metrics.metadata["model_type"] = "AutoML Tabular classification"
logging.getLogger().setLevel(logging.INFO)
aiplatform.init(project=project)
# extract the model resource name from the input Model Artifact
model_resource_path = model.uri.replace("aiplatform://v1/", "")
logging.info("model path: %s", model_resource_path)
client_options = {"api_endpoint": api_endpoint}
# Initialize client that will be used to create and send requests.
client = aiplatform.gapic.ModelServiceClient(client_options=client_options)
eval_name, metrics_list, metrics_str_list = get_eval_info(
client, model_resource_path
)
logging.info("got evaluation name: %s", eval_name)
logging.info("got metrics list: %s", metrics_list)
log_metrics(metrics_list, metricsc)
thresholds_dict = json.loads(thresholds_dict_str)
deploy = classification_thresholds_check(metrics_list[0], thresholds_dict)
if deploy:
dep_decision = "true"
else:
dep_decision = "false"
logging.info("deployment decision is %s", dep_decision)
return (dep_decision,)
import time
DISPLAY_NAME = "automl-tab-chicago-taxi-tips-{}".format(str(int(time.time())))
print(DISPLAY_NAME)
Explanation: Create and run an AutoML Tabular classification pipeline using Kubeflow Pipelines SDK
Create a custom KFP evaluation component
End of explanation
@kfp.dsl.pipeline(name="automl-tab-chicago-taxi-tips-train", pipeline_root=PIPELINE_ROOT)
def pipeline(
bq_source: str = "bq://dougkelly-vertex-demos:chicago_taxi.chicago_taxi_tips_raw",
display_name: str = DISPLAY_NAME,
project: str = PROJECT_ID,
gcp_region: str = REGION,
api_endpoint: str = "us-central1-aiplatform.googleapis.com",
thresholds_dict_str: str = '{"auRoc": 0.90}',
):
dataset_create_op = gcc_aip.TabularDatasetCreateOp(
project=project, display_name=display_name, bq_source=bq_source
)
training_op = gcc_aip.AutoMLTabularTrainingJobRunOp(
project=project,
display_name=display_name,
optimization_prediction_type="classification",
optimization_objective="maximize-au-roc", # binary classification
budget_milli_node_hours=750,
training_fraction_split=0.9,
validation_fraction_split=0.1,
column_transformations=[
{"numeric": {"column_name": "trip_seconds"}},
{"numeric": {"column_name": "trip_miles"}},
{"categorical": {"column_name": "trip_month"}},
{"categorical": {"column_name": "trip_day"}},
{"categorical": {"column_name": "trip_day_of_week"}},
{"categorical": {"column_name": "trip_hour"}},
{"categorical": {"column_name": "payment_type"}},
{"numeric": {"column_name": "euclidean"}},
{"categorical": {"column_name": "tip_bin"}},
],
dataset=dataset_create_op.outputs["dataset"],
target_column="tip_bin",
)
model_eval_task = classif_model_eval_metrics(
project,
gcp_region,
api_endpoint,
thresholds_dict_str,
training_op.outputs["model"],
)
with dsl.Condition(
model_eval_task.outputs["dep_decision"] == "true",
name="deploy_decision",
):
deploy_op = gcc_aip.ModelDeployOp( # noqa: F841
model=training_op.outputs["model"],
project=project,
machine_type="n1-standard-4",
)
Explanation: Define the pipeline
End of explanation
from kfp.v2 import compiler # noqa: F811
compiler.Compiler().compile(
pipeline_func=pipeline, package_path="automl-tab-chicago-taxi-tips-train_pipeline.json"
)
Explanation: Compile and run the pipeline
End of explanation
from kfp.v2.google.client import AIPlatformClient # noqa: F811
api_client = AIPlatformClient(project_id=PROJECT_ID, region=REGION)
response = api_client.create_run_from_job_spec(
"automl-tab-chicago-taxi-tips-train_pipeline.json",
pipeline_root=PIPELINE_ROOT,
parameter_values={"project": PROJECT_ID, "display_name": DISPLAY_NAME},
)
Explanation: Run the pipeline
End of explanation
from google.cloud import aiplatform
import matplotlib.pyplot as plt
import pandas as pd
endpoint = aiplatform.Endpoint(
endpoint_name="2677161280053182464",
project=PROJECT_ID,
location=REGION)
%%bigquery test_df
SELECT
CAST(trip_month AS STRING) AS trip_month,
CAST(trip_day AS STRING) AS trip_day,
CAST(trip_day_of_week AS STRING) AS trip_day_of_week,
CAST(trip_hour AS STRING) AS trip_hour,
CAST(trip_seconds AS STRING) AS trip_seconds,
trip_miles,
payment_type,
euclidean
FROM
`dougkelly-vertex-demos.chicago_taxi.chicago_taxi_tips_raw`
WHERE
data_split = 'TEST'
AND tip_bin = 1
test_instance = test_df.iloc[0]
test_instance_dict = test_instance.to_dict()
test_instance_dict
explained_prediction = endpoint.explain([test_instance_dict])
pd.DataFrame.from_dict(explained_prediction.predictions[0]).plot(kind='bar');
pd.DataFrame.from_dict(explained_prediction.explanations[0].attributions[0].feature_attributions, orient='index').plot(kind='barh');
Explanation: Query your deployed model to retrieve online predictions and explanations
End of explanation |
15,795 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Custom Estimators
In this notebook we'll write a Custom Estimator (using a model function we specify). On the way, we'll use tf.layers to write our model. In the next notebook, we'll use tf.layers to write a Custom Estimator for a Convolutional Neural Network.
Step1: Import the dataset. Here, we'll need to convert the labels to a one-hot encoding, and we'll reshape the MNIST images to (784,).
Step2: When using Estimators, we do not manage the TensorFlow session directly. Instead, we skip straight to defining our hyperparameters.
Step3: To write a Custom Estimator we'll specify our own model function. Here, we'll use tf.layers to replicate the model from the third notebook.
Step4: Input functions, as before. | Python Code:
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import math
import numpy as np
import tensorflow as tf
Explanation: Custom Estimators
In this notebook we'll write an Custom Estimator (using a model function we specifiy). On the way, we'll use tf.layers to write our model. In the next notebook, we'll use tf.layers to write a Custom Estimator for a Convolutional Neural Network.
End of explanation
# We'll use Keras (included with TensorFlow) to import the data
# I figured I'd do all the preprocessing and reshaping here,
# rather than in the model.
(x_train, y_train), (x_test, y_test) = tf.contrib.keras.datasets.mnist.load_data()
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
y_train = y_train.astype('int32')
y_test = y_test.astype('int32')
# Normalize the color values to 0-1
# (as imported, they're 0-255)
x_train /= 255
x_test /= 255
# Flatten 28x28 images to (784,)
x_train = x_train.reshape(x_train.shape[0], 784)
x_test = x_test.reshape(x_test.shape[0], 784)
# Convert to one-hot.
y_train = tf.contrib.keras.utils.to_categorical(y_train, num_classes=10)
y_test = tf.contrib.keras.utils.to_categorical(y_test, num_classes=10)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
Explanation: Import the dataset. Here, we'll need to convert the labels to a one-hot encoding, and we'll reshape the MNIST images to (784,).
End of explanation
# Number of neurons in each hidden layer
HIDDEN1_SIZE = 500
HIDDEN2_SIZE = 250
Explanation: When using Estimators, we do not manage the TensorFlow session directly. Instead, we skip straight to defining our hyperparameters.
End of explanation
def model_fn(features, labels, mode):
# First we'll create 2 fully-connected layers, with ReLU activations.
# Notice we're retrieving the 'x' feature (we'll provide this in the input function
# in a moment).
fc1 = tf.layers.dense(features['x'], HIDDEN1_SIZE, activation=tf.nn.relu, name="fc1")
fc2 = tf.layers.dense(fc1, HIDDEN2_SIZE, activation=tf.nn.relu, name="fc2")
# Add dropout operation; 0.9 probability that a neuron will be kept
dropout = tf.layers.dropout(
inputs=fc2, rate=0.1, training = mode == tf.estimator.ModeKeys.TRAIN, name="dropout")
# Finally, we'll calculate logits. This will be
# the input to our Softmax function. Notice we
# don't apply an activation at this layer.
# If you've commented out the dropout layer,
# switch the input here to 'fc2'.
logits = tf.layers.dense(dropout, units=10, name="logits")
# Generate Predictions
classes = tf.argmax(logits, axis=1)
predictions = {
'classes': classes,
'probabilities': tf.nn.softmax(logits, name='softmax_tensor')
}
if mode == tf.estimator.ModeKeys.PREDICT:
# Return an EstimatorSpec for prediction
return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
# Compute the loss, per usual.
loss = tf.losses.softmax_cross_entropy(
onehot_labels=labels, logits=logits)
if mode == tf.estimator.ModeKeys.TRAIN:
# Configure the Training Op
train_op = tf.contrib.layers.optimize_loss(
loss=loss,
global_step=tf.train.get_global_step(),
learning_rate=1e-3,
optimizer='Adam')
# Return an EstimatorSpec for training
return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions,
loss=loss, train_op=train_op)
assert mode == tf.estimator.ModeKeys.EVAL
# Configure the accuracy metric for evaluation
metrics = {'accuracy': tf.metrics.accuracy(classes, tf.argmax(labels, axis=1))}
return tf.estimator.EstimatorSpec(mode=mode,
predictions=predictions,
loss=loss,
eval_metric_ops=metrics)
Explanation: To write a Custom Estimator we'll specify our own model function. Here, we'll use tf.layers to replicate the model from the third notebook.
End of explanation
train_input = tf.estimator.inputs.numpy_input_fn(
{'x': x_train},
y_train,
num_epochs=None, # repeat forever
shuffle=True #
)
test_input = tf.estimator.inputs.numpy_input_fn(
{'x': x_test},
y_test,
num_epochs=1, # loop through the dataset once
shuffle=False # don't shuffle the test data
)
# At this point, our Estimator will work just like a canned one.
estimator = tf.estimator.Estimator(model_fn=model_fn)
# Train the estimator using our input function.
estimator.train(input_fn=train_input, steps=2000)
# Evaluate the estimator using our input function.
# We should see our accuracy metric below
evaluation = estimator.evaluate(input_fn=test_input)
print(evaluation)
MAX_TO_PRINT = 5
# This returns a generator object
predictions = estimator.predict(input_fn=test_input)
i = 0
for p in predictions:
true_label = np.argmax(y_test[i])
predicted_label = p['classes']
print("Example %d. True: %d, Predicted: %s" % (i, true_label, predicted_label))
i += 1
if i == MAX_TO_PRINT: break
Explanation: Input functions, as before.
End of explanation |
15,796 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Gender Pay Gap Inequality in the U.S. and Potential Insights
A Research Project at NYU's Stern School of Business — May 2016
Written by Jerry "Joa" Allen (joa218@nyu.edu)
Abstract
Although it has been a longstanding issue, the gender pay gap has been an especially touched upon topic in recent times. There's the well-quoted statistic stating women earn 77% as much as their male counterparts in exchange for equal work. However, this statistic is met with contention from various economists. Some claim that women having less pay for equal work is possibly true in certain cases, but it is not by and large the case. This paper is meant to provide insights as it pertains to potential drivers of the gender pay gap.
Accessing and Parsing the Data
I decided to access the 2014 American Time Use Study, which is the most recent year available. The dataset I manipulate is the ATUS Activity Summary File. In brief, this file mostly outlines how respondents spent their time as it pertains to various activities, ranging from sleep to eldercare. Moreover, the file also contains information regarding the sex (ie. unfortunately gender was unavailable) of the respondents, amongst other demographic information. What I am largely interested in is investigating gender equality (or lack thereof) when it comes to labor force status, hours worked, childcare, and eldercare. Moreover, I will also weigh in on the implications these insights have on the gender
pay gap. With that in mind, I plan to produce figures which will concisely compare men and women along the variables mentioned above.
In terms of accessing the data, it is available on http
Step1: Labor Force Status
The notion of women making up to 23 less cents on the dollar than men has been challenged numerous times. Many claim, including Resident Fellow at the Harvard Institute of Politics, Karen Agness, that this statistic in manipulated and misled by popular media and the government. The extent of systemic discrimination on women in the U.S. suggested by this statistic is far from conclusive, as it does not take into account the many factors that are producing this number. Figure 1 illustrates the difference in labor force placement between men and women. It is worth noting that there were 20% more female respondents in this survey, such that the female count is inflated compared to that of males. Even when adjusting for greater number of female respondents, there is about 25% more females not in the labor force than males. Naturally, this kind of discrepancy in labor force status is likely to contribute to the overall gender pay gap we are witnessing in the U.S. Moreover, the number of men and women unemployed and looking are nearly the same. Although it may not debunk, this insight discredits the notion of systemic hiring discrimination considering there are more women not working, but there are not more women looking for a job. If there was systemic hiring discrimination against women, there would presumably be a greater share of women looking for a job than men.
Step2: Differences in Main Stream of Income
Figure 2 clearly illustrates men earning more income than women. There's a sizable share of women earning less than 500/week, while there are very few making more than 1500/week. On the other hand, the men's income is a more evenly distributed, as opposed to being as bottom heavy as women's income. The interquartile range of men is about 1000 compared to about 600 for women. Furthermore, the figure clearly portrays men having a lot more of an income upside, as the upper quartile of women is about 1000, while the upper quartile of men is about 1500 (ie. displayed in the black lines within the axes objects). This difference in income is just as stark, when observing the top earners between men and women, as the top earner for men (about 2900) is about 30% more than his women counterpart. If nothing else, this figure reinforces the fact that men make more money than women, and their income is more widely distributed. The below figures will provide potential drivers for this inequality as it pertains to differences in time use between men and women.
Step3: Differences in Hours Worked
One obvious factor to investigate is the number of hours worked for both men and women. This will surely have an impact on the earnings for each sex. Figure 3 shows that males work considerably more hours than females. A clear indicator of this is the upper quartile for women being 40 hours/week is virtually equal to the lower quartile for men. It does not require statistical analysis to presume the more hours one works, the more income that person tends to earn. This perhaps explains, at least to some degree, the stark difference in incomes between men and women, shown in the Figure 2. However, the question remains what women are spending their time doing more than men if they are not working more hours than men. The implication is that women are enduring certain responsibilities (ie. more so than men) that take up their time, and this in turn has a negative impact on their income.
Step4: The Differences in the Time Spent Providing Child Care
Secondary child care is referring to time spent looking after children, while taking on something else as a primary activity. In sum, it is keeping a watchful eye over children, without providing one's full and undivided attention. Harvard Economics Professor, Claudia Goldin postulated that women providing more family care is a potential reason for the pay gap. Moreover, she touched upon research that viably suggests that women value temporal flexibility more than men, while men value income more than women. Figure 4 displays that women provide secondary child care more than men, as over 25% provide more than 200 minutes/day of such care. The fat tail on blue object depicts that their is a great deal of women providing hundreds of minutes of child care each day. Resultantly, the women who have these responsibilities are presumably earning less income than men and women who do not. | Python Code:
import sys # system module
import pandas as pd # data package
import matplotlib.pyplot as plt # graphics module
import datetime as dt # date and time module
import numpy as np # foundation for Pandas
import seaborn.apionly as sns # matplotlib graphics (no styling)
# these lines make our graphics show up in the notebook
%matplotlib inline
# check versions (overkill, but why not?)
print('Python version:', sys.version)
print('Pandas version: ', pd.__version__)
print('Today: ', dt.date.today())
atus = (pd.read_csv('/Users/JOA/Documents/Academics/NYU/Spring 2016/Data_Bootcamp/atussum_2014/atussum_2014.dat'))
atus['TESEX'] = atus['TESEX'].replace({1: 'Male', 2:'Female'})
atus['TELFS'] = atus['TELFS'].replace({1: "Employed(at work)", 2: "Employed(absent)",
3:'Unemployed(on layoff)', 4: 'Unemployed(looking)',
5: "Not in labor force"})#TELFS refers to labor force status
atus = atus.set_index('TESEX')
atus.index.name = 'Sex'
atus = atus[['TEHRUSLT', 'TELFS', 'TRERNWA', 'TRTEC', 'TRTHH']]
atus = atus.replace(to_replace=[-1], value=[None]) # -1 represents blank answers
atus = atus.replace(to_replace=[-2], value=[None]) # -2 represents a "don't know" answer
atus = atus.replace(to_replace=[-3], value=[None]) # -3 represents a refuse to answer
atus = atus.replace(to_replace=[-4], value=[None]) # -4 represents an "hours vary" answer that is of no use
atus['TRERNWA'] = atus['TRERNWA']/100 #TRERNWA measures weekly income. The original values implied 2 decimal places
atus = atus.rename(columns={'TEHRUSLT':'Hours Worked/Wk','TELFS':'Labor Force Status', 'TRERNWA':'Main Job Income/Wk'
,'TRTEC': 'Elderly Care (mins)','TRTHH':'Secondary Child Care (mins)'})
atus['Sex'] = atus.index
atus.columns = ['Hours Worked/Wk', 'Labor Force Status', 'Main Job Income/Wk',
'Elderly Care (mins)', 'Secondary Child Care (mins)', 'Sex'] #added in Sex as column for sns plot purposes
fig, ax = plt.subplots()
fig.set_size_inches(11.7, 5.5)
ax.set_title('Figure 1. Labor Force Status Count', weight = 'bold', fontsize = 17)
sns.countplot(x= 'Labor Force Status', hue='Sex', data= atus)
plt.xlabel('Labor Force Status',weight='bold',fontsize=13)
plt.ylabel('Count',weight='bold', fontsize=13)
Explanation: Gender Pay Gap Inequality in the U.S. and Potential Insights
A Research Project at NYU's Stern School of Business — May 2016
Written by Jerry "Joa" Allen (joa218@nyu.edu)
Abstract
Although it has been a longstanding issue, the gender pay gap has been an especially touched upon topic in recent times. There's the well-quoted statistic stating women earn 77% as much as their male counterparts in exchange for equal work. However, this statistic is met with contention from various economists. Some claim that women having less pay for equal work is possibly true in certain cases, but it is not by and large the case. This paper is meant to provide insights as it pertains to potential drivers of the gender pay gap.
Accessing and Parsing the Data
I decided to access the 2014 American Time Use Study, which is the most recent year available. The dataset I manipulate is the ATUS Activity Summary File. In brief, this file mostly outlines how respondents spent their time as it pertains to various activities, ranging from sleep to eldercare. Moreover, the file also contains information regarding the sex (ie. unfortunately gender was unavailable) of the respondents, amongst other demographic information. What I am largely interested in is investigating gender equality (or lack thereof) when it comes to labor force status, hours worked, childcare, and eldercare. Moreover, I will also weigh in on the implications these insights have on the gender
pay gap. With that in mind, I plan to produce figures which will concisely compare men and women along the variables mentioned above.
In terms of accessing the data, it is available on http://www.bls.gov/tus/datafiles_2014.htm, and under the ATUS Activity Summary zip. Furthermore, descriptions of the column variables and their units of measurement can be found at http://www.bls.gov/tus/atuscpscodebk14.pdf and http://www.bls.gov/tus/atusintcodebk14.pdf.
End of explanation
fig, ax = plt.subplots()
fig.set_size_inches(11.7, 8.27)
ax.set_title('Figure 2. Income Per Week From Main Job', weight='bold', fontsize = 17)
sns.set_style("whitegrid")
sns.violinplot(x='Sex',y='Main Job Income/Wk', data = atus)
plt.xlabel('Sex',weight='bold',fontsize=13)
plt.ylabel('Main Job Income/Wk ($)',weight='bold', fontsize=13)
Explanation: Labor Force Status
The notion of women making up to 23 less cents on the dollar than men has been challenged numerous times. Many claim, including Resident Fellow at the Harvard Institute of Politics, Karen Agness, that this statistic in manipulated and misled by popular media and the government. The extent of systemic discrimination on women in the U.S. suggested by this statistic is far from conclusive, as it does not take into account the many factors that are producing this number. Figure 1 illustrates the difference in labor force placement between men and women. It is worth noting that there were 20% more female respondents in this survey, such that the female count is inflated compared to that of males. Even when adjusting for greater number of female respondents, there is about 25% more females not in the labor force than males. Naturally, this kind of discrepancy in labor force status is likely to contribute to the overall gender pay gap we are witnessing in the U.S. Moreover, the number of men and women unemployed and looking are nearly the same. Although it may not debunk, this insight discredits the notion of systemic hiring discrimination considering there are more women not working, but there are not more women looking for a job. If there was systemic hiring discrimination against women, there would presumably be a greater share of women looking for a job than men.
End of explanation
fig, ax = plt.subplots()
fig.set_size_inches(11.7, 8.27)
ax.set_title('Figure 3. Hours Worked Per Week', weight='bold',fontsize = 17)
sns.set_style('whitegrid')
sns.boxplot(x='Sex', y='Hours Worked/Wk', data= atus)
plt.xlabel('Sex',weight='bold',fontsize=13)
plt.ylabel('Hours Worked/Wk',weight='bold', fontsize=13)
Explanation: Differences in Main Stream of Income
Figure 2 clearly illustrates men earning more income than women. There's a sizable share of women earning less than 500/week, while there are very few making more than 1500/week. On the other hand, the men's income is a more evenly distributed, as opposed to being as bottom heavy as women's income. The interquartile range of men is about 1000 compared to about 600 for women. Furthermore, the figure clearly portrays men having a lot more of an income upside, as the upper quartile of women is about 1000, while the upper quartile of men is about 1500 (ie. displayed in the black lines within the axes objects). This difference in income is just as stark, when observing the top earners between men and women, as the top earner for men (about 2900) is about 30% more than his women counterpart. If nothing else, this figure reinforces the fact that men make more money than women, and their income is more widely distributed. The below figures will provide potential drivers for this inequality as it pertains to differences in time use between men and women.
End of explanation
fig, ax = plt.subplots()
fig.set_size_inches(11.7, 8.27)
ax.set(xlim=(0, 1400))
ax.set_title('Figure 4. Mins/Day Providing Secondary Child Care (<13y/o)', weight='bold', fontsize = 17)
sns.violinplot(data= atus, x='Secondary Child Care (mins)', y='Sex')
plt.xlabel('Secondary Child Care (Mins/Day)',weight='bold',fontsize=13)
plt.ylabel('Sex',weight='bold', fontsize=13)
Explanation: Differences in Hours Worked
One obvious factor to investigate is the number of hours worked for both men and women. This will surely have an impact on the earnings for each sex. Figure 3 shows that males work considerably more hours than females. A clear indicator of this is the upper quartile for women being 40 hours/week is virtually equal to the lower quartile for men. It does not require statistical analysis to presume the more hours one works, the more income that person tends to earn. This perhaps explains, at least to some degree, the stark difference in incomes between men and women, shown in the Figure 2. However, the question remains what women are spending their time doing more than men if they are not working more hours than men. The implication is that women are enduring certain responsibilities (ie. more so than men) that take up their time, and this in turn has a negative impact on their income.
End of explanation
fig, ax = plt.subplots()
fig.set_size_inches(11.27, 5.5)
ax.set(ylim=(0, 1400))
ax.set_title("Figure 5. Mins/Day Providing Elderly Care", weight='bold',fontsize = 17)
sns.set_style("whitegrid")
sns.swarmplot(x='Sex', y='Elderly Care (mins)', data= atus)
plt.xlabel('Sex',weight='bold',fontsize=13)
plt.ylabel('Elderly Care (Mins/Day)',weight='bold', fontsize=13)
Explanation: The Differences in the Time Spent Providing Child Care
Secondary child care is referring to time spent looking after children, while taking on something else as a primary activity. In sum, it is keeping a watchful eye over children, without providing one's full and undivided attention. Harvard Economics Professor, Claudia Goldin postulated that women providing more family care is a potential reason for the pay gap. Moreover, she touched upon research that viably suggests that women value temporal flexibility more than men, while men value income more than women. Figure 4 displays that women provide secondary child care more than men, as over 25% provide more than 200 minutes/day of such care. The fat tail on blue object depicts that their is a great deal of women providing hundreds of minutes of child care each day. Resultantly, the women who have these responsibilities are presumably earning less income than men and women who do not.
End of explanation |
15,797 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Re-referencing the EEG signal
This example shows how to load raw data and apply some EEG referencing schemes.
Step1: We will now apply different EEG referencing schemes and plot the resulting
evoked potentials. Note that when we construct epochs with mne.Epochs, we
supply the proj=True argument. This means that any available projectors
are applied automatically. Specifically, if there is an average reference
projector set by raw.set_eeg_reference('average', projection=True), MNE
applies this projector when creating epochs. | Python Code:
# Authors: Marijn van Vliet <w.m.vanvliet@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from matplotlib import pyplot as plt
print(__doc__)
# Setup for reading the raw data
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
event_id, tmin, tmax = 1, -0.2, 0.5
# Read the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
events = mne.read_events(event_fname)
# The EEG channels will be plotted to visualize the difference in referencing
# schemes.
picks = mne.pick_types(raw.info, meg=False, eeg=True, eog=True, exclude='bads')
Explanation: Re-referencing the EEG signal
This example shows how to load raw data and apply some EEG referencing schemes.
End of explanation
reject = dict(eog=150e-6)
epochs_params = dict(events=events, event_id=event_id, tmin=tmin, tmax=tmax,
picks=picks, reject=reject, proj=True)
fig, (ax1, ax2, ax3) = plt.subplots(nrows=3, ncols=1, sharex=True)
# No reference. This assumes that the EEG has already been referenced properly.
# This explicitly prevents MNE from adding a default EEG reference. Any average
# reference projector is automatically removed.
raw.set_eeg_reference([])
evoked_no_ref = mne.Epochs(raw, **epochs_params).average()
evoked_no_ref.plot(axes=ax1, titles=dict(eeg='Original reference'), show=False,
time_unit='s')
# Average reference. This is normally added by default, but can also be added
# explicitly.
raw.set_eeg_reference('average', projection=True)
evoked_car = mne.Epochs(raw, **epochs_params).average()
evoked_car.plot(axes=ax2, titles=dict(eeg='Average reference'), show=False,
time_unit='s')
# Re-reference from an average reference to the mean of channels EEG 001 and
# EEG 002.
raw.set_eeg_reference(['EEG 001', 'EEG 002'])
evoked_custom = mne.Epochs(raw, **epochs_params).average()
evoked_custom.plot(axes=ax3, titles=dict(eeg='Custom reference'),
time_unit='s')
Explanation: We will now apply different EEG referencing schemes and plot the resulting
evoked potentials. Note that when we construct epochs with mne.Epochs, we
supply the proj=True argument. This means that any available projectors
are applied automatically. Specifically, if there is an average reference
projector set by raw.set_eeg_reference('average', projection=True), MNE
applies this projector when creating epochs.
End of explanation |
15,798 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
In a Jupyter notebook, if your Bokeh tooltips extend beyond the extent of your plot, the CSS from the Jupyter notebook can interfere with the display, leaving something like this (note: this is a screenshot, not the fixed plot, which is at the bottom of this notebook)
Step3: Unfortunately Bokeh can't solve this as Bokeh can't control the CSS of the parent element, which belongs to Jupyter. This can be solved in two ways | Python Code:
Image(url="https://raw.githubusercontent.com/birdsarah/bokeh-miscellany/master/cut-off-tooltip.png", width=400, height=400)
Explanation: In a Jupyter notebook, if your Bokeh tooltips extend beyond the extent of your plot, the CSS from the Jupyter notebook can interfere with the display, leaving something like this (note: this is a screenshot, not the fixed plot, which is at the bottom of this notebook):
End of explanation
from IPython.core.display import HTML
HTML("""
<style>
div.output_subarea {
overflow-x: visible;
}
</style>
""")
from bokeh.plotting import figure, ColumnDataSource
from bokeh.models import HoverTool
source = ColumnDataSource(
data=dict(
x=[1, 2, 3, 4, 5],
y=[2, 5, 8, 2, 7],
desc=['A', 'b', 'C', 'd', 'E'],
imgs = [
'http://bokeh.pydata.org/static/snake.jpg',
'http://bokeh.pydata.org/static/snake2.png',
'http://bokeh.pydata.org/static/snake3D.png',
'http://bokeh.pydata.org/static/snake4_TheRevenge.png',
'http://bokeh.pydata.org/static/snakebite.jpg'
]
)
)
hover = HoverTool(
tooltips="""
<div>
<div>
<img
src="@imgs" height="42" alt="@imgs" width="42"
style="float: left; margin: 0px 15px 15px 0px;"
border="2"
></img>
</div>
<div>
<span style="font-size: 17px; font-weight: bold;">@desc</span>
<span style="font-size: 15px; color: #966;">[$index]</span>
</div>
<div>
<span style="font-size: 15px;">Location</span>
<span style="font-size: 10px; color: #696;">($x, $y)</span>
</div>
</div>
"""
)
p = figure(plot_width=200, plot_height=200, tools=[hover], title='Hover')
p.circle('x', 'y', size=20, source=source)
show(p)
Explanation: Unfortunately Bokeh can't solve this as Bokeh can't control the CSS of the parent element, which belongs to Jupyter. This can be solved in two ways:
You can apply a style to a single notebook
You can add custom css to your global notebook settings
1. Applying CSS to a single Jupyter Notebook
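2. Adding custom CSS globally. A sketch of the second approach, assuming the classic Jupyter Notebook, which loads a user-level stylesheet from ~/.jupyter/custom/custom.css:
import os
css_path = os.path.expanduser('~/.jupyter/custom/custom.css')
os.makedirs(os.path.dirname(css_path), exist_ok=True)
with open(css_path, 'a') as f:
    f.write('\ndiv.output_subarea { overflow-x: visible; }\n')
# reload the notebook page afterwards so the stylesheet is picked up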
End of explanation |
15,799 | Given the following text description, write Python code to implement the functionality described below step by step
Description:
Hausaufgaben
1) Geben Sie alle Unicode-Zeichen zwischen 34 und 250 aus und geben Sie alle aus, die keine Buchstaben oder Zahlen sind
Step1: oder
Step2: Wir können die Funktion auch direkt verwenden
Step3: Aber hier stimmt die Logik nicht - wir wollen ja die Zeichen ausgeben, die <b>keine</b> Buchstaben sind, d.h. wir müssen verneinen. Dafür gibt es, wie erwähnt, itertools.filterfalse()
Step4: 2) Wie könnte man alle Dateien mit der Endung *.txt in einem Unterverzeichnis hintereinander ausgeben?
Step5: 3) Schauen Sie sich in der Python-Dokumentation die Funktionen sorted und itemgetter an. Wie kann man diese so kombinieren, dass man damit ein Dictionary nach dem value sortieren kann.
Step6: Hier die Definition der Methode
Step7: mit key übergeben wir eine Funktion, die das jeweilige Element aus dem Iterator bearbeitet
Step8: Und nun wollen wir diese Liste nach den Noten sortieren.
Step9: Die Zahl, die itemgetter als Parameter übergeben wird, ist der index im Tuple, der zum Sortieren verwendet werden soll, hier also das zweite Element. Nun können wir das auf ein Dictionary anwenden
Step10: Exkurs
Step11: Programme strukturieren
Aufgabe 1 von der letzten Sitzung
Step12: 4) Was ist die Operation, die im Kern verlangt wird?<br/>
jedes Wort durch eine Zahl ersetzen und zwar die Anzahl der Vokale.<br/>
daraus können wir ableiten
Step13: Nun können wir die Lösung für unsere Aufgabe schon einmal als Pseudecode hinschreiben
Step14: Und das Testen nicht vergessen
Step15: Nun muss noch die eigentlich Kernmethode erledigt werden
Step16: Test, Test
Step17: Ok, nun können wir alles zusammensetzen
Step18: Wir können die Anwendung der Methode auf jedes Element unserer Wortliste auch einfach mit map() erledigen
Step19: Hier noch einmal das ganze Skript im Zusammenhang
Step20: Und hier die etwas stärker komprimierte Form - schauen Sie mal, ob Sie sehen, was hier gemacht wird. Aber tatsächlich ist die Kürze der Schreibung nicht so wichtig! Dadurch wird ein Programm nicht effizienter.
Step21: Aufgabe 1
Finden Sie heraus, welche Wörter in Faust I von Goethe und Maria Stuart von Schiller nur in dem jeweiligen Text vorkommen. Begründen Sie Ihr Vorgehen.
Funktionales Programmieren - 2. Teil
Daten aus zwei Iteratoren verwenden
Wir haben bereits gesehen, wie wir Daten aus zwei Iteratoren verwenden können, wenn wir praktisch verschachtelte Schleifen brauchen
Step22: Wie gehen wir aber vor, wenn wir die beiden Iteratoren elementweise bearbeiten wollen, also die beiden ersten Elemente, dann die beiden nächsten Elemente usw.
Wenn die Iteratoren einen Index haben, dann können wir einen Counter verwenden
Step23: Eleganter ist die Verwendung der Methode zip, die man außerdem auch dann verwenden kann, wenn kein Index vorhanden ist.
Step24: zip() beendet die Arbeit, sobald der kürzere Iterator erschöpft ist. Wenn Sie das nicht wollen, dann verwenden Sie itertools.zip_longest()
Aufgabe 2
Sie haben zwei Dateien, die eigentlich identisch sein sollen. Prüfen Sie das und geben Sie jede Zeile aus, die nicht identisch ist. (Basteln Sie sich selbst vorher zwei Testdateien, die 5 identische Zeilen und 2 nicht-identische haben).
Aufgabe 2a (optional)
Die deutsche Sprache hat 29 Buchstaben einschließlich der Umlaute. Wieviele Wörter mit drei Buchstaben können wir daraus bilden, wenn wir vereinfachend davon ausgehen, dass der Unterschied zwischen Groß- und Kleinschreibung keine Rolle spielt und dass ein Buchstabe nur einmal in einem Wort vorkommen darf?
Generator Expressions
Generator expression sind in der Syntax und Funktionsweise List Comprehensions sehr ähnlich. Noch einmal ein Blick auf letzere
Step25: Sie sehen, der Rückgabe-Wert ist eine Liste. Wie Sie wissen, sind Listen iterierbar, aber eine Liste ist immer ganz im Arbeitsspeicher. Das kann Probleme machen
Step26: In einem solchen Fall ist es sinnvoll generator expressions zu verwenden, die im Prinzip nur statt der eckigen Klammern runde verwenden. Diese geben nicht eine Liste zurück, sondern einen Iterator
Step27: Kurzum, wenn Sie sehr große Datenmengen verarbeiten (oder unendlichen Listen), dann sollten Sie mit generator expressions arbeiten.
Generators
Wenn Sie selbst Funktionen schreiben, dann können Sie als Rückgabewert natürlich Listen verwenden. Aber auch hier könnte es sein, dass die Größe der Liste möglicherweise den Arbeitsspeicher sprengt. Dann können Sie Generators verwenden. Der entscheidende Unterschied liegt in der Verwendung des Schlüsselworts yield in der Definition der Funktion. <br/>
Step28: Noch ein -etwas länges - Beispiel. Im folgenden definieren wir erst einmal eine Funktion, die eine Liste zufälliger Buchstabenkombinationen in definierbarer Länge zurückgibt.
Step29: oder auch
Step30: Nun wollen wir die Funktion so umschreiben, dass Sie als Parameter außerdem die Anzahl der Ngramme enthält, die zurückgegeben werden soll
Step31: Um nun zu verhindern, dass wir ein Speicherproblem bekommen, wenn die Anzahl zu groß ist, machen wir daraus einen generator
Step32: Aufgabe 3
Schreiben Sie eine Funktion, die für beliebig große Textdateien jeweils den nächsten Satz ausgibt. Gehen Sie dabei von der (überstark vereinfachten) Regel aus, dass ein Satz durch eines dieser Zeichen [.?!] + Leerzeichen oder Absatzmarke beendet wird.
reduce
functools.reduce(func, iter, [initial_value]) wendet die Funktion func auf alle Elemente einer iterierbaren Struktur (iter) an. Die Funktion func muss als Eingabe zwei Elemente akzeptieren und ein Element ausgeben. Das Besondere der reduce-Funktion besteht darin, dass die Ausgabe der ersten Anwendung der Funktion, Teil der Eingabe der nächsten Anwendung der Funktion wird. Angenommen in der iterierbaren Datenstruktur findet sich [A, B, C, D], dann würde die Funktion func zuerst also Parameter A und B nehmen. Die Ausgabe X wird dann wiederum zur Eingabe für den Aufruf der Funktion
Step33: Hier ein etwas realistischeres Beispiel. Das 'flatttening of a list of lists' (Denken Sie daran, dass die Addition auf Listen angewandt diese verknüpft.)
Step35: Aufgabe 4
Hier ist eine Liste der Jahres-Zinssätze für Privatdarlehen der letzten 10 Jahre (frei erfunden!)
Step36: Hausaufgabe
Primzahlen sind Zahlen, die nur durch sich selbst und durch 1 teilbar ohne Rest sind. Die anderen Zahlen lassen sich dagegen in Faktoren zerlegen. Man kann offensichtlich alle Faktoren zerlegen, bis jede Zahl, die nicht eine Primzahl ist als Multiplikation von Primzahlen geschrieben werden kann. Schreiben Sie ein Programm, dass die Primzahlen von 2 bis 100 ermittelt. <br/>
Tipp 1
Step37: <br/><br/><br/><br/><br/><br/><br/><br/><br/>
Aufgabe 2a
Die deutsche Sprache hat 29 Buchstaben einschließlich der Umlaute. Wieviele Wörter mit drei Buchstaben können wir daraus bilden, wenn wir vereinfachend davon ausgehen, dass der Unterschied zwischen Groß- und Kleinschreibung keine Rolle spielt und dass ein Buchstabe nur einmal in einem Wort vorkommen darf?
Step38: <br/><br/><br/><br/><br/><br/><br/><br/><br/><br/>
Aufgabe 3
Schreiben Sie eine Funktion, die für beliebig große Textdateien jeweils den nächsten Satz ausgibt. Gehen Sie dabei von der (überstark vereinfachten) Regel aus, dass ein Satz durch eines dieser Zeichen [.?!] + Leerzeichen oder Absatzmarke beendet wird.
Step39: <br/><br/><br/><br/><br/><br/><br/><br/><br/>
Aufgabe 4
Hier ist eine Liste der Jahres-Zinssätze für Privatdarlehen der letzten 10 Jahre (frei erfunden!) | Python Code:
#1a) alle unicode Z. zwischen 34 u 250 ausgeben
a = [chr(c) for c in range(34,250)]
print(a[:50])
Explanation: Hausaufgaben
1) Geben Sie alle Unicode-Zeichen zwischen 34 und 250 aus und geben Sie alle aus, die keine Buchstaben oder Zahlen sind
End of explanation
a = list(map(chr, range(34, 250)))
print(a[:50])
#1b) nur die ausgeben, die keine Buchstaben oder Zahlen sind
def is_no_char(c):
if c.isalnum():
return False
else:
return True
a = list(filter(is_no_char, [chr(c) for c in range(34,250)]))
print(a[:20])
Explanation: oder
End of explanation
a = list(filter(lambda c: c.isalnum(), [chr(c) for c in range(34,250)]))
print(a[:20])
Explanation: Wir können die Funktion auch direkt verwenden
End of explanation
from itertools import filterfalse
a = list(filterfalse(lambda c: c.isalnum(), [chr(c) for c in range(34,250)]))
print(a[:20])
Explanation: Aber hier stimmt die Logik nicht - wir wollen ja die Zeichen ausgeben, die <b>keine</b> Buchstaben sind, d.h. wir müssen verneinen. Dafür gibt es, wie erwähnt, itertools.filterfalse()
End of explanation
import glob
for file in glob.glob("test\\*.*"):
with open(file, "r") as fin:
[print(l) for l in fin]
Explanation: 2) Wie könnte man alle Dateien mit der Endung *.txt in einem Unterverzeichnis hintereinander ausgeben?
End of explanation
a = [4,2,1,3]
sorted(a)
Explanation: 3) Schauen Sie sich in der Python-Dokumentation die Funktionen sorted und itemgetter an. Wie kann man diese so kombinieren, dass man damit ein Dictionary nach dem value sortieren kann.
End of explanation
#mit reverse drehen wir die Sortierreihenfolge um
b = ["b", "a", "d", "c","e"]
sorted(b, reverse=True)
Explanation: Hier die Definition der Methode: <br/>
sorted(iterable[, key][, reverse]) <br/>
End of explanation
c = ["d", "a", "A", "U"]
sorted(c)
sorted(c, key=str.upper)
#nehmen wir an, wir haben eine Liste von 2-Tuples, z.B. Namen und Noten
d = [("Michael", 3), ("Vicky", 2), ("Cordula", 1) ]
Explanation: mit key übergeben wir eine Funktion, die das jeweilige Element aus dem Iterator bearbeitet
End of explanation
from operator import itemgetter
sorted(d, key=itemgetter(1))
Explanation: Und nun wollen wir diese Liste nach den Noten sortieren.
End of explanation
wl = dict(haus=8, auto=12, tier=10, mensch=13)
[e for e in wl.items()]
sorted(wl.items(), key=itemgetter(1), reverse=True)
[a[0] for a in sorted(wl.items(), key=itemgetter(1), reverse=True)]
Explanation: Die Zahl, die itemgetter als Parameter übergeben wird, ist der index im Tuple, der zum Sortieren verwendet werden soll, hier also das zweite Element. Nun können wir das auf ein Dictionary anwenden:
End of explanation
class student:
def __init__(self, name, note):
self.name = name
self.note = note
studenten = [student("Cordula",2), student("Vicky", 3), student("Michael", 1)]
[print(stud.name + "\t" + str(stud.note)) for stud in studenten]
from operator import attrgetter
[print(stud.name + "\t" + str(stud.note)) for stud in sorted(studenten, key=attrgetter("note"))]
Explanation: Exkurs: Sortieren für Objekte
End of explanation
#parses a string and returns a list of words
def tokenize (line):
wordlist = []
#hier passiert was Aufregendes
return wordlist
Explanation: Programme strukturieren
Aufgabe 1 von der letzten Sitzung:<br/>
Ersetzen Sie eine Reihe von Worten durch eine Reihe von Zahlen, die die Anzahl der Vokale anzeigen. Z.B.: "Dies ist ein Satz" -> "2 1 2 1"
1) Schritt: Analysieren Sie den Input und den Output - welche Datenstruktur ist hier notwendig?
"Dies ist ein Satz" - String. Wir brauchen also eine Variable für diesen string<br/>
"2 1 2 1" - String (sieht aber aus, wie eine Liste, die zu einem String konvertiert wurde! Also eine Variable für die Ausgabeliste.<br/>
2) Schritt: Analysieren Sie, auf welcher Datenstruktur operiert wird<br/>
"Ersetzen Sie eine Reihe von Worten" -> also auf Worten. Da wir die Ausgabe der Zahlen in der Sequenz der Worte brauchen, brauchen wir die Worte in einer Liste, die die Reihenfolge bewahrt.
3) Wie kommen Sie von der Datenstruktur des Inputs zur Datenstruktur, die Sie für die Verarbeitung brauchen?<br/>
String, der einen Satz enthält -> Liste von Worten
End of explanation
#count vowels in a word, returns their number
def count_vowels(word):
nr_of_vowels = 0
#something will happen here
return nr_of_vowels
Explanation: 4) What is the operation that is required at the core?<br/>
Replace every word by a number, namely the number of its vowels.<br/>
From this we can derive: we need a function that takes a word as input and returns the number of vowels as output. That is where we start:
End of explanation
#parses a string and returns a list of words
import re
def tokenize (line):
wordlist = re.findall("\w+", line)
return wordlist
Explanation: Now we can already write down the solution for our exercise as pseudocode:
So, now we only have to fill in the methods and turn the pseudocode into real code. First, then, tokenize. Splitting strings into lists can be done with several ready-made methods. We can take the primitive split() method, which however is not well suited when the text should be split on more than a single character. Or we use re.findall() from the regular expressions module, which is clearly more flexible here. A proper tokenisation would of course have to be more complex still.
End of explanation
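To illustrate the difference (an added sketch): str.split() only splits on whitespace and leaves the punctuation attached to the words, while re.findall() with \w+ returns clean word tokens.
s = "Dies ist ein Test. Und gleich noch einer!"
print(s.split())             # ['Dies', 'ist', 'ein', 'Test.', ..., 'einer!']
print(re.findall(r"\w+", s)) # ['Dies', 'ist', 'ein', 'Test', ..., 'einer']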
tokenize("Dies ist ein Test. Und gleich noch einer!")
Explanation: And don't forget to test:
End of explanation
#count vowels in a word, returns their number
def count_vowels(word):
nr_of_vowels = 0
for c in word:
if c in "aeiouäöüAEIUOÄÖÜ":
nr_of_vowels += 1
return nr_of_vowels
Explanation: Now the actual core method still has to be written: counting the vowels. The simplest approach is once again a loop that checks for every character whether it is a vowel and, if so, increments a counter by one.
End of explanation
count_vowels("Bauernhaus")
Explanation: Testing, testing:
End of explanation
#input
text = "Dies ist ein langer, lausiger Textbaustein."
#output
list_of_vowels = []
#preprocess input
wordlist = tokenize(text)
#main loop
#for each word in wordlist do count_vowels and add result to list_of_vowels
#There are two ways to write this loop. First, quite traditionally:
for word in wordlist:
list_of_vowels.append(count_vowels(word))
print (str(list_of_vowels))
Explanation: Ok, nun können wir alles zusammensetzen:
End of explanation
#output
list_of_vowels = []
list(map(count_vowels, wordlist))
Explanation: We can also simply apply the method to every element of our word list with map():
End of explanation
#parses a string and returns a list of words
import re
def tokenize (line):
wordlist = re.findall("\w+", line)
return wordlist
#count vowels in a word, returns their number
def count_vowels(word):
nr_of_vowels = 0
for c in word:
if c in "aeiouäöüAEIUOÄÖÜ":
nr_of_vowels += 1
return nr_of_vowels
#input
text = "Dies ist ein langer, lausiger Textbaustein."
#output
list_of_vowels = []
#preprocess input
wordlist = tokenize(text)
#apply count method on all words in list
list(map(count_vowels, wordlist))
Explanation: Here is the whole script once more in one piece:
End of explanation
import re
#input
text = "Dies ist ein langer, lausiger Textbaustein."
#count vowels in a word, returns their number
def cv(word):
return sum([1 for c in word if c in "aeiouäöüAEIUOÄÖÜ"])
list(map(cv, re.findall("\w+",text)))
Explanation: And here is a somewhat more condensed form - see whether you can tell what is being done here. But the brevity of the notation is really not that important! It does not make the program any more efficient.
End of explanation
a = [1,2,3]
b = ["a","b","c"]
[(x,y) for x in a for y in b]
Explanation: Exercise 1
Find out which words in Goethe's Faust I and Schiller's Maria Stuart occur only in the respective text. Justify your approach.
Functional programming - part 2
Using data from two iterators
We have already seen how we can use data from two iterators when we effectively need nested loops:
End of explanation
for i in range(len(a)):
print (a[i], " ", b[i])
Explanation: But how do we proceed if we want to process the two iterators element by element, i.e. the two first elements, then the two next elements, and so on?
If the iterators have an index, we can use a counter:
End of explanation
for (x,y) in zip(a,b):
print (x, " ", y)
Explanation: More elegant is the use of the function zip, which moreover can also be used when there is no index.
End of explanation
a = [3,1,4,2]
[i for i in a]
Explanation: zip() stops as soon as the shorter iterator is exhausted. If you do not want that, use itertools.zip_longest()
Exercise 2
You have two files that are supposed to be identical. Check this and print every line that is not identical. (First build yourself two test files that have 5 identical lines and 2 non-identical ones.)
Exercise 2a (optional)
The German language has 29 letters including the umlauts. How many three-letter words can we form from them if, to simplify, we assume that the difference between upper and lower case does not matter and that a letter may occur only once in a word?
Generator Expressions
Generator expressions are very similar to list comprehensions in syntax and behaviour. One more look at the latter:
End of explanation
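A small added sketch of itertools.zip_longest(), which pads the shorter iterable with a fill value instead of stopping (the literal lists are just example data):
import itertools
for (x, y) in itertools.zip_longest([1, 2, 3, 4], ["a", "b"], fillvalue="-"):
    print(x, " ", y)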
b = range(1000000000000)  # range itself is lazy, so creating it is cheap
#[i for i in b]           # but materialising it as a list would exhaust memory - don't run this
Explanation: As you can see, the return value is a list. As you know, lists are iterable, but a list is always held completely in memory. That can cause problems:
End of explanation
g = (i for i in b)
type(g)
for x in g:
print(x)
if x > 10: break
Explanation: In such a case it makes sense to use generator expressions, which in principle simply use round brackets instead of square ones. They do not return a list but an iterator:
End of explanation
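To make the memory difference concrete (an added sketch): the list comprehension materialises all of its elements, whereas the generator expression is a tiny object no matter how many elements it will eventually produce.
import sys
print(sys.getsizeof([i for i in range(100000)]))  # list: size grows with the number of elements
print(sys.getsizeof((i for i in range(100000))))  # generator: small, constant size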
def generate_ints(N):
for i in range(N):
yield i
#here we use the function; the parameter N could be arbitrarily large, because
#only the number that is currently being requested is ever produced.
for j in generate_ints(5):
print(j)
Explanation: In short, if you process very large amounts of data (or infinite sequences), you should work with generator expressions.
Generators
When you write functions yourself, you can of course use lists as return values. But here too the size of the list could blow up the memory. In that case you can use generators. The decisive difference lies in the use of the keyword yield in the definition of the function. <br/>
End of explanation
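A generator can even be infinite, because values are only produced on demand; an added sketch that uses itertools.islice() to take just the first few values:
import itertools
def all_ints():
    i = 0
    while True:
        yield i
        i += 1

print(list(itertools.islice(all_ints(), 5)))  # [0, 1, 2, 3, 4]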
import random
def get_random_NGram(ngram_length):
chars = []
for i in range(ngram_length):
x = random.randint(66,90)
chars.append(chr(x))
return "".join(chars)
get_random_NGram(5)
Explanation: Noch ein -etwas länges - Beispiel. Im folgenden definieren wir erst einmal eine Funktion, die eine Liste zufälliger Buchstabenkombinationen in definierbarer Länge zurückgibt.
End of explanation
import random
def get_random_NGram(ngram_length):
return "".join([chr(random.randint(66,90)) for i in range(ngram_length)])
get_random_NGram(3)
Explanation: or alternatively:
End of explanation
import random
def get_random_NGram(ngram_length, nr_of_ngrams):
ngrams = []
for j in range(nr_of_ngrams):
chars = []
for i in range(ngram_length):
x = random.randint(66,90)
chars.append(chr(x))
ngrams.append("".join(chars))
return ngrams
get_random_NGram(5,5)
Explanation: Now we want to rewrite the function so that it additionally takes the number of n-grams to be returned as a parameter:
End of explanation
import random
def get_random_NGram(ngram_length, nr_of_ngrams):
for j in range(nr_of_ngrams):
chars = []
for i in range(ngram_length):
x = random.randint(66,90)
chars.append(chr(x))
yield "".join(chars)
for x in get_random_NGram(5,5):
print(x)
Explanation: To prevent a memory problem when this number gets too large, we turn the function into a generator:
End of explanation
import functools
from operator import mul
a = [1,2,3,4,5]
functools.reduce(mul, a)
Explanation: Exercise 3
Write a function that, for arbitrarily large text files, returns the next sentence on each call. Assume the (greatly oversimplified) rule that a sentence is ended by one of the characters [.?!] followed by a space or a line break.
reduce
functools.reduce(func, iter, [initial_value]) applies the function func to all elements of an iterable structure (iter). The function func must accept two elements as input and return one element. The special feature of reduce is that the output of the first application of the function becomes part of the input of the next application. Suppose the iterable data structure contains [A, B, C, D]; then func would first take A and B as parameters. Its output X in turn becomes input for the next call of the function, so it computes func(func(A, B), C) and so on until the list is exhausted. With initial_value you can set a starting value, which is then combined with A first.<br/>
An example: we multiply all numbers of a list. First 1 * 2 is computed; the result is then multiplied by 3, and so on:
End of explanation
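A brief added sketch of the optional third argument: the starting value is fed into the very first call of the function.
from functools import reduce
from operator import add
# start from 100 instead of the first list element: ((((100+1)+2)+3)+4)+5 = 115
reduce(add, [1, 2, 3, 4, 5], 100)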
a = [1,2,3]
b = [4,5,6]
c = [7,8,9]
x = [a,b,c]
x
from operator import add
functools.reduce(add, x)
Explanation: Here is a somewhat more realistic example: the 'flattening of a list of lists' (remember that addition applied to lists concatenates them).
End of explanation
def test():
    """Stupid test function"""
    L = [i for i in range(100)]
if __name__ == '__main__':
import timeit
print(timeit.repeat("test()", setup="from __main__ import test", number=1000000, repeat=3))
import random
def get_random_NGram(ngram_length):
chars = []
for i in range(ngram_length):
x = random.randint(66,90)
chars.append(chr(x))
a = "".join(chars)
if __name__ == '__main__':
import timeit
print(timeit.repeat("get_random_NGram(5)", setup="from __main__ import get_random_NGram", number=100000, repeat=3))
def get_random_NGram(ngram_length):
a = "".join([chr(random.randint(66,90)) for i in range(ngram_length)])
if __name__ == '__main__':
import timeit
print(timeit.repeat("get_random_NGram(5)", setup="from __main__ import get_random_NGram", number=100000, repeat=3))
Explanation: Exercise 4
Here is a list of the annual interest rates for personal loans over the last 10 years (entirely made up!):
2004: 4.3 - 2005: 4.0 - 2006: 3.5 - 2007: 3.0 - 2008: 2.5 - 2009: 3.2 - 2010: 3.3 - 2011: 1.8 - 2012: 1.4 - 2013: 0.7<br/>How much does someone have in their account who deposited €750 at the beginning of 2004?
Excursus: timeit
End of explanation
with open("file1.txt", "r", encoding="utf-8") as f1:
with open("file2.txt", "r", encoding="utf-8") as f2:
[print(x + y) for x,y in zip(f1,f2) if x != y]
Explanation: Homework
Prime numbers are numbers that are divisible without remainder only by themselves and by 1. All other numbers, by contrast, can be decomposed into factors. Obviously the factors can be decomposed further until every number that is not a prime can be written as a product of primes. Write a program that determines the prime numbers from 2 to 100. <br/>
Tip 1: The input is a list of the numbers from 2 to 100. The output is a list of the prime numbers.<br/>
Tip 2: When you check whether a number is prime, you can stop looking for new factors at the square root of the number. <br/>
How would the function have to be rewritten to handle arbitrarily large numbers?
Exercises
<br/><br/><br/><br/><br/><br/><br/><br/><br/><br/>
Exercise 1
Find out which words in Goethe's Faust I and Schiller's Maria Stuart occur only in the respective text. Justify your approach.
Input: two text files<br/>
Output: a word list<br/>
Central operation: compare two word lists - which items occur only in one or the other. <br/>
In pseudocode:<br/>
wordlist1 = get_wordlist(Goethe)<br/>
wordlist2 = get_wordlist(Schiller)<br/>
unique_words = compare(wordlist1, wordlist2)<br/>
output(unique_words)<br/>
<br/><br/><br/><br/><br/><br/><br/><br/><br/><br/>
Exercise 2
You have two files that are supposed to be identical. Check this and print every line that is not identical. (First build yourself two test files that have 5 identical lines and 2 non-identical ones.)
End of explanation
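One possible added sketch of the pseudocode for Exercise 1 above (the file names are assumptions; the comparison uses the set difference of the two word lists):
import re

def get_wordlist(filename):
    with open(filename, "r", encoding="utf-8") as fin:
        return set(w.lower() for w in re.findall(r"\w+", fin.read()))

goethe = get_wordlist("faust.txt")            # assumed file name
schiller = get_wordlist("maria_stuart.txt")   # assumed file name
print(sorted(goethe - schiller)[:20])   # words that occur only in Faust I
print(sorted(schiller - goethe)[:20])   # words that occur only in Maria Stuart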
import itertools
chars = list("abcdefghijklmnopqrstuvwxyzöäü")
len(list(itertools.permutations(chars, 3)))
Explanation: <br/><br/><br/><br/><br/><br/><br/><br/><br/>
Exercise 2a
The German language has 29 letters including the umlauts. How many three-letter words can we form from them if, to simplify, we assume that the difference between upper and lower case does not matter and that a letter may occur only once in a word? The code above counts the 3-permutations of 29 letters, i.e. 29 · 28 · 27 = 21924.
End of explanation
def get_sentence(filehandler):
in_markup = False
sentence = ""
while True:
c = filehandler.read(1)
if c == "":
break
elif c in ".?!":
sentence += c
in_markup = True
elif (c == " " or c == "\n") and in_markup == True:
yield sentence
sentence = ""
in_markup = False
        else:
            if in_markup == True:
                in_markup = False
            if c != "\n":
                sentence += c
with open("text.txt", "r", encoding="utf-8") as fin:
for s in get_sentence(fin):
print(s)
Explanation: <br/><br/><br/><br/><br/><br/><br/><br/><br/><br/>
Exercise 3
Write a function that, for arbitrarily large text files, returns the next sentence on each call. Assume the (greatly oversimplified) rule that a sentence is ended by one of the characters [.?!] followed by a space or a line break.
End of explanation
from functools import reduce
#we define a function which adds the yearly interests to the sum
def jahreszins(betrag, zins):
return betrag + (betrag * zins / 100)
#we put the interests into a list
zinsen = [4.3, 4.0, 3.5, 3.0, 2.5, 3.2, 3.3, 1.8, 1.4, 0.7]
#we use reduce to calculate the result
reduce(jahreszins, zinsen, 750)
Explanation: <br/><br/><br/><br/><br/><br/><br/><br/><br/>
Exercise 4
Here is a list of the annual interest rates for personal loans over the last 10 years (entirely made up!):
2004: 4.3 - 2005: 4.0 - 2006: 3.5 - 2007: 3.0 - 2008: 2.5 - 2009: 3.2 - 2010: 3.3 - 2011: 1.8 - 2012: 1.4 - 2013: 0.7<br/>How much does someone have in their account who deposited €750 at the beginning of 2004?
End of explanation |
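For comparison (an added sketch), the same computation as an explicit loop - this is exactly what reduce does step by step:
betrag = 750
for zins in zinsen:
    betrag = jahreszins(betrag, zins)
print(betrag)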