Unnamed: 0 (int64, 0–16k) | text_prompt (stringlengths 149–45.7k) | code_prompt (stringlengths 67–140k) |
---|---|---|
200 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Tour of Free(mium) HERE APIs
A short "teaser" presentation rushing through a small subset of many free APIs made by HERE Technologies under the Freemium plan. This notebook shows simple examples mostly for geocoding, places, maps and routing. They are designed for rapid consumption during a meetup talk. To that end, some code snippets longer than a few lines are imported from a module named utils.py. Third-party modules are imported in the respective sections below as needed. (See utils.py for a rough requirements list.)
Goal
Step1: Geocoding
documentation
Step2: Geopy Plugin
"geocoders united"
HERE plugin in Geopy 1.15.0, https://github.com/geopy/geopy/releases/tag/1.15.0
Step3: Places
https://developer.here.com/documentation#places
Step5: Maps
https://developer.here.com/documentation#map_tile
Step6: Full Maps
Step7: Now HERE
Step8: Geocoding Revisited
more GIS-savvy
(a little) more geo-spatial smarts
Step10: Routing
https://developer.here.com/documentation#routing
Step11: Isolines
Step12: More interactively
Step13: More to come... (in another meetup ;)
dynamic map content (based on traitlets)
streaming data
ZeroMQ integration
sneak preview below | <ASSISTANT_TASK:>
Python Code:
import random
import urllib
import utils
app_id = utils.app_id
app_code = utils.app_code
berlin_lat_lon = [52.5, 13.4]
here_berlin_addr = 'Invalidenstr. 116, 10115 Berlin, Germany'
Explanation: Tour of Free(mium) HERE APIs
A short "teaser" presentation rushing through a small subset of many free APIs made by HERE Technologies under the Freemium plan. This notebook shows simple examples mostly for geocoding, places, maps and routing. They are designed for rapid consumption during a meetup talk. To that end, some code snippets longer than a few lines are imported from a module named utils.py. Third-party modules are imported in the respective sections below as needed. (See utils.py for a rough requirements list.)
Goal: Showing enough examples to whet your appetite for more, not delivering a polished "paper" or "package".
N.B.: This notebook is intentionally saved without executed cells, as some outputs would contain the HERE credentials used.
Freemium Plan
started in August 2018
large number of APIs included
250,000 API calls/month
5,000 monthly users on iOS/Android
no credit card needed
https://engage.here.com/freemium
https://developer.here.com/documentation
Setup
Credentials are imported from a here_credentials.py module if existing (via utils.py) defined as app_id and app_code, or from environment variables (HEREMAPS_APP_ID, HEREMAPS_APP_CODE).
End of explanation
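# A minimal sketch (not the actual utils.py) of how such a credential lookup could
# work, given the here_credentials.py / environment-variable scheme described above;
# the function name is illustrative.
import os

def load_here_credentials():
    """Return (app_id, app_code) from here_credentials.py if present, else from env vars."""
    try:
        import here_credentials  # optional local module defining app_id and app_code
        return here_credentials.app_id, here_credentials.app_code
    except ImportError:
        return os.environ['HEREMAPS_APP_ID'], os.environ['HEREMAPS_APP_CODE']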
import requests
here_berlin_addr
searchtext = urllib.parse.quote(here_berlin_addr)
searchtext
url = (
'https://geocoder.api.here.com/6.2/geocode.json'
f'?searchtext={searchtext}&app_id={app_id}&app_code={app_code}'
)
utils.mask_app_id(url)
obj = requests.get(url).json()
obj
loc = obj['Response']['View'][0]['Result'][0]['Location']['DisplayPosition']
loc['Latitude'], loc['Longitude']
Explanation: Geocoding
documentation: https://developer.here.com/documentation#geocoder
raw REST
geopy plugin
geocoding
reverse geocoding
Raw REST
End of explanation
from geopy.geocoders import Here
geocoder = Here(app_id, app_code)
here_berlin_addr
loc = geocoder.geocode(here_berlin_addr)
loc
loc.latitude, loc.longitude
loc.raw
here_berlin_lat_lon = loc.latitude, loc.longitude
here_berlin_lat_lon
loc = geocoder.reverse('{}, {}'.format(*here_berlin_lat_lon))
loc
loc.latitude, loc.longitude
Explanation: Geopy Plugin
"geocoders united"
HERE plugin in Geopy 1.15.0, https://github.com/geopy/geopy/releases/tag/1.15.0
pip install geopy>=1.15.0
only most essential parts of geocoder API covered!
End of explanation
searchtext = 'Cafe'
lat, lon = here_berlin_lat_lon
url = (
'https://places.api.here.com/places/v1/autosuggest'
f'?q={searchtext}&at={lat},{lon}'
f'&app_id={app_id}&app_code={app_code}'
)
utils.mask_app_id(url)
obj = requests.get(url).json()
obj
for p in [res for res in obj['results'] if res['type']=='urn:nlp-types:place']:
print('{!r:23} {:4d} m {}'.format(p['position'], p['distance'], p['title']))
Explanation: Places
https://developer.here.com/documentation#places
receive places within some neighbourhood
End of explanation
from IPython.display import Image
(lat, lon), zoom = berlin_lat_lon, 10
xtile, ytile = utils.deg2tile(lat, lon, zoom)
xtile, ytile
# %load -s deg2tile utils
from math import radians, log, tan, cos, pi  # used below; available at module level in utils.py

def deg2tile(lat_deg, lon_deg, zoom):
lat_rad = radians(lat_deg)
n = 2.0 ** zoom
xtile = int((lon_deg + 180.0) / 360.0 * n)
ytile = int((1.0 - log(tan(lat_rad) + (1 / cos(lat_rad))) / pi) / 2.0 * n)
return (xtile, ytile)
# not used here
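# Hypothetical companion helper (not in utils.py): the inverse of deg2tile, using the
# standard OSM/Web-Mercator "slippy map" formula; handy for sanity-checking tile indices.
from math import atan, sinh, pi, degrees

def tile2deg(xtile, ytile, zoom):
    """Return the (lat, lon) of the north-west corner of the given tile."""
    n = 2.0 ** zoom
    lon_deg = xtile / n * 360.0 - 180.0
    lat_deg = degrees(atan(sinh(pi * (1 - 2 * ytile / n))))
    return (lat_deg, lon_deg)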
tiles_url = utils.build_here_tiles_url(
maptype='base',
tiletype='maptile',
scheme='normal.day',
x=xtile,
y=ytile,
z=zoom)
utils.mask_app_id(tiles_url)
img = Image(url=tiles_url)
img
# %load -s build_here_tiles_url utils
def build_here_tiles_url(**kwdict):
    """Return a HERE map tiles URL, based on default values that can be
    overwritten by kwdict...

    To be used for map building services like leaflet, folium, and
    geopandas (with additional fields inside a dict)...
    """
params = dict(
app_id = app_id,
app_code = app_code,
maptype = 'traffic',
tiletype = 'traffictile',
scheme = 'normal.day',
tilesize = '256',
tileformat = 'png8',
lg = 'eng',
x = '{x}',
y = '{y}',
z = '{z}',
server = random.choice('1234')
)
params.update(kwdict)
url = (
'https://{server}.{maptype}.maps.api.here.com'
'/maptile/2.1/{tiletype}/newest/{scheme}/{z}/{x}/{y}/{tilesize}/{tileformat}'
'?lg={lg}&app_id={app_id}&app_code={app_code}'
).format(**params)
return url
Explanation: Maps
https://developer.here.com/documentation#map_tile
get single maptiles
use different maptiles
build entire maps
use folium, ipyleaflet & geopandas
revisit geocoding with shapes
draw these shapes
Single Map Tiles
End of explanation
import folium
folium.Map(location=berlin_lat_lon, zoom_start=10, tiles='Stamen Terrain')
m = folium.Map(location=berlin_lat_lon, zoom_start=10)
folium.GeoJson('stops_berlin.geojson', name='BVG Stops').add_to(m)
folium.LayerControl().add_to(m)
m
Explanation: Full Maps
End of explanation
tiles_url = utils.build_here_tiles_url()
utils.mask_app_id(tiles_url)
folium.Map(
location=berlin_lat_lon,
zoom_start=10,
tiles=tiles_url,
attr='HERE.com')
Explanation: Now HERE
End of explanation
%matplotlib inline
import geopandas
import shapely
import shapely.wkt
from geopy.geocoders import Here
geocoder = Here(app_id, app_code)
here_berlin_addr
loc = geocoder.geocode(
here_berlin_addr,
additional_data='IncludeShapeLevel,postalCode') # <- get shapes!
loc.raw
wkt_shape = loc.raw['Location']['Shape']['Value']
shape = shapely.wkt.loads(wkt_shape)
shape
type(shape)
here_berlin_point = shapely.geometry.Point(*reversed(here_berlin_lat_lon))
here_berlin_point
shape.contains(here_berlin_point)
shape.contains(shapely.geometry.Point(0, 0))
data = [
['10115 Berlin', shape],
['HERE HQ', here_berlin_point]
]
df = geopandas.GeoDataFrame(data=data, columns=['object', 'geometry'])
df
url = utils.build_here_tiles_url(x='tileX', y='tileY', z='tileZ')
utils.mask_app_id(url)
df.crs = {'init': 'epsg:4326'} # dataframe is WGS84
ax = df.plot(figsize=(10, 10), alpha=0.5, edgecolor='k')
utils.add_basemap(ax, zoom=15, url=url)
# %load -s add_basemap utils
# Note: `ctx` (presumably `import contextily as ctx`) and the Mercator2WGS84 helper
# used below are defined at module level in utils.py.
def add_basemap(ax, zoom, url='http://tile.stamen.com/terrain/tileZ/tileX/tileY.png'):
# Special thanks to Prof. Martin Christen at FHNW.ch in Basel for
# his GIS-Hack to make the output scales show proper lat/lon values!
xmin, xmax, ymin, ymax = ax.axis()
basemap, extent = ctx.bounds2img(xmin, ymin, xmax, ymax, zoom=zoom, ll=True, url=url)
# calculate extent from WebMercator to WGS84
xmin84, ymin84 = Mercator2WGS84(extent[0], extent[2])
xmax84, ymax84 = Mercator2WGS84(extent[1], extent[3])
extentwgs84 = (xmin84, xmax84, ymin84, ymax84)
ax.imshow(basemap, extent=extentwgs84, interpolation='bilinear')
# restore original x/y limits
ax.axis((xmin, xmax, ymin, ymax))
Explanation: Geocoding Revisited
more GIS-savvy
(a little) more geo-spatial smarts
End of explanation
from ipyleaflet import Map, Marker, CircleMarker, Polyline, basemap_to_tiles
from ipywidgets import HTML
here_berlin_addr
here_berlin_lat_lon
dt_oper_berlin_addr = 'Bismarkstr. 35, 10627 Berlin, Germany'
loc = geocoder.geocode(dt_oper_berlin_addr)
dt_oper_berlin_lat_lon = loc.latitude, loc.longitude
dt_oper_berlin_lat_lon
route = utils.get_route_positions(
here_berlin_lat_lon,
dt_oper_berlin_lat_lon,
mode='fastest;car;traffic:disabled',
language='en')
route
center = utils.mid_point(
here_berlin_lat_lon,
dt_oper_berlin_lat_lon)
here_basemap = utils.build_here_basemap()
layers = [basemap_to_tiles(here_basemap)]
m = Map(center=center, layers=layers, zoom=13)
m
route[0]['shape'][:4]
path = list(utils.chunks(route[0]['shape'], 2))
path[:2]
sum(map(lambda pq: utils.geo_distance(*pq), list(utils.pairwise(path))))
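# utils.geo_distance is not shown in this notebook; a plausible sketch (an assumption,
# not the actual implementation) is a haversine great-circle distance in kilometres
# between two (lat, lon) pairs, which matches how it is called above.
from math import radians, sin, cos, asin, sqrt

def haversine_distance(p, q, earth_radius_km=6371.0):
    """Great-circle distance between two (lat, lon) points via the haversine formula."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * earth_radius_km * asin(sqrt(a))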
m += Polyline(locations=path, color='red', fill=False)
for man in route[0]['leg'][0]['maneuver']:
lat = man['position']['latitude']
lon = man['position']['longitude']
desc = man['instruction']
marker = Marker(location=(lat, lon), draggable=False)
marker.popup = HTML(value=desc)
m += marker
for lat, lon in path:
m += CircleMarker(location=(lat, lon), radius=3, color='blue')
reverse_route = utils.get_route_positions(
dt_oper_berlin_lat_lon,
here_berlin_lat_lon,
mode='shortest;pedestrian',
language='en')
utils.add_route_to_map(reverse_route, m)
path = list(utils.chunks(reverse_route[0]['shape'], 2))
sum(map(lambda pq: utils.geo_distance(*pq), list(utils.pairwise(path))))
# %load -s add_route_to_map utils.py
def add_route_to_map(route, some_map, color='blue'):
    """Add a route from the HERE REST API to the given map.

    This includes markers for all points where a maneuver is needed, like 'turn left'.
    And it includes a path with lat/lons from start to end and little circle markers
    around them.
    """
path_positions = list(chunks(route[0]['shape'], 2))
maneuvers = {
(man['position']['latitude'], man['position']['longitude']): man['instruction']
for man in route[0]['leg'][0]['maneuver']}
polyline = Polyline(
locations=path_positions,
color=color,
fill=False
)
some_map += polyline
for lat, lon in path_positions:
if (lat, lon) in maneuvers:
some_map += CircleMarker(location=(lat, lon), radius=2)
marker = Marker(location=(lat, lon), draggable=False)
message1 = HTML()
message1.value = maneuvers[(lat, lon)]
marker.popup = message1
some_map += marker
else:
some_map += CircleMarker(location=(lat, lon), radius=3)
Explanation: Routing
https://developer.here.com/documentation#routing
routes
modes
maneuvers
ipyleaflet preferred over folium
End of explanation
import requests
import ipywidgets as widgets
lat, lon = here_berlin_lat_lon
url = (
'https://isoline.route.api.here.com'
'/routing/7.2/calculateisoline.json'
f'?app_id={app_id}&app_code={app_code}'
f'&start=geo!{lat},{lon}'
'&mode=fastest;car;traffic:disabled'
'&range=300,600' # seconds/meters
'&rangetype=time' # time/distance
#'&departure=now' # 2013-07-04T17:00:00+02
#'&resolution=20' # meters
)
obj = requests.get(url).json()
obj
here_basemap = utils.build_here_basemap()
layers = [basemap_to_tiles(here_basemap)]
m = Map(center=(lat, lon), layers=layers, zoom=12)
m
m += Marker(location=(lat, lon))
for isoline in obj['response']['isoline']:
shape = isoline['component'][0]['shape']
path = [tuple(map(float, pos.split(','))) for pos in shape]
m += Polyline(locations=path, color='red', weight=2, fill=True)
Explanation: Isolines
End of explanation
here_basemap = utils.build_here_basemap()
layers = [basemap_to_tiles(here_basemap)]
m = Map(center=(lat, lon), layers=layers, zoom=13)
m
lat, lon = here_berlin_lat_lon
dist_iso = utils.Isoline(m,
lat=lat, lon=lon,
app_id=app_id, app_code=app_code)
# can't get this working directly on dist_iso with __call__ :(
def dist_iso_func(meters=1000):
dist_iso(meters=meters)
widgets.interact(dist_iso_func, meters=(1000, 2000, 200))
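# Note: passing a (min, max, step) tuple to widgets.interact builds an integer slider,
# so the line above gives a slider from 1000 m to 2000 m in 200 m steps, calling
# dist_iso_func (and hence the cached isoline request) on every change.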
# %load -s Isoline utils
class Isoline(object):
def __init__(self, the_map, **kwdict):
self.the_map = the_map
self.isoline = None
self.url = (
'https://isoline.route.api.here.com'
'/routing/7.2/calculateisoline.json'
'?app_id={app_id}&app_code={app_code}'
'&start=geo!{lat},{lon}'
'&mode=fastest;car;traffic:disabled'
'&range={{meters}}' # seconds/meters
'&rangetype=distance' # time/distance
#'&departure=now' # 2013-07-04T17:00:00+02
#'&resolution=20' # meters
).format(**kwdict)
self.cache = {}
def __call__(self, meters=1000):
if meters not in self.cache:
print('loading', meters)
url = self.url.format(meters=meters)
obj = requests.get(url).json()
self.cache[meters] = obj
obj = self.cache[meters]
isoline = obj['response']['isoline'][0]
shape = isoline['component'][0]['shape']
path = [tuple(map(float, pos.split(','))) for pos in shape]
if self.isoline:
self.the_map -= self.isoline
self.isoline = Polyline(locations=path, color='red', weight=2, fill=True)
self.the_map += self.isoline
Explanation: More interactively
End of explanation
here_basemap = utils.build_here_basemap()
layers = [basemap_to_tiles(here_basemap)]
m = Map(center=berlin_lat_lon, layers=layers, zoom=13)
m
marker = Marker(location=berlin_lat_lon)
marker.location
m += marker
m -= marker
m += marker
marker.location = [52.49, 13.39]
loc = marker.location
for i in range(5000):
d_lat = (random.random() - 0.5) / 100
d_lon = (random.random() - 0.5) / 100
marker.location = [loc[0] + d_lat, loc[1] + d_lon]
Explanation: More to come... (in another meetup ;)
dynamic map content (based on traitlets)
streaming data
ZeroMQ integration
sneak preview below
End of explanation
<END_TASK> |
201 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
A Deeper Understanding of Deep Learning
Time for another mini-movie!
In this video, Dan introduces stochastic gradient descent (SGD) and back-propagation and explains how they are used to set the weights in a deep learning model.
Step1: Here is the ReLU activation function link that Dan mentioned.
Now, get ready to train your own models from scratch.
Deep Learning From Scratch
Step2: Let's build our model
Step3: Compile and fit
Step4: You know the drill, practice makes perfect!
Exercise
Step5: Specify Model
STEPS
Step6: Compile Model
Run the command fashion_model.compile.
Specify the following arguments
Step7: Fit Model
Run the command fashion_model.fit.
The arguments you will use are | <ASSISTANT_TASK:>
Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo('kQmHaI5Jw1c', width=800, height=450)
Explanation: A Deeper Understanding of Deep Learning
Time for another mini-movie!
In this video, Dan introduces stochastic gradient descent (SGD) and back-propagation and explains how they are used to set the weights in a deep learning model.
End of explanation
from IPython.display import YouTubeVideo
YouTubeVideo('YbNE3zhtsoo', width=800, height=450)
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from tensorflow.python import keras
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense, Flatten, Conv2D, Dropout
img_rows, img_cols = 28, 28
num_classes = 10
def data_prep(raw):
out_y = keras.utils.to_categorical(raw.label, num_classes)
num_images = raw.shape[0]
x_as_array = raw.values[:, 1:]
x_shaped_array = x_as_array.reshape(num_images, img_rows, img_cols, 1)
out_x = x_shaped_array / 255
return out_x, out_y
train_file = 'inputs/digit_recognizer/train.csv'
raw_data = pd.read_csv(train_file)
x, y = data_prep(raw_data)
print(x[0], y[0])
Explanation: Here is the ReLU activation function link that Dan mentioned.
Now, get ready to train your own models from scratch.
Deep Learning From Scratch
End of explanation
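# Quick illustration of the ReLU activation mentioned above (not part of the lesson
# code): relu(x) = max(0, x), applied element-wise, is what the 'relu' layers below use.
import numpy as np

def relu(x):
    """Element-wise ReLU: negative values become 0, positive values pass through."""
    return np.maximum(0, x)

relu(np.array([-2.0, -0.5, 0.0, 1.5, 3.0]))  # -> array([0. , 0. , 0. , 1.5, 3. ])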
model = Sequential()
model.add(Conv2D(20, kernel_size=(3, 3),
activation='relu',
input_shape=(img_rows, img_cols, 1)))
model.add(Conv2D(20, kernel_size=(3, 3), activation='relu'))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
Explanation: Let's build our model:
End of explanation
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy'])
model.fit(x, y,
batch_size=128,
epochs=2,
validation_split=0.2)
Explanation: Compile and fit:
End of explanation
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow.python import keras
img_rows, img_cols = 28, 28
num_classes = 10
def prep_data(raw, train_size, val_size):  # note: train_size and val_size are accepted but not used in this helper
y = raw[:, 0]
out_y = keras.utils.to_categorical(y, num_classes)
x = raw[:, 1:]
num_images = raw.shape[0]
out_x = x.reshape(num_images, img_rows, img_cols, 1)
out_x = out_x / 255
return out_x, out_y
fashion_file = 'inputs/fashionmnist/train.csv'
fashion_data = np.loadtxt(fashion_file, skiprows=1, delimiter=',')
x, y = prep_data(fashion_data, train_size=50000, val_size=5000)
Explanation: You know the drill: practice makes perfect!
Exercise: Modeling from Scratch
You have seen how to build a model from scratch to identify handwritten digits.
Now it's time to build a model that can identify different types of clothing.
To make models that can be trained quickly, we'll work with low-resolution (and therefore small) images.
For example, the model will identify the image below as a shoe:
Data Preparation
End of explanation
from tensorflow.python import keras
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense, Flatten, Conv2D
fashion_model = Sequential()
fashion_model.add(Conv2D(12, kernel_size = (3, 3),
activation='relu',
input_shape=(img_rows, img_cols, 1)))
fashion_model.add(Conv2D(12, kernel_size=(3,3), activation='relu'))
fashion_model.add(Conv2D(12, kernel_size=(3,3), activation='relu'))
fashion_model.add(Flatten())
fashion_model.add(Dense(100, activation='relu'))
fashion_model.add(Dense(num_classes, activation='softmax'))
fashion_model
Explanation: Specify Model
STEPS:
1. Create a Sequential model. Call it fashion_model.
2. Add 3 Conv2D layers to fashion_model. Give each layer 12 filters, and specify a kernel_size of 3 and a relu activation. You will need to specify the input_shape for the first Conv2D layer. The input shape in this case is (img_rows, img_cols, 1).
3. Add a Flatten layer to fashion_model after the last Conv2D layer.
4. Add a Dense layer with 100 neurons to fashion_model after the Flatten layer.
5. Add your prediction layer to fashion_model. This is a Dense layer. We already have a variable called num_classes. Use this variable when specifying the number of nodes in this layer. The activation should be softmax (or you will have problems later).
End of explanation
fashion_model.compile(loss=keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy'])
Explanation: Compile Model
Run the command fashion_model.compile.
Specify the following arguments:
1. loss = keras.losses.categorical_crossentropy
2. optimizer = 'adam'
3. metrics = ['accuracy']
End of explanation
fashion_model.fit(x, y,
batch_size=100,
epochs=4,
validation_split=0.2)
Explanation: Fit Model
Run the command fashion_model.fit.
The arguments you will use are:
1. The first two are arguments are the data used to fit the model, which are x and y respectively.
2. batch_size = 100
3. epochs = 4
4. validation_split = 0.2
When you run this command, you can watch your model start improving.
You will see validation accuracies after each epoch.
Go get 'em tiger. Roar!
End of explanation
<END_TASK> |
202 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Chemistry Scheme Scope
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables Form
Is Required
Step9: 1.6. Number Of Tracers
Is Required
Step10: 1.7. Family Approach
Is Required
Step11: 1.8. Coupling With Chemical Reactivity
Is Required
Step12: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required
Step13: 2.2. Code Version
Is Required
Step14: 2.3. Code Languages
Is Required
Step15: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required
Step16: 3.2. Split Operator Advection Timestep
Is Required
Step17: 3.3. Split Operator Physical Timestep
Is Required
Step18: 3.4. Split Operator Chemistry Timestep
Is Required
Step19: 3.5. Split Operator Alternate Order
Is Required
Step20: 3.6. Integrated Timestep
Is Required
Step21: 3.7. Integrated Scheme Type
Is Required
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required
Step23: 4.2. Convection
Is Required
Step24: 4.3. Precipitation
Is Required
Step25: 4.4. Emissions
Is Required
Step26: 4.5. Deposition
Is Required
Step27: 4.6. Gas Phase Chemistry
Is Required
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required
Step30: 4.9. Photo Chemistry
Is Required
Step31: 4.10. Aerosols
Is Required
Step32: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required
Step33: 5.2. Global Mean Metrics Used
Is Required
Step34: 5.3. Regional Metrics Used
Is Required
Step35: 5.4. Trend Metrics Used
Is Required
Step36: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required
Step37: 6.2. Matches Atmosphere Grid
Is Required
Step38: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required
Step39: 7.2. Canonical Horizontal Resolution
Is Required
Step40: 7.3. Number Of Horizontal Gridpoints
Is Required
Step41: 7.4. Number Of Vertical Levels
Is Required
Step42: 7.5. Is Adaptive Grid
Is Required
Step43: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required
Step44: 8.2. Use Atmospheric Transport
Is Required
Step45: 8.3. Transport Details
Is Required
Step46: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required
Step47: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required
Step48: 10.2. Method
Is Required
Step49: 10.3. Prescribed Climatology Emitted Species
Is Required
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step51: 10.5. Interactive Emitted Species
Is Required
Step52: 10.6. Other Emitted Species
Is Required
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required
Step54: 11.2. Method
Is Required
Step55: 11.3. Prescribed Climatology Emitted Species
Is Required
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required
Step57: 11.5. Interactive Emitted Species
Is Required
Step58: 11.6. Other Emitted Species
Is Required
Step59: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required
Step60: 12.2. Prescribed Upper Boundary
Is Required
Step61: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required
Step62: 13.2. Species
Is Required
Step63: 13.3. Number Of Bimolecular Reactions
Is Required
Step64: 13.4. Number Of Termolecular Reactions
Is Required
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required
Step67: 13.7. Number Of Advected Species
Is Required
Step68: 13.8. Number Of Steady State Species
Is Required
Step69: 13.9. Interactive Dry Deposition
Is Required
Step70: 13.10. Wet Deposition
Is Required
Step71: 13.11. Wet Oxidation
Is Required
Step72: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry startospheric heterogeneous chemistry
14.1. Overview
Is Required
Step73: 14.2. Gas Phase Species
Is Required
Step74: 14.3. Aerosol Species
Is Required
Step75: 14.4. Number Of Steady State Species
Is Required
Step76: 14.5. Sedimentation
Is Required
Step77: 14.6. Coagulation
Is Required
Step78: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required
Step79: 15.2. Gas Phase Species
Is Required
Step80: 15.3. Aerosol Species
Is Required
Step81: 15.4. Number Of Steady State Species
Is Required
Step82: 15.5. Interactive Dry Deposition
Is Required
Step83: 15.6. Coagulation
Is Required
Step84: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required
Step85: 16.2. Number Of Reactions
Is Required
Step86: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required
Step87: 17.2. Environmental Conditions
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncc', 'noresm2-mh', 'atmoschem')
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: NCC
Source ID: NORESM2-MH
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:24
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Atmospheric chemistry transport scheme turbulence is coupled with chemical reactivity?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Software Properties
Software properties of aerosol code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
*Does the atmospheric chemistry grid match the atmosphere grid?*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry transport
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogenous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogenous atmospheric chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation
<END_TASK> |
203 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Interferometry
The idea behind interferometry is that electromagnetic waves emitted from a certain point in the sky will cause characteristic interference between signals received by antennas. The reason is simply that the wave's travel time differs slightly between the antennas. With the periodic nature of the wave, this means that if we add up the measurements of both antennas, the signals will either sum up or cancel each other out.
Step1: 1. Expected
Let's say that we are able to measure the interference pattern from two point sources $(l_1,m_1)$ and $(l_2,m_2)$ at all points of a $(u,v)$ plane. In reality, this would require a thick mesh of receivers placed exactly perpendicular to the phase centre. This would show us patterns that look as follows | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from matplotlib import cm, colors
import numpy as np
from ipywidgets import interact
plt.rcParams['figure.figsize'] = 16, 8
import sys
sys.path.append('../..')
from crocodile.simulate import *
from crocodile.synthesis import *
from util.visualize import *
Explanation: Interferometry
The idea behind interferometry is that electromagnetic waves emitted from a certain point in the sky will cause characteristic interference between signals received by antennas. The reason is simply that the wave's travel time differs slightly between the antennas. With the periodic nature of the wave, this means that if we add up the measurements of both antennas, the signals will either sum up or cancel each other out.
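A compact way to state this (standard two-element interferometer geometry, written out here for reference rather than taken from the notebook's own code): for a baseline vector $\mathbf{b}$ and a unit vector $\hat{\mathbf{s}}$ towards the source, the geometric delay and the time-averaged power of the summed signals are
$$
\tau = \frac{\mathbf{b}\cdot\hat{\mathbf{s}}}{c}, \qquad \langle |E_1 + E_2|^2 \rangle \propto 1 + \cos(2\pi\nu\tau),
$$
so as the source direction changes, the antenna pair sweeps through constructive ($\nu\tau$ an integer) and destructive ($\nu\tau$ a half-integer) interference, which is the fringe pattern exploited below.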
End of explanation
N = 500; max_uv=25000
max_lm = 1/512/16; step_lm = 1/32768
bounds_lm = (-max_lm,max_lm,step_lm)
extent_lm = (-max_lm,max_lm,-max_lm,max_lm)
extent_uv = extent=(-max_uv,max_uv,-max_uv,max_uv)
def draw_uv(ax, s_split):
ax.set_xlabel('u [$\lambda$]'); ax.set_ylabel('v [$\lambda$]')
ax.set_title("Interference pattern")
return ax.imshow(s_split.real, norm=colors.Normalize(vmin=-1, vmax=1), extent=extent_uv, origin='lower')
def draw_sky(ax, ls, ms):
ax.set_title("Sky (around phase centre)")
ax.set_xlabel('l [$1$]'); ax.set_ylabel('m [$1$]')
ax.plot(ls, ms, 'ro')
ax.set_xlim([-max_lm, max_lm])
ax.set_ylim([-max_lm, max_lm])
def show_pattern_2(l1,m1, l2,m2):
# Generate uvw coordinates
v,u = max_uv*2*coordinates2(N)
w = np.full(u.shape, 0, dtype=float)
uvw = np.concatenate(np.transpose((u,v,w)))
# Simulate for a full uv plane
s = simulate_point(uvw, l1, m1)
s += simulate_point(uvw, l2, m2)
fig = plt.figure()
draw_sky(fig.add_subplot(121), [l1,l2], [m1,m2])
im = draw_uv(fig.add_subplot(122), np.array(np.split(s, N)))
fig.colorbar(im,shrink=.4,pad=0.025)
plt.show()
interact(show_pattern_2,l1=bounds_lm, m1=bounds_lm,l2=bounds_lm, m2=bounds_lm);
Explanation: 1. Expected
Let's say that we are able to measure the interference pattern from two point sources $(l_1,m_1)$ and $(l_2,m_2)$ at all points of a $(u,v)$ plane. In reality, this would require a thick mesh of receivers placed exactly perpendicular to the phase centre. This would show us patterns that look as follows:
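For two point sources the expected pattern is simply the sum of two complex fringes; up to the sign convention used by simulate_point (an assumption, check crocodile.simulate for the exact phase sign), the sampled visibility is
$$
V(u,v) = e^{-2\pi i\,(u l_1 + v m_1)} + e^{-2\pi i\,(u l_2 + v m_2)},
$$
and it is the real part of this sum that is plotted over the $(u,v)$ plane.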
End of explanation
<END_TASK> |
204 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<table>
<tr align=left><td><img align=left src="./images/CC-BY.png">
<td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Marc Spiegelman</td>
</table>
Step1: Fun with polynomials
GOAL
Step2: and plot them
Step3: Now let's find the interpolating polynomial that goes through exactly three points.
$f(-1)=0$, $f(0)=2$, $f(1)=-1$ by solving the invertible system of linear equations
$$
[\, p_0(x)\quad p_1(x)\quad p_2(x)\, ] \mathbf{c} = f(x)
$$
for the three points in $x=[-1,0,1]^T$
Step4: Least Squares problems
Step5: Errors
Now let's show that the error $e= f(x) - A(x)c$ is orthogonal to the column space of $A$ i.e. $A^T e = 0$
Step6: Projection of a function onto $P_2[-1,1]$
Now let's extend this problem to finding the best fit projection of a continuous function $f(x)$ onto $P_2$. While we could extend the previous approach by sampling $f(x)$ at a large number of points and calculating the least-squares solution, we can also solve the continuous problem by changing the definition of the inner product from the dot product in $R^n$ to the inner product for continuous functions
$$
<f,g> = \int_{-1}^{1} fg dx
$$
However the overall approach remains the same as the discrete least squares problem.
If we now consider a function $u \in P_2[-1,1]$ such that
$$
u(x) = \sum_i c_i p_i(x)
$$
then the continous error (or residual) is given by
$$
e(x) = u(x) - f(x)
$$
for the continuous variable $x\in[-1,1]$.
The least square problem now becomes "find $\mathbf{c}\in R^3$ that minimizes $||e||_{L2}$", i.e. the "length" of $e$ in the $L^2$ norm. Alternatively this requires that the error $e(x)$ is orthogonal to all the basis vectors in $P_2$, i.e.
$$
<p_i,e> = 0 \quad \mathrm{for\, }i=0,1,2
$$
or
$$
\int_{-1}^{1} p_i e dx = \int_{-1}^{1} p_i ( u - f) dx = 0
$$
or solve
$$
\int_{-1}^{1} p_i \left(\sum_j c_j p_j(x)\right)dx = \int_{-1}^{1} p_i f dx
$$
for all $i,j=0,1,2$. Rearranging the summation and the integral sign, we can rewrite the problem as
$$
\sum_j M_{ij} c_j = \hat{f}_i
$$
where
$$
M_{ij} = <p_i,p_j>=\int_{-1}^{1} p_i p_j dx\quad \mathrm{and}\quad \hat{f}i = <p_i,f> = \int{-1}^{1} p_i f dx
$$
or in matrix vector notation $M\mathbf{c} = \hat{\mathbf{f}}$ where $M$ is the "mass-matrix (and corresponds to the symmetric matrix $A^TA$) and $\hat{\mathbf{f}}$ is the "load vector" which corresponds to $A^t\mathbf{b}$.
For the simple monomial basis, we can calculate the terms of $M$ easily, but here we will just use scipy's numerical quadrature routines
We'll start by defining our function and calculating its interpolation onto $P_2[-1,1]$ as the unique quadratic that interpolates $f(x)$ at $x=[-1,0,1]$
Step9: Now calculate the mass matrix and load vector and solve for the galerkin projection of $f$ onto $P_2[-1,1]$
Step10: And let's just plot out the three function $f(x)$, $f_{interp}(x)$ it's interpolant, and $u(x)$ it's projection onto $P_2[-1,1]$ | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import scipy.linalg as la
import matplotlib.pyplot as plt
Explanation: <table>
<tr align=left><td><img align=left src="./images/CC-BY.png">
<td>Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Marc Spiegelman</td>
</table>
End of explanation
# lambda functions for each of the basis functions
p0 = lambda x: np.ones(x.shape)
p1 = lambda x: x
p2 = lambda x: x**2
# lambda function for the matrix whose columns are p_i(x)
A = lambda x: np.array([ p0(x), p1(x), p2(x)]).transpose()
# lambda function for any vector in P_2, v = c[0]*p0 + c[1]*p1 + c[2]*p2
v = lambda c,x : np.dot(A(x),c)
x = np.array([-1.,0.,1.])
print(p0(x), p1(x), p2(x))
print(A(x))
c = np.array([1,2,-1])
print(v(c,x))
Explanation: Fun with polynomials
GOAL: Explore the ideas of Interpolation, Least Squares fitting and projection of continuous functions onto the function space $P_2[-1,1]$
The Space $P_2[-1,1]$
Consider the space of all second order polynomials on the closed interval $x\in[-1,1]$ which is a subspace of continuous functions $C^0[-1,1]$. To completely describe a vector space we need a basis: a set of linear independent vectors that span the space. While there are many possible bases for $P_2[-1,1]$, here we will consider the simplest monomial basis $p_0(x)=1$, $p_1(x)=x$, $p_2(x)=x^2$ or
$$
P_2(x)[-1,1] = \mathrm{span}<1,x,x^2>
$$
i.e. every vector in $P_2$ can be written as a linear combination of the basis vectors as
$$
f(x) = c_0p_0 + c_1p_1 + c_2p_2 = c_0 + c_1 x + c_2x^2
$$
The space P_2(x)[-1,1] is said to be isomorphic to $R^3$ as every vector in $P_2$ can be associated with a unique vector in $R^3$
$$
\mathbf{c}= [ c_0, c_1, c_2]^T
$$
here we will set up a bit of python to evaluate polynomials
End of explanation
x = np.linspace(-1,1)
plt.figure()
plt.plot(x,p0(x),label='$p_0$')
# (plt.hold is no longer needed: modern matplotlib overlays successive plot calls by default)
plt.plot(x,p1(x),label='$p_1$')
plt.plot(x,p2(x),label='$p_2$')
plt.xlabel('x')
plt.ylim(-1.5,1.5)
plt.legend(loc='best')
plt.grid()
plt.show()
Explanation: and plot them
End of explanation
x = np.array([-1.,0.,1.])
f = np.array([0.,2.,-1.])
c = la.solve(A(x),f)
# and plot it out
xx = np.linspace(-1,1) # use well sampled space for plotting the quadratic
plt.figure()
# plot the parabola
plt.plot(xx,v(c,xx),'r-')
# plot the interpolating points
plt.plot(x,f,'bo')
plt.xlabel('x')
plt.ylabel('$f(x)$')
plt.ylim(-1.5,2.5)
plt.title('$c={}$: $v ={}p_0 + {}p_1 + {}p_2$'.format(c,c[0],c[1],c[2]))
plt.grid()
plt.show()
Explanation: Now let's find the interpolating polynomial that goes through exactly three points.
$f(-1)=0$, $f(0)=2$, $f(1)=-1$ by solving the invertible system of linear equations
$$
[\, p_0(x)\quad p_1(x)\quad p_2(x)\, ] \mathbf{c} = f(x)
$$
for the three points in $x=[-1,0,1]^T$
End of explanation
# choose 7 evenly spaced points in [-1,1]
x = np.linspace(-1,1,7)
# perturb the parabola with uniform random noise
f = v(c,x) + np.random.uniform(-.5,.5,len(x))
# and plot with respect to the underlying parabola
plt.figure()
plt.plot(x,f,'bo')
# (plt.hold is no longer needed: modern matplotlib overlays successive plot calls by default)
plt.plot(xx,v(c,xx),'r',label='v')
plt.xlabel('x')
plt.ylim(-1.5,2.5)
plt.grid()
# now calculate and plot the leastsquares solution to Ac = f
c_ls,res,rank,s = la.lstsq(A(x),f)
plt.plot(xx,v(c_ls,xx),'g',label='v_lstsq')
plt.title('$c={}$: $v={}p_0 + {}p_1 + {}p_2$'.format(c_ls,c_ls[0],c_ls[1],c_ls[2]))
plt.legend(loc='best')
plt.show()
# and show that this is the same solution we would get if we tried to solve the normal equations direction
AtA = np.dot(A(x).transpose(),A(x))
Atf = np.dot(A(x).transpose(),f)
c_norm = la.solve(AtA,Atf)
print('numpy least-squares c = {}'.format(c_ls))
print('normal equations     = {}'.format(c_norm))
print('difference           = {}'.format(c_ls-c_norm))
print()
print('ATA ={}'.format(AtA))
Explanation: Least Squares problems:
The value of a function at any three distinct points is sufficient to uniquely determine the interpolating quadratic through those points. But suppose we were given more than 3 points, say 7, in which case the matrix $A$ would be $7\times3$ with rank $r=3$ and, unless those 7 points were on the same parabola, there would be no solution to the overdetermined problem. Here we will create that problem by adding more points to the interpolating parabola calculated above and then perturbing it with uniform random noise.
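For reference, the least-squares coefficients are characterized by the normal equations (a standard result, restated here rather than introduced by this notebook):
$$
A^T A\,\mathbf{c} = A^T\mathbf{f} \quad\Longrightarrow\quad \mathbf{c} = (A^T A)^{-1} A^T \mathbf{f},
$$
which is what la.lstsq computes (via a numerically stabler factorization) and what the explicit AtA/Atf solve in this notebook reproduces.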
End of explanation
# calculate the error vector
e = f - v(c_ls,x)
print('error vector\n e={}\n'.format(e))
# and calculate the matrix vector product A^T e
print('A^T e = {}'.format(np.dot(A(x).transpose(),e)))
Explanation: Errors
Now let's show that the error $e= f(x) - A(x)c$ is orthogonal to the column space of $A$ i.e. $A^T e = 0$
End of explanation
# set the function to be projected
f = lambda x : np.cos(2*x) + np.sin(1.5*x)
# calculate the interpolation of f onto P2, when sampled at points -1,0,1
x = np.array([-1., 0., 1.])
c_interp = la.solve(A(x),f(x))
Explanation: Projection of a function onto $P_2[-1,1]$
Now let's extend this problem to finding the best fit projection of a continuous function $f(x)$ onto $P_2$. While we could extend the previous approach by sampling $f(x)$ at a large number of points and calculating the least-squares solution, we can also solve the continuous problem by changing the definition of the inner product from the dot product in $R^n$ to the inner product for continuous functions
$$
<f,g> = \int_{-1}^{1} fg dx
$$
However the overall approach remains the same as the discrete least squares problem.
If we now consider a function $u \in P_2[-1,1]$ such that
$$
u(x) = \sum_i c_i p_i(x)
$$
then the continuous error (or residual) is given by
$$
e(x) = u(x) - f(x)
$$
for the continuous variable $x\in[-1,1]$.
The least square problem now becomes "find $\mathbf{c}\in R^3$ that minimizes $||e||_{L2}$", i.e. the "length" of $e$ in the $L^2$ norm. Alternatively this requires that the error $e(x)$ is orthogonal to all the basis vectors in $P_2$, i.e.
$$
<p_i,e> = 0 \quad \mathrm{for\, }i=0,1,2
$$
or
$$
\int_{-1}^{1} p_i e dx = \int_{-1}^{1} p_i ( u - f) dx = 0
$$
or solve
$$
\int_{-1}^{1} p_i \left(\sum_j c_j p_j(x)\right)dx = \int_{-1}^{1} p_i f dx
$$
for all $i,j=0,1,2$. Rearranging the summation and the integral sign, we can rewrite the problem as
$$
\sum_j M_{ij} c_j = \hat{f}_i
$$
where
$$
M_{ij} = <p_i,p_j>=\int_{-1}^{1} p_i p_j dx\quad \mathrm{and}\quad \hat{f}_i = <p_i,f> = \int_{-1}^{1} p_i f dx
$$
or in matrix vector notation $M\mathbf{c} = \hat{\mathbf{f}}$, where $M$ is the "mass matrix" (and corresponds to the symmetric matrix $A^TA$) and $\hat{\mathbf{f}}$ is the "load vector" (which corresponds to $A^T\mathbf{b}$).
For the simple monomial basis, we can calculate the terms of $M$ easily, but here we will just use scipy's numerical quadrature routines
We'll start by defining our function and calculating its interpolation onto $P_2[-1,1]$ as the unique quadratic that interpolates $f(x)$ at $x=[-1,0,1]$
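For reference, the mass matrix can also be written down exactly for the monomial basis, since $\int_{-1}^{1} x^k dx$ equals $2/(k+1)$ for even $k$ and $0$ for odd $k$:
$$
M = \begin{bmatrix} 2 & 0 & 2/3 \\ 0 & 2/3 & 0 \\ 2/3 & 0 & 2/5 \end{bmatrix},
$$
so the numerical quadrature below should reproduce these entries to within the reported error estimates.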
End of explanation
from scipy.integrate import quad
def mij(i,j,x):
    """integrand for component Mij of the mass matrix"""
p = np.array([1., x, x**2])
return p[i]*p[j]
def fi(i,x,f):
    """integrand for component i of the load vector"""
p = np.array([1., x, x**2])
return p[i]*f(x)
# construct the symmetric mass matrix M_ij = <p_i,p_j>
M = np.zeros((3,3))
fhat = np.zeros(3)
R = np.zeros((3,3)) # quadrature residuals
# loop over the upper triangular elements of M (and fill in the symmetric parts)
for i in range(0,3):
fhat[i] = quad(lambda x: fi(i,x,f),-1.,1.)[0]
for j in range(i,3):
result = quad(lambda x: mij(i,j,x),-1.,1.)
M[i,j] = result[0]
M[j,i] = M[i,j]
R[i,j] = result[1]
R[j,i] = R[i,j]
print('M = {}\n'.format(M))
print('fhat = {}\n'.format(fhat))
# and solve for c
c_galerkin = la.solve(M,fhat)
print('c_galerkin ={}'.format(c_galerkin))
Explanation: Now calculate the mass matrix and load vector and solve for the galerkin projection of $f$ onto $P_2[-1,1]$
End of explanation
# now plot them all out and compare
plt.figure()
plt.plot(xx,f(xx),'r',label='$f(x)$')
# (plt.hold is no longer needed: modern matplotlib overlays successive plot calls by default)
plt.plot(x,f(x),'ro')
plt.plot(xx,v(c_interp,xx),'g',label='$f_{interp}(x)$')
plt.plot(xx,v(c_galerkin,xx),'b',label='$u(x)$')
plt.xlabel('x')
plt.grid()
plt.legend(loc='best')
plt.show()
Explanation: And let's just plot out the three function $f(x)$, $f_{interp}(x)$ it's interpolant, and $u(x)$ it's projection onto $P_2[-1,1]$
End of explanation
<END_TASK> |
205 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Tensorflow Lattice와 형상 제약 조건
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: 필수 패키지 가져오기
Step3: 이 가이드에서 사용되는 기본값
Step4: 레스토랑 순위 지정을 위한 훈련 데이터세트
사용자가 레스토랑 검색 결과를 클릭할지 여부를 결정하는 단순화된 시나리오를 상상해봅니다. 이 작업은 주어진 입력 특성에 따른 클릭률(CTR)을 예측하는 것입니다.
평균 평점(avg_rating)
Step6: 이 CTR 함수의 등고선도를 살펴보겠습니다.
Step7: 데이터 준비하기
이제 합성 데이터세트를 만들어야 합니다. 레스토랑과 해당 특징의 시뮬레이션된 데이터세트를 생성하는 것으로 작업을 시작합니다.
Step8: 훈련, 검증 및 테스트 데이터세트를 생성해 보겠습니다. 검색 결과에 레스토랑이 표시되면 사용자의 참여(클릭 또는 클릭 없음)를 샘플 포인트로 기록할 수 있습니다.
실제로 사용자가 모든 검색 결과를 확인하지 않는 경우가 많습니다. 즉, 사용자는 현재 사용 중인 순위 모델에서 이미 '좋은' 것으로 간주되는 식당만 볼 수 있습니다. 결과적으로 '좋은' 레스토랑은 훈련 데이터세트에서 더 자주 좋은 인상을 남기고 더 과장되게 표현됩니다. 더 많은 특성을 사용할 때 훈련 데이터세트에서는 특성 공간의 '나쁜' 부분에 큰 간격이 생길 수 있습니다.
모델이 순위 지정에 사용되면 훈련 데이터세트로 잘 표현되지 않는 보다 균일한 분포로 모든 관련 결과에 대해 평가되는 경우가 많습니다. 이 경우 과도하게 표현된 데이터 포인트에 과대 적합이 발생하여 일반화될 수 없기 때문에, 유연하고 복잡한 모델은 실패할 수 있습니다. 이 문제는 도메인 지식을 적용하여 모델이 훈련 데이터세트에서 선택할 수 없을 때 합리적인 예측을 할 수 있도록 안내하는 형상 제약 조건을 추가함으로써 처리합니다.
이 예에서 훈련 데이터세트는 대부분 우수하고 인기 있는 음식점과의 사용자 상호 작용으로 구성됩니다. 테스트 데이터세트에는 위에서 설명한 평가 설정을 시뮬레이션하도록 균일한 분포가 있습니다. 해당 테스트 데이터세트는 실제 문제 설정에서는 사용할 수 없습니다.
Step9: 훈련 및 평가에 사용되는 input_fns 정의하기
Step10: 그래디언트 Boosted 트리 적합화하기
avg_rating과 num_reviews 두 가지 특성으로 시작하겠습니다.
검증 및 테스트 메트릭을 플롯하고 계산하기 위한 몇 가지 보조 함수를 만듭니다.
Step11: TensorFlow 그래디언트 boosted 결정 트리를 데이터세트에 적합하도록 맞출 수 있습니다.
Step12: 모델이 실제 CTR의 일반적인 형상을 포착하고 적절한 검증 메트릭을 가지고 있지만, 입력 공간의 여러 부분에서 반직관적인 동작을 보입니다. 평균 평점 또는 리뷰 수가 증가하면 예상 CTR이 감소하는데, 이는 훈련 데이터세트에서 잘 다루지 않는 영역에 샘플 포인트가 부족하기 때문입니다. 모델은 데이터에서만 올바른 동작을 추론할 방법이 없습니다.
이 문제를 해결하기 위해 모델이 평균 평점과 리뷰 수에 대해 단조롭게 증가하는 값을 출력해야 한다는 형상 제약 조건을 적용합니다. 나중에 TFL에서 이를 구현하는 방법을 살펴보겠습니다.
DNN 적합화하기
DNN 분류자로 같은 단계를 반복할 수 있습니다. 여기서 비슷한 패턴이 관찰되는데 리뷰 수가 적은 샘플 포인트가 충분하지 않으면 무의미한 외삽이 발생합니다. 검증 메트릭이 트리 솔루션보다 우수하더라도 테스트 메트릭은 훨씬 나쁘다는 점을 유의하세요.
Step13: 형상 제약 조건
TensorFlow Lattice(TFL)는 훈련 데이터 이상의 모델 동작을 보호하기 위해 형상 제약 조건을 적용하는 데 중점을 둡니다. 이러한 형상 제약 조건은 TFL Keras 레이어에 적용됩니다. 자세한 내용은 JMLR 논문에서 찾을 수 있습니다.
이 튜토리얼에서는 다양한 형상 제약을 다루기 위해 준비된 TF estimator를 사용하지만, 해당 모든 단계는 TFL Keras 레이어에서 생성된 모델로 수행할 수 있습니다.
다른 TensorFlow estimator와 마찬가지로 준비된 TFL estimator는 특성 열을 사용하여 입력 형식을 정의하고 훈련 input_fn을 사용하여 데이터를 전달합니다. 준비된 TFL estimator을 사용하려면 다음이 필요합니다.
모델 구성
Step14: CalibratedLatticeConfig를 사용하면 먼저 calibrator를 각 입력(숫자 특성에 대한 부분 선형 함수)에 적용한 다음 격자 레이어를 적용하여 보정된 특성을 비선형적으로 융합하는 준비된 분류자를 생성합니다. tfl.visualization을 사용하여 모델을 시각화할 수 있습니다. 특히 다음 플롯은 미리 준비된 estimator에 포함된 두 개의 훈련된 calibrator를 보여줍니다.
Step15: 제약 조건이 추가되면 평균 평점이 증가하거나 리뷰 수가 증가함에 따라 예상 CTR이 항상 증가합니다. 이것은 calibrator와 격자가 단조로운지 확인하여 수행됩니다.
감소 수익
감소 수익은 특정 특성값을 증가시키는 한계 이득이 값이 증가함에 따라 감소한다는 것을 의미합니다. 해당 경우에는 num_reviews 특성이 이 패턴을 따를 것으로 예상하므로 그에 따라 calibrator를 구성할 수 있습니다. 감소하는 수익률은 두 가지 충분한 조건으로 분해할 수 있습니다.
calibrator가 단조롭게 증가하고 있으며
calibrator는 오목합니다.
Step16: 오목 제약 조건을 추가하여 테스트 메트릭이 어떻게 향상되는지 확인하세요. 예측 플롯은 또한 지상 진실과 더 유사합니다.
2D 형상 제약 조건
Step17: 다음 플롯은 훈련된 격자 함수를 나타냅니다. 신뢰 제약 조건으로 인해, 보정된 num_reviews의 큰 값이 보정된 avg_rating에 대한 경사를 더 높여서 격자 출력에서 더 중요한 이동이 있을 것을 예상합니다.
Step18: Smoothing Calibrator
이제 avg_rating의 calibrator를 살펴보겠습니다. 단조롭게 증가하지만 기울기의 변화는 갑작스럽고 해석하기 어렵습니다. 이는 regularizer_configs의 regularizer 설정으로 이 calibrator를 스무딩하는 것을 고려해볼 수 있음을 의미합니다.
여기에서는 곡률의 변화를 줄이기 위해 wrinkle regularizer를 적용합니다. 또한 laplacian regularizer를 사용하여 calibrator를 평면화하고 hessian regularizer를 사용하여 보다 선형적으로 만들 수 있습니다.
Step19: 이제 calibrator가 매끄럽고 전체 예상 CTR이 실제와 더 잘 일치합니다. 해당 적용은 테스트 메트릭과 등고선 플롯 모두에 반영됩니다.
범주형 보정을 위한 부분 단조
지금까지 모델에서 숫자 특성 중 두 가지만 사용했습니다. 여기에서는 범주형 보정 레이어를 사용하여 세 번째 특성을 추가합니다. 다시 플롯 및 메트릭 계산을 위한 도우미 함수를 설정하는 것으로 시작합니다.
Step20: 세 번째 특성인 dollar_rating을 포함하려면 범주형 특성이 특성 열과 특성 구성 모두에서 TFL 내에서 약간 다른 처리가 필요하다는 점을 기억해야 합니다. 여기서 다른 모든 입력이 고정될 때 'DD' 레스토랑의 출력이 'D' 레스토랑보다 커야 한다는 부분 단조 제약 조건을 적용합니다. 해당 적용은 특성 구성에서 monotonicity 설정을 사용하여 수행됩니다.
Step21: 범주형 calibrator는 모델 출력의 선호도를 보여줍니다. DD > D > DDD > DDDD는 설정과 일치합니다. 결측값에 대한 열도 있습니다. 훈련 및 테스트 데이터에는 누락된 특성이 없지만, 모델은 다운스트림 모델 제공 중에 발생하는 누락된 값에 대한 대체 값을 제공합니다.
dollar_rating을 조건으로 이 모델의 예상 CTR도 플롯합니다. 필요한 모든 제약 조건이 각 슬라이스에서 충족됩니다.
출력 보정
지금까지 훈련한 모든 TFL 모델의 경우 격자 레이어(모델 그래프에서 'Lattice'로 표시됨)가 모델 예측을 직접 출력합니다. 때때로 격자 출력이 모델 출력을 내도록 재조정되어야 하는지는 확실하지 않습니다.
특성은 $log$ 카운트이고 레이블은 카운트입니다.
격자는 매우 적은 수의 꼭짓점을 갖도록 구성되지만 레이블 분포는 비교적 복잡합니다.
이러한 경우 격자 출력과 모델 출력 사이에 또 다른 calibrator를 추가하여 모델 유연성을 높일 수 있습니다. 방금 구축한 모델에 5개의 키포인트가 있는 보정 레이어를 추가하겠습니다. 또한 함수를 원활하게 유지하기 위해 출력 calibrator용 regularizer를 추가합니다. | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
#@test {"skip": true}
!pip install tensorflow-lattice
Explanation: TensorFlow Lattice and shape constraints
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/lattice/tutorials/shape_constraints"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org에서 보기</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/lattice/tutorials/shape_constraints.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab에서 실행하기</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/lattice/tutorials/shape_constraints.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub에서소스 보기</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/lattice/tutorials/shape_constraints.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">노트북 다운로드하기</a></td>
</table>
Overview
This tutorial is an overview of the constraints and regularizers provided by the TensorFlow Lattice (TFL) library. Here we use TFL canned estimators on a synthetic dataset, but note that everything in this tutorial can also be done with models constructed from TFL Keras layers.
Before proceeding, make sure your runtime has all the required packages installed (as imported in the code cells below).
Setup
Installing the TF Lattice package
End of explanation
import tensorflow as tf
from IPython.core.pylabtools import figsize
import itertools
import logging
import matplotlib
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import sys
import tensorflow_lattice as tfl
logging.disable(sys.maxsize)
Explanation: Importing required packages
End of explanation
NUM_EPOCHS = 1000
BATCH_SIZE = 64
LEARNING_RATE=0.01
Explanation: Default values used in this guide
End of explanation
def click_through_rate(avg_ratings, num_reviews, dollar_ratings):
dollar_rating_baseline = {"D": 3, "DD": 2, "DDD": 4, "DDDD": 4.5}
return 1 / (1 + np.exp(
np.array([dollar_rating_baseline[d] for d in dollar_ratings]) -
avg_ratings * np.log1p(num_reviews) / 4))
Explanation: Training dataset for ranking restaurants
Imagine a simplified scenario where we want to determine whether or not users will click on a restaurant search result. The task is to predict the click-through rate (CTR) given the input features:
Average rating (avg_rating): a numeric feature with values in the range [1,5].
Number of reviews (num_reviews): a numeric feature whose values are capped at 200, used as a measure of trendiness.
Dollar rating (dollar_rating): a categorical feature with string values in the set {"D", "DD", "DDD", "DDDD"}.
Here we create a synthetic dataset where the true CTR is given by the formula: $$ CTR = 1 / (1 + exp{\mbox{b(dollar_rating)} - \mbox{avg_rating} \times log(\mbox{num_reviews}) / 4}) $$ where $b(\cdot)$ translates each dollar_rating to a baseline value: $$ \mbox{D}\to 3,\ \mbox{DD}\to 2,\ \mbox{DDD}\to 4,\ \mbox{DDDD}\to 4.5. $$
This formula reflects typical user patterns, e.g. with everything else fixed, users prefer restaurants with higher star ratings, and "$$" restaurants receive more clicks than "$", followed by "$$$" and "$$$$".
End of explanation
def color_bar():
bar = matplotlib.cm.ScalarMappable(
norm=matplotlib.colors.Normalize(0, 1, True),
cmap="viridis",
)
bar.set_array([0, 1])
return bar
def plot_fns(fns, split_by_dollar=False, res=25):
  """Generates contour plots for a list of (name, fn) functions."""
num_reviews, avg_ratings = np.meshgrid(
np.linspace(0, 200, num=res),
np.linspace(1, 5, num=res),
)
if split_by_dollar:
dollar_rating_splits = ["D", "DD", "DDD", "DDDD"]
else:
dollar_rating_splits = [None]
if len(fns) == 1:
fig, axes = plt.subplots(2, 2, sharey=True, tight_layout=False)
else:
fig, axes = plt.subplots(
len(dollar_rating_splits), len(fns), sharey=True, tight_layout=False)
axes = axes.flatten()
axes_index = 0
for dollar_rating_split in dollar_rating_splits:
for title, fn in fns:
if dollar_rating_split is not None:
dollar_ratings = np.repeat(dollar_rating_split, res**2)
values = fn(avg_ratings.flatten(), num_reviews.flatten(),
dollar_ratings)
title = "{}: dollar_rating={}".format(title, dollar_rating_split)
else:
values = fn(avg_ratings.flatten(), num_reviews.flatten())
subplot = axes[axes_index]
axes_index += 1
subplot.contourf(
avg_ratings,
num_reviews,
np.reshape(values, (res, res)),
vmin=0,
vmax=1)
subplot.title.set_text(title)
subplot.set(xlabel="Average Rating")
subplot.set(ylabel="Number of Reviews")
subplot.set(xlim=(1, 5))
_ = fig.colorbar(color_bar(), cax=fig.add_axes([0.95, 0.2, 0.01, 0.6]))
figsize(11, 11)
plot_fns([("CTR", click_through_rate)], split_by_dollar=True)
Explanation: Let's take a look at the contour plot of this CTR function.
End of explanation
def sample_restaurants(n):
avg_ratings = np.random.uniform(1.0, 5.0, n)
num_reviews = np.round(np.exp(np.random.uniform(0.0, np.log(200), n)))
dollar_ratings = np.random.choice(["D", "DD", "DDD", "DDDD"], n)
ctr_labels = click_through_rate(avg_ratings, num_reviews, dollar_ratings)
return avg_ratings, num_reviews, dollar_ratings, ctr_labels
np.random.seed(42)
avg_ratings, num_reviews, dollar_ratings, ctr_labels = sample_restaurants(2000)
figsize(5, 5)
fig, axs = plt.subplots(1, 1, sharey=False, tight_layout=False)
for rating, marker in [("D", "o"), ("DD", "^"), ("DDD", "+"), ("DDDD", "x")]:
plt.scatter(
x=avg_ratings[np.where(dollar_ratings == rating)],
y=num_reviews[np.where(dollar_ratings == rating)],
c=ctr_labels[np.where(dollar_ratings == rating)],
vmin=0,
vmax=1,
marker=marker,
label=rating)
plt.xlabel("Average Rating")
plt.ylabel("Number of Reviews")
plt.legend()
plt.xlim((1, 5))
plt.title("Distribution of restaurants")
_ = fig.colorbar(color_bar(), cax=fig.add_axes([0.95, 0.2, 0.01, 0.6]))
Explanation: Preparing the data
We now need to create our synthetic dataset. We start by generating a simulated dataset of restaurants and their features.
End of explanation
def sample_dataset(n, testing_set):
(avg_ratings, num_reviews, dollar_ratings, ctr_labels) = sample_restaurants(n)
if testing_set:
# Testing has a more uniform distribution over all restaurants.
num_views = np.random.poisson(lam=3, size=n)
else:
# Training/validation datasets have more views on popular restaurants.
num_views = np.random.poisson(lam=ctr_labels * num_reviews / 50.0, size=n)
return pd.DataFrame({
"avg_rating": np.repeat(avg_ratings, num_views),
"num_reviews": np.repeat(num_reviews, num_views),
"dollar_rating": np.repeat(dollar_ratings, num_views),
"clicked": np.random.binomial(n=1, p=np.repeat(ctr_labels, num_views))
})
# Generate datasets.
np.random.seed(42)
data_train = sample_dataset(500, testing_set=False)
data_val = sample_dataset(500, testing_set=False)
data_test = sample_dataset(500, testing_set=True)
# Plotting dataset densities.
figsize(12, 5)
fig, axs = plt.subplots(1, 2, sharey=False, tight_layout=False)
for ax, data, title in [(axs[0], data_train, "training"),
(axs[1], data_test, "testing")]:
_, _, _, density = ax.hist2d(
x=data["avg_rating"],
y=data["num_reviews"],
bins=(np.linspace(1, 5, num=21), np.linspace(0, 200, num=21)),
density=True,
cmap="Blues",
)
ax.set(xlim=(1, 5))
ax.set(ylim=(0, 200))
ax.set(xlabel="Average Rating")
ax.set(ylabel="Number of Reviews")
ax.title.set_text("Density of {} examples".format(title))
_ = fig.colorbar(density, ax=ax)
Explanation: Let's create the training, validation and testing datasets. When a restaurant is viewed in a search result, we can record the user's engagement (click or no click) as a sample point.
In practice, users often do not go through all search results: users are likely to only see restaurants that are already considered "good" by the ranking model currently in use. As a result, "good" restaurants are more frequently impressed and over-represented in the training dataset. When using more features, the training dataset can have large gaps in the "bad" parts of the feature space.
When the model is used for ranking, it is often evaluated on all relevant results with a more uniform distribution that is not well represented by the training dataset. A flexible and complicated model might fail in this case because it overfits the over-represented data points and so does not generalize. We handle this issue by applying domain knowledge to add shape constraints that guide the model to make reasonable predictions when it cannot pick them up from the training dataset.
In this example, the training dataset mostly consists of user interactions with good and popular restaurants. The testing dataset has a uniform distribution to simulate the evaluation setting discussed above. Note that such a testing dataset will not be available in a real problem setting.
End of explanation
train_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_train,
y=data_train["clicked"],
batch_size=BATCH_SIZE,
num_epochs=NUM_EPOCHS,
shuffle=False,
)
# feature_analysis_input_fn is used for TF Lattice estimators.
feature_analysis_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_train,
y=data_train["clicked"],
batch_size=BATCH_SIZE,
num_epochs=1,
shuffle=False,
)
val_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_val,
y=data_val["clicked"],
batch_size=BATCH_SIZE,
num_epochs=1,
shuffle=False,
)
test_input_fn = tf.compat.v1.estimator.inputs.pandas_input_fn(
x=data_test,
y=data_test["clicked"],
batch_size=BATCH_SIZE,
num_epochs=1,
shuffle=False,
)
Explanation: Defining the input_fns used for training and evaluation
End of explanation
def analyze_two_d_estimator(estimator, name):
# Extract validation metrics.
metric = estimator.evaluate(input_fn=val_input_fn)
print("Validation AUC: {}".format(metric["auc"]))
metric = estimator.evaluate(input_fn=test_input_fn)
print("Testing AUC: {}".format(metric["auc"]))
def two_d_pred(avg_ratings, num_reviews):
results = estimator.predict(
tf.compat.v1.estimator.inputs.pandas_input_fn(
x=pd.DataFrame({
"avg_rating": avg_ratings,
"num_reviews": num_reviews,
}),
shuffle=False,
))
return [x["logistic"][0] for x in results]
def two_d_click_through_rate(avg_ratings, num_reviews):
return np.mean([
click_through_rate(avg_ratings, num_reviews,
np.repeat(d, len(avg_ratings)))
for d in ["D", "DD", "DDD", "DDDD"]
],
axis=0)
figsize(11, 5)
plot_fns([("{} Estimated CTR".format(name), two_d_pred),
("CTR", two_d_click_through_rate)],
split_by_dollar=False)
Explanation: Fitting gradient boosted trees
Let's start off with only two features: avg_rating and num_reviews.
We create a few auxiliary functions for plotting and calculating the validation and test metrics.
End of explanation
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
gbt_estimator = tf.estimator.BoostedTreesClassifier(
feature_columns=feature_columns,
# Hyper-params optimized on validation set.
n_batches_per_layer=1,
max_depth=2,
n_trees=50,
learning_rate=0.05,
config=tf.estimator.RunConfig(tf_random_seed=42),
)
gbt_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(gbt_estimator, "GBT")
Explanation: We can fit a TensorFlow gradient boosted decision tree to the dataset.
End of explanation
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
dnn_estimator = tf.estimator.DNNClassifier(
feature_columns=feature_columns,
# Hyper-params optimized on validation set.
hidden_units=[16, 8, 8],
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
dnn_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(dnn_estimator, "DNN")
Explanation: Although the model has captured the general shape of the true CTR and has decent validation metrics, it shows counter-intuitive behaviour in several parts of the input space: the estimated CTR decreases as the average rating or the number of reviews increases. This is due to a lack of sample points in areas not well covered by the training dataset, and the model has no way to deduce the correct behaviour from the data alone.
To solve this problem, we apply the shape constraint that the model must output values that are monotonically increasing with respect to both the average rating and the number of reviews. We will later see how to implement this in TFL.
Fitting a DNN
We can repeat the same steps with a DNN classifier. A similar pattern can be observed: not having enough sample points with a small number of reviews results in nonsensical extrapolation. Note that even though the validation metric is better than the tree solution, the testing metric is much worse.
End of explanation
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
Explanation: Shape constraints
TensorFlow Lattice (TFL) is focused on enforcing shape constraints to safeguard model behaviour beyond the training data. These shape constraints are applied to TFL Keras layers. Their details can be found in the JMLR paper.
In this tutorial we use TFL canned estimators to cover the various shape constraints, but note that all of these steps can be done with models created from TFL Keras layers.
As with any other TensorFlow estimator, TFL canned estimators use feature columns to define the input format and a training input_fn to pass in the data. Using a canned TFL estimator also requires:
a model config: defining the model architecture and per-feature shape constraints and regularizers.
a feature analysis input_fn: a TF input_fn passing data for TFL initialization.
For more detailed descriptions, please refer to the canned estimators tutorial or the API docs.
Monotonicity
We first address the monotonicity concerns by adding monotonicity shape constraints to the two features.
To apply shape constraints in TFL, we specify the constraints in the feature configs. The following code shows how we can require the output to be monotonically increasing with respect to both num_reviews and avg_rating by setting monotonicity="increasing".
End of explanation
def save_and_visualize_lattice(tfl_estimator):
saved_model_path = tfl_estimator.export_saved_model(
"/tmp/TensorFlow_Lattice_101/",
tf.estimator.export.build_parsing_serving_input_receiver_fn(
feature_spec=tf.feature_column.make_parse_example_spec(
feature_columns)))
model_graph = tfl.estimators.get_model_graph(saved_model_path)
figsize(8, 8)
tfl.visualization.draw_model_graph(model_graph)
return model_graph
_ = save_and_visualize_lattice(tfl_estimator)
Explanation: Using a CalibratedLatticeConfig creates a canned classifier that first applies a calibrator to each input (a piece-wise linear function for numeric features) followed by a lattice layer to non-linearly fuse the calibrated features. We can use tfl.visualization to visualize the model. In particular, the following plot shows the two trained calibrators included in the canned estimator.
End of explanation
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
Explanation: With the constraints added, the estimated CTR will always increase as the average rating increases or as the number of reviews increases. This is done by making sure that the calibrators and the lattice are monotonic.
Diminishing returns
Diminishing returns means that the marginal gain of increasing a certain feature value decreases as we increase the value. In our case we expect the num_reviews feature to follow this pattern, so we can configure its calibrator accordingly. Notice that we can decompose diminishing returns into two sufficient conditions:
the calibrator is monotonically increasing, and
the calibrator is concave.
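Written out for a piece-wise linear calibrator with keypoints $x_1 < \dots < x_K$ (a restatement of the two conditions above, not additional library behaviour), this means every segment slope is non-negative and the slopes never increase from left to right:
$$
0 \le \frac{c(x_{k+1}) - c(x_k)}{x_{k+1} - x_k} \le \frac{c(x_k) - c(x_{k-1})}{x_k - x_{k-1}}.
$$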
End of explanation
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
# Larger num_reviews indicating more trust in avg_rating.
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
model_graph = save_and_visualize_lattice(tfl_estimator)
Explanation: Notice how the testing metric improves by adding the concavity constraint. The prediction plot also better resembles the ground truth.
2D shape constraint: trust
A 5-star rating for a restaurant with only one or two reviews is likely an unreliable rating (the restaurant experience might actually be bad), whereas a 4-star rating for a restaurant with hundreds of reviews is much more reliable (the restaurant experience is likely good in this case). We can see that the number of reviews of a restaurant affects how much trust we place in its average rating.
We can exercise TFL trust constraints to inform the model that the larger (or smaller) value of one feature indicates more reliance on, or trust in, another feature. This is done by setting the reflects_trust_in configuration in the feature config.
End of explanation
lat_mesh_n = 12
lat_mesh_x, lat_mesh_y = tfl.test_utils.two_dim_mesh_grid(
lat_mesh_n**2, 0, 0, 1, 1)
lat_mesh_fn = tfl.test_utils.get_hypercube_interpolation_fn(
model_graph.output_node.weights.flatten())
lat_mesh_z = [
lat_mesh_fn([lat_mesh_x.flatten()[i],
lat_mesh_y.flatten()[i]]) for i in range(lat_mesh_n**2)
]
trust_plt = tfl.visualization.plot_outputs(
(lat_mesh_x, lat_mesh_y),
{"Lattice Lookup": lat_mesh_z},
figsize=(6, 6),
)
trust_plt.title("Trust")
trust_plt.xlabel("Calibrated avg_rating")
trust_plt.ylabel("Calibrated num_reviews")
trust_plt.show()
Explanation: The following plot presents the trained lattice function. Due to the trust constraint, we expect that larger values of the calibrated num_reviews will force a higher slope with respect to the calibrated avg_rating, resulting in a more significant move in the lattice output.
End of explanation
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
)
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_two_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
Explanation: Smoothing calibrators
Let's now take a look at the calibrator for avg_rating. Though it is monotonically increasing, the changes in its slopes are abrupt and hard to interpret. That suggests we might want to consider smoothing this calibrator using a regularizer setup in the regularizer_configs.
Here we apply a wrinkle regularizer to reduce changes in the curvature. You can also use the laplacian regularizer to flatten the calibrator and the hessian regularizer to make it more linear.
End of explanation
def analyze_three_d_estimator(estimator, name):
# Extract validation metrics.
metric = estimator.evaluate(input_fn=val_input_fn)
print("Validation AUC: {}".format(metric["auc"]))
metric = estimator.evaluate(input_fn=test_input_fn)
print("Testing AUC: {}".format(metric["auc"]))
def three_d_pred(avg_ratings, num_reviews, dollar_rating):
results = estimator.predict(
tf.compat.v1.estimator.inputs.pandas_input_fn(
x=pd.DataFrame({
"avg_rating": avg_ratings,
"num_reviews": num_reviews,
"dollar_rating": dollar_rating,
}),
shuffle=False,
))
return [x["logistic"][0] for x in results]
figsize(11, 22)
plot_fns([("{} Estimated CTR".format(name), three_d_pred),
("CTR", click_through_rate)],
split_by_dollar=True)
Explanation: The calibrators are now smooth, and the overall estimated CTR better matches the ground truth. This is reflected both in the testing metric and in the contour plots.
Partial monotonicity for categorical calibration
So far we have been using only two of the numeric features in the model. Here we will add a third feature using a categorical calibration layer. Again we start by setting up helper functions for plotting and metric calculation.
End of explanation
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
tf.feature_column.categorical_column_with_vocabulary_list(
"dollar_rating",
vocabulary_list=["D", "DD", "DDD", "DDDD"],
dtype=tf.string,
default_value=0),
]
model_config = tfl.configs.CalibratedLatticeConfig(
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
),
tfl.configs.FeatureConfig(
name="dollar_rating",
lattice_size=2,
pwl_calibration_num_keypoints=4,
# Here we only specify one monotonicity:
# `D` resturants has smaller value than `DD` restaurants
monotonicity=[("D", "DD")],
),
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_three_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
Explanation: To include the third feature, dollar_rating, we should recall that categorical features require a slightly different treatment in TFL, both as a feature column and as a feature config. Here we enforce the partial monotonicity constraint that outputs for "DD" restaurants should be larger than outputs for "D" restaurants when all other inputs are fixed. This is done using the monotonicity setting in the feature config.
End of explanation
feature_columns = [
tf.feature_column.numeric_column("num_reviews"),
tf.feature_column.numeric_column("avg_rating"),
tf.feature_column.categorical_column_with_vocabulary_list(
"dollar_rating",
vocabulary_list=["D", "DD", "DDD", "DDDD"],
dtype=tf.string,
default_value=0),
]
model_config = tfl.configs.CalibratedLatticeConfig(
output_calibration=True,
output_calibration_num_keypoints=5,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="output_calib_wrinkle", l2=0.1),
],
feature_configs=[
tfl.configs.FeatureConfig(
name="num_reviews",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_convexity="concave",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
reflects_trust_in=[
tfl.configs.TrustConfig(
feature_name="avg_rating", trust_type="edgeworth"),
],
),
tfl.configs.FeatureConfig(
name="avg_rating",
lattice_size=2,
monotonicity="increasing",
pwl_calibration_num_keypoints=20,
regularizer_configs=[
tfl.configs.RegularizerConfig(name="calib_wrinkle", l2=1.0),
],
),
tfl.configs.FeatureConfig(
name="dollar_rating",
lattice_size=2,
pwl_calibration_num_keypoints=4,
# Here we only specify one monotonicity:
# `D` resturants has smaller value than `DD` restaurants
monotonicity=[("D", "DD")],
),
])
tfl_estimator = tfl.estimators.CannedClassifier(
feature_columns=feature_columns,
model_config=model_config,
feature_analysis_input_fn=feature_analysis_input_fn,
optimizer=tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE),
config=tf.estimator.RunConfig(tf_random_seed=42),
)
tfl_estimator.train(input_fn=train_input_fn)
analyze_three_d_estimator(tfl_estimator, "TF Lattice")
_ = save_and_visualize_lattice(tfl_estimator)
Explanation: The categorical calibrator shows the preference of the model output: DD > D > DDD > DDDD, which is consistent with our setup. Notice there is also a column for missing values. Though there is no missing feature in our training and testing data, the model provides an imputation for a missing value should one occur during downstream model serving.
Here we also plot the predicted CTR of this model conditioned on dollar_rating. Notice that all the constraints we required are fulfilled in each of the slices.
Output calibration
For all the TFL models we have trained so far, the lattice layer (indicated as "Lattice" in the model graph) directly outputs the model prediction. Sometimes we are not sure whether the lattice output should be rescaled to emit model outputs:
the features are $log$ counts while the labels are counts,
the lattice is configured to have very few vertices but the label distribution is relatively complicated.
In those cases we can add another calibrator between the lattice output and the model output to increase model flexibility. Here let's add a calibrator layer with 5 keypoints to the model we just built. We also add a regularizer for the output calibrator to keep the function smooth.
이러한 경우 격자 출력과 모델 출력 사이에 또 다른 calibrator를 추가하여 모델 유연성을 높일 수 있습니다. 방금 구축한 모델에 5개의 키포인트가 있는 보정 레이어를 추가하겠습니다. 또한 함수를 원활하게 유지하기 위해 출력 calibrator용 regularizer를 추가합니다.
End of explanation
<END_TASK> |
206 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Project 1
Used Vehicle Price Prediction
Introduction
1.2 Million listings scraped from TrueCar.com - Price, Mileage, Make, Model dataset from Kaggle
Step1: Exercise P1.1 (50%)
Develop a machine learning model that predicts the price of a car using ['Year', 'Mileage', 'State', 'Make', 'Model'] as inputs
Submit the prediction of the testing set to Kaggle
https
Step2: Submission example | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
data = pd.read_csv('https://github.com/albahnsen/PracticalMachineLearningClass/raw/master/datasets/dataTrain_carListings.zip')
data.head()
data.shape
data.Price.describe()
data.plot(kind='scatter', y='Price', x='Year')
data.plot(kind='scatter', y='Price', x='Mileage')
data.columns
Explanation: Project 1
Used Vehicle Price Prediction
Introduction
1.2 Million listings scraped from TrueCar.com - Price, Mileage, Make, Model dataset from Kaggle: data
Each observation represents the price of a used car
End of explanation
data_test = pd.read_csv('https://github.com/albahnsen/PracticalMachineLearningClass/raw/master/datasets/dataTest_carListings.zip', index_col=0)
data_test.head()
data_test.shape
Explanation: Exercise P1.1 (50%)
Develop a machine learning model that predicts the price of a car using ['Year', 'Mileage', 'State', 'Make', 'Model'] as inputs (one possible baseline is sketched after the evaluation criteria below)
Submit the prediction of the testing set to Kaggle
https://www.kaggle.com/c/miia4200-20191-p1-usedcarpriceprediction
Evaluation:
25% - Performance of the model in the Kaggle Private Leaderboard
25% - Notebook explaining the modeling process
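One possible starting point is sketched below. The feature names follow the dataset loaded above, but the encoding, the choice of scikit-learn's RandomForestRegressor and the hyperparameters are illustrative assumptions, not the reference solution for this exercise.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
# Encode the categorical columns with integer codes shared between train and test.
cat_cols = ['State', 'Make', 'Model']
num_cols = ['Year', 'Mileage']
X = data[num_cols].copy()
X_test = data_test[num_cols].copy()
for col in cat_cols:
    cats = pd.Categorical(pd.concat([data[col], data_test[col]], axis=0))
    X[col] = pd.Categorical(data[col], categories=cats.categories).codes
    X_test[col] = pd.Categorical(data_test[col], categories=cats.categories).codes
# Hold out part of the training data to estimate generalization error.
X_train, X_val, y_train, y_val = train_test_split(X, data['Price'], test_size=0.2, random_state=42)
model = RandomForestRegressor(n_estimators=50, n_jobs=-1, random_state=42)  # illustrative settings
model.fit(X_train, y_train)
print(mean_squared_error(y_val, model.predict(X_val)) ** 0.5)  # validation RMSE
y_pred = pd.DataFrame(model.predict(X_test), index=data_test.index, columns=['Price'])
y_pred.to_csv('test_submission.csv', index_label='ID')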
End of explanation
import numpy as np
np.random.seed(42)
y_pred = pd.DataFrame(np.random.rand(data_test.shape[0]) * 75000 + 5000, index=data_test.index, columns=['Price'])
y_pred.to_csv('test_submission.csv', index_label='ID')
y_pred.head()
Explanation: Submission example
End of explanation
<END_TASK> |
207 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Automated Assessment of Car Damage Photos using SAS
Import the required packages including swat for accessing CAS actions
https
Step1: Set up the environment and Connect to SAS from Python
Creating an Authinfo file
Step2: Load images and resize
Step3: Convert colours
Step4: Apply noise reduction and binarization
Step5: Compare images with the labeled images in the historical data
Using the similarity index for decision making | <ASSISTANT_TASK:>
Python Code:
# import the required packages
from swat import *
from pprint import pprint
import numpy as np
import matplotlib.pyplot as plt
import cv2
# define the function to display the processed image files.
def imageShow(session, casTable, imageId, nimages):
a = session.table.fetch(sastypes=False,sortby=[{'name':'_id_'}],table=casTable,to=nimages)
fig = plt.figure(figsize=(20, 20))
for i in range(nimages):
imageData = a['Fetch'].ix[ i][ imageId]
img_np = cv2.imdecode(np.fromstring( imageData, np.uint8),1)
fig.add_subplot(1,nimages,i+1)
plt.imshow(img_np)
img_np[:,:,[0,2]]=img_np[:,:,[2,0]]
plt.xticks([]), plt.yticks([])
Explanation: Automated Assessment of Car Damage Photos using SAS
Import the required packages including swat for accessing CAS actions
https://github.com/sassoftware/python-swat
End of explanation
# define the host machine and port for CAS connection: port is 5570 from Linux client and 8777 from Windows client.
hostmachine = 'my-viya-server.my-domain.com'
port = 8777
# authentication file on the client machine with user name and password (see the link above).
_authinfo = r"my-local-path\_authinfo"
# path on the Viya server where the image files to be processed are stored.
path_source_images = "my-host-path-for-sources"
path_reference_images = "my-host-path-for-references"
# set up a CAS session.
conn = CAS(hostmachine, port, authinfo = _authinfo)
# load CAS image action set for processing images.
conn.loadactionset('image')
Explanation: Set up the environment and Connect to SAS from Python
Creating an Authinfo file:
http://go.documentation.sas.com/?docsetId=caspg&docsetTarget=n0i9fvsmff624fn1nh29sa6cx6lk.htm&docsetVersion=3.2&locale=en
End of explanation
conn.image.loadImages(casout={'name':'inputTable', 'replace':True}, path= path_source_images)
conn.image.processimages(casout={'name':'inputTable_resized', 'replace':True},
imagefunctions=[{'functionoptions':{'width':1000,'functiontype':'RESIZE','height':600}}],
imagetable={'name':'inputTable'})
imageTable = conn.CASTable('inputTable_resized')
imageShow(conn, imageTable, 0, 4)
Explanation: Load images and resize
End of explanation
r = conn.image.processImages(casout={'name':'resultingImages','replace':True},
imagetable={'name':'inputTable_resized'},
imagefunctions=[
{'options':{'functiontype':'CONVERT_COLOR'}} #change color space
])
print(r)
outTable = conn.CASTable('resultingImages')
type(outTable)
imageShow(conn, outTable, 0, 4)
Explanation: Convert colours
End of explanation
r = conn.image.processImages(casout={'name':'resultingImages','replace':True},
imagetable={'name':'inputTable_resized'},
imagefunctions=[
{'options':{'functiontype':'CONVERT_COLOR'}}, #change color space
{'options':{'functiontype':'BILATERAL_FILTER', #noise reduction
'diameter':13,'sigmacolor':30,'sigmaspace':30}},
{'options':{'functiontype':'THRESHOLD', #image binarization
'type':'OTSU','value':125}},
{'options':{'functiontype':'LAPLACIAN', #edge detection with the Laplace operator
'kernelsize':12}}
])
print(r)
outTable = conn.CASTable('resultingImages')
imageShow(conn, outTable, 0, 4)
outTable.head(4)
Explanation: Apply noise reduction and binarization
End of explanation
# Process reference files to compare.
conn.image.loadImages(casout={'name':'inTable', 'replace':True}, path= path_reference_images)
conn.image.processImages(casout={'name':'refTable','replace':True},
imagetable={'name':'inTable'},
imagefunctions=[{'functionoptions':{'width':1000,'functiontype':'RESIZE','height':600}}, # resize
{'options':{'functiontype':'CONVERT_COLOR'}}, #change color space
{'options':{'functiontype':'BILATERAL_FILTER', #noise reduction
'diameter':13,'sigmacolor':30,'sigmaspace':30}},
{'options':{'functiontype':'THRESHOLD', #image binarization
'type':'OTSU','value':125}}
])
# Compare reference and source images to find the similarity index.
results = conn.image.compareImages(
casOut={
"name":"output",
"replace":True
},
pairAll=True,
referenceImages={
"table":{
"name":'refTable'
}},
sourceImages={
"table":{
"name":'resultingImages'
}}
)
scoreTable = conn.CASTable("output")
del scoreTable['_channel4_']
del scoreTable['_channel3_']
print(results)
print(scoreTable.head())
# end the CAS session.
conn.session.endsession()
Explanation: Compare images with the labeled images in the historical data
Using the similarity index for decision making
End of explanation
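# Illustrative sketch, not part of the original notebook: turning the similarity index into a
# simple damage / no-damage flag. The column name and threshold below are assumptions -- check
# print(scoreTable.head()) above for the actual similarity column that compareImages produced,
# and note that this would have to run before conn.session.endsession() is called.
SIMILARITY_COLUMN = '_channel1_'   # assumed name of the similarity score column
DAMAGE_THRESHOLD = 0.75            # hypothetical cut-off, to be tuned on labeled historical photos
scores = scoreTable.to_frame()     # bring the CAS result table client-side as a DataFrame
scores['possible_damage'] = scores[SIMILARITY_COLUMN] < DAMAGE_THRESHOLD
print(scores[[SIMILARITY_COLUMN, 'possible_damage']].head())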
<END_TASK> |
208 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Machine Learning Engineer Nanodegree
Introduction and Foundations
Project 0
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship
Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcome[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think
Step5: Tip
Step6: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint
Step7: Answer
Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction
Step10: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint
Step11: Answer
Step13: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction
Step14: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint
Step15: Answer
Step17: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint
Step18: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
# RMS Titanic data visualization code
from titanic_visualizations import survival_stats
from IPython.display import display
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
Explanation: Machine Learning Engineer Nanodegree
Introduction and Foundations
Project 0: Titanic Survival Exploration
In 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.
Tip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook.
Getting Started
To begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame.
Run the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function.
Tip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. Markdown allows you to write easy-to-read plain text that can be converted to HTML.
End of explanation
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
Explanation: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:
- Survived: Outcome of survival (0 = No; 1 = Yes)
- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- Name: Name of passenger
- Sex: Sex of the passenger
- Age: Age of the passenger (Some entries contain NaN)
- SibSp: Number of siblings and spouses of the passenger aboard
- Parch: Number of parents and children of the passenger aboard
- Ticket: Ticket number of the passenger
- Fare: Fare paid by the passenger
- Cabin Cabin number of the passenger (Some entries contain NaN)
- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)
Since we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets.
Run the code cell below to remove Survived as a feature of the dataset and store it in outcomes.
End of explanation
def accuracy_score(truth, pred):
    """Returns accuracy score for input truth and predictions."""
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions)
Explanation: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcome[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?
End of explanation
def predictions_0(data):
    """Model with no features. Always predicts a passenger did not survive."""
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
Explanation: Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.
Making Predictions
If we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking.
The predictions_0 function below will always predict that a passenger did not survive.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
survival_stats(data, outcomes, 'Sex')
Explanation: Answer: 61.62%
Let's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the titanic_visualizations.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.
Run the code cell below to plot the survival outcomes of passengers based on their sex.
End of explanation
def predictions_1(data):
    """Model with one feature:
    - Predict a passenger survived if they are female."""
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
predictions.append( passenger['Sex'] == 'female' )
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
Explanation: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
Explanation: Answer: 78.68%
Using just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included.
Run the code cell below to plot the survival outcomes of male passengers based on their age.
End of explanation
def predictions_2(data):
    """Model with two features:
    - Predict a passenger survived if they are female.
    - Predict a passenger survived if they are male and younger than 10."""
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
predictions.append( ( passenger['Sex'] == 'female' or passenger['Age'] < 10 ) )
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
Explanation: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
survival_stats(data, outcomes, "Age", ["Sex == 'female'", "Pclass == 3" ] )
Explanation: Answer: 79.35%
Adding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin more than with simply using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions.
Pclass, Sex, Age, SibSp, and Parch are some suggested features to try.
Use the survival_stats function below to to examine various survival statistics.
Hint: To use mulitple filter conditions, put each condition in the list passed as the last argument. Example: ["Sex == 'male'", "Age < 18"]
End of explanation
def predictions_3(data):
    """Model with multiple features. Makes a prediction with an accuracy of at least 80%."""
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
predictions.append( (passenger['Sex'] == 'female' and passenger['Pclass']<3 ) or
(passenger['Sex'] == 'female' and passenger['Pclass']==3 and passenger['SibSp']<2.5 and passenger['Age']< 40 ) or
(passenger['Sex'] == 'male' and passenger['Age'] < 10 and passenger['SibSp']<2.5 ) )
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
Explanation: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint: Run the code cell below to see the accuracy of your predictions.
End of explanation
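# Illustrative follow-up, not part of the original project: a hedged check of the survival rates
# for the subgroups used in predictions_3, computed directly with pandas on full_data.
# (Rows with missing Age simply fail the Age comparisons and fall out of those masks.)
subgroups = {
    'female, Pclass 1-2': (full_data['Sex'] == 'female') & (full_data['Pclass'] < 3),
    'female, Pclass 3, SibSp < 3, Age < 40': (full_data['Sex'] == 'female') & (full_data['Pclass'] == 3) &
                                             (full_data['SibSp'] < 2.5) & (full_data['Age'] < 40),
    'male, Age < 10, SibSp < 3': (full_data['Sex'] == 'male') & (full_data['Age'] < 10) & (full_data['SibSp'] < 2.5),
}
for name, mask in subgroups.items():
    rate = full_data.loc[mask, 'Survived'].mean()
    print("{}: {} passengers, survival rate {:.2f}".format(name, mask.sum(), rate))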
<END_TASK> |
209 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
On this notebook the best models and input parameters will be searched for. The problem at hand is predicting the price of any stock symbol 14 days ahead, assuming one model for all the symbols. The best training period length, base period length, and base period step will be determined, using the MRE metrics (and/or the R^2 metrics). The step for the rolling validation will be determined taking into consideration a compromise between having enough points (I consider about 1000 different target days may be good enough), and the time needed to compute the validation.
Step1: Let's get the data.
Step2: Let's find the best params set for some different models
- Dummy Predictor (mean)
Step3: - Linear Predictor
Step4: - Random Forest model | <ASSISTANT_TASK:>
Python Code:
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import predictor.feature_extraction as fe
import utils.preprocessing as pp
import utils.misc as misc
AHEAD_DAYS = 14
Explanation: On this notebook the best models and input parameters will be searched for. The problem at hand is predicting the price of any stock symbol 14 days ahead, assuming one model for all the symbols. The best training period length, base period length, and base period step will be determined, using the MRE metrics (and/or the R^2 metrics). The step for the rolling validation will be determined taking into consideration a compromise between having enough points (I consider about 1000 different target days may be good enough), and the time needed to compute the validation.
End of explanation
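# Hedged aside, not part of the original notebook: the scoring actually used below lives in
# utils.misc; a common reading of the MRE metric mentioned above is the median relative error,
# sketched here for orientation only.
def median_relative_error(y_true, y_pred):
    # Median of |prediction - truth| / |truth|, robust to a few badly predicted symbols.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.median(np.abs(y_pred - y_true) / np.abs(y_true))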
datasets_params_list_df = pd.read_pickle('../../data/datasets_params_list_df.pkl')
print(datasets_params_list_df.shape)
datasets_params_list_df.head()
train_days_arr = 252 * np.array([1, 2, 3])
params_list_df = pd.DataFrame()
for train_days in train_days_arr:
temp_df = datasets_params_list_df[datasets_params_list_df['ahead_days'] == AHEAD_DAYS].copy()
temp_df['train_days'] = train_days
params_list_df = params_list_df.append(temp_df, ignore_index=True)
print(params_list_df.shape)
params_list_df.head()
Explanation: Let's get the data.
End of explanation
from predictor.dummy_mean_predictor import DummyPredictor
PREDICTOR_NAME = 'dummy'
# Global variables
eval_predictor = DummyPredictor()
step_eval_days = 60 # The step to move between training/validation pairs
params = {'eval_predictor': eval_predictor, 'step_eval_days': step_eval_days}
results_df = misc.parallelize_dataframe(params_list_df, misc.apply_mean_score_eval, params)
results_df['r2'] = results_df.apply(lambda x: x['scores'][0], axis=1)
results_df['mre'] = results_df.apply(lambda x: x['scores'][1], axis=1)
# Pickle that!
results_df.to_pickle('../../data/results_ahead{}_{}_df.pkl'.format(AHEAD_DAYS, PREDICTOR_NAME))
results_df['mre'].plot()
print('Minimum MRE param set: \n {}'.format(results_df.iloc[np.argmin(results_df['mre'])]))
print('Maximum R^2 param set: \n {}'.format(results_df.iloc[np.argmax(results_df['r2'])]))
Explanation: Let's find the best params set for some different models
- Dummy Predictor (mean)
End of explanation
from predictor.linear_predictor import LinearPredictor
PREDICTOR_NAME = 'linear'
# Global variables
eval_predictor = LinearPredictor()
step_eval_days = 60 # The step to move between training/validation pairs
params = {'eval_predictor': eval_predictor, 'step_eval_days': step_eval_days}
results_df = misc.parallelize_dataframe(params_list_df, misc.apply_mean_score_eval, params)
results_df['r2'] = results_df.apply(lambda x: x['scores'][0], axis=1)
results_df['mre'] = results_df.apply(lambda x: x['scores'][1], axis=1)
# Pickle that!
results_df.to_pickle('../../data/results_ahead{}_{}_df.pkl'.format(AHEAD_DAYS, PREDICTOR_NAME))
results_df['mre'].plot()
print('Minimum MRE param set: \n {}'.format(results_df.iloc[np.argmin(results_df['mre'])]))
print('Maximum R^2 param set: \n {}'.format(results_df.iloc[np.argmax(results_df['r2'])]))
Explanation: - Linear Predictor
End of explanation
from predictor.random_forest_predictor import RandomForestPredictor
PREDICTOR_NAME = 'random_forest'
# Global variables
eval_predictor = RandomForestPredictor()
step_eval_days = 60 # The step to move between training/validation pairs
params = {'eval_predictor': eval_predictor, 'step_eval_days': step_eval_days}
results_df = misc.parallelize_dataframe(params_list_df, misc.apply_mean_score_eval, params)
results_df['r2'] = results_df.apply(lambda x: x['scores'][0], axis=1)
results_df['mre'] = results_df.apply(lambda x: x['scores'][1], axis=1)
# Pickle that!
results_df.to_pickle('../../data/results_ahead{}_{}_df.pkl'.format(AHEAD_DAYS, PREDICTOR_NAME))
results_df['mre'].plot()
print('Minimum MRE param set: \n {}'.format(results_df.iloc[np.argmin(results_df['mre'])]))
print('Maximum R^2 param set: \n {}'.format(results_df.iloc[np.argmax(results_df['r2'])]))
Explanation: - Random Forest model
End of explanation
<END_TASK> |
210 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Introduction
Welcome to the Petals to the Metal competition! In this competition, you’re challenged to build a machine learning model that identifies the type of flowers in a dataset of images.
In this tutorial notebook, you'll learn how to build an image classifier in Keras and train it on a Tensor Processing Unit (TPU). Then, in the following exercise, you'll create your own notebook and make a submission to the competition. At the end, you'll have a complete project you can build off of with ideas of your own.
<blockquote style="margin-right
Step1: Distribution Strategy
A TPU has eight different cores and each of these cores acts as its own accelerator. (A TPU is sort of like having eight GPUs in one machine.) We tell TensorFlow how to make use of all these cores at once through a distribution strategy. Run the following cell to create the distribution strategy that we'll later apply to our model.
Step2: TensorFlow will distribute training among the eight TPU cores by creating eight different replicas of your model.
Loading the Competition Data
When used with TPUs, datasets are often serialized into TFRecords. This is a format convenient for distributing data to each of the TPUs cores.
We've included functions in the petal_helper script that will load the TFRecords for you and create a data pipeline you can use with your model. There is one function for each split.
Step3: These are tf.data.Dataset objects. You can think about a dataset in TensorFlow as a stream of data records. Check out this guide for more on working with the tf.data API.
Define Model
Now we're ready to create a neural network for classifying images! We'll use what's known as transfer learning. With transfer learning, you reuse part of a pretrained model to get a head-start on a new dataset.
For this tutorial, we'll to use a model called VGG16 pretrained on ImageNet). Later, you might want to experiment with other models included with Keras. (Xception wouldn't be a bad choice.)
The distribution strategy we created earlier contains a context manager, strategy.scope. This context manager tells TensorFlow how to divide the work of training among the eight TPU cores. When using TensorFlow with a TPU, it's important to define your model in a strategy.scope() context.
Step4: Training
And now we're ready to train the model! After defining a few parameters, we're good to go.
Step5: This next cell shows how the loss and metrics progressed during training. Thankfully, it converges!
Step6: Predictions
Once you're satisfied with everything, you're ready to make predictions on the test set.
Step7: We'll generate a file submission.csv. This file is what you'll submit to get your score on the leaderboard. | <ASSISTANT_TASK:>
Python Code:
from petal_helper import *
import tensorflow as tf
Explanation: Introduction
Welcome to the Petals to the Metal competition! In this competition, you’re challenged to build a machine learning model that identifies the type of flowers in a dataset of images.
In this tutorial notebook, you'll learn how to build an image classifier in Keras and train it on a Tensor Processing Unit (TPU). Then, in the following exercise, you'll create your own notebook and make a submission to the competition. At the end, you'll have a complete project you can build off of with ideas of your own.
<blockquote style="margin-right:auto; margin-left:auto; background-color: #ebf9ff; padding: 1em; margin:24px;">
<strong>Get Started Now!</strong><br>
You don't need to understand everything in this notebook before starting the exercise. If you like, go ahead and <b><a href="#$NEXT_NOTEBOOK_URL$">open the exercise</a></b> and follow the instructions at the beginning to enter the competition. Then, return to this tutorial for a walkthrough of your starter project.
</blockquote>
Load the Helper Functions
Attached to the notebook is a utility script called petal_helper. It contains a number of helper functions related to data loading and visualization. The following cell will import them into your notebook session. We'll also import TensorFlow, which we'll use to create our model.
End of explanation
# Detect TPU, return appropriate distribution strategy
try:
tpu = tf.distribute.cluster_resolver.TPUClusterResolver()
print('Running on TPU ', tpu.master())
except ValueError:
tpu = None
if tpu:
tf.config.experimental_connect_to_cluster(tpu)
tf.tpu.experimental.initialize_tpu_system(tpu)
strategy = tf.distribute.experimental.TPUStrategy(tpu)
else:
strategy = tf.distribute.get_strategy()
print("REPLICAS: ", strategy.num_replicas_in_sync)
Explanation: Distribution Strategy
A TPU has eight different cores and each of these cores acts as its own accelerator. (A TPU is sort of like having eight GPUs in one machine.) We tell TensorFlow how to make use of all these cores at once through a distribution strategy. Run the following cell to create the distribution strategy that we'll later apply to our model.
End of explanation
ds_train = get_training_dataset()
ds_valid = get_validation_dataset()
ds_test = get_test_dataset()
print("Training:", ds_train)
print ("Validation:", ds_valid)
print("Test:", ds_test)
Explanation: TensorFlow will distribute training among the eight TPU cores by creating eight different replicas of your model.
Loading the Competition Data
When used with TPUs, datasets are often serialized into TFRecords. This is a format convenient for distributing data to each of the TPUs cores.
We've included functions in the petal_helper script that will load the TFRecords for you and create a data pipeline you can use with your model. There is one function for each split.
End of explanation
with strategy.scope():
pretrained_model = tf.keras.applications.VGG16(
weights='imagenet',
include_top=False ,
input_shape=[*IMAGE_SIZE, 3]
)
pretrained_model.trainable = False
model = tf.keras.Sequential([
# To a base pretrained on ImageNet to extract features from images...
pretrained_model,
# ... attach a new head to act as a classifier.
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(len(CLASSES), activation='softmax')
])
model.compile(
optimizer='adam',
loss = 'sparse_categorical_crossentropy',
metrics=['sparse_categorical_accuracy'],
)
model.summary()
Explanation: These are tf.data.Dataset objects. You can think about a dataset in TensorFlow as a stream of data records. Check out this guide for more on working with the tf.data API.
Define Model
Now we're ready to create a neural network for classifying images! We'll use what's known as transfer learning. With transfer learning, you reuse part of a pretrained model to get a head-start on a new dataset.
For this tutorial, we'll use a model called VGG16 pretrained on ImageNet. Later, you might want to experiment with other models included with Keras. (Xception wouldn't be a bad choice.)
The distribution strategy we created earlier contains a context manager, strategy.scope. This context manager tells TensorFlow how to divide the work of training among the eight TPU cores. When using TensorFlow with a TPU, it's important to define your model in a strategy.scope() context.
End of explanation
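# Hedged aside, not in the original tutorial: peek at a single batch to confirm the input pipeline,
# assuming the helper's datasets yield batched (image, label) pairs; adjust the unpacking if the
# element structure differs on your setup.
for images, labels in ds_train.take(1):
    print("image batch shape:", images.shape)
    print("label batch shape:", labels.shape)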
# Define the batch size. This will be 16 with TPU off and 128 (=16*8) with TPU on
BATCH_SIZE = 16 * strategy.num_replicas_in_sync
# Define training epochs
EPOCHS = 12
STEPS_PER_EPOCH = NUM_TRAINING_IMAGES // BATCH_SIZE
history = model.fit(
ds_train,
validation_data=ds_valid,
epochs=EPOCHS,
steps_per_epoch=STEPS_PER_EPOCH,
)
Explanation: Training
And now we're ready to train the model! After defining a few parameters, we're good to go.
End of explanation
display_training_curves(
history.history['loss'],
history.history['val_loss'],
'loss',
211,
)
display_training_curves(
history.history['sparse_categorical_accuracy'],
history.history['val_sparse_categorical_accuracy'],
'accuracy',
212,
)
Explanation: This next cell shows how the loss and metrics progressed during training. Thankfully, it converges!
End of explanation
test_ds = get_test_dataset(ordered=True)
print('Computing predictions...')
test_images_ds = test_ds.map(lambda image, idnum: image)
probabilities = model.predict(test_images_ds)
predictions = np.argmax(probabilities, axis=-1)
print(predictions)
Explanation: Predictions
Once you're satisfied with everything, you're ready to make predictions on the test set.
End of explanation
print('Generating submission.csv file...')
# Get image ids from test set and convert to unicode
test_ids_ds = test_ds.map(lambda image, idnum: idnum).unbatch()
test_ids = next(iter(test_ids_ds.batch(NUM_TEST_IMAGES))).numpy().astype('U')
# Write the submission file
np.savetxt(
'submission.csv',
np.rec.fromarrays([test_ids, predictions]),
fmt=['%s', '%d'],
delimiter=',',
header='id,label',
comments='',
)
# Look at the first few predictions
!head submission.csv
Explanation: We'll generate a file submission.csv. This file is what you'll submit to get your score on the leaderboard.
End of explanation
<END_TASK> |
211 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Collect all tweets from @realDonaldTrump
Step1: To sort tweets by favorites or retweets, need to convert unicode to integers
Step2: For some reason the search did not include Trump's username - random errors like this sometimes happen when querying the twitter website.
Look at word frequencies
Step3: Make word frequency dataframe
Step4: Look at most and least Trump-like tweets at varying levels of background requirement
At least 100 background occurrences
Step5: At least 1000 background occurrences
Step6: At least 10,000 background occurrences
Step7: And now look at least Trump-like relative to Twitter background
Step8: Trump does not post about things happening automatically. | <ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('..')
from twords.twords import Twords
import matplotlib.pyplot as plt
%matplotlib inline
import pandas as pd
# this pandas line makes the dataframe display all text in a line; useful for seeing entire tweets
pd.set_option('display.max_colwidth', -1)
twit = Twords()
# set path to folder that contains jar files for twitter search
twit.jar_folder_path = "../jar_files_and_background/"
twit.get_all_user_tweets("realdonaldtrump", tweets_per_run=500)
twit.data_path = "realdonaldtrump"
twit.get_java_tweets_from_csv_list()
twit.convert_tweet_dates_to_standard()
Explanation: Collect all tweets from @realDonaldTrump
End of explanation
twit.tweets_df["retweets"] = twit.tweets_df["retweets"].map(int)
twit.tweets_df["favorites"] = twit.tweets_df["favorites"].map(int)
twit.tweets_df.sort_values("favorites", ascending=False)[:5]
twit.tweets_df.sort_values("retweets", ascending=False)[:5]
Explanation: To sort tweets by favorites or retweets, need to convert unicode to integers:
End of explanation
twit.background_path = '../jar_files_and_background/freq_table_72319443_total_words_twitter_corpus.csv'
twit.create_Background_dict()
twit.create_Stop_words()
twit.keep_column_of_original_tweets()
twit.lower_tweets()
twit.keep_only_unicode_tweet_text()
twit.remove_urls_from_tweets()
twit.remove_punctuation_from_tweets()
twit.drop_non_ascii_characters_from_tweets()
twit.drop_duplicate_tweets()
twit.convert_tweet_dates_to_standard()
twit.sort_tweets_by_date()
Explanation: For some reason the search did not include Trump's username - random errors like this sometimes happen when querying the twitter website.
Look at word frequencies
End of explanation
twit.create_word_bag()
twit.make_nltk_object_from_word_bag()
twit.create_word_freq_df(10000)
twit.word_freq_df.sort_values("log relative frequency", ascending = False, inplace = True)
twit.word_freq_df.head(20)
Explanation: Make word frequency dataframe:
End of explanation
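# Hedged aside, not in the original notebook: the same ranking as the bar plots below, shown as a
# table, keeping only words with a reasonably large background sample.
background_cutoff = 1000
twit.word_freq_df[twit.word_freq_df['background occurrences'] > background_cutoff] \
    .sort_values('log relative frequency', ascending=False).head(10)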
num_words_to_plot = 32
background_cutoff = 100
twit.word_freq_df[twit.word_freq_df['background occurrences']>background_cutoff].sort_values("log relative frequency", ascending=True).set_index("word")["log relative frequency"][-num_words_to_plot:].plot.barh(figsize=(20,
num_words_to_plot/2.), fontsize=30, color="c");
plt.title("log relative frequency", fontsize=30);
ax = plt.axes();
ax.xaxis.grid(linewidth=4);
Explanation: Look at most and least Trump-like tweets at varying levels of background requirement
At least 100 background occurrences:
End of explanation
num_words_to_plot = 32
background_cutoff = 1000
twit.word_freq_df[twit.word_freq_df['background occurrences']>background_cutoff].sort_values("log relative frequency", ascending=True).set_index("word")["log relative frequency"][-num_words_to_plot:].plot.barh(figsize=(20,
num_words_to_plot/2.), fontsize=30, color="c");
plt.title("log relative frequency", fontsize=30);
ax = plt.axes();
ax.xaxis.grid(linewidth=4);
Explanation: At least 1000 background occurrences:
End of explanation
num_words_to_plot = 32
background_cutoff = 10000
twit.word_freq_df[twit.word_freq_df['background occurrences']>background_cutoff].sort_values("log relative frequency", ascending=True).set_index("word")["log relative frequency"][-num_words_to_plot:].plot.barh(figsize=(20,
num_words_to_plot/2.), fontsize=30, color="c");
plt.title("log relative frequency", fontsize=30);
ax = plt.axes();
ax.xaxis.grid(linewidth=4);
Explanation: At least 10,000 background occurrences:
End of explanation
num_words_to_plot = 32
background_cutoff = 10000
twit.word_freq_df[twit.word_freq_df['background occurrences']>background_cutoff].sort_values("log relative frequency", ascending=False).set_index("word")["log relative frequency"][-num_words_to_plot:].plot.barh(figsize=(20,
num_words_to_plot/2.), fontsize=30, color="c");
plt.title("log relative frequency", fontsize=30);
ax = plt.axes();
ax.xaxis.grid(linewidth=4);
Explanation: And now look at least Trump-like relative to Twitter background:
End of explanation
twit.tweets_containing("fuck")
Explanation: Trump does not post about things happening automatically.
End of explanation
<END_TASK> |
212 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Numerical Methods
For Numerical Relativity, we need to
evolve the spacetime (hyperbolic PDEs with "smooth" fields);
evolve the matter (hyperbolic PDEs with discontinuous fields);
solve initial data (elliptic PDEs);
extract gravitational waves (interpolation and integration);
find and analyse horizons (interpolation, BVPs).
These can be built on some simple foundations.
The general concepts that underpin most numerical methods are
the solution of linear systems $A {\bf x} = {\bf b}$;
the solution of nonlinear root-finding problems ${\bf f} ( {\bf x} ) = {\bf 0}$;
the representation of a function or field $f(x)$ by discrete data $f_i$, by interpolation or other means;
the (discrete) Fast Fourier Transform;
stochastic concepts and methods.
For Numerical Relativity, there has been little need (yet!) for stochastic methods, and the use of FFTs is mostly restricted to analysis. All of these points can be found in standard numerical packages and libraries
Step4: The errors here are significant. What matters is how fast the errors reduce as we change the grid spacing. Try changing from $\Delta x = 1$ to $\Delta x = 0.1$
Step5: We see roughly the expected scaling, with forward and backward differencing errors reducing by roughly $10$, and central differencing errors reducing by roughly $10^2$.
Convergence
The feature that we always want to show is that the error $\cal E$ reduces with the grid spacing $\Delta x$. In particular, for most methods in Numerical Relativity, we expect a power law relationship | <ASSISTANT_TASK:>
Python Code:
def backward_differencing(f, x_i, dx):
    """Backward differencing of f at x_i with grid spacing dx."""
f_i = f(x_i)
f_i_minus_1 = f(x_i - dx)
return (f_i - f_i_minus_1) / dx
def forward_differencing(f, x_i, dx):
    """Forward differencing of f at x_i with grid spacing dx."""
f_i = f(x_i)
f_i_plus_1 = f(x_i + dx)
return (f_i_plus_1 - f_i) / dx
def central_differencing(f, x_i, dx):
    """Second order central differencing of f at x_i with grid spacing dx."""
f_i = f(x_i)
f_i_minus_1 = f(x_i - dx)
f_i_plus_1 = f(x_i + dx)
first_derivative = (f_i_plus_1 - f_i_minus_1) / (2.0 * dx)
second_derivative = (f_i_minus_1 - 2.0 * f_i + f_i_plus_1) / (dx**2)
return first_derivative, second_derivative
import numpy
bd = backward_differencing(numpy.exp, 0.0, dx=1.0)
fd = forward_differencing(numpy.exp, 0.0, dx=1.0)
cd1, cd2 = central_differencing(numpy.exp, 0.0, dx=1.0)
print("Backward difference should be 1, is {}, error {}".format(bd, abs(bd - 1.0)))
print("Forward difference should be 1, is {}, error {}".format(fd, abs(fd - 1.0)))
print("Central difference (1st derivative) should be 1, is {}, error {}".format(cd1, abs(cd1 - 1.0)))
print("Central difference (2nd derivative) should be 1, is {}, error {}".format(cd2, abs(cd2 - 1.0)))
Explanation: Numerical Methods
For Numerical Relativity, we need to
evolve the spacetime (hyperbolic PDEs with "smooth" fields);
evolve the matter (hyperbolic PDEs with discontinuous fields);
solve initial data (elliptic PDEs);
extract gravitational waves (interpolation and integration);
find and analyse horizons (interpolation, BVPs).
These can be built on some simple foundations.
The general concepts that underpin most numerical methods are
the solution of linear systems $A {\bf x} = {\bf b}$;
the solution of nonlinear root-finding problems ${\bf f} ( {\bf x} ) = {\bf 0}$;
the representation of a function or field $f(x)$ by discrete data $f_i$, by interpolation or other means;
the (discrete) Fast Fourier Transform;
stochastic concepts and methods.
For Numerical Relativity, there has been little need (yet!) for stochastic methods, and the use of FFTs is mostly restricted to analysis. All of these points can be found in standard numerical packages and libraries: the question, however, is
what do we need to understand about these methods before implementing or using them?
when is it faster or better to implement our own version rather than using a library?
Finite differencing
As a first step we'll quickly cover finite differencing: the approximation of derivatives of a function $f$ when the only information about $f$ is its value at a set of points, or nodes, ${x_i}$, denoted ${f_i}$.
Here we have the "representation of a function" problem. We represent the function $f$ using a piecewise polynomial function $g$. This polynomial must interpolate $f$: that is, $g(x_i) \equiv f(x_i) = f_i$. We then approximate derivatives of $f$ by derivatives of $g$.
As simple examples, let's assume we know three points, ${f_{i-1}, f_i, f_{i+1}}$. Then we have the linear polynomial approximations
$$
g_{FD} = \frac{x - x_{i+1}}{x_i - x_{i+1}} f_i + \frac{x - x_{i}}{x_{i+1} - x_{i}} f_{i+1}
$$
and
$$
g_{BD} = \frac{x - x_{i}}{x_{i-1} - x_{i}} f_{i-1} + \frac{x - x_{i-1}}{x_i - x_{i-1}} f_i
$$
or the quadratic polynomial approximation
$$
g_{CD} = \frac{(x - x_{i})(x - x_{i+1})}{(x_{i-1} - x_{i})(x_{i-1} - x_{i+1})} f_{i-1} + \frac{(x - x_{i-1})(x - x_{i+1})}{(x_{i} - x_{i-1})(x_{i} - x_{i+1})} f_{i} + \frac{(x - x_{i-1})(x - x_{i})}{(x_{i+1} - x_{i-1})(x_{i+1} - x_{i})} f_{i+1}.
$$
Note how this Lagrange form is built out of indicator polynomials that take the value $1$ at one node and vanish at all others.
By differentiating these polynomial interpolating functions we get approximations to the derivatives of $f$. Each approximation is different, with different errors.
We'll assume that the nodes are equally spaced, with grid spacing $\Delta x = x_{i+1} - x_i$. The approximations above give the standard forward difference
$$
\left. \frac{\partial g_{FD}}{\partial x} \right|_{x = x_i} \to \left. \frac{\partial f}{\partial x} \right|_{x = x_i} = \frac{1}{\Delta x} \left( f_{i+1} - f_i \right) + {\cal O} \left( \Delta x \right),
$$
the standard backward difference
$$
\left. \frac{\partial g_{BD}}{\partial x} \right|_{x = x_i} \to \left. \frac{\partial f}{\partial x} \right|_{x = x_i} = \frac{1}{\Delta x} \left( f_{i} - f_{i-1} \right) + {\cal O} \left( \Delta x \right),
$$
and the standard central difference approximations
\begin{align}
\left. \frac{\partial g_{CD}}{\partial x} \right|{x = x_i} & \to \left. \frac{\partial f}{\partial x} \right|{x = x_i} \ & = \frac{1}{2 \, \Delta x} \left( f_{i+1} - f_{i-1} \right) + {\cal O} \left( \Delta x^2 \right), \
\left. \frac{\partial^2 g_{CD}}{\partial x^2} \right|{x = x_i} & \to \left. \frac{\partial^2 f}{\partial x^2} \right|{x = x_i} \ & = \frac{1}{\left( \Delta x \right)^2} \left( f_{i-1} - 2 f_i + f_{i+1} \right) + {\cal O} \left( \Delta x^2 \right).
\end{align}
Testing this in code
We'll use finite differencing repeatedly. To test our code we'll be testing the differencing. Let's check the above approximations applied to a simple function,
$$
f(x) = \exp \left[ x \right].
$$
All derivatives match the original function, which evaluated at $x=0$ gives $1$.
First we write the functions, then we test them.
End of explanation
bd = backward_differencing(numpy.exp, 0.0, dx=0.1)
fd = forward_differencing(numpy.exp, 0.0, dx=0.1)
cd1, cd2 = central_differencing(numpy.exp, 0.0, dx=0.1)
print("Backward difference should be 1, is {}, error {}".format(bd, abs(bd - 1.0)))
print("Forward difference should be 1, is {}, error {}".format(fd, abs(fd - 1.0)))
print("Central difference (1st derivative) should be 1, is {}, error {}".format(cd1, abs(cd1 - 1.0)))
print("Central difference (2nd derivative) should be 1, is {}, error {}".format(cd2, abs(cd2 - 1.0)))
Explanation: The errors here are significant. What matters is how fast the errors reduce as we change the grid spacing. Try changing from $\Delta x = 1$ to $\Delta x = 0.1$:
End of explanation
from matplotlib import pyplot
%matplotlib notebook
dxs = numpy.logspace(-5, 0, 10)
bd_errors = numpy.zeros_like(dxs)
fd_errors = numpy.zeros_like(dxs)
cd1_errors = numpy.zeros_like(dxs)
cd2_errors = numpy.zeros_like(dxs)
for i, dx in enumerate(dxs):
bd_errors[i] = abs(backward_differencing(numpy.exp, 0.0, dx) - 1.0)
fd_errors[i] = abs(forward_differencing(numpy.exp, 0.0, dx) - 1.0)
cd1, cd2 = central_differencing(numpy.exp, 0.0, dx)
cd1_errors[i] = abs(cd1 - 1.0)
cd2_errors[i] = abs(cd2 - 1.0)
pyplot.figure()
pyplot.loglog(dxs, bd_errors, 'kx', label='Backwards')
pyplot.loglog(dxs, fd_errors, 'b+', label='Forwards')
pyplot.loglog(dxs, cd1_errors, 'go', label='Central (1st)')
pyplot.loglog(dxs, cd2_errors, 'r^', label='Central (2nd)')
pyplot.loglog(dxs, dxs*(bd_errors[0]/dxs[0]), 'k-', label=r"$p=1$")
pyplot.loglog(dxs, dxs**2*(cd1_errors[0]/dxs[0]**2), 'k--', label=r"$p=2$")
pyplot.xlabel(r"$\Delta x$")
pyplot.ylabel("Error")
pyplot.legend(loc="lower right")
pyplot.show()
Explanation: We see roughly the expected scaling, with forward and backward differencing errors reducing by roughly $10$, and central differencing errors reducing by roughly $10^2$.
Convergence
The feature that we always want to show is that the error $\cal E$ reduces with the grid spacing $\Delta x$. In particular, for most methods in Numerical Relativity, we expect a power law relationship:
$$
{\cal E} \propto \left( \Delta x \right)^p.
$$
If we can measure the error (by knowing the exact solution) then we can measure the convergence rate $p$, by using
$$
\log \left( {\cal E} \right) = p \, \log \left( \Delta x \right) + \text{constant}.
$$
This is the slope of the best-fit straight line through the plot of the error against the grid spacing, on a logarithmic scale.
If we do not know the exact solution (the usual case), we can use self convergence to do the same measurement.
We check this for our finite differencing above.
End of explanation
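# Hedged follow-up, not in the original notebook: measure the convergence rate p as the slope of
# the best-fit line of log(error) against log(dx), reusing the arrays computed above. The fit is
# restricted to the larger spacings, since round-off pollutes the smallest errors (especially for
# the second derivative).
fit_mask = dxs > 1.0e-3
for name, errors in [('backward', bd_errors), ('forward', fd_errors),
                     ('central 1st', cd1_errors), ('central 2nd', cd2_errors)]:
    p = numpy.polyfit(numpy.log(dxs[fit_mask]), numpy.log(errors[fit_mask]), 1)[0]
    print("Measured convergence rate ({}): {:.2f}".format(name, p))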
<END_TASK> |
213 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
beapy
beapy is a Python package for obtaining data from the API of the Bureau of Economic Analysis.
Step1: Methods for searching for data
getDataSetList()
Step2: Datasets
There are 10 datasets available through the BEA API
Step3: Example | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import urllib
import datetime
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
import beapy
apiKey = '3EDEAA66-4B2B-4926-83C9-FD2089747A5B'
bea = beapy.initialize(apiKey =apiKey)
Explanation: beapy
beapy is a Python package for obtaining data from the API of the Bureau of Economic Analysis.
End of explanation
# Get a list of the data sets available from the BEA along with descriptions.
bea.getDataSetList()
# The getDataSetList() method adds a dataSetList attribute that is a list of the available datasets:
print(bea.dataSetList)
# Get a list of the parameters for the NIPA dataset
bea.getParameterList('NIPA')
# The getParameterList() method adds a parameterList attribute that is a list of the parameters of the chosen dataset.
print(bea.parameterList)
# Get a list of the values that the Frequency parameter in the NIPA dataset can take:
bea.getParameterValues('NIPA','Frequency')
# Download data from Table 1.1.5, TableID: 5. and plot
results = bea.getNipa(TableID=5,Frequency='A',Year='X')
frame =results['data']
np.log(frame['Gross domestic product']).plot(grid=True,lw=3)
Explanation: Methods for searching for data
getDataSetList(): returns the available datasets.
getParameterList(dataSetName): returns the parameters of the specified dataset.
getParameterValues(dataSetName,ParameterName): returns the values accepted for a parameter of the specified dataset.
End of explanation
bea.getParameterValues('RegionalData','KeyCode')
bea.getParameterValues('RegionalData','GeoFips')
bea.getParameterValues('RegionalData','Year')
bea.getParameterValues('RegionalData','KeyCode')
Explanation: Datasets
There are 10 datasets available through the BEA API:
RegionalData (statistics by state, county, and MSA)
NIPA (National Income and Product Accounts)
~~NIUnderlyingDetail (National Income and Product Accounts)~~
Fixed Assets
~~Direct Investment and Multinational Enterprises (MNEs)~~
Gross Domestic Product by Industry (GDPbyIndustry)
ITA (International Transactions)
IIP (International Investment Position)
Regional Income (detailed regional income and employment data sets)
RegionalProduct (detailed state and MSA product data sets)
beapy provides a separate method for accessing the data in each datset:
getRegionalData.(KeyCode=None,GeoFips='STATE',Year='ALL')
getNipa.(TableID=None,Frequency=None,Year='X',ShowMillions='N')
~~getNIUnderlyingDetail.()~~
getFixedAssets.()
~~getDirectInvestmentMNEs.()~~
getGrossDomesticProductByIndustry.()
getIta.()
getIip.()
getRegionalIncome.()
getRegionalProduct.()
Datasets and methods with a ~~strikethrough~~ are not currently accessible with the package.
Regional Data
getRegionalData.(KeyCode=None,GeoFips='STATE',Year='ALL')
Method for accessing data from the US at county, state, and regional levels.
End of explanation
# Get per capita personal income at the state level for all years.
result = bea.getRegionalData(KeyCode='PCPI_SI',GeoFips = 'STATE', Year = 'ALL')
frame = result['data']
# For each state including Washington, D.C., find the percentage difference between state pc income and US pc income.
for state in frame.columns:
f = 100*(frame[state] - frame['United States'])/frame['United States']
f.plot(grid=True)
Explanation: Example: Converging relative per capita incomes in the US
End of explanation
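# Hedged extension, not in the original example: quantify the convergence by tracking the
# cross-state spread of the relative income differences over time; this assumes frame (built in
# the cell above) has one column per state plus a 'United States' column.
states_only = frame.drop('United States', axis=1)
rel_diff = 100 * states_only.sub(frame['United States'], axis=0).div(frame['United States'], axis=0)
rel_diff.std(axis=1).plot(grid=True, title='Cross-state dispersion of pc income relative to US (%)')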
<END_TASK> |
214 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Cartopy
Cartopy is a Python package designed for geospatial data processing in order to produce maps and other geospatial data analyses.
We test here a few map examples using cartopy.
Step1: There is a list of the available map projections in Cartopy.
Step2: Examples
This has been taken from the gallery | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (16, 10)
import cartopy.crs as ccrs
Explanation: Cartopy
Cartopy is a Python package designed for geospatial data processing in order to produce maps and other geospatial data analyses.
We test here a few map examples using cartopy.
End of explanation
# Set the projection to use
ax = plt.axes(projection=ccrs.PlateCarree())
# Draw coastlines
ax.coastlines();
ax = plt.axes(projection=ccrs.Mollweide())
# Add a land image
ax.stock_img();
Explanation: There is a list of the available map projections in Cartopy.
End of explanation
fig = plt.figure(figsize=(16, 10))
# Set the projection to use
ax = fig.add_subplot(1, 1, 1, projection=ccrs.Robinson())
# make the map global rather than have it zoom in to
# the extents of any plotted data
ax.set_global()
# Add a land image
ax.stock_img()
# Draw coastlines
ax.coastlines()
# Plot a point
ax.plot(-0.08, 51.53, 'o', color="r", markersize=8, transform=ccrs.PlateCarree())
# Draw a straight line
ax.plot([-0.08, 132], [51.53, 43.17], linewidth=3, transform=ccrs.PlateCarree())
# Draw a geodetic line
ax.plot([-0.08, 132], [51.53, 43.17], linewidth=3, transform=ccrs.Geodetic());
# Set the projection to use
ax = plt.axes(projection=ccrs.PlateCarree())
ax.stock_img();
ny_lon, ny_lat = -75, 43
delhi_lon, delhi_lat = 77.23, 28.61
# Draw a geodetic line
plt.plot([ny_lon, delhi_lon], [ny_lat, delhi_lat],
color='blue', linewidth=2, marker='o', transform=ccrs.Geodetic())
# Draw a straight line
plt.plot([ny_lon, delhi_lon], [ny_lat, delhi_lat],
color='gray', linestyle='--', transform=ccrs.PlateCarree())
# Write two labels
plt.text(ny_lon-3, ny_lat-12, 'New York',
horizontalalignment='right', transform=ccrs.Geodetic())
plt.text(delhi_lon+3, delhi_lat-12, 'Delhi',
horizontalalignment='left', transform=ccrs.Geodetic());
Explanation: Examples
This has been taken from the gallery
End of explanation
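# Hedged extra, not part of the gallery excerpt above: Cartopy can also draw land, borders and
# labelled gridlines via cartopy.feature (draw_labels works for the PlateCarree projection used here).
import cartopy.feature as cfeature
ax = plt.axes(projection=ccrs.PlateCarree())
ax.add_feature(cfeature.LAND)
ax.add_feature(cfeature.OCEAN)
ax.add_feature(cfeature.BORDERS, linestyle=':')
ax.coastlines()
ax.gridlines(draw_labels=True);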
<END_TASK> |
215 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Calculate Political Opinion Models
Step1: Training the Base Model
Calculate the base model (from german wiki), that is later used as a base for training the classification models.
Step2: Save model to disk. Don't finalize the model because we need to train it with new data later!
Step3: Training the Classifier
Step4: Political Ideology Detection | <ASSISTANT_TASK:>
Python Code:
# import logging
# logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s')
# logging.root.level = logging.INFO
from os import path
from random import shuffle
from corputil import FileCorpus, ListCorpus
from corputil.utils import load_stopwords
from gensim.models.word2vec import LineSentence, Word2Vec
spd = [
path.join('data', 'Politics', 'SPD.txt'),
path.join('data', 'Politics', 'SPD_EU.txt'),
path.join('data', 'Politics', 'SPD_Fraktion.txt')
]
linke = [
path.join('data', 'Politics', 'Linke.txt'),
path.join('data', 'Politics', 'Linke_EU.txt'),
path.join('data', 'Politics', 'Linke_Fraktion.txt')
]
gruene = [
path.join('data', 'Politics', 'Grüne.txt'),
path.join('data', 'Politics', 'Grüne_EU.txt'),
path.join('data', 'Politics', 'Grüne_Fraktion.txt')
]
fdp = [
path.join('data', 'Politics', 'FDP.txt'),
path.join('data', 'Politics', 'FDP_EU.txt'),
path.join('data', 'Politics', 'FDP_Fraktion.txt')
]
cdu = [
path.join('data', 'Politics', 'CDU.txt'),
path.join('data', 'Politics', 'CDU_EU.txt'),
path.join('data', 'Politics', 'CDU_Fraktion.txt')
]
npd = [
path.join('data', 'Politics', 'NPD_Fraktion_MV.txt'),
path.join('data', 'Politics', 'NPD_Fraktion_Sachsen.txt'),
path.join('data', 'Politics', 'NPD_Jung.txt')
]
corpora = [
FileCorpus(linke),
FileCorpus(spd),
FileCorpus(gruene),
FileCorpus(fdp),
FileCorpus(cdu),
FileCorpus(npd)
]
parties = [
'Linke',
'SPD',
'Gruene',
'FDP',
'CDU',
'NPD'
]
Explanation: Calculate Political Opinion Models
End of explanation
sentences = LineSentence(path.join('data', 'Archive', 'Corpus_Wiki.txt'))
base = Word2Vec(sentences, workers=4, iter=4, size=100, window=2, sg=1)
Explanation: Training the Base Model
Calculate the base model (from german wiki), that is later used as a base for training the classification models.
End of explanation
base.save(path.join('models', 'word2vec', 'Base.w2v'))
base = None
sentences = None
Explanation: Save model to disk. Don't finalize the model because we need to train it with new data later!
End of explanation
for party, corpus in zip(parties, corpora):
sentences = list(corpus.sentences_token())
shuffle(sentences)
model = Word2Vec.load(path.join('models', 'word2vec', 'Base.w2v'))
model.train(sentences, total_examples=len(sentences))
model.save(path.join('models', 'word2vec', '{}.w2v'.format(party)))
Explanation: Training the Classifier
End of explanation
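# Hedged sanity check, not in the original notebook: look at nearest neighbours of a probe word in
# one of the freshly trained party models. The probe word is an assumption -- it has to be present
# in the (German, lowercased) vocabulary, hence the guard.
check_model = Word2Vec.load(path.join('models', 'word2vec', 'CDU.w2v'))
test_word = 'europa'   # hypothetical probe word
if test_word in check_model:
    print(check_model.most_similar(test_word, topn=5))
else:
    print('"{}" is not in the vocabulary'.format(test_word))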
models = [path.join('models', 'word2vec', '{}.w2v'.format(party)) for party in parties]
labels = ['2015-44', '2015-45', '2015-46', '2015-47', '2015-48', '2015-49', '2015-50', '2015-51',
'2015-52', '2015-53', '2016-01', '2016-02', '2016-03', '2016-04', '2016-05', '2016-06']
files = [path.join('data', 'CurrentNews', '{}.csv').format(label) for label in labels]
out = [path.join('data', 'CurrentNews', 's_{}.csv').format(label) for label in labels]
import pandas as pd
import numpy as np
def calc_score(doc, mod):
model = Word2Vec.load(mod)
score = model.score(doc, len(doc))
return score
# Taken from Matt Taddy: https://github.com/TaddyLab/gensim/blob/deepir/docs/notebooks/deepir.ipynb
def calc_probability(df, mods):
docs = list(ListCorpus(list(df.loc[:, 'text'])).doc_sentences_token())
sentlist = [s for d in docs for s in d]
llhd = np.array( [ calc_score(sentlist, m) for m in mods ] )
lhd = np.exp(llhd - llhd.max(axis=0))
prob = pd.DataFrame( (lhd/lhd.sum(axis=0)).transpose() )
prob["doc"] = [i for i,d in enumerate(docs) for s in d]
prob = prob.groupby("doc").mean()
return prob
# raw = pd.concat([pd.read_csv(file, sep='|', encoding='utf-8') for file in files], ignore_index=True)
# prob = calc_probability(raw, models)
# data = pd.concat([raw, prob], axis=1)
# data.groupby('site').mean()
for file, o in zip(files, out):
data = pd.read_csv(file, sep='|', encoding='utf-8')
sentiment = calc_probability(data, models)
csv = pd.concat([data, sentiment], axis=1)
csv.rename(columns={ 0: 'LINKE', 1: 'SPD', 2: 'GRÜNE', 3: 'FDP', 4: 'CDU', 5: 'NPD' }, inplace=True)
csv.to_csv(o, index=False, encoding='utf-8', sep='|')
Explanation: Political Ideology Detection
End of explanation
<END_TASK> |
216 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Supervised and Unsupervised learning example
We are going to try to be able to identify handwritten digits
These handwritten digits are contained in the MNIST data set.
You can download it from
Step1: Part 1
Step2: Part 2
Step3: It's a 50000 x 784 array
Step4: Likewise, eval and test contain handwritten digits and their class.
We won't look at those for now.
Part 3
Step5: Part 3
Step6: Part 4
Step7: Part 4
Step8: This new classification is a vast improvement over the previous one !
We are now going to do a visualisation of what has exactly happened | <ASSISTANT_TASK:>
Python Code:
# Let's import the relevant packages first
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
from sklearn import manifold
import gzip, cPickle
import pandas as pd
from sklearn.cluster import KMeans
from sklearn import metrics
Explanation: Supervised and Unsupervised learning example
We are going to try to be able to identify handwritten digits
These handwritten digits are contained in the MNIST data set.
You can download it from : http://deeplearning.net/data/mnist/mnist.pkl.gz
And then unzip it
The script assumes the compressed data set is available at ../data/mnist.pkl.gz (adjust the path in the gzip.open call if needed)
End of explanation
# Unzip and load the data set
f = gzip.open("../data/mnist.pkl.gz", "rb")
train, val, test = cPickle.load(f)
f.close()
Explanation: Part 1 : Load the data
End of explanation
train_data = train[0]
train_class = train[1]
print train_data.shape
Explanation: Part 2 : Data exploration
Let's have a word on the data.
train contains 2 arrays : a data and a target array
The data is stored in train[0]
The targets (= class of the digit) are stored in train[1]
End of explanation
%matplotlib inline
first_digit = train_data[0]
# reshape the digit to a 28*28 array
first_digit = np.reshape(first_digit, (28,28))
# Then plot the digit
fig = plt.figure()
im = plt.imshow(first_digit, cmap = mpl.cm.Greys)
im.set_interpolation("nearest")
plt.show()
# We recognize a handwritten 5.
# let's look at the actual class of this digit
first_digit_class = train_class[0]
print "First digit class :", first_digit_class
# it's indeed a 5 !
Explanation: It's a 50000 x 784 array :
There are 50000 handwritten digits
Each digit is stored in an array of dimension 784 = 28*28
This array stores the grayscale value of a 28*28 picture of the digit.
Let's visualise the first digit.
N.B. : Close the figure to continue
End of explanation
# Let's define a list of feature names
# We have 784 pixels, let's index them from 0 to 783
lfeat = ["p" + str(i) for i in range(784)]
# Build a dataframe with all features
df_mnist = pd.DataFrame(train_data, columns = lfeat)
# Add the target = digit class
df_mnist["Class"] = train_class
# Let's have a look at the first few rows
df_mnist.iloc[:5,:]
Explanation: Likewise, eval and test contain handwritten digits and their class.
We won't look at those for now.
Part 3 : pandas format
Now that we know the structure of the data, let's put it in a pandas dataframe. It will be easier to manipulate.
End of explanation
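# Added illustration (optional): with the data in a DataFrame it is easy to inspect,
# for example, how many examples of each digit the full training set contains:
# df_mnist["Class"].value_counts().sort_index()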
# Initialise the kmeans method
# we use 10 clusters under the naive assumption of one cluster per class of digit
km = KMeans(n_clusters=10, n_jobs = -1, precompute_distances=True)
#n_jobs = -1 to speed up with max # of CPU
#precompute_distances = True to speed up algorithm as well
#We'll take a subset of data, otherwise, it takes too much time
data_subset = df_mnist[lfeat].values[:2000]
class_subset = df_mnist["Class"].values[:2000]
#Let's examine the statistics of our subset
for i in range(10):
print "%s samples of class %s" % (len(np.where(class_subset == i)[0]),i)
#Now fit
pred_km = km.fit_predict(data_subset)
Explanation: Part 3 : First attempt at unsupervised classification
Let's see how far we can go with a simple unsupervised learning method.
We will use the K-Means algorithm.
The K-Means algorithm clusters data by trying to separate the samples into n groups of equal variance, minimizing a criterion known as the inertia (the within-cluster sum of squares).
This algorithm requires the number of clusters to be specified.
What K-Means finds is the location of the centroids, i.e. the mean of each of the 10 groups.
Of course, we hope the algorithm has found 10 well-separated groups of points.
K-Means will classify using the following rule : a point is assigned to the nearest cluster (i.e. the group of points whose centroid is closest to this particular point).
N.B.
Let's call i the index of this cluster.
Of course, i is not necessarily equal to the class of the digit.
It could very well be that all the 8s end up in cluster #3.
End of explanation
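# Added sketch (not in the original notebook): since cluster indices are arbitrary,
# one way to judge the clustering directly is to map each cluster to the digit class
# that is most frequent inside it (majority vote) and compute a naive accuracy.
# This reuses data_subset / class_subset / pred_km defined above.
cluster_to_class = {}
for c in range(10):
    members = class_subset[pred_km == c]
    if len(members) > 0:
        cluster_to_class[c] = np.bincount(members.astype(int)).argmax()
mapped_pred = np.array([cluster_to_class[c] for c in pred_km])
print("Naive majority-vote accuracy: %.3f" % np.mean(mapped_pred == class_subset))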
print "Rand score:", metrics.adjusted_rand_score(class_subset, pred_km)
print "MI:", metrics.adjusted_mutual_info_score(class_subset, pred_km)
print "V:",metrics.v_measure_score(class_subset, pred_km)
Explanation: Part 4 : Measuring the performance
Now we will evaluate the performance of the algorithm.
If we have the ground truth labels (i.e. we know to which class each training sample belongs), we can define the classification performance with metrics that measure the similarity between label assignments.
In our problem, this means that we compare the cluster assignment to the actual class of the digit, ignoring permutations. (cf. N.B. above, the cluster index may not be equal to the class index)
scikit-learn provides a range of such metrics.
We will report scores for three of them :
- Adjusted Rand Index
- Adjusted Mutual Information
- V-measure
These scores are close to 0 for random label assignments and equal 1 for a perfect match (the adjusted scores can dip slightly below 0); the higher, the better.
End of explanation
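# Added illustration (toy labels, not from the original notebook): these metrics are
# invariant to a permutation of the cluster indices, which is exactly what we need here.
print(metrics.adjusted_rand_score([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0, same partition
print(metrics.v_measure_score([0, 0, 1, 1], [1, 1, 0, 0]))      # 1.0 as well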
tsne = manifold.TSNE(n_components=2, init='pca', random_state=0, method = "barnes_hut")
data_subset_tsne = tsne.fit_transform(data_subset)
#Now let's apply kmeans to the transformed dataset
pred_km_tsne = km.fit_predict(data_subset_tsne)
print "Rand score:", metrics.adjusted_rand_score(class_subset, pred_km_tsne)
print "MI:", metrics.adjusted_mutual_info_score(class_subset, pred_km_tsne)
print "V:",metrics.v_measure_score(class_subset, pred_km_tsne)
Explanation: Part 4 : Improving unsupervised classification with t-SNE
N.B. You should have sklearn version 0.17 or later, otherwise t-SNE will be really slow.
We are now going to apply t-SNE to the data.
Its advantages are two-fold :
It reduces the feature space (we project the 784-dimensional (28*28) feature space down to 2 dimensions), hence allowing easy visualisation.
It is sensitive to local structure and may provide much better separation between the various classes than traditional methods such as PCA.
N.B. This notebook cell may take some time to execute.
End of explanation
# color map, one color per digit
list_color = ["r", "g", "b", "k", "plum", "pink", "lightseagreen", "blueviolet", "darkgray", "sandybrown"]
# dictionnary of color to be used in the plot
d_color = {}
for i in range(10) :
d_color[i] = list_color[i]
fig = plt.figure()
ax = fig.add_subplot(111)
# Plot the data
for i in range(2000):
ax.text(data_subset_tsne[i,0], data_subset_tsne[i,1], str(class_subset[i]), color=d_color[class_subset[i]], fontsize=12)
# Also plot the cluster centers
for c in km.cluster_centers_ :
ax.plot(c[0], c[1], "x", color = "k", markersize = 15, markeredgewidth=4)
# choose the boundaries of the plot for an ideal view
ax.set_xlim([-2 + min(data_subset_tsne[:,0]),2 + max(data_subset_tsne[:,0])])
ax.set_ylim([-2 + min(data_subset_tsne[:,1]),2 + max(data_subset_tsne[:,1])])
plt.show()
Explanation: This new classification is a vast improvement over the previous one !
We are now going to do a visualisation of what has exactly happened :
2D plot of the data projected by tsne
location of centroids in the data (how well our k-MEANS algorithm picks up the new structure in the data).
We will see that while not perfects, the centroid detection works very well for some digits.
End of explanation
<END_TASK> |
217 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Data
The data can be downloaded from https
Step1: Check the total data sets agree
The files filename and filename1 were downloaded from, respectively, the US Gov website, and the Chicago site. They are slightly different in size, but appear to contain the same data. (This can be checked!)
The files filename_all and filename_all1 were also downloaded from, respectively, the US Gov website, and the Chicago site. While they are the same size (uncompressed), and have the same headers, the data appears, at least naively, to be different.
Step2: Compare the actual contents of the files.
This is rather memory intensive, so we go to a little effort to use less RAM.
Step3: Check that the data is encoded in the expected way
ID,Case Number,Date,Block,IUCR,Primary Type,Description,Location Description,Arrest,Domestic,Beat,District,Ward,Community Area,FBI Code,X Coordinate,Y Coordinate,Year,Updated On,Latitude,Longitude,Location
Step4: Compare the full dataset with the extract
Let us compare the last 12 months data with the full dataset.
There are a few differences, but they really are "few" compared to the size of the complete dataset. There appears to be no pattern in the differences. | <ASSISTANT_TASK:>
Python Code:
import sys, os, csv, lzma
import open_cp.sources.chicago as chicago
filename = os.path.join("..", "..", "open_cp", "sources", "chicago.csv")
filename1 = os.path.join("..", "..", "open_cp", "sources", "chicago1.csv")
filename_all = os.path.join("..", "..", "open_cp", "sources", "chicago_all.csv.xz")
filename_all1 = os.path.join("..", "..", "open_cp", "sources", "chicago_all1.csv.xz")
Explanation: Data
The data can be downloaded from https://catalog.data.gov/dataset/crimes-2001-to-present-398a4 (see the module docstring of open_cp.sources.chicago See also https://data.cityofchicago.org/Public-Safety/Crimes-2001-to-present/ijzp-q8t2
The total data sets (for all crime events 2001 onwards) give different files between these two sources. We check that they do contain the same data.
End of explanation
with lzma.open(filename_all, "rt") as file:
print(next(file))
with lzma.open(filename_all1, "rt") as file:
print(next(file))
with lzma.open(filename_all, "rt") as file:
next(file); print(next(file))
with lzma.open(filename_all1, "rt") as file:
next(file); print(next(file))
Explanation: Check the total data sets agree
The files filename and filename1 were downloaded from, respectively, the US Gov website, and the Chicago site. They are slightly different in size, but appear to contain the same data. (This can be checked!)
The files filename_all and filename_all1 were also downloaded from, respectively, the US Gov website, and the Chicago site. While they are the same size (uncompressed), and have the same headers, the data appears, at least naively, to be different.
End of explanation
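# Added sketch (an assumption about a reasonable pre-check, not part of the original
# analysis): before the full record-by-record comparison below, a quick first sanity
# check is to compare the number of rows in the two compressed files.
def count_lines(fname):
    with lzma.open(fname, "rt") as f:
        return sum(1 for _ in f)

print(count_lines(filename_all), count_lines(filename_all1))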
# NB: These methods encode a missing geometry and (-1, -1)
def yield_tuples(f):
for feature in chicago.generate_GeoJSON_Features(f, type="all"):
props = feature["properties"]
if props["crime"] == "HOMICIDE":
continue
coords = feature["geometry"]
if coords is None:
coords = (-1, -1)
else:
coords = coords["coordinates"]
event = (props["case"], props["crime"], props["type"], props["location"],
props["timestamp"], props["address"], coords[0], coords[1])
yield event
def load_as_tuples(f):
    events = list(yield_tuples(f))
    return events
def load_as_dict_to_lists(f):
events = dict()
for event in yield_tuples(f):
case = event[0]
if case not in events:
events[case] = []
events[case].append(event[1:])
return events
def compare_one_other(file1, file2):
in_only1 = []
in_only2 = []
with lzma.open(file1, "rt") as file:
events = load_as_dict_to_lists(file)
with lzma.open(file2, "rt") as file:
for event in yield_tuples(file):
case, e = event[0], event[1:]
if case not in events or e not in events[case]:
in_only2.append(event)
continue
events[case].remove(e)
if len(events[case]) == 0:
del events[case]
for case, e in events.items():
in_only1.append( (case,) + e )
return in_only1, in_only2
compare_one_other(filename_all, filename_all1)
Explanation: Compare the actual contents of the files.
This is rather memory intensive, so we go to a little effort to use less RAM.
End of explanation
import pyproj, numpy
proj = pyproj.Proj({'init': 'epsg:3435'}, preserve_units=True)
def check_file(file):
reader = csv.reader(file)
header = next(reader)
assert header[15] == "X Coordinate"
assert header[16] == "Y Coordinate"
assert header[19] == "Latitude"
assert header[20] == "Longitude"
assert header[21] == "Location"
for row in reader:
x, y = row[15], row[16]
lat, lon, latlon = row[19], row[20], row[21]
if x == "":
assert y == ""
assert lat == ""
assert lon == ""
assert latlon == ""
else:
assert latlon == "(" + lat + ", " + lon + ")"
xx, yy = proj(float(lon), float(lat))
assert int(x) == numpy.round(xx)
assert int(y) == numpy.round(yy)
with lzma.open(filename_all, "rt") as file:
check_file(file)
with lzma.open(filename_all1, "rt") as file:
check_file(file)
Explanation: Check that the data is encoded in the expected way
ID,Case Number,Date,Block,IUCR,Primary Type,Description,Location Description,Arrest,Domestic,Beat,District,Ward,Community Area,FBI Code,X Coordinate,Y Coordinate,Year,Updated On,Latitude,Longitude,Location
End of explanation
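# Added illustration (hypothetical coordinates): EPSG:3435 is the Illinois East
# State Plane system in feet, so a lon/lat pair maps to the X/Y columns like this:
# xx, yy = proj(-87.62, 41.88)   # lon, lat for downtown Chicago
# print(round(xx), round(yy))    # roughly the magnitude seen in the X/Y columns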
with lzma.open(filename_all, "rt") as file:
all_events = load_as_dict_to_lists(file)
frame = chicago.load_to_geoDataFrame()
frame.head()
known_diffs = {"JA233208", "JA228951", "JA249656", "JA256373", "JA256594", "JA256838"}
not_found = []
for index, row in frame.iterrows():
if row.crime == "HOMICIDE":
continue
if row.case in known_diffs:
continue
if row.case not in all_events:
not_found.append(row.case)
continue
event = all_events[row.case]
if len(event) > 1:
print("Doubled, skipping:", row.case)
continue
event = event[0]
assert(row.address == event[4])
assert(row.crime == event[0])
assert(row.location == event[2])
assert(row.timestamp == event[3])
assert(row.type == event[1])
if row.geometry is not None:
assert(row.geometry.coords[0][0] == event[5])
assert(row.geometry.coords[0][1] == event[6])
not_found
frame[frame.case.map(lambda x : x in known_diffs)]
Explanation: Compare the full dataset with the extract
Let us compare the last 12 months data with the full dataset.
There are a few differences, but they really are "few" compared to the size of the complete dataset. There appears to be no pattern in the differences.
End of explanation
<END_TASK> |
218 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Symmetric random walk on uniform distribution
Step1: We generate samples from the following distribution
$$
P(x)= \begin{cases} 1/21 & x \in \{0,1,2,\ldots,20\} \\ 0 & \text{otherwise.} \end{cases}
$$
The proposal distribution is
$$
Q\left(x^{\prime} ; x\right)= \begin{cases} 1/2 & x^{\prime}=x \pm 1,\; x \in \{1,2,\ldots,19\} \\ 1/2 & x^{\prime} \in \{0,1\},\; x = 0 \\ 1/2 & x^{\prime} \in \{19,20\},\; x = 20 \\ 0 & \text{otherwise.} \end{cases}
$$
Step2: $x_0 = 10$
Step3: $x_0 = 17$ | <ASSISTANT_TASK:>
Python Code:
# Example from section 29.4 & 29.6 (Fig 29.14 & 29.15) of https://www.inference.org.uk/itprnn/book.pdf
try:
import probml_utils as pml
except ModuleNotFoundError:
%pip install -qq git+https://github.com/probml/probml-utils.git
import probml_utils as pml
import matplotlib.pyplot as plt
import jax.numpy as jnp
import seaborn as sns
import os
pml.latexify(width_scale_factor=2, fig_height=2)
Explanation: Symmetric random walk on uniform distribution
End of explanation
# create transition matrix
prob_vect = jnp.repeat(0.5, 20)
transition_mat = jnp.diag(prob_vect, -1) + jnp.diag(prob_vect, 1)
transition_mat = transition_mat.at[0, 0].set(0.5)
transition_mat = transition_mat.at[20, 20].set(0.5)
states = [0, 1, 2, 3, 10, 100, 200, 400]
MARKER_SIZE = 2 if pml.is_latexify_enabled() else 4
def calculate_q_at_state_no(state_no, p_initial, transition_mat):
transition_mat_at_state_no = jnp.linalg.matrix_power(transition_mat, state_no)
q = jnp.matmul(transition_mat_at_state_no, p_initial) # q = p_initial * T^n = transpose(T)^n * p_initial
return q
# Note: plot_stem reads the global variable `q`, which is (re)computed inside the plotting loops below.
def plot_stem(ax):
markerline, stemlines, baseline = ax.stem(q, markerfmt="ro", linefmt="blue")
ax.set_yticks([])
ax.set_ylim(0, max(q) * 1.13)
ax.set_ylabel("$p_{" + str(state_no) + "}(x)$", fontdict={"rotation": 0}, labelpad=13)
baseline.set_linewidth(0)
stemlines.set_linewidth(1)
markerline.set_markersize(MARKER_SIZE)
sns.despine()
Explanation: We generate samples from the following distribution
$$
P(x)= \begin{cases} 1/21 & x \in \{0,1,2,\ldots,20\} \\ 0 & \text{otherwise.} \end{cases}
$$
The proposal distribution is
$$
Q\left(x^{\prime} ; x\right)= \begin{cases} 1/2 & x^{\prime}=x \pm 1,\; x \in \{1,2,\ldots,19\} \\ 1/2 & x^{\prime} \in \{0,1\},\; x = 0 \\ 1/2 & x^{\prime} \in \{19,20\},\; x = 20 \\ 0 & \text{otherwise.} \end{cases}
$$
End of explanation
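# Added sketch (not in the original notebook): the same target can also be explored by
# actually simulating the Metropolis chain. Because the proposal is symmetric and the
# target is uniform on {0,...,20}, every in-range proposal is accepted.
import numpy as np

def sample_chain(x0, n_steps, seed=0):
    rng = np.random.default_rng(seed)
    xs = [x0]
    for _ in range(n_steps):
        prop = xs[-1] + rng.choice([-1, 1])
        # a proposal outside {0,...,20} corresponds to the "stay put" half of the boundary rule
        xs.append(prop if 0 <= prop <= 20 else xs[-1])
    return np.array(xs)

# e.g. np.bincount(sample_chain(10, 20000), minlength=21) / 20001.0 approaches 1/21 per state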
x_0 = 10 # initial state
p_initial = [0] * 21
p_initial[x_0] = 1
p_initial = jnp.array(p_initial)
p_initial
fig, axs = plt.subplots(len(states), 1)
axs[0].set_title(f"Initial Condition $x_0 = {x_0}$")
for ind, state_no in enumerate(states):
q = calculate_q_at_state_no(
state_no, p_initial, transition_mat
) # q is proposed probabilities for p(x) on particular markov state
plot_stem(axs[ind])
if state_no != states[-1]:
axs[ind].set_xticks([])
else:
axs[ind].set_xticks(range(0, 21, 5))
axs[ind].set_xlabel("$x$")
pml.savefig(f"Initial_state_{x_0}")
print(sum(q), 1 / 21, q)
Explanation: $x_0 = 10$
End of explanation
x_0 = 17 # initial state
p_initial = [0] * 21
p_initial[x_0] = 1
p_initial = jnp.array(p_initial)
p_initial
fig, axs = plt.subplots(len(states), 1)
axs[0].set_title(f"Initial Condition $x_0 = {x_0}$")
for ind, state_no in enumerate(states):
ax = axs[ind]
q = calculate_q_at_state_no(state_no, p_initial, transition_mat)
plot_stem(ax)
if state_no != states[-1]:
axs[ind].set_xticks([])
else:
axs[ind].set_xticks(range(0, 21, 5))
axs[ind].set_xlabel("$x$")
pml.savefig(f"Initial_state_{x_0}")
print(sum(q), 1 / 21, q)
Explanation: $x_0 = 17$
End of explanation
<END_TASK> |
219 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
5. Squared Wheel and Other Reinventions
Many a times, we hear discussions about writing programs using Python in the "most Pythonic way." Sometimes it's more of a philosophy; however, quite often there might be a more concrete message
Step1: This "pattern" of code is sometimes referred to as a classic "procedural" approach.
Now let's consider a more Pythonic approach.
Step2: A little cleaner, but not too much. The main difference is to (largely) replace the counter-based while-loop with a more idiomatic for-loop, using a syntax for iterating over collections that mimics mathematical notation. (That is, "for each word in list_of_words ...") However, we still need to maintain a counter to omit the last space. But we can do better!
Step3: The counter always increases "in sync" with iteration over the list of words. The enumerate() function captures this pattern succinctly.
Pretty good, but we can do even better.
Step4: A single line of code just solved the whole problem! That's the power of the language and its full-featured library.
Example
Step5: But we can compute at least the mean with less code using the built-in library function, sum().
Step6: Now we can try to compute distances in a more Pythonic way, again using the more idiomatic for-loop syntax for iterating over elements of a collection
Step7: Finally, we can make it even more compact with list comprehensions, which are designed for "tiny for loops," that is, for loops whose iterations are independent and whose bodies are simple or small functions.
Step8: Example
Step9: And now everything is the same, in a more Pythonic way
Step10: Exercise. Now, enlightened by all this knowledge, let's try to write a function that takes a string, drops all the words that contain letter 'o', and return a new string without these words. In Python, it's truly a single line function | <ASSISTANT_TASK:>
Python Code:
# Task: Concatenate a list of strings into a single string
# delimited by spaces.
list_of_words = ['the', 'quick', 'brown', 'fox', 'jumped', 'over', 'the', 'lazy', 'dog']
i = 0 # A counter to maintain the current position in the list
new_string = '' # String to hold the output
while i < len(list_of_words): # Iterate over words
new_string += list_of_words[i]
i += 1
# Add a space to join the words together if it's not the last word
if i < len(list_of_words):
new_string += ' '
print ("The resulting string is '" + new_string + "'.")
Explanation: 5. Squared Wheel and Other Reinventions
Many a times, we hear discussions about writing programs using Python in the "most Pythonic way." Sometimes it's more of a philosophy; however, quite often there might be a more concrete message: "Do not re-invent the wheel." Indeed, Python is a wonderful language full of lots of built-in libraries. If you need anything, it is most likely implemented in the language. It takes no more than a dozen lines of code to solve many common problems. This is the result of Python's "batteries included" approach to design of the language and libraries.
Read this article for more information: https://docs.python.org/3/tutorial/stdlib.html
To illustrate some of these ideas, let's consider a couple different computational tasks and look at different ways to write a correct program for each.
Example: String concatenation. Let's try to create a string from the words in a list. A common style in other programming languages, like C/C++ or Java, is the "scalar loop form."
End of explanation
list_of_words = ['the', 'quick', 'brown', 'fox', 'jumped', 'over', 'the', 'lazy', 'dog']
i = 0 # Current position in the list
new_string = '' # String to hold the output
for word in list_of_words: # Iterate over words
new_string += word
i += 1
# Add a space to join the words together if it's not the last word
if i < len(list_of_words):
new_string += ' '
print ("The resulting string is '" + new_string + "'.")
Explanation: This "pattern" of code is sometimes referred to as a classic "procedural" approach.
Now let's consider a more Pythonic approach.
End of explanation
list_of_words = ['the', 'quick', 'brown', 'fox', 'jumped', 'over', 'the', 'lazy', 'dog']
# Create and empty string
new_string = ''
# Iterate through all words and enumerate them
for i, word in enumerate(list_of_words):
new_string += word
# Add a space to join the words together if it's not the last word
if i < len(list_of_words)-1:
new_string += ' '
print ("The resulting string is '" + new_string + "'.")
Explanation: A little cleaner, but not too much. The main difference is to (largely) replace the counter-based while-loop with a more idiomatic for-loop, using a syntax for iterating over collections that mimics mathematical notation. (That is, "for each word in list_of_words ...") However, we still need to maintain a counter to omit the last space. But we can do better!
End of explanation
list_of_words = ['the', 'quick', 'brown', 'fox', 'jumped', 'over', 'the', 'lazy', 'dog']
new_string = ' '.join(list_of_words)
print ("The resulting string is '" + new_string + "'.")
Explanation: The counter always increases "in sync" with iteration over the list of words. The enumerate() function captures this pattern succinctly.
Pretty good, but we can do even better.
End of explanation
array = [1, 2, 3, 4, 5, 6]
mean = 0
for i in range(len(array)):
mean += array[i]
mean /= len(array)
dist = []
for i in range(len(array)):
dist += [array[i] - mean]
print ("The mean of the array", array, "is", mean, "and the distances are", dist)
Explanation: A single line of code just solved the whole problem! That's the power of the language and its full-featured library.
Example: Computing the mean. Now let's look at another problem where we compute the mean of all the elements in an array. Then we will calculate the distance from the mean for every element.
End of explanation
array = [1, 2, 3, 4, 5, 6]
mean = sum(array) / len(array)
dist = []
for i in range(len(array)):
dist += [array[i] - mean]
print ("The mean of the array", array, "is", mean, "distances are", dist)
Explanation: But we can compute at least the mean with less code using the built-in library function, sum().
End of explanation
array = [1, 2, 3, 4, 5, 6]
mean = sum(array) / len(array)
dist = []
for element in array:
dist += [element - mean]
print ("The mean of the array", array, "is", mean, "distances are", dist)
Explanation: Now we can try to compute distances in a more Pythonic way, again using the more idiomatic for-loop syntax for iterating over elements of a collection:
End of explanation
array = [1, 2, 3, 4, 5, 6]
mean = sum(array) / len(array)
dist = [element - mean for element in array]
print ("The mean of the array", array, "is", mean, "distances are", dist)
Explanation: Finally, we can make it even more compact with list comprehensions, which are designed for "tiny for loops," that is, for loops whose iterations are independent and whose bodies are simple or small functions.
End of explanation
first_names = ['Leonard', 'Sheldon', 'Howard', 'Rajesh']
last_names = ['Hofstadter', 'Cooper', 'Wolowitz', 'Koothrappali']
name_dict = {}
for name_ind in range(len(last_names)):
name_dict[last_names[name_ind]] = first_names[name_ind]
print ("Name dictionary is", name_dict)
Explanation: Example: Lists to dictionaries. Now let's try to create a new dictionary from two lists.
Suppose we have two lists of first and last names. In this case, the lists are aligned: there is a one-to-one correspondence between elements of one list and the other. Further suppose our task is to create a new dictionary that would allow us to quickly look up the first name, given the last name.
End of explanation
first_names = ['Leonard', 'Sheldon', 'Howard', 'Rajesh']
last_names = ['Hofstadter', 'Cooper', 'Wolowitz', 'Koothrappali']
name_dict = dict(zip(last_names, first_names))
print ("Name dictionary is", name_dict)
Explanation: And now everything is the same, in a more Pythonic way: simultaneously iterating over two collections where there is a one-to-one correspondence is a pattern referred to as a "zipper iteration," which Python handles nicely via its zip() function.
End of explanation
def pick_o(s):
pass
s = 'the quick brown fox jumped over the lazy dog'
true_string = 'brown fox over dog'
new_string = pick_o(s)
print("pick_o('{}') -> '{}' [True: '{}']".format(s, new_string, true_string))
assert new_string == true_string
Explanation: Exercise. Now, enlightened by all this knowledge, let's try to write a function that takes a string, drops all the words that contain letter 'o', and return a new string without these words. In Python, it's truly a single line function :)
End of explanation
<END_TASK> |
220 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
AlexNet in Keras
In this notebook, we leverage an AlexNet-like deep, convolutional neural network to classify flowers into the 17 categories of the Oxford Flowers data set. Derived from this earlier notebook.
Set seed for reproducibility
Step1: Load dependencies
Step2: Load and preprocess data
Step3: Design neural network architecture
Step4: Configure model
Step5: Configure TensorBoard (for part 5 of lesson 3)
Step6: Train! | <ASSISTANT_TASK:>
Python Code:
import numpy as np
np.random.seed(42)
Explanation: AlexNet in Keras
In this notebook, we leverage an AlexNet-like deep, convolutional neural network to classify flowers into the 17 categories of the Oxford Flowers data set. Derived from this earlier notebook.
Set seed for reproducibility
End of explanation
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
from keras.layers.normalization import BatchNormalization
from keras.callbacks import TensorBoard # for part 3.5 on TensorBoard
Explanation: Load dependencies
End of explanation
import tflearn.datasets.oxflower17 as oxflower17
X, Y = oxflower17.load_data(one_hot=True)
Explanation: Load and preprocess data
End of explanation
model = Sequential()
model.add(Conv2D(96, kernel_size=(11, 11), strides=(4, 4), activation='relu', input_shape=(224, 224, 3)))
model.add(MaxPooling2D(pool_size=(3, 3), strides=(2, 2)))
model.add(BatchNormalization())
model.add(Conv2D(256, kernel_size=(5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(3, 3), strides=(2, 2)))
model.add(BatchNormalization())
model.add(Conv2D(256, kernel_size=(3, 3), activation='relu'))
model.add(Conv2D(384, kernel_size=(3, 3), activation='relu'))
model.add(Conv2D(384, kernel_size=(3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(3, 3), strides=(2, 2)))
model.add(BatchNormalization())
model.add(Flatten())
model.add(Dense(4096, activation='tanh'))
model.add(Dropout(0.5))
model.add(Dense(4096, activation='tanh'))
model.add(Dropout(0.5))
model.add(Dense(17, activation='softmax'))
model.summary()
Explanation: Design neural network architecture
End of explanation
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
Explanation: Configure model
End of explanation
tensorbrd = TensorBoard('logs/alexnet')
Explanation: Configure TensorBoard (for part 5 of lesson 3)
End of explanation
model.fit(X, Y, batch_size=64, epochs=32, verbose=1, validation_split=0.1, shuffle=True,
callbacks=[tensorbrd])
Explanation: Train!
End of explanation
<END_TASK> |
221 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Single Particle Systems
Step1: Next, we construct the fermion basis using the constructor spinless_fermion_basis_1d. Since we are interested in a free model, it suffices to consider a single particle Nf=1.
Step2: In defining the site-coupling list, we set a convention that the operator indices grow to the right (this is not required by QuSpin, it's merely our choice and we do it for convenience), as written out in the Hamiltonian above. Thus, the fermion hopping operator (unlike bosons) requires two different lists to reflect the sign flip in the hermitian conjugate term.
The static and dynamic lists as well as building the real-space Hamiltonian is the same as for the BHM. Last, we diagonalise the real-space Hamiltonian.
Step3: In momentum space, $k\in\mathrm{BZ'}=[-\pi/2,\pi/2)$, the Hamiltonian becomes block diagonal
Step4: To create the block-diagonal Hamiltonian, we invoke the block_diag_hamiltonian method. It takes both required and optional arguments, and returns the transformation, which block-diagonalises the Hamiltonian (in our case the Fourier transform) and the block-diagonal Hamiltonian object. Required arguments, in order of appearance, are the blocks, the static and dynamic lists, the basis constructor, basis_args, and the data type. Since we expect the Hamiltonian to contain the Fourier factors $\exp(-ik)$, we know to choose a complex data type. block_diag_hamiltonian also accepts some optional arguments, such as the flags for disabling the automatic built-in symmetry checks.
Step5: We now compare the real-space and momentum-space spectra, to check if they match | <ASSISTANT_TASK:>
Python Code:
from quspin.operators import hamiltonian # Hamiltonians and operators
from quspin.basis import spinless_fermion_basis_1d # Hilbert space fermion basis
from quspin.tools.block_tools import block_diag_hamiltonian # block diagonalisation
import numpy as np # generic math functions
#
##### define model parameters #####
L=6 # system size
J=1.0 # uniform hopping contribution
deltaJ=0.1 # bond dimerisation
Delta=0.5 # staggered potential
Explanation: Single Particle Systems: coding the SSH model in real and momentum space
This tutorial shows how to use QuSpin to construct single-particle Hamiltonians in real space and momentum space. To demonstrate this, we use the Su-Schrieffer-Heeger (SSH) model of free spinless fermions on a dimerised chain:
$$ H = \sum_{j=0}^{L-1} -(J+(-1)^j\delta J)\left(c_jc^\dagger_{j+1} - c^\dagger_{j}c_{j+1}\right) + \Delta\sum_{j=0}^{L-1}(-1)^jn_j,$$
where $J$ is the uniform component of the hopping, $\delta J$ -- the bond dimerisation, and $\Delta$ -- a staggered potential.
We begin by loading the QuSpin libraries and define the model parameters
End of explanation
##### construct single-particle Hamiltonian #####
# define basis
basis=spinless_fermion_basis_1d(L,Nf=1)
print(basis)
Explanation: Next, we construct the fermion basis using the constructor spinless_fermion_basis_1d. Since we are interested in a free model, it suffices to consider a single particle Nf=1.
End of explanation
# define site-coupling lists
hop_pm=[[-J-deltaJ*(-1)**i,i,(i+1)%L] for i in range(L)] # PBC
hop_mp=[[+J+deltaJ*(-1)**i,i,(i+1)%L] for i in range(L)] # PBC
stagg_pot=[[Delta*(-1)**i,i] for i in range(L)]
# define static and dynamic lists
static=[["+-",hop_pm],["-+",hop_mp],['n',stagg_pot]]
dynamic=[]
# build real-space Hamiltonian
H=hamiltonian(static,dynamic,basis=basis,dtype=np.float64)
print("FH Hamiltonian is real space is:")
print(H.toarray())
# diagonalise real-space Hamiltonian
E,V=H.eigh()
Explanation: In defining the site-coupling list, we set a convention that the operator indices grow to the right (this is not required by QuSpin, it's merely our choice and we do it for convenience), as written out in the Hamiltonian above. Thus, the fermion hopping operator (unlike bosons) requires two different lists to reflect the sign flip in the hermitian conjugate term.
The static and dynamic lists as well as building the real-space Hamiltonian is the same as for the BHM. Last, we diagonalise the real-space Hamiltonian.
End of explanation
# define basis blocks and arguments
blocks=[dict(Nf=1,kblock=i,a=2) for i in range(L//2)] # only L//2 distinct momenta
basis_args = (L,)
Explanation: In momentum space, $k\in\mathrm{BZ'}=[-\pi/2,\pi/2)$, the Hamiltonian becomes block diagonal:
$$ H !=! \sum_{k\in\mathrm{BZ'}} (a^\dagger_k,b^\dagger_k)
\left(\begin{array}{cc}
\Delta & -(J+\delta J)\mathrm e^{-i k} - (J-\delta J)\mathrm e^{+i k} \
-(J+\delta J)\mathrm e^{+i k} - (J-\delta J)\mathrm e^{-i k} & -\Delta
\end{array}
\right)
\left(! \begin{array}{c}
a_k\
b_k
\end{array}
!\right)$$
For translation invariant single-particle models, therefore, the user might prefer to use momentum space. This can be achieved using QuSpin's block_tools. The idea behind it is simple: the main purpose is to create the full Hamiltonian in block-diagonal form, where the blocks correspond to pre-defined quantum numbers. In our case, we would like to use momentum or kblock's. Note that the unit cell in the SSH model contains two sites, which we encode using the variable a=2. Thus, we can create a list of dictionaries -- blocks, each element of which defines a single symmetry block. If we combine all blocks, we exhaust the full Hilbert space. All other basis arguments, such as the system size, we store in the variable basis_args. We mention in passing that this procedure is independent of the symmetry, and can be done using all symmetries supported by QuSpin, not only translation.
End of explanation
# construct block-diagonal Hamiltonian
FT,Hblock = block_diag_hamiltonian(blocks,static,dynamic,spinless_fermion_basis_1d,basis_args,np.complex128,
get_proj_kwargs=dict(pcon=True))
print(np.around(Hblock.toarray(),2))
# diagonalise momentum-space Hamiltonian
Eblock,Vblock=Hblock.eigh()
Explanation: To create the block-diagonal Hamiltonian, we invoke the block_diag_hamiltonian method. It takes both required and optional arguments, and returns the transformation, which block-diagonalises the Hamiltonian (in our case the Fourier transform) and the block-diagonal Hamiltonian object. Required arguments, in order of appearance, are the blocks, the static and dynamic lists, the basis constructor, basis_args, and the data type. Since we expect the Hamiltonian to contain the Fourier factors $\exp(-ik)$, we know to choose a complex data type. block_diag_hamiltonian also accepts some optional arguments, such as the flags for disabling the automatic built-in symmetry checks.
End of explanation
##### plot spectra
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(np.arange(H.Ns),E/L,marker='o',color='b',label='real space')
plt.plot(np.arange(Hblock.Ns),Eblock/L,marker='x',color='r',markersize=2,label='momentum space')
plt.xlabel('state number',fontsize=16)
plt.ylabel('energy',fontsize=16)
plt.xticks(fontsize=16)
plt.yticks(fontsize=16)
plt.legend(fontsize=16)
plt.grid()
plt.tight_layout()
plt.show()
Explanation: We now compare the real-space and momentum-space spectra, to check if they match
End of explanation
<END_TASK> |
222 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Outline
Glossary
7. Observing Systems
Previous
Step1: Import section specific modules
Step2: 7.7 Propagation effects <a id='instrum
Step3: Figure 7.7.1
Step4: Figure 7.7.2
Step5: Figure 7.7.3
Step6: Following the scattering, we need to smooth the image a little to better illustrate the result
Step7: Make the RGB cube and you're done
Step8: Figure 7.7.4
Step9: Figure 7.7.5
Step10: Figure 7.7.5
Step11: Figure 7.7.6
Step12: Figure 7.7.7
Step13: Figure 7.7.8 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
Explanation: Outline
Glossary
7. Observing Systems
Previous: 7.6 Polarization and Antenna Feeds
Next: 7.8 Radio Frequency Interference (RFI)
Import standard modules:
End of explanation
import scipy.signal
from IPython.display import Image
import matplotlib.image as mpimg
# This section uses the optional slimscat package (https://krosenfeld.github.io/slimscat/current/)
try:
import slimscat
doExample = True
except ImportError:
print 'Warning: could not import slimscat, will not run example'
doExample = False
Explanation: Import section specific modules:
End of explanation
Image(filename='figures/corrugatedPhaseScreen.png', width=400)
Explanation: 7.7 Propagation effects <a id='instrum:sec:propagation'></a>
Cosmic radio waves traverse a variety of ionized and neutral media before measurement at a radio telescope. Some effects are negliable and/or easily corrected for, however, others can prevent any useful astrophysical inference at all regardless of the sensitivity of the telescope. This section will provide a brief overview of the dominant sources of signal corruption in the troposphere, ionosphere and interstellar media.
7.7.1 Interstellar medium <a id='instrum:sec:ism'></a>
The interstellar medium is filled with plasma that is not uniformly distributed, which causes radio waves to scatter and astrophysical sources to scintillate (vary in apparent brightness), increasingly so at low frequency. Here we cover some basic scattering theory, which is of course applicable to the other astrophysical media discussed here.
An electro-magnetic wave passing through any medium with refractive index inhomogeneity will cause the wave to scatter. This effect is most commonly modeled as a thin screen (relative the to the distance to the source) normal to the line-of-sight, which adds a random phase offset to the incoming wave at each transverse point, yielding what can be viewed as a 'corrugated' (not flat) wavefront.
End of explanation
Image(filename='figures/scattering_table.png', width=600)
Image(filename='figures/scattering_imaging.png', width=600)
Explanation: Figure 7.7.1: Schematic diagram showing on a plane by a irregular slab of plasma. As can be imagined, what should be measured as constant phase across the radio telescope array, will now include a stochastic phase corruption due to differeing refractive indices (Image credit: Sanjay Bhatnagar PhD Thesis)
The degree of corrugation can be described by the phase structure function,
$$D_\phi (\mathbf{x},\mathbf{x'}) =\ < ( \phi(\mathbf{x} +\mathbf{x'}) - \phi(\mathbf{x}))^2>,$$
where $\mathbf{x}$ and $\mathbf{x'} $ represent two points on the screen.
Assuming isotropic (no directional dependence), power law turbulence simplifies the expression to
$$D_{\phi} (r) = (r/r_{\rm diff})^{\beta},\ \ (r^2 = \mathbf{x}^2 -\mathbf{x'}^2),$$
where $r_{\rm diff}$, is the distance on the phase screen where the mean phase difference is equal to 1 radian. The exponent of $\beta = 5/3$ is characteristic of Kolmogorov turbulence, which empirical results have shown to be a reasonable approximation for the ISM and troposphere in many circumstances (see Thompson et al. 2001, and refernce therein).
Another important length scale is the Fresnel scale, defined as the distance across the phase screen where the geometrical path difference is $1/2$ radian, $r_{\rm F} = \sqrt{\lambda D/2\pi}$, where $D$ is the observer - scattering screen distance. These two length scales, $r_{\rm F}$ and $r_{\rm diff}$, define the nature of the scattering which is split into the \emph{strong} and \emph{weak} regimes. In weak scattering, $ r_{\rm diff} \gg r_{\rm F}$ and hence, $D_{\phi}(r_{\rm F}) \ll 1$. In the regime of \emph{strong scattering}, $ r_{\rm diff} \ll r_{\rm F}$ implying $D_{\phi}(r_{\rm F}) \gg 1$.
Using a frozen screen assumption, where micro-scale diffusion within the scattering screen is much smaller than the bulk motion, the structure function becomes $D(R)=D(t)|_{R=v T}$ where $v$ is the scattering screen transverse velocity and T is the integration time. This simply means one assumes the screen that causes the phase corruptions changes structure on time scales much larger than it takes for the for the screen to move across the telescope's field of view. This assumptions allows the structure function to be parameterise in terms of bulk velocity of the screen, rather than position. This has a number of advantages, as will become clear.
These length and timescales determine a wide range of scattering effects in radio imaging, which are summarized in Figure 7.7.2. A full treatment of these cannot possibly be covered here, however, Table 7.7.1 provides indicative values for a range of media, which can then be plugged into the ScatterBrane software below to test different regimes and gain a more intuitive understanding. The additional reading section at the end of the chapter will provide further references for more in-depth explanations.
Table 7.7.1: Typical values of scattering media in astronomy (Credit: Narayan 1992)
End of explanation
if doExample:
# construct gaussian source
nx,ny = 220,256
yy,xx = np.meshgrid(np.arange(ny)-ny/2,np.arange(nx)-nx/2,indexing='ij')
isrc = np.exp(-0.5*(xx**2 + yy**2) / (0.07*np.min([nx,ny]))**2)
slimscat.generate_screen(screenfile='gaussian_screen.bin')
# scatter source
iss = slimscat.run_slimscat(isrc,1.,screenfile='gaussian_screen.bin')
if doExample:
plt.figure(figsize=(15,6))
plt.subplot(121)
plt.imshow(isrc,vmin=isrc.min(),vmax=isrc.max())
plt.title('source')
plt.colorbar()
plt.subplot(122)
plt.imshow(iss,vmin=isrc.min(),vmax=isrc.max())
plt.title('scattered')
plt.colorbar()
Explanation: Figure 7.7.2: Different imaging regimes in the presence of scattering, shown as function of the integration time and Fresnel time ($t_{\rm F} = r_{\rm F}/v$) on the vertical axis and the strength of the scattering shown on the horizontal axis. These regimes can be explored using the ScatterBrane code below in combination with typical values from Table 7.7.1. (Credit: Narayan 1992)
The following code is taken from the SLIMSCAT package courtesy of Michael Johnson and Katherine Rosenfeld. Changing the input parameter values and input image will enable a more intuitive understanding of scattering effects in the different imaging regimes shown in Figure 7.7.2.
End of explanation
if doExample:
# generate scattering file
slimscat.generate_screen(wavelength=0.87e-6,dx=0.2,ips=4,screenfile='gargantua_screen.bin')
# load RGB image with rough scale appropriate for Sgr A*
model = mpimg.imread('figures/gargantua.jpg')
dx = 50/300.
nx,ny,nchan = model.shape
# scatter each rgb channel separately (using same screen)
r = slimscat.run_slimscat(model[:,:,0],dx,screenfile='gargantua_screen.bin')
g = slimscat.run_slimscat(model[:,:,1],dx,screenfile='gargantua_screen.bin')
b = slimscat.run_slimscat(model[:,:,2],dx,screenfile='gargantua_screen.bin')
Explanation: Figure 7.7.3: Example of the effect of strong scattering on a Gaussian source.
Clearly this degrades the quality of the observations significantly. Trying playing around with all the (hidden) input parameters to the slimscat.generate_screen and slimscat.run_slimscat functions. For more information see http://krosenfeld.github.io/slimscat. We finish with one of their fun examples, which is scattering Gargantua from the movie Interstellar. While this is simply illustrative, it is an real observation challenge that is faced by researchers trying to image the shadow of the black hole at the centre of our Galaxy. This is done for three colour filters.
End of explanation
if doExample:
N=5
t = 1 - np.abs(np.linspace(-1, 1, N))
kernel = t.reshape(N, 1) * t.reshape(1, N)
kernel /= kernel.sum()
rs = scipy.signal.convolve2d(r/255., kernel, mode='same')
gs = scipy.signal.convolve2d(g/255., kernel, mode='same')
bs = scipy.signal.convolve2d(b/255., kernel, mode='same')
Explanation: Following the scattering, we need to smooth the image a little to better illustrate the result:
End of explanation
if doExample:
rgb_uint8 = (np.dstack((rs,gs,bs)) * 255.).astype(np.uint8)
plt.figure(figsize=(14,20))
plt.subplot(211)
plt.imshow(model[:,:,:])
plt.title('source')
plt.subplot(212)
plt.imshow(rgb_uint8)
plt.title('scattered')
Explanation: Make the RGB cube and you're done:
End of explanation
Image(filename='figures/iono.png', width=500)
Explanation: Figure 7.7.4: Fun colour example of a scattered RGB image of the black hole sillhouette of Gargantua (from the movie Interstellar).
If you increase the wavelength of the observation, or decrease the frequency (see input to slimscat.generate_screen), you'll see how interstellar scattering can be a major barrier to successfully measuring the shadow of a black hole with the Event Horizon Telescope! See http://www.eventhorizontelescope.org/ for more details on this project.
7.7.2 Ionosphere <a id='instrum:sec:ion'></a>
The ionosphere is a layer of ionized plasma with an altitude range of 60-1000 km above sea level. This is driven mostly by ultraviolet photons from the Sun that ionize the upper layers of the atmosphere. As a result, there is a natural diurnal cycle, with the layers nearest the Sun being the mostly highly ionized. This level of ionization is measured with the unit Total Electron Content (TEC), where 1 TEC unit (TECU) = $10^{16}$ m$^{-2}$. Figure~xx shows a global map with the TEC values at a given time, showing that this is most concentrated near the equator.
End of explanation
Image(filename='figures/Atmosphere_with_Ionosphere.png', width=400)
Explanation: Figure 7.7.5: Total Electron Content above the Earth at given time, showing the enhancement towards the equatorial regions nearest the Sun. (Image credit: http://www.astrosurf.com/luxorion/qsl-review-propagation-software-research.htm)
The ionosphere is a subject of intensive and complex study itself. Here we just describe the primary (adverse) effects it has on radio observations. Like any plasma, the ionosphere has a oscillation or plamsa frequency, given by $$\nu_p = \frac{e}{2\pi}\sqrt{\frac{n_e}{\epsilon_0 m}} \ \simeq 9\sqrt{n_e} \ \ {\rm (Hz)},$$ where $e, n_e, \epsilon_0$ are the electron charge, density and permitivity of free space respectively. A typical value for the ionosphere's plasma frequency is less than $\nu_p \sim 15$ MHz. Note than the ionosphere is opaque to any radiation below this frequency. While this disadvantage in low frequency observations of the Universe from Earth, it has also be used to aid long-distance radio communication by reflecting radio waves off the lower surface of the ionosphere.
The free electrons in the ionosphere cause a propogation delay of an incident electro-magnetic wave, which can be represented as an "excess path length", $$L_0 \simeq -\frac{1}{2} \int^\infty_0 \left[ \frac{\nu_p(h)}{\nu}\right]^2 {\rm d}h, $$
where $h$ is the height above sea level. Note the $\nu^2$ dependance, which is why the ionophere is primarily a problem for low radio frequency observations, such as attempts to detect the epoch of reionization. Within the Jones matrix formalism, this delay can be represented as follows,
$$
\mathbf{Z} = Z = \left[ \begin{array}{cc}\mathrm{e}^{-2\pi \imath \tau / \lambda} & 0 \ 0 & \mathrm{e}^{-2\pi \imath \tau / \lambda} \end{array} \right] = \mathrm{e}^{-2\pi \imath \tau / \lambda},
$$
which is of course the same as any other phase only effect, of which there are many described in this chapter. Although the $\mathbf{Z}$ Jones matrix has no amplitude effect, it is important to note that the variable or differential delay imposed by the ionsphere result in a loss of coherence if data are averaged on timescales larger than the local dynamical timescale for the ionosphere (seconds to tens of minutes). It this is loss of coherence that also results in an amplitude loss.
End of explanation
Image(filename='figures/iono_regimes.png', width=400)
Explanation: Figure 7.7.5: Comparison of atmospoheric and ionospheric temperature and density profiles as a function of height above sea level.
For a given observing frequency, the extent to which the ionosphere corrupts the measured visibilies and the ease with which one can correct for that depends on the array geometry and station field of view (FoV). These can be best summarized as follows:
small array, small FoV: strong phase coherence across array and FoV
large array, small FoV: some/no phase coherence across array, strong phase coherence across FoV
large array, small FoV: strong phase coherence across array, some phase coherence across FoV
large array, large FoV: some/no phase coherence across array, some phase coherence across FoV
Actual scales for the above descriptions depend on the observing frequencies and ionospheric turbulence, however, at a few hundred MHz indicative values for "small arrays" can be for baselines less than 1 km; and an indicative value of a "small Fov" is < 10 arcmin$^2$. Cases 3 and 4 above demand that direction dependent calibration be employed to accurate solve for the ionospheric effects. This topic is under active research due to the slew of low frequency, wide FoV instruments under contruction or in their design phase.
End of explanation
Image(filename='figures/total_precipitable_water_201008.jpg', width=600)
Explanation: Figure 7.7.6: Four different radio array regimes with regard to ionospheric calibration (see text). Credit: Huib Intema (2013 Lorentz Center presentation).
7.7.3 Trophosphere <a id='instrum:sec:propagation'></a>
This is the lowest layer of the neutral atmosphere and has typical scale height of 2 km. The refractivity of water vapour in this layer at radio wavelengths is of order 20x greater than in the optical/near-infrared (recall that typical "seeing" in the optical window is ~1 arcsec). Since water vapour is poorely mixed in this layer, there are phase fluctuations introduced due to the refractive index inhomogenities that result. The delay due to a non-unity refractive index ($n$) is $$\Delta t = \frac{1}{c} \int (n - 1) dy, $$ where c is the speed of light the refractive path length is the straight line path. The refractivity of moist air is described by the empirical formula $$N = 77.6\frac{p_D}{T} + 64.8\frac{p_V}{T} + 3.776 \times 10^5 \frac{p_V}{T^2}, $$ where the refractivity $N \equiv 10^6(n-1)$, $T$ is the temperature in Kelvins, $p_D$ and $p_V$ are the partial pressures of dry air and water vapor respectively. Since the atmosphere obeys the equation of hydrostatic equilibrium (to a good approximation), it is clear that to zero-th order, the trosphere's refractive index will have some dependence on time-of-day, time-of-season, site altitude and latitude. However, water vapour is not well mixed in the atmosphere and is therefore not well correlated with ground-based meteorological parameters.
End of explanation
Image(filename='figures/ALMAatmo.jpg', width=600)
Explanation: Figure 7.7.7: Map of the Earth showing water vapour content during August 2010. (NASA image by Robert Simmon)
The troposphere has several effects on high frequency ($>> 10$~GHz) radio observations, the most important (or detrimental) are:
absorption
refraction
turbulence-induced scattering
The absorption (also known as the opacity) is due to pressure-broadened molecular resonances of water vapour, oxygen and ozone. Between 100 GHz to 1 THz there are a series of strong water vapour lines, necessitating the contruction of millimetre observatories at the high altitudes ($>2$ km), where several atmospheric windows are made accesible (see Figure 7.8.2).
The mean refraction is an effect that must be corrected in the telescope pointing, increasingly so at low elevation. However, small-scale tropospheric turbulence results in rapid phase fluctuations, significantly limiting telescope sensitivity. At $>$100 GHz, coherence times are limited to $\sim$10 seconds for even some of the best sites in the world.
The power spectrum of these phase fluctuations are usually described as a Kologmov turbulent screen, with a phase structure function defined as:
$$D_\phi(d) = \ <[\Phi(x) - \Phi(x - d)]^2>, $$
where $\Phi(x)$ is the phase at point $x$, $\Phi(x-d)$ is the phase at point $x-d$, and the angle brackets represent the ensemble average. This implies that the closer two points are to one another, the more similar their phase delay will be. The theory on this subject in particular is vast and not covered in any further detail here, but see Thompson et al. 2001, and reference therein.
In terms of the Jones matrix formalism, the troposhere is equiavalent to the ionosphere, however, with the additional amplitude loss due to non-zero atmospheric opacity.
End of explanation
Image(filename='figures/carilli_holdaway1999.png', width=600)
Explanation: Figure 7.7.8: Transmission curves (fraction of radiation that is able to traverse a medium) above the ALMA site Chajnantor. The lines correspond to the frequency coverage of the ALMA receiver bands (1-10). (Image credit: ESO http://www.eso.org/sci/facilities/alma/documents.html)
There are different regimes of atmosphereic phase corruptions, with the boundary conditions being two antennas looking through the exact same atmosphere, to VLBI baselines where the weather above two antennas would be entirely un-correlated. The below figure demonstrates the level of correlation in phase above the Very Large Array as a function of baseline length. The changes in slopes reveal the changing slope with the structure function.
End of explanation
<END_TASK> |
223 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Interact Exercise 2
Imports
Step1: Plotting with parameters
Write a plot_sin1(a, b) function that plots $sin(ax+b)$ over the interval $[0,4\pi]$.
Customize your visualization to make it effective and beautiful.
Customize the box, grid, spines and ticks to match the requirements of this data.
Use enough points along the x-axis to get a smooth plot.
For the x-axis tick locations use integer multiples of $\pi$.
For the x-axis tick labels use multiples of pi using LaTeX
Step2: Then use interact to create a user interface for exploring your function
Step3: In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument
Step4: Use interact to create a UI for plot_sine2.
Use a slider for a and b as above.
Use a drop down menu for selecting the line style between a dotted blue line line, black circles and red triangles. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact, interactive, fixed
from IPython.display import display
Explanation: Interact Exercise 2
Imports
End of explanation
def plot_sine1(a, b):
f = plt.figure(figsize=(16,2))
x = np.linspace(0, 4*np.pi, 1000)
plt.plot(x, np.sin(a*x + b))
plt.xlim(right=4*np.pi)
plt.tick_params(direction='out')
plt.xticks([np.pi, 2*np.pi, 3*np.pi, 4*np.pi],['$\pi$','$2\pi$','$3\pi$','$4\pi$'])
plot_sine1(5, 3.4)
Explanation: Plotting with parameters
Write a plot_sin1(a, b) function that plots $sin(ax+b)$ over the interval $[0,4\pi]$.
Customize your visualization to make it effective and beautiful.
Customize the box, grid, spines and ticks to match the requirements of this data.
Use enough points along the x-axis to get a smooth plot.
For the x-axis tick locations use integer multiples of $\pi$.
For the x-axis tick labels use multiples of pi using LaTeX: $3\pi$.
End of explanation
interact(plot_sine1, a=(0.0,5.0,0.1), b=(-5.0,5.0,0.1));
assert True # leave this for grading the plot_sine1 exercise
Explanation: Then use interact to create a user interface for exploring your function:
a should be a floating point slider over the interval $[0.0,5.0]$ with steps of $0.1$.
b should be a floating point slider over the interval $[-5.0,5.0]$ with steps of $0.1$.
End of explanation
def plot_sine2(a, b, style='b-'):  # style defaults to a blue line, as required
f = plt.figure(figsize=(16,2))
x = np.linspace(0, 4*np.pi, 100)
plt.plot(x, np.sin(a*x + b), style)
plt.xlim(right=4*np.pi)
plt.tick_params(direction='out')
plt.xticks([np.pi, 2*np.pi, 3*np.pi, 4*np.pi],['$\pi$','$2\pi$','$3\pi$','$4\pi$'])
plot_sine2(4.0, -1.0, 'r--')
Explanation: In matplotlib, the line style and color can be set with a third argument to plot. Examples of this argument:
dashed red: r--
blue circles: bo
dotted black: k.
Write a plot_sine2(a, b, style) function that has a third style argument that allows you to set the line style of the plot. The style should default to a blue line.
End of explanation
interact(plot_sine2, a=(0.0,5.0,0.1), b=(-5.0,5.0,0.1), style={'dotted blue line': 'b.', 'black circles': 'ko', 'red triangles': 'r^'});
assert True # leave this for grading the plot_sine2 exercise
Explanation: Use interact to create a UI for plot_sine2.
Use a slider for a and b as above.
Use a drop down menu for selecting the line style between a dotted blue line, black circles and red triangles.
End of explanation
<END_TASK> |
224 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
XOR Network
Data generation
Step1: XOR cannot be solved with a single-layer neural network
Step2: Notice that the error is far from zero.
Actually, the network always predicts approximately $0.5$, regardless of the input data. That yields an error of about $0.25$, because we use mean squared error ($0.5^2 = 0.25$).
Step3: Accuracy is not that hard to predict...
Step4: Xor Network with 2 layers
Step5: This time the network works a tad better | <ASSISTANT_TASK:>
Python Code:
def create_examples(N, batch_size):
A = np.random.binomial(n=1, p=0.5, size=(batch_size, N))
B = np.random.binomial(n=1, p=0.5, size=(batch_size, N,))
X = np.zeros((batch_size, 2 *N,), dtype=np.float32)
X[:,:N], X[:,N:] = A, B
Y = (A ^ B).astype(np.float32)
return X,Y
X, Y = create_examples(3, 2)
print(X[0,:3], "xor", X[0,3:],"equals", Y[0])
print(X[1,:3], "xor", X[1,3:],"equals", Y[1])
Explanation: XOR Network
Data generation
End of explanation
import math
class Layer(object):
def __init__(self, input_size, output_size):
tensor_b = tf.zeros((output_size,))
self.b = tf.Variable(tensor_b)
tensor_W = tf.random_uniform((input_size, output_size),
-1.0 / math.sqrt(input_size),
1.0 / math.sqrt(input_size))
self.W = tf.Variable(tensor_W)
def __call__(self, x):
return tf.matmul(x, self.W) + self.b
tf.ops.reset_default_graph()
sess = tf.InteractiveSession()
N = 5
# x represents input data
x = tf.placeholder(tf.float32, (None, 2 * N), name="x")
# y_golden is a reference output data.
y_golden = tf.placeholder(tf.float32, (None, N), name="y")
layer1 = Layer(2 * N, N)
# y is a linear projection of x with nonlinearity applied to the result.
y = tf.nn.sigmoid(layer1(x))
# mean squared error over all examples and all N output dimensions.
cost = tf.reduce_mean(tf.square(y - y_golden))
# create a function that will optimize the neural network
optimizer = tf.train.AdagradOptimizer(learning_rate=0.3)
train_op = optimizer.minimize(cost)
# initialize the variables
sess.run(tf.initialize_all_variables())
for t in range(5000):
example_x, example_y = create_examples(N, 10)
cost_t, _ = sess.run([cost, train_op], {x: example_x, y_golden: example_y})
if t % 500 == 0:
print(cost_t.mean())
Explanation: XOR cannot be solved with a single-layer neural network
End of explanation
X, _ = create_examples(N, 3)
prediction = sess.run([y], {x: X})
print(X)
print(prediction)
Explanation: Notice that the error is far from zero.
Actually, the network always predicts approximately $0.5$, regardless of the input data. That yields an error of about $0.25$, because we use mean squared error ($0.5^2 = 0.25$).
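A quick numeric check of this claim: a constant prediction of $0.5$ on 0/1 targets gives exactly that mean squared error.
targets = np.array([0.0, 1.0])
print(np.mean((targets - 0.5)**2))  # 0.25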
End of explanation
N_EXAMPLES = 1000
example_x, example_y = create_examples(N, N_EXAMPLES)
# one day I need to write a wrapper which will turn the expression
# below to:
# tf.abs(y - y_golden) < 0.5
is_correct = tf.less_equal(tf.abs(y - y_golden), tf.constant(0.5))
accuracy = tf.reduce_mean(tf.cast(is_correct, "float"))
acc_result = sess.run(accuracy, {x: example_x, y_golden: example_y})
print("Accuracy over %d examples: %.0f %%" % (N_EXAMPLES, 100.0 * acc_result))
Explanation: Accuracy is not that hard to predict...
End of explanation
tf.ops.reset_default_graph()
sess = tf.InteractiveSession()
N = 5
# we add a single hidden layer of size 12
# otherwise code is similar to above
HIDDEN_SIZE = 12
x = tf.placeholder(tf.float32, (None, 2 * N), name="x")
y_golden = tf.placeholder(tf.float32, (None, N), name="y")
layer1 = Layer(2 * N, HIDDEN_SIZE)
layer2 = Layer(HIDDEN_SIZE, N) # <------- HERE IT IS!
hidden_repr = tf.nn.tanh(layer1(x))
y = tf.nn.sigmoid(layer2(hidden_repr))
cost = tf.reduce_mean(tf.square(y - y_golden))
optimizer = tf.train.AdagradOptimizer(learning_rate=0.3)
train_op = optimizer.minimize(cost)
sess.run(tf.initialize_all_variables())
for t in range(5000):
example_x, example_y = create_examples(N, 10)
cost_t, _ = sess.run([cost, train_op], {x: example_x, y_golden: example_y})
if t % 500 == 0:
print(cost_t.mean())
Explanation: Xor Network with 2 layers
End of explanation
X, Y = create_examples(N, 3)
prediction = sess.run([y], {x: X})
print(X)
print(Y)
print(prediction)
N_EXAMPLES = 1000
example_x, example_y = create_examples(N, N_EXAMPLES)
is_correct = tf.less_equal(tf.abs(y - y_golden), tf.constant(0.5))
accuracy = tf.reduce_mean(tf.cast(is_correct, "float"))
acc_result = sess.run(accuracy, {x: example_x, y_golden: example_y})
print("Accuracy over %d examples: %.0f %%" % (N_EXAMPLES, 100.0 * acc_result))
Explanation: This time the network works a tad better
End of explanation
<END_TASK> |
225 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Sensitivity, accuracy, errors, and repeatability
Linear approximation of nonlinear transfer function. The response of a temperature sensor is given as
$$R(T)=R_{0}e^{\beta(\frac{1}{T}-\frac{1}{T_{0}})}$$
where $R_{0}$ is the resistance of the sensor at temperature $T_{0}$ and $\beta$ is a constant that depends on the material of the sensor. $T_{0}$=20 ºC. Temperatures $T$ and $T_{0}$ are in K. Given
Step1: The plot shows the nonlinear behaviour of the sensor and the two points used for estimating the curve.
b. Approximate the transfer function as a straight line connecting the end points and calculate the maximum error expected as a percentage of full scale.
We approximate the transfer function as a straight line as $R(T)_{\text{linear}} = aT+b$ and compute the error.
Step2: Note how the error starts from zero, reaches a maximum of 66.5 %, and comes back down to zero at the other end point, as expected.
c. Approximate the transfer function as a linear least squares approximation and calculate the maximum error expected as a percentage of full scale.
The linear approximation of the sensor's transfer function was obtained with polyfit. The maximum error with respect to temperature is computed from the difference between R(T) and the obtained approximation, over the interval defined by the intersection points of the functions. Dividing this difference by the slope of the approximation gives the error along the temperature axis in K | <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
from math import log, exp
%matplotlib inline
# Evaluate beta for this sensor
T_0=273.15+20;
N=(1/273.15-1/293.15)-(1/298.15-1/293.15);
beta= log(3000/1000)/N;
R_0=1000/exp(beta*((1/298.15)-(1/293.15)));
## Results
print('Beta for this sensor = %2.2f and the resistance of the sensor at temperature T0 is R_0 = %2.2f' % (beta, R_0))
# Plot the sensor transfer function for the intended span.
# T= np.arange(start = -45, stop = 121, step = 1)+273.15;
T = np.linspace(-45,120)+273.15
R_T= R_0*np.exp(beta*(1/T-1/T_0));
# Plot
# plt.plot(T,R_T,T[45],R_T[45],'ro',T[45+25],R_T[45+25],'ro')
plt.plot(T,R_T)
plt.ylabel('Resistance of the sensor[ohm]')
plt.xlabel('Temperature [K]')
plt.show()
Explanation: Sensitivity, accuracy, errors, and repeatability
Linear approximation of nonlinear transfer function. The response of a temperature sensor is given as
$$R(T)=R_{0}e^{\beta(\frac{1}{T}-\frac{1}{T_{0}})}$$
where $R_{0}$ is the resistance of the sensor at temperature $T_{0}$ and $\beta$ is a constant that depends on the material of the sensor. $T_{0}$=20 ºC. Temperatures $T$ and $T_{0}$ are in K. Given: R(T)=1000 $\Omega$ at 25 ºC and 3000 $\Omega$ at 0 ºC. The sensor is intended for use between -45 ºC and 120 ºC.
a. Evaluate $\beta$ for this sensor and plot the sensor transfer function for the intended span.
b. Approximate the transfer function as a straight line connecting the end points and calculate the maximum error expected as a percentage of full scale.
c. Approximate the transfer function as a linear least squares approximation and calculate the maximum error expected as a percentage of full scale.
Solution
a. Evaluate $\beta$ for this sensor and plot the sensor transfer function for the intended span.
From the two known points
$R_1= 3000\, \Omega$, $T_1= 273.15\, \text{K}$.
$R_2= 1000\, \Omega$, $T_2= 298.15\, \text{K}$.
From the general equation and the points,
$$R_{0}= \frac{R_{1}}{e^{\beta(\frac{1}{T_{1}}-\frac{1}{T_{0}})}} \enspace,$$
we obtain
$$\beta= \frac{\ln(\frac{R_{2}}{R_{1}})}{(\frac{1}{T_{2}}-\frac{1}{T_{1}})} \enspace.$$
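A quick sanity check, using the beta, R_0 and T_0 computed in the code above: both calibration points should be reproduced.
for T_check, R_expected in [(273.15, 3000.0), (298.15, 1000.0)]:
    R_check = R_0*exp(beta*(1/T_check - 1/T_0))
    print('R(%.2f K) = %.1f ohm (expected %.1f ohm)' % (T_check, R_check, R_expected))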
End of explanation
# Linear fit with just end points
a, b = np.polyfit(np.array([T[0],T[-1]]),np.array([R_T[0],R_T[-1]]),1)
print('The coefficients are a = %2.4f and b = %2.4f' % (a, b))
# Linear approximation
R_T_linear = a*T+b
# Plot
plt.plot(T,R_T_linear,'b:',label='Linear approximation')
plt.plot(T,R_T,label='Transfer function')
plt.ylabel('Resistance of the sensor[ohm]')
plt.xlabel('Temperature [K]')
plt.legend(loc='upper right')
plt.show()
# Output Full scale
FS = np.abs(np.max(R_T)-np.min(R_T))
error=np.abs(R_T-R_T_linear)/FS*100;
# error_X=np.abs(error_Y/a2);
plt.ylabel('error [%]')
plt.plot(T,error)
plt.xlabel('Temperature [K]')
plt.show()
print('The maximum error expected as a percentage of full scale is = %2.2f %%' % (np.max(error)))
Explanation: The plot shows the nonlinear behaviour of the sensor and the two points used for estimating the curve.
b. Approximate the transfer function as a straight line connecting the end points and calculate the maximum error expected as a percentage of full scale.
We approximate the transfer function as a straight line as $R(T)_{\text{linear}} = aT+b$ and compute the error.
End of explanation
# polyfit computes the coefficients a and b of degree=1
a,b = np.polyfit(T,R_T,1)
# Linear approximation
R_T_lsq = a*T+b
# Plot
plt.plot(T,R_T_lsq,'b:',label='Least Squares fit')
plt.plot(T,R_T,label='Transfer function')
plt.ylabel('Resistance of sensor [ohm]')
plt.xlabel('Temperature [K]')
plt.legend(loc='upper right')
plt.show()
error=np.abs(R_T-R_T_lsq)/FS*100;
# error_X=np.abs(error_Y/a2);
plt.ylabel('error [%]')
plt.plot(T,error)
plt.xlabel('Temperature [K]')
plt.show()
print('The maximum error expected as a percentage of full scale is = %2.1f %%' % (np.max(error)))
Explanation: Note how the error starts from zero, reaches a maximum of 66.5 %, and comes back down to zero at the other end point, as expected.
c. Approximate the transfer function as a linear least squares approximation and calculate the maximum error expected as a percentage of full scale.
The linear approximation of the sensor's transfer function was obtained with polyfit. The maximum error with respect to temperature is computed from the difference between R(T) and the obtained approximation, over the interval defined by the intersection points of the functions. Dividing this difference by the slope of the approximation gives the error along the temperature axis in K:
End of explanation
<END_TASK> |
226 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Entity Extraction from Contract Documents
Problem Statement
Step1: Figure 1
Step2: Loading the data
Step3: Converting the raw text into sentences
Step4: Task 1
Step5: Task 2
Step6: Entity Extraction
Parts of Speech Tagging
Step7: NNP is the Penn Treebank tag for a singular proper noun used by the NLTK library. The command below prints a description of each POS tag used by the nltk library
Step8: Name Entity Chunks
The command below assigns a named-entity label to each token in the given document. The commonly used types of named entities are
ORGANIZATION, PERSON, LOCATION, DATE, TIME, MONEY, PERCENT, FACILITY, GPE | <ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image(filename='entity_extraction_process.png')
# Note: this image is taken from NLTK Book and requires citation
Explanation: Entity Extraction from Contract Documents
Problem Statement:
Many companies need to make sense of large amounts of text data in documents. With the help of natural language processing, the problem can be broken down into the simple steps shown below.
Display: Jupyter notebook
Language used: Python
Libraries used: nltk, re
Tasks accomplished:
1) Extracting addresses, which appear in various formats, from the given document
2) Tokenizing the text into sentences and words, and then tagging them with named entities. The process performed here is shown in Figure 1
Note: This project is purely for demonstration purposes and does not disclose any confidential data or procedures of any organization
End of explanation
# Importing NLTK Dependencies
import nltk, re
from nltk import word_tokenize, pos_tag, ne_chunk
from nltk.tokenize.punkt import PunktSentenceTokenizer
Explanation: Figure 1: Entity Extraction Process
Analysis
End of explanation
# using the docx2txt library to avoid encoding errors while reading the text in the given Word file
import docx2txt
raw_data = docx2txt.process('Contract_Template.docx')
Explanation: Loading the data
End of explanation
paragraphs = [p for p in raw_data.split('\n') if p]
tokenizer = PunktSentenceTokenizer()
sentences = [tokenizer.tokenize(paragraph) for paragraph in paragraphs]
sentences[:5]
Explanation: Converting the raw text into sentences
End of explanation
# using python's regular expressions to extract the required address data
street_address = re.compile(u'\d{1,4} [\w\s]{1,20}(?:street|st|avenue|ave|road|rd|highway|hwy|square|sq|trail|trl|drive|dr|court|ct|park|parkway|pkwy|circle|cir|boulevard|blvd)\W?(?=\D|$)', re.IGNORECASE)
zip_code = re.compile(r'\b\d{5}(?:[-\s]\d{4})?\b')
proper_address = "[0-9]{1,5} .+, .+, [A-Z]{2} [0-9]{5}"
print ("Below are the street addresses found in the given document \n")
print (re.findall(street_address, raw_data))
print ("\nBelow are the zip codes found in the given document \n")
print (re.findall(zip_code, raw_data))
print ("\nBelow is the address found in the given document with PROPER format \n")
print (re.findall(proper_address, raw_data))
for i in range (len(sentences)):
if sentences[i][0] == 'Address:':
print ("\nBelow is the address found in the given document with IMPROPER format is: \n")
print (sentences[i+1][0])
Explanation: Task 1: Extract the addresses, which appear in different formats, from the given text
End of explanation
tokens = [nltk.word_tokenize(sent[0]) for sent in sentences]
tokens[:6] # View the first few tokens
tokens[0]
Explanation: Task 2: Entity Extraction
Tokenizing Words
End of explanation
# Viewing the parts of speech tag for each token in the given text
pos = [tagged_token for token_list in tokens for tagged_token in nltk.pos_tag(token_list)]
pos[:20] # View the first few POS tags
Explanation: Entity Extraction
Parts of Speech Tagging
End of explanation
nltk.help.upenn_tagset()
Explanation: NNP is the Penn Treebank tag for a singular proper noun used by the NLTK library. The command below prints a description of each POS tag used by the nltk library
End of explanation
chunks = ne_chunk(pos)
print (chunks.__repr__())
Explanation: Name Entity Chunks
The command below assigns a named-entity label to each token in the given document. The commonly used types of named entities are
ORGANIZATION, PERSON, LOCATION, DATE, TIME, MONEY, PERCENT, FACILITY, GPE
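A minimal sketch of how the labelled subtrees could be collected from the chunks tree built above into (entity, label) pairs:
named_entities = []
for subtree in chunks:
    if hasattr(subtree, 'label'):  # labelled subtrees are the named entities
        entity_text = " ".join(word for word, tag in subtree.leaves())
        named_entities.append((entity_text, subtree.label()))
named_entities[:10]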
End of explanation
<END_TASK> |
227 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
minimask mosaic example
Construct a mosaic of squares over the sky
Step1: Specify the location of the mask file to write
Step2: Construct a mask using a tile pattern with centers specified by the healpix grid.
Step3: Load the file as a mask object
Step4: Plot the mask on a mollweide projection using healpy.
Step5: Pixelize the mask onto the healpix grid | <ASSISTANT_TASK:>
Python Code:
%matplotlib notebook
import os
import numpy as np
import tempfile
import matplotlib.pyplot as pyplot
import logging
logging.basicConfig(level=logging.INFO)
import minimask.mask as mask
import minimask.healpix_projection as hp
import minimask.io.mosaic as mosaic
Explanation: minimask mosaic example
Construct a mosaic of squares over the sky
End of explanation
filename = "masks/mosaic.txt"
try:
    os.mkdir(os.path.dirname(filename))
except OSError:
    pass  # the directory already exists
Explanation: Specify the location of the mask file to write
End of explanation
tile = np.array([[[-0.5, -0.5],[0.5, -0.5],[0.5,0.5],[-0.5,0.5]]])*8
grid = hp.HealpixProjector(nside=4)
lon, lat = grid.pix2ang(np.arange(grid.npix))
centers = np.transpose([lon, lat])
mosaic.Mosaic(tile, centers).write(filename)
Explanation: Construct a mask using a tile pattern with centers specified by the healpix grid.
End of explanation
M = mask.Mask(filename)
print "The number of polygons in the mask is {}.".format(len(M))
Explanation: Load the file as a mask object
End of explanation
import healpy
healpy.mollview(title="")
for x,y in M.render(1):
healpy.projplot(x,y,lonlat=True)
Explanation: Plot the mask on a mollweide projection using healpy.
End of explanation
map = M.pixelize(nside=64, n=10, weight=False)
healpy.mollview(map, title="")
Explanation: Pixelize the mask onto the healpix grid
End of explanation
<END_TASK> |
228 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
EEG source localization given electrode locations on an MRI
This tutorial explains how to compute the forward operator from EEG data
when the electrodes are in MRI voxel coordinates.
Step1: Prerequisites
For this we will assume that you have
Step2: Visualizing the MRI
Let's take our MRI-with-eeg-locations and adjust the affine to put the data
in MNI space, and plot using
Step3: Getting our MRI voxel EEG locations to head (and MRI surface RAS) coords
Let's load our
Step4: We can then get our transformation from the MRI coordinate frame (where our
points are defined) to the head coordinate frame from the object.
Step5: Let's apply this digitization to our dataset, and in the process
automatically convert our locations to the head coordinate frame, as
shown by
Step6: Now we can do standard sensor-space operations like make joint plots of
evoked data.
Step7: Getting a source estimate
Now we have all of the components we need to compute a forward solution,
but first we should sanity check that everything is well aligned
Step8: Now we can actually compute the forward
Step9: Finally let's compute the inverse and apply it | <ASSISTANT_TASK:>
Python Code:
# Authors: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD Style.
import os.path as op
import nibabel
from nilearn.plotting import plot_glass_brain
import numpy as np
import mne
from mne.channels import compute_native_head_t, read_custom_montage
from mne.viz import plot_alignment
Explanation: EEG source localization given electrode locations on an MRI
This tutorial explains how to compute the forward operator from EEG data
when the electrodes are in MRI voxel coordinates.
:depth: 2
End of explanation
data_path = mne.datasets.sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
fname_raw = op.join(data_path, 'MEG', 'sample', 'sample_audvis_raw.fif')
bem_dir = op.join(subjects_dir, 'sample', 'bem')
fname_bem = op.join(bem_dir, 'sample-5120-5120-5120-bem-sol.fif')
fname_src = op.join(bem_dir, 'sample-oct-6-src.fif')
misc_path = mne.datasets.misc.data_path()
fname_T1_electrodes = op.join(misc_path, 'sample_eeg_mri', 'T1_electrodes.mgz')
fname_mon = op.join(misc_path, 'sample_eeg_mri', 'sample_mri_montage.elc')
Explanation: Prerequisites
For this we will assume that you have:
raw EEG data
your subject's MRI reconstrcted using FreeSurfer
an appropriate boundary element model (BEM)
an appropriate source space (src)
your EEG electrodes in Freesurfer surface RAS coordinates, stored
in one of the formats :func:mne.channels.read_custom_montage supports
Let's set the paths to these files for the sample dataset, including
a modified sample MRI showing the electrode locations plus a .elc
file corresponding to the points in MRI coords (these were synthesized
<https://gist.github.com/larsoner/0ac6fad57e31cb2d9caa77350a9ff366>__,
and thus are stored as part of the misc dataset).
End of explanation
img = nibabel.load(fname_T1_electrodes) # original subject MRI w/EEG
ras_mni_t = mne.transforms.read_ras_mni_t('sample', subjects_dir) # from FS
mni_affine = np.dot(ras_mni_t['trans'], img.affine) # vox->ras->MNI
img_mni = nibabel.Nifti1Image(img.dataobj, mni_affine) # now in MNI coords!
plot_glass_brain(img_mni, cmap='hot_black_bone', threshold=0., black_bg=True,
resampling_interpolation='nearest', colorbar=True)
Explanation: Visualizing the MRI
Let's take our MRI-with-eeg-locations and adjust the affine to put the data
in MNI space, and plot using :func:nilearn.plotting.plot_glass_brain,
which does a maximum intensity projection (easy to see the fake electrodes).
This plotting function requires data to be in MNI space.
Because img.affine gives the voxel-to-world (RAS) mapping, if we apply a
RAS-to-MNI transform to it, it becomes the voxel-to-MNI transformation we
need. Thus we create a "new" MRI image in MNI coordinates and plot it as:
End of explanation
dig_montage = read_custom_montage(fname_mon, head_size=None, coord_frame='mri')
dig_montage.plot()
Explanation: Getting our MRI voxel EEG locations to head (and MRI surface RAS) coords
Let's load our :class:~mne.channels.DigMontage using
:func:mne.channels.read_custom_montage, making note of the fact that
we stored our locations in Freesurfer surface RAS (MRI) coordinates.
.. collapse:: |question| What if my electrodes are in MRI voxels?
:class: info
If you have voxel coordinates in MRI voxels, you can transform these to
FreeSurfer surface RAS (called "mri" in MNE) coordinates using the
transformations that FreeSurfer computes during reconstruction.
``nibabel`` calls this transformation the ``vox2ras_tkr`` transform
and operates in millimeters, so we can load it, convert it to meters,
and then apply it::
>>> pos_vox = ... # loaded from a file somehow
>>> img = nibabel.load(fname_T1)
>>> vox2mri_t = img.header.get_vox2ras_tkr() # voxel -> mri trans
>>> pos_mri = mne.transforms.apply_trans(vox2mri_t, pos_vox)
>>> pos_mri /= 1000. # mm -> m
You can also verify that these are correct (or manually convert voxels
to MRI coords) by looking at the points in Freeview or tkmedit.
End of explanation
trans = compute_native_head_t(dig_montage)
print(trans) # should be mri->head, as the "native" space here is MRI
Explanation: We can then get our transformation from the MRI coordinate frame (where our
points are defined) to the head coordinate frame from the object.
End of explanation
raw = mne.io.read_raw_fif(fname_raw)
raw.pick_types(meg=False, eeg=True, stim=True, exclude=()).load_data()
raw.set_montage(dig_montage)
raw.plot_sensors(show_names=True)
Explanation: Let's apply this digitization to our dataset, and in the process
automatically convert our locations to the head coordinate frame, as
shown by :meth:~mne.io.Raw.plot_sensors.
End of explanation
raw.set_eeg_reference(projection=True)
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events)
cov = mne.compute_covariance(epochs, tmax=0.)
evoked = epochs['1'].average() # trigger 1 in auditory/left
evoked.plot_joint()
Explanation: Now we can do standard sensor-space operations like make joint plots of
evoked data.
End of explanation
fig = plot_alignment(
evoked.info, trans=trans, show_axes=True, surfaces='head-dense',
subject='sample', subjects_dir=subjects_dir)
Explanation: Getting a source estimate
Now we have all of the components we need to compute a forward solution,
but first we should sanity check that everything is well aligned:
End of explanation
fwd = mne.make_forward_solution(
evoked.info, trans=trans, src=fname_src, bem=fname_bem, verbose=True)
Explanation: Now we can actually compute the forward:
End of explanation
inv = mne.minimum_norm.make_inverse_operator(
evoked.info, fwd, cov, verbose=True)
stc = mne.minimum_norm.apply_inverse(evoked, inv)
brain = stc.plot(subjects_dir=subjects_dir, initial_time=0.1)
Explanation: Finally let's compute the inverse and apply it:
End of explanation
<END_TASK> |
229 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Demo
Strategic Redesign for the "Spring Petclinic" Project
Usage data from production operation
Data source
Step1: Calculation of key metrics for size and usage level
Step2: Preparing the link to technical debt
A unique key ("fqn") is created for the software units so that usage data can subsequently be mapped to the technical debt
In addition, data that is no longer needed is dropped
Step3: Change frequency of the software
The changes per software unit are used to assess the investments.
Loading the change frequency from the version control system
The changes made to the application are loaded.
Step4: Preparing the mapping to usage data
A unique key is created for the software units so that technical debt can be mapped to usage data
Multiple technical-debt entries are summed up per software unit
Step5: Compiling the data needed for deeper analyses
Only the name of the measured software unit is needed, along with the calculated duration of the technical debt
The duration of the technical debt is converted into a time duration accordingly
Step6: Creating the management view
Merging the data
Step7: Identification of technical components
Step8: Summary by technical components
Step9: Evaluation matrix from a technical point of view
Step10: Identification of business components
Step11: Summary by business components
Step12: Evaluation matrix from a business point of view | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
coverage = pd.read_csv("../dataset/jacoco_production_coverage_spring_petclinic.csv")
coverage.head()
Explanation: Demo
Strategic Redesign for the "Spring Petclinic" Project
Usage data from production operation
Data source: The running application was measured over a period of 24h on a weekday. For each software unit ("CLASS"), the executed lines of code were recorded.
End of explanation
coverage['lines'] = coverage.LINE_MISSED + coverage.LINE_COVERED
coverage['covered'] = coverage.LINE_COVERED / coverage.lines
coverage.head()
Explanation: Calculation of key metrics for size and usage level
End of explanation
coverage['fqn'] = coverage.PACKAGE + "." + coverage.CLASS
coverage_per_class = coverage.set_index('fqn')[['lines', 'covered']]
coverage_per_class.head()
Explanation: Preparing the link to technical debt
A unique key ("fqn") is created for the software units so that usage data can subsequently be mapped to the technical debt
In addition, data that is no longer needed is dropped
End of explanation
git_log = pd.read_csv(
"../dataset/git_log_numstat_spring_petclinic.log",
sep="\t",
names=['additions', 'deletions', 'path'])
git_log.head()
Explanation: Change frequency of the software
The changes per software unit are used to assess the investments.
Loading the change frequency from the version control system
The changes made to the application are loaded.
End of explanation
git_log['fqn'] = git_log.path.str.extract(
"/java/(.*)\.java",
expand=True)[0]
git_log['fqn'] = git_log.fqn.str.replace("/", ".")
git_log['fqn'][0]
Explanation: Preparing the mapping to usage data
A unique key is created for the software units so that technical debt can be mapped to usage data
Multiple technical-debt entries are summed up per software unit
End of explanation
changes_per_file = git_log.groupby('fqn').path.count()
changes_per_file.name = "changes"
changes_per_file.head()
Explanation: Compiling the data needed for deeper analyses
Only the name of the measured software unit is needed, along with the calculated duration of the technical debt
The duration of the technical debt is converted into a time duration accordingly
End of explanation
analysis = coverage_per_class.join(changes_per_file)
analysis.head()
Explanation: Creating the management view
Merging the data
End of explanation
analysis['tech'] = analysis.index.str.split(".").str[-2]
analysis.head()
Explanation: Identification of technical components
End of explanation
tech_insights = analysis.groupby('tech').agg({
"lines" : "sum",
"covered": "mean",
"changes" : "sum"
})
tech_insights
Explanation: Summary by technical components
End of explanation
%matplotlib inline
from ausi import portfolio
portfolio.plot_diagram(tech_insights, "changes", "covered", "lines");
Explanation: Evaluation matrix from a technical point of view
End of explanation
analysis['domain'] = "Other"
domains = ["Owner", "Pet", "Visit", "Vet", "Specialty", "Clinic"]
for domain in domains:
analysis.loc[analysis.index.str.contains(domain), 'domain'] = domain
analysis.head()
Explanation: Identification of business components
End of explanation
domain_insights = analysis.groupby('domain').agg({
"lines" : "sum",
"covered": "mean",
"changes" : "sum"
})
domain_insights = domain_insights.rename(columns=
{"lines": "Größe", "covered" : "Nutzungsgrad", "changes" : "Investition"})
domain_insights
Explanation: Summary by business components
End of explanation
portfolio.plot_diagram(domain_insights, "Investition", "Nutzungsgrad", "Größe");
Explanation: Evaluation matrix from a business point of view
End of explanation
<END_TASK> |
230 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Principal component analysis
In this assignment you will be introduced to an approach that has been rediscovered in many different fields, has many different interpretations, and also several interesting generalizations
Step1: Theory
Any dataset can be represented as a matrix $X$.
Principal component analysis successively finds the following linear combinations of features (components) of $X$
Step2: By diagonalizing the true covariance matrix $C$, we can find a transformation of the original dataset whose components describe the variance best, subject to being orthogonal to each other
Step3: Now let us compare these directions with the directions chosen by principal component analysis
Step4: We can see that even with a small amount of data they differ only slightly. Let us increase the sample size
Step5: In this case the principal components approximate much more accurately the true directions of the data along which the largest variance is observed.
A statistical view of the model
How can the assumptions of the method stated above be formalized? With a probabilistic model!
The task behind any dimensionality reduction method
Step6: A variational view of the model
We know that each principal component corresponds to the data variance it describes (the variance of the data projected onto that component). It is numerically equal to the corresponding diagonal element of the matrix $\Lambda$ obtained from the spectral decomposition of the data covariance matrix (see the theory above).
Based on this, we can sort the data variances along these components in decreasing order and reduce the dimensionality of the data by discarding the $q$ trailing principal components that have the smallest variance.
This can be done in two different ways. For example, if you later train a classification or regression model on the reduced data, you can run an iterative process
Step7: Interpreting the principal components
The principal components we obtain are linear combinations of the original features, so the question of their interpretation naturally arises.
There are several approaches to this; we will consider two
Step8: Interpreting the principal components using the data
Let us now consider a quantity that can be interpreted as the squared cosine of the angle between a sample object and a principal component
Step9: Analysis of the main drawbacks of principal component analysis
The problems considered above are, of course, toy problems, because the data for them were generated according to the assumptions of principal component analysis. In practice these assumptions are, naturally, far from always satisfied. Let us look at typical PCA failure modes that should be kept in mind before applying it.
The directions of maximal variance in the data are not orthogonal
Consider a sample generated from two elongated normal distributions
Step10: What is the problem here, why does PCA work poorly? The answer is simple | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib
from matplotlib import pyplot as plt
import matplotlib.patches as mpatches
matplotlib.style.use('ggplot')
%matplotlib inline
Explanation: Principal component analysis
In this assignment you will be introduced to an approach that has been rediscovered in many different fields, has many different interpretations, and also several interesting generalizations: principal component analysis (PCA).
Programming assignment
The assignment is split into two parts:
- working with synthetic (model) data,
- working with real data.
At the end of each part you are required to obtain an answer and upload it to the corresponding form as a set of text files.
End of explanation
from sklearn.decomposition import PCA
mu = np.zeros(2)
C = np.array([[3,1],[1,2]])
data = np.random.multivariate_normal(mu, C, size=50)
plt.scatter(data[:,0], data[:,1])
plt.show()
Explanation: Theory
Any dataset can be represented as a matrix $X$.
Principal component analysis successively finds the following linear combinations of features (components) of $X$:
- each component is orthogonal to all the others and normalized: $<w_i, w_j> = 0, \quad ||w_i||=1$,
- each component describes the largest possible variance of the data (subject to the previous constraint).
The assumptions under which this approach works well are:
- linearity of the components: we assume the data can be analyzed with linear methods,
- large variances matter: the most important directions in the data are assumed to be those along which it has the largest variance,
- all components are orthogonal: this assumption allows principal component analysis to be carried out with linear-algebra techniques (for example, the singular value decomposition of the matrix $X$ or the spectral decomposition of the matrix $X^TX$).
What does this look like mathematically?
Let us denote the sample covariance matrix of the data as follows: $\hat{C} \propto Q = X^TX$. ($Q$ differs from $\hat{C}$ by the normalization by the number of objects.)
The singular value decomposition of the matrix $Q$ looks as follows:
$$Q = X^TX = W \Lambda W^T$$
It can be shown rigorously that the columns of the matrix $W$ are the principal components of the matrix $X$, i.e. combinations of features satisfying the two conditions stated at the beginning. Moreover, the variance of the data along the direction given by each component equals the corresponding value of the diagonal matrix $\Lambda$.
So how do we reduce dimensionality based on this transformation? We can rank the components using the values of the data variances along them.
Let us do that: $\lambda_{(1)} > \lambda_{(2)} > \dots > \lambda_{(D)}$.
Then, if we pick the components corresponding to the first $d$ variances in this list, we obtain a set of $d$ new features that describe the variance of the original dataset best among all other possible linear combinations of the original features of the matrix $X$.
- If $d=D$, we lose no information at all.
- If $d<D$, we lose information which, if the assumptions above hold, is proportional to the sum of the variances of the discarded components.
As a result, principal component analysis lets us rank the obtained components by "importance" and start a selection process for them.
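A short numeric check of this statement on the data sample defined above (a sketch, not prescribed by the assignment): the eigendecomposition of the sample covariance should match what sklearn's PCA reports.
X_centered = data - data.mean(axis=0)
eig_values, eig_vectors = np.linalg.eigh(np.cov(X_centered, rowvar=False))
order = np.argsort(eig_values)[::-1]                     # largest variance first
pca_check = PCA(n_components=2).fit(data)
print(eig_values[order], pca_check.explained_variance_)  # should (nearly) coincide
print(eig_vectors[:, order].T)                           # same directions as ...
print(pca_check.components_)                             # ... components_, up to sign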
Example
Consider a dataset sampled from a multivariate normal distribution with covariance matrix $C = \begin{pmatrix} 3 & 1 \ 1 & 2 \end{pmatrix}$.
End of explanation
v, W_true = np.linalg.eig(C)
plt.scatter(data[:,0], data[:,1])
# plot the true components along which the data variance is maximal
plt.plot(data[:,0], (W_true[0,0]/W_true[0,1])*data[:,0], color="g")
plt.plot(data[:,0], (W_true[1,0]/W_true[1,1])*data[:,0], color="g")
g_patch = mpatches.Patch(color='g', label='True components')
plt.legend(handles=[g_patch])
plt.axis('equal')
limits = [np.minimum(np.amin(data[:,0]), np.amin(data[:,1])),
np.maximum(np.amax(data[:,0]), np.amax(data[:,1]))]
plt.xlim(limits[0],limits[1])
plt.ylim(limits[0],limits[1])
plt.draw()
Explanation: By diagonalizing the true covariance matrix $C$, we can find a transformation of the original dataset whose components describe the variance best, subject to being orthogonal to each other:
End of explanation
def plot_principal_components(data, model, scatter=True, legend=True):
W_pca = model.components_
if scatter:
plt.scatter(data[:,0], data[:,1])
plt.plot(data[:,0], -(W_pca[0,0]/W_pca[0,1])*data[:,0], color="c")
plt.plot(data[:,0], -(W_pca[1,0]/W_pca[1,1])*data[:,0], color="c")
if legend:
c_patch = mpatches.Patch(color='c', label='Principal components')
plt.legend(handles=[c_patch], loc='lower right')
    # make the plots look nice:
plt.axis('equal')
limits = [np.minimum(np.amin(data[:,0]), np.amin(data[:,1]))-0.5,
np.maximum(np.amax(data[:,0]), np.amax(data[:,1]))+0.5]
plt.xlim(limits[0],limits[1])
plt.ylim(limits[0],limits[1])
plt.draw()
model = PCA(n_components=2)
model.fit(data)
plt.scatter(data[:,0], data[:,1])
# plot the true components along which the data variance is maximal
plt.plot(data[:,0], (W_true[0,0]/W_true[0,1])*data[:,0], color="g")
plt.plot(data[:,0], (W_true[1,0]/W_true[1,1])*data[:,0], color="g")
# plot the components obtained with PCA:
plot_principal_components(data, model, scatter=False, legend=False)
c_patch = mpatches.Patch(color='c', label='Principal components')
plt.legend(handles=[g_patch, c_patch])
plt.draw()
Explanation: Now let us compare these directions with the directions chosen by principal component analysis:
End of explanation
data_large = np.random.multivariate_normal(mu, C, size=5000)
model = PCA(n_components=2)
model.fit(data_large)
plt.scatter(data_large[:,0], data_large[:,1], alpha=0.1)
# plot the true components along which the data variance is maximal
plt.plot(data_large[:,0], (W_true[0,0]/W_true[0,1])*data_large[:,0], color="g")
plt.plot(data_large[:,0], (W_true[1,0]/W_true[1,1])*data_large[:,0], color="g")
# plot the components obtained with PCA:
plot_principal_components(data_large, model, scatter=False, legend=False)
c_patch = mpatches.Patch(color='c', label='Principal components')
plt.legend(handles=[g_patch, c_patch])
plt.draw()
Explanation: We can see that even with a small amount of data they differ only slightly. Let us increase the sample size:
End of explanation
from sklearn.decomposition import PCA
from sklearn.cross_validation import cross_val_score as cv_score
def plot_scores(d_scores):
n_components = np.arange(1,d_scores.size+1)
plt.plot(n_components, d_scores, 'b', label='PCA scores')
plt.xlim(n_components[0], n_components[-1])
plt.xlabel('n components')
plt.ylabel('cv scores')
plt.legend(loc='lower right')
plt.show()
def write_answer_1(optimal_d):
with open("pca_answer1.txt", "w") as fout:
fout.write(str(optimal_d))
data = pd.read_csv('data_task1.csv')
from sklearn.model_selection import cross_val_score
from tqdm import tqdm
best = -np.inf
best_d = None
for d in tqdm(range(len(data.columns), 0, -1)):
model = PCA(n_components=d, svd_solver='full')
scores = cross_val_score(model, X=data)
if np.mean(scores) > best:
best = np.mean(scores)
best_d = d
write_answer_1(best_d)
Explanation: In this case the principal components approximate much more accurately the true directions of the data along which the largest variance is observed.
A statistical view of the model
How can the assumptions of the method stated above be formalized? With a probabilistic model!
The task behind any dimensionality reduction method: recover from a set of noisy features $X$ the true values $Y$ that actually determine the dataset (i.e. reduce a dataset with many features to data having its so-called "effective dimensionality").
In the case of principal component analysis we want to find the directions along which the variance is maximal, given the assumptions about the structure of the data and of the components described above.
The material described below in this section is not required for completing the next task, since it assumes some knowledge of statistics.
For those who plan to skip it: at the end of the section we obtain a quality metric that should tell how well the data is described by the fitted model for a given number of components. Feature selection then amounts to choosing the number of components for which this metric (the log-likelihood) is maximal.
Given the assumptions, the principal component analysis problem looks as follows:
$$ x = Wy + \mu + \epsilon$$
where:
- $x$ -- the observed data
- $W$ -- the matrix of principal components (each column is one component)
- $y$ -- their projection onto the principal components
- $\mu$ -- the mean of the observed data
- $\epsilon \sim \mathcal{N}(0, \sigma^2I)$ -- normal noise
From the noise distribution, we write down the distribution of $x$:
$$p(x \mid y) = \mathcal{N}(Wy + \mu, \sigma^2I) $$
We introduce a prior distribution on $y$:
$$p(y) = \mathcal{N}(0, 1)$$
Using Bayes' rule we derive from this the marginal distribution $p(x)$:
$$p(x) = \mathcal{N}(\mu, \sigma^2I + WW^T)$$
Then the likelihood of the dataset under the model looks as follows:
$$\mathcal{L} = \sum_{i=1}^N \log p(x_i) = -N/2 \Big( d\log(2\pi) + \log |C| + \text{tr}(C^{-1}S) \Big)$$
where:
- $C = \sigma^2I + WW^T$ -- the covariance matrix of the marginal model
- $S = \frac{1}{N} \sum_{i=1}^N (x_i - \mu)(x_i - \mu)^T$ -- the sample covariance
The value of $\mathcal{L}$ is the log-probability of obtaining the dataset $X$ under the assumption that it satisfies the assumptions of the PCA model. The larger it is, the better the model describes the observed data.
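A minimal sketch of evaluating this log-likelihood with numpy, assuming estimates W_est and sigma2_est are given:
def ppca_log_likelihood(X, W_est, sigma2_est):
    N, d = X.shape
    S = np.cov(X, rowvar=False, bias=True)             # sample covariance S
    C = sigma2_est*np.eye(d) + W_est.dot(W_est.T)      # marginal covariance C
    sign, logdet = np.linalg.slogdet(C)
    return -N/2.0*(d*np.log(2*np.pi) + logdet + np.trace(np.linalg.inv(C).dot(S)))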
Task 1. Automatic dimensionality reduction using the log-likelihood $\mathcal{L}$
Consider a dataset of dimensionality $D$ whose real dimensionality is much smaller than the observed one (call it $d$). You are required to:
For every value of $\hat{d}$ in the interval [1,D], build a PCA model with $\hat{d}$ principal components.
Estimate the mean log-likelihood of the data for each model on the general population, using cross-validation with 3 folds (the final estimate of the log-likelihood is averaged over all folds).
Find the model for which it is maximal and write the number of components of that model, i.e. the value $\hat{d}_{opt}$, to the answer file.
To estimate the log-likelihood of a model with a given number of principal components via cross-validation, use the following functions:
model = PCA(n_components=n)
scores = cv_score(model, data)
Note that scores is a vector whose length equals the number of folds. To obtain the likelihood estimate for the model, its values must be averaged.
To visualize the estimates you can use the following function:
plot_scores(d_scores)
which takes as input the vector of log-likelihood estimates obtained for each $\hat{d}$.
For the curious: the data for tasks 1 and 2 were generated according to the assumed PCA model. That is: data $Y$ with effective dimensionality $d$, obtained from independent uniform distributions, were linearly transformed by a random matrix $W$ into a space of dimensionality $D$, after which independent normal noise with variance $\sigma$ was added to all features.
End of explanation
from sklearn.decomposition import PCA
from sklearn.cross_validation import cross_val_score as cv_score
def plot_variances(d_variances):
n_components = np.arange(1,d_variances.size+1)
plt.plot(n_components, d_variances, 'b', label='Component variances')
plt.xlim(n_components[0], n_components[-1])
plt.xlabel('n components')
plt.ylabel('variance')
plt.legend(loc='upper right')
plt.show()
def write_answer_2(optimal_d):
with open("pca_answer2.txt", "w") as fout:
fout.write(str(optimal_d))
data = pd.read_csv('data_task2.csv')
model = PCA(n_components=len(data.columns), svd_solver='full')
model.fit(data)
t_data = model.transform(data)
variances = np.std(t_data, axis = 0)
diff = variances[:-1] - variances[1:]
plot_variances(variances)
best = np.argmax(diff) + 1
write_answer_2(best)
Explanation: A variational view of the model
We know that each principal component corresponds to the data variance it describes (the variance of the data projected onto that component). It is numerically equal to the corresponding diagonal element of the matrix $\Lambda$ obtained from the spectral decomposition of the data covariance matrix (see the theory above).
Based on this, we can sort the data variances along these components in decreasing order and reduce the dimensionality of the data by discarding the $q$ trailing principal components that have the smallest variance.
This can be done in two different ways. For example, if you later train a classification or regression model on the reduced-dimensionality data, you can run an iterative process: remove the components with the smallest variance one by one until the quality of the final model becomes significantly worse.
A more general way of selecting features is to look at the differences of the variances in the sorted sequence $\lambda_{(1)} > \lambda_{(2)} > \dots > \lambda_{(D)}$: $\lambda_{(1)}-\lambda_{(2)}, \dots, \lambda_{(D-1)} - \lambda_{(D)}$, and to drop the components at which the difference is largest. This is exactly the method you are asked to use on the test dataset.
Task 2. Manual dimensionality reduction by analyzing the data variance along the principal components
Consider another dataset of dimensionality $D$ whose real dimensionality is much smaller than the observed one (call it $d$ as well). You are required to:
Build a PCA model with $D$ principal components on this data.
Project the data onto the principal components.
Estimate their variance along the principal components.
Sort the variances in decreasing order and compute their pairwise differences: $\lambda_{(i-1)} - \lambda_{(i)}$.
Find the difference with the largest value and use it to obtain an estimate of the effective dimensionality of the data $\hat{d}$.
Plot the variances and make sure that the obtained estimate of $\hat{d}_{opt}$ really makes sense; after that, write the obtained value $\hat{d}_{opt}$ to the answer file.
To build the PCA model use the function:
model.fit(data)
To transform the data use the method:
model.transform(data)
You are required to implement the variance estimation on the transformed data yourself. For plotting you can use the function
plot_variances(d_variances)
which should be given the vector of variances along the components, sorted in decreasing order.
End of explanation
from sklearn import datasets
def plot_iris(transformed_data, target, target_names):
plt.figure()
for c, i, target_name in zip("rgb", [0, 1, 2], target_names):
plt.scatter(transformed_data[target == i, 0],
transformed_data[target == i, 1], c=c, label=target_name)
plt.legend()
plt.show()
def write_answer_3(list_pc1, list_pc2):
with open("pca_answer3.txt", "w") as fout:
fout.write(" ".join([str(num) for num in list_pc1]))
fout.write(" ")
fout.write(" ".join([str(num) for num in list_pc2]))
# load the iris dataset
iris = datasets.load_iris()
data = iris.data
target = iris.target
target_names = iris.target_names
model = PCA(n_components=2, svd_solver='full')
model.fit(data)
t_data = model.transform(data)
plot_iris(t_data, target, target_names)
best_l = []
best_r = []
left = t_data[:, 0].reshape(len(t_data),)
right = t_data[:, 1].reshape(len(t_data),)
for idx in range(data.shape[1]):
cov_l = np.corrcoef(np.vstack((left, data[:, idx].reshape(len(t_data),))))
cov_r = np.corrcoef(np.vstack((right, data[:, idx].reshape(len(t_data),))))
if cov_l[0, 1] > cov_r[0, 1]:
best_l.append(idx + 1)
else:
best_r.append(idx + 1)
best_l, best_r
write_answer_3(best_l, best_r)
Explanation: Interpreting the principal components
The principal components we obtain are linear combinations of the original features, so the question of their interpretation naturally arises.
There are several approaches to this; we will consider two:
- compute the relationships of the principal components with the original features
- compute the contribution of each particular observation to the principal components
The first approach is suitable when the objects in the dataset carry no semantic information for us beyond what is already captured in the feature set.
The second approach is suitable when the data has a more complex structure. For example, faces carry more semantic meaning for a human than the vector of pixel values that PCA analyzes.
Let us look at approach 1 in more detail: it consists in computing the correlation coefficients between the original features and the set of principal components.
Since principal component analysis is a linear method, Pearson correlation is suggested for the analysis; its sample version has the following formula:
$$r_{jk} = \frac{\sum_{i=1}^N (x_{ij} - \bar{x}_j) (y_{ik} - \bar{y}_k)}{\sqrt{\sum_{i=1}^N (x_{ij} - \bar{x}_j)^2 \sum_{i=1}^N (y_{ik} - \bar{y}_k)^2}} $$
where:
- $\bar{x}_j$ -- the mean value of the j-th feature,
- $\bar{y}_k$ -- the mean value of the projection onto the k-th principal component.
Pearson correlation is a measure of linear dependence. It equals 0 when the variables are independent and $\pm 1$ when they are linearly dependent. Based on the degree of correlation of a new component with the original features, one can build its semantic interpretation, since we know the meaning of the original features.
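A minimal sketch of this correlation, computed directly for the iris data and its projections t_data obtained above (r[j, k] is the correlation of feature j with component k):
X_c = data - data.mean(axis=0)
Y_c = t_data - t_data.mean(axis=0)
r = X_c.T.dot(Y_c) / np.outer(np.sqrt((X_c**2).sum(axis=0)),
                              np.sqrt((Y_c**2).sum(axis=0)))
print(r)  # shape (4, 2): features x first two principal components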
Task 3. Analysis of the principal components via correlations with the original features.
Fit principal component analysis on the iris dataset and obtain the transformed data.
Compute the correlations of the original features with their projections onto the first two principal components.
For each feature, find the component (of the two built) with which it correlates most.
Based on step 3, group the features by component. Make two lists: a list of the indices of the features that correlate more strongly with the first component, and the same kind of list for the second. Start numbering from one. Pass both lists to the function write_answer_3.
The dataset consists of 4 features computed for 150 irises. Each of them belongs to one of three species. A visualization of the projection of this dataset onto the two components that describe the largest variance of the data can be obtained with the function
plot_iris(transformed_data, target, target_names)
which must be given the data transformed with PCA as well as the class information. The color of the points corresponds to one of the three iris species.
To get the names of the original features, use the following list:
iris.feature_names
When computing the correlations, do not forget to center the features and the projections onto the principal components (subtract their means).
End of explanation
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import RandomizedPCA
def write_answer_4(list_pc):
with open("pca_answer4.txt", "w") as fout:
fout.write(" ".join([str(num) for num in list_pc]))
data = fetch_olivetti_faces(shuffle=True, random_state=0).data
image_shape = (64, 64)
model = RandomizedPCA(n_components=10)
model.fit(data)
t_data = model.transform(data)
t_data
t_data = t_data - np.mean(t_data, axis=0)
t_data = np.power(t_data, 2)
t_data = t_data / np.sum(t_data, axis = 1).reshape(len(t_data), 1)
indexes = np.argmax(t_data, axis = 0)
write_answer_4(indexes)
indexes
Explanation: Interpreting the principal components using the data
Let us now consider a quantity that can be interpreted as the squared cosine of the angle between a sample object and a principal component:
$$ cos^2_{ik} = \frac{f_{ik}^2}{\sum_{\ell=1}^d f_{i\ell}^2} $$
where
- i -- the index of the object
- k -- the index of the principal component
- $f_{ik}$ -- the absolute value of the centered projection of the object onto the component
Obviously,
$$ \sum_{k=1}^d cos^2_{ik} = 1 $$
This means that for each object this quantity gives weights proportional to the contribution this object makes to the variance of each component. The larger the contribution, the more important the object is for describing that particular principal component.
Task 4. Analysis of the principal components via the contributions of individual objects to their variance
Load the Olivetti Faces dataset and fit a RandomizedPCA model on it (it is used when the number of features is large and is faster than ordinary PCA). Obtain the projections of the features onto the first 10 principal components.
For each object, compute its relative contribution to the variance of each of the 10 components, using the formula from the previous section (d = 10).
For each component, find and visualize the face that makes the largest relative contribution to it. For visualization use the function
plt.imshow(image.reshape(image_shape))
Pass to the function write_answer_4 the list of indices of the faces with the largest relative contribution to the variance of each component; the list starts from 0.
End of explanation
C1 = np.array([[10,0],[0,0.5]])
phi = np.pi/3
C2 = np.dot(C1, np.array([[np.cos(phi), np.sin(phi)],
[-np.sin(phi),np.cos(phi)]]))
data = np.vstack([np.random.multivariate_normal(mu, C1, size=50),
np.random.multivariate_normal(mu, C2, size=50)])
plt.scatter(data[:,0], data[:,1])
# plot the true components we are interested in
plt.plot(data[:,0], np.zeros(data[:,0].size), color="g")
plt.plot(data[:,0], 3**0.5*data[:,0], color="g")
# fit the PCA model and plot the principal components
model = PCA(n_components=2)
model.fit(data)
plot_principal_components(data, model, scatter=False, legend=False)
c_patch = mpatches.Patch(color='c', label='Principal components')
plt.legend(handles=[g_patch, c_patch])
plt.draw()
Explanation: Analysis of the main drawbacks of principal component analysis
The problems considered above are, of course, toy problems, because the data for them were generated according to the assumptions of principal component analysis. In practice these assumptions are, naturally, far from always satisfied. Let us look at the typical PCA failure modes that should be kept in mind before applying it.
The directions of maximal variance in the data are not orthogonal
Consider a sample generated from two elongated normal distributions:
End of explanation
C = np.array([[0.5,0],[0,10]])
mu1 = np.array([-2,0])
mu2 = np.array([2,0])
data = np.vstack([np.random.multivariate_normal(mu1, C, size=50),
np.random.multivariate_normal(mu2, C, size=50)])
plt.scatter(data[:,0], data[:,1])
# fit the PCA model and plot the principal components
model = PCA(n_components=2)
model.fit(data)
plot_principal_components(data, model)
plt.draw()
Explanation: What is the problem here, why does PCA work poorly? The answer is simple: the components of the data we are interested in are correlated with each other (or non-orthogonal, depending on the terminology you prefer). Finding such transformations requires more sophisticated methods, which are beyond the scope of principal component analysis.
For the curious: what can be applied directly to the output of principal component analysis to obtain such non-orthogonal transformations is called rotation methods. You can read about them in connection with another dimensionality reduction method called Factor Analysis (FA), but nothing prevents applying them to principal components as well.
An interesting direction in the data does not coincide with the direction of maximal variance
Consider an example where the variances do not reflect the directions in the data that we are interested in:
End of explanation
<END_TASK> |
231 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<h1> Repeatable splitting </h1>
In this notebook, we will explore the impact of different ways of creating machine learning datasets.
<p>
Repeatability is important in machine learning. If you do the same thing now and 5 minutes from now and get different answers, then it makes experimentation difficult. In other words, you will find it difficult to gauge whether a change you made has resulted in an improvement or not.
Step2: <h3> Create a simple machine learning model </h3>
The dataset that we will use is <a href="https
Step4: <h3> What is wrong with calculating RMSE on the training and test data as follows? </h3>
Step6: Hint
Step8: <h2> Using HASH of date to split the data </h2>
Let's split by date and train.
Step10: We can now use the alpha to compute RMSE. Because the alpha value is repeatable, we don't need to worry that the alpha in the compute_rmse will be different from the alpha computed in the compute_alpha. | <ASSISTANT_TASK:>
Python Code:
from google.cloud import bigquery
Explanation: <h1> Repeatable splitting </h1>
In this notebook, we will explore the impact of different ways of creating machine learning datasets.
<p>
Repeatability is important in machine learning. If you do the same thing now and 5 minutes from now and get different answers, then it makes experimentation difficult. In other words, you will find it difficult to gauge whether a change you made has resulted in an improvement or not.
End of explanation
compute_alpha = """
#standardSQL
SELECT
SAFE_DIVIDE(
SUM(arrival_delay * departure_delay),
SUM(departure_delay * departure_delay)) AS alpha
FROM
(
SELECT
RAND() AS splitfield,
arrival_delay,
departure_delay
FROM
`bigquery-samples.airline_ontime_data.flights`
WHERE
departure_airport = 'DEN'
AND arrival_airport = 'LAX'
)
WHERE
splitfield < 0.8
"""
results = bigquery.Client().query(compute_alpha).to_dataframe()
alpha = results['alpha'][0]
print(alpha)
Explanation: <h3> Create a simple machine learning model </h3>
The dataset that we will use is <a href="https://bigquery.cloud.google.com/table/bigquery-samples:airline_ontime_data.flights">a BigQuery public dataset</a> of airline arrival data. Click on the link, and look at the column names. Switch to the Details tab to verify that the number of records is 70 million, and then switch to the Preview tab to look at a few rows.
<p>
We want to predict the arrival delay of an airline based on the departure delay. The model that we will use is a zero-bias linear model:
$$ delay_{arrival} = \alpha * delay_{departure} $$
<p>
To train the model is to estimate a good value for $\alpha$.
<p>
One approach to estimate alpha is to use this formula:
$$ \alpha = \frac{\sum delay_{departure} delay_{arrival} }{ \sum delay_{departure}^2 } $$
Because we'd like to capture the idea that this relationship is different for flights from New York to Los Angeles vs. flights from Austin to Indianapolis (shorter flight, less busy airports), we'd compute a different $alpha$ for each airport-pair. For simplicity, we'll do this model only for flights between Denver and Los Angeles.
<h2> Naive random split (not repeatable) </h2>
End of explanation
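# Aside: the same closed-form alpha estimate in plain numpy on a hypothetical toy sample,
# just for intuition -- the tutorial itself computes it in BigQuery.
import numpy as np
dep_toy = np.array([5.0, 10.0, 20.0])   # toy departure delays
arr_toy = np.array([7.0, 13.0, 24.0])   # toy arrival delays
alpha_toy = np.sum(arr_toy * dep_toy) / np.sum(dep_toy * dep_toy)
print(alpha_toy)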
compute_rmse = """
#standardSQL
SELECT
dataset,
SQRT(
AVG(
(arrival_delay - ALPHA * departure_delay) *
(arrival_delay - ALPHA * departure_delay)
)
) AS rmse,
COUNT(arrival_delay) AS num_flights
FROM (
SELECT
IF (RAND() < 0.8, 'train', 'eval') AS dataset,
arrival_delay,
departure_delay
FROM
`bigquery-samples.airline_ontime_data.flights`
WHERE
departure_airport = 'DEN'
AND arrival_airport = 'LAX' )
GROUP BY
dataset
"""
bigquery.Client().query(compute_rmse.replace('ALPHA', str(alpha))).to_dataframe()
Explanation: <h3> What is wrong with calculating RMSE on the training and test data as follows? </h3>
End of explanation
train_and_eval_rand = """
#standardSQL
WITH
alldata AS (
SELECT
IF (RAND() < 0.8, 'train', 'eval') AS dataset,
arrival_delay,
departure_delay
FROM
`bigquery-samples.airline_ontime_data.flights`
WHERE
departure_airport = 'DEN'
AND arrival_airport = 'LAX' ),
training AS (
SELECT
SAFE_DIVIDE(
SUM(arrival_delay * departure_delay),
SUM(departure_delay * departure_delay)) AS alpha
FROM
alldata
WHERE
dataset = 'train' )
SELECT
MAX(alpha) AS alpha,
dataset,
SQRT(
AVG(
(arrival_delay - alpha * departure_delay) *
(arrival_delay - alpha * departure_delay)
)
) AS rmse,
COUNT(arrival_delay) AS num_flights
FROM
alldata,
training
GROUP BY
dataset
"""
bigquery.Client().query(train_and_eval_rand).to_dataframe()
Explanation: Hint:
* Are you really getting the same training data in the compute_rmse query as in the compute_alpha query?
* Do you get the same answers each time you rerun the compute_alpha and compute_rmse blocks?
<h3> How do we correctly train and evaluate? </h3>
<br/>
Here's the right way to compute the RMSE using the actual training and held-out (evaluation) data. Note how much harder this feels.
Although the calculations are now correct, the experiment is still not repeatable.
Try running it several times; do you get the same answer?
End of explanation
compute_alpha = """
#standardSQL
SELECT
SAFE_DIVIDE(
SUM(arrival_delay * departure_delay),
SUM(departure_delay * departure_delay)) AS alpha
FROM
`bigquery-samples.airline_ontime_data.flights`
WHERE
departure_airport = 'DEN'
AND arrival_airport = 'LAX'
AND ABS(MOD(FARM_FINGERPRINT(date), 10)) < 8
"""
results = bigquery.Client().query(compute_alpha).to_dataframe()
alpha = results['alpha'][0]
print(alpha)
Explanation: <h2> Using HASH of date to split the data </h2>
Let's split by date and train.
End of explanation
compute_rmse = """
#standardSQL
SELECT
IF(ABS(MOD(FARM_FINGERPRINT(date), 10)) < 8, 'train', 'eval') AS dataset,
SQRT(
AVG(
(arrival_delay - ALPHA * departure_delay) *
(arrival_delay - ALPHA * departure_delay)
)
) AS rmse,
COUNT(arrival_delay) AS num_flights
FROM
`bigquery-samples.airline_ontime_data.flights`
WHERE
departure_airport = 'DEN'
AND arrival_airport = 'LAX'
GROUP BY
dataset
"""
print(bigquery.Client().query(compute_rmse.replace('ALPHA', str(alpha))).to_dataframe().head())
Explanation: We can now use the alpha to compute RMSE. Because the alpha value is repeatable, we don't need to worry that the alpha in the compute_rmse will be different from the alpha computed in the compute_alpha.
End of explanation
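# Aside: the point of hashing the date with FARM_FINGERPRINT is that the train/eval split
# becomes a deterministic function of the date, so it is repeatable across runs. A hedged
# sketch of the same idea in plain Python (md5 stands in for FarmHash, purely to illustrate):
import hashlib
def in_train_split(date_string):
    return int(hashlib.md5(date_string.encode()).hexdigest(), 16) % 10 < 8
print(in_train_split('2008-07-14'))  # same date -> same answer, every time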
<END_TASK> |
232 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
2018 NUS-MIT Datathon Tutorial
Step1: Next, we need to authenticate ourselves to Google Cloud Platform. If you are running the code cell below for the first time, a link will show up, which leads to a web page for authentication and authorization. Login with your crendentials and make sure the permissions it requests are proper, after clicking Allow button, you will be redirected to another web page which has a verification code displayed. Copy the code and paste it in the input field below.
Step2: At the same time, let's set the project we are going to use throughout the tutorial.
Step3: Optional
Step4: Dataset
We have already extracted the images from the DICOM files to separate folders on GCS, and some preprocessing were also done with the raw images (If you need custom preprocessing, please consult our tutorial on image preprocessing).
The folders ending with _demo contain subsets of training and test images. Specifically, the demo training dataset has 100 images, with 25 images for each breast density category (1 - 4). There are 20 images in the test dataset which were selected randomly. All the images were first padded to 5251x7111 (largest width and height among the selected images) and then resized to 95x128 to fit in memory and save training time. Both training and test images are "Cranial-Caudal" only.
ISIC dataset is organized in a slightly different way, the images are in JPEG format and each image comes with a JSON file containing metadata information. In order to make this tutorial work for ISIC, you will need to first pad and resize the images (we provide a script to do that here), and extract the labels from the JSON files based on your interests.
Training
Before coding on our neurual network, let's create a few helper methods to make loading data from Google Cloud Storage (GCS) easier.
Step6: Let's create a model function, which will be passed to an estimator that we will create later. The model has an architecture of 6 layers
Step7: Now that we have a model function, next step is feeding it to an estimator for training. Here are are creating a main function as required by tensorflow.
Step8: Finally, here comes the exciting moment. We are going to train and evaluate the model we just built! Run the following code cell and pay attention to the accuracy printed at the end of logs.
Note if this is not the first time you run the following cell, to avoid weird errors like "NaN loss during training", please run the following command to remove the temporary files. | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import os
import pandas as pd
import random
import tensorflow as tf
from google.colab import auth
from google.cloud import storage
from io import BytesIO
# The next import is used to print out pretty pandas dataframes
from IPython.display import display, HTML
from PIL import Image
Explanation: 2018 NUS-MIT Datathon Tutorial: Machine Learning on CBIS-DDSM
Goal
In this colab, we are going to train a simple convolutional neural network (CNN) with Tensorflow, which can be used to classify the mammographic images based on breast density.
The network we are going to build is adapted from the official tensorflow tutorial.
CBIS-DDSM
The dataset we are going to work with is CBIS-DDSM. Quote from their website:
"This CBIS-DDSM (Curated Breast Imaging Subset of DDSM) is an updated and standardized version of the Digital Database for Screening Mammography (DDSM)."
CBIS-DDSM differs from the original DDSM dataset in that it converted images to DICOM format, which is easier to work with.
Note that although this tutorial focuses on the CBIS-DDSM dataset, most of it can be easily applied to The International Skin Imaging Collaboration (ISIC) dataset as well. More details will be provided in the Datasets section below.
Setup
To be able to run the code cells in this tutorial, you need to create a copy of this Colab notebook by clicking "File" > "Save a copy in Drive..." menu.
You can share your copy with your teammates by clicking on the "SHARE" button on the top-right corner of your Colab notebook copy. Everyone with "Edit" permission is able to modify the notebook at the same time, so it is a great way for team collaboration.
First Let's import modules needed to complete the tutorial. You can run the following cell by clicking on the triangle button when you hover over the [ ] space on the top-left corner of the code cell below.
End of explanation
auth.authenticate_user()
Explanation: Next, we need to authenticate ourselves to Google Cloud Platform. If you are running the code cell below for the first time, a link will show up, which leads to a web page for authentication and authorization. Login with your crendentials and make sure the permissions it requests are proper, after clicking Allow button, you will be redirected to another web page which has a verification code displayed. Copy the code and paste it in the input field below.
End of explanation
project_id = 'nus-datathon-2018-team-00'
os.environ["GOOGLE_CLOUD_PROJECT"] = project_id
Explanation: At the same time, let's set the project we are going to use throughout the tutorial.
End of explanation
# Should output something like '/device:GPU:0'.
tf.test.gpu_device_name()
Explanation: Optional: In this Colab we can opt to use GPU to train our model by clicking "Runtime" on the top menus, then clicking "Change runtime type", select "GPU" for hardware accelerator. You can verify that GPU is working with the following code cell.
End of explanation
client = storage.Client()
bucket_name = 'datathon-cbis-ddsm-colab'
bucket = client.get_bucket(bucket_name)
def load_images(folder):
images = []
labels = []
# The image name is in format: <LABEL>_Calc_{Train,Test}_P_<Patient_ID>_{Left,Right}_CC.
for label in [1, 2, 3, 4]:
blobs = bucket.list_blobs(prefix=("%s/%s_" % (folder, label)))
for blob in blobs:
byte_stream = BytesIO()
blob.download_to_file(byte_stream)
byte_stream.seek(0)
img = Image.open(byte_stream)
images.append(np.array(img, dtype=np.float32))
labels.append(label-1) # Minus 1 to fit in [0, 4).
return np.array(images), np.array(labels, dtype=np.int32)
def load_train_images():
return load_images('small_train_demo')
def load_test_images():
return load_images('small_test_demo')
Explanation: Dataset
We have already extracted the images from the DICOM files to separate folders on GCS, and some preprocessing were also done with the raw images (If you need custom preprocessing, please consult our tutorial on image preprocessing).
The folders ending with _demo contain subsets of training and test images. Specifically, the demo training dataset has 100 images, with 25 images for each breast density category (1 - 4). There are 20 images in the test dataset which were selected randomly. All the images were first padded to 5251x7111 (largest width and height among the selected images) and then resized to 95x128 to fit in memory and save training time. Both training and test images are "Cranial-Caudal" only.
The ISIC dataset is organized in a slightly different way: the images are in JPEG format and each image comes with a JSON file containing metadata. In order to make this tutorial work for ISIC, you will need to first pad and resize the images (we provide a script to do that here), and extract the labels from the JSON files based on your interests.
Training
Before coding on our neural network, let's create a few helper methods to make loading data from Google Cloud Storage (GCS) easier.
End of explanation
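# Aside: for the ISIC variant mentioned above, you would pad/resize the JPEGs and pull a
# label out of each image's JSON metadata yourself. A hedged sketch of that preprocessing
# (the padded size, target size and JSON key path are assumptions, not part of this tutorial):
import json
def preprocess_isic_example(jpeg_path, json_path, padded_size=(5251, 7111), target_size=(95, 128)):
    img = Image.open(jpeg_path)
    canvas = Image.new(img.mode, padded_size)  # pad onto a black canvas of fixed size
    canvas.paste(img, (0, 0))
    canvas = canvas.resize(target_size)
    with open(json_path) as f:
        metadata = json.load(f)
    label = metadata.get('meta', {}).get('clinical', {}).get('benign_malignant')  # assumed key path
    return np.array(canvas, dtype=np.float32), label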
KERNEL_SIZE = 5 #@param
DROPOUT_RATE = 0.25 #@param
def cnn_model_fn(features, labels, mode):
"""Model function for CNN."""
# Input Layer.
# Reshape to 4-D tensor: [batch_size, height, width, channels]
# DDSM images are grayscale, which have 1 channel.
input_layer = tf.reshape(features["x"], [-1, 95, 128, 1])
# Convolutional Layer #1.
# Input Tensor Shape: [batch_size, 95, 128, 1]
# Output Tensor Shape: [batch_size, 95, 128, 32]
conv1 = tf.layers.conv2d(
inputs=input_layer,
filters=32,
kernel_size=KERNEL_SIZE,
padding="same",
activation=tf.nn.relu)
# Pooling Layer #1.
# Input Tensor Shape: [batch_size, 95, 128, 32]
# Output Tensor Shape: [batch_size, 47, 64, 32]
pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
# Convolutional Layer #2.
# Input Tensor Shape: [batch_size, 47, 64, 32]
# Output Tensor Shape: [batch_size, 47, 64, 64]
conv2 = tf.layers.conv2d(
inputs=pool1,
filters=64,
kernel_size=KERNEL_SIZE,
padding="same",
activation=tf.nn.relu)
# Pooling Layer #2.
# Input Tensor Shape: [batch_size, 47, 64, 64]
# Output Tensor Shape: [batch_size, 23, 32, 64]
pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
# Flatten tensor into a batch of vectors
# Input Tensor Shape: [batch_size, 23, 32, 64]
# Output Tensor Shape: [batch_size, 23 * 32 * 64]
pool2_flat = tf.reshape(pool2, [-1, 23 * 32 * 64])
# Dense Layer.
# Input Tensor Shape: [batch_size, 23 * 32 * 64]
# Output Tensor Shape: [batch_size, 1024]
dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu)
# Dropout operation.
# 0.75 probability that element will be kept.
dropout = tf.layers.dropout(inputs=dense, rate=DROPOUT_RATE,
training=(mode == tf.estimator.ModeKeys.TRAIN))
# Logits Layer.
# Input Tensor Shape: [batch_size, 1024]
# Output Tensor Shape: [batch_size, 4]
logits = tf.layers.dense(inputs=dropout, units=4)
predictions = {
# Generate predictions (for PREDICT and EVAL mode)
"classes": tf.argmax(input=logits, axis=1),
# Add `softmax_tensor` to the graph. It is used for PREDICT and by the
# `logging_hook`.
"probabilities": tf.nn.softmax(logits, name="softmax_tensor")
}
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
# Loss Calculation.
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
if mode == tf.estimator.ModeKeys.TRAIN:
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
train_op = optimizer.minimize(
loss=loss,
global_step=tf.train.get_global_step())
return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)
# Add evaluation metrics (for EVAL mode).
eval_metric_ops = {
"accuracy": tf.metrics.accuracy(
labels=labels, predictions=predictions["classes"])}
return tf.estimator.EstimatorSpec(
mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)
Explanation: Let's create a model function, which will be passed to an estimator that we will create later. The model has an architecture of 6 layers:
Convolutional Layer: Applies 32 5x5 filters, with ReLU activation function
Pooling Layer: Performs max pooling with a 2x2 filter and stride of 2
Convolutional Layer: Applies 64 5x5 filters, with ReLU activation function
Pooling Layer: Same setup as #2
Dense Layer: 1,024 neurons, with dropout regularization rate of 0.25
Logits Layer: 4 neurons, one for each breast density category, i.e. [0, 4)
Note that you can change the parameters on the right (or inline) to tune the neural network. It is highly recommended to check out the original tensorflow tutorial to get a deeper understanding of the network we are building here.
End of explanation
BATCH_SIZE = 20 #@param
STEPS = 1000 #@param
artifacts_bucket_name = 'nus-datathon-2018-team-00-shared-files'
# Append a random number to avoid collision.
artifacts_path = "ddsm_model_%s" % random.randint(0, 1000)
model_dir = "gs://%s/%s" % (artifacts_bucket_name, artifacts_path)
def main(_):
# Load training and test data.
train_data, train_labels = load_train_images()
eval_data, eval_labels = load_test_images()
# Create the Estimator.
ddsm_classifier = tf.estimator.Estimator(
model_fn=cnn_model_fn,
model_dir=model_dir)
# Set up logging for predictions.
# Log the values in the "Softmax" tensor with label "probabilities".
tensors_to_log = {"probabilities": "softmax_tensor"}
logging_hook = tf.train.LoggingTensorHook(
tensors=tensors_to_log, every_n_iter=50)
# Train the model.
train_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
x={"x": train_data},
y=train_labels,
batch_size=BATCH_SIZE,
num_epochs=None,
shuffle=True)
ddsm_classifier.train(
input_fn=train_input_fn,
steps=STEPS,
hooks=[logging_hook])
# Evaluate the model and print results.
eval_input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
x={"x": eval_data},
y=eval_labels,
num_epochs=1,
shuffle=False)
eval_results = ddsm_classifier.evaluate(input_fn=eval_input_fn)
print(eval_results)
Explanation: Now that we have a model function, the next step is feeding it to an estimator for training. Here we are creating a main function as required by tensorflow.
End of explanation
# Remove temporary files.
artifacts_bucket = client.get_bucket(artifacts_bucket_name)
artifacts_bucket.delete_blobs(artifacts_bucket.list_blobs(prefix=artifacts_path))
# Set logging level.
tf.logging.set_verbosity(tf.logging.INFO)
# Start training, this will call the main method defined above behind the scene.
# The whole training process will take ~5 mins.
tf.app.run()
Explanation: Finally, here comes the exciting moment. We are going to train and evaluate the model we just built! Run the following code cell and pay attention to the accuracy printed at the end of logs.
Note if this is not the first time you run the following cell, to avoid weird errors like "NaN loss during training", please run the following command to remove the temporary files.
End of explanation
<END_TASK> |
233 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Below I use my stemming/tokenizing and tokenizing functions to iterate over the list of synopses to create two vocabularies
Step1: Tf-idf and document similarity
Step2: Lets cluster!
Step3: But what did we get? | <ASSISTANT_TASK:>
Python Code:
df = pd.read_csv('../data/wiki/wiki.csv.gz', encoding='utf8', index_col=None)
df['text'] = df.text.str[:3000]
totalvocab_stemmed = []
totalvocab_tokenized = []
for doc_text in df.text:
allwords_stemmed = tokenize_and_stem(doc_text) #for each item in 'synopses', tokenize/stem
totalvocab_stemmed.extend(allwords_stemmed) #extend the 'totalvocab_stemmed' list
allwords_tokenized = tokenize_only(doc_text)
totalvocab_tokenized.extend(allwords_tokenized)
vocab_frame = pd.DataFrame({'words': totalvocab_tokenized}, index = totalvocab_stemmed)
vocab_frame.head(10)
len(vocab_frame)
Explanation: Below I use my stemming/tokenizing and tokenizing functions to iterate over the list of synopses to create two vocabularies: one stemmed and one only tokenized.
End of explanation
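# Aside: tokenize_and_stem and tokenize_only (and the pandas import) are used above but were
# defined earlier in the original notebook. A hedged sketch of what such helpers typically
# look like in an NLTK pipeline -- an assumption, not necessarily the author's exact code:
import re
import nltk
import pandas as pd
from nltk.stem.snowball import SnowballStemmer
stemmer = SnowballStemmer('english')
def tokenize_only(text):
    # sentence- then word-tokenize, keep lowercased tokens that contain at least one letter
    tokens = [w.lower() for s in nltk.sent_tokenize(text) for w in nltk.word_tokenize(s)]
    return [t for t in tokens if re.search('[a-zA-Z]', t)]
def tokenize_and_stem(text):
    return [stemmer.stem(t) for t in tokenize_only(text)]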
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_vectorizer = TfidfVectorizer(max_df=12, min_df=3,
stop_words='english',
tokenizer=tokenize_and_stem, ngram_range=(1,3))
%time tfidf_matrix = tfidf_vectorizer.fit_transform(df.text)
print(tfidf_matrix.shape)
terms = tfidf_vectorizer.get_feature_names()
len(terms)
len(vocab_frame)
terms = tfidf_vectorizer.get_feature_names()
idx = 2742
terms[idx]
terms[800]
vocab_frame.ix[terms[idx]].head(5)
Explanation: Tf-idf and document similarity
End of explanation
from sklearn.cluster import KMeans
num_clusters = 10
km = KMeans(n_clusters=num_clusters)
%time km.fit(tfidf_matrix)
clusters = km.labels_.tolist()
len(clusters)
Explanation: Lets cluster!
End of explanation
clustered = df.join(pd.DataFrame({'cluster': clusters}))
clustered.head()
len(km.cluster_centers_[1])
order_centroids = km.cluster_centers_.argsort()
order_centroids
term_words = vocab_frame.ix[terms[idx]]
term_words['words'].value_counts().keys()[0]
cluster_names = []
for cluster_centeroids in order_centroids:
words = []
for ind in cluster_centeroids[-6:]:
term_words = vocab_frame.ix[terms[ind].split(' ')]
best_word = term_words['words'].value_counts().keys()[0]
words.append(best_word)
cluster_names.append(', '.join(words))
cluster_names
clustered['cluster_name'] =clustered.cluster.map(lambda cluster: cluster_names[cluster])
clustered.head(10)
from sklearn.metrics.pairwise import cosine_similarity
dist = 1 - cosine_similarity(tfidf_matrix)
import os # for os.path.basename
import matplotlib.pyplot as plt
import matplotlib as mpl
from sklearn.manifold import MDS
MDS()
# two components as we're plotting points in a two-dimensional plane
# "precomputed" because we provide a distance matrix
# we will also specify `random_state` so the plot is reproducible.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=1)
pos = mds.fit_transform(dist) # shape (n_components, n_samples)
xs, ys = pos[:, 0], pos[:, 1]
import seaborn as sns
#set up colors per clusters using a dict
cluster_colors = sns.color_palette(n_colors=len(clusters))
%matplotlib inline
#create data frame that has the result of the MDS plus the cluster numbers and titles
df_print = pd.DataFrame(dict(x=xs, y=ys, label=clusters, title=df.name))
#group by cluster
groups = df_print.groupby('label')
# set up plot
fig, ax = plt.subplots(figsize=(30, 30)) # set size
ax.margins(0.05) # Optional, just adds 5% padding to the autoscaling
#iterate through groups to layer the plot
#note that I use the cluster_name and cluster_color dicts with the 'name' lookup to return the appropriate color/label
for name, group in groups:
ax.plot(group.x, group.y, marker='o', linestyle='', ms=12, label=cluster_names[name], color=cluster_colors[name], mec='none')
ax.set_aspect('auto')
ax.tick_params(\
axis= 'x', # changes apply to the x-axis
which='both', # both major and minor ticks are affected
bottom='off', # ticks along the bottom edge are off
top='off', # ticks along the top edge are off
labelbottom='off')
ax.tick_params(\
axis= 'y', # changes apply to the y-axis
which='both', # both major and minor ticks are affected
left='off', # ticks along the bottom edge are off
top='off', # ticks along the top edge are off
labelleft='off')
ax.legend(numpoints=1) #show legend with only 1 point
#add label in x,y position with the label as the film title
for i in range(len(df)):
ax.text(df_print.ix[i]['x'], df_print.ix[i]['y'], df_print.ix[i]['title'], size=8)
#plt.show() #show the plot
#uncomment the below to save the plot if need be
plt.savefig('clusters_small_noaxes.png', dpi=300)
Explanation: But what did we get?
End of explanation
<END_TASK> |
234 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Übungsblatt 7
Präsenzaufgaben
Aufgabe 1 CFG
Step3: Aufgabe 2 CFG
Step5: Hausaufgaben
Aufgabe 7 Plural für das Subjekt
Ergänzen Sie die in den Präsenzaufgaben erstellte Grammatik um die Möglichkeit, das Subjekt in den Plural zu setzen.
Dafür müssen Sie folgendes tun
Step6: Aufgabe 8 Adverben und Verbzweitstellung
Fügen Sie der Grammatik jetzt die zwei Adverben heute und morgen hinzu. Adverben können prinzipiell sehr frei im Satz platziert werden. Eine Besonderheit des Deutschen ist aber die sog. Verbzweitstellung, wie sie z. B. in Sätzen wie Heute schläft der Mann. deutlich wird.
Versuchen Sie alle Möglichkeiten zu implementieren | <ASSISTANT_TASK:>
Python Code:
grammar = """
S -> NP VP
NP -> DET NOM
NOM -> ADJ NOM | N
ADJ -> "schöne" | "kluge" | "dicke"
DET -> "der" | "die" | "das"
N -> "Mann" | "Frau" | "Buch"
VP -> V NP NP | V NP | V
V -> "gibt" | "schenkt" | "schläft" | "gefällt" | "kennt"
"""
import nltk
from IPython.display import display
import sys
def test_grammar(grammar, sentences):
cfg = nltk.grammar.FeatureGrammar.fromstring(grammar)
parser = nltk.parse.FeatureEarleyChartParser(cfg)
for i, sent in enumerate(sentences, 1):
print("Satz {}: {}".format(i, sent))
results = parser.parse(sent.split())
analyzed = False
for tree in results:
display(tree) # tree.draw() or print(tree)
analyzed = True
if not analyzed:
print("Keine Analyse möglich", file=sys.stderr)
pos_sentences = ["der Mann schläft"]
neg_sentences = ["das Mann schläft"]
test_grammar(grammar, neg_sentences)
Explanation: Exercise Sheet 7 (Übungsblatt 7)
In-class exercises
Exercise 1 CFG: Agreement in noun phrases
The following grammar corresponds to the grammar from Exercise Sheet 4 at the end of the in-class exercises. (You may therefore also use the grammar you created yourself in the course of that exercise as a starting point instead.)
Use the following table on the ambiguity of the forms of the definite article in German as a guide, and adapt the grammar so that it only accepts grammatically correct noun phrases as parts of sentences. Focus on the agreement in gender between article and noun.
|Form|possible features|
|----|-----------------|
|der|[NUM=sg, GEN=mas, KAS=nom]|
||[NUM=sg, GEN=fem, KAS=dat]|
||[NUM=sg, GEN=fem, KAS=gen]|
||[NUM=pl, KAS=gen]|
|die|[NUM=sg, GEN=fem, KAS=nom]|
||[NUM=sg, GEN=fem, KAS=akk]|
||[NUM=pl, KAS=nom]|
||[NUM=pl, KAS=akk]|
|das|[NUM=sg, GEN=neu, KAS=nom]|
||[NUM=sg, GEN=neu, KAS=akk]|
End of explanation
grammar = """
PLEASE COPY FROM ABOVE AFTER COMPLETING (1)
"""
pos_sentences.extend([
"das Buch gefällt der Frau",
"das Buch kennt die Frau",
"der Mann gibt der Frau das Buch"
])
neg_sentences.extend([
"der Mann schläft das Buch",
"die Frau gefällt das Buch",
"das Buch kennt",
"die Frau gibt das Buch"
])
Explanation: Exercise 2 CFG: Case
Next, case constraints are to be integrated into the grammar:
There is only one noun phrase in the nominative (the subject).
Depending on the valency slots of the verb, only noun phrases in the correct cases should be accepted.
Optional: Try to account for the free word order of German.
End of explanation
grammar = """
PLEASE COPY FROM ABOVE AFTER COMPLETING (2)
"""
pos_sentences.extend([
"die Männer geben der Frau das Buch",
"die Bücher gefallen der Frau",
"die Frauen schlafen"
])
neg_sentences.extend([
"der Mann geben der Frau das Buch",
"das Buch gefällt der Frauen",
"die Frauen schläft"
])
Explanation: Homework
Exercise 7: Plural for the subject
Extend the grammar created in the in-class exercises so that the subject can also be put into the plural.
To do this, you have to do the following:
1. Create lexical rules for the plural forms of the verbs, adjectives, and nouns (nominative is sufficient).
1. Complete the lexical rules for the article form die with the correct feature structure for the plural.
1. Formulate a number-agreement constraint between verb and subject.
End of explanation
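# Aside: a hedged illustration of how number agreement is typically written in an NLTK
# feature grammar (generic textbook-style rules, not the solution to this exercise):
agreement_demo = """
S -> NP[NUM=?n] VP[NUM=?n]
NP[NUM=?n] -> DET[NUM=?n] N[NUM=?n]
VP[NUM=?n] -> V[NUM=?n]
DET[NUM=sg] -> "der"
DET[NUM=pl] -> "die"
N[NUM=sg] -> "Mann"
N[NUM=pl] -> "Männer"
V[NUM=sg] -> "schläft"
V[NUM=pl] -> "schlafen"
"""
test_grammar(agreement_demo, ["der Mann schläft", "die Männer schlafen"])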
pos_sentences.extend([
"heute gibt der Mann der Frau das Buch",
"der Mann gibt heute der Frau das Buch",
"der Mann gibt der Frau heute das Buch",
"der Mann gibt der Frau das Buch heute"
])
neg_sentences.extend([
"heute der Mann gibt der Frau das Buch"
])
Explanation: Exercise 8: Adverbs and verb-second word order
Now add the two adverbs heute ('today') and morgen ('tomorrow') to the grammar. Adverbs can in principle be placed very freely in a sentence. A special property of German, however, is so-called verb-second order (Verbzweitstellung), as becomes clear in sentences such as Heute schläft der Mann ('Today the man sleeps').
Try to implement all the possibilities:
End of explanation
<END_TASK> |
235 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Vertex AI Pipelines
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Install the latest GA version of google-cloud-pipeline-components library as well.
Step3: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
Step4: Check the versions of the packages you installed. The KFP SDK version should be >=1.6.
Step5: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step6: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas
Step7: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
Step8: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step9: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step10: Only if your bucket doesn't already exist
Step11: Finally, validate access to your Cloud Storage bucket by examining its contents
Step12: Service Account
If you don't know your service account, try to get your service account using gcloud command by executing the second cell below.
Step13: Set service account access for Vertex AI Pipelines
Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.
Step14: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Step15: Vertex AI Pipelines constants
Setup up the following constants for Vertex AI Pipelines
Step16: Additional imports.
Step17: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
Step18: Define custom model pipeline that uses components from google_cloud_pipeline_components
Next, you define the pipeline.
The experimental.run_as_aiplatform_custom_job method takes as arguments the previously defined component, and the list of worker_pool_specs— in this case one— with which the custom training job is configured.
Then, google_cloud_pipeline_components components are used to define the rest of the pipeline
Step19: Compile the pipeline
Next, compile the pipeline.
Step20: Run the pipeline
Next, run the pipeline.
Step21: Click on the generated link to see your run in the Cloud Console.
<!-- It should look something like this as it is running | <ASSISTANT_TASK:>
Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
Explanation: Vertex AI Pipelines: model train, upload, and deploy using google-cloud-pipeline-components
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/pipelines/google_cloud_pipeline_components_model_train_upload_deploy.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/notebooks/blob/master/official/pipelines/google_cloud_pipeline_components_model_train_upload_deploy.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/notebooks/blob/master/official/pipelines/google_cloud_pipeline_components_model_train_upload_deploy.ipynb">
Open in Vertex AI Workbench
</a>
</td>
</table>
<br/><br/><br/>
Overview
This notebook shows how to use the components defined in google_cloud_pipeline_components in conjunction with an experimental run_as_aiplatform_custom_job method, to build a Vertex AI Pipelines workflow that trains a custom model, uploads the model as a Model resource, creates an Endpoint resource, and deploys the Model resource to the Endpoint resource.
Dataset
The dataset used for this tutorial is Cloud Public Dataset Program London Bikes Rental combined with NOAA weather data
The dataset predicts the duration of the bike rental.
Objective
In this tutorial, you create an custom model using a pipeline with components from google_cloud_pipeline_components and a custom pipeline component you build.
In addition, you'll use the kfp.v2.google.experimental.run_as_aiplatform_custom_job method to train a custom model.
The steps performed include:
Train a custom model.
Uploads the trained model as a Model resource.
Creates an Endpoint resource.
Deploys the Model resource to the Endpoint resource.
The components are documented here.
(From that page, see also the CustomPythonPackageTrainingJobRunOp and CustomContainerTrainingJobRunOp components, which similarly run 'custom' training, but as with the related google.cloud.aiplatform.CustomContainerTrainingJob and google.cloud.aiplatform.CustomPythonPackageTrainingJob methods from the Vertex AI SDK, also upload the trained model).
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebook, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3.
Activate that environment and run pip3 install Jupyter in a terminal shell to install Jupyter.
Run jupyter notebook on the command line in a terminal shell to launch Jupyter.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
! pip3 install $USER kfp google-cloud-pipeline-components --upgrade
Explanation: Install the latest GA version of google-cloud-pipeline-components library as well.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
! python3 -c "import kfp; print('KFP SDK version: {}'.format(kfp.__version__))"
! python3 -c "import google_cloud_pipeline_components; print('google_cloud_pipeline_components version: {}'.format(google_cloud_pipeline_components.__version__))"
Explanation: Check the versions of the packages you installed. The KFP SDK version should be >=1.6.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex AI SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
SERVICE_ACCOUNT = "[your-service-account]" # @param {type:"string"}
if (
SERVICE_ACCOUNT == ""
or SERVICE_ACCOUNT is None
or SERVICE_ACCOUNT == "[your-service-account]"
):
# Get your GCP project id from gcloud
shell_output = !gcloud auth list 2>/dev/null
SERVICE_ACCOUNT = shell_output[2].strip()
print("Service Account:", SERVICE_ACCOUNT)
Explanation: Service Account
If you don't know your service account, try to get your service account using gcloud command by executing the second cell below.
End of explanation
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator $BUCKET_NAME
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer $BUCKET_NAME
Explanation: Set service account access for Vertex AI Pipelines
Run the following commands to grant your service account access to read and write pipeline artifacts in the bucket that you created in the previous step -- you only need to run these once per service account.
End of explanation
import google.cloud.aiplatform as aip
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
PIPELINE_ROOT = "{}/pipeline_root/bikes_weather".format(BUCKET_NAME)
Explanation: Vertex AI Pipelines constants
Set up the following constants for Vertex AI Pipelines:
End of explanation
import kfp
from kfp.v2.dsl import component
Explanation: Additional imports.
End of explanation
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
Explanation: Initialize Vertex AI SDK for Python
Initialize the Vertex AI SDK for Python for your project and corresponding bucket.
End of explanation
hp_dict: str = '{"num_hidden_layers": 3, "hidden_size": 32, "learning_rate": 0.01, "epochs": 1, "steps_per_epoch": -1}'
data_dir: str = "gs://aju-dev-demos-codelabs/bikes_weather/"
TRAINER_ARGS = ["--data-dir", data_dir, "--hptune-dict", hp_dict]
# create working dir to pass to job spec
WORKING_DIR = f"{PIPELINE_ROOT}/{TIMESTAMP}"
MODEL_DISPLAY_NAME = f"train_deploy{TIMESTAMP}"
print(TRAINER_ARGS, WORKING_DIR, MODEL_DISPLAY_NAME)
@kfp.dsl.pipeline(name="train-endpoint-deploy" + TIMESTAMP)
def pipeline(
project: str = PROJECT_ID,
model_display_name: str = MODEL_DISPLAY_NAME,
serving_container_image_uri: str = "us-docker.pkg.dev/cloud-aiplatform/prediction/tf2-cpu.2-3:latest",
):
from google_cloud_pipeline_components.types import artifact_types
from google_cloud_pipeline_components.v1.custom_job import \
CustomTrainingJobOp
from google_cloud_pipeline_components.v1.endpoint import (EndpointCreateOp,
ModelDeployOp)
from google_cloud_pipeline_components.v1.model import ModelUploadOp
from kfp.v2.components import importer_node
custom_job_task = CustomTrainingJobOp(
project=project,
display_name="model-training",
worker_pool_specs=[
{
"containerSpec": {
"args": TRAINER_ARGS,
"env": [{"name": "AIP_MODEL_DIR", "value": WORKING_DIR}],
"imageUri": "gcr.io/google-samples/bw-cc-train:latest",
},
"replicaCount": "1",
"machineSpec": {
"machineType": "n1-standard-16",
"accelerator_type": aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
"accelerator_count": 2,
},
}
],
)
import_unmanaged_model_task = importer_node.importer(
artifact_uri=WORKING_DIR,
artifact_class=artifact_types.UnmanagedContainerModel,
metadata={
"containerSpec": {
"imageUri": "us-docker.pkg.dev/cloud-aiplatform/prediction/tf2-cpu.2-3:latest",
},
},
).after(custom_job_task)
model_upload_op = ModelUploadOp(
project=project,
display_name=model_display_name,
unmanaged_container_model=import_unmanaged_model_task.outputs["artifact"],
)
model_upload_op.after(import_unmanaged_model_task)
endpoint_create_op = EndpointCreateOp(
project=project,
display_name="pipelines-created-endpoint",
)
ModelDeployOp(
endpoint=endpoint_create_op.outputs["endpoint"],
model=model_upload_op.outputs["model"],
deployed_model_display_name=model_display_name,
dedicated_resources_machine_type="n1-standard-16",
dedicated_resources_min_replica_count=1,
dedicated_resources_max_replica_count=1,
)
Explanation: Define custom model pipeline that uses components from google_cloud_pipeline_components
Next, you define the pipeline.
The experimental.run_as_aiplatform_custom_job method takes as arguments the previously defined component, and the list of worker_pool_specs— in this case one— with which the custom training job is configured.
Then, google_cloud_pipeline_components components are used to define the rest of the pipeline: upload the model, create an endpoint, and deploy the model to the endpoint.
Note: While not shown in this example, the model deploy will create an endpoint if one is not provided.
End of explanation
from kfp.v2 import compiler # noqa: F811
compiler.Compiler().compile(
pipeline_func=pipeline,
package_path="tabular regression_pipeline.json".replace(" ", "_"),
)
Explanation: Compile the pipeline
Next, compile the pipeline.
End of explanation
DISPLAY_NAME = "bikes_weather_" + TIMESTAMP
job = aip.PipelineJob(
display_name=DISPLAY_NAME,
template_path="tabular regression_pipeline.json".replace(" ", "_"),
pipeline_root=PIPELINE_ROOT,
enable_caching=False,
)
job.run()
! rm tabular_regression_pipeline.json
Explanation: Run the pipeline
Next, run the pipeline.
End of explanation
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
try:
if delete_model and "DISPLAY_NAME" in globals():
models = aip.Model.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
model = models[0]
aip.Model.delete(model)
print("Deleted model:", model)
except Exception as e:
print(e)
try:
if delete_endpoint and "DISPLAY_NAME" in globals():
endpoints = aip.Endpoint.list(
filter=f"display_name={DISPLAY_NAME}_endpoint", order_by="create_time"
)
endpoint = endpoints[0]
endpoint.undeploy_all()
aip.Endpoint.delete(endpoint.resource_name)
print("Deleted endpoint:", endpoint)
except Exception as e:
print(e)
if delete_dataset and "DISPLAY_NAME" in globals():
if "tabular" == "tabular":
try:
datasets = aip.TabularDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.TabularDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
if "tabular" == "image":
try:
datasets = aip.ImageDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.ImageDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
if "tabular" == "text":
try:
datasets = aip.TextDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.TextDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
if "tabular" == "video":
try:
datasets = aip.VideoDataset.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
dataset = datasets[0]
aip.VideoDataset.delete(dataset.resource_name)
print("Deleted dataset:", dataset)
except Exception as e:
print(e)
try:
if delete_pipeline and "DISPLAY_NAME" in globals():
pipelines = aip.PipelineJob.list(
filter=f"display_name={DISPLAY_NAME}", order_by="create_time"
)
pipeline = pipelines[0]
aip.PipelineJob.delete(pipeline.resource_name)
print("Deleted pipeline:", pipeline)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Click on the generated link to see your run in the Cloud Console.
<!-- It should look something like this as it is running:
<a href="https://storage.googleapis.com/amy-jo/images/mp/automl_tabular_classif.png" target="_blank"><img src="https://storage.googleapis.com/amy-jo/images/mp/automl_tabular_classif.png" width="40%"/></a> -->
In the UI, many of the pipeline DAG nodes will expand or collapse when you click on them. Here is a partially-expanded view of the DAG (click image to see larger version).
<a href="https://storage.googleapis.com/amy-jo/images/mp/train_endpoint_deploy.png" target="_blank"><img src="https://storage.googleapis.com/amy-jo/images/mp/train_endpoint_deploy.png" width="75%"/></a>
Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial -- Note: this is auto-generated and not all resources may be applicable for this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation
<END_TASK> |
236 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Python fundamentals
A quick introduction to the Python programming language and Jupyter notebooks. (We're using Python 3, not Python 2.)
Basic data types and the print() function
Step1: Basic math
You can do basic math with Python. (You can also do more advanced math.)
Step2: Lists
A comma-separated collection of items between square brackets
Step3: Dictionaries
A data structure that maps keys to values inside curly brackets
Step4: Commenting your code
Python skips lines that begin with a hashtag # -- these lines are used to write comments to help explain the code to others (and to your future self).
Multi-line comments are enclosed between triple quotes
Step6: Type coercion
Consider | <ASSISTANT_TASK:>
Python Code:
# variable assignment
# https://www.digitalocean.com/community/tutorials/how-to-use-variables-in-python-3
# strings -- enclose in single or double quotes, just make sure they match
# numbers
# the print function
# booleans
Explanation: Python fundamentals
A quick introduction to the Python programming language and Jupyter notebooks. (We're using Python 3, not Python 2.)
Basic data types and the print() function
End of explanation
# addition
# subtraction
# multiplication
# division
# etc.
Explanation: Basic math
You can do basic math with Python. (You can also do more advanced math.)
End of explanation
# create a list: name, hometown, age
# an item's position in the list is the key thing
# create another list of mixed data
# use len() to get the number of items in the list
# use square brackets [] to access items in a list
# (counting starts at zero in Python)
# get the first item
# you can do negative indexing to get items from the end of your list
# get the last item
# Use colons to get a range of items in a list
# get the first two items
# the last number in a list slice is the first list item that's ~not~ included in the result
# if you leave the last number off, it takes the item at the first number's index and everything afterward
# get everything from the third item onward
# Use append() to add things to a list
# Use pop() to remove items from the end of a list
# use join() to join items from a list into a string with a delimiter of your choosing
Explanation: Lists
A comma-separated collection of items between square brackets: []. Python keeps track of the order of things inside a list.
End of explanation
# Access items in a dictionary using square brackets and the key (typically a string)
# You can also use the `get()` method to retrieve values
# you can optionally provide a second argument as the default value
# if the key doesn't exist (otherwise defaults to `None`)
# Use the .keys() method to get the keys of a dictionary
# Use the .values() method to get the values
# add items to a dictionary using square brackets, the name of the key (typically a string)
# and set the value like you'd set a variable, with =
# delete an item from a dictionary with `del`
Explanation: Dictionaries
A data structure that maps keys to values inside curly brackets: {}. Items in the dictionary are separated by commas. Python does not keep track of the order of items in a dictionary; if you need to keep track of insertion order, use an OrderedDict instead.
End of explanation
# date in m/d/yyyy format
# split out individual pieces of the date
# using a shortcut method to assign variables to the resulting list
# reshuffle as yyyy-mm-dd using .format()
# use a formatting option (:0>2) to left-pad month/day numbers with a zero
# construct a greeting template
Explanation: Commenting your code
Python skips lines that begin with a hashtag # -- these lines are used to write comments to help explain the code to others (and to your future self).
Multi-line comments are enclosed between triple quotes:
Comparison operators
When you want to compare values, you can use these symbols:
< means less than
> means greater than
== means equal
>= means greater than or equal
<= means less than or equal
!= means not equal
String functions
Python has a number of built-in methods to work with strings. They're useful if, say, you're using Python to clean data. Here are a few of them:
strip()
Call strip() on a string to remove whitespace from either side. It's like using the =TRIM() function in Excel.
upper() and lower()
Call .upper() on a string to make the characters uppercase. Call .lower() on a string to make the characters lowercase. This can be useful when testing strings for equality.
replace()
Use .replace() to substitute bits of text.
split()
Use .split() to split a string on some delimiter. If you don't specify a delimiter, it uses a single space as the default.
zfill()
Among other things, you can use .zfill() to add zero padding -- for instance, if you're working with ZIP code data that was saved as a number somewhere and you've lost the leading zeroes for that handful of ZIP codes that begin with 0.
Note: .zfill() is a string method, so if you want to apply it to a number, you'll need to first coerce it to a string with str().
slicing
Like lists, strings are iterables, so you can use slicing to grab chunks.
startswith(), endswith() and in
If you need to test whether a string starts with a series of characters, use .startswith(). If you need to test whether a string ends with a series of characters, use .endswith(). If you need to test whether a string is part of another string -- or a list of strings -- use the in operator.
These are case sensitive, so you'd typically .upper() or .lower() the strings you're comparing to ensure an apples-to-apples comparison.
String formatting
Using curly brackets with the various options available to the .format() method, you can create string templates for your data. Some examples:
End of explanation
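# A few of the string methods, comparisons and format templates described above, with made-up values:
zip_code = "876"
print(zip_code.zfill(5))                  # '00876'
print("  Omaha  ".strip().upper())        # 'OMAHA'
print("2023-01-05".split("-"))            # ['2023', '01', '05']
print("Nebraska".startswith("Neb"))       # True
print("braska" in "Nebraska")             # True
print(3 >= 2, "a" != "b")                 # True True
print("{}, {}!".format("Hello", "world")) # 'Hello, world!'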
# two strings of numbers
# what happens when you add them without coercing?
# coerce to integer, then add them
Explanation: Type coercion
Consider:
```python
# this is a number, can't do string-y things to it
age = 32
# this is a string, can't do number-y things to it
age = '32'
```
There are several functions you can use to coerce a value of one type to a value of another type. Here are a couple of them:
int() tries to convert to an integer
str() tries to convert to a string
float() tries to convert to a float
End of explanation
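# Worked example of type coercion (values are illustrative):
age = "32"
print(int(age) + 1)       # 33
print(str(age) + "!")     # '32!'
print(float("3.5") * 2)   # 7.0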
<END_TASK> |
237 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Exploring US Traffic Fatality Data
By Ben Van Dyke
The National Highway Traffic Safety Administration maintains a fatality dataset called the Fatality Reporting System. It contains detailed information about every fatal traffic crash in the U.S., including the time of day, location with latitude and longitude, roadway type, and more. This notebook will hopefully provide some helpful Python analysis techniques as well as raise awareness of the very serious societal problem of roadway injury and death.
Additional Datasets Used
Federal Highway Administration - Vehicle Miles Traveled
Centers for Disease Control - US Causes of Death
World Health Organization - Motor Vehicle Death Rates by Country
The additional data files are available in the accompanying Github repository.
Step1: Motor Vehicles Are Third-Leading Cause of Death Due to Injury
Step2: There were 35,092 traffic fatalities in the U.S. in 2015, or a little more than 11 for every 100,000 people. To put that in perspective, 39,260 women died from breast cancer and 29,720 men died from prostate cancer in 2013, according to the American Cancer Society. The fight against these cancers generates a lot of public awareness and fundraising. Fore example, in Chicago the lights on top of skyscrapers turn pink for a month every year. Contrast that with a general public apathy to the number of people dying in traffic crashes at rates comparable to the most-common forms of cancer.
In fact, traffic fatalities are the second-leading cause of death due to injury (non-health and disease related) in the U.S. The CDC has death statistics through the year 2015. See the query used to generate the data here. Here's the bar plot showing fatality rates by injury
Step4: Motor vehicle traffic is the third longest bar on the plot. Drug-related deaths make up the majority of poisoning deaths, and this number has increased substantially in recent years.
Motor vehicles kill at a nearly identical rate to firearms and close to the rate of drugs. Both of these other issues are discussed at length in the news media and by policymakers. We have a decades-long War on Drugs and recent renewed efforts on restricting assault weapons.
Why is there a lack of public awareness of the death toll caused by our driving culture?
That's a difficult question to answer. The automobile industry is a very important economic engine and source of national pride. The construction of the interstate system through urban areas and accompanying white flight to car-oriented suburbs likely had an impact as well. Since the 1950's, the majority of the built environment in this country has been designed specifically to increase the capacity for automobile travel, often at the expense of other modes. Perhaps we've become so dependent on our cars that we can't confront their deadly impact on our society at large. This is a question that can't be answered in this analysis, but it's important to consider at the same time.
US Roads Are Much More Deadly Than International Counterparts
That 35,092 killed in 2015 is certainly a large number, how does it compare to other countries in our peer group? The World Health Organization has that data.
Step5: The U.S. does not compare favorably at all against other wealthy countries with large populations. Even other countries with high automobile share, such as Australia and Canada, have nearly half the traffic death rate of the U.S.. The U.S. is wealthier by GDP per capita than the other nations in the chart, so why is our rate of traffic deaths so much higher?
One possible explanation is that Americans drive more than citizens of other countries. Total vehicle miles traveled by country isn't readily available, but this table provided by the FWHA of data from the 1990's shows that Americans drive 16% more miles per capita than Canadians and 46% more than UK citizens. But VMT alone does not explain the difference, the 16% more miles driven by Americans versus Canadians fails to account for the 76% higher fatality rate in the U.S.
It's not all bad news, though. Until 2015, traffic fatality rates have actually been declining in the U.S. As recently as 2005, there were more than 40,000 fatalities.
Step6: The fatality rate has declined significantly since the early 1990's, with a sharp decrease in the second half of the 2000's.
Step7: The absolute number of fatalities has declined, but so has the fatality rate per vehicle miles traveled (VMT), which indicates that we are making progress towards safer roads. Since 1994, the fatality rate has dropped while VMT increased. In recent years, Americans are driving less, with several year-over-year decreases in VMT since the mid-2000's. The continued decline in the fatality rate - even with a decreasing denominator - is an encouraging sign.
Drunk Driving
One of the first things that comes to mind when I think of traffic fatalities is drunk driving. From a young age, I recall being repeatedly warned about the dangers of drunk driving in school, on television, etc. Penalties are stiff, yet it does not seem to deter significant numbers of people from getting behind the wheel while intoxicated. The FARS data includes a drunk driver indicator, the value in the DRUNKEN_DR column indicates the number of drunk drivers involved in each fatal crash.
Step8: Nearly a third of all traffic fatalities involve a drunk driver. Despite all the education and public campaigns and increased enforcement, drunk driving is still taking a massive toll on human life every year.
What else can we learn about drunk driving from the data?
Step9: Clearly the late evening and early morning hours show high levels of drunken driving activity. Fatalities caused by drunken drivers are nearly double those caused by sober drivers between the hours of 2
Step10: As you might expect, drunk driving fatalities peak substantially on the weekends, with non-drunk fatalities remaining relatively consistent across all days of week.
Weather Conditions
The FARS data contains natural environment features such as LGT_COND and WEATHER which encode information on light conditions (light, dusk, etc) and weather (rain, fog, etc), respectively. Intuitively, I expect more fatalities to occur in darker conditions or harsh weather.
Step11: The majority of fatalities occur with no weather affecting visibility. Rain is the only precipitation form that shows up significantly. Perhaps people reduce driving during adverse conditions or drive more cautiously - leading to fewer deaths. | <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import StringIO
import zipfile
import urllib
from __future__ import division, print_function
matplotlib.style.use('fivethirtyeight')
%matplotlib inline
# Download and extract the 2015 FARS file
output = StringIO.StringIO()
output.write(urllib.urlopen('ftp://ftp.nhtsa.dot.gov/FARS/2015/National/FARS2015NationalCSV.zip').read())
fars_zip = zipfile.ZipFile(output)
fars_zip.extract('accident.csv')
# Read the data file into a pandas DataFrame
df_fatality = pd.read_csv('accident.csv')
Explanation: Exploring US Traffic Fatality Data
By Ben Van Dyke
The National Highway Traffic Safety Administration maintains a fatality dataset called the Fatality Reporting System. It contains detailed information about every fatal traffic crash in the U.S., including the time of day, location with latitude and longitude, roadway type, and more. This notebook will hopefully provide some helpful Python analysis techniques as well as raise awareness of the very serious societal problem of roadway injury and death.
Additional Datasets Used
Federal Highway Administration - Vehicle Miles Traveled
Centers for Disease Control - US Causes of Death
World Health Organization - Motor Vehicle Death Rates by Country
The additional data files are available in the accompanying Github repository.
End of explanation
# Number of traffic fatalities in the US in 2015 using the pandas DataFrame sum function
total_traffic_fatalities = df_fatality.FATALS.sum()
print("2015 Traffic Fatalities: ", total_traffic_fatalities)
Explanation: Motor Vehicles Are Third-Leading Cause of Death Due to Injury
End of explanation
# Get the rates
df_cdc = pd.read_csv('data/cdc_injuries_2015.txt',delimiter='\t')
df_cdc['Rate'] = df_cdc['Deaths'] / (df_cdc['Population'] / 100000)
# Create the series for plotting
df_cdc_rates = df_cdc.set_index('Injury Mechanism & All Other Leading Causes')['Rate']\
.dropna()\
.sort_values()
# Plot the top 10
plt.figure(figsize=(12,6))
df_cdc_rates.iloc[-10:].plot(kind='barh',
title='Motor Vehicles are Third-Leading Cause of Death Due to Injury')
plt.xlabel('Deaths per 100k people, 2015')
plt.ylabel('')
plt.show()
Explanation: There were 35,092 traffic fatalities in the U.S. in 2015, or a little more than 11 for every 100,000 people. To put that in perspective, 39,260 women died from breast cancer and 29,720 men died from prostate cancer in 2013, according to the American Cancer Society. The fight against these cancers generates a lot of public awareness and fundraising. For example, in Chicago the lights on top of skyscrapers turn pink for a month every year. Contrast that with the general public apathy toward the number of people dying in traffic crashes at rates comparable to the most common forms of cancer.
In fact, traffic fatalities are the second-leading cause of death due to injury (non-health and disease related) in the U.S. The CDC has death statistics through the year 2015. See the query used to generate the data here. Here's the bar plot showing fatality rates by injury:
End of explanation
df_who = pd.read_csv('data/who_2013_traffic_deaths.csv', index_col=0, skiprows=1,
names=['Country', 'Deaths', 'Death Rate'])
plt.figure(figsize=(12,6))
# group of peer countries
country_group = ['Australia', 'Canada', 'France', 'Germany', 'Japan',
'United Kingdom of Great Britain and Northern Ireland', 'United States of America']
# labels for plot
country_labels = ['Australia', 'Canada', 'France', 'Germany', 'Japan',
'UK', 'USA']
ax = df_who.loc[country_group]['Death Rate'].plot(kind='bar')
plt.ylabel("2013 Traffic Deaths / 100,000 people")
plt.title("US Traffic Death Rates Higher Than Those of Peer Group")
plt.xticks(np.arange(len(country_group)), country_labels, rotation=0)
plt.xlabel('')
rects = ax.patches
def autolabel(rects):
Attach some labels.
for rect in rects:
height = rect.get_height()
plt.text(rect.get_x()+rect.get_width()/2., height - .3, '%0.1f'%height,
ha='center', va='top', fontsize=14, color='w')
autolabel(rects)
plt.show()
Explanation: Motor vehicle traffic is the third longest bar on the plot. Drug-related deaths make up the majority of poisoning deaths, and this number has increased substantially in recent years.
Motor vehicles kill at a nearly identical rate to firearms and close to the rate of drugs. Both of these other issues are discussed at length in the news media and by policymakers. We have a decades-long War on Drugs and recent renewed efforts on restricting assault weapons.
Why is there a lack of public awareness of the death toll caused by our driving culture?
That's a difficult question to answer. The automobile industry is a very important economic engine and source of national pride. The construction of the interstate system through urban areas and accompanying white flight to car-oriented suburbs likely had an impact as well. Since the 1950's, the majority of the built environment in this country has been designed specifically to increase the capacity for automobile travel, often at the expense of other modes. Perhaps we've become so dependent on our cars that we can't confront their deadly impact on our society at large. This is a question that can't be answered in this analysis, but it's important to consider at the same time.
US Roads Are Much More Deadly Than International Counterparts
That 35,092 killed in 2015 is certainly a large number, how does it compare to other countries in our peer group? The World Health Organization has that data.
End of explanation
# Load FARS fatality time series
df_annual = pd.read_csv('data/fars_annual_trend.txt',delimiter='\t')
df_annual['Year'] = pd.to_datetime(df_annual['Year'], format='%Y')
series_annual = df_annual.set_index('Year')['Fatality Rate per 100,000 Population']
# Add 2015 per capita, US 2015 Population available here:
# https://factfinder.census.gov/faces/tableservices/jsf/pages/productview.xhtml?pid=PEP_2015_PEPANNRES&src=pt
series_annual[pd.to_datetime('2015-01-01')] = 35092 / (321418820 / 100000)
plt.figure(figsize=(12,6))
series_annual.plot()
plt.ylim(0)
plt.title('US Fatality Rate Declined in Mid-2000\'s')
plt.ylabel('Deaths per 100k people')
plt.show()
Explanation: The U.S. does not compare favorably at all against other wealthy countries with large populations. Even other countries with high automobile share, such as Australia and Canada, have nearly half the traffic death rate of the U.S.. The U.S. is wealthier by GDP per capita than the other nations in the chart, so why is our rate of traffic deaths so much higher?
One possible explanation is that Americans drive more than citizens of other countries. Total vehicle miles traveled by country isn't readily available, but this table provided by the FWHA of data from the 1990's shows that Americans drive 16% more miles per capita than Canadians and 46% more than UK citizens. But VMT alone does not explain the difference, the 16% more miles driven by Americans versus Canadians fails to account for the 76% higher fatality rate in the U.S.
It's not all bad news, though. Until 2015, traffic fatality rates have actually been declining in the U.S. As recently as 2005, there were more than 40,000 fatalities.
End of explanation
f, axarr = plt.subplots(1,2,figsize=(12,4))
df_annual['VMT (Trillions)'] = df_annual['Vehicle Miles Traveled (Billions)'] / 1000
df_annual.set_index('Year')['VMT (Trillions)'].plot(ax=axarr[0], title='Total VMT in the US is Leveling Off', color='black')
axarr[0].set_ylim(0)
axarr[0].set_xlabel('')
axarr[0].set_ylabel('Annual VMT (Trillions)')
df_annual.set_index('Year')['Fatality Rate per 100 Million VMT'].plot(ax=axarr[1], title='Fatality Rate per VMT is Declining',
)
axarr[1].set_xlabel('')
axarr[1].set_ylim(0)
axarr[1].set_ylabel('Deaths per 100M VMT')
plt.show()
Explanation: The fatality rate has declined significantly since the early 1990's, with a sharp decrease in the second half of the 2000's.
End of explanation
# Number of fatalities in crashes involving a drunken driver
df_dd = df_fatality.FATALS[df_fatality.DRUNK_DR >= 1].sum()
print("Fatalities involving a drunk driver: ", df_dd)
print("Percent of total traffic fatalities involving drunk driver: ",
'{0:.1f}%'.format(df_dd / total_traffic_fatalities * 100))
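# For additional context (our addition, not in the original post): total fatalities broken out
# by the number of drunk drivers involved, using the same DRUNK_DR and FATALS columns.
print(df_fatality.groupby('DRUNK_DR')['FATALS'].sum())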
Explanation: The absolute number of fatalities has declined, but so has the fatality rate per vehicle miles traveled (VMT), which indicates that we are making progress towards safer roads. Since 1994, the fatality rate has dropped while VMT increased. In recent years, Americans are driving less, with several year-over-year decreases in VMT since the mid-2000's. The continued decline in the fatality rate - even with a decreasing denominator - is an encouraging sign.
Drunk Driving
One of the first things that comes to mind when I think of traffic fatalities is drunk driving. From a young age, I recall being repeatedly warned about the dangers of drunk driving in school, on television, etc. Penalties are stiff, yet it does not seem to deter significant numbers of people from getting behind the wheel while intoxicated. The FARS data includes a drunk driver indicator, the value in the DRUNKEN_DR column indicates the number of drunk drivers involved in each fatal crash.
End of explanation
# pandas DataFrame pivot by hour that crash occurred and drunk driving
fatal_pivot = df_fatality.pivot_table(index=['HOUR'], columns=['DRUNK_DR'],
values='FATALS', aggfunc=np.sum).fillna(0)
# Sum the total number of drunk drivers involved
fatal_pivot['DRUNK_DR_SUM'] = fatal_pivot[[1,2,3]].sum(axis=1)
fp = fatal_pivot[[0,'DRUNK_DR_SUM']].iloc[:-1].copy()
fp.columns = ['No Drunk Driver', 'Drunk Driver']
plt.rcParams['figure.figsize'] = (12,6)
fp.plot()
plt.title('Drunk Driving Fatalities Peak in the Late Evening/Early Morning Hours')
plt.ylabel('Total Fatalities, 2015')
plt.xlabel('Hour')
plt.show()
Explanation: Nearly a third of all traffic fatalities involve a drunk driver. Despite all the education and public campaigns and increased enforcement, drunk driving is still taking a massive toll on human life every year.
What else can we learn about drunk driving from the data?
End of explanation
# Now look at day of week
fatal_pivot = df_fatality.pivot_table(index=['DAY_WEEK'],columns=['DRUNK_DR'],
values='FATALS', aggfunc=np.sum)
# Sum the total number of drunk drivers involved
fatal_pivot['DRUNK_DR_SUM'] = fatal_pivot[[1,2,3]].sum(axis=1)
fp = fatal_pivot[[0,'DRUNK_DR_SUM']].copy()
fp.columns = ['No Drunk Driver', 'Drunk Driver']
# Days of week are indexed 1=Sunday, 2=Monday, ..., 6=Saturday
labels=['Sun','Mon','Tue','Wed','Thu','Fri','Sat']
fp.index = labels
fp.plot(kind='bar')
plt.xticks(rotation=0)
plt.ylabel('Total Fatalities, 2015')
plt.title('Drunk Driving Fatalities Peak on Weekends')
plt.show()
Explanation: Clearly the late evening and early morning hours show high levels of drunken driving activity. Fatalities caused by drunken drivers are nearly double those caused by sober drivers between the hours of 2:00 and 4:00 a.m.
I have not been able to find VMT data by hour for U.S., but this report from the Federal Highway Administration suggests that VMT in the late evening/early morning is a fraction of the peak volume during daytime commuting hours. On a per-VMT basis, the roads at night are more dangerous than the absolute numbers show, as the elevated fatality numbers are observed despite dramatically fewer people driving at those times.
End of explanation
weather_group = df_fatality.groupby(['WEATHER']).sum()['FATALS']
labels = ['Clear', 'Rain', 'Sleet/Hail', 'Snow', 'Fog, Smog, Smoke',
'Severe Crosswinds', 'Blowing Sand, Soil, Dirt', 'Other',
'Cloudy', 'Blowing Snow', 'Freezing Rain or Drizzle', 'Not Reported', 'Unknown']
weather_group.index = labels
(weather_group.sort_values() / weather_group.sum()).plot(kind='barh')
plt.title('Most Crashes Occur in Clear Weather Conditions')
plt.xlabel('Proportion of Total Crashes, 2015')
plt.show()
Explanation: As you might expect, drunk driving fatalities peak substantially on the weekends, with non-drunk fatalities remaining relatively consistent across all days of week.
Weather Conditions
The FARS data contains natural environment features such as LGT_COND and WEATHER which encode information on light conditions (light, dusk, etc) and weather (rain, fog, etc), respectively. Intuitively, I expect more fatalities to occur in darker conditions or harsh weather.
End of explanation
# pandas groupby on LGT_COND column
light_group = df_fatality.groupby(['LGT_COND']).sum()['FATALS']
labels = ['Daylight','Dark - Not Lighted', 'Dark - Lighted',
'Dawn', 'Dusk', 'Dark - Unknown Lighting', 'Other',
'Not Reported', 'Unknown']
light_group.index = labels
(light_group.sort_values() / light_group.sum()).plot(kind='barh')
plt.title('Fatal Crashes are Evenly Split Between Daylight and Darkness')
plt.xlabel('Proportion of Total Crashes, 2015')
plt.show()
Explanation: The majority of fatalities occur with no weather affecting visibility. Rain is the only precipitation form that shows up significantly. Perhaps people reduce driving during adverse conditions or drive more cautiously - leading to fewer deaths.
End of explanation
<END_TASK> |
238 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Iris Project
Data Exploration and Analysis
Step1: Iris Data from Seaborn
Step2: Visualisation
Step3: scikit-learn
url = http
Step4: Key points
Data in the form of a table
Features in the form of a matrix
Label or target array
Step5: Scikit-Learn API
url = https
Step6: Step 1. Choose a class of model
Step7: Step 2. Choose model hyperparameters
Step8: Step 3. Arrange data into features matrix and target array
Step9: Step 4. Fit model to data
Step10: Step 5. Apply trained model to new data
Step11: Visualise | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(color_codes=True)
%matplotlib inline
df = pd.read_csv('iris.data')
df.head()
df = pd.read_csv('iris.data', header=-1)
df.head()
col_name = ['sepal length', 'sepal width', 'petal length', 'petal width', 'class']
df.columns = col_name
df.head()
Explanation: Iris Project
Data Exploration and Analysis
End of explanation
iris = sns.load_dataset('iris')
iris.head()
df.describe()
iris.describe()
print(iris.info())
print(iris.groupby('species').size())
Explanation: Iris Data from Seaborn
End of explanation
sns.pairplot(iris, hue='species', size=3, aspect=1);
iris.hist(edgecolor='black', linewidth=1.2, figsize=(12,8));
plt.show();
plt.figure(figsize=(12,8));
plt.subplot(2,2,1)
sns.violinplot(x='species', y='sepal_length', data=iris)
plt.subplot(2,2,2)
sns.violinplot(x='species', y='sepal_width', data=iris)
plt.subplot(2,2,3)
sns.violinplot(x='species', y='petal_length', data=iris)
plt.subplot(2,2,4)
sns.violinplot(x='species', y='petal_width', data=iris);
iris.boxplot(by='species', figsize=(12,8));
pd.plotting.scatter_matrix(iris, figsize=(12,10))
plt.show()
sns.pairplot(iris, hue="species",diag_kind="kde");
Explanation: Visualisation
End of explanation
%%HTML
<iframe width=100% height=500 src='http://scikit-learn.org/stable/'></iframe>
Explanation: scikit-learn
url = http://scikit-learn.org/stable/
End of explanation
iris.head()
Explanation: Key points
Data in the form of a table
Features in the form of a matrix
Label or target array
End of explanation
generate_random = np.random.RandomState(0)
x = 10 * generate_random.rand(100)
y = 3 * x + np.random.randn(100)
plt.figure(figsize = (10, 8))
plt.scatter(x, y);
Explanation: Scikit-Learn API
url = https://arxiv.org/abs/1309.0238
General principles
Consistency. All objects (basic or composite) share a consistent interface composed of a limited set of methods. This interface is documented in a consistent manner for all objects.
Inspection. Constructor parameters and parameter values determined by learning algorithms are stored and exposed as public attributes.
Non-proliferation of classes. Learning algorithms are the only objects to be represented using custom classes. Datasets are represented as NumPy arrays or SciPy sparse matrices. Hyper-parameter names and values are represented as standard Python strings or numbers whenever possible. This keeps scikitlearn easy to use and easy to combine with other libraries.
Composition. Many machine learning tasks are expressible as sequences or combinations of transformations to data. Some learning algorithms are also naturally viewed as meta-algorithms parametrized on other algorithms. Whenever feasible, such algorithms are implemented and composed from existing building blocks.
Sensible defaults. Whenever an operation requires a user-defined parameter, an appropriate default value is defined by the library. The default value should cause the operation to be performed in a sensible way (giving a baseline solution for the task at hand).
Basic Steps of Using Scikit-Learn API
Choose a class of model
Choose model hyperparameters
Arrange data into features matrix and target array
Fit model to data
Apply trained model to new data
Supervised Learning: Simple Linear Regression
End of explanation
from sklearn.linear_model import LinearRegression
Explanation: Step 1. Choose a class of model
End of explanation
model = LinearRegression(fit_intercept=True)
model
Explanation: Step 2. Choose model hyperparameters
End of explanation
X = x.reshape(-1, 1)
X.shape
Explanation: Step 3. Arrange data into features matrix and target array
End of explanation
model.fit(X, y)
model.coef_
model.intercept_
Explanation: Step 4. Fit model to data
End of explanation
x_fit = np.linspace(-1, 11)
X_fit = x_fit.reshape(-1,1)
y_fit = model.predict(X_fit)
Explanation: Step 5. Apply trained model to new data
End of explanation
plt.figure(figsize = (10, 8))
plt.scatter(x, y)
plt.plot(x_fit, y_fit);
Explanation: Visualise
End of explanation
<END_TASK> |
239 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Train / Test
We'll start by creating some data set that we want to build a model for (in this case a polynomial regression)
Step1: Now we'll split the data in two - 80% of it will be used for "training" our model, and the other 20% for testing it. This way we can avoid overfitting.
Step2: Here's our training dataset
Step3: And our test dataset
Step4: Now we'll try to fit an 8th-degree polynomial to this data (which is almost certainly overfitting, given what we know about how it was generated!)
Step5: Let's plot our polynomial against the training data
Step6: And against our test data
Step7: Doesn't look that bad when you just eyeball it, but the r-squared score on the test data is kind of horrible! This tells us that our model isn't all that great...
Step8: ...even though it fits the training data better | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
from pylab import *
np.random.seed(2)
pageSpeeds = np.random.normal(3.0, 1.0, 100)
purchaseAmount = np.random.normal(50.0, 30.0, 100) / pageSpeeds
scatter(pageSpeeds, purchaseAmount)
Explanation: Train / Test
We'll start by creating some data set that we want to build a model for (in this case a polynomial regression):
End of explanation
trainX = pageSpeeds[:80]
testX = pageSpeeds[80:]
trainY = purchaseAmount[:80]
testY = purchaseAmount[80:]
Explanation: Now we'll split the data in two - 80% of it will be used for "training" our model, and the other 20% for testing it. This way we can avoid overfitting.
End of explanation
scatter(trainX, trainY)
Explanation: Here's our training dataset:
End of explanation
scatter(testX, testY)
Explanation: And our test dataset:
End of explanation
x = np.array(trainX)
y = np.array(trainY)
p4 = np.poly1d(np.polyfit(x, y, 8))
Explanation: Now we'll try to fit an 8th-degree polynomial to this data (which is almost certainly overfitting, given what we know about how it was generated!)
End of explanation
import matplotlib.pyplot as plt
xp = np.linspace(0, 7, 100)
axes = plt.axes()
axes.set_xlim([0,7])
axes.set_ylim([0, 200])
plt.scatter(x, y)
plt.plot(xp, p4(xp), c='r')
plt.show()
Explanation: Let's plot our polynomial against the training data:
End of explanation
testx = np.array(testX)
testy = np.array(testY)
axes = plt.axes()
axes.set_xlim([0,7])
axes.set_ylim([0, 200])
plt.scatter(testx, testy)
plt.plot(xp, p4(xp), c='r')
plt.show()
Explanation: And against our test data:
End of explanation
from sklearn.metrics import r2_score
r2 = r2_score(testy, p4(testx))
print(r2)
Explanation: Doesn't look that bad when you just eyeball it, but the r-squared score on the test data is kind of horrible! This tells us that our model isn't all that great...
End of explanation
from sklearn.metrics import r2_score
r2 = r2_score(np.array(trainY), p4(np.array(trainX)))
print(r2)
Explanation: ...even though it fits the training data better:
End of explanation
<END_TASK> |
240 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: An introduction to copulas
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step3: [copula](https
Step4: The real power of such a model, though, comes from using the probability integral transform to apply the copula to arbitrary random variables. This lets us specify whatever marginal distributions we want and join them together with the copula.
We start from the following model.
$$\begin{align} X &\sim \text{Kumaraswamy}(a, b) \\ Y &\sim \text{Gumbel}(\mu, \beta) \end{align}$$
We then use the copula to obtain a bivariate random variable $Z$ whose marginals are Kumaraswamy and Gumbel.
First, let's plot the distribution generated by these two random variables. This is used only as a baseline for comparison once we apply the copula.
Step6: Joint distributions with different marginals
Next, we use the Gaussian copula to join the distributions together and plot the result. Here again we apply a TransformedDistribution with the appropriate Bijector to obtain the desired marginals.
Specifically, we use a Blockwise bijector, which applies different bijectors to different parts of the vector (while remaining a bijective transformation).
We can now define the copula we want. Given a list of target marginals (encoded as bijectors), it is easy to build a new distribution that uses the copula and has the specified marginals.
Step7: Finally, let's actually use this Gaussian copula. We use a Cholesky factor of $\begin{bmatrix}1 & 0\\ \rho & \sqrt{(1-\rho^2)}\end{bmatrix}$, which corresponds to unit variances and a correlation of $\rho$ for the multivariate normal.
Let's look at a few cases.
Step8: Finally, let's verify that we actually get the marginals we asked for. | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Probability Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import numpy as np
import matplotlib.pyplot as plt
import tensorflow.compat.v2 as tf
tf.enable_v2_behavior()
import tensorflow_probability as tfp
tfd = tfp.distributions
tfb = tfp.bijectors
Explanation: An introduction to copulas
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/probability/examples/Gaussian_Copula"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/probability/examples/Gaussian_Copula.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/probability/examples/Gaussian_Copula.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/probability/examples/Gaussian_Copula.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
End of explanation
class GaussianCopulaTriL(tfd.TransformedDistribution):
Takes a location, and lower triangular matrix for the Cholesky factor.
def __init__(self, loc, scale_tril):
super(GaussianCopulaTriL, self).__init__(
distribution=tfd.MultivariateNormalTriL(
loc=loc,
scale_tril=scale_tril),
bijector=tfb.NormalCDF(),
validate_args=False,
name="GaussianCopulaTriLUniform")
# Plot an example of this.
unit_interval = np.linspace(0.01, 0.99, num=200, dtype=np.float32)
x_grid, y_grid = np.meshgrid(unit_interval, unit_interval)
coordinates = np.concatenate(
[x_grid[..., np.newaxis],
y_grid[..., np.newaxis]], axis=-1)
pdf = GaussianCopulaTriL(
loc=[0., 0.],
scale_tril=[[1., 0.8], [0., 0.6]],
).prob(coordinates)
# Plot its density.
plt.contour(x_grid, y_grid, pdf, 100, cmap=plt.cm.jet);
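# Quick sanity check (our addition, not part of the original tutorial): draw a few samples
# from the copula defined above; every sample should land inside the unit square.
copula_example = GaussianCopulaTriL(loc=[0., 0.], scale_tril=[[1., 0.8], [0., 0.6]])
print(copula_example.sample(5))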
Explanation: A [copula](https://en.wikipedia.org/wiki/Copula_(probability_theory%29) is a classical approach to capturing the dependence between random variables. More formally, a copula is a multivariate distribution $C(U_1, U_2, ...., U_n)$ such that marginalizing gives $U_i \sim \text{Uniform}(0, 1)$.
Copulas are interesting because we can use them to create multivariate distributions with arbitrary marginals. Here is the recipe:
Use the probability integral transform to turn an arbitrary continuous random variable $X$ into a uniform one, $F_X(X)$, where $F_X$ is the CDF (cumulative distribution function) of $X$.
Given a copula (say, a bivariate one) $C(U, V)$, $U$ and $V$ have uniform marginal distributions.
Now, given the random variables of interest $X, Y$, create a new distribution $C'(X, Y) = C(F_X(X), F_Y(Y))$. The marginals of $X$ and $Y$ are exactly the ones we wanted.
Marginals are univariate, so they can be easier to measure and model. A copula lets us start from those marginals while still getting arbitrary correlation between dimensions.
Gaussian copula
To illustrate how copulas are constructed, consider capturing dependence according to multivariate Gaussian correlations. A Gaussian copula is given by $C(u_1, u_2, ...u_n) = \Phi_\Sigma(\Phi^{-1}(u_1), \Phi^{-1}(u_2), ... \Phi^{-1}(u_n))$, where $\Phi_\Sigma$ denotes the CDF of a multivariate normal with covariance $\Sigma$ and mean 0, and $\Phi^{-1}$ is the inverse CDF of the standard normal.
Applying the normal's inverse CDF warps the uniform dimensions so that they are normally distributed. Applying the multivariate normal's CDF then squashes the distribution so that the marginals are uniform again, but with Gaussian correlation.
Thus, a Gaussian copula is a distribution over the unit hypercube $[0, 1]^n$ with uniform marginals.
Defined this way, a Gaussian copula can be implemented with a tfd.TransformedDistribution and an appropriate Bijector. In other words, we transform a multivariate normal through the normal distribution's inverse CDF, implemented by the tfb.NormalCDF bijector.
Below we implement a Gaussian copula under one simplifying assumption: that the covariance is parameterized by its Cholesky factor (hence the covariance of a MultivariateNormalTriL). (Other tf.linalg.LinearOperators could be used to encode different matrix-free assumptions.)
End of explanation
a = 2.0
b = 2.0
gloc = 0.
gscale = 1.
x = tfd.Kumaraswamy(a, b)
y = tfd.Gumbel(loc=gloc, scale=gscale)
# Plot the distributions, assuming independence
x_axis_interval = np.linspace(0.01, 0.99, num=200, dtype=np.float32)
y_axis_interval = np.linspace(-2., 3., num=200, dtype=np.float32)
x_grid, y_grid = np.meshgrid(x_axis_interval, y_axis_interval)
pdf = x.prob(x_grid) * y.prob(y_grid)
# Plot its density
plt.contour(x_grid, y_grid, pdf, 100, cmap=plt.cm.jet);
Explanation: The real power of such a model, though, comes from using the probability integral transform to apply the copula to arbitrary random variables. This lets us specify whatever marginal distributions we want and join them together with the copula.
We start from the following model.
$$\begin{align} X &\sim \text{Kumaraswamy}(a, b) \\ Y &\sim \text{Gumbel}(\mu, \beta) \end{align}$$
We then use the copula to obtain a bivariate random variable $Z$ whose marginals are Kumaraswamy and Gumbel.
First, let's plot the distribution generated by these two random variables. This is used only as a baseline for comparison once we apply the copula.
End of explanation
class WarpedGaussianCopula(tfd.TransformedDistribution):
Application of a Gaussian Copula on a list of target marginals.
This implements an application of a Gaussian Copula. Given [x_0, ... x_n]
which are distributed marginally (with CDF) [F_0, ... F_n],
`GaussianCopula` represents an application of the Copula, such that the
resulting multivariate distribution has the above specified marginals.
The marginals are specified by `marginal_bijectors`: These are
bijectors whose `inverse` encodes the CDF and `forward` the inverse CDF.
block_sizes is a 1-D Tensor to determine splits for `marginal_bijectors`
length should be same as length of `marginal_bijectors`.
See tfb.Blockwise for details
def __init__(self, loc, scale_tril, marginal_bijectors, block_sizes=None):
super(WarpedGaussianCopula, self).__init__(
distribution=GaussianCopulaTriL(loc=loc, scale_tril=scale_tril),
bijector=tfb.Blockwise(bijectors=marginal_bijectors,
block_sizes=block_sizes),
validate_args=False,
name="GaussianCopula")
Explanation: Joint distributions with different marginals
Next, we use the Gaussian copula to join the distributions together and plot the result. Here again we apply a TransformedDistribution with the appropriate Bijector to obtain the desired marginals.
Specifically, we use a Blockwise bijector, which applies different bijectors to different parts of the vector (while remaining a bijective transformation).
We can now define the copula we want. Given a list of target marginals (encoded as bijectors), it is easy to build a new distribution that uses the copula and has the specified marginals.
End of explanation
# Create our coordinates:
coordinates = np.concatenate(
[x_grid[..., np.newaxis], y_grid[..., np.newaxis]], -1)
def create_gaussian_copula(correlation):
# Use Gaussian Copula to add dependence.
return WarpedGaussianCopula(
loc=[0., 0.],
scale_tril=[[1., 0.], [correlation, tf.sqrt(1. - correlation ** 2)]],
# These encode the marginals we want. In this case we want X_0 has
# Kumaraswamy marginal, and X_1 has Gumbel marginal.
marginal_bijectors=[
tfb.Invert(tfb.KumaraswamyCDF(a, b)),
tfb.Invert(tfb.GumbelCDF(loc=0., scale=1.))])
# Note that the zero case will correspond to independent marginals!
correlations = [0., -0.8, 0.8]
copulas = []
probs = []
for correlation in correlations:
copula = create_gaussian_copula(correlation)
copulas.append(copula)
probs.append(copula.prob(coordinates))
# Plot its density
for correlation, copula_prob in zip(correlations, probs):
plt.figure()
plt.contour(x_grid, y_grid, copula_prob, 100, cmap=plt.cm.jet)
plt.title('Correlation {}'.format(correlation))
Explanation: Finally, let's actually use this Gaussian copula. We use a Cholesky factor of $\begin{bmatrix}1 & 0\\ \rho & \sqrt{(1-\rho^2)}\end{bmatrix}$, which corresponds to unit variances and a correlation of $\rho$ for the multivariate normal.
Let's look at a few cases.
End of explanation
def kumaraswamy_pdf(x):
return tfd.Kumaraswamy(a, b).prob(np.float32(x))
def gumbel_pdf(x):
return tfd.Gumbel(gloc, gscale).prob(np.float32(x))
copula_samples = []
for copula in copulas:
copula_samples.append(copula.sample(10000))
plot_rows = len(correlations)
plot_cols = 2 # for 2 densities [kumarswamy, gumbel]
fig, axes = plt.subplots(plot_rows, plot_cols, sharex='col', figsize=(18,12))
# Let's marginalize out on each, and plot the samples.
for i, (correlation, copula_sample) in enumerate(zip(correlations, copula_samples)):
k = copula_sample[..., 0].numpy()
g = copula_sample[..., 1].numpy()
_, bins, _ = axes[i, 0].hist(k, bins=100, density=True)
axes[i, 0].plot(bins, kumaraswamy_pdf(bins), 'r--')
axes[i, 0].set_title('Kumaraswamy from Copula with correlation {}'.format(correlation))
_, bins, _ = axes[i, 1].hist(g, bins=100, density=True)
axes[i, 1].plot(bins, gumbel_pdf(bins), 'r--')
axes[i, 1].set_title('Gumbel from Copula with correlation {}'.format(correlation))
Explanation: Finally, let's verify that we actually get the marginals we asked for.
End of explanation
<END_TASK> |
241 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Populate local MDCS instance with student data and metadata
Import MDCS API tool module
Step1: Host and user information
Step2: List of file prefixes for micrograph images and XML metadata
Step3: For each name in the list | <ASSISTANT_TASK:>
Python Code:
import mdcs
Explanation: Populate local MDCS instance with student data and metadata
Import MDCS API tool module
End of explanation
user='admin'
pswd='admin'
host='http://127.0.0.1:8000'
template_name='DiffusionDemo'
Explanation: Host and user information
End of explanation
name_list=[
"GE-DiffusionCouple-IN100-IN718",
"GE-DiffusionCouple-IN718-R95",
"GE-DiffusionCouple-R95-R88",
"GE-DiffusionCouple-R88-IN100"
]
Explanation: List of file prefixes for micrograph images and XML metadata
End of explanation
for name in name_list:
xml_name=name+".xml"
tif_name=name+".tif"
print "Uploading:",tif_name
url = mdcs.blob.upload(tif_name,host,user,pswd)
print "Reading:",xml_name
with open(xml_name, 'r') as f:
content = f.read()
content = content.replace("http://127.0.0.1:8000/rest/blob?id=REPLACE-ME-BLOB-ID",url)
print "Uploading:",xml_name
response = mdcs.curate_as(xml_name,name,host,user,pswd,template_title=template_name,content=content)
print "Response:",response
Explanation: For each name in the list:
Upload micrograph
Read XML metadata
Replace generic URL with unique URL for micrograph
Upload XML metadata record
End of explanation
<END_TASK> |
242 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Introduction
Step1: Import raw data
The user needs to specify the directories containing the data of interest. Each sample type should have a key which corresponds to the directory path. Additionally, each object should have a list that includes the channels of interest.
Step2: We'll generate a list of pairs of stypes and channels for ease of use.
Step3: We can now read in all datafiles specified by the data dictionary above.
Step4: Calculate landmark bins
Step5: Calculate landmark bins based on user input parameters and the previously specified control sample.
Step6: Calculate landmarks | <ASSISTANT_TASK:>
Python Code:
import deltascope as ds
import deltascope.alignment as ut
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import normalize
from scipy.optimize import minimize
import os
import tqdm
import json
import datetime
Explanation: Introduction: Landmarks
End of explanation
# --------------------------------
# -------- User input ------------
# --------------------------------
data = {
# Specify sample type key
'30hpf': {
# Specify path to data directory
'path': '.\\Data\\30hpf\\Output-02-14-2019',
# Specify which channels are in the directory and are of interest
'channels': ['AT','ZRF']
},
'28hpf': {
'path': '.\Data\\28hpf\\Output-02-14-2019-yot-ilastik',
'channels': ['AT','ZRF']
},
'26hpf': {
'path': '.\Data\\26hpf\\Output-02-14-2019',
'channels': ['AT','ZRF']
},
'24hpf': {
'path': '.\Data\\24hpf\\Output-02-15-2019',
'channels': ['AT','ZRF']
},
'22hpf': {
'path': '.\Data\\22hpf\\Output-02-14-2019',
'channels': ['AT','ZRF']
}
}
Explanation: Import raw data
The user needs to specify the directories containing the data of interest. Each sample type should have a key which corresponds to the directory path. Additionally, each object should have a list that includes the channels of interest.
End of explanation
data_pairs = []
for s in data.keys():
for c in data[s]['channels']:
data_pairs.append((s,c))
Explanation: We'll generate a list of pairs of stypes and channels for ease of use.
End of explanation
D = {}
for s in data.keys():
D[s] = {}
for c in data[s]['channels']:
D[s][c] = ds.read_psi_to_dict(data[s]['path'],c)
Explanation: We can now read in all datafiles specified by the data dictionary above.
End of explanation
# --------------------------------
# -------- User input ------------
# --------------------------------
# Pick an integer value for bin number
anum = 30
# Specify the percentiles which will be used to calculate landmarks
percbins = [50]
theta_step = np.pi/4
Explanation: Calculate landmark bins
End of explanation
lm = ds.landmarks(percbins=percbins, rnull=np.nan)
lm.calc_bins(D['28hpf']['AT'], anum, theta_step)
print('Alpha bins')
print(lm.acbins)
print('Theta bins')
print(lm.tbins)
Explanation: Calculate landmark bins based on user input parameters and the previously specified control sample.
End of explanation
lmdf = pd.DataFrame()
# Loop through each pair of stype and channels
for s,c in tqdm.tqdm(data_pairs):
print(s,c)
# Calculate landmarks for each sample with this data pair
for k,df in tqdm.tqdm(D[s][c].items()):
lmdf = lm.calc_perc(df, k, '-'.join([s,c]), lmdf)
# Set timestamp for saving data
tstamp = datetime.datetime.now().strftime('%Y-%m-%d')
# Save completed landmarks to a csv file
lmdf.to_csv(os.path.join('.\Data',tstamp+'_landmarks.csv'))
# Save landmark bins to json file
bins = {
'acbins':list(lm.acbins),
'tbins':list(lm.tbins)
}
with open(os.path.join('.\Data', tstamp+'_landmarks_bins.json'), 'w') as outfile:
json.dump(bins, outfile)
Explanation: Calculate landmarks
End of explanation
<END_TASK> |
243 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
In this post, we'll use data from a tensile test to build a stress strain curve with Python and Matplotlib.
A tensile test is a type of mechanical test performed by engineers used to determine the mechanical properties of a material. Engineering metal alloys such as steel and aluminum alloys are tensile tested in order to determine their strength and stiffness. Tensile tests are performed in a piece of equipment called a mechanical test frame.
After a tensile test is complete, a set of data is produced by the mechanical test frame. Using the data acquired during a tensile test, a stress-strain curve can be produced.
In this post, we will create a stress-strain curve (a plot) from a set of tensile test data of a steel 1045 sample and an aluminum 6061 sample. The stress strain curve we construct will have the following features
Step1: Ensure the two .xls data files are in the same folder as the Jupyter notebook
Before we proceed, let's make sure the two .xls data files are in the same folder as our running Jupyter notebook. We'll use a Jupyter notebook magic command to print out the contents of the folder that our notebook is in. The %ls command lists the contents of the current folder.
Step2: We can see our Jupyter notebook stress_strain_curve_with_python.ipynb as well as the two .xls data files aluminum6061.xls and steel1045.xls are in our current folder.
Now that we are sure the two .xls data files are in the same folder as our notebook, we can import the data in the two .xls files using the Pandas pd.read_excel() function. The data from the two Excel files will be stored in two Pandas dataframes called steel_df and al_df.
Step3: We can use Pandas .head() method to view the first five rows of each dataframe.
Step4: We see a number of columns in each dataframe. The columns we are interested in are FORCE, EXT, and CH5. Below is a description of what these columns mean.
FORCE Force measurements from the load cell in pounds (lb), force in pounds
EXT Extension measurements from the mechanical extensometer in percent (%), strain in percent
CH5 Extension readings from the laser extensometer in percent (%), strain in percent
Create stress and strain series from the FORCE, EXT, and CH5 columns
Next we'll create a four Pandas series from the ['CH5'] and ['FORCE'] columns of our al_df and steel_df dataframes. The equations below show how to calculate stress, $\sigma$, and strain, $\epsilon$, from force $F$ and cross-sectional area $A$. Cross-sectional area $A$ is the formula for the area of a circle. For the steel and aluminum samples we tested, the diameter $d$ was $0.506 \ in$.
$$ \sigma = \frac{F}{A_0} $$
$$ F \ (kip) = F \ (lb) \times 0.001 $$
$$ A_0 = \pi (d/2)^2 $$
$$ d = 0.506 \ in $$
$$ \epsilon \ (unitless) = \epsilon \ (\%) \times 0.01 $$
Step5: Build a quick plot
Now that we have the data from the tensile test in four series, we can build a quick plot using Matplotlib's plt.plot() method. The first x,y pair we pass to plt.plot() is strain_steel,stress_steel and the second x,y pair we pass in is strain_al,stress_al. The command plt.show() shows the plot.
Step6: We see a plot with two lines. One line represents the steel sample and one line represents the aluminum sample. We can improve our plot by adding axis labels with units, a title and a legend.
Add axis labels, title and a legend
Axis labels, titles and a legend are added to our plot with three Matplotlib methods. The methods are summarized in the table below.
| Matplotlib method | description | example |
| --- | --- | --- |
| plt.xlabel() | x-axis label | plt.xlabel('strain (in/in)') |
| plt.ylabel() | y-axis label | plt.ylabel('stress (ksi)') |
| plt.title() | plot title | plt.title('Stress Strain Curve') |
| plt.legend() | legend | plt.legend(['steel','aluminum']) |
The code cell below shows these four methods in action and produces a plot.
Step7: The plot we see has two lines, axis labels, a title and a legend. Next we'll save the plot to a .png image file.
Save the plot as a .png image
Now we can save the plot as a .png image using Matplotlib's plt.savefig() method. The code cell below builds the plot and saves an image file called stress-strain_curve.png. The argument dpi=300 inside of Matplotlib's plt.savefig() method specifies the resolution of our saved image. The image stress-strain_curve.png will be saved in the same folder as our running Jupyter notebook. | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
print("NumPy version:",np.__version__)
print("Pandas version:",pd.__version__)
Explanation: In this post, we'll use data from a tensile test to build a stress strain curve with Python and Matplotlib.
A tensile test is a type of mechanical test performed by engineers used to determine the mechanical properties of a material. Engineering metal alloys such as steel and aluminum alloys are tensile tested in order to determine their strength and stiffness. Tensile tests are performed in a piece of equipment called a mechanical test frame.
After a tensile test is complete, a set of data is produced by the mechanical test frame. Using the data acquired during a tensile test, a stress-strain curve can be produced.
In this post, we will create a stress-strain curve (a plot) from a set of tensile test data of a steel 1045 sample and an aluminum 6061 sample. The stress strain curve we construct will have the following features:
A descriptive title
Axes labels with units
Two lines on the same plot. One line for steel 1045 and one line for aluminum 6061
A legend
Install Python
We are going to build our stress strain curve with Python and a Jupyter notebook. I suggest engineers and problem-solvers download and install the Anaconda distribution of Python. See this post to learn how to install Anaconda on your computer. Alternatively, you can download Python form Python.org or download Python the Microsoft Store.
Install Jupyter, NumPy, Pandas, and Matplotlib
Once Python is installed, the next thing we need to do is install a couple of Python packages. If you are using the Anaconda distribution of Python, the packages we are going to use to build the plot: Jupyter, NumPy, Pandas, and Matplotlib come pre-installed and no additional installation steps are necessary.
However, if you downloaded Python from Python.org or installed Python using the Microsoft Store, you will need to install install Jupyter, NumPy, Pandas, and Matplotlib separately. You can install Jupyter, NumPy, Pandas, and Matplotlib with pip (the Python package manager) or install theses four packages with the Anaconda Prompt.
If you are using a terminal and pip, type:
```text
pip install jupyter numpy pandas matplotlib
```
If you have Anaconda installed and use the Anaconda Prompt, type:
```text
conda install jupyter numpy pandas matplotlib
```
Open a Jupyter notebook
We will construct our stress strain curve using a Jupyter notebook. See this post to see how to open a Jupyter notebook.
Make sure to save your Jupyter notebook with a recognizable name.
Download the data and move the data into the same folder as the Jupyter notebook
Next, we need to download the two data files that we will use to build our stress-strain curve. You can download sample data using the links below:
steel1045.xls
aluminum6061.xls
After these .xls files are downloaded, both .xls files need to be moved into the same folder as our Jupyter notebook.
Import NumPy, Pandas, and Matplotlib
Now that our Jupyter notebook is open and the two .xls data files are in the same folder as the Jupyter notebook, we can start coding and build our plot.
At the top of the Jupyter notebook, import NumPy, Pandas and Matplotlib. The command %matplotlib inline is included so that our plot will display directly inside our Jupyter notebook. If you are using a .py file instead of a Jupyter notebook, make sure to comment out %matplotlib inline as this line is not valid Python code.
We will also print out the versions of our NumPy and Pandas packages using the .__version__ attribute. If the versions of NumPy and Pandas prints out, that means that NumPy and Pandas are installed and we can use these packages in our code.
End of explanation
%ls
Explanation: Ensure the two .xls data files are in the same folder as the Jupyter notebook
Before we proceed, let's make sure the two .xls data files are in the same folder as our running Jupyter notebook. We'll use a Jupyter notebook magic command to print out the contents of the folder that our notebook is in. The %ls command lists the contents of the current folder.
End of explanation
steel_df = pd.read_excel("steel1045.xls")
al_df = pd.read_excel("aluminum6061.xls")
Explanation: We can see our Jupyter notebook stress_strain_curve_with_python.ipynb as well as the two .xls data files aluminum6061.xls and steel1045.xls are in our current folder.
Now that we are sure the two .xls data files are in the same folder as our notebook, we can import the data in the two .xls files using the Pandas pd.read_excel() function. The data from the two Excel files will be stored in two Pandas dataframes called steel_df and al_df.
End of explanation
steel_df.head()
al_df.head()
Explanation: We can use Pandas .head() method to view the first five rows of each dataframe.
End of explanation
strain_steel = steel_df['CH5']*0.01
d_steel = 0.506 # test bar diameter = 0.506 inches
stress_steel = (steel_df['FORCE']*0.001)/(np.pi*((d_steel/2)**2))
strain_al = al_df['CH5']*0.01
d_al = 0.506 # test bar diameter = 0.506 inches
stress_al = (al_df['FORCE']*0.001)/(np.pi*((d_al/2)**2))
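# Quick arithmetic check of the assumed cross-sectional area for d = 0.506 in (our addition,
# illustrative only): A0 = pi * (d/2)^2, which comes out to roughly 0.2011 square inches.
print(round(np.pi * (0.506 / 2)**2, 4))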
Explanation: We see a number of columns in each dataframe. The columns we are interested in are FORCE, EXT, and CH5. Below is a description of what these columns mean.
FORCE Force measurements from the load cell in pounds (lb), force in pounds
EXT Extension measurements from the mechanical extensometer in percent (%), strain in percent
CH5 Extension readings from the laser extensometer in percent (%), strain in percent
Create stress and strain series from the FORCE, EXT, and CH5 columns
Next we'll create a four Pandas series from the ['CH5'] and ['FORCE'] columns of our al_df and steel_df dataframes. The equations below show how to calculate stress, $\sigma$, and strain, $\epsilon$, from force $F$ and cross-sectional area $A$. Cross-sectional area $A$ is the formula for the area of a circle. For the steel and aluminum samples we tested, the diameter $d$ was $0.506 \ in$.
$$ \sigma = \frac{F}{A_0} $$
$$ F \ (kip) = F \ (lb) \times 0.001 $$
$$ A_0 = \pi (d/2)^2 $$
$$ d = 0.506 \ in $$
$$ \epsilon \ (unitless) = \epsilon \ (\%) \times 0.01 $$
End of explanation
plt.plot(strain_steel,stress_steel,strain_al,stress_al)
plt.show()
Explanation: Build a quick plot
Now that we have the data from the tensile test in four series, we can build a quick plot using Matplotlib's plt.plot() method. The first x,y pair we pass to plt.plot() is strain_steel,stress_steel and the second x,y pair we pass in is strain_al,stress_al. The command plt.show() shows the plot.
End of explanation
plt.plot(strain_steel,stress_steel,strain_al,stress_al)
plt.xlabel('strain (in/in)')
plt.ylabel('stress (ksi)')
plt.title('Stress Strain Curve of Steel 1045 and Aluminum 6061 in tension')
plt.legend(['Steel 1045','Aluminum 6061'])
plt.show()
Explanation: We see a plot with two lines. One line represents the steel sample and one line represents the aluminum sample. We can improve our plot by adding axis labels with units, a title and a legend.
Add axis labels, title and a legend
Axis labels, titles and a legend are added to our plot with three Matplotlib methods. The methods are summarized in the table below.
| Matplotlib method | description | example |
| --- | --- | --- |
| plt.xlabel() | x-axis label | plt.xlabel('strain (in/in)') |
| plt.ylabel() | y-axis label | plt.ylabel('stress (ksi)') |
| plt.title() | plot title | plt.title('Stress Strain Curve') |
| plt.legend() | legend | plt.legend(['steel','aluminum']) |
The code cell below shows these four methods in action and produces a plot.
End of explanation
plt.plot(strain_steel,stress_steel,strain_al,stress_al)
plt.xlabel('strain (in/in)')
plt.ylabel('stress (ksi)')
plt.title('Stress Strain Curve of Steel 1045 and Aluminum 6061 in tension')
plt.legend(['Steel 1045','Aluminum 6061'])
plt.savefig('stress-strain_curve.png', dpi=300, bbox_inches='tight')
plt.show()
Explanation: The plot we see has two lines, axis labels, a title and a legend. Next we'll save the plot to a .png image file.
Save the plot as a .png image
Now we can save the plot as a .png image using Matplotlib's plt.savefig() method. The code cell below builds the plot and saves an image file called stress-strain_curve.png. The argument dpi=300 inside of Matplotlib's plt.savefig() method specifies the resolution of our saved image. The image stress-strain_curve.png will be saved in the same folder as our running Jupyter notebook.
End of explanation
<END_TASK> |
244 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
An introduction to solving biological problems with Python
Session 1.3
Step1: You can of course use variables in tuples and other data structures
Step2: Tuples can be packed and unpacked with a convenient syntax. The number of variables used to unpack the tuple must match the number of elements in the tuple.
Step3: Lists
Can contain any number of items
Can contain different types of items
Can be altered once created (they are mutable)
Items have a particular order
Lists are created with square brackets around their items
Step4: Lists and tuples can contain other list and tuples, or any other type of collection
Step5: You can convert between tuples and lists with the <tt>tuple</tt> and <tt>list</tt> functions. Note that these create a new collection with the same items, and leave the original unaffected.
Step6: Manipulating tuples and lists
Once your data is in a list or tuple, python supports a number of ways you can access elements of the list and manipulate the list in useful ways, such as sorting the data.
Tuples and lists can generally be used in very similar ways.
Index access
You can access individual elements of the collection using their index, note that the first element is at index 0. Negative indices count backwards from the end.
Step7: Slices
You can also access a range of items, known as slices, from inside lists and tuples using a colon
Step8: in operator
You can check if a value is in a tuple or list with the <tt>in</tt> operator, and you can negate this with <tt>not</tt>
Step9: len() and count() functions
You can get the length of a list or tuple with the in-built <tt>len()</tt> function, and you can count the number of particular elements contained in a list with the <tt>.count()</tt> function.
Step10: Modifying lists
You can alter lists in place, but not tuples
Step11: Tuples cannot be altered once they have been created, if you try to do so, you'll get an error.
Step12: You can add elements to the end of a list with <tt>append()</tt>
Step13: or insert values at a certain position with <tt>insert()</tt>, by supplying the desired position as well as the new value
Step14: You can remove values with <tt>remove()</tt>
Step15: and delete values by index with <tt>del</tt>
Step16: It's often useful to be able to combine arrays together, which can be done with <tt>extend()</tt> (as <tt>append</tt> would add the whole list as a single element in the list)
Step17: The plus symbol <tt>+</tt> is shorthand for the extend operation when applied to lists
Step18: Slice syntax can be used on the left hand side of an assignment operation to assign subregions of a list
Step19: You can change the order of elements in a list
Step20: Note that both of these change the list, if you want a sorted copy of the list while leaving the original untouched, use <tt>sorted()</tt>
Step21: Getting help from the official Python documentation
The most useful information is online on https
Step22: Exercise 1.3.1
Create a list of DNA codons for the protein sequence CLYSY based on the codon variables you defined previously.
Print the DNA sequence of the protein to the screen.
Print the DNA codon of the last amino acid in the protein sequence.
Create two more variables containing the DNA sequence of a stop codon and a start codon, and replace the first element of the DNA sequence with the start codon and append the stop codon to the end of the DNA sequence. Print out the resulting DNA sequence.
String manipulations
Strings are a lot like tuples of characters, and individual characters and substrings can be accessed and manipulated using similar operations we introduced above.
Step23: Just as with tuples, trying to assign a value to an element of a string results in an error
Step24: Python provides a number of useful functions that let you manipulate strings
The <tt>in</tt> operator lets you check if a substring is contained within a larger string, but it does not tell you where the substring is located. This is often useful to know and python provides the <tt>.find()</tt> method which returns the index of the first occurrence of the search string, and the <tt>.rfind()</tt> method to start searching from the end of the string.
If the search string is not found in the string both these methods return -1.
Step25: When we are reading text from files (which we will see later on), often there is unwanted whitespace at the start or end of the string. We can remove leading whitespace with the <tt>.lstrip()</tt> method, trailing whitespace with <tt>.rstrip()</tt>, and whitespace from both ends with <tt>.strip()</tt>.
All of these methods return a copy of the changed string, so if you want to replace the original you can assign the result of the method call to the original variable.
Step26: You can split a string into a list of substrings using the <tt>.split()</tt> method, supplying the delimiter as an argument to the method. If you don't supply any delimiter the method will split the string on whitespace by default (which is very often what you want!)
To split a string into its component characters you can simply cast the string to a list
Step27: <tt>.split()</tt> is the counterpart to the <tt>.join()</tt> method that lets you join the elements of a list into a string only if all the elements are of type String
Step28: We also saw earlier that the <tt>+</tt> operator lets you concatenate strings together into a larger string.
Note that this operator only works on variables of the same type. If you want to concatenate a string with an integer (or some other type), first you have to cast the integer to a string with the <tt>str()</tt> function.
Step29: To get more information about these two methods split() and join() we could find it online in the Python documentation starting from www.python.org or get help using the help() builtin function. | <ASSISTANT_TASK:>
Python Code:
a = (123, 54, 92) # tuple of 3 integers
b = () # empty tuple
c = ("Ala",) # tuple of a single string (note the trailing ",")
d = (2, 3, False, "Arg", None) # a tuple of mixed types
print(a)
print(b)
print(c)
print(d)
Explanation: An introduction to solving biological problems with Python
Session 1.3: Collections Lists and Strings
Tuples, Lists and Manipulating tuples and lists | Exercise 1.3.1
String manipulations | Exercise 1.3.2
As well as the basic data types we introduced above, very commonly you will want to store and operate on collections of values, and python has several data structures that you can use to do this. The general idea is that you can place several items into a single collection and then refer to that collection as a whole. Which one you will use will depend on what problem you are trying to solve.
Tuples
Can contain any number of items
Can contain different types of items
Cannot be altered once created (they are immutable)
Items have a defined order
A tuple is created by using round brackets around the items it contains, with commas separating the individual elements.
End of explanation
x = 1.2
y = -0.3
z = 0.9
t = (x, y, z)
print(t)
Explanation: You can of course use variables in tuples and other data structures
End of explanation
t = 2, 3, 4 # tuple packing
print('t is', t)
x, y, z = t # tuple unpacking
print('x is', x)
print('y is', y)
print('z is', z)
Explanation: Tuples can be packed and unpacked with a convenient syntax. The number of variables used to unpack the tuple must match the number of elements in the tuple.
End of explanation
a = [1, 3, 9]
b = ["ATG"]
c = []
print(a)
print(b)
print(c)
Explanation: Lists
Can contain any number of items
Can contain different types of items
Can be altered once created (they are mutable)
Items have a particular order
Lists are created with square brackets around their items:
End of explanation
matrix = [[1, 0], [0, 2]]
print(matrix)
Explanation: Lists and tuples can contain other list and tuples, or any other type of collection:
End of explanation
a = (1, 4, 9, 16) # A tuple of numbers
b = ['G','C','A','T'] # A list of characters
print(a)
print(b)
l = list(a) # Make a list based on a tuple
print(l)
t = tuple(b) # Make a tuple based on a list
print(t)
Explanation: You can convert between tuples and lists with the <tt>tuple</tt> and <tt>list</tt> functions. Note that these create a new collection with the same items, and leave the original unaffected.
End of explanation
t = (123, 54, 92, 87, 33)
x = [123, 54, 92, 87, 33]
print('t is', t)
print('t[0] is', t[0])
print('t[2] is', t[2])
print('x is', x)
print('x[-1] is', x[-1])
Explanation: Manipulating tuples and lists
Once your data is in a list or tuple, python supports a number of ways you can access elements of the list and manipulate the list in useful ways, such as sorting the data.
Tuples and lists can generally be used in very similar ways.
Index access
You can access individual elements of the collection using their index, note that the first element is at index 0. Negative indices count backwards from the end.
End of explanation
t = (123, 54, 92, 87, 33)
x = [123, 54, 92, 87, 33]
print('t[1:3] is', t[1:3])
print('x[2:] is', x[2:])
print('x[:-1] is', x[:-1])
Explanation: Slices
You can also access a range of items, known as slices, from inside lists and tuples using a colon : to indicate the beginning and end of the slice inside the square brackets. Note that the slice notation [a:b] includes positions from a up to but not including b.
End of explanation
t = (123, 54, 92, 87, 33)
x = [123, 54, 92, 87, 33]
print('123 in', x, 123 in x)
print('234 in', t, 234 in t)
print('999 not in', x, 999 not in x)
Explanation: in operator
You can check if a value is in a tuple or list with the <tt>in</tt> operator, and you can negate this with <tt>not</tt>
End of explanation
t = (123, 54, 92, 87, 33)
x = [123, 54, 92, 87, 33]
print("length of t is", len(t))
print("number of 33s in x is", x.count(33))
Explanation: len() and count() functions
You can get the length of a list or tuple with the in-built <tt>len()</tt> function, and you can count the number of particular elements contained in a list with the <tt>.count()</tt> function.
End of explanation
x = [123, 54, 92, 87, 33]
print(x)
x[2] = 33
print(x)
Explanation: Modifying lists
You can alter lists in place, but not tuples
End of explanation
t = (123, 54, 92, 87, 33)
print(t)
t[1] = 4
Explanation: Tuples cannot be altered once they have been created, if you try to do so, you'll get an error.
End of explanation
x = [123, 54, 92, 87, 33]
x.append(101)
print(x)
Explanation: You can add elements to the end of a list with <tt>append()</tt>
End of explanation
x = [123, 54, 92, 87, 33]
x.insert(3, 1111)
print(x)
Explanation: or insert values at a certain position with <tt>insert()</tt>, by supplying the desired position as well as the new value
End of explanation
x = [123, 54, 92, 87, 33]
x.remove(123)
print(x)
Explanation: You can remove values with <tt>remove()</tt>
End of explanation
x = [123, 54, 92, 87, 33]
print(x)
del x[0]
print(x)
Explanation: and delete values by index with <tt>del</tt>
End of explanation
a = [1,2,3]
b = [4,5,6]
a.extend(b)
print(a)
a.append(b)
print(a)
Explanation: It's often useful to be able to combine arrays together, which can be done with <tt>extend()</tt> (as <tt>append</tt> would add the whole list as a single element in the list)
End of explanation
a = [1, 2, 3]
b = [4, 5, 6]
a = a + b
print(a)
Explanation: The plus symbol <tt>+</tt> concatenates two lists, much like <tt>extend()</tt>, except that it builds and returns a new list instead of modifying one in place:
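A quick illustrative check (not part of the original lesson) makes the difference visible: <tt>+</tt> leaves both operands untouched and returns a new list, while <tt>extend()</tt> changes the first list in place.
a = [1, 2, 3]
b = [4, 5, 6]
c = a + b      # new list; a and b are unchanged
print(a)
print(c)
a.extend(b)    # modifies a in place
print(a)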
End of explanation
a = [1, 2, 3, 4, 5, 6]
a[1:3] = [9, 9, 9, 9]
print(a)
Explanation: Slice syntax can be used on the left hand side of an assignment operation to assign subregions of a list
End of explanation
a = [1, 3, 5, 4, 2]
a.reverse()
print(a)
a.sort()
print(a)
Explanation: You can change the order of elements in a list
End of explanation
a = [2, 5, 7, 1]
b = sorted(a)
print(a)
print(b)
Explanation: Note that both of these change the list, if you want a sorted copy of the list while leaving the original untouched, use <tt>sorted()</tt>
End of explanation
help(len)
help(list)
help(list.insert)
help(list.count)
Explanation: Getting help from the official Python documentation
The most useful information is online on https://www.python.org/ website and should be used as a reference guide.
Python 3.5.2 documentation is the starting page with links to tutorials and libraries' documentation for Python 3
The Python Tutorial
The Python Standard Library Reference is the documentation of all libraries included within Python as well as built-in functions and data types like:
Text Sequence Type — str
Numeric Types — int, float
Sequence Types — list, tuple
Set Types — set
Mapping Types — dict
Getting help directly from within Python using help()
End of explanation
text = "ATGTCATTTGT"
print(text[0])
print(text[-2])
print(text[0:6])
print("ATG" in text)
print("TGA" in text)
print(len(text))
Explanation: Exercise 1.3.1
Create a list of DNA codons for the protein sequence CLYSY based on the codon variables you defined previously.
Print the DNA sequence of the protein to the screen.
Print the DNA codon of the last amino acid in the protein sequence.
Create two more variables containing the DNA sequence of a stop codon and a start codon, and replace the first element of the DNA sequence with the start codon and append the stop codon to the end of the DNA sequence. Print out the resulting DNA sequence.
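One possible sketch for this exercise is shown below. The codon variable names (cys, leu, tyr, ser) are hypothetical stand-ins for whatever codon variables were defined in the earlier session, and ATG/TAA are used as the start and stop codons:
cys = 'TGT'
leu = 'TTA'
tyr = 'TAT'
ser = 'TCT'
codons = [cys, leu, tyr, ser, tyr]   # protein sequence CLYSY
print("".join(codons))               # DNA sequence of the protein
print(codons[-1])                    # codon of the last amino acid
start_codon = 'ATG'
stop_codon = 'TAA'
codons[0] = start_codon
codons.append(stop_codon)
print("".join(codons))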
String manipulations
Strings are a lot like tuples of characters, and individual characters and substrings can be accessed and manipulated using similar operations we introduced above.
End of explanation
text = "ATGTCATTTGT"
text[0:2] = "CCC"
Explanation: Just as with tuples, trying to assign a value to an element of a string results in an error
End of explanation
dna = "ATGTCACCGTTT"
index = dna.find("TCA")
print("TCA is at position:", index)
index = dna.rfind('C')
print("The last Cytosine is at position:", index)
print("Position of a stop codon:", dna.find("TGA"))
Explanation: Python provides a number of useful functions that let you manipulate strings
The <tt>in</tt> operator lets you check if a substring is contained within a larger string, but it does not tell you where the substring is located. This is often useful to know and python provides the <tt>.find()</tt> method which returns the index of the first occurrence of the search string, and the <tt>.rfind()</tt> method to start searching from the end of the string.
If the search string is not found in the string both these methods return -1.
End of explanation
s = " Chromosome Start End "
print(len(s), s)
s = s.lstrip()
print(len(s), s)
s = s.rstrip()
print(len(s), s)
s = " Chromosome Start End "
s = s.strip()
print(len(s), s)
Explanation: When we are reading text from files (which we will see later on), often there is unwanted whitespace at the start or end of the string. We can remove leading whitespace with the <tt>.lstrip()</tt> method, trailing whitespace with <tt>.rstrip()</tt>, and whitespace from both ends with <tt>.strip()</tt>.
All of these methods return a copy of the changed string, so if you want to replace the original you can assign the result of the method call to the original variable.
End of explanation
seq = "ATG TCA CCG GGC"
codons = seq.split(" ")
print(codons)
bases = list(seq) # a string converted into a list of individual characters
print(bases)
Explanation: You can split a string into a list of substrings using the <tt>.split()</tt> method, supplying the delimiter as an argument to the method. If you don't supply any delimiter the method will split the string on whitespace by default (which is very often what you want!)
To split a string into its component characters you can simply cast the string to a list
End of explanation
seq = "ATG TCA CCG GGC"
codons = seq.split(" ")
print(codons)
print("|".join(codons))
Explanation: <tt>.join()</tt> is the counterpart to the <tt>.split()</tt> method: it joins the elements of a list into a single string, provided all the elements are strings:
End of explanation
s = "chr"
chrom_number = 2
print(s + str(chrom_number))
Explanation: We also saw earlier that the <tt>+</tt> operator lets you concatenate strings together into a larger string.
Note that this operator only works on variables of the same type. If you want to concatenate a string with an integer (or some other type), first you have to cast the integer to a string with the <tt>str()</tt> function.
End of explanation
help(str.split)
help(str.join)
Explanation: To get more information about these two methods split() and join() we could find it online in the Python documentation starting from www.python.org or get help using the help() builtin function.
End of explanation
<END_TASK> |
245 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Chemical-Disease Relation (CDR) Tutorial
In this example, we'll be writing an application to extract mentions of chemical-induced-disease relationships from Pubmed abstracts, as per the BioCreative CDR Challenge. This tutorial will show off some of the more advanced features of Snorkel, so we'll assume you've followed the Intro tutorial.
Let's start by reloading from the last notebook.
Step1: Part III
Step2: Text pattern approaches
Now we'll use some LF helpers to create LFs based on indicative text patterns. We came up with these rules by using the viewer to examine training candidates and noting frequent patterns.
Step3: Composite LFs
The following LFs take some of the strongest distant supervision and text pattern LFs, and combine them to form more specific LFs. These LFs introduce some obvious dependencies within the LF set, which we will model later.
Step4: Rules based on context hierarchy
These last two rules will make use of the context hierarchy. The first checks if there is a chemical mention much closer to the candidate's disease mention than the candidate's chemical mention. The second does the analog for diseases.
Step5: Running the LFs on the training set
Step6: Part IV
Step7: Now we'll train the generative model, using the deps argument to account for the learned dependencies. We'll also model LF propensity here, unlike the intro tutorial. In addition to learning the accuracies of the LFs, this also learns their likelihood of labeling an example.
Step8: Checking performance against development set labels
Finally, we'll run the labeler on the development set, load in some external labels, then evaluate the LF performance. The external labels are applied via a small script for convenience. It maps the document-level relation annotations found in the CDR file to mention-level labels. Note that these will not be perfect, although they are pretty good. If we wanted to keep iterating, we could use snorkel.lf_helpers.test_LF against the dev set, or look at some false positive and false negative candidates. | <ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
from snorkel import SnorkelSession
session = SnorkelSession()
from snorkel.models import candidate_subclass
ChemicalDisease = candidate_subclass('ChemicalDisease', ['chemical', 'disease'])
train_cands = session.query(ChemicalDisease).filter(ChemicalDisease.split == 0).all()
dev_cands = session.query(ChemicalDisease).filter(ChemicalDisease.split == 1).all()
Explanation: Chemical-Disease Relation (CDR) Tutorial
In this example, we'll be writing an application to extract mentions of chemical-induced-disease relationships from Pubmed abstracts, as per the BioCreative CDR Challenge. This tutorial will show off some of the more advanced features of Snorkel, so we'll assume you've followed the Intro tutorial.
Let's start by reloading from the last notebook.
End of explanation
import bz2
from six.moves.cPickle import load
with bz2.BZ2File('data/ctd.pkl.bz2', 'rb') as ctd_f:
ctd_unspecified, ctd_therapy, ctd_marker = load(ctd_f)
def cand_in_ctd_unspecified(c):
return 1 if c.get_cids() in ctd_unspecified else 0
def cand_in_ctd_therapy(c):
return 1 if c.get_cids() in ctd_therapy else 0
def cand_in_ctd_marker(c):
return 1 if c.get_cids() in ctd_marker else 0
def LF_in_ctd_unspecified(c):
return -1 * cand_in_ctd_unspecified(c)
def LF_in_ctd_therapy(c):
return -1 * cand_in_ctd_therapy(c)
def LF_in_ctd_marker(c):
return cand_in_ctd_marker(c)
Explanation: Part III: Writing LFs
This tutorial features some more advanced LFs than the intro tutorial, with more focus on distant supervision and dependencies between LFs.
Distant supervision approaches
We'll use the Comparative Toxicogenomics Database (CTD) for distant supervision. The CTD lists chemical-condition entity pairs under three categories: therapy, marker, and unspecified. Therapy means the chemical treats the condition, marker means the chemical is typically present with the condition, and unspecified is...unspecified. We can write LFs based on these categories.
End of explanation
import re
from snorkel.lf_helpers import (
get_tagged_text,
rule_regex_search_tagged_text,
rule_regex_search_btw_AB,
rule_regex_search_btw_BA,
rule_regex_search_before_A,
rule_regex_search_before_B,
)
# List to parenthetical
def ltp(x):
return '(' + '|'.join(x) + ')'
def LF_induce(c):
return 1 if re.search(r'{{A}}.{0,20}induc.{0,20}{{B}}', get_tagged_text(c), flags=re.I) else 0
causal_past = ['induced', 'caused', 'due']
def LF_d_induced_by_c(c):
return rule_regex_search_btw_BA(c, '.{0,50}' + ltp(causal_past) + '.{0,9}(by|to).{0,50}', 1)
def LF_d_induced_by_c_tight(c):
return rule_regex_search_btw_BA(c, '.{0,50}' + ltp(causal_past) + ' (by|to) ', 1)
def LF_induce_name(c):
return 1 if 'induc' in c.chemical.get_span().lower() else 0
causal = ['cause[sd]?', 'induce[sd]?', 'associated with']
def LF_c_cause_d(c):
return 1 if (
re.search(r'{{A}}.{0,50} ' + ltp(causal) + '.{0,50}{{B}}', get_tagged_text(c), re.I)
and not re.search('{{A}}.{0,50}(not|no).{0,20}' + ltp(causal) + '.{0,50}{{B}}', get_tagged_text(c), re.I)
) else 0
treat = ['treat', 'effective', 'prevent', 'resistant', 'slow', 'promise', 'therap']
def LF_d_treat_c(c):
return rule_regex_search_btw_BA(c, '.{0,50}' + ltp(treat) + '.{0,50}', -1)
def LF_c_treat_d(c):
return rule_regex_search_btw_AB(c, '.{0,50}' + ltp(treat) + '.{0,50}', -1)
def LF_treat_d(c):
return rule_regex_search_before_B(c, ltp(treat) + '.{0,50}', -1)
def LF_c_treat_d_wide(c):
return rule_regex_search_btw_AB(c, '.{0,200}' + ltp(treat) + '.{0,200}', -1)
def LF_c_d(c):
return 1 if ('{{A}} {{B}}' in get_tagged_text(c)) else 0
def LF_c_induced_d(c):
return 1 if (
('{{A}} {{B}}' in get_tagged_text(c)) and
(('-induc' in c[0].get_span().lower()) or ('-assoc' in c[0].get_span().lower()))
) else 0
def LF_improve_before_disease(c):
return rule_regex_search_before_B(c, 'improv.*', -1)
pat_terms = ['in a patient with ', 'in patients with']
def LF_in_patient_with(c):
return -1 if re.search(ltp(pat_terms) + '{{B}}', get_tagged_text(c), flags=re.I) else 0
uncertain = ['combin', 'possible', 'unlikely']
def LF_uncertain(c):
return rule_regex_search_before_A(c, ltp(uncertain) + '.*', -1)
def LF_induced_other(c):
return rule_regex_search_tagged_text(c, '{{A}}.{20,1000}-induced {{B}}', -1)
def LF_far_c_d(c):
return rule_regex_search_btw_AB(c, '.{100,5000}', -1)
def LF_far_d_c(c):
return rule_regex_search_btw_BA(c, '.{100,5000}', -1)
def LF_risk_d(c):
return rule_regex_search_before_B(c, 'risk of ', 1)
def LF_develop_d_following_c(c):
return 1 if re.search(r'develop.{0,25}{{B}}.{0,25}following.{0,25}{{A}}', get_tagged_text(c), flags=re.I) else 0
procedure, following = ['inject', 'administrat'], ['following']
def LF_d_following_c(c):
return 1 if re.search('{{B}}.{0,50}' + ltp(following) + '.{0,20}{{A}}.{0,50}' + ltp(procedure), get_tagged_text(c), flags=re.I) else 0
def LF_measure(c):
return -1 if re.search('measur.{0,75}{{A}}', get_tagged_text(c), flags=re.I) else 0
def LF_level(c):
return -1 if re.search('{{A}}.{0,25} level', get_tagged_text(c), flags=re.I) else 0
def LF_neg_d(c):
return -1 if re.search('(none|not|no) .{0,25}{{B}}', get_tagged_text(c), flags=re.I) else 0
WEAK_PHRASES = ['none', 'although', 'was carried out', 'was conducted',
'seems', 'suggests', 'risk', 'implicated',
'the aim', 'to (investigate|assess|study)']
WEAK_RGX = r'|'.join(WEAK_PHRASES)
def LF_weak_assertions(c):
return -1 if re.search(WEAK_RGX, get_tagged_text(c), flags=re.I) else 0
Explanation: Text pattern approaches
Now we'll use some LF helpers to create LFs based on indicative text patterns. We came up with these rules by using the viewer to examine training candidates and noting frequent patterns.
End of explanation
def LF_ctd_marker_c_d(c):
return LF_c_d(c) * cand_in_ctd_marker(c)
def LF_ctd_marker_induce(c):
return (LF_c_induced_d(c) or LF_d_induced_by_c_tight(c)) * cand_in_ctd_marker(c)
def LF_ctd_therapy_treat(c):
return LF_c_treat_d_wide(c) * cand_in_ctd_therapy(c)
def LF_ctd_unspecified_treat(c):
return LF_c_treat_d_wide(c) * cand_in_ctd_unspecified(c)
def LF_ctd_unspecified_induce(c):
return (LF_c_induced_d(c) or LF_d_induced_by_c_tight(c)) * cand_in_ctd_unspecified(c)
Explanation: Composite LFs
The following LFs take some of the strongest distant supervision and text pattern LFs, and combine them to form more specific LFs. These LFs introduce some obvious dependencies within the LF set, which we will model later.
End of explanation
def LF_closer_chem(c):
# Get distance between chemical and disease
chem_start, chem_end = c.chemical.get_word_start(), c.chemical.get_word_end()
dis_start, dis_end = c.disease.get_word_start(), c.disease.get_word_end()
if dis_start < chem_start:
dist = chem_start - dis_end
else:
dist = dis_start - chem_end
# Try to find chemical closer than @dist/2 in either direction
sent = c.get_parent()
closest_other_chem = float('inf')
for i in range(dis_end, min(len(sent.words), dis_end + dist // 2)):
et, cid = sent.entity_types[i], sent.entity_cids[i]
if et == 'Chemical' and cid != sent.entity_cids[chem_start]:
return -1
for i in range(max(0, dis_start - dist // 2), dis_start):
et, cid = sent.entity_types[i], sent.entity_cids[i]
if et == 'Chemical' and cid != sent.entity_cids[chem_start]:
return -1
return 0
def LF_closer_dis(c):
# Get distance between chemical and disease
chem_start, chem_end = c.chemical.get_word_start(), c.chemical.get_word_end()
dis_start, dis_end = c.disease.get_word_start(), c.disease.get_word_end()
if dis_start < chem_start:
dist = chem_start - dis_end
else:
dist = dis_start - chem_end
# Try to find chemical disease than @dist/8 in either direction
sent = c.get_parent()
for i in range(chem_end, min(len(sent.words), chem_end + dist // 8)):
et, cid = sent.entity_types[i], sent.entity_cids[i]
if et == 'Disease' and cid != sent.entity_cids[dis_start]:
return -1
for i in range(max(0, chem_start - dist // 8), chem_start):
et, cid = sent.entity_types[i], sent.entity_cids[i]
if et == 'Disease' and cid != sent.entity_cids[dis_start]:
return -1
return 0
Explanation: Rules based on context hierarchy
These last two rules will make use of the context hierarchy. The first checks if there is a chemical mention much closer to the candidate's disease mention than the candidate's chemical mention. The second does the analog for diseases.
End of explanation
LFs = [
LF_c_cause_d,
LF_c_d,
LF_c_induced_d,
LF_c_treat_d,
LF_c_treat_d_wide,
LF_closer_chem,
LF_closer_dis,
LF_ctd_marker_c_d,
LF_ctd_marker_induce,
LF_ctd_therapy_treat,
LF_ctd_unspecified_treat,
LF_ctd_unspecified_induce,
LF_d_following_c,
LF_d_induced_by_c,
LF_d_induced_by_c_tight,
LF_d_treat_c,
LF_develop_d_following_c,
LF_far_c_d,
LF_far_d_c,
LF_improve_before_disease,
LF_in_ctd_therapy,
LF_in_ctd_marker,
LF_in_patient_with,
LF_induce,
LF_induce_name,
LF_induced_other,
LF_level,
LF_measure,
LF_neg_d,
LF_risk_d,
LF_treat_d,
LF_uncertain,
LF_weak_assertions,
]
from snorkel.annotations import LabelAnnotator
labeler = LabelAnnotator(lfs=LFs)
%time L_train = labeler.apply(split=0)
L_train
L_train.lf_stats(session)
Explanation: Running the LFs on the training set
End of explanation
from snorkel.learning.structure import DependencySelector
ds = DependencySelector()
deps = ds.select(L_train, threshold=0.1)
len(deps)
Explanation: Part IV: Training the generative model
As mentioned above, we want to include the dependencies between our LFs when training the generative model. Snorkel makes it easy to do this! DependencySelector runs a fast structure learning algorithm over the matrix of LF outputs to identify a set of likely dependencies. We can see that these match up with our prior knowledge. For example, it identified a "reinforcing" dependency between LF_c_induced_d and LF_ctd_marker_induce. Recall that we constructed the latter using the former.
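If you are curious which dependencies were selected, deps can be inspected directly. Assuming it is a plain Python collection of LF index tuples (which the len(deps) call above suggests), a quick look might be (illustrative only):
for dep in sorted(deps)[:5]:
    print(dep)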
End of explanation
from snorkel.learning import GenerativeModel
gen_model = GenerativeModel(lf_propensity=True)
gen_model.train(
L_train, deps=deps, decay=0.95, step_size=0.1/L_train.shape[0], reg_param=0.0
)
train_marginals = gen_model.marginals(L_train)
import matplotlib.pyplot as plt
plt.hist(train_marginals, bins=20)
plt.show()
gen_model.learned_lf_stats()
from snorkel.annotations import save_marginals
save_marginals(session, L_train, train_marginals)
Explanation: Now we'll train the generative model, using the deps argument to account for the learned dependencies. We'll also model LF propensity here, unlike the intro tutorial. In addition to learning the accuracies of the LFs, this also learns their likelihood of labeling an example.
End of explanation
from load_external_annotations import load_external_labels
load_external_labels(session, ChemicalDisease, split=1, annotator='gold')
from snorkel.annotations import load_gold_labels
L_gold_dev = load_gold_labels(session, annotator_name='gold', split=1)
L_gold_dev
L_dev = labeler.apply_existing(split=1)
_ = gen_model.error_analysis(session, L_dev, L_gold_dev)
L_dev.lf_stats(session, L_gold_dev, gen_model.learned_lf_stats()['Accuracy'])
Explanation: Checking performance against development set labels
Finally, we'll run the labeler on the development set, load in some external labels, then evaluate the LF performance. The external labels are applied via a small script for convenience. It maps the document-level relation annotations found in the CDR file to mention-level labels. Note that these will not be perfect, although they are pretty good. If we wanted to keep iterating, we could use snorkel.lf_helpers.test_LF against the dev set, or look at some false positive and false negative candidates.
End of explanation
<END_TASK> |
246 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
title
Step1: 对于变量x来说,不需要知道它是字符串还是列表,就可以调用它的count方法—不用管它是什么类型(只要提供一个字符作为参数即可)。
任何不知道对象到底是什么类型,但是又要对对象“做点什么”的时候,都会用到多态。这不仅限于方法,很多内建运算符和函数都有多态的性质:
Step2: 1.2 封装
封装是可以不关心对象是如何构建的而直接进行使用。如何将变量“封装”在对象内?可以将其作为属性(attribute)存储。正如方法一样,属性是对象内部的变量。
对象有它自己的状态(state),对象的状态由它的属性(比如名称)来描述。对象的方法可以改变它的属性。
1.3 继承
如果已经有了一个类,而又想建立一个非常类似的呢?新的类可能只是添加几个方法。这时就会用到继承。
2 类和类型
类将它视为种类或类型的同义词,代表对象的集合。类的主要任务是定义它的实例会用到的方法。所有的对象都属于某一个类,称为类的实例。当一个对象所属的类是另外一个对象所属类的子集时,前者就被称为后者的子类(subclass),所以“百灵鸟类”是“鸟类”的子类。相反,“鸟类”是“百灵鸟类”的父类(superclass)。
Python中,习惯上使用单数名词,并且首字母大写,比如Bird和Lark,来描述对象的类。
2.1 创建自己的类
先来看一个简单的类:
Step3: 注意 尽管可能使用的是新版的Python,但一些功能不会在旧式类上起作用。为了确保类是新型的,需要在模块或者脚本开始的地方放置赋值语句__metaclass__=type,或者继承新式类(比如object类,也就是子类化内建类object)。新式类必然包含了更多的功能,也是之后推荐的写法,从写法上区分的话,如果当前类或者父类继承了object类,那么该类便是新式类。
在调用foo的setName和greet函数时,foo自动将自己作为第一个参数传人函数中—因此形象地命名为self。显然这就是self的用处和存在的必要性。没有它,成员方法就没法访问它们要对其属性进行操作的对象本身了。
属性是可以在外部访问的:
Step4: 2.2 属性和方法
self参数事实上正是方法和函数的区别。方法将它们的第一个参数绑定到所属的实例上,因此这个参数可以不必提供。
属性只是作为对象的一部分变量,方法则是存储在对象内的函数。
2.2.1 私有化
默认情况下,程序可以从外部访问一个对象的属性。但是有时候需要使用私有属性,这是外部对象无法访问的,但是通过getName和setName等访问器(accessor)能够访问这些私有属性。
为了让方法或者属性变为私有,只要在它的名字前面加上双下划线即可:
Step5: 现在__inaccessible从外界是无法访问的,而在内部还能使用(比如从accessible)访问:
Step6: 类的内部定义中,所有以双下划线开始的名字(方法或属性)都被“翻译”成前面加上单下划线和类名的形式。在了解了这些幕后的事情后,实际上还是能在类外访问这些私有方法,尽管不应该这么做:
Step7: 简而言之,确保其他人不会访问对象的方法和属性是不可能的,但是通过这类“名称变化术”就是他们不应该访问这些方法和属性的强有力信号。
2.2.2 访问器方法
访问器是一个简单的方法,它能使用getHeight、setHeight这样的名字来得到或者重绑定一些属性:
Step8: 在上面的例子中,getSize和setSize方法是一个名为size的假想属性的访问器方法,size是由width和height构成的元组。如果有一天要改变类的实现,将size变成一个真正的属性,这样width和height就可以动态算出,那么就要把它们放到一个访问器方法中去。但如果有很多简单的属性,那么就不现实了。如果那么做就得写很多访问器方法。那么怎么解决呢?这就需要用到property函数。
property函数
property函数的使用很简单。延续上面的Rectangle类,只要增加一行代码(子类化object,或者使用__metaclass__=type)
Step9: 在新版的Rectangle中,property函数创建了一个属性size,其中访问器方法被当做参数(先是取值,然后是赋值)。
Step10: 很显然,size属性仍然取决于getSize和setSize中的计算。但它看起来就像普通的属性一样。实际上,property函数可以用fget,fset,fdel和doc-这四个参数来调用。如果没有参数,产生的属性既不可读,也不可写。如果只使用一个参数调用(一个取值方法),产生的属性是只读的。第三个参数(可选)是一个用于删除属性的方法。第四个参数(可选)是一个文档字符串。
2.2.3 特殊方法
在Python中,有的名称(方法名)在前面和后面都加上两个下划线,比如__future__,这样拼写表示名字有特殊含义,所以绝不要在自己的程序中使用这种名字。由这些名字组成的集合所包含的方法称为特殊方法。如果对象实现了这些方法的某一个,那么这个方法会在特殊的情况下被Python调用。而几乎没有直接调用它们的必要。
(1) 构造方法
首先要讨论的第一个特殊方法是构造方法。构造方法是一个很奇怪的名字,它代表着类似于以前例子中使用过的那种名为init的初始化方法。但构造方法和其他普通方法不同的地方在于,当一个对象被创建后,会立即调用构造方法。
Step11: (2) 重写一般方法和特殊的构造方法
如果一个方法在B类的一个实例中被调用(或一个属性被访问),但在B类中没有找到该方法,那么就会去它的父类A里面找:
Step12: 在子类中增加功能最基本的方式就是增加方法。但是也可以重写一些父类的方法来自定义继承的行为。B类也能重写这个方法。
Step13: 重写是继承机制中的一个重要内容,但是对于构造方法尤其重要。构造方法用来初始化新创建对象的状态,大多数子类不仅要拥有自己的初始化代码,还要拥有父类的初始化代码。虽然重写的机制对于所有方法来说都是一样的,但是当重写构造方法时,更可能遇到特别的问题:如果一个类的构造方法被重写,那么就需要调用父类的构造方法,否则对象可能不会被正确的初始化。如下:
Step14: 可以看到,鸟吃过了以后,就不会再饥饿。现在考虑子类SongBird,它添加了唱歌的行为。
Step15: 因为SongBird是Bird的一个子类,它继承了eat方法,但如果调用eat方法,就会产生一个问题:
Step16: 异常很清楚地说明了错误:SongBird没有hungry属性。原因是:在SongBird中,构造方法被重写,但新的构造方法没有任何关于初始化hungry属性的代码。为了达到预期的效果,SongBird的构造方法必须调用其父类Bird的构造方法来确保进行基本的初始化。有两种方法能达到这个目的,如下:
调用未绑定的父类构造方法
Step17: 通过将当前的实例作为self参数提供给未绑定方法,SongBird就能够使用其父类构造方法的所有实现,也就是说属性hungry能被设置。
使用super函数
super函数只能在新式类中使用。当前的类和对象可以作为super函数的参数使用,调用函数返回的是父类的方法,而不是当前类的方法。如下:
Step22: (3) 成员访问方法
接下来介绍一些处理对象访问的方法,这些方法允许你创建自己的序列或者映射。
基本的序列和映射规则很简单,但如果要实现它们全部功能就需要实现很多特殊函数。下面将会说到:
基本的序列和映射规则
序列和映射是对象的集合。为了实现它们基本的行为(规则),如果对象是不可变的(如字符串和元组)。那么就需要使用两个特殊方法,如果是可变的(列表和字典),则需要使用4个。
a. __len__(self)
Step23: 注意,没有实现__del__方法的原因是我希望删除元素是非法的:
Step24: 这个类没有__len__方法,因为它是无限长的。
索引检查是通过用户自定义的checkIndex函数实现的。如果使用了一个非法类型的索引,就会引发TypeError异常,如果索引的类型是正确的但超出了范围,则会引起IndexError异常:
Step25: (4) __getattr__和__setattr__
拦截(intercept)对象的所有属性访问是可能的,这样可以用旧式类实现属性。为了在访问属性的时候可以执行代码,必须使用一些特殊方法。下面的4种方法提供了需要的功能(在旧式类中只需要后3个)
a. __getattribute__(self,name)
Step26: 注意: __setattr__方法在所涉及的属性不是size时也会被调用。如果属性是size,那么就像前面那样执行操作,否则就要使用特殊方法__dict__,该方法包含一个字典,字典里是所有实例的属性;
__getattr__方法只在普通的属性没有被找到的时候调用。
(5) 迭代器
迭代的意思是重复做一些事很多次—就像在循环中做的那样。到现在为止只有在for循环中对序列和字典进行迭代,但实际上也能对其他的对象进行迭代:实现__iter__特殊方法的对象。
__iter__方法返回一个迭代器(iterator),所谓的迭代器就是具有next方法(这个方法在调用时不需要任何参数)的对象。在调用next方法时,迭代器会返回它的下一个值。
迭代规则的关键是什么?为什么不使用列表?因为列表的杀伤力太大。如果有可以一个接一个地计算值的函数,那么在使用时可能是计算一个值时获取一个值-而不是通过列表一次性获取所有值。如果有很多值,列表就会占用太多的内存。另外,使用迭代器更通用、更简单、更优雅。让我们看看一个不使用列表的例子,因为要用的话,列表的长度必须无限。
这里的“列表”是一个斐波那契数列。使用的迭代器如下:
Step27: 在很多情况下,__iter__被放到会在for循环中使用的对象中。
注意 正式的说法是,一个实现了__iter__方法的对象是可迭代的,一个实现了next方法的对象则是迭代器。
内建函数iter可以从可迭代的对象中获得迭代器:
Step28: 从迭代器中得到序列:
使用list函数显式地将迭代器转化为列表。
Step29: 2.3 类的命名空间
所有位于class语句中的代码都在特殊的命名空间中执行—类命名空间。这个命名空间可由类内所有成员访问。
Step30: 从上可以看出,类的定义其实就是执行代码块,这一点很有用,比如,在类的定义区并不只限使用def语句:
Step31: 上面的代码中,在类作用域内定义了一个可供所有成员(实例)访问的变量,用来计算类的成员数量。
就像方法一样,类作用域内的变量也可以被所有实例(对象)访问:
Step32: 那么在实例中重绑定members属性呢?
Step33: 2.4 继承父类
子类可以扩展父类的定义。将其他类名写在class语句后的圆括号内就可以继承父类:
Step34: Filter类的用处在于它可以用作其他类的父类,比如SPAMFilter类,可以将序列中“SPAM”过滤出去。
Step35: 2.4.1 子类化列表,字典和字符串
如果希望实现一个和内建对象类型(例如列表,字符串和字典)行为相似的序列或映射,可以使用子类化内建类型。
注意 当子类化一个内建类型,比如list的时候,也就间接的将object子类化了。因此该类就自动成为新式类,意味着可以使用像super函数这样的特性了。
看看下面的例子-带有访问计数的列表。
Step36: CounterList类严重依赖于它的子类化父类(list)的行为。CounterList类没有重写任何的方法,能直接调用列表的任何方法(如append、extend、index)。在两个被重写的方法中,super方法被用来调用相应的父类的方法,只有在__init__中添加了所需的初始化counter属性的行为,并在__getitem__中更新了counter属性。
Step37: 可以看到,CounterList在很多方面和列表的作用一样,但它有一个counter属性(被初始化为0),每次列表元素被访问时,它都会自增。
2.5 调查继承
如果想要查看一个类是否是另一个的子类,可以使用内建的issubclass函数:
Step38: 如果想要知道已知类的父类(们),可以直接使用它的特殊属性__bases__:
Step39: 同样,还能使用isinstance函数检测一个对象是否是一个类的实例:
Step40: 如果只想知道一个对象属于哪个类,可以使用__class__属性或type函数:
Step41: 2.6 多个父类
一个类的父类可能多于一个,如下:
Step42: 子类(TalkingCalculator)自己不做任何事,它从自己的父类继承所有的行为。这样它就成了会说话的计算器(talking calculator)。
Step43: 这种行为称为多重继承(multiple inheritance),是个非常有用的工具。
一般来说,对于对象不用探讨过深。程序员可以靠多态调用自己需要的方法。不过如果想要知道对象到底有什么方法和属性,有些函数可以帮助完成这项工作。如下可以检查对象的方法或属性是否已经存在:
Step44: 如果要查看对象内所有存储的值,那么可以使用__dict__属性。
Step45: 3 生成器
生成器是Python新引入的概念。生成器是一种普遍的函数语法定义的迭代器。接下来介绍怎么创建和使用生成器,了解它的内部机制。
3.1 创建生成器
创建一个生成器就像创建函数一样简单。首先我们创建一个可以展开嵌套列表的函数。参数是一个列表:
Step46: 任何包含yield语句的函数称为生成器。除了名字不同以外,它的行为和普通的函数也有很大的差别。这就在于它不像return语句那样返回值,而是每次产生一个值。每次产生一个值(使用yield语句),函数就会被冻结:即函数停在那点等待被激活。函数被激活后就从停止的那点开始执行。
接下来可以通过在生成器上迭代来使用所有的值:
Step47: 从上可以看到,试图对一个数值5进行迭代会引发一个TypeError异常。
生成器由两部分组成:生成器的函数和生成器的迭代器。生成器的函数是用def语句定义的,包含yield部分,生成器的迭代器是这个函数返回的部分。
Step48: 3.2 递归生成器
如果要处理任意层的嵌套该怎么办?每次嵌套需要增加一个for循环,但因为不知道有几层嵌套,所以必须把解决方案变得更灵活。这就需要用到递归:
Step49: 当flatten被调用时,有两种可能性(处理递归时大部分都是这种情况):基本情况和需要递归的情况。在基本的情况中,函数被告知展开一个元素(比如一个数字),这种情况下,for循环会引发一个TypeError异常(因为试图对一个数字进行迭代),生成器会产生一个元素。如果展开的是一个列表,那么就要进行特殊处理。程序必须遍历所有的子列表,并对他们调用flatten。然后使用另一个for循环来产生被展开的子列表的所有元素。 | <ASSISTANT_TASK:>
Python Code:
'abc'.count('a')
[1,2,'a'].count('a')
Explanation: title: 对象
create: 2016.12.7
modified: 2016.12.7
tags: python
多态
封装
方法
5
[TOC]
之前介绍了Python主要的内建对象类型(数字int、字符串str、列表list、元组tuple和字典dict),以及内建函数和标准库的用法,还有自定义函数的方式。接下来将介绍如何创建自己的对象?
为什么要自定义对象呢?使用字典、序列、数字和字符串来创建函数,完成这项工作还不够吗?这样做当然可以,但是创建自己的对象(尤其是类型或者被称为类的对象)是Python的核心概念,事实上,Python被称为面向对象的语言(和C++、Java一样)。接下来会介绍如何创建对象,以及多态、封装、方法、属性、父类以及继承的概念。
1 对象的魔力
对象可以看做数据(属性)以及一系列可以存取、操作这些数据的方法所组成的集合。使用对象替代全局变量和函数的原因可能有很多,其中对象最重要的优点如下:
1.1 多态
多态(Polymorphism)意味着就算不知道变量所引用的对象类型是什么,还是能对它进行操作,而它会根据对象(或类)类型的不同而表现出不同的行为。
End of explanation
1+2
'Fish '+'license'
Explanation: 对于变量x来说,不需要知道它是字符串还是列表,就可以调用它的count方法—不用管它是什么类型(只要提供一个字符作为参数即可)。
任何不知道对象到底是什么类型,但是又要对对象“做点什么”的时候,都会用到多态。这不仅限于方法,很多内建运算符和函数都有多态的性质:
End of explanation
__metaclass__=type #确定使用新式类
class Person:
def setName(self, name):
self.name=name
def getName(self):
return self.name
def greet(self):
print "Hello, world! I'm %s" % self.name
foo=Person()
foo.setName('Luke Skywalker')
foo.greet()
Explanation: 1.2 封装
封装是可以不关心对象是如何构建的而直接进行使用。如何将变量“封装”在对象内?可以将其作为属性(attribute)存储。正如方法一样,属性是对象内部的变量。
对象有它自己的状态(state),对象的状态由它的属性(比如名称)来描述。对象的方法可以改变它的属性。
1.3 继承
如果已经有了一个类,而又想建立一个非常类似的呢?新的类可能只是添加几个方法。这时就会用到继承。
2 类和类型
类将它视为种类或类型的同义词,代表对象的集合。类的主要任务是定义它的实例会用到的方法。所有的对象都属于某一个类,称为类的实例。当一个对象所属的类是另外一个对象所属类的子集时,前者就被称为后者的子类(subclass),所以“百灵鸟类”是“鸟类”的子类。相反,“鸟类”是“百灵鸟类”的父类(superclass)。
Python中,习惯上使用单数名词,并且首字母大写,比如Bird和Lark,来描述对象的类。
2.1 创建自己的类
先来看一个简单的类:
End of explanation
foo.name
Explanation: 注意 尽管可能使用的是新版的Python,但一些功能不会在旧式类上起作用。为了确保类是新型的,需要在模块或者脚本开始的地方放置赋值语句__metaclass__=type,或者继承新式类(比如object类,也就是子类化内建类object)。新式类必然包含了更多的功能,也是之后推荐的写法,从写法上区分的话,如果当前类或者父类继承了object类,那么该类便是新式类。
在调用foo的setName和greet函数时,foo自动将自己作为第一个参数传人函数中—因此形象地命名为self。显然这就是self的用处和存在的必要性。没有它,成员方法就没法访问它们要对其属性进行操作的对象本身了。
属性是可以在外部访问的:
End of explanation
class Secretive:
def __inaccessible(self):
print "Hello, world!"
def accessible(self):
print "The secret message is: "
self.__inaccessible()
Explanation: 2.2 属性和方法
self参数事实上正是方法和函数的区别。方法将它们的第一个参数绑定到所属的实例上,因此这个参数可以不必提供。
属性只是作为对象的一部分变量,方法则是存储在对象内的函数。
2.2.1 私有化
默认情况下,程序可以从外部访问一个对象的属性。但是有时候需要使用私有属性,这是外部对象无法访问的,但是通过getName和setName等访问器(accessor)能够访问这些私有属性。
为了让方法或者属性变为私有,只要在它的名字前面加上双下划线即可:
End of explanation
s=Secretive()
s.__inaccessible()
s.accessible()
Explanation: 现在__inaccessible从外界是无法访问的,而在内部还能使用(比如从accessible)访问:
End of explanation
s._Secretive__inaccessible()
Explanation: 类的内部定义中,所有以双下划线开始的名字(方法或属性)都被“翻译”成前面加上单下划线和类名的形式。在了解了这些幕后的事情后,实际上还是能在类外访问这些私有方法,尽管不应该这么做:
End of explanation
class Rectangle:
def __inf__(self):
self.width=0
self.height=0
def setSize(self,size):
self.width,self.height=size
def getSize(self):
return self.width,self.height
r=Rectangle()
r.width=10
r.height=5
r.getSize()
r.setSize((150,100))
r.width
Explanation: 简而言之,确保其他人不会访问对象的方法和属性是不可能的,但是通过这类“名称变化术”就是他们不应该访问这些方法和属性的强有力信号。
2.2.2 访问器方法
访问器是一个简单的方法,它能使用getHeight、setHeight这样的名字来得到或者重绑定一些属性:
End of explanation
__metaclass__=type
class Rectangle:
def __inf__(self):
self.width=0
self.height=0
def setSize(self,size):
self.width,self.height=size
def getSize(self):
return self.width,self.height
size=property(getSize,setSize)
Explanation: 在上面的例子中,getSize和setSize方法是一个名为size的假想属性的访问器方法,size是由width和height构成的元组。如果有一天要改变类的实现,将size变成一个真正的属性,这样width和height就可以动态算出,那么就要把它们放到一个访问器方法中去。但如果有很多简单的属性,那么就不现实了。如果那么做就得写很多访问器方法。那么怎么解决呢?这就需要用到property函数。
property函数
property函数的使用很简单。延续上面的Rectangle类,只要增加一行代码(子类化object,或者使用__metaclass__=type):
End of explanation
w=Rectangle()
w.width=10
w.height=5
w.size
w.size=150,100
w.width
Explanation: 在新版的Rectangle中,property函数创建了一个属性size,其中访问器方法被当做参数(先是取值,然后是赋值)。
End of explanation
class FooBar:
def __init__(self):
self.somevar=42
f=FooBar()
f.somevar
Explanation: 很显然,size属性仍然取决于getSize和setSize中的计算。但它看起来就像普通的属性一样。实际上,property函数可以用fget,fset,fdel和doc-这四个参数来调用。如果没有参数,产生的属性既不可读,也不可写。如果只使用一个参数调用(一个取值方法),产生的属性是只读的。第三个参数(可选)是一个用于删除属性的方法。第四个参数(可选)是一个文档字符串。
2.2.3 特殊方法
在Python中,有的名称(方法名)在前面和后面都加上两个下划线,比如__future__,这样拼写表示名字有特殊含义,所以绝不要在自己的程序中使用这种名字。由这些名字组成的集合所包含的方法称为特殊方法。如果对象实现了这些方法的某一个,那么这个方法会在特殊的情况下被Python调用。而几乎没有直接调用它们的必要。
(1) 构造方法
首先要讨论的第一个特殊方法是构造方法。构造方法是一个很奇怪的名字,它代表着类似于以前例子中使用过的那种名为init的初始化方法。但构造方法和其他普通方法不同的地方在于,当一个对象被创建后,会立即调用构造方法。
End of explanation
class A:
def hello(self):
print "hello, I'm A"
class B(A):
pass
a=A()
b=B()
a.hello()
b.hello()
Explanation: (2) 重写一般方法和特殊的构造方法
如果一个方法在B类的一个实例中被调用(或一个属性被访问),但在B类中没有找到该方法,那么就会去它的父类A里面找:
End of explanation
class B(A):
def hello(self):
print "hello, I'm B"
b=B()
b.hello()
Explanation: 在子类中增加功能最基本的方式就是增加方法。但是也可以重写一些父类的方法来自定义继承的行为。B类也能重写这个方法。
End of explanation
class Bird:
def __init__(self):
self.hungry=True
def eat(self):
if self.hungry:
print 'Aaaah...'
self.hungry=False
else:
print 'No,thanks!'
b=Bird()
b.eat()
b.eat()
Explanation: 重写是继承机制中的一个重要内容,但是对于构造方法尤其重要。构造方法用来初始化新创建对象的状态,大多数子类不仅要拥有自己的初始化代码,还要拥有父类的初始化代码。虽然重写的机制对于所有方法来说都是一样的,但是当重写构造方法时,更可能遇到特别的问题:如果一个类的构造方法被重写,那么就需要调用父类的构造方法,否则对象可能不会被正确的初始化。如下:
End of explanation
class SongBird(Bird):
def __init__(self):
self.sound='Squawk!'
def sing(self):
print self.sound
sb=SongBird()
sb.sing()
Explanation: 可以看到,鸟吃过了以后,就不会再饥饿。现在考虑子类SongBird,它添加了唱歌的行为。
End of explanation
sb.eat()
Explanation: 因为SongBird是Bird的一个子类,它继承了eat方法,但如果调用eat方法,就会产生一个问题:
End of explanation
class SongBird(Bird):
def __init__(self):
Bird.__init__(self)
self.sound='Squawk!'
def sing(self):
print self.sound
sb=SongBird()
sb.sing()
sb.eat()
sb.eat()
Explanation: 异常很清楚地说明了错误:SongBird没有hungry属性。原因是:在SongBird中,构造方法被重写,但新的构造方法没有任何关于初始化hungry属性的代码。为了达到预期的效果,SongBird的构造方法必须调用其父类Bird的构造方法来确保进行基本的初始化。有两种方法能达到这个目的,如下:
调用未绑定的父类构造方法
End of explanation
__metaclass__=type
class SongBird(Bird):
def __init__(self):
super(SongBird,self).__init__()
self.sound='Squawk!'
def sing(self):
print self.sound
sb=SongBird()
sb.sing()
sb.eat()
sb.eat()
Explanation: 通过将当前的实例作为self参数提供给未绑定方法,SongBird就能够使用其父类构造方法的所有实现,也就是说属性hungry能被设置。
使用super函数
super函数只能在新式类中使用。当前的类和对象可以作为super函数的参数使用,调用函数返回的是父类的方法,而不是当前类的方法。如下:
End of explanation
def checkIndex(key):
所给的键能接受索引吗?
为了能被接受,键应该是一个非负的整数,如果它不是一个整数,比如是字符串,会引发TypeError;
如果它是负数,则会引发IndexError(因为序列是无限长的)。
if not isinstance(key,(int,long)):
raise TypeError
if key<0:
raise IndexError
class ArithmeticSequence:
def __init__(self,start=0,step=1):
初始化算数序列
初始值-序列中的第一个值
步长-两个相邻值之间的差别
改变-用户修改的值的字典
self.start=start
self.step=step
self.changed={} #没有项被修改
def __getitem__(self,key):
Get an item from the arithmetic sequence.
checkIndex(key)
try:
return self.changed[key] #修改了吗?
except KeyError: #否则...
return self.start+key*self.step #...计算值
def __setitem__(self,key,value):
修改算术序列中的一个项
checkIndex(key)
self.changed[key]=value
s=ArithmeticSequence(1,2)
s[4]
s[4]=2
s[4]
s[5]
Explanation: (3) 成员访问方法
接下来介绍一些处理对象访问的方法,这些方法允许你创建自己的序列或者映射。
基本的序列和映射规则很简单,但如果要实现它们全部功能就需要实现很多特殊函数。下面将会说到:
基本的序列和映射规则
序列和映射是对象的集合。为了实现它们基本的行为(规则),如果对象是不可变的(如字符串和元组)。那么就需要使用两个特殊方法,如果是可变的(列表和字典),则需要使用4个。
a. __len__(self):这个方法返回集合中所含对象的数量。对于序列来说,这就是元素的个数;对于映射来说,则是键-值对的数量。
b. __getitem__(self,key):这个方法返回与所给键对应的值。对于序列来说,键应该是一个0~n-1的整数(或者像后面所说的负数);对于映射来说,可以使用任何种类的键。
c. __setitem__(self,key,value):这个方法按一定的方式存储和key关联的value,该值随后可使用 __getitem__来获取。当然,只能为可以修改的对象定义这个方法。
d. __delitem__(self,key):这个方法在对一部分对象使用del语句时被调用,同时删除和键关联的值。这个方法也是为可修改的对象定义的。
对这些方法的附件要求:
a. 对于一个序列来说,如果键是负整数,那么要从末尾开始计数。换句话说就是x[-n]和x[len(x)-n]是一样的;
b. 如果键是不合适的类型(例如,对序列使用字符串作为键),会引发一个TypeError异常;
c. 如果序列的索引是正确的类型,但超出了范围,会引发一个IndexError异常。
让我们实践一下—看看如果创建一个无穷序列,会发生什么:
End of explanation
del s[4]
Explanation: 注意,没有实现__del__方法的原因是我希望删除元素是非法的:
End of explanation
s['four']
s[-4]
Explanation: 这个类没有__len__方法,因为它是无限长的。
索引检查是通过用户自定义的checkIndex函数实现的。如果使用了一个非法类型的索引,就会引发TypeError异常,如果索引的类型是正确的但超出了范围,则会引起IndexError异常:
End of explanation
class Rectangle:
def __init__(self):
self.width=0
self.height=0
def __setattr__(self,name,value):
if name =='size':
self.width,self.height=value
else:
self.__dict__[name]=value
def __getattr__(self,name):
if name =='size':
return self.width,self.height
else:
raise AttributeError
w=Rectangle()
w.size
w.__dict__
w.size=(2,6)
w.size
w.width
hasattr(w,'size')
w.age=28
w.age
w.__dict__
Explanation: (4) __getattr__和__setattr__
拦截(intercept)对象的所有属性访问是可能的,这样可以用旧式类实现属性。为了在访问属性的时候可以执行代码,必须使用一些特殊方法。下面的4种方法提供了需要的功能(在旧式类中只需要后3个)
a. __getattribute__(self,name):当属性name被访问时自动被调用(只能在新式类中使用);
b. __getattr__(self,name):当属性name被访问且对象没有相应的属性时被自动调用;
c. __setattr__(self,name,value): 当试图给属性name赋值时会被自动调用;
d. __delattr__(self,name): 当试图删除属性name时会被自动调用。
尽管和使用property函数相比有点复杂(而且在某些方面效率更低),但是这些特殊方法是很强大的,因为可以对处理很多属性的方法进行再编码。
下面还是Rectangle的例子,但这次使用的是特殊方法:
End of explanation
class Fibs:
def __init__(self):
self.a=0
self.b=1
def next(self):
self.a,self.b=self.b,self.a+self.b
return self.a
def __iter__(self):
return self
fibs=Fibs()
for f in fibs:
if f>10:
print f
break
Explanation: 注意: __setattr__方法在所涉及的属性不是size时也会被调用。如果属性是size,那么就像前面那样执行操作,否则就要使用特殊方法__dict__,该方法包含一个字典,字典里是所有实例的属性;
__getattr__方法只在普通的属性没有被找到的时候调用。
(5) 迭代器
迭代的意思是重复做一些事很多次—就像在循环中做的那样。到现在为止只有在for循环中对序列和字典进行迭代,但实际上也能对其他的对象进行迭代:实现__iter__特殊方法的对象。
__iter__方法返回一个迭代器(iterator),所谓的迭代器就是具有next方法(这个方法在调用时不需要任何参数)的对象。在调用next方法时,迭代器会返回它的下一个值。
迭代规则的关键是什么?为什么不使用列表?因为列表的杀伤力太大。如果有可以一个接一个地计算值的函数,那么在使用时可能是计算一个值时获取一个值-而不是通过列表一次性获取所有值。如果有很多值,列表就会占用太多的内存。另外,使用迭代器更通用、更简单、更优雅。让我们看看一个不使用列表的例子,因为要用的话,列表的长度必须无限。
这里的“列表”是一个斐波那契数列。使用的迭代器如下:
End of explanation
a=[1,2,3]
a.next()
it=iter([1,2,3])
it.next()
it.next()
Explanation: 在很多情况下,__iter__被放到会在for循环中使用的对象中。
注意 正式的说法是,一个实现了__iter__方法的对象是可迭代的,一个实现了next方法的对象则是迭代器。
内建函数iter可以从可迭代的对象中获得迭代器:
End of explanation
it=iter([1,2,3])
it
list(it)
Explanation: 从迭代器中得到序列:
使用list函数显式地将迭代器转化为列表。
End of explanation
class C:
print 'Class C being defined...'
Explanation: 2.3 类的命名空间
所有位于class语句中的代码都在特殊的命名空间中执行—类命名空间。这个命名空间可由类内所有成员访问。
End of explanation
class MemberCounter:
members=0
def init(self):
MemberCounter.members+=1
m1=MemberCounter()
m1.init()
MemberCounter.members
m2=MemberCounter()
m2.init()
MemberCounter.members
Explanation: 从上可以看出,类的定义其实就是执行代码块,这一点很有用,比如,在类的定义区并不只限使用def语句:
End of explanation
m1.members
m2.members
Explanation: 上面的代码中,在类作用域内定义了一个可供所有成员(实例)访问的变量,用来计算类的成员数量。
就像方法一样,类作用域内的变量也可以被所有实例(对象)访问:
End of explanation
m1.members='Two'
m1.members
m2.members
Explanation: 那么在实例中重绑定members属性呢?
End of explanation
class Filter:
def init(self):
self.blocked=[]
def filter(self,sequence):
return [x for x in sequence if x not in self.blocked]
class SPAMFilter(Filter):
def init(self):
self.blocked=['SPAM']
f=Filter()
f.init()
f.filter([1,2,3])
Explanation: 2.4 继承父类
子类可以扩展父类的定义。将其他类名写在class语句后的圆括号内就可以继承父类:
End of explanation
s=SPAMFilter()
s.init()
s.filter(['SPAM','SPAM','SPAM','SPAM','eggs','bacon'])
Explanation: Filter类的用处在于它可以用作其他类的父类,比如SPAMFilter类,可以将序列中“SPAM”过滤出去。
End of explanation
class CounterList(list):
def __init__(self,*args):
super(CounterList,self).__init__(*args)
self.counter=0
def __getitem__(self,index):
self.counter+=1
return super(CounterList,self).__getitem__(index)
Explanation: 2.4.1 子类化列表,字典和字符串
如果希望实现一个和内建对象类型(例如列表,字符串和字典)行为相似的序列或映射,可以使用子类化内建类型。
注意 当子类化一个内建类型,比如list的时候,也就间接的将object子类化了。因此该类就自动成为新式类,意味着可以使用像super函数这样的特性了。
看看下面的例子-带有访问计数的列表。
End of explanation
c1=CounterList('aaa')
c1
c1=CounterList((1,2,3))
c1
c1=CounterList({'first':1,'second':2})
c1
c1=CounterList(range(10))
c1
c1.reverse()
c1
del c1[3:6]
c1
c1.counter
c1[0]+c1[1]+c1[2]
c1.counter
Explanation: CounterList类严重依赖于它的子类化父类(list)的行为。CounterList类没有重写任何的方法,能直接调用列表的任何方法(如append、extend、index)。在两个被重写的方法中,super方法被用来调用相应的父类的方法,只有在__init__中添加了所需的初始化counter属性的行为,并在__getitem__中更新了counter属性。
End of explanation
issubclass(SPAMFilter,Filter)
issubclass(Filter,SPAMFilter)
Explanation: 可以看到,CounterList在很多方面和列表的作用一样,但它有一个counter属性(被初始化为0),每次列表元素被访问时,它都会自增。
2.5 调查继承
如果想要查看一个类是否是另一个的子类,可以使用内建的issubclass函数:
End of explanation
SPAMFilter.__bases__
Filter.__bases__
Explanation: 如果想要知道已知类的父类(们),可以直接使用它的特殊属性__bases__:
End of explanation
s=SPAMFilter()
isinstance(s,SPAMFilter)
isinstance(s,str)
Explanation: 同样,还能使用isinstance函数检测一个对象是否是一个类的实例:
End of explanation
s.__class__
type(s)
type([1,2])
Explanation: 如果只想知道一个对象属于哪个类,可以使用__class__属性或type函数:
End of explanation
class Calculator:
def calculate(self,expression):
self.value=eval(expression)
class Talker:
def talk(self):
print 'Hi,my value is ',self.value
class TalkingCalculator(Calculator,Talker):
pass
Explanation: 2.6 多个父类
一个类的父类可能多于一个,如下:
End of explanation
tc=TalkingCalculator()
tc.calculate('1+2+3')
tc.talk()
Explanation: 子类(TalkingCalculator)自己不做任何事,它从自己的父类继承所有的行为。这样它就成了会说话的计算器(talking calculator)。
End of explanation
hasattr(tc,'talk')
hasattr(tc,'fnord')
getattr(tc,'talk','None') #获得对象属性的值,可选择提供默认值,以便在属性不存在时使用
getattr(tc,'value','None')
setattr(tc,'name','Mr. Gumby') #与getattr相对应的函数是setattr,用来设置对象的属性及值
tc.name
Explanation: 这种行为称为多重继承(multiple inheritance),是个非常有用的工具。
一般来说,对于对象不用探讨过深。程序员可以靠多态调用自己需要的方法。不过如果想要知道对象到底有什么方法和属性,有些函数可以帮助完成这项工作。如下可以检查对象的方法或属性是否已经存在:
End of explanation
tc.__dict__
Explanation: 如果要查看对象内所有存储的值,那么可以使用__dict__属性。
End of explanation
def flatten(nested):
for sublist in nested:
for element in sublist:
yield element
nested=[[1,2],[3,4],5]
Explanation: 3 生成器
生成器是Python新引入的概念。生成器是一种普遍的函数语法定义的迭代器。接下来介绍怎么创建和使用生成器,了解它的内部机制。
3.1 创建生成器
创建一个生成器就像创建函数一样简单。首先我们创建一个可以展开嵌套列表的函数。参数是一个列表:
End of explanation
flatten(nested)
for num in flatten(nested):
print num
Explanation: 任何包含yield语句的函数称为生成器。除了名字不同以外,它的行为和普通的函数也有很大的差别。这就在于它不像return语句那样返回值,而是每次产生一个值。每次产生一个值(使用yield语句),函数就会被冻结:即函数停在那点等待被激活。函数被激活后就从停止的那点开始执行。
接下来可以通过在生成器上迭代来使用所有的值:
End of explanation
nested=[[1,2],[3,4],[5]]
list(flatten(nested))
Explanation: 从上可以看到,试图对一个数值5进行迭代会引发一个TypeError异常。
生成器由两部分组成:生成器的函数和生成器的迭代器。生成器的函数是用def语句定义的,包含yield部分,生成器的迭代器是这个函数返回的部分。
End of explanation
def flatten(nested):
try:
for sublist in nested:
for element in flatten(sublist):
yield element
except TypeError:
yield nested
Explanation: 3.2 递归生成器
如果要处理任意层的嵌套该怎么办?每次嵌套需要增加一个for循环,但因为不知道有几层嵌套,所以必须把解决方案变得更灵活。这就需要用到递归:
End of explanation
list(flatten([[[1],2],3,4,[5,[6,7]],8]))
Explanation: 当flatten被调用时,有两种可能性(处理递归时大部分都是这种情况):基本情况和需要递归的情况。在基本的情况中,函数被告知展开一个元素(比如一个数字),这种情况下,for循环会引发一个TypeError异常(因为试图对一个数字进行迭代),生成器会产生一个元素。如果展开的是一个列表,那么就要进行特殊处理。程序必须遍历所有的子列表,并对他们调用flatten。然后使用另一个for循环来产生被展开的子列表的所有元素。
End of explanation
<END_TASK> |
247 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
NYGC 30x HighCov samples Hail Table
Step1: Phased genotypes
Creating MTs for the phased data is straightforward, as multiallelic variants were split during phasing.
Autosomes (phased)
Step2: ChrX (phased)
Step3: Unphased genotypes
Autosomes (unphased)
Step4: Separate biallelic and multiallelic variants, split multiallelic variants with split_multi_hts, and then union_rows the split multiallelic MT back to the biallelic MT.
For multiallelic variants we will just set PL to be missing, to avoid running into index out of bounds errors in split_multi_hts.
Step5: After splitting multiallelic variants, we need to extract the appropriate values from the INFO array fields with a_index.
Then annotate globals with metadata, annotate columns with sample relationships, perform sample_qc and variant_qc, and write final MT to hail-datasets-us.
Step6: ChrX (unphased)
Step7: Separate biallelic and multiallelic variants, split multiallelic variants with split_multi_hts, and then union_rows the split multiallelic MT back to the biallelic MT.
For multiallelic variants we will just set PL to be missing, to avoid running into index out of bounds errors in split_multi_hts.
Step8: After splitting multiallelic variants, we need to extract the appropriate values from the INFO array fields with a_index.
Then annotate globals with metadata, annotate columns with sample relationships, perform sample_qc and variant_qc, and write final MT to hail-datasets-us.
Step9: ChrY (unphased)
Step10: Separate biallelic and multiallelic variants, split multiallelic variants with split_multi_hts, and then union_rows the split multiallelic MT back to the biallelic MT.
For multiallelic variants we will just set PL to be missing, to avoid running into index out of bounds errors in split_multi_hts.
Step11: After splitting multiallelic variants, we need to extract the appropriate values from the INFO array fields with a_index.
Then annotate globals with metadata, annotate columns with sample relationships, perform sample_qc and variant_qc, and write final MT to hail-datasets-us.
Step13: Create/update schemas | <ASSISTANT_TASK:>
Python Code:
ht_samples = hl.import_table(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/1000_Genomes_NYGC_30x_samples_ped_population.txt.bgz",
delimiter="\s+",
impute=True
)
ht_samples = ht_samples.annotate(
FatherID = hl.if_else(ht_samples.FatherID == "0",
hl.missing(hl.tstr),
ht_samples.FatherID),
MotherID = hl.if_else(ht_samples.MotherID == "0",
hl.missing(hl.tstr),
ht_samples.MotherID),
Sex = hl.if_else(ht_samples.Sex == 1, "male", "female")
)
ht_samples = ht_samples.key_by("SampleID")
n_rows = ht_samples.count()
n_partitions = ht_samples.n_partitions()
ht_samples = ht_samples.annotate_globals(
metadata=hl.struct(
name="1000_Genomes_HighCov_samples",
n_rows=n_rows,
n_partitions=n_partitions)
)
ht_samples.write("gs://hail-datasets-us/1000_Genomes_NYGC_30x_HighCov_samples.ht", overwrite=False)
ht_samples = hl.read_table("gs://hail-datasets-us/1000_Genomes_NYGC_30x_HighCov_samples.ht")
ht_samples.describe()
Explanation: NYGC 30x HighCov samples Hail Table:
End of explanation
mt = hl.import_vcf(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/1000_Genomes_NYGC_30x_phased_chr{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22}_GRCh38.vcf.bgz",
reference_genome="GRCh38"
)
n_rows, n_cols = mt.count()
n_partitions = mt.n_partitions()
mt = mt.annotate_globals(
metadata=hl.struct(
name="1000_Genomes_HighCov_autosomes",
reference_genome="GRCh38",
n_rows=n_rows,
n_cols=n_cols,
n_partitions=n_partitions
)
)
# Get list of INFO fields that are arrays
known_keys = [x[0] for x in list(mt.row.info.items()) if "array" in str(x[1])]
# Extract value from INFO array fields (all arrays are length 1)
mt = mt.annotate_rows(
info = mt.info.annotate(
**{k: hl.or_missing(hl.is_defined(mt.info[k]),
mt.info[k][0])
for k in known_keys}
)
)
mt = mt.checkpoint(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/autosomes_phased_GRCh38.mt",
overwrite=False,
_read_if_exists=True
)
mt = mt.annotate_cols(**ht_samples[mt.s])
mt = hl.sample_qc(mt)
mt = hl.variant_qc(mt)
mt.write("gs://hail-datasets-us/1000_Genomes/NYGC_30x/GRCh38/autosomes_phased.mt", overwrite=False)
mt = hl.read_matrix_table("gs://hail-datasets-us/1000_Genomes/NYGC_30x/GRCh38/autosomes_phased.mt")
mt.describe()
Explanation: Phased genotypes
Creating MTs for the phased data is straightforward, as multiallelic variants were split during phasing.
Autosomes (phased):
End of explanation
mt = hl.import_vcf(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/1000_Genomes_NYGC_30x_phased_chrX_GRCh38.vcf.bgz",
reference_genome="GRCh38"
)
n_rows, n_cols = mt.count()
n_partitions = mt.n_partitions()
mt = mt.annotate_globals(
metadata=hl.struct(
name="1000_Genomes_HighCov_chrX",
reference_genome="GRCh38",
n_rows=n_rows,
n_cols=n_cols,
n_partitions=n_partitions
)
)
# Get list of INFO fields that are arrays
known_keys = [x[0] for x in list(mt.row.info.items()) if "array" in str(x[1])]
# Extract appropriate value from INFO array fields (all arrays are length 1)
mt = mt.annotate_rows(
info = mt.info.annotate(
**{k: hl.or_missing(hl.is_defined(mt.info[k]),
mt.info[k][0])
for k in known_keys}
)
)
mt = mt.checkpoint(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrX_phased_GRCh38.mt",
overwrite=False,
_read_if_exists=True
)
mt = mt.annotate_cols(**ht_samples[mt.s])
mt = hl.sample_qc(mt)
mt = hl.variant_qc(mt)
mt.write("gs://hail-datasets-us/1000_Genomes/NYGC_30x/GRCh38/chrX_phased.mt", overwrite=False)
mt = hl.read_matrix_table("gs://hail-datasets-us/1000_Genomes/NYGC_30x/GRCh38/chrX_phased.mt")
mt.describe()
Explanation: ChrX (phased):
End of explanation
mt = hl.import_vcf(
("gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/1000_Genomes_NYGC_30x_"
"chr{1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22}_"
"GRCh38.vcf.bgz"),
reference_genome="GRCh38",
array_elements_required=False
)
mt = mt.annotate_entries(
PL = hl.if_else(mt.PL.contains(hl.missing(hl.tint32)),
hl.missing(mt.PL.dtype),
mt.PL)
)
mt = mt.checkpoint(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/autosomes_unphased_GRCh38_imported_vcf.mt",
overwrite=False,
_read_if_exists=True
)
Explanation: Unphased genotypes
Autosomes (unphased):
Import chr1-chr22 VCF to MatrixTable and checkpoint:
End of explanation
mt = hl.read_matrix_table(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/autosomes_unphased_GRCh38_imported_vcf.mt"
)
bi = mt.filter_rows(hl.len(mt.alleles) == 2)
bi = bi.annotate_rows(a_index=1, was_split=False)
bi = bi.checkpoint(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/autosomes_unphased_GRCh38_biallelic.mt",
overwrite=False,
_read_if_exists=True
)
multi = mt.filter_rows(hl.len(mt.alleles) > 2)
multi = multi.annotate_entries(PL = hl.missing(multi.PL.dtype))
multi = multi.checkpoint(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/autosomes_unphased_GRCh38_multiallelic.mt",
overwrite=False,
_read_if_exists=True
)
split = hl.split_multi_hts(multi, keep_star=True, permit_shuffle=True)
split = split.checkpoint(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/autosomes_unphased_GRCh38_multiallelic_split.mt",
overwrite=False,
_read_if_exists=True
)
unioned = split.union_rows(bi)
unioned = unioned.checkpoint(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/autosomes_unphased_GRCh38_unioned.mt",
overwrite=False,
_read_if_exists=True
)
unioned = unioned.repartition(12000, shuffle=True)
unioned = unioned.checkpoint(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/autosomes_unphased_GRCh38_unioned_repart.mt",
overwrite=False,
_read_if_exists=True
)
Explanation: Separate biallelic and multiallelic variants, split multiallelic variants with split_multi_hts, and then union_rows the split multiallelic MT back to the biallelic MT.
For multiallelic variants we will just set PL to be missing, to avoid running into index out of bounds errors in split_multi_hts.
End of explanation
unioned = hl.read_matrix_table(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/autosomes_unphased_GRCh38_unioned_repart.mt"
)
# Get list of INFO fields that are arrays
known_keys = [x[0] for x in list(unioned.row.info.items()) if "array" in str(x[1])]
# Extract appropriate values from INFO array fields after splitting
mt = unioned.annotate_rows(
info = unioned.info.annotate(
**{k: hl.or_missing(hl.is_defined(unioned.info[k]),
unioned.info[k][unioned.a_index - 1])
for k in known_keys}
)
)
n_rows, n_cols = mt.count()
n_partitions = mt.n_partitions()
mt = mt.annotate_globals(
metadata=hl.struct(
name="1000_Genomes_HighCov_autosomes",
reference_genome="GRCh38",
n_rows=n_rows,
n_cols=n_cols,
n_partitions=n_partitions
)
)
ht_samples = hl.read_table("gs://hail-datasets-us/1000_Genomes/NYGC_30x/samples.ht")
mt = mt.annotate_cols(**ht_samples[mt.s])
mt = hl.sample_qc(mt)
mt = hl.variant_qc(mt)
mt.write("gs://hail-datasets-us/1000_Genomes/NYGC_30x/GRCh38/autosomes_unphased.mt", overwrite=False)
mt = hl.read_matrix_table("gs://hail-datasets-us/1000_Genomes/NYGC_30x/GRCh38/autosomes_unphased.mt")
mt.describe()
Explanation: After splitting multiallelic variants, we need to extract the appropriate values from the INFO array fields with a_index.
Then annotate globals with metadata, annotate columns with sample relationships, perform sample_qc and variant_qc, and write final MT to hail-datasets-us.
End of explanation
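A quick aside on the indexing convention used above: a_index is 1-based over the alt alleles of the original multiallelic site, while the A-numbered INFO arrays (AC, AF, and so on) carry one entry per alt allele, which is why the code takes element a_index - 1. A tiny, purely illustrative sketch with made-up values (not part of the pipeline):
# Hypothetical site with alleles ["A", "C", "T"]; split_multi_hts emits one row per alt allele.
# The split row for alt "C" gets a_index == 1, the split row for alt "T" gets a_index == 2.
ac = [12, 3]                        # made-up AC entries for alts C and T at the original site
a_index = 2                         # the split row for alt allele "T"
ac_for_this_row = ac[a_index - 1]   # -> 3, the count that belongs to "T"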
mt = hl.import_vcf(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/1000_Genomes_NYGC_30x_chrX_GRCh38.vcf.bgz",
reference_genome="GRCh38",
array_elements_required=False
)
mt = mt.annotate_entries(
PL = hl.if_else(mt.PL.contains(hl.missing(hl.tint32)),
hl.missing(mt.PL.dtype),
mt.PL)
)
mt = mt.checkpoint(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrX_unphased_GRCh38_imported_vcf.mt",
overwrite=False,
_read_if_exists=True
)
Explanation: ChrX (unphased):
Import chrX VCF to MatrixTable and checkpoint:
End of explanation
mt = hl.read_matrix_table(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrX_unphased_GRCh38_imported_vcf.mt"
)
bi = mt.filter_rows(hl.len(mt.alleles) == 2)
bi = bi.annotate_rows(a_index=1, was_split=False)
bi = bi.checkpoint(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrX_unphased_GRCh38_biallelic.mt",
overwrite=False,
_read_if_exists=True
)
multi = mt.filter_rows(hl.len(mt.alleles) > 2)
multi = multi.annotate_entries(PL = hl.missing(multi.PL.dtype))
multi = multi.checkpoint(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrX_unphased_GRCh38_multiallelic.mt",
overwrite=False,
_read_if_exists=True
)
split = hl.split_multi_hts(multi, keep_star=True, permit_shuffle=True)
split = split.checkpoint(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrX_unphased_GRCh38_multiallelic_split.mt",
overwrite=False,
_read_if_exists=True
)
unioned = split.union_rows(bi)
unioned = unioned.checkpoint(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrX_unphased_GRCh38_unioned.mt",
overwrite=False,
_read_if_exists=True
)
unioned = unioned.repartition(512, shuffle=True)
unioned = unioned.checkpoint(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrX_unphased_GRCh38_unioned_repart.mt",
overwrite=False,
_read_if_exists=True
)
Explanation: Separate biallelic and multiallelic variants, split multiallelic variants with split_multi_hts, and then union_rows the split multiallelic MT back to the biallelic MT.
For multiallelic variants we will just set PL to be missing, to avoid running into index out of bounds errors in split_multi_hts.
End of explanation
unioned = hl.read_matrix_table(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrX_unphased_GRCh38_unioned_repart.mt"
)
# Get list of INFO fields that are arrays
known_keys = [x[0] for x in list(unioned.row.info.items()) if "array" in str(x[1])]
# Extract appropriate values from INFO array fields after splitting
mt = unioned.annotate_rows(
info = unioned.info.annotate(
**{k: hl.or_missing(hl.is_defined(unioned.info[k]),
unioned.info[k][unioned.a_index - 1])
for k in known_keys}
)
)
n_rows, n_cols = mt.count()
n_partitions = mt.n_partitions()
mt = mt.annotate_globals(
metadata=hl.struct(
name="1000_Genomes_HighCov_chrX",
reference_genome="GRCh38",
n_rows=n_rows,
n_cols=n_cols,
n_partitions=n_partitions
)
)
ht_samples = hl.read_table("gs://hail-datasets-us/1000_Genomes/NYGC_30x/samples.ht")
mt = mt.annotate_cols(**ht_samples[mt.s])
mt = hl.sample_qc(mt)
mt = hl.variant_qc(mt)
mt.write("gs://hail-datasets-us/1000_Genomes/NYGC_30x/GRCh38/chrX_unphased.mt", overwrite=False)
mt = hl.read_matrix_table("gs://hail-datasets-us/1000_Genomes/NYGC_30x/GRCh38/chrX_unphased.mt")
mt.describe()
Explanation: After splitting multiallelic variants, we need to extract the appropriate values from the INFO array fields with a_index.
Then annotate globals with metadata, annotate columns with sample relationships, perform sample_qc and variant_qc, and write final MT to hail-datasets-us.
End of explanation
mt = hl.import_vcf(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/1000_Genomes_NYGC_30x_chrY_GRCh38.vcf.bgz",
reference_genome="GRCh38",
array_elements_required=False
)
mt = mt.annotate_entries(
PL = hl.if_else(mt.PL.contains(hl.missing(hl.tint32)),
hl.missing(mt.PL.dtype),
mt.PL)
)
mt = mt.checkpoint(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrY_unphased_GRCh38_imported_vcf.mt",
overwrite=False,
_read_if_exists=True
)
Explanation: ChrY (unphased):
Import chrY VCF to MatrixTable and checkpoint:
End of explanation
mt = hl.read_matrix_table(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrY_unphased_GRCh38_imported_vcf.mt"
)
bi = mt.filter_rows(hl.len(mt.alleles) == 2)
bi = bi.annotate_rows(a_index=1, was_split=False)
bi = bi.checkpoint(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrY_unphased_GRCh38_biallelic.mt",
overwrite=False,
_read_if_exists=True
)
multi = mt.filter_rows(hl.len(mt.alleles) > 2)
multi = multi.annotate_entries(PL = hl.missing(multi.PL.dtype))
multi = multi.checkpoint(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrY_unphased_GRCh38_multiallelic.mt",
overwrite=False,
_read_if_exists=True
)
split = hl.split_multi_hts(multi, keep_star=True, permit_shuffle=True)
split = split.checkpoint(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrY_unphased_GRCh38_multiallelic_split.mt",
overwrite=False,
_read_if_exists=True
)
unioned = split.union_rows(bi)
unioned = unioned.checkpoint(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrY_unphased_GRCh38_unioned.mt",
overwrite=False,
_read_if_exists=True
)
unioned = unioned.repartition(8, shuffle=True)
unioned = unioned.checkpoint(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrY_unphased_GRCh38_unioned_repart.mt",
overwrite=False,
_read_if_exists=True
)
Explanation: Separate biallelic and multiallelic variants, split multiallelic variants with split_multi_hts, and then union_rows the split multiallelic MT back to the biallelic MT.
For multiallelic variants we will just set PL to be missing, to avoid running into index out of bounds errors in split_multi_hts.
End of explanation
unioned = hl.read_matrix_table(
"gs://hail-datasets-tmp/1000_Genomes_NYGC_30x/checkpoints/chrY_unphased_GRCh38_unioned_repart.mt"
)
# Get list of INFO fields that are arrays
known_keys = [x[0] for x in list(unioned.row.info.items()) if "array" in str(x[1])]
# Extract appropriate values from INFO array fields after splitting
mt = unioned.annotate_rows(
info = unioned.info.annotate(
**{k: hl.or_missing(hl.is_defined(unioned.info[k]),
unioned.info[k][unioned.a_index - 1])
for k in known_keys}
)
)
n_rows, n_cols = mt.count()
n_partitions = mt.n_partitions()
mt = mt.annotate_globals(
metadata=hl.struct(
name="1000_Genomes_HighCov_chrY",
reference_genome="GRCh38",
n_rows=n_rows,
n_cols=n_cols,
n_partitions=n_partitions
)
)
ht_samples = hl.read_table("gs://hail-datasets-us/1000_Genomes/NYGC_30x/samples.ht")
mt = mt.annotate_cols(**ht_samples[mt.s])
mt = hl.sample_qc(mt)
mt = hl.variant_qc(mt)
mt.write("gs://hail-datasets-us/1000_Genomes/NYGC_30x/GRCh38/chrY_unphased.mt", overwrite=False)
mt = hl.read_matrix_table("gs://hail-datasets-us/1000_Genomes/NYGC_30x/GRCh38/chrY_unphased.mt")
mt.describe()
Explanation: After splitting multiallelic variants, we need to extract the appropriate values from the INFO array fields with a_index.
Then annotate globals with metadata, annotate columns with sample relationships, perform sample_qc and variant_qc, and write final MT to hail-datasets-us.
End of explanation
import json
import os
import textwrap
output_dir = os.path.abspath("../../hail/python/hail/docs/datasets/schemas")
datasets_path = os.path.abspath("../../hail/python/hail/experimental/datasets.json")
with open(datasets_path, "r") as f:
datasets = json.load(f)
names = datasets.keys()
for name in [name for name in names if "1000_Genomes_HighCov" in name]:
versions = sorted(set(dataset["version"] for dataset in datasets[name]["versions"]))
if not versions:
versions = [None]
reference_genomes = sorted(set(dataset["reference_genome"] for dataset in datasets[name]["versions"]))
if not reference_genomes:
reference_genomes = [None]
print(name)
# Create schemas for unphased versions, since phased entries only have GT
if name == "1000_Genomes_HighCov_chrY":
v = versions[0]
else:
v = versions[1]
print(v)
print(reference_genomes[0] + "\n")
path = [dataset["url"]["gcp"]["us"]
for dataset in datasets[name]["versions"]
if all([dataset["version"] == v,
dataset["reference_genome"] == reference_genomes[0]])]
assert len(path) == 1
path = path[0]
if path.endswith(".ht"):
table = hl.methods.read_table(path)
table_class = "hail.Table"
else:
table = hl.methods.read_matrix_table(path)
table_class = "hail.MatrixTable"
description = table.describe(handler=lambda x: str(x)).split("\n")
description = "\n".join([line.rstrip() for line in description])
template = """.. _{dataset}:

{dataset}
{underline1}

* **Versions:** {versions}
* **Reference genome builds:** {ref_genomes}
* **Type:** :class:`{class}`

Schema ({version0}, {ref_genome0})
{underline2}

.. code-block:: text

{schema}
"""
context = {
"dataset": name,
"underline1": len(name) * "=",
"version0": v,
"ref_genome0": reference_genomes[0],
"versions": ", ".join([str(version) for version in versions]),
"ref_genomes": ", ".join([str(reference_genome) for reference_genome in reference_genomes]),
"underline2": len("".join(["Schema (", str(v), ", ", str(reference_genomes[0]), ")"])) * "~",
"schema": textwrap.indent(description, " "),
"class": table_class
}
with open(output_dir + f"/{name}.rst", "w") as f:
f.write(template.format(**context).strip())
Explanation: Create/update schemas
End of explanation
<END_TASK> |
248 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
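As a minimal NumPy sketch of that equivalence (illustrative sizes only, not part of the notebook's own code), using the "heart" = 958 example from above:
import numpy as np
W = np.random.rand(10000, 300)      # toy embedding weight matrix: 10,000 words x 300 hidden units
idx = 958                           # integer token for "heart"
one_hot = np.zeros(10000)
one_hot[idx] = 1
row_by_matmul = one_hot @ W         # full matrix multiplication with a one-hot vector
row_by_lookup = W[idx]              # the "embedding lookup": just grab row 958
print(np.allclose(row_by_matmul, row_by_lookup))  # True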
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
Step1: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
Step2: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
Step3: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
Step4: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
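As a rough worked example (with the $t = 10^{-5}$ threshold used in the solution below): a word that makes up 1% of the corpus is discarded with probability $1 - \sqrt{10^{-5}/10^{-2}} \approx 0.97$, so it is dropped most of the time, while a word with frequency $10^{-5}$ or lower is never dropped.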
I'm going to leave this up to you as an exercise. This is more of a programming challenge than a deep learning one. But being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise
Step5: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.
Step6: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
Step7: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise
Step8: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise
Step9: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise
Step10: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and a few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
Step11: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
Step12: Restore the trained network if you need to
Step13: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data. | <ASSISTANT_TASK:>
Python Code:
import time
import numpy as np
import tensorflow as tf
import utils
Explanation: Skip-gram word2vec
In this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of word2vec from Chris McCormick
First word2vec paper from Mikolov et al.
NIPS paper with improvements for word2vec also from Mikolov et al.
An implementation of word2vec from Thushan Ganegedara
TensorFlow word2vec tutorial
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient: you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This is a huge waste of computation.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding to the index of the "on" input unit.
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
<img src='assets/tokenize_lookup.png' width=500>
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as "black", "white", and "red" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.
<img src="assets/word2vec_architectures.png" width="500">
In this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
First up, importing packages.
End of explanation
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import zipfile
dataset_folder_path = 'data'
dataset_filename = 'text8.zip'
dataset_name = 'Text8 Dataset'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(dataset_filename):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:
urlretrieve(
'http://mattmahoney.net/dc/text8.zip',
dataset_filename,
pbar.hook)
if not isdir(dataset_folder_path):
with zipfile.ZipFile(dataset_filename) as zip_ref:
zip_ref.extractall(dataset_folder_path)
with open('data/text8') as f:
text = f.read()
Explanation: Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.
End of explanation
words = utils.preprocess(text)
print(words[:30])
print("Total words: {}".format(len(words)))
print("Unique words: {}".format(len(set(words))))
Explanation: Preprocessing
Here I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.
End of explanation
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
Explanation: And here I'm creating dictionaries to convert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.
End of explanation
## Your code here
from collections import Counter
import random
threshold = 1e-5
word_counts = Counter(int_words)
total_count = len(int_words)
freqs = {word: count / total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold / freqs[word]) for word in word_counts}
train_words = [word for word in int_words if random.random() < (1 - p_drop[word])]
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
I'm going to leave this up to you as an exercise. This is more of a programming challenge than a deep learning one. But being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.
Exercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
# Your code here
R = np.random.randint(1, window_size + 1)
start = idx - R if (idx - R) > 0 else 0
stop = idx + R
target_words = set(words[start: idx] + words[idx + 1: stop + 1])
return list(target_words)
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
End of explanation
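A quick, hypothetical sanity check of get_target (the output varies from run to run because R is drawn at random):
# For idx=3 and window_size=2, the result is some subset of the neighbours [1, 2, 4, 5].
print(get_target([0, 1, 2, 3, 4, 5, 6], idx=3, window_size=2))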
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
Explanation: Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.
End of explanation
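And a small, hypothetical peek at one batch (toy integer "words", not the real training data):
x_batch, y_batch = next(get_batches(list(range(30)), batch_size=10, window_size=3))
print(len(x_batch) == len(y_batch))   # True: one row per (input word, target word) pair
print(x_batch[:5], y_batch[:5])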
train_graph = tf.Graph()
with train_graph.as_default():
inputs = tf.placeholder(tf.int32, [None], name = 'inputs')
labels = tf.placeholder(tf.int32, [None, None], name = 'labels')
Explanation: Building the graph
From Chris McCormick's blog, we can see the general structure of our network.
The input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.
The idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer because we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.
I'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.
Exercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.
End of explanation
n_vocab = len(int_to_vocab)
n_embedding = 200 # Number of embedding features
with train_graph.as_default():
embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), -1, 1)) # create embedding weight matrix here
embed = tf.nn.embedding_lookup(embedding, inputs) # use tf.nn.embedding_lookup to get the hidden layer output
Explanation: Embedding
The embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.
Exercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.
End of explanation
# Number of negative labels to sample
n_sampled = 100
with train_graph.as_default():
softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev = 0.1)) # create softmax weight matrix here
softmax_b = tf.Variable(tf.zeros(n_vocab)) # create softmax biases here
# Calculate the loss using negative sampling
loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab)
cost = tf.reduce_mean(loss)
optimizer = tf.train.AdamOptimizer().minimize(cost)
Explanation: Negative sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called "negative sampling". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.
Exercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.
End of explanation
with train_graph.as_default():
## From Thushan Ganegedara's implementation
valid_size = 16 # Random set of words to evaluate similarity on.
valid_window = 100
# pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
# We use the cosine distance:
norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))
normalized_embedding = embedding / norm
valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)
similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))
# If the checkpoints directory doesn't exist:
!mkdir checkpoints
Explanation: Validation
This code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and a few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
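A brief aside on why the matmul above is enough: once each embedding row has been divided by its norm, the cosine similarity between two vectors reduces to a plain dot product,
$$ \cos(\theta) = \frac{a \cdot b}{\lVert a \rVert \, \lVert b \rVert} = a \cdot b \quad \text{when } \lVert a \rVert = \lVert b \rVert = 1, $$
so tf.matmul(valid_embedding, tf.transpose(normalized_embedding)) gives the similarity of every validation word to every word in the vocabulary in one shot.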
epochs = 10
batch_size = 1000
window_size = 10
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
iteration = 1
loss = 0
sess.run(tf.global_variables_initializer())
for e in range(1, epochs+1):
batches = get_batches(train_words, batch_size, window_size)
start = time.time()
for x, y in batches:
feed = {inputs: x,
labels: np.array(y)[:, None]}
train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)
loss += train_loss
if iteration % 100 == 0:
end = time.time()
print("Epoch {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Avg. Training loss: {:.4f}".format(loss/100),
"{:.4f} sec/batch".format((end-start)/100))
loss = 0
start = time.time()
if iteration % 1000 == 0:
## From Thushan Ganegedara's implementation
# note that this is expensive (~20% slowdown if computed every 500 steps)
sim = similarity.eval()
for i in range(valid_size):
valid_word = int_to_vocab[valid_examples[i]]
top_k = 8 # number of nearest neighbors
nearest = (-sim[i, :]).argsort()[1:top_k+1]
log = 'Nearest to %s:' % valid_word
for k in range(top_k):
close_word = int_to_vocab[nearest[k]]
log = '%s %s,' % (log, close_word)
print(log)
iteration += 1
save_path = saver.save(sess, "checkpoints/text8.ckpt")
embed_mat = sess.run(normalized_embedding)
Explanation: Training
Below is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.
End of explanation
with train_graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=train_graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
embed_mat = sess.run(embedding)
Explanation: Restore the trained network if you need to:
End of explanation
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
viz_words = 500
tsne = TSNE()
embed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])
fig, ax = plt.subplots(figsize=(14, 14))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local structure. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation
<END_TASK> |
249 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Building a LAS file from scratch
Step1: Step 1
Create some fake data, and make some of the values at the bottom NULL (numpy.nan). Note that of course every curve in a LAS file is recorded against a reference/index, either depth or time, so we create that array too.
Step2: Step 2
Create an empty LASFile object and review its header section
Step3: Let's add some information to the header
Step4: Next, let's make a new item in the ~Parameters section for the operator. To do this we need to make a new HeaderItem
Step5: And finally, add some free text to the ~Other section
Step6: Step 3
Add the curves to the LAS file using the add_curve method
Step7: Step 4
Now let's write out two files
Step8: and let's see if that worked | <ASSISTANT_TASK:>
Python Code:
import lasio
import datetime
import numpy
import os
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Building a LAS file from scratch
End of explanation
depths = numpy.arange(10, 50, 0.5)
fake_curve = numpy.random.random(len(depths))
fake_curve[-10:] = numpy.nan # Add some null values at the bottom
plt.plot(depths, fake_curve)
Explanation: Step 1
Create some fake data, and make some of the values at the bottom NULL (numpy.nan). Note that of course every curve in a LAS file is recorded against a reference/index, either depth or time, so we create that array too.
End of explanation
l = lasio.LASFile()
l.header
Explanation: Step 2
Create an empty LASFile object and review its header section
End of explanation
l.well.DATE = str(datetime.datetime.today())
Explanation: Let's add some information to the header:
the date
the operator (in the Parameter section)
a description of the file in the Other section.
First, let's change the date.
End of explanation
l.params['ENGI'] = lasio.HeaderItem("ENGI", "", "kinverarity@hotmail.com", "Creator of this file...")
Explanation: Next, let's make a new item in the ~Parameters section for the operator. To do this we need to make a new HeaderItem:
End of explanation
l.other = "Example of how to create a LAS file from scratch using lasio"
Explanation: And finally, add some free text to the ~Other section:
End of explanation
l.add_curve('DEPT', depths, unit='m')
l.add_curve('FAKE_CURVE', fake_curve, descr='fake curve')
Explanation: Step 3
Add the curves to the LAS file using the add_curve method:
End of explanation
fn = "scratch_example_v2.las"
with open(fn, mode="w") as f: # Write LAS file to disk
l.write(f)
Explanation: Step 4
Now let's write out two files: one according to the LAS file specification version 1.2, and one according to 2.0. Note that by default an empty LASFile object is version 2.0.
End of explanation
with open(fn, mode="r") as f: # Show the result...
print(f.read())
plt.plot(l['DEPT'], l['FAKE_CURVE'])
os.remove(fn)
Explanation: and let's see if that worked
End of explanation
<END_TASK> |
250 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Homework 1
Step1: If you've set up your environment properly, this cell should run without problems
Step2: Now, run this cell to log into OkPy.
This is the submission system for the class; you will use this
website to confirm that you've submitted your assignment.
Step5: 2. Python
Python is the main programming language we'll use in this course. We assume you have some experience with Python or can learn it yourself, but here is a brief review.
Below are some simple Python code fragments.
You should feel confident explaining what each fragment is doing. If not,
please brush up on your Python. There a number of tutorials online (search
for "Python tutorial"). https
Step6: Question 1
Question 1a
Write a function nums_reversed that takes in an integer n and returns a string
containing the numbers 1 through n including n in reverse order, separated
by spaces. For example
Step7: Question 1b
Write a function string_splosion that takes in a non-empty string like
"Code" and returns a long string containing every prefix of the input.
For example
Step8: Question 1c
Write a function double100 that takes in a list of integers
and returns True only if the list has two 100s next to each other.
>>> double100([100, 2, 3, 100])
False
>>> double100([2, 3, 100, 100, 5])
True
Step9: Question 1d
Write a function median that takes in a list of numbers
and returns the median element of the list. If the list has even
length, it returns the mean of the two elements in the middle.
>>> median([5, 4, 3, 2, 1])
3
>>> median([ 40, 30, 10, 20 ])
25
Step10: 3. NumPy
The NumPy library lets us do fast, simple computing with numbers in Python.
3.1. Arrays
The basic NumPy data type is the array, a homogeneously-typed sequential collection (a list of things that all have the same type). Arrays will most often contain strings, numbers, or other arrays.
Let's create some arrays
Step11: Math operations on arrays happen element-wise. Here's what we mean
Step12: This is not only very convenient (fewer for loops!) but also fast. NumPy is designed to run operations on arrays much faster than equivalent Python code on lists. Data science sometimes involves working with large datasets where speed is important - even the constant factors!
Jupyter pro-tip
Step13: Another Jupyter pro-tip
Step14: Question 2
Using the np.linspace function, create an array called xs that contains
100 evenly spaced points between 0 and 2 * np.pi. Then, create an array called ys that
contains the value of $ \sin{x} $ at each of those 100 points.
Hint
Step15: The plt.plot function from another library called matplotlib lets us make plots. It takes in
an array of x-values and a corresponding array of y-values. It makes a scatter plot of the (x, y) pairs and connects points with line segments. If you give it enough points, it will appear to create a smooth curve.
Let's plot the points you calculated in the previous question
Step16: This is a useful recipe for plotting any function
Step17: Calculating derivatives is an important operation in data science, but it can be difficult. We can have computers do it for us using a simple idea called numerical differentiation.
Consider the ith point (xs[i], ys[i]). The slope of sin at xs[i] is roughly the slope of the line connecting (xs[i], ys[i]) to the nearby point (xs[i+1], ys[i+1]). That slope is
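In code, one way to compute that difference quotient for every point at once is vectorized subtraction (a sketch of the idea using the xs and ys arrays from Question 2, not necessarily the exact intended solution):
slopes = (ys[1:] - ys[:-1]) / (xs[1:] - xs[:-1])   # slope of the segment from point i to point i+1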
Step18: Question 4
Plot the slopes you computed. Then plot cos on top of your plot, calling plt.plot again in the same cell. Did numerical differentiation work?
Note
Step19: In the plot above, it's probably not clear which curve is which. Examine the cell below to see how to plot your results with a legend.
Step20: 3.2. Multidimensional Arrays
A multidimensional array is a primitive version of a table, containing only one kind of data and having no column labels. A 2-dimensional array is useful for working with matrices of numbers.
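A tiny illustration of a 2-dimensional array (toy values, not part of the homework):
import numpy as np
table = np.array([[1, 2, 3],
                  [4, 5, 6]])
print(table.shape)     # (2, 3): 2 rows, 3 columns
print(table[1, 2])     # 6: row 1, column 2
print(table[0])        # [1 2 3]: the whole first row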
Step21: Arrays allow you to assign to multiple places at once. The special character
Step22: In fact, you can use arrays of indices to assign to multiple places. Study the next example and make sure you understand how it works.
Step23: Question 5
Create a 50x50 array called twice_identity that contains all zeros except on the
diagonal, where it contains the value 2.
Start by making a 50x50 array of all zeros, then set the values. Use indexing, not a for loop! (Don't use np.eye either, though you might find that function useful later.)
Step24: 4. A Picture Puzzle
Your boss has given you some strange text files. He says they're images,
some of which depict a summer scene and the rest a winter scene.
He demands that you figure out how to determine whether a given
text file represents a summer scene or a winter scene.
You receive 10 files, 1.txt through 10.txt. Peek at the files in a text
editor of your choice.
Question 6
How do you think the contents of the file are structured? Take your best guess.
SOLUTION
Step25: Each file begins with a line containing two numbers. After checking the length of
a file, you could notice that the product of these two numbers equals the number of
lines in each file (other than the first one).
This suggests the rows represent elements in a 2-dimensional grid. In fact, each
dataset represents an image!
On the first line, the first of the two numbers is
the height of the image (in pixels) and the second is the width (again in pixels).
Each line in the rest of the file contains the pixels of the image.
Each pixel is a triplet of numbers denoting how much red, green, and blue
the pixel contains, respectively.
In image processing, each column in one of these image files is called a channel
(disregarding line 1). So there are 3 channels
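One way such a file could be read into an array, sketched with a hypothetical helper name (the notebook's own functions may differ; this assumes pixels are listed row by row):
import numpy as np
def read_image_file(filename):
    with open(filename) as f:
        height, width = [int(x) for x in f.readline().split()]
        pixels = np.loadtxt(f, dtype=int)        # one row per pixel: red, green, blue
    return pixels.reshape(height, width, 3)      # height x width x 3 channels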
Step27: Question 9
Images in numpy are simply arrays, but we can also display them as
actual images in this notebook.
Use the provided show_images function to display image1. You may call it
like show_images(image1). If you later have multiple images to display, you
can call show_images([image1, image2]) to display them all at once.
The resulting image should look almost completely black. Why do you suppose
that is?
Step28: Question 10
If you look at the data, you'll notice all the numbers lie between 0 and 10.
In NumPy, a color intensity is an integer ranging from 0 to 255, where 0 is
no color (black). That's why the image is almost black. To see the image,
we'll need to rescale the numbers in the data to have a larger range.
Define a function expand_image_range that takes in an image. It returns a
new copy of the image with the following transformation
Step29: Question 11
Eureka! You've managed to reveal the image that the text file represents.
Now, define a function called reveal_file that takes in a filename
and returns an expanded image. This should be relatively easy since you've
defined functions for each step in the process.
Then, set expanded_images to a list of all the revealed images. There are
10 images to reveal (including the one you just revealed).
Finally, use show_images to display the expanded_images.
Step30: Notice that 5 of the above images are of summer scenes; the other 5
are of winter.
Think about how you'd distinguish between pictures of summer and winter. What
qualities of the image seem to signal to your brain that the image is one of
summer? Of winter?
One trait that seems specific to summer pictures is that the colors are warmer.
Let's see if the proportion of pixels of each color in the image can let us
distinguish between summer and winter pictures.
Question 12
To simplify things, we can categorize each pixel according to its most intense
(highest-value) channel. (Remember, red, green, and blue are the 3 channels.)
For example, we could just call a [2 4 0] pixel "green." If a pixel has a
tie between several channels, let's count it as none of them.
Write a function proportion_by_channel. It takes in an image. It assigns
each pixel to its greatest-intensity channel
Step31: Let's plot the proportions you computed above on a bar chart
Step32: Question 13
What do you notice about the colors present in the summer images compared to
the winter ones?
Use this info to write a function summer_or_winter. It takes in an image and
returns True if the image is a summer image and False if the image is a
winter image.
Do not hard-code the function to the 10 images you currently have (eg.
if image1, return False). We will run your function on other images
that we've reserved for testing.
You must classify all of the 10 provided images correctly to pass the test
for this function.
Step33: Congrats! You've created your very first classifier for this class.
Question 14
How do you think your classification function will perform
in general?
Why do you think it will perform that way?
What do you think would most likely give you false positives?
False negatives?
SOLUTION
Step34: 5. Submitting this assignment
First, run this cell to run all the autograder tests at once so you can double-
check your work.
Step35: Now, run this code in your terminal to make a
git commit
that saves a snapshot of your changes in git. The last line of the cell
runs git push, which will send your work to your personal Github repo.
```
Tell git to commit all the changes so far
git add -A
Tell git to make the commit
git commit -m "hw1 finished"
Send your updates to your personal private repo
git push origin master
```
Finally, we'll submit the assignment to OkPy so that the staff will know to
grade it. You can submit as many times as you want and you can choose which
submission you want us to grade by going to https | <ASSISTANT_TASK:>
Python Code:
!pip install -U okpy
Explanation: Homework 1: Setup and (Re-)Introduction to Python
Course Policies
Here are some important course policies. These are also located at
http://www.ds100.org/sp17/.
Tentative Grading
There will be 7 challenging homework assignments. Homeworks must be completed
individually and will mix programming and short answer questions. At the end of
each week of instruction we will have an online multiple choice quiz ("vitamin") that will
help you stay up-to-date with lecture materials. Labs assignments will be
graded for completion and are intended to help with the homework assignments.
40% Homeworks
13% Vitamins
7% Labs
15% Midterm
25% Final
Collaboration Policy
Data science is a collaborative activity. While you may talk with others about
the homework, we ask that you write your solutions individually. If you do
discuss the assignments with others please include their names at the top
of your solution. Keep in mind that content from the homework and vitamins will
likely be covered on both the midterm and final.
This assignment
In this assignment, you'll learn (or review):
How to set up Jupyter on your own computer.
How to check out and submit assignments for this class.
Python basics, like defining functions.
How to use the numpy library to compute with arrays of numbers.
1. Setup
If you haven't already, read through the instructions at
http://www.ds100.org/spring-2017/setup.
The instructions for submission are at the end of this notebook.
First, let's make sure you have the latest version of okpy.
End of explanation
import math
import numpy as np
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
from datascience import *
from client.api.notebook import Notebook
ok = Notebook('hw1.ok')
Explanation: If you've set up your environment properly, this cell should run without problems:
End of explanation
ok.auth(inline=True)
Explanation: Now, run this cell to log into OkPy.
This is the submission system for the class; you will use this
website to confirm that you've submitted your assignment.
End of explanation
2 + 2
# This is a comment.
# In Python, the ** operator performs exponentiation.
math.e**(-2)
print("Hello" + ",", "world!")
"Hello, cell output!"
def add2(x):
"""This docstring explains what this function does: it adds 2 to a number."""
return x + 2
def makeAdder(amount):
"""Make a function that adds the given amount to a number."""
def addAmount(x):
return x + amount
return addAmount
add3 = makeAdder(3)
add3(4)
# add4 is very similar to add2, but it's been created using a lambda expression.
add4 = lambda x: x + 4
add4(5)
sameAsMakeAdder = lambda amount: lambda x: x + amount
add5 = sameAsMakeAdder(5)
add5(6)
def fib(n):
if n <= 1:
return 1
# Functions can call themselves recursively.
return fib(n-1) + fib(n-2)
fib(4)
# A for loop repeats a block of code once for each
# element in a given collection.
for i in range(5):
if i % 2 == 0:
print(2**i)
else:
print("Odd power of 2")
# A list comprehension is a convenient way to apply a function
# to each element in a given collection.
# The String method join appends together all its arguments
# separated by the given string. So we append each element produced
# by the list comprehension, each separated by a newline ("\n").
print("\n".join([str(2**i) if i % 2 == 0 else "Odd power of 2" for i in range(5)]))
Explanation: 2. Python
Python is the main programming language we'll use in this course. We assume you have some experience with Python or can learn it yourself, but here is a brief review.
Below are some simple Python code fragments.
You should feel confident explaining what each fragment is doing. If not,
please brush up on your Python. There are a number of tutorials online (search
for "Python tutorial"). https://docs.python.org/3/tutorial/ is a good place to
start.
End of explanation
def nums_reversed(n):
return " ".join([str(i) for i in range(n, 0, -1)]) #SOLUTION
_ = ok.grade('q01a')
_ = ok.backup()
Explanation: Question 1
Question 1a
Write a function nums_reversed that takes in an integer n and returns a string
containing the numbers 1 through n including n in reverse order, separated
by spaces. For example:
>>> nums_reversed(5)
'5 4 3 2 1'
Note: The ellipsis (...) indicates something you should fill in. It doesn't necessarily imply you should replace it with only one line of code.
End of explanation
def string_splosion(string):
if string == '':
return ''
return string_splosion(string[:-1]) + string
_ = ok.grade('q01b')
_ = ok.backup()
Explanation: Question 1b
Write a function string_splosion that takes in a non-empty string like
"Code" and returns a long string containing every prefix of the input.
For example:
>>> string_splosion('Code')
'CCoCodCode'
>>> string_splosion('data!')
'ddadatdatadata!'
>>> string_splosion('hi')
'hhi'
End of explanation
def double100(nums):
if len(nums) < 2: return False
if nums[0] == nums[1] == 100: return True
return double100(nums[1:])
_ = ok.grade('q01c')
_ = ok.backup()
Explanation: Question 1c
Write a function double100 that takes in a list of integers
and returns True only if the list has two 100s next to each other.
>>> double100([100, 2, 3, 100])
False
>>> double100([2, 3, 100, 100, 5])
True
End of explanation
def median(number_list):
n = len(number_list)
in_order = sorted(number_list)
if n % 2 == 1:
return in_order[n // 2]
else:
return (in_order[n // 2 - 1] + in_order[n // 2]) / 2
_ = ok.grade('q01d')
_ = ok.backup()
Explanation: Question 1d
Write a function median that takes in a list of numbers
and returns the median element of the list. If the list has even
length, it returns the mean of the two elements in the middle.
>>> median([5, 4, 3, 2, 1])
3
>>> median([ 40, 30, 10, 20 ])
25
End of explanation
array1 = np.array([2, 3, 4, 5])
array2 = np.arange(4)
array1, array2
Explanation: 3. NumPy
The NumPy library lets us do fast, simple computing with numbers in Python.
3.1. Arrays
The basic NumPy data type is the array, a homogeneously-typed sequential collection (a list of things that all have the same type). Arrays will most often contain strings, numbers, or other arrays.
Let's create some arrays:
End of explanation
array1 * 2
array1 * array2
array1 ** array2
Explanation: Math operations on arrays happen element-wise. Here's what we mean:
End of explanation
np.arange?
Explanation: This is not only very convenient (fewer for loops!) but also fast. NumPy is designed to run operations on arrays much faster than equivalent Python code on lists. Data science sometimes involves working with large datasets where speed is important - even the constant factors!
Jupyter pro-tip: Pull up the docs for any function in Jupyter by running a cell with
the function name and a ? at the end:
End of explanation
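# A quick, informal check of the speed claim above (a rough sketch; exact
# numbers will vary by machine). We square a million numbers with a plain
# Python list comprehension and then with a vectorized NumPy operation.
import time

nums_list = list(range(10**6))
nums_array = np.arange(10**6)

start = time.time()
squares_list = [x**2 for x in nums_list]
print("list comprehension: {:.3f} s".format(time.time() - start))

start = time.time()
squares_array = nums_array**2
print("numpy vectorized:   {:.3f} s".format(time.time() - start))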
np.linspace
Explanation: Another Jupyter pro-tip: Pull up the docs for any function in Jupyter by typing the function
name, then <Shift>-<Tab> on your keyboard. Super convenient when you forget the order
of the arguments to a function. You can press <Tab> multiple times to expand the docs.
Try it on the function below:
End of explanation
xs = np.linspace(0, 2 * np.pi, 100) #SOLUTION
ys = np.sin(xs) #SOLUTION
_ = ok.grade('q02')
_ = ok.backup()
Explanation: Question 2
Using the np.linspace function, create an array called xs that contains
100 evenly spaced points between 0 and 2 * np.pi. Then, create an array called ys that
contains the value of $ \sin{x} $ at each of those 100 points.
Hint: Use the np.sin function. You should be able to define each variable with one line of code.
End of explanation
plt.plot(xs, ys)
Explanation: The plt.plot function from another library called matplotlib lets us make plots. It takes in
an array of x-values and a corresponding array of y-values. It makes a scatter plot of the (x, y) pairs and connects points with line segments. If you give it enough points, it will appear to create a smooth curve.
Let's plot the points you calculated in the previous question:
End of explanation
# Try plotting cos here.
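# One way to fill in this cell: reuse the xs grid from Question 2 and plot
# cos at the same points. Its peaks line up with where sin climbs fastest.
plt.plot(xs, np.cos(xs))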
Explanation: This is a useful recipe for plotting any function:
1. Use linspace or arange to make a range of x-values.
2. Apply the function to each point to produce y-values.
3. Plot the points.
You might remember from calculus that the derivative of the sin function is the cos function. That means that the slope of the curve you plotted above at any point xs[i] is given by cos(xs[i]). You can try verifying this by plotting cos in the next cell.
End of explanation
def derivative(xvals, yvals):
return (yvals[1:] - yvals[:-1]) / (xvals[1:] - xvals[:-1]) #SOLUTION
slopes = derivative(xs, ys) #SOLUTION
slopes[:5]
_ = ok.grade('q03')
_ = ok.backup()
Explanation: Calculating derivatives is an important operation in data science, but it can be difficult. We can have computers do it for us using a simple idea called numerical differentiation.
Consider the ith point (xs[i], ys[i]). The slope of sin at xs[i] is roughly the slope of the line connecting (xs[i], ys[i]) to the nearby point (xs[i+1], ys[i+1]). That slope is:
(ys[i+1] - ys[i]) / (xs[i+1] - xs[i])
If the difference between xs[i+1] and xs[i] were infinitesimal, we'd have exactly the derivative. In numerical differentiation we take advantage of the fact that it's often good enough to use "really small" differences instead.
Question 3
Define a function called derivative that takes in an array of x-values and their
corresponding y-values and computes the slope of the line connecting each point to the next point.
>>> derivative(np.array([0, 1, 2]), np.array([2, 4, 6]))
np.array([2., 2.])
>>> derivative(np.arange(5), np.arange(5) ** 2)
np.array([0., 2., 4., 6.])
Notice that the output array has one less element than the inputs since we can't
find the slope for the last point.
It's possible to do this in one short line using slicing, but feel free to use whatever method you know.
Then, use your derivative function to compute the slopes for each point in xs, ys.
Store the slopes in an array called slopes.
End of explanation
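# An optional sanity check of derivative (not required by the question): the
# numerical slope of x**2 should sit within one grid spacing of the true
# derivative 2*x evaluated at the left endpoints.
check_x = np.linspace(0, 1, 1000)
check_slopes = derivative(check_x, check_x**2)
print(np.max(np.abs(check_slopes - 2 * check_x[:-1])))  # about 1/1000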
plt.plot(xs[:-1], slopes) #SOLUTION
plt.plot(xs[:-1], np.cos(xs[:-1])) #SOLUTION
Explanation: Question 4
Plot the slopes you computed. Then plot cos on top of your plot, calling plt.plot again in the same cell. Did numerical differentiation work?
Note: Since we have only 99 slopes, you'll need to take off the last x-value before plotting to avoid an error.
End of explanation
plt.plot(xs[:-1], slopes, label="Numerical derivative")
plt.plot(xs[:-1], np.cos(xs[:-1]), label="True derivative")
# You can just call plt.legend(), but the legend will cover up
# some of the graph. Use bbox_to_anchor=(x,y) to set the x-
# and y-coordinates of the center-left point of the legend,
# where, for example, (0, 0) is the bottom-left of the graph
# and (1, .5) is all the way to the right and halfway up.
plt.legend(bbox_to_anchor=(1, .5), loc="center left");
Explanation: In the plot above, it's probably not clear which curve is which. Examine the cell below to see how to plot your results with a legend.
End of explanation
# The zeros function creates an array with the given shape.
# For a 2-dimensional array like this one, the first
# coordinate says how far the array goes *down*, and the
# second says how far it goes *right*.
array3 = np.zeros((4, 5))
array3
# The shape attribute returns the dimensions of the array.
array3.shape
# You can think of array3 as an array containing 4 arrays, each
# containing 5 zeros. Accordingly, we can set or get the third
# element of the second array in array 3 using standard Python
# array indexing syntax twice:
array3[1][2] = 7
array3
# This comes up so often that there is special syntax provided
# for it. The comma syntax is equivalent to using multiple
# brackets:
array3[1, 2] = 8
array3
Explanation: 3.2. Multidimensional Arrays
A multidimensional array is a primitive version of a table, containing only one kind of data and having no column labels. A 2-dimensional array is useful for working with matrices of numbers.
End of explanation
array4 = np.zeros((3, 5))
array4[:, 2] = 5
array4
Explanation: Arrays allow you to assign to multiple places at once. The special character : means "everything."
End of explanation
array5 = np.zeros((3, 5))
rows = np.array([1, 0, 2])
cols = np.array([3, 1, 4])
# Indices (1,3), (0,1), and (2,4) will be set.
array5[rows, cols] = 3
array5
Explanation: In fact, you can use arrays of indices to assign to multiple places. Study the next example and make sure you understand how it works.
End of explanation
twice_identity = np.zeros((50, 50))
diagonal = np.arange(50)
twice_identity[diagonal, diagonal] = 2
twice_identity
_ = ok.grade('q05')
_ = ok.backup()
Explanation: Question 5
Create a 50x50 array called twice_identity that contains all zeros except on the
diagonal, where it contains the value 2.
Start by making a 50x50 array of all zeros, then set the values. Use indexing, not a for loop! (Don't use np.eye either, though you might find that function useful later.)
End of explanation
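# An optional check (np.eye is mentioned above): the array we built should
# match twice the 50x50 identity matrix exactly.
print(np.array_equal(twice_identity, 2 * np.eye(50)))  # expect True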
def read_file_lines(filename):
with open(filename, 'r') as f: #SOLUTION
return f.readlines() #SOLUTION
file1 = read_file_lines('1.txt') #SOLUTION
file1[:5]
_ = ok.grade('q07')
_ = ok.backup()
Explanation: 4. A Picture Puzzle
Your boss has given you some strange text files. He says they're images,
some of which depict a summer scene and the rest a winter scene.
He demands that you figure out how to determine whether a given
text file represents a summer scene or a winter scene.
You receive 10 files, 1.txt through 10.txt. Peek at the files in a text
editor of your choice.
Question 6
How do you think the contents of the file are structured? Take your best guess.
SOLUTION: Here is what we can tell just by looking at the files. They seem to be tables organized in a space-separated values format. All the files have 3 integer columns, so they're probably 10 different datasets of the same type. The first row always has 2 numbers and looks special; it probably contains some kind of metadata that tells us how to interpret the rows.
Question 7
Create a function called read_file_lines that takes in a filename as its argument.
This function should return a Python list containing the lines of the
file as strings. That is, if 1.txt contains:
1 2 3
3 4 5
7 8 9
the return value should be: ['1 2 3\n', '3 4 5\n', '7 8 9\n'].
Then, use the read_file_lines function on the file 1.txt, reading the contents
into a variable called file1.
Hint: Check out this Stack Overflow page on reading lines of files.
End of explanation
def lines_to_image(file_lines):
rows, cols = [int(num) for num in file_lines[0].split()]
stripped_lines = [line.strip() for line in file_lines[1:]]
triplets = [[int(num) for num in line.split()]
for line in stripped_lines]
triplet_arr = np.array(triplets)
image_array = triplet_arr.reshape((rows, cols, 3))
return image_array.astype(np.uint8)
image1 = lines_to_image(file1)
image1.shape
_ = ok.grade('q08')
_ = ok.backup()
Explanation: Each file begins with a line containing two numbers. After checking the length of
a file, you could notice that the product of these two numbers equals the number of
lines in each file (other than the first one).
This suggests the rows represent elements in a 2-dimensional grid. In fact, each
dataset represents an image!
On the first line, the first of the two numbers is
the height of the image (in pixels) and the second is the width (again in pixels).
Each line in the rest of the file contains the pixels of the image.
Each pixel is a triplet of numbers denoting how much red, green, and blue
the pixel contains, respectively.
In image processing, each column in one of these image files is called a channel
(disregarding line 1). So there are 3 channels: red, green, and blue.
Question 8
Define a function called lines_to_image that takes in the contents of a
file as a list (such as file1). It should return an array containing integers of
shape (n_rows, n_cols, 3). That is, it contains the pixel triplets organized in the
correct number of rows and columns.
For example, if the file originally contained:
4 2
0 0 0
10 10 10
2 2 2
3 3 3
4 4 4
5 5 5
6 6 6
7 7 7
The resulting array should be a 3-dimensional array that looks like this:
array([
[ [0,0,0], [10,10,10] ],
[ [2,2,2], [3,3,3] ],
[ [4,4,4], [5,5,5] ],
[ [6,6,6], [7,7,7] ]
])
The string method split and the function np.reshape might be useful.
Important note: You must call .astype(np.uint8) on the final array before
returning so that numpy will recognize the array represents an image.
Once you've defined the function, set image1 to the result of calling
lines_to_image on file1.
End of explanation
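# A small illustration of the conversion, using the example lines from the
# question text, to show how the flat triplets become a (rows, cols, 3) array.
example_lines = ['4 2\n', '0 0 0\n', '10 10 10\n', '2 2 2\n', '3 3 3\n',
                 '4 4 4\n', '5 5 5\n', '6 6 6\n', '7 7 7\n']
example_image = lines_to_image(example_lines)
print(example_image.shape)   # (4, 2, 3)
print(example_image[0, 1])   # second pixel of the first row: [10 10 10]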
def show_images(images, ncols=2, figsize=(10, 7), **kwargs):
"""Shows one or more color images.

images: Image or list of images. Each image is a 3-dimensional
array, where dimension 1 indexes height and dimension 2
the width. Dimension 3 indexes the 3 color values red,
blue, and green (so it always has length 3).
"""
def show_image(image, axis=plt):
plt.imshow(image, **kwargs)
if not (isinstance(images, list) or isinstance(images, tuple)):
images = [images]
images = [image.astype(np.uint8) for image in images]
nrows = math.ceil(len(images) / ncols)
ncols = min(len(images), ncols)
plt.figure(figsize=figsize)
for i, image in enumerate(images):
axis = plt.subplot2grid(
(nrows, ncols),
(i // ncols, i % ncols),
)
axis.tick_params(bottom='off', left='off', top='off', right='off',
labelleft='off', labelbottom='off')
axis.grid(False)
show_image(image, axis)
# Show image1 here:
show_images(image1) #SOLUTION
Explanation: Question 9
Images in numpy are simply arrays, but we can also display them them as
actual images in this notebook.
Use the provided show_images function to display image1. You may call it
like show_images(image1). If you later have multiple images to display, you
can call show_images([image1, image2]) to display them all at once.
The resulting image should look almost completely black. Why do you suppose
that is?
End of explanation
# This array is provided for your convenience.
transformed = np.array([12, 37, 65, 89, 114, 137, 162, 187, 214, 240, 250])
def expand_image_range(image):
return transformed[image] #SOLUTION
expanded1 = expand_image_range(image1) #SOLUTION
show_images(expanded1)
_ = ok.grade('q10')
_ = ok.backup()
Explanation: Question 10
If you look at the data, you'll notice all the numbers lie between 0 and 10.
In NumPy, a color intensity is an integer ranging from 0 to 255, where 0 is
no color (black). That's why the image is almost black. To see the image,
we'll need to rescale the numbers in the data to have a larger range.
Define a function expand_image_range that takes in an image. It returns a
new copy of the image with the following transformation:
old value | new value
========= | =========
0 | 12
1 | 37
2 | 65
3 | 89
4 | 114
5 | 137
6 | 162
7 | 187
8 | 214
9 | 240
10 | 250
This expands the color range of the image. For example, a pixel that previously
had the value [5 5 5] (almost-black) will now have the value [137 137 137]
(gray).
Set expanded1 to the expanded image1, then display it with show_images.
This page
from the numpy docs has some useful information that will allow you
to use indexing instead of for loops.
However, the slickest implementation uses one very short line of code.
Hint: If you index an array with another array or list as in question 5, your
array (or list) of indices can contain repeats, as in array1[[0, 1, 0]].
Investigate what happens in that case.
End of explanation
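# A tiny experiment for the hint above: repeated indices are simply looked up
# again, which is exactly the lookup-table behaviour expand_image_range relies on.
lookup_demo = np.array([12, 37, 65])
print(lookup_demo[[0, 1, 0, 2, 2]])  # [12 37 12 65 65]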
def reveal_file(filename):
return expand_image_range(lines_to_image(read_file_lines(filename))) #SOLUTION
filenames = ['1.txt', '2.txt', '3.txt', '4.txt', '5.txt',
'6.txt', '7.txt', '8.txt', '9.txt', '10.txt']
expanded_images = [reveal_file(filename) for filename in filenames] #SOLUTION
show_images(expanded_images, ncols=5)
Explanation: Question 11
Eureka! You've managed to reveal the image that the text file represents.
Now, define a function called reveal_file that takes in a filename
and returns an expanded image. This should be relatively easy since you've
defined functions for each step in the process.
Then, set expanded_images to a list of all the revealed images. There are
10 images to reveal (including the one you just revealed).
Finally, use show_images to display the expanded_images.
End of explanation
def proportion_by_channel(image):
NUM_CHANNELS = 3
n_pixels = image.shape[0] * image.shape[1]
flattened = image.reshape((n_pixels, NUM_CHANNELS))
sorted_by_channel_value = np.sort(flattened, axis=1)
indices_with_winner = sorted_by_channel_value[:,NUM_CHANNELS-1] != sorted_by_channel_value[:,NUM_CHANNELS-2]
pixels_with_winner = flattened[indices_with_winner,:]
counts = np.bincount(np.argmax(pixels_with_winner, axis=1), minlength=NUM_CHANNELS)
return counts / n_pixels
image_proportions = np.array(
[proportion_by_channel(image) for image in expanded_images])
image_proportions
_ = ok.grade('q12')
_ = ok.backup()
Explanation: Notice that 5 of the above images are of summer scenes; the other 5
are of winter.
Think about how you'd distinguish between pictures of summer and winter. What
qualities of the image seem to signal to your brain that the image is one of
summer? Of winter?
One trait that seems specific to summer pictures is that the colors are warmer.
Let's see if the proportion of pixels of each color in the image can let us
distinguish between summer and winter pictures.
Question 12
To simplify things, we can categorize each pixel according to its most intense
(highest-value) channel. (Remember, red, green, and blue are the 3 channels.)
For example, we could just call a [2 4 0] pixel "green." If a pixel has a
tie between several channels, let's count it as none of them.
Write a function proportion_by_channel. It takes in an image. It assigns
each pixel to its greatest-intensity channel: red, green, or blue. Then
the function returns an array of length three containing the proportion of
pixels categorized as red, the proportion categorized as green, and the
proportion categorized as blue (respectively). (Again, don't count pixels
that are tied between 2 or 3 colors as any category, but do count them
in the denominator when you're computing proportions.)
For example:
```
test_im = np.array([
[ [5, 2, 2], [2, 5, 10] ]
])
proportion_by_channel(test_im)
array([ 0.5, 0, 0.5 ])
If tied, count neither as the highest
test_im = np.array([
[ [5, 2, 5], [2, 50, 50] ]
])
proportion_by_channel(test_im)
array([ 0, 0, 0 ])
```
Then, set image_proportions to the result of proportion_by_channel called
on each image in expanded_images as a 2d array.
Hint: It's fine to use a for loop, but for a difficult challenge, try
avoiding it. (As a side benefit, your code will be much faster.) Our solution
uses the NumPy functions np.reshape, np.sort, np.argmax, and np.bincount.
End of explanation
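# Optional: re-run the examples from the question text as a quick check.
check_im = np.array([[[5, 2, 2], [2, 5, 10]]])
print(proportion_by_channel(check_im))       # expect [0.5  0.   0.5]
check_im_tied = np.array([[[5, 2, 5], [2, 50, 50]]])
print(proportion_by_channel(check_im_tied))  # expect [0. 0. 0.]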
# You'll learn about Pandas and DataFrames soon.
import pandas as pd
pd.DataFrame({
'red': image_proportions[:, 0],
'green': image_proportions[:, 1],
'blue': image_proportions[:, 2]
}, index=pd.Series(['Image {}'.format(n) for n in range(1, 11)], name='image'))\
.iloc[::-1]\
.plot.barh();
Explanation: Let's plot the proportions you computed above on a bar chart:
End of explanation
RED_PROP_THRESHOLD = 0.25
# The easiest solution is to find the proportion of red pixels in the image and
# threshold on that.
def summer_or_winter(image):
return proportion_by_channel(image)[0] > RED_PROP_THRESHOLD
[summer_or_winter(image) for image in expanded_images]
_ = ok.grade('q13')
_ = ok.backup()
Explanation: Question 13
What do you notice about the colors present in the summer images compared to
the winter ones?
Use this info to write a function summer_or_winter. It takes in an image and
returns True if the image is a summer image and False if the image is a
winter image.
Do not hard-code the function to the 10 images you currently have (eg.
if image1, return False). We will run your function on other images
that we've reserved for testing.
You must classify all of the 10 provided images correctly to pass the test
for this function.
End of explanation
import skimage as sk
import skimage.io as skio
def read_image(filename):
'''Reads in an image from a filename'''
return skio.imread(filename)
def compress_image(im):
'''Takes an image as an array and compresses it to look black.'''
res = im / 25
return res.astype(np.uint8)
def to_text_file(im, filename):
'''
Takes in an image array and a filename for the resulting text file.
Creates the encoded text file for later decoding.
'''
h, w, c = im.shape
to_rgb = ' '.join
to_row = '\n'.join
to_lines = '\n'.join
rgb = [[to_rgb(triplet) for triplet in row] for row in im.astype(str)]
lines = to_lines([to_row(row) for row in rgb])
with open(filename, 'w') as f:
f.write('{} {}\n'.format(h, w))
f.write(lines)
f.write('\n')
summers = skio.imread_collection('orig/summer/*.jpg')
winters = skio.imread_collection('orig/winter/*.jpg')
len(summers)
sum_nums = np.array([ 5, 6, 9, 3, 2, 11, 12])
win_nums = np.array([ 10, 7, 8, 1, 4, 13, 14])
for im, n in zip(summers, sum_nums):
to_text_file(compress_image(im), '{}.txt'.format(n))
for im, n in zip(winters, win_nums):
to_text_file(compress_image(im), '{}.txt'.format(n))
Explanation: Congrats! You've created your very first classifier for this class.
Question 14
How do you think your classification function will perform
in general?
Why do you think it will perform that way?
What do you think would most likely give you false positives?
False negatives?
SOLUTION: The images on which we "trained" our classifier seem typical of images that try to portray summer and winter, so I would expect our function to work okay in general. (Perhaps 80% classification accuracy.) We are likely to get false positives from winter images that prominently feature fires or brown trees. We'd get false negatives from summer images that are mostly blue sky and green grass, like golf courses.
Final note: While our approach here is simplistic, skin color segmentation
-- figuring out which parts of the image belong to a human body -- is a
key step in many algorithms such as face detection.
Optional: Our code to encode images
Here are the functions we used to generate the text files for this assignment.
Feel free to send not-so-secret messages to your friends if you'd like.
End of explanation
_ = ok.grade_all()
Explanation: 5. Submitting this assignment
First, run this cell to run all the autograder tests at once so you can double-
check your work.
End of explanation
# Now, we'll submit to okpy
_ = ok.submit()
Explanation: Now, run this code in your terminal to make a
git commit
that saves a snapshot of your changes in git. The last line of the cell
runs git push, which will send your work to your personal Github repo.
```
Tell git to commit all the changes so far
git add -A
Tell git to make the commit
git commit -m "hw1 finished"
Send your updates to your personal private repo
git push origin master
```
Finally, we'll submit the assignment to OkPy so that the staff will know to
grade it. You can submit as many times as you want and you can choose which
submission you want us to grade by going to https://okpy.org/cal/data100/sp17/.
End of explanation
<END_TASK> |
251 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
Step1: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
Step2: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
Step3: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
Step4: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
Step5: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
Step6: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
Step7: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters
Step8: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
Step9: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
Step10: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note
Step11: Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: Checking out the data
This dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.
Below is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Here we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling target variables
To make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.
The scaling factors are saved so we can go backwards when we use the network for predictions.
End of explanation
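# The saved factors are what let us map network output back to real ride
# counts later. A one-line sketch of the inverse transform for 'cnt':
mean, std = scaled_features['cnt']
unscaled_cnt = data['cnt'] * std + mean   # undoes the standardization above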
# Save the last 21 days
test_data = data[-21*24:]
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing, and validation sets
We'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.
End of explanation
# Hold out the last 60 days of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.input_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
(self.output_nodes, self.hidden_nodes))
self.lr = learning_rate
#### Set this to your implemented sigmoid function ####
# Activation function is the sigmoid function
self.activation_function = lambda x: 1 / (1 + np.exp(-x))
def train(self, inputs_list, targets_list):
# Convert inputs list to 2d array
inputs = np.array(inputs_list, ndmin=2).T
targets = np.array(targets_list, ndmin=2).T
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
hidden_outputs = self.activation_function(hidden_inputs)
# TODO: Output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
final_outputs = final_inputs
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error
# Output layer error is the difference between desired target and actual output.
output_errors = targets - final_outputs
# TODO: Backpropagated error
# errors propagated to the hidden layer
hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors)
# hidden layer gradients
hidden_grad = hidden_outputs * (1 - hidden_outputs)
# TODO: Update the weights
# update hidden-to-output weights with gradient descent step
self.weights_hidden_to_output += self.lr * np.dot(output_errors, hidden_outputs.T)
# update input-to-hidden weights with gradient descent step
self.weights_input_to_hidden += self.lr * np.dot(hidden_errors * hidden_grad, inputs.T)
def run(self, inputs_list):
# Run a forward pass through the network
inputs = np.array(inputs_list, ndmin=2).T
#### Implement the forward pass here ####
# TODO: Hidden layer
#hidden_inputs = # signals into hidden layer
#hidden_outputs = # signals from hidden layer
hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
hidden_outputs = self.activation_function(hidden_inputs)
# TODO: Output layer
#final_inputs = # signals into final output layer
#final_outputs = # signals from final output layer
final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
final_outputs = final_inputs
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.
We use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.
Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.
Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
End of explanation
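# A lightweight sanity check you can run before full training (optional, not
# part of the project): repeatedly train on one example and watch the squared
# error shrink. If it grows instead, a sign or transpose is probably wrong.
check_net = NeuralNetwork(3, 2, 1, 0.5)
check_inputs, check_target = [0.5, -0.2, 0.1], [0.4]
print('error before:', float((check_net.run(check_inputs) - 0.4)**2))
for _ in range(50):
    check_net.train(check_inputs, check_target)
print('error after: ', float((check_net.run(check_inputs) - 0.4)**2))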
import sys
### Set the hyperparameters here ###
epochs = 2400
learning_rate = 0.0065
hidden_nodes = 12
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for e in range(epochs):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
for record, target in zip(train_features.ix[batch].values,
train_targets.ix[batch]['cnt']):
network.train(record, target)
# Printing out the training progress
train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
sys.stdout.write("\rProgress: " + str(100 * e/float(epochs))[:4] \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
Explanation: Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.
You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.
Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.
Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.
Choose the number of hidden nodes
The more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.
End of explanation
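# A rough sketch of how you might compare a few hidden-layer sizes before
# settling on one. The candidate sizes and the short 200-epoch budget are
# arbitrary choices made just to keep the comparison quick.
for n_hidden in [8, 12, 16]:
    candidate = NeuralNetwork(N_i, n_hidden, output_nodes, learning_rate)
    for e in range(200):
        batch = np.random.choice(train_features.index, size=128)
        for record, target in zip(train_features.ix[batch].values,
                                  train_targets.ix[batch]['cnt']):
            candidate.train(record, target)
    val_mse = MSE(candidate.run(val_features), val_targets['cnt'].values)
    print(n_hidden, 'hidden nodes -> validation MSE', val_mse)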
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features)*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
End of explanation
# Create a new data set to explore the neural network's performance in other parts of the data.
#Set for Dec 2011
cx_test_data = data[339*24:360*24]
cx_test_features, cx_test_targets = cx_test_data.drop(target_fields, axis=1), cx_test_data[target_fields]
#Plot predictions and data from Dec 2011 and Dec 2012
fig1, ax1 = plt.subplots(figsize=(14,6))
cx_predictions = network.run(cx_test_features)*std + mean
ax1.plot(cx_predictions[0], label='Prediction 2011')
ax1.plot((cx_test_targets['cnt']*std + mean).values, label='Data 2011')
ax1.set_xlim(right=len(cx_predictions))
ax1.legend()
dates1 = pd.to_datetime(rides.ix[cx_test_data.index]['dteday'])
dates1 = dates1.apply(lambda d: d.strftime('%b %d'))
ax1.set_xticks(np.arange(len(dates1))[12::24])
_ = ax1.set_xticklabels(dates1[12::24], rotation=45)
# Show the mean squared error for the test set and the new set
test_MSE = MSE(predictions[0], (test_targets['cnt']*std + mean).values)
new_MSE = MSE(cx_predictions[0], (cx_test_targets['cnt']*std + mean).values)
print ('test MSE : ', test_MSE)
print ('new MSE : ',new_MSE)
Explanation: Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?
Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter
Your answer below
The model performs well in predicting the number of cyclists, becoming very accurate in many parts of the dataset. However, the predictions at the end of the year, from 21 to 31 December, are not really accurate. It is easy to realize that this error has been introduced by ourselves. In the data preparation phase, the last 21 days were trimmed from the training set, so the predictions of the neural network respond to the trend shown by the data for December 2011 only. As can be seen in the graph below, the model makes better predictions for December 2011 and the mean squared error decreases by 75%. The test set should instead be selected randomly from across the whole dataset, so that no part of the year is missing from the training data.
End of explanation
import unittest
inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
[-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328, -0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, 0.39775194, -0.29887597],
[-0.20185996, 0.50074398, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
End of explanation
<END_TASK> |
252 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Welcome to the Tutorial!
First I'll introduce the theory behind neural nets. then we will implement one from scratch in numpy, (which is installed on the uni computers) - just type this code into your text editor of choice. I'll also show you how to define a neural net in googles DL library Tensorflow(which is not installed on the uni computers) and train it to clasify handwritten digits.
You will understand things better if you're familiar with calculus and linear algebra, but the only thing you really need to know is basic programming. Don't worry if you don't understand the equations.
Numpy/linear algebra crash course
(You should be able to run this all in python 2.7.8 on the uni computers.)
Vectors and matrices are the language of neural networks. For our purposes, a vector is a list of numbers and a matrix is a 2d grid of numbers. Both can be defined as instances of numpy's ndarray class
Step1: Putting an ndarray through a function will apply it elementwise
Step2: What is a neural network?
For our data-sciencey purposes, it's best to think of a neural network as a function approximator or a statistical model. Surprisingly enough they are made up of a network of neurons. What is a neuron?
WARNING
Step6: Learning
Well that's all very nice, but we need it to be able to learn | <ASSISTANT_TASK:>
Python Code:
import numpy as np
my_vector = np.asarray([1,2,3])
my_matrix = np.asarray([[1,2,3],[10,10,10]])
print(my_matrix*my_vector)
Explanation: Welcome to the Tutorial!
First I'll introduce the theory behind neural nets. Then we will implement one from scratch in NumPy (which is installed on the uni computers) - just type this code into your text editor of choice. I'll also show you how to define a neural net in Google's DL library TensorFlow (which is not installed on the uni computers) and train it to classify handwritten digits.
You will understand things better if you're familiar with calculus and linear algebra, but the only thing you really need to know is basic programming. Don't worry if you don't understand the equations.
Numpy/linear algebra crash course
(You should be able to run this all in python 2.7.8 on the uni computers.)
Vectors and matrices are the language of neural networks. For our purposes, a vector is a list of numbers and a matrix is a 2d grid of numbers. Both can be defined as instances of numpy's ndarray class:
End of explanation
print((my_matrix**2))
print((my_matrix))
Explanation: Putting an ndarray through a function will apply it elementwise:
End of explanation
def sigmoid(x):
return 1.0/(1.0+np.exp(-x))
hidden_1 = sigmoid(x.dot(W1) + b_1)   # assumes x, W1, b_1, W2, b_2 are already defined
output = hidden_1.dot(W2) + b_2
Explanation: What is a neural network?
For our data-sciencey purposes, it's best to think of a neural network as a function approximator or a statistical model. Surprisingly enough they are made up of a network of neurons. What is a neuron?
WARNING: huge oversimplification that will make neuroscientists cringe.
This is what a neuron in your brain looks like. On the right are the axons, on the left are the dendrites, which receive signals from the axons of other neurons. The dendrites are connected to the axons with synapses. If the neuron has enough voltage across, it will "spike" and send a signal through its axon to neighbouring neurons. Some synapses are excitatory in that if a signal goes through them it will increase the voltage across the next neuron, making it more likely to spike. Others are inhibitory
and do the opposite. We learn by changing the strengths of synapses(well, kinda), and that is also usually how artificial neural networks learn.
This is what the simplest possible artificial neuron looks like. This neuron is connected to two other input neurons named \(x_1 \) and \( x_2\) with "synapses" \(w_1\) and \(w_2\). All of these symbols are just numbers (real/float).
To get the neurons output signal \(h\), just sum the input neurons up, weighted by their "synapses" then put them through a nonlinear function \( f\):
$$ h = f(x_1 w_1 + x_2 w_2)$$
\(f\) can be anything that maps a real number to a real number, but for ML you want something nonlinear and smooth. For this neuron, \(f\) is the sigmoid function:
$$\sigma(x) = \frac{1}{1+e^{-x}} $$
Sigmoid squashes its output into [0,1], so it's closer to "fully firing" the more positive its input, and closer to "not firing" the more negative its input.
If you like to think in terms of graph theory, neurons are nodes and synapses are weighted edges connecting them.
If you have a stats background you might have noticed that this looks similar to a logistic regression on two variables. That's because it is!
As you can see, these artificial neurons are only loosely inspired by biological neurons. That's ok, our goal is to have a good model, not simulate a brain.
There are many exciting ways to arrange these neurons into a network, but we will focus on one of the easier, more useful topologies called a "two layer perceptron", which looks like this:
Neurons are arranged in layers, with the first hidden layer of neurons connected to a vector(think list of numbers) of input data, \(x\), sometimes referred to as an "input layer". Every neuron in a given layer is connected to every neuron in the previous layer.
$$net = \sum_{i=0}^{N}x_i w_i = \vec{x} \cdot \vec{w}$$
Where \(\vec{x}\) is a vector of previous layer's neuron activations and \(\vec{w} \) is a vector of the weights(synapses) for every \(x \in \vec{x} \).
Look back at the diagram again. Each of these 4 hidden units will have a vector of 3 weights for each of the inputs. We can arrange them as a 3x4 matrix of row vectors, which we call \(W_1\). Then we can multiply this matrix with \(\vec{x}\) and apply our nonlinearity \(f\) to get a vector of neuron activations:
$$\vec{h} = f( \vec{x} \cdot W_1 )$$
..actually, in practice we add a unique learnable "bias" \(b\) to every neuron's weighted sum, which has the effect of shifting the nonlinearity left or right:
$$\vec{h} = f( \vec{x} \cdot W_1 + \vec{b}_1 )$$
We pretty much do the same thing to get the output for the second hidden layer, but with a different weight matrix \(W_2\):
$$\vec{h_2} = f( \vec{h_1} \cdot W_2 + \vec{b}_2 )$$
So if we want to get an output for a given data vector x, we can just plug it into these equations. Here it is in numpy:
End of explanation
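# A concrete single-neuron example of h = f(x1*w1 + x2*w2), reusing the
# sigmoid defined above. The numbers here are made up purely for illustration.
x1, x2 = 1.0, 0.5        # input activations
w1, w2 = 0.8, -1.2       # "synapse" strengths (one excitatory, one inhibitory)
h = sigmoid(x1 * w1 + x2 * w2)
print(h)                  # about 0.55: a weighted sum of 0.2 squashed by sigmoid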
N,D = 300,2 # number of examples, dimension of examples
X = np.random.uniform(size=(N,D),low=0,high=20)
y = np.asarray([[X[i,0] * X[i,1]] for i in range(N)])  # shape (N, 1) so Y.shape works below

def relu(x):
    # rectified linear unit, used as the hidden nonlinearity below
    return np.maximum(0.0, x)

class TwoLayerPerceptron:
    """Simple implementation of the most basic neural net"""
    def __init__(self, X, H, Y):
        N, D = X.shape
        N, O = Y.shape
        # initialize the weights, or "connections between neurons", to random values.
        self.W1 = np.random.normal(size=(D, H))
        self.b1 = np.zeros(H)
        self.W2 = np.random.normal(size=(H, O))
        self.b2 = np.random.normal(size=(O,))

    def forward_pass(self, X):
        """Get the outputs for batch X, and a cache of hidden states for backprop"""
        hidden_inputs = X.dot(self.W1) + self.b1  # matrix multiply
        hidden_activations = relu(hidden_inputs)
        output = hidden_activations.dot(self.W2) + self.b2
        cache = [X, hidden_inputs, hidden_activations, output]
        return cache

    def backwards_pass(self, cache):
        [X, hidden_inputs, hidden_activations, output] = cache
        #//TODO: backwards pass
        return d_W1, d_W2, d_b1, d_b2

    def subtract_gradients(self, gradients, lr=0.001):
        [d_W1, d_W2, d_b1, d_b2] = gradients
        self.W1 -= lr * d_W1
        self.W2 -= lr * d_W2
        self.b1 -= lr * d_b1
        self.b2 -= lr * d_b2
# Worked-out gradient equations for one full-batch update: a sketch of what
# backwards_pass should compute, using a freshly initialized network.
net = TwoLayerPerceptron(X, 4, y)   # 4 hidden units, arbitrary choice
W1, b1, W2, b2 = net.W1, net.b1, net.W2, net.b2
hidden_inputs = np.dot(X, W1) + b1
hidden_activations = relu(hidden_inputs)
output = np.dot(hidden_activations, W2) + b2
errors = 0.5 * (output - y)**2              # squared error per example
d_output = output - y                       # dLoss/dOutput
d_h1 = np.dot(d_output, W2.T)               # error reaching the hidden layer
d_a1 = d_h1 * (hidden_inputs > 0)           # derivative of the relu used above
d_W2 = np.dot(hidden_activations.T, d_output)
d_b2 = np.sum(d_output, axis=0)
d_W1 = np.dot(X.T, d_a1)
d_b1 = np.sum(d_a1, axis=0)
lr = 0.001
W2 -= lr * d_W2
b2 -= lr * d_b2
W1 -= lr * d_W1
b1 -= lr * d_b1

from IPython.display import display, Math
display(Math(r'h_1 = \sigma(X \cdot W_1 + b)'))
Explanation: Learning
Well that's all very nice, but we need it to be able to learn
End of explanation
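# Once backwards_pass is filled in, training is just the three methods chained
# in a loop. A sketch of the intended usage (kept commented out because the
# TODO above still needs to be completed):
#
# net = TwoLayerPerceptron(X, 4, y)               # 4 hidden units, arbitrary
# for step in range(1000):
#     cache = net.forward_pass(X)
#     gradients = net.backwards_pass(cache)
#     net.subtract_gradients(gradients, lr=0.001)
#     if step % 100 == 0:
#         output = cache[-1]
#         print(step, np.mean(0.5 * (output - y)**2))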
<END_TASK> |
253 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including
Step1: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has
Step2: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with it's corresponding label in the title.
Step3: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define
Step4: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
Step5: Testing
After you're satisified with the training output and accuracy, you can then run the network on the test data set to measure it's performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 98% accuracy! Some simple models have been known to get up to 99.7% accuracy.
Step6: BUT WAIT!
Isn't column zero [
Step7: Column zero has 9020 zeros and 980 ones. So about one tenth of our 10000 images was a zero.
Column one is different
Step8: It has over 1000 ones. So there are more ones than zeros in the testY set.
Let's look at ALL the digits
Step9: As a check, adding up all the one-hot one's, we should get a total of 10000
Step10: Now let's look at our predictions in the same way
Step11: And what about our accuracy test?
This shows how we got 99.67% accuracy. 33 errors out of 10000.
Step12: But what about the other columns besides zero?
It turns out those other columns (the other digits) have different levels of error.
Up to 1.15% error for the nines. | <ASSISTANT_TASK:>
Python Code:
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
Explanation: Handwritten Number Recognition with TFLearn and MNIST
In this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9.
This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the MNIST data set, which consists of images of handwritten numbers and their correct labels 0-9.
We'll be using TFLearn, a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.
End of explanation
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
Explanation: Retrieving training and test data
The MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.
Each MNIST data point has:
1. an image of a handwritten digit and
2. a corresponding label (a number 0-9 that identifies the image)
We'll call the images, which will be the input to our neural network, X and their corresponding labels Y.
We're going to want our labels as one-hot vectors, which are vectors that holds mostly 0's and one 1. It's easiest to see this in a example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0].
Flattened data
For this example, we'll be using flattened data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values.
Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
End of explanation
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by its index in the MNIST set
def show_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
show_digit(0)
Explanation: Visualize the training data
Provided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function show_digit will display that training image along with its corresponding label in the title.
End of explanation
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
# Include the input layer, hidden layer(s), and set how you want to train the model
net = tflearn.input_data([None, 784])
net = tflearn.fully_connected(net, 128, activation='ReLU')
net = tflearn.fully_connected(net, 32, activation='ReLU')
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
# This model assumes that your network is named "net"
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
Explanation: Building the network
TFLearn lets you build the network by defining the layers in that network.
For this example, you'll define:
The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data.
Hidden layers, which recognize patterns in data and connect the input to the output layer, and
The output layer, which defines how the network learns and outputs a label for a given image.
Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,
net = tflearn.input_data([None, 100])
would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need 784 input units.
Adding layers
To add new hidden layers, you use
net = tflearn.fully_connected(net, n_units, activation='ReLU')
This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call; it designates the input to the hidden layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeatedly calling tflearn.fully_connected(net, n_units).
Then, to set how you train the network, use:
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
Again, this is passing in the network you've been building. The keywords:
optimizer sets the training method, here stochastic gradient descent
learning_rate is the learning rate
loss determines how the network error is calculated. In this example, with categorical cross-entropy.
Finally, you put all this together to create the model with tflearn.DNN(net).
Exercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
Hint: The final output layer must have 10 output nodes (one for each digit 0-9). It's also recommended to use a softmax activation layer as your final output layer.
End of explanation
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=8)
Explanation: Training the network
Now that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively.
Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
End of explanation
# Compare the labels that our model predicts with the actual labels
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
Explanation: Testing
After you're satisfied with the training output and accuracy, you can then run the network on the test data set to measure its performance! Remember, only do this after you've done the training and are satisfied with the results.
A good result will be higher than 98% accuracy! Some simple models have been known to get up to 99.7% accuracy.
End of explanation
np.bincount(testY[:,0].astype(np.int_))
Explanation: BUT WAIT!
Isn't column zero [:,0] just the prediction for the digit ZERO?
Let's look at our 10000 testY vectors. Each has a one-hot encoding of the digits 0-9.
End of explanation
np.bincount(testY[:,1].astype(np.int_))
Explanation: Column zero has 9020 zeros and 980 ones. So about one tenth of our 10000 images was a zero.
Column one is different:
End of explanation
for i in range(10):
print(i, np.bincount(testY[:,i].astype(np.int_)))
Explanation: It has over 1000 ones. So there are more ones than zeros in the testY set.
Let's look at ALL the digits:
End of explanation
980 + \
1135+ \
1032+ \
1010+ \
982 + \
892 + \
958 + \
1028+ \
974 + \
1009
Explanation: As a check, adding up all the one-hot ones, we should get a total of 10000:
End of explanation
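The same check in one line (an added sketch): every row of testY holds exactly one 1, so summing the whole array should give the number of test images.
print(testY.sum())        # expected 10000.0
print(testY.sum(axis=0))  # per-digit counts, matching the bincounts above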
for i in range(10):
print(i, np.bincount((np.array(model.predict(testX))[:,i] >= 0.5).astype(np.int_)))
Explanation: Now let's look at our predictions in the same way:
End of explanation
np.bincount(predictions == testY[:,0])
Explanation: And what about our accuracy test?
This shows how we got 99.67% accuracy. 33 errors out of 10000.
End of explanation
for i in range(10):
print(i, np.bincount((np.array(model.predict(testX))[:,i] >= 0.5).astype(np.int_) == testY[:,i]))
Explanation: But what about the other columns besides zero?
It turns out those other columns (the other digits) have different levels of error.
Up to 1.15% error for the nines.
End of explanation
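For comparison, a short added sketch (not part of the original notebook) of the overall accuracy computed across all ten columns at once, using argmax to pick the predicted digit:
pred_labels = np.array(model.predict(testX)).argmax(axis=1)
true_labels = testY.argmax(axis=1)
print("Overall accuracy:", np.mean(pred_labels == true_labels))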
<END_TASK> |
254 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Algorithms Exercise 3
Imports
Step2: Character counting and entropy
Write a function char_probs that takes a string and computes the probabilities of each character in the string
Step5: The entropy is a quantiative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as
Step6: Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
from IPython.html.widgets import interact
Explanation: Algorithms Exercise 3
Imports
End of explanation
def char_probs(s):
"""Find the probabilities of the unique characters in the string s.
Parameters
----------
s : str
    A string of characters.
Returns
-------
probs : dict
    A dictionary whose keys are the unique characters in s and whose values
    are the probabilities of those characters.
"""
dictionary = {}
for n in s:
dictionary[n]= (s.count(n))/len(s)
return dictionary
test1 = char_probs('aaaa')
assert np.allclose(test1['a'], 1.0)
test2 = char_probs('aabb')
assert np.allclose(test2['a'], 0.5)
assert np.allclose(test2['b'], 0.5)
test3 = char_probs('abcd')
assert np.allclose(test3['a'], 0.25)
assert np.allclose(test3['b'], 0.25)
assert np.allclose(test3['c'], 0.25)
assert np.allclose(test3['d'], 0.25)
Explanation: Character counting and entropy
Write a function char_probs that takes a string and computes the probabilities of each character in the string:
First do a character count and store the result in a dictionary.
Then divide each character count by the total number of characters to compute the normalized probabilities.
Return the dictionary of characters (keys) and probabilities (values).
End of explanation
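An equivalent sketch using collections.Counter, which avoids the repeated str.count calls in the solution above (an illustrative addition, not the graded solution):
from collections import Counter
def char_probs_counter(s):
    counts = Counter(s)
    n = len(s)
    return {c: k / n for c, k in counts.items()}
print(char_probs_counter('aabb'))  # {'a': 0.5, 'b': 0.5}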
def entropy(d):
"""Compute the entropy of a dict d whose values are probabilities.
Return the entropy H as a float.
"""
#t = np.array(d)
#t = np.sort(t)
H = 0
l = [(i,d[i]) for i in d]
t = sorted(l, key = lambda x:x[1], reverse = True)
for n in t:
H = H + (n[1])*np.log2(n[1])
#t = char_probs(t)*np.log2(char_probs(t))
return -H
entropy({'a': 0.5, 'b': 0.5})
assert np.allclose(entropy({'a': 0.5, 'b': 0.5}), 1.0)
assert np.allclose(entropy({'a': 1.0}), 0.0)
Explanation: The entropy is a quantitative measure of the disorder of a probability distribution. It is used extensively in Physics, Statistics, Machine Learning, Computer Science and Information Science. Given a set of probabilities $P_i$, the entropy is defined as:
$$H = - \sum_i P_i \log_2(P_i)$$
In this expression $\log_2$ is the base 2 log (np.log2), which is commonly used in information science. In Physics the natural log is often used in the definition of entropy.
Write a function entropy that computes the entropy of a probability distribution. The probability distribution will be passed as a Python dict: the values in the dict will be the probabilities.
To compute the entropy, you should:
First convert the values (probabilities) of the dict to a Numpy array of probabilities.
Then use other Numpy functions (np.log2, etc.) to compute the entropy.
Don't use any for or while loops in your code.
End of explanation
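A vectorized sketch that follows the instructions above (convert the dict values to a numpy array, no explicit loops); it is an added example, not the graded solution:
def entropy_vectorized(d):
    p = np.array(list(d.values()))
    return -np.sum(p * np.log2(p))
print(entropy_vectorized({'a': 0.5, 'b': 0.5}))  # 1.0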
def z(x):
print(entropy(char_probs(x)))
return entropy(char_probs(x))
interact(z, x='string');
assert True # use this cell for grading the entropy widget above
Explanation: Use IPython's interact function to create a user interface that allows you to type a string into a text box and see the entropy of the character probabilities of the string.
End of explanation
<END_TASK> |
255 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Example H SAF Metop ASCAT SSM DR products
The following example shows how to read and plot H SAF Metop ASCAT SSM data record products using the test data included in the ascat package.
Step1: A soil moisture time series is read for a specific grid point. The data attribute contains a pandas.DataFrame object.
Step2: Time series plots
A simple time series plot of surface soil moisture can be created using matplotlib.
Step3: The SSM data record H109 can be extended using H110, representing a consistent continuation of the data set
Step4: A soil moisture time series can also be plotted using the plot function provided by the pandas.DataFrame. The following example displays several variables stored in the time series.
Step5: Masking invalid soil moisture measurements
In order to mask invalid/suspicious soil moisture measurements, the confidence flag can be used. It masks soil moisture measurements with a frozen or snow cover probability > 50% and using the Surface State Flag (SSF).
Step6: Differentiate between soil moisture from Metop satellites
The sat_id field can be used to differentiate between
Step7: Convert to absolute surface soil moisture
It is possible to convert relative surface soil moisture given in degree of saturation into absolute soil moisture ($m^3 m^{-3}$) using the absolute_sm keyword during reading. Porosity information provided by Noah GLDAS and pre-computed porosity from the Harmonized World Soil Database (HWSD) using the formulas of Saxton and Rawls (2006) is used to produce volumetric surface soil moisture expressed in $m^{3} m^{-3}$. | <ASSISTANT_TASK:>
Python Code:
import os
import matplotlib.pyplot as plt
from ascat.h_saf import AscatSsmDataRecord
test_data_path = os.path.join('..', 'tests','ascat_test_data', 'hsaf')
h109_path = os.path.join(test_data_path, 'h109')
h110_path = os.path.join(test_data_path, 'h110')
h111_path = os.path.join(test_data_path, 'h111')
grid_path = os.path.join(test_data_path, 'grid')
static_layer_path = os.path.join(test_data_path, 'static_layer')
h109_dr = AscatSsmDataRecord(h109_path, grid_path, static_layer_path=static_layer_path)
h110_dr = AscatSsmDataRecord(h110_path, grid_path, static_layer_path=static_layer_path)
h111_dr = AscatSsmDataRecord(h111_path, grid_path, static_layer_path=static_layer_path)
Explanation: Example H SAF Metop ASCAT SSM DR products
The following example shows how to read and plot H SAF Metop ASCAT SSM data record products using the test data included in the ascat package.
End of explanation
gpi = 2501225
h109_ts = h109_dr.read(gpi)
Explanation: A soil moisture time series is read for a specific grid point. The data attribute contains a pandas.DataFrame object.
End of explanation
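A quick look at what was returned (an added sketch; it assumes read() gives back the DataFrame that is indexed in the cells below):
print(type(h109_ts))
print(h109_ts.head())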
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
ax.plot(h109_ts['sm'], label='Metop ASCAT SSM Data Record (H109)')
ax.set_ylabel('Degree of Saturation (%)')
ax.legend()
Explanation: Time series plots
A simple time series plot of surface soil moisture can be created using matplotlib.
End of explanation
h110_ts = h110_dr.read(gpi)
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
ax.plot(h109_ts['sm'], label='Metop ASCAT SSM Data Record (H109)')
ax.plot(h110_ts['sm'], label='Metop ASCAT SSM Data Record Extension (H110)')
ax.set_ylabel('Degree of Saturation (%)')
ax.legend()
Explanation: The SSM data record H109 can be extended using H110, representing a consistent continuation of the data set
End of explanation
fields = ['sm', 'sm_noise', 'ssf', 'snow_prob', 'frozen_prob']
h111_ts = h111_dr.read(gpi)
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
h111_ts[fields].plot(ax=ax)
ax.legend()
Explanation: A soil moisture time series can also be plotted using the plot function provided by the pandas.DataFrame. The following example displays several variables stored in the time series.
End of explanation
conf_flag_ok = h111_ts['conf_flag'] == 0
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
h111_ts[conf_flag_ok][fields].plot(ax=ax)
ax.legend()
Explanation: Masking invalid soil moisture measurements
In order to mask invalid/suspicious soil moisture measurements, the confidence flag can be used. It masks soil moisture measurements with a frozen or snow cover probability > 50% and using the Surface State Flag (SSF).
End of explanation
metop_a = h111_ts[conf_flag_ok]['sat_id'] == 3
metop_b = h111_ts[conf_flag_ok]['sat_id'] == 4
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
h111_ts[conf_flag_ok]['sm'][metop_a].plot(ax=ax, ls='none', marker='o',
color='C1', fillstyle='none', label='Metop-A SSM')
h111_ts[conf_flag_ok]['sm'][metop_b].plot(ax=ax, ls='none', marker='o',
color='C0', fillstyle='none', label='Metop-B SSM')
ax.set_ylabel('Degree of Saturation (%)')
ax.legend()
Explanation: Differentiate between soil moisture from Metop satellites
The sat_id field can be used to differentiate between: Metop-A (sat_id=3), Metop-B (sat_id=4) and Metop-C (sat_id=5).
End of explanation
h111_ts = h111_dr.read(gpi, absolute_sm=True)
conf_flag_ok = h111_ts['conf_flag'] == 0
fig, ax = plt.subplots(1, 1, figsize=(15, 5))
h111_ts[conf_flag_ok]['abs_sm_gldas'].plot(ax=ax, label='Absolute SSM using porosity from Noah GLDAS')
h111_ts[conf_flag_ok]['abs_sm_hwsd'].plot(ax=ax, label='Absolute SSM using porosity from HWSD')
ax.set_ylabel('Vol. soil moisture ($m^3 m^{-3}$)')
ax.legend()
Explanation: Convert to absolute surface soil moisture
It is possible to convert relative surface soil moisture, given in degree of saturation, into absolute soil moisture ($m^3 m^{-3}$) using the absolute_sm keyword during reading. Porosity information provided by Noah GLDAS, together with porosity pre-computed from the Harmonized World Soil Database (HWSD) using the formulas of Saxton and Rawls (2006), is used to produce volumetric surface soil moisture expressed in $m^3 m^{-3}$.
End of explanation
<END_TASK> |
256 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
GEE nested covariance structure simulation study
This notebook is a simulation study that illustrates and evaluates the performance of the GEE nested covariance structure.
A nested covariance structure is based on a nested sequence of groups, or "levels". The top level in the hierarchy is defined by the groups argument to GEE. Subsequent levels are defined by the dep_data argument to GEE.
Step1: Set the number of covariates.
Step2: These parameters define the population variance for each level of grouping.
Step3: Set the number of groups
Step4: Set the number of observations at each level of grouping. Here, everything is balanced, i.e. within a level every group has the same size.
Step5: Calculate the total sample size.
Step6: Construct the design matrix.
Step7: Construct labels showing which group each observation belongs to at each level.
Step8: Simulate the random effects.
Step9: Simulate the response variable.
Step10: Put everything into a dataframe.
Step11: Fit the model.
Step12: The estimated covariance parameters should be similar to groups_var, level1_var, etc. as defined above. | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import statsmodels.api as sm
Explanation: GEE nested covariance structure simulation study
This notebook is a simulation study that illustrates and evaluates the performance of the GEE nested covariance structure.
A nested covariance structure is based on a nested sequence of groups, or "levels". The top level in the hierarchy is defined by the groups argument to GEE. Subsequent levels are defined by the dep_data argument to GEE.
End of explanation
p = 5
Explanation: Set the number of covariates.
End of explanation
groups_var = 1
level1_var = 2
level2_var = 3
resid_var = 4
Explanation: These parameters define the population variance for each level of grouping.
End of explanation
n_groups = 100
Explanation: Set the number of groups
End of explanation
group_size = 20
level1_size = 10
level2_size = 5
Explanation: Set the number of observations at each level of grouping. Here, everything is balanced, i.e. within a level every group has the same size.
End of explanation
n = n_groups * group_size * level1_size * level2_size
Explanation: Calculate the total sample size.
End of explanation
xmat = np.random.normal(size=(n, p))
Explanation: Construct the design matrix.
End of explanation
groups_ix = np.kron(np.arange(n // group_size), np.ones(group_size)).astype(int)
level1_ix = np.kron(np.arange(n // level1_size), np.ones(level1_size)).astype(int)
level2_ix = np.kron(np.arange(n // level2_size), np.ones(level2_size)).astype(int)
Explanation: Construct labels showing which group each observation belongs to at each level.
End of explanation
groups_re = np.sqrt(groups_var) * np.random.normal(size=n // group_size)
level1_re = np.sqrt(level1_var) * np.random.normal(size=n // level1_size)
level2_re = np.sqrt(level2_var) * np.random.normal(size=n // level2_size)
Explanation: Simulate the random effects.
End of explanation
y = groups_re[groups_ix] + level1_re[level1_ix] + level2_re[level2_ix]
y += np.sqrt(resid_var) * np.random.normal(size=n)
Explanation: Simulate the response variable.
End of explanation
df = pd.DataFrame(xmat, columns=["x%d" % j for j in range(p)])
df["y"] = y + xmat[:, 0] - xmat[:, 3]
df["groups_ix"] = groups_ix
df["level1_ix"] = level1_ix
df["level2_ix"] = level2_ix
Explanation: Put everything into a dataframe.
End of explanation
cs = sm.cov_struct.Nested()
dep_fml = "0 + level1_ix + level2_ix"
m = sm.GEE.from_formula("y ~ x0 + x1 + x2 + x3 + x4", cov_struct=cs,
dep_data=dep_fml, groups="groups_ix", data=df)
r = m.fit()
Explanation: Fit the model.
End of explanation
r.cov_struct.summary()
Explanation: The estimated covariance parameters should be similar to groups_var, level1_var, etc. as defined above.
End of explanation
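For reference, an added line printing the true variance components used in the simulation, to compare against the estimates above:
print({'groups': groups_var, 'level1': level1_var, 'level2': level2_var, 'residual': resid_var})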
<END_TASK> |
257 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
This notebook counts the number of evolutions and reversions in a binary trait on a large phylogenetic tree.
Import the dependencies - Dendropy and Pandas
Step1: Read data and tree.
Step2: Iterate over the tips of the trees and annotate with data (in this case, whether the tip is viviparous or oviparous). T
Step3: The counting loop. If we have a tip that has data, append to either the oviparous or viviparous list, as apporpriate. If the node label is annotated as having over a 50% probability of being oviparous, add to the oviparous list. If the node is likely to be viviparous, add to the viviparous list. | <ASSISTANT_TASK:>
Python Code:
import dendropy
import pandas as pd
Explanation: This notebook counts the number of evolutions and reversions in a binary trait on a large phylogenetic tree.
Import the dependencies - Dendropy and Pandas
End of explanation
data = pd.read_csv('../Data/PyronParityData.csv', index_col=0, header=None)
taxa = dendropy.TaxonSet()
mle = dendropy.Tree.get_from_path('../TotalOpt/annotatedTO_0param_2598364.dated', 'newick', taxon_set=taxa, preserve_underscores=True)
Explanation: Read data and tree.
End of explanation
for idx, nd in enumerate(mle.postorder_node_iter()):
if nd.label is None:
lookup = '{}'.format(nd.taxon)
nd.label = int(data.loc[lookup])
else:
pass
Explanation: Iterate over the tips of the trees and annotate with data (in this case, whether the tip is viviparous or oviparous). Tips are the nodes whose label is still None, so their state is looked up in the data table.
End of explanation
putative_c = []
putative_co = []
total = []
childs = []
for index, node in enumerate(mle.postorder_node_iter()):
total.append(index)
if node.parent_node is None:
pass
elif .5 < float(node.label) < 1 or float(node.label) == 0: #Is likely oviparous
if float(node.parent_node.label) < .05 : #List of nodes that demonstrate change away from oviparity.
if node.taxon is not None :
putative_co.append([node.parent_node.label, node.taxon])
else:
putative_co.append(node.parent_node.label)
for nd in node.child_nodes():
# print nd.taxon
pass
elif 0 < float(node.label) < .95 or float(node.label) == 1:
if float(node.parent_node.label) > .05:
putative_c.append([node.parent_node.label,node.taxon])
print(len(putative_c), 'changes to viviparity')
print(len(putative_co), 'reversions to oviparity')
Explanation: The counting loop. If we have a tip that has data, append to either the oviparous or viviparous list, as appropriate. If the node label is annotated as having over a 50% probability of being oviparous, add to the oviparous list. If the node is likely to be viviparous, add to the viviparous list.
End of explanation
<END_TASK> |
258 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
CS231n Python Tutorial With Jupyter Notebook
This tutorial was originally written by Justin Johnson for cs231n and adapted as a Jupyter notebook for cs228 by Volodymyr Kuleshov and Isaac Caswell.
This current version has been adapted as a Jupyter notebook with Python3 support by Kevin Zakka for the Spring 2020 edition of cs231n.
What is a Jupyter Notebook?
A Jupyter notebook is made up of a number of cells. Each cell can contain Python code. There are two main types of cells
Step1: Global variables are shared between cells. Try executing the cell below
Step2: Keyboard Shortcuts
There are a few keyboard shortcuts you should be aware of to make your notebook experience more pleasant. To escape editing of a cell, press esc. Escaping a Markdown cell won't render it, so make sure to execute it if you wish to render the markdown. Notice how the highlight color switches back to blue when you have escaped a cell.
You can navigate between cells by pressing your arrow keys. Executing a cell automatically shifts the cell cursor down 1 cell if one exists, or creates a new cell below the current one if none exist.
To place a cell below the current one, press b.
To place a cell above the current one, press a.
To delete a cell, press dd.
To convert a cell to Markdown press m. Note you have to be in esc mode.
To convert it back to Code press y. Note you have to be in esc mode.
Get familiar with these keyboard shortcuts, they really help!
You can restart a notebook and clear all cells by clicking Kernel -> Restart & Clear Output. If you don't want to clear cell outputs, just hit Kernel -> Restart.
By convention, Jupyter notebooks are expected to be run from top to bottom. Failing to execute some cells or executing cells out of order can result in errors. After restarting the notebook, try running the y = 2 * x cell 2 cells above and observe what happens.
After you have modified a Jupyter notebook for one of the assignments by modifying or executing some of its cells, remember to save your changes! You can save with the Command/Control + s shortcut or by clicking File -> Save and Checkpoint.
This has only been a brief introduction to Jupyter notebooks, but it should be enough to get you up and running on the assignments for this course.
Python Tutorial
Python is a great general-purpose programming language on its own, but with the help of a few popular libraries (numpy, scipy, matplotlib) it becomes a powerful environment for scientific computing.
We expect that many of you will have some experience with Python and numpy; for the rest of you, this section will serve as a quick crash course both on the Python programming language and on the use of Python for scientific computing.
Some of you may have previous knowledge in Matlab, in which case we also recommend the numpy for Matlab users page (https
Step3: Basics of Python
Python is a high-level, dynamically typed multiparadigm programming language. Python code is often said to be almost like pseudocode, since it allows you to express very powerful ideas in very few lines of code while being very readable. As an example, here is an implementation of the classic quicksort algorithm in Python
Step4: Basic data types
Numbers
Integers and floats work as you would expect from other languages
Step5: Note that unlike many languages, Python does not have unary increment (x++) or decrement (x--) operators.
Python also has built-in types for long integers and complex numbers; you can find all of the details in the documentation.
Booleans
Python implements all of the usual operators for Boolean logic, but uses English words rather than symbols (&&, ||, etc.)
Step6: Now we let's look at the operations
Step7: Strings
Step8: String objects have a bunch of useful methods; for example
Step9: You can find a list of all string methods in the documentation.
Containers
Python includes several built-in container types
Step10: As usual, you can find all the gory details about lists in the documentation.
Slicing
In addition to accessing list elements one at a time, Python provides concise syntax to access sublists; this is known as slicing
Step11: Loops
You can loop over the elements of a list like this
Step12: If you want access to the index of each element within the body of a loop, use the built-in enumerate function
Step13: List comprehensions
When programming, frequently we want to transform one type of data into another. As a simple example, consider the following code that computes square numbers
Step14: You can make this code simpler using a list comprehension
Step15: List comprehensions can also contain conditions
Step16: Dictionaries
A dictionary stores (key, value) pairs, similar to a Map in Java or an object in Javascript. You can use it like this
Step17: You can find all you need to know about dictionaries in the documentation.
It is easy to iterate over the keys in a dictionary
Step18: Dictionary comprehensions
Step19: Sets
A set is an unordered collection of distinct elements. As a simple example, consider the following
Step20: Loops
Step21: Set comprehensions
Step22: Tuples
A tuple is an (immutable) ordered list of values. A tuple is in many ways similar to a list; one of the most important differences is that tuples can be used as keys in dictionaries and as elements of sets, while lists cannot. Here is a trivial example
Step23: Functions
Python functions are defined using the def keyword. For example
Step24: We will often define functions to take optional keyword arguments, like this
Step25: Classes
The syntax for defining classes in Python is straightforward
Step26: Numpy
Numpy is the core library for scientific computing in Python. It provides a high-performance multidimensional array object, and tools for working with these arrays. If you are already familiar with MATLAB, you might find this tutorial useful to get started with Numpy.
To use Numpy, we first need to import the numpy package
Step27: Arrays
A numpy array is a grid of values, all of the same type, and is indexed by a tuple of nonnegative integers. The number of dimensions is the rank of the array; the shape of an array is a tuple of integers giving the size of the array along each dimension.
We can initialize numpy arrays from nested Python lists, and access elements using square brackets
Step28: Numpy also provides many functions to create arrays
Step29: Array indexing
Numpy offers several ways to index into arrays.
Slicing
Step30: A slice of an array is a view into the same data, so modifying it will modify the original array.
Step31: You can also mix integer indexing with slice indexing. However, doing so will yield an array of lower rank than the original array. Note that this is quite different from the way that MATLAB handles array slicing
Step32: Two ways of accessing the data in the middle row of the array.
Mixing integer indexing with slices yields an array of lower rank,
while using only slices yields an array of the same rank as the
original array
Step33: Integer array indexing
Step34: One useful trick with integer array indexing is selecting or mutating one element from each row of a matrix
Step35: Boolean array indexing
Step36: For brevity we have left out a lot of details about numpy array indexing; if you want to know more you should read the documentation.
Datatypes
Every numpy array is a grid of elements of the same type. Numpy provides a large set of numeric datatypes that you can use to construct arrays. Numpy tries to guess a datatype when you create an array, but functions that construct arrays usually also include an optional argument to explicitly specify the datatype. Here is an example
Step37: You can read all about numpy datatypes in the documentation.
Array math
Basic mathematical functions operate elementwise on arrays, and are available both as operator overloads and as functions in the numpy module
Step38: Note that unlike MATLAB, * is elementwise multiplication, not matrix multiplication. We instead use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. dot is available both as a function in the numpy module and as an instance method of array objects
Step39: You can also use the @ operator which is equivalent to numpy's dot operator.
Step40: Numpy provides many useful functions for performing computations on arrays; one of the most useful is sum
Step41: You can find the full list of mathematical functions provided by numpy in the documentation.
Apart from computing mathematical functions using arrays, we frequently need to reshape or otherwise manipulate data in arrays. The simplest example of this type of operation is transposing a matrix; to transpose a matrix, simply use the T attribute of an array object
Step42: Broadcasting
Broadcasting is a powerful mechanism that allows numpy to work with arrays of different shapes when performing arithmetic operations. Frequently we have a smaller array and a larger array, and we want to use the smaller array multiple times to perform some operation on the larger array.
For example, suppose that we want to add a constant vector to each row of a matrix. We could do it like this
Step43: This works; however when the matrix x is very large, computing an explicit loop in Python could be slow. Note that adding the vector v to each row of the matrix x is equivalent to forming a matrix vv by stacking multiple copies of v vertically, then performing elementwise summation of x and vv. We could implement this approach like this
Step44: Numpy broadcasting allows us to perform this computation without actually creating multiple copies of v. Consider this version, using broadcasting
Step45: The line y = x + v works even though x has shape (4, 3) and v has shape (3,) due to broadcasting; this line works as if v actually had shape (4, 3), where each row was a copy of v, and the sum was performed elementwise.
Broadcasting two arrays together follows these rules
Step46: Broadcasting typically makes your code more concise and faster, so you should strive to use it where possible.
This brief overview has touched on many of the important things that you need to know about numpy, but is far from complete. Check out the numpy reference to find out much more about numpy.
Matplotlib
Matplotlib is a plotting library. In this section we give a brief introduction to the matplotlib.pyplot module, which provides a plotting system similar to that of MATLAB.
Step47: By running this special iPython command, we will be displaying plots inline
Step48: Plotting
The most important function in matplotlib is plot, which allows you to plot 2D data. Here is a simple example
Step49: With just a little bit of extra work we can easily plot multiple lines at once, and add a title, legend, and axis labels
Step50: Subplots
You can plot different things in the same figure using the subplot function. Here is an example | <ASSISTANT_TASK:>
Python Code:
x = 1
print(x)
Explanation: CS231n Python Tutorial With Jupyter Notebook
This tutorial was originally written by Justin Johnson for cs231n and adapted as a Jupyter notebook for cs228 by Volodymyr Kuleshov and Isaac Caswell.
This current version has been adapted as a Jupyter notebook with Python3 support by Kevin Zakka for the Spring 2020 edition of cs231n.
What is a Jupyter Notebook?
A Jupyter notebook is made up of a number of cells. Each cell can contain Python code. There are two main types of cells: Code cells and Markdown cells. This particular cell is a Markdown cell. You can execute a particular cell by double clicking on it (the highlight color will switch from blue to green) and pressing Shift-Enter. When you do so, if the cell is a Code cell, the code in the cell will run, and the output of the cell will be displayed beneath the cell, and if the cell is a Markdown cell, the markdown text will get rendered beneath the cell.
Go ahead and try executing this cell.
The cell below is a Code cell. Go ahead and click it, then execute it.
End of explanation
y = 2 * x
print(y)
Explanation: Global variables are shared between cells. Try executing the cell below:
End of explanation
!python --version
Explanation: Keyboard Shortcuts
There are a few keyboard shortcuts you should be aware of to make your notebook experience more pleasant. To escape editing of a cell, press esc. Escaping a Markdown cell won't render it, so make sure to execute it if you wish to render the markdown. Notice how the highlight color switches back to blue when you have escaped a cell.
You can navigate between cells by pressing your arrow keys. Executing a cell automatically shifts the cell cursor down 1 cell if one exists, or creates a new cell below the current one if none exist.
To place a cell below the current one, press b.
To place a cell above the current one, press a.
To delete a cell, press dd.
To convert a cell to Markdown press m. Note you have to be in esc mode.
To convert it back to Code press y. Note you have to be in esc mode.
Get familiar with these keyboard shortcuts, they really help!
You can restart a notebook and clear all cells by clicking Kernel -> Restart & Clear Output. If you don't want to clear cell outputs, just hit Kernel -> Restart.
By convention, Jupyter notebooks are expected to be run from top to bottom. Failing to execute some cells or executing cells out of order can result in errors. After restarting the notebook, try running the y = 2 * x cell 2 cells above and observe what happens.
After you have modified a Jupyter notebook for one of the assignments by modifying or executing some of its cells, remember to save your changes! You can save with the Command/Control + s shortcut or by clicking File -> Save and Checkpoint.
This has only been a brief introduction to Jupyter notebooks, but it should be enough to get you up and running on the assignments for this course.
Python Tutorial
Python is a great general-purpose programming language on its own, but with the help of a few popular libraries (numpy, scipy, matplotlib) it becomes a powerful environment for scientific computing.
We expect that many of you will have some experience with Python and numpy; for the rest of you, this section will serve as a quick crash course both on the Python programming language and on the use of Python for scientific computing.
Some of you may have previous knowledge in Matlab, in which case we also recommend the numpy for Matlab users page (https://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html).
In this tutorial, we will cover:
Basic Python: Basic data types (Containers, Lists, Dictionaries, Sets, Tuples), Functions, Classes
Numpy: Arrays, Array indexing, Datatypes, Array math, Broadcasting
Matplotlib: Plotting, Subplots, Images
IPython: Creating notebooks, Typical workflows
A Brief Note on Python Versions
As of January 1, 2020, Python has officially dropped support for python2. We'll be using Python 3.7 for this iteration of the course.
You should have activated your cs231n virtual environment created in the Setup Instructions before calling jupyter notebook. If that is
the case, the cell below should print out a major version of 3.7.
End of explanation
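If the shell command above is not available, a plain-Python check works as well (an added sketch, not part of the original tutorial):
import sys
print(sys.version_info)  # the major field should be 3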
def quicksort(arr):
if len(arr) <= 1:
return arr
pivot = arr[len(arr) // 2]
left = [x for x in arr if x < pivot]
middle = [x for x in arr if x == pivot]
right = [x for x in arr if x > pivot]
return quicksort(left) + middle + quicksort(right)
print(quicksort([3,6,8,10,1,2,1]))
Explanation: Basics of Python
Python is a high-level, dynamically typed multiparadigm programming language. Python code is often said to be almost like pseudocode, since it allows you to express very powerful ideas in very few lines of code while being very readable. As an example, here is an implementation of the classic quicksort algorithm in Python:
End of explanation
x = 3
print(x, type(x))
print(x + 1) # Addition
print(x - 1) # Subtraction
print(x * 2) # Multiplication
print(x ** 2) # Exponentiation
x += 1
print(x)
x *= 2
print(x)
y = 2.5
print(type(y))
print(y, y + 1, y * 2, y ** 2)
Explanation: Basic data types
Numbers
Integers and floats work as you would expect from other languages:
End of explanation
t, f = True, False
print(type(t))
Explanation: Note that unlike many languages, Python does not have unary increment (x++) or decrement (x--) operators.
Python also has built-in types for long integers and complex numbers; you can find all of the details in the documentation.
Booleans
Python implements all of the usual operators for Boolean logic, but uses English words rather than symbols (&&, ||, etc.):
End of explanation
print(t and f) # Logical AND;
print(t or f) # Logical OR;
print(not t) # Logical NOT;
print(t != f) # Logical XOR;
Explanation: Now let's look at the operations:
End of explanation
hello = 'hello' # String literals can use single quotes
world = "world" # or double quotes; it does not matter
print(hello, len(hello))
hw = hello + ' ' + world # String concatenation
print(hw)
hw12 = '{} {} {}'.format(hello, world, 12) # string formatting
print(hw12)
Explanation: Strings
End of explanation
s = "hello"
print(s.capitalize()) # Capitalize a string
print(s.upper()) # Convert a string to uppercase; prints "HELLO"
print(s.rjust(7)) # Right-justify a string, padding with spaces
print(s.center(7)) # Center a string, padding with spaces
print(s.replace('l', '(ell)')) # Replace all instances of one substring with another
print(' world '.strip()) # Strip leading and trailing whitespace
Explanation: String objects have a bunch of useful methods; for example:
End of explanation
xs = [3, 1, 2] # Create a list
print(xs, xs[2])
print(xs[-1]) # Negative indices count from the end of the list; prints "2"
xs[2] = 'foo' # Lists can contain elements of different types
print(xs)
xs.append('bar') # Add a new element to the end of the list
print(xs)
x = xs.pop() # Remove and return the last element of the list
print(x, xs)
Explanation: You can find a list of all string methods in the documentation.
Containers
Python includes several built-in container types: lists, dictionaries, sets, and tuples.
Lists
A list is the Python equivalent of an array, but is resizeable and can contain elements of different types:
End of explanation
nums = list(range(5)) # range is a built-in function that creates a list of integers
print(nums) # Prints "[0, 1, 2, 3, 4]"
print(nums[2:4]) # Get a slice from index 2 to 4 (exclusive); prints "[2, 3]"
print(nums[2:]) # Get a slice from index 2 to the end; prints "[2, 3, 4]"
print(nums[:2]) # Get a slice from the start to index 2 (exclusive); prints "[0, 1]"
print(nums[:]) # Get a slice of the whole list; prints ["0, 1, 2, 3, 4]"
print(nums[:-1]) # Slice indices can be negative; prints ["0, 1, 2, 3]"
nums[2:4] = [8, 9] # Assign a new sublist to a slice
print(nums) # Prints "[0, 1, 8, 9, 4]"
Explanation: As usual, you can find all the gory details about lists in the documentation.
Slicing
In addition to accessing list elements one at a time, Python provides concise syntax to access sublists; this is known as slicing:
End of explanation
animals = ['cat', 'dog', 'monkey']
for animal in animals:
print(animal)
Explanation: Loops
You can loop over the elements of a list like this:
End of explanation
animals = ['cat', 'dog', 'monkey']
for idx, animal in enumerate(animals):
print('#{}: {}'.format(idx + 1, animal))
Explanation: If you want access to the index of each element within the body of a loop, use the built-in enumerate function:
End of explanation
nums = [0, 1, 2, 3, 4]
squares = []
for x in nums:
squares.append(x ** 2)
print(squares)
Explanation: List comprehensions
When programming, frequently we want to transform one type of data into another. As a simple example, consider the following code that computes square numbers:
End of explanation
nums = [0, 1, 2, 3, 4]
squares = [x ** 2 for x in nums]
print(squares)
Explanation: You can make this code simpler using a list comprehension:
End of explanation
nums = [0, 1, 2, 3, 4]
even_squares = [x ** 2 for x in nums if x % 2 == 0]
print(even_squares)
Explanation: List comprehensions can also contain conditions:
End of explanation
d = {'cat': 'cute', 'dog': 'furry'} # Create a new dictionary with some data
print(d['cat']) # Get an entry from a dictionary; prints "cute"
print('cat' in d) # Check if a dictionary has a given key; prints "True"
d['fish'] = 'wet' # Set an entry in a dictionary
print(d['fish']) # Prints "wet"
print(d['monkey']) # KeyError: 'monkey' not a key of d
print(d.get('monkey', 'N/A')) # Get an element with a default; prints "N/A"
print(d.get('fish', 'N/A')) # Get an element with a default; prints "wet"
del d['fish'] # Remove an element from a dictionary
print(d.get('fish', 'N/A')) # "fish" is no longer a key; prints "N/A"
Explanation: Dictionaries
A dictionary stores (key, value) pairs, similar to a Map in Java or an object in Javascript. You can use it like this:
End of explanation
d = {'person': 2, 'cat': 4, 'spider': 8}
for animal, legs in d.items():
print('A {} has {} legs'.format(animal, legs))
Explanation: You can find all you need to know about dictionaries in the documentation.
It is easy to iterate over the keys in a dictionary:
End of explanation
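For completeness, iterating over a dictionary directly yields its keys (the cell above used items() to get keys and values together); a minimal added sketch:
d = {'person': 2, 'cat': 4, 'spider': 8}
for animal in d:
    print('A {} has {} legs'.format(animal, d[animal]))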
nums = [0, 1, 2, 3, 4]
even_num_to_square = {x: x ** 2 for x in nums if x % 2 == 0}
print(even_num_to_square)
Explanation: Dictionary comprehensions: These are similar to list comprehensions, but allow you to easily construct dictionaries. For example:
End of explanation
animals = {'cat', 'dog'}
print('cat' in animals) # Check if an element is in a set; prints "True"
print('fish' in animals) # prints "False"
animals.add('fish') # Add an element to a set
print('fish' in animals)
print(len(animals)) # Number of elements in a set;
animals.add('cat') # Adding an element that is already in the set does nothing
print(len(animals))
animals.remove('cat') # Remove an element from a set
print(len(animals))
Explanation: Sets
A set is an unordered collection of distinct elements. As a simple example, consider the following:
End of explanation
animals = {'cat', 'dog', 'fish'}
for idx, animal in enumerate(animals):
print('#{}: {}'.format(idx + 1, animal))
Explanation: Loops: Iterating over a set has the same syntax as iterating over a list; however since sets are unordered, you cannot make assumptions about the order in which you visit the elements of the set:
End of explanation
from math import sqrt
print({int(sqrt(x)) for x in range(30)})
Explanation: Set comprehensions: Like lists and dictionaries, we can easily construct sets using set comprehensions:
End of explanation
d = {(x, x + 1): x for x in range(10)} # Create a dictionary with tuple keys
t = (5, 6) # Create a tuple
print(type(t))
print(d[t])
print(d[(1, 2)])
t[0] = 1
Explanation: Tuples
A tuple is an (immutable) ordered list of values. A tuple is in many ways similar to a list; one of the most important differences is that tuples can be used as keys in dictionaries and as elements of sets, while lists cannot. Here is a trivial example:
End of explanation
def sign(x):
if x > 0:
return 'positive'
elif x < 0:
return 'negative'
else:
return 'zero'
for x in [-1, 0, 1]:
print(sign(x))
Explanation: Functions
Python functions are defined using the def keyword. For example:
End of explanation
def hello(name, loud=False):
if loud:
print('HELLO, {}'.format(name.upper()))
else:
print('Hello, {}!'.format(name))
hello('Bob')
hello('Fred', loud=True)
Explanation: We will often define functions to take optional keyword arguments, like this:
End of explanation
class Greeter:
# Constructor
def __init__(self, name):
self.name = name # Create an instance variable
# Instance method
def greet(self, loud=False):
if loud:
print('HELLO, {}'.format(self.name.upper()))
else:
print('Hello, {}!'.format(self.name))
g = Greeter('Fred') # Construct an instance of the Greeter class
g.greet() # Call an instance method; prints "Hello, Fred"
g.greet(loud=True) # Call an instance method; prints "HELLO, FRED!"
Explanation: Classes
The syntax for defining classes in Python is straightforward:
End of explanation
import numpy as np
Explanation: Numpy
Numpy is the core library for scientific computing in Python. It provides a high-performance multidimensional array object, and tools for working with these arrays. If you are already familiar with MATLAB, you might find this tutorial useful to get started with Numpy.
To use Numpy, we first need to import the numpy package:
End of explanation
a = np.array([1, 2, 3]) # Create a rank 1 array
print(type(a), a.shape, a[0], a[1], a[2])
a[0] = 5 # Change an element of the array
print(a)
b = np.array([[1,2,3],[4,5,6]]) # Create a rank 2 array
print(b)
print(b.shape)
print(b[0, 0], b[0, 1], b[1, 0])
Explanation: Arrays
A numpy array is a grid of values, all of the same type, and is indexed by a tuple of nonnegative integers. The number of dimensions is the rank of the array; the shape of an array is a tuple of integers giving the size of the array along each dimension.
We can initialize numpy arrays from nested Python lists, and access elements using square brackets:
End of explanation
a = np.zeros((2,2)) # Create an array of all zeros
print(a)
b = np.ones((1,2)) # Create an array of all ones
print(b)
c = np.full((2,2), 7) # Create a constant array
print(c)
d = np.eye(2) # Create a 2x2 identity matrix
print(d)
e = np.random.random((2,2)) # Create an array filled with random values
print(e)
Explanation: Numpy also provides many functions to create arrays:
End of explanation
import numpy as np
# Create the following rank 2 array with shape (3, 4)
# [[ 1 2 3 4]
# [ 5 6 7 8]
# [ 9 10 11 12]]
a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
# Use slicing to pull out the subarray consisting of the first 2 rows
# and columns 1 and 2; b is the following array of shape (2, 2):
# [[2 3]
# [6 7]]
b = a[:2, 1:3]
print(b)
Explanation: Array indexing
Numpy offers several ways to index into arrays.
Slicing: Similar to Python lists, numpy arrays can be sliced. Since arrays may be multidimensional, you must specify a slice for each dimension of the array:
End of explanation
print(a[0, 1])
b[0, 0] = 77 # b[0, 0] is the same piece of data as a[0, 1]
print(a[0, 1])
Explanation: A slice of an array is a view into the same data, so modifying it will modify the original array.
End of explanation
# Create the following rank 2 array with shape (3, 4)
a = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
print(a)
Explanation: You can also mix integer indexing with slice indexing. However, doing so will yield an array of lower rank than the original array. Note that this is quite different from the way that MATLAB handles array slicing:
End of explanation
row_r1 = a[1, :] # Rank 1 view of the second row of a
row_r2 = a[1:2, :] # Rank 2 view of the second row of a
row_r3 = a[[1], :] # Rank 2 view of the second row of a
print(row_r1, row_r1.shape)
print(row_r2, row_r2.shape)
print(row_r3, row_r3.shape)
# We can make the same distinction when accessing columns of an array:
col_r1 = a[:, 1]
col_r2 = a[:, 1:2]
print(col_r1, col_r1.shape)
print()
print(col_r2, col_r2.shape)
Explanation: Two ways of accessing the data in the middle row of the array.
Mixing integer indexing with slices yields an array of lower rank,
while using only slices yields an array of the same rank as the
original array:
End of explanation
a = np.array([[1,2], [3, 4], [5, 6]])
# An example of integer array indexing.
# The returned array will have shape (3,) and
print(a[[0, 1, 2], [0, 1, 0]])
# The above example of integer array indexing is equivalent to this:
print(np.array([a[0, 0], a[1, 1], a[2, 0]]))
# When using integer array indexing, you can reuse the same
# element from the source array:
print(a[[0, 0], [1, 1]])
# Equivalent to the previous integer array indexing example
print(np.array([a[0, 1], a[0, 1]]))
Explanation: Integer array indexing: When you index into numpy arrays using slicing, the resulting array view will always be a subarray of the original array. In contrast, integer array indexing allows you to construct arbitrary arrays using the data from another array. Here is an example:
End of explanation
# Create a new array from which we will select elements
a = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
print(a)
# Create an array of indices
b = np.array([0, 2, 0, 1])
# Select one element from each row of a using the indices in b
print(a[np.arange(4), b]) # Prints "[ 1 6 7 11]"
# Mutate one element from each row of a using the indices in b
a[np.arange(4), b] += 10
print(a)
Explanation: One useful trick with integer array indexing is selecting or mutating one element from each row of a matrix:
End of explanation
import numpy as np
a = np.array([[1,2], [3, 4], [5, 6]])
bool_idx = (a > 2) # Find the elements of a that are bigger than 2;
# this returns a numpy array of Booleans of the same
# shape as a, where each slot of bool_idx tells
# whether that element of a is > 2.
print(bool_idx)
# We use boolean array indexing to construct a rank 1 array
# consisting of the elements of a corresponding to the True values
# of bool_idx
print(a[bool_idx])
# We can do all of the above in a single concise statement:
print(a[a > 2])
Explanation: Boolean array indexing: Boolean array indexing lets you pick out arbitrary elements of an array. Frequently this type of indexing is used to select the elements of an array that satisfy some condition. Here is an example:
End of explanation
x = np.array([1, 2]) # Let numpy choose the datatype
y = np.array([1.0, 2.0]) # Let numpy choose the datatype
z = np.array([1, 2], dtype=np.int64) # Force a particular datatype
print(x.dtype, y.dtype, z.dtype)
Explanation: For brevity we have left out a lot of details about numpy array indexing; if you want to know more you should read the documentation.
Datatypes
Every numpy array is a grid of elements of the same type. Numpy provides a large set of numeric datatypes that you can use to construct arrays. Numpy tries to guess a datatype when you create an array, but functions that construct arrays usually also include an optional argument to explicitly specify the datatype. Here is an example:
End of explanation
x = np.array([[1,2],[3,4]], dtype=np.float64)
y = np.array([[5,6],[7,8]], dtype=np.float64)
# Elementwise sum; both produce the array
print(x + y)
print(np.add(x, y))
# Elementwise difference; both produce the array
print(x - y)
print(np.subtract(x, y))
# Elementwise product; both produce the array
print(x * y)
print(np.multiply(x, y))
# Elementwise division; both produce the array
# [[ 0.2 0.33333333]
# [ 0.42857143 0.5 ]]
print(x / y)
print(np.divide(x, y))
# Elementwise square root; produces the array
# [[ 1. 1.41421356]
# [ 1.73205081 2. ]]
print(np.sqrt(x))
Explanation: You can read all about numpy datatypes in the documentation.
Array math
Basic mathematical functions operate elementwise on arrays, and are available both as operator overloads and as functions in the numpy module:
End of explanation
x = np.array([[1,2],[3,4]])
y = np.array([[5,6],[7,8]])
v = np.array([9,10])
w = np.array([11, 12])
# Inner product of vectors; both produce 219
print(v.dot(w))
print(np.dot(v, w))
Explanation: Note that unlike MATLAB, * is elementwise multiplication, not matrix multiplication. We instead use the dot function to compute inner products of vectors, to multiply a vector by a matrix, and to multiply matrices. dot is available both as a function in the numpy module and as an instance method of array objects:
End of explanation
print(v @ w)
# Matrix / vector product; both produce the rank 1 array [29 67]
print(x.dot(v))
print(np.dot(x, v))
print(x @ v)
# Matrix / matrix product; both produce the rank 2 array
# [[19 22]
# [43 50]]
print(x.dot(y))
print(np.dot(x, y))
print(x @ y)
Explanation: You can also use the @ operator which is equivalent to numpy's dot operator.
End of explanation
x = np.array([[1,2],[3,4]])
print(np.sum(x)) # Compute sum of all elements; prints "10"
print(np.sum(x, axis=0)) # Compute sum of each column; prints "[4 6]"
print(np.sum(x, axis=1)) # Compute sum of each row; prints "[3 7]"
Explanation: Numpy provides many useful functions for performing computations on arrays; one of the most useful is sum:
End of explanation
print(x)
print("transpose\n", x.T)
v = np.array([[1,2,3]])
print(v)
print("transpose\n", v.T)
Explanation: You can find the full list of mathematical functions provided by numpy in the documentation.
Apart from computing mathematical functions using arrays, we frequently need to reshape or otherwise manipulate data in arrays. The simplest example of this type of operation is transposing a matrix; to transpose a matrix, simply use the T attribute of an array object:
End of explanation
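Reshaping, mentioned above alongside transposition, is done with reshape; a minimal added sketch:
# reshape gives the same data a new shape (the total number of elements must match)
m = np.arange(6)           # [0 1 2 3 4 5]
print(m.reshape(2, 3))     # prints "[[0 1 2] [3 4 5]]"
print(m.reshape(3, -1))    # -1 tells numpy to infer that dimension; result has shape (3, 2)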
# We will add the vector v to each row of the matrix x,
# storing the result in the matrix y
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
y = np.empty_like(x) # Create an empty matrix with the same shape as x
# Add the vector v to each row of the matrix x with an explicit loop
for i in range(4):
y[i, :] = x[i, :] + v
print(y)
Explanation: Broadcasting
Broadcasting is a powerful mechanism that allows numpy to work with arrays of different shapes when performing arithmetic operations. Frequently we have a smaller array and a larger array, and we want to use the smaller array multiple times to perform some operation on the larger array.
For example, suppose that we want to add a constant vector to each row of a matrix. We could do it like this:
End of explanation
vv = np.tile(v, (4, 1)) # Stack 4 copies of v on top of each other
print(vv) # Prints "[[1 0 1]
# [1 0 1]
# [1 0 1]
# [1 0 1]]"
y = x + vv # Add x and vv elementwise
print(y)
Explanation: This works; however when the matrix x is very large, computing an explicit loop in Python could be slow. Note that adding the vector v to each row of the matrix x is equivalent to forming a matrix vv by stacking multiple copies of v vertically, then performing elementwise summation of x and vv. We could implement this approach like this:
End of explanation
import numpy as np
# We will add the vector v to each row of the matrix x,
# storing the result in the matrix y
x = np.array([[1,2,3], [4,5,6], [7,8,9], [10, 11, 12]])
v = np.array([1, 0, 1])
y = x + v # Add v to each row of x using broadcasting
print(y)
Explanation: Numpy broadcasting allows us to perform this computation without actually creating multiple copies of v. Consider this version, using broadcasting:
End of explanation
# Compute outer product of vectors
v = np.array([1,2,3]) # v has shape (3,)
w = np.array([4,5]) # w has shape (2,)
# To compute an outer product, we first reshape v to be a column
# vector of shape (3, 1); we can then broadcast it against w to yield
# an output of shape (3, 2), which is the outer product of v and w:
print(np.reshape(v, (3, 1)) * w)
# Add a vector to each row of a matrix
x = np.array([[1,2,3], [4,5,6]])
# x has shape (2, 3) and v has shape (3,) so they broadcast to (2, 3),
# giving the following matrix:
print(x + v)
# Add a vector to each column of a matrix
# x has shape (2, 3) and w has shape (2,).
# If we transpose x then it has shape (3, 2) and can be broadcast
# against w to yield a result of shape (3, 2); transposing this result
# yields the final result of shape (2, 3) which is the matrix x with
# the vector w added to each column. Gives the following matrix:
print((x.T + w).T)
# Another solution is to reshape w to be a row vector of shape (2, 1);
# we can then broadcast it directly against x to produce the same
# output.
print(x + np.reshape(w, (2, 1)))
# Multiply a matrix by a constant:
# x has shape (2, 3). Numpy treats scalars as arrays of shape ();
# these can be broadcast together to shape (2, 3), producing the
# following array:
print(x * 2)
Explanation: The line y = x + v works even though x has shape (4, 3) and v has shape (3,) due to broadcasting; this line works as if v actually had shape (4, 3), where each row was a copy of v, and the sum was performed elementwise.
Broadcasting two arrays together follows these rules:
If the arrays do not have the same rank, prepend the shape of the lower rank array with 1s until both shapes have the same length.
The two arrays are said to be compatible in a dimension if they have the same size in the dimension, or if one of the arrays has size 1 in that dimension.
The arrays can be broadcast together if they are compatible in all dimensions.
After broadcasting, each array behaves as if it had shape equal to the elementwise maximum of shapes of the two input arrays.
In any dimension where one array had size 1 and the other array had size greater than 1, the first array behaves as if it were copied along that dimension
If this explanation does not make sense, try reading the explanation from the documentation or this explanation.
Functions that support broadcasting are known as universal functions. You can find the list of all universal functions in the documentation.
Here are some applications of broadcasting:
End of explanation
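To make the rules above concrete, here is one more small added sketch that traces the shapes through a broadcast:
# Shapes (3, 1) and (2,) broadcast to (3, 2):
# rule 1 pads (2,) to (1, 2); the remaining rules stretch both arrays to (3, 2).
col = np.array([[0], [10], [20]])   # shape (3, 1)
row = np.array([1, 2])              # shape (2,)
print((col + row).shape)            # prints "(3, 2)"
print(col + row)                    # prints "[[ 1  2] [11 12] [21 22]]"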
import matplotlib.pyplot as plt
Explanation: Broadcasting typically makes your code more concise and faster, so you should strive to use it where possible.
This brief overview has touched on many of the important things that you need to know about numpy, but is far from complete. Check out the numpy reference to find out much more about numpy.
Matplotlib
Matplotlib is a plotting library. In this section we give a brief introduction to the matplotlib.pyplot module, which provides a plotting system similar to that of MATLAB.
End of explanation
%matplotlib inline
Explanation: By running this special iPython command, we will be displaying plots inline:
End of explanation
# Compute the x and y coordinates for points on a sine curve
x = np.arange(0, 3 * np.pi, 0.1)
y = np.sin(x)
# Plot the points using matplotlib
plt.plot(x, y)
Explanation: Plotting
The most important function in matplotlib is plot, which allows you to plot 2D data. Here is a simple example:
End of explanation
y_sin = np.sin(x)
y_cos = np.cos(x)
# Plot the points using matplotlib
plt.plot(x, y_sin)
plt.plot(x, y_cos)
plt.xlabel('x axis label')
plt.ylabel('y axis label')
plt.title('Sine and Cosine')
plt.legend(['Sine', 'Cosine'])
Explanation: With just a little bit of extra work we can easily plot multiple lines at once, and add a title, legend, and axis labels:
End of explanation
# Compute the x and y coordinates for points on sine and cosine curves
x = np.arange(0, 3 * np.pi, 0.1)
y_sin = np.sin(x)
y_cos = np.cos(x)
# Set up a subplot grid that has height 2 and width 1,
# and set the first such subplot as active.
plt.subplot(2, 1, 1)
# Make the first plot
plt.plot(x, y_sin)
plt.title('Sine')
# Set the second subplot as active, and make the second plot.
plt.subplot(2, 1, 2)
plt.plot(x, y_cos)
plt.title('Cosine')
# Show the figure.
plt.show()
Explanation: Subplots
You can plot different things in the same figure using the subplot function. Here is an example:
End of explanation
<END_TASK> |
259 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Hyperparameter tuning
Learning Objectives
1. Learn how to use cloudml-hypertune to report the results for Cloud hyperparameter tuning trial runs
2. Learn how to configure the .yaml file for submitting a Cloud hyperparameter tuning job
3. Submit a hyperparameter tuning job to Cloud AI Platform
Introduction
Let's see if we can improve upon that by tuning our hyperparameters.
Hyperparameters are parameters that are set prior to training a model, as opposed to parameters which are learned during training.
These include learning rate and batch size, but also model design parameters such as type of activation function and number of hidden units.
Here are the four most common ways to finding the ideal hyperparameters
Step1: Make code compatible with AI Platform Training Service
In order to make our code compatible with AI Platform Training Service we need to make the following changes
Step2: Move code into python package
In the previous lab, we moved our code into a python package for training on Cloud AI Platform. Let's just check that the files are there. You should see the following files in the taxifare/trainer directory
Step3: To use hyperparameter tuning in your training job you must perform the following steps
Step4: Modify task.py
Step5: Create config.yaml file
Specify the hyperparameter tuning configuration for your training job
Create a HyperparameterSpec object to hold the hyperparameter tuning configuration for your training job, and add the HyperparameterSpec as the hyperparameters object in your TrainingInput object.
In your HyperparameterSpec, set the hyperparameterMetricTag to a value representing your chosen metric. If you don't specify a hyperparameterMetricTag, AI Platform Training looks for a metric with the name training/hptuning/metric. The following example shows how to create a configuration for a metric named metric1
Step6: Report your hyperparameter metric to AI Platform Training
The way to report your hyperparameter metric to the AI Platform Training service depends on whether you are using TensorFlow for training or not. It also depends on whether you are using a runtime version or a custom container for training.
We recommend that your training code reports your hyperparameter metric to AI Platform Training frequently in order to take advantage of early stopping.
TensorFlow with a runtime version
If you use an AI Platform Training runtime version and train with TensorFlow, then you can report your hyperparameter metric to AI Platform Training by writing the metric to a TensorFlow summary. Use one of the following functions.
You may need to install cloudml-hypertune on your machine to run this code locally. | <ASSISTANT_TASK:>
Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
PROJECT = "<YOUR PROJECT>"
BUCKET = "<YOUR BUCKET>"
REGION = "<YOUR REGION>"
TFVERSION = "2.3.0" # TF version for AI Platform to use
import os
os.environ["PROJECT"] = PROJECT
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
Explanation: Hyperparameter tuning
Learning Objectives
1. Learn how to use cloudml-hypertune to report the results for Cloud hyperparameter tuning trial runs
2. Learn how to configure the .yaml file for submitting a Cloud hyperparameter tuning job
3. Submit a hyperparameter tuning job to Cloud AI Platform
Introduction
Let's see if we can improve upon that by tuning our hyperparameters.
Hyperparameters are parameters that are set prior to training a model, as opposed to parameters which are learned during training.
These include learning rate and batch size, but also model design parameters such as type of activation function and number of hidden units.
Here are the four most common ways of finding the ideal hyperparameters:
1. Manual
2. Grid Search
3. Random Search
4. Bayesian Optimization
1. Manual
Traditionally, hyperparameter tuning is a manual trial-and-error process. A data scientist has some intuition about suitable hyperparameters which they use as a starting point, then they observe the result and use that information to try a new set of hyperparameters to try to beat the existing performance.
Pros
- Educational, builds up your intuition as a data scientist
- Inexpensive because only one trial is conducted at a time
Cons
- Requires a lot of time and patience
2. Grid Search
On the other extreme we can use grid search. Define a discrete set of values to try for each hyperparameter then try every possible combination.
Pros
- Can run hundreds of trials in parallel using the cloud
- Guaranteed to find the best solution within the search space
Cons
- Expensive
3. Random Search
Alternatively, define a range for each hyperparameter (e.g. 0-256) and sample uniformly at random from that range (a small sketch of this idea appears at the end of this section).
Pros
- Can run hundreds of trials in parallel using the cloud
- Requires fewer trials than Grid Search to find a good solution
Cons
- Expensive (but less so than Grid Search)
4. Bayesian Optimization
Unlike Grid Search and Random Search, Bayesian Optimization takes into account information from past trials to select parameters for future trials. The details of how this is done are beyond the scope of this notebook, but if you're interested you can read how it works here.
Pros
- Picks values intelligently based on results from past trials
- Less expensive because requires fewer trials to get a good result
Cons
- Requires sequential trials for best results, takes longer
AI Platform HyperTune
AI Platform HyperTune, powered by Google Vizier, uses Bayesian Optimization by default, but also supports Grid Search and Random Search.
When tuning just a few hyperparameters (say fewer than 4), Grid Search and Random Search work well, but when tuning several hyperparameters and the search space is large, Bayesian Optimization is best.
End of explanation
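As a rough sketch of the random search idea described above (illustrative only, not part of the lab code; the ranges simply mirror the ones used in hptuning_config.yaml later in this notebook):
import random

# Each random-search trial samples every hyperparameter independently from its range.
def sample_trial():
    return {
        "lr": 10 ** random.uniform(-4, -1),       # log-uniform between 0.0001 and 0.1
        "batch_size": random.choice([15, 30, 50]),
        "nbuckets": random.randint(10, 25),
    }

print([sample_trial() for _ in range(3)])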
!gsutil ls gs://$BUCKET/taxifare/data
Explanation: Make code compatible with AI Platform Training Service
In order to make our code compatible with AI Platform Training Service we need to make the following changes:
Upload data to Google Cloud Storage
Move code into a trainer Python package
Submit training job with gcloud to train on AI Platform
Upload data to Google Cloud Storage (GCS)
Cloud services don't have access to our local files, so we need to upload them to a location the Cloud servers can read from. In this case we'll use GCS.
To do this run the notebook 0_export_data_from_bq_to_gcs.ipynb, which will export the taxifare data from BigQuery directly into a GCS bucket. If all ran smoothly, you should be able to list the data bucket by running the following command:
End of explanation
!ls -la taxifare/trainer
Explanation: Move code into python package
In the previous lab, we moved our code into a python package for training on Cloud AI Platform. Let's just check that the files are there. You should see the following files in the taxifare/trainer directory:
- __init__.py
- model.py
- task.py
End of explanation
%%writefile ./taxifare/trainer/model.py
import datetime
import hypertune
import logging
import os
import shutil
import numpy as np
import tensorflow as tf
from tensorflow.keras import activations
from tensorflow.keras import callbacks
from tensorflow.keras import layers
from tensorflow.keras import models
from tensorflow import feature_column as fc
logging.info(tf.version.VERSION)
CSV_COLUMNS = [
'fare_amount',
'pickup_datetime',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
'key',
]
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]
DAYS = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat']
def features_and_labels(row_data):
for unwanted_col in ['key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label
def load_dataset(pattern, batch_size, num_repeat):
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS,
num_epochs=num_repeat,
)
return dataset.map(features_and_labels)
def create_train_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=None)
return dataset.prefetch(1)
def create_eval_dataset(pattern, batch_size):
dataset = load_dataset(pattern, batch_size, num_repeat=1)
return dataset.prefetch(1)
def parse_datetime(s):
if type(s) is not str:
s = s.numpy().decode('utf-8')
return datetime.datetime.strptime(s, "%Y-%m-%d %H:%M:%S %Z")
def euclidean(params):
lon1, lat1, lon2, lat2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff*londiff + latdiff*latdiff)
def get_dayofweek(s):
ts = parse_datetime(s)
return DAYS[ts.weekday()]
@tf.function
def dayofweek(ts_in):
return tf.map_fn(
lambda s: tf.py_function(get_dayofweek, inp=[s], Tout=tf.string),
ts_in
)
def transform(inputs, NUMERIC_COLS, STRING_COLS, nbuckets):
# Pass-through columns
transformed = inputs.copy()
del transformed['pickup_datetime']
feature_columns = {
colname: fc.numeric_column(colname)
for colname in NUMERIC_COLS
}
# Scaling longitude from range [-78, -70] to [0, 1]
for lon_col in ['pickup_longitude', 'dropoff_longitude']:
transformed[lon_col] = layers.Lambda(
lambda x: (x + 78)/8.0,
name='scale_{}'.format(lon_col)
)(inputs[lon_col])
# Scaling latitude from range [37, 45] to [0, 1]
for lat_col in ['pickup_latitude', 'dropoff_latitude']:
transformed[lat_col] = layers.Lambda(
lambda x: (x - 37)/8.0,
name='scale_{}'.format(lat_col)
)(inputs[lat_col])
# Adding Euclidean dist (no need to be accurate: NN will calibrate it)
transformed['euclidean'] = layers.Lambda(euclidean, name='euclidean')([
inputs['pickup_longitude'],
inputs['pickup_latitude'],
inputs['dropoff_longitude'],
inputs['dropoff_latitude']
])
feature_columns['euclidean'] = fc.numeric_column('euclidean')
# hour of day from timestamp of form '2010-02-08 09:17:00+00:00'
transformed['hourofday'] = layers.Lambda(
lambda x: tf.strings.to_number(
tf.strings.substr(x, 11, 2), out_type=tf.dtypes.int32),
name='hourofday'
)(inputs['pickup_datetime'])
feature_columns['hourofday'] = fc.indicator_column(
fc.categorical_column_with_identity(
'hourofday', num_buckets=24))
latbuckets = np.linspace(0, 1, nbuckets).tolist()
lonbuckets = np.linspace(0, 1, nbuckets).tolist()
b_plat = fc.bucketized_column(
feature_columns['pickup_latitude'], latbuckets)
b_dlat = fc.bucketized_column(
feature_columns['dropoff_latitude'], latbuckets)
b_plon = fc.bucketized_column(
feature_columns['pickup_longitude'], lonbuckets)
b_dlon = fc.bucketized_column(
feature_columns['dropoff_longitude'], lonbuckets)
ploc = fc.crossed_column(
[b_plat, b_plon], nbuckets * nbuckets)
dloc = fc.crossed_column(
[b_dlat, b_dlon], nbuckets * nbuckets)
pd_pair = fc.crossed_column([ploc, dloc], nbuckets ** 4)
feature_columns['pickup_and_dropoff'] = fc.embedding_column(
pd_pair, 100)
return transformed, feature_columns
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
def build_dnn_model(nbuckets, nnsize, lr):
# input layer is all float except for pickup_datetime which is a string
STRING_COLS = ['pickup_datetime']
NUMERIC_COLS = (
set(CSV_COLUMNS) - set([LABEL_COLUMN, 'key']) - set(STRING_COLS)
)
inputs = {
colname: layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
inputs.update({
colname: layers.Input(name=colname, shape=(), dtype='string')
for colname in STRING_COLS
})
# transforms
transformed, feature_columns = transform(
inputs, NUMERIC_COLS, STRING_COLS, nbuckets=nbuckets)
dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)
x = dnn_inputs
for layer, nodes in enumerate(nnsize):
x = layers.Dense(nodes, activation='relu', name='h{}'.format(layer))(x)
output = layers.Dense(1, name='fare')(x)
model = models.Model(inputs, output)
lr_optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
model.compile(optimizer=lr_optimizer, loss='mse', metrics=[rmse, 'mse'])
return model
def train_and_evaluate(hparams):
batch_size = hparams['batch_size']
eval_data_path = hparams['eval_data_path']
nnsize = hparams['nnsize']
nbuckets = hparams['nbuckets']
lr = hparams['lr']
num_evals = hparams['num_evals']
num_examples_to_train_on = hparams['num_examples_to_train_on']
output_dir = hparams['output_dir']
train_data_path = hparams['train_data_path']
if tf.io.gfile.exists(output_dir):
tf.io.gfile.rmtree(output_dir)
timestamp = datetime.datetime.now().strftime('%Y%m%d%H%M%S')
savedmodel_dir = os.path.join(output_dir, 'savedmodel')
model_export_path = os.path.join(savedmodel_dir, timestamp)
checkpoint_path = os.path.join(output_dir, 'checkpoints')
tensorboard_path = os.path.join(output_dir, 'tensorboard')
dnn_model = build_dnn_model(nbuckets, nnsize, lr)
logging.info(dnn_model.summary())
trainds = create_train_dataset(train_data_path, batch_size)
evalds = create_eval_dataset(eval_data_path, batch_size)
steps_per_epoch = num_examples_to_train_on // (batch_size * num_evals)
checkpoint_cb = callbacks.ModelCheckpoint(checkpoint_path,
save_weights_only=True,
verbose=1)
tensorboard_cb = callbacks.TensorBoard(tensorboard_path,
histogram_freq=1)
history = dnn_model.fit(
trainds,
validation_data=evalds,
epochs=num_evals,
steps_per_epoch=max(1, steps_per_epoch),
verbose=2, # 0=silent, 1=progress bar, 2=one line per epoch
callbacks=[checkpoint_cb, tensorboard_cb]
)
# Exporting the model with default serving function.
tf.saved_model.save(dnn_model, model_export_path)
# TODO 1
hp_metric = history.history['val_rmse'][num_evals-1]
# TODO 1
hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
hyperparameter_metric_tag='rmse',
metric_value=hp_metric,
global_step=num_evals
)
return history
Explanation: To use hyperparameter tuning in your training job you must perform the following steps:
Specify the hyperparameter tuning configuration for your training job by including a HyperparameterSpec in your TrainingInput object.
Include the following code in your training application:
Parse the command-line arguments representing the hyperparameters you want to tune, and use the values to set the hyperparameters for your training trial.
Add your hyperparameter metric to the summary for your graph.
To submit a hyperparameter tuning job, we must modify model.py and task.py to expose any variables we want to tune as command line arguments.
Modify model.py
End of explanation
%%writefile taxifare/trainer/task.py
import argparse
import json
import os
from trainer import model
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
"--batch_size",
help = "Batch size for training steps",
type = int,
default = 32
)
parser.add_argument(
"--eval_data_path",
help = "GCS location pattern of eval files",
required = True
)
parser.add_argument(
"--nnsize",
help = "Hidden layer sizes (provide space-separated sizes)",
nargs = "+",
type = int,
default=[32, 8]
)
parser.add_argument(
"--nbuckets",
help = "Number of buckets to divide lat and lon with",
type = int,
default = 10
)
parser.add_argument(
"--lr",
help = "learning rate for optimizer",
type = float,
default = 0.001
)
parser.add_argument(
"--num_evals",
help = "Number of times to evaluate model on eval data training.",
type = int,
default = 5
)
parser.add_argument(
"--num_examples_to_train_on",
help = "Number of examples to train on.",
type = int,
default = 100
)
parser.add_argument(
"--output_dir",
help = "GCS location to write checkpoints and export models",
required = True
)
parser.add_argument(
"--train_data_path",
help = "GCS location pattern of train files containing eval URLs",
required = True
)
parser.add_argument(
"--job-dir",
help = "this model ignores this field, but it is required by gcloud",
default = "junk"
)
args, _ = parser.parse_known_args()
hparams = args.__dict__
hparams["output_dir"] = os.path.join(
hparams["output_dir"],
json.loads(
os.environ.get("TF_CONFIG", "{}")
).get("task", {}).get("trial", "")
)
print("output_dir", hparams["output_dir"])
model.train_and_evaluate(hparams)
Explanation: Modify task.py
End of explanation
%%writefile hptuning_config.yaml
trainingInput:
scaleTier: BASIC
hyperparameters:
goal: MINIMIZE
maxTrials: 10
maxParallelTrials: 2
hyperparameterMetricTag: rmse # TODO
enableTrialEarlyStopping: True
params:
- parameterName: lr
# TODO
type: DOUBLE
minValue: 0.0001
maxValue: 0.1
scaleType: UNIT_LOG_SCALE
- parameterName: nbuckets
# TODO
type: INTEGER
minValue: 10
maxValue: 25
scaleType: UNIT_LINEAR_SCALE
- parameterName: batch_size
# TODO
type: DISCRETE
discreteValues:
- 15
- 30
- 50
Explanation: Create config.yaml file
Specify the hyperparameter tuning configuration for your training job
Create a HyperparameterSpec object to hold the hyperparameter tuning configuration for your training job, and add the HyperparameterSpec as the hyperparameters object in your TrainingInput object.
In your HyperparameterSpec, set the hyperparameterMetricTag to a value representing your chosen metric. If you don't specify a hyperparameterMetricTag, AI Platform Training looks for a metric with the name training/hptuning/metric. The following example shows how to create a configuration for a metric named metric1:
End of explanation
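The metric1 example referred to above did not survive in this copy of the notebook; the following is a minimal sketch of such a fragment (the parameter name hidden1 and its bounds are made-up placeholders):
# Hypothetical HyperparameterSpec fragment for a metric named metric1
trainingInput:
  hyperparameters:
    goal: MAXIMIZE
    hyperparameterMetricTag: metric1
    maxTrials: 30
    maxParallelTrials: 1
    params:
    - parameterName: hidden1   # placeholder parameter
      type: INTEGER
      minValue: 40
      maxValue: 400
      scaleType: UNIT_LINEAR_SCALE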
!pip install --user cloudml-hypertune
%%bash
EVAL_DATA_PATH=./taxifare/tests/data/taxi-valid*
TRAIN_DATA_PATH=./taxifare/tests/data/taxi-train*
OUTPUT_DIR=./taxifare-model
rm -rf ${OUTPUT_DIR}
export PYTHONPATH=${PYTHONPATH}:${PWD}/taxifare
python3 -m trainer.task \
--eval_data_path $EVAL_DATA_PATH \
--output_dir $OUTPUT_DIR \
--train_data_path $TRAIN_DATA_PATH \
--batch_size 5 \
--num_examples_to_train_on 100 \
--num_evals 1 \
--nbuckets 10 \
--lr 0.001 \
--nnsize 32 8
ls taxifare-model/tensorboard
%%bash
PROJECT_ID=$(gcloud config list project --format "value(core.project)")
BUCKET=$PROJECT_ID
REGION="us-central1"
TFVERSION="2.1"
# Output directory and jobID
OUTDIR=gs://${BUCKET}/taxifare/trained_model_$(date -u +%y%m%d_%H%M%S)
JOBID=taxifare_$(date -u +%y%m%d_%H%M%S)
echo ${OUTDIR} ${REGION} ${JOBID}
gsutil -m rm -rf ${OUTDIR}
# Model and training hyperparameters
BATCH_SIZE=15
NUM_EXAMPLES_TO_TRAIN_ON=100
NUM_EVALS=10
NBUCKETS=10
LR=0.001
NNSIZE="32 8"
# GCS paths
GCS_PROJECT_PATH=gs://$BUCKET/taxifare
DATA_PATH=$GCS_PROJECT_PATH/data
TRAIN_DATA_PATH=$DATA_PATH/taxi-train*
EVAL_DATA_PATH=$DATA_PATH/taxi-valid*
# TODO
gcloud ai-platform jobs submit training $JOBID \
--module-name=trainer.task \
--package-path=taxifare/trainer \
--staging-bucket=gs://${BUCKET} \
--config=hptuning_config.yaml \
--python-version=3.7 \
--runtime-version=${TFVERSION} \
--region=${REGION} \
-- \
--eval_data_path $EVAL_DATA_PATH \
--output_dir $OUTDIR \
--train_data_path $TRAIN_DATA_PATH \
--batch_size $BATCH_SIZE \
--num_examples_to_train_on $NUM_EXAMPLES_TO_TRAIN_ON \
--num_evals $NUM_EVALS \
--nbuckets $NBUCKETS \
--lr $LR \
--nnsize $NNSIZE
Explanation: Report your hyperparameter metric to AI Platform Training
The way to report your hyperparameter metric to the AI Platform Training service depends on whether you are using TensorFlow for training or not. It also depends on whether you are using a runtime version or a custom container for training.
We recommend that your training code reports your hyperparameter metric to AI Platform Training frequently in order to take advantage of early stopping.
TensorFlow with a runtime version
If you use an AI Platform Training runtime version and train with TensorFlow, then you can report your hyperparameter metric to AI Platform Training by writing the metric to a TensorFlow summary. Use one of the following functions.
You may need to install cloudml-hypertune on your machine to run this code locally.
End of explanation
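Since the list of reporting functions was dropped from this copy, here is a minimal sketch of the cloudml-hypertune call (it mirrors the report_hyperparameter_tuning_metric call already used in model.py above; the metric value is a placeholder):
import hypertune

hpt = hypertune.HyperTune()
hpt.report_hyperparameter_tuning_metric(
    hyperparameter_metric_tag='rmse',  # must match hyperparameterMetricTag in hptuning_config.yaml
    metric_value=0.123,                # placeholder value; report your evaluation metric here
    global_step=1)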
<END_TASK> |
260 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below
Step9: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token
Step11: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step13: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step15: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below
Step18: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders
Step21: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
Step24: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
Step27: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
Step30: Build the Neural Network
Apply the functions you implemented above to
Step33: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements
Step35: Neural Network Training
Hyperparameters
Tune the following parameters
Step37: Build the Graph
Build the graph using the neural network you implemented.
Step39: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
Step41: Save Parameters
Save seq_length and save_dir for generating a new TV script.
Step43: Checkpoint
Step46: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names
Step49: Choose Word
Implement the pick_word() function to select the next word using probabilities.
Step51: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate. | <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
# TODO: Implement Function
word_set = set(text)
int_to_vocab = {ii: word for ii, word in enumerate(word_set)}
vocab_to_int = {word: ii for ii, word in int_to_vocab.items()}
return vocab_to_int, int_to_vocab
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_create_lookup_tables(create_lookup_tables)
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
def token_lookup():
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
# TODO: Implement Function
return {'.' : '||Period||',
',' : '||Comma||',
'"' : '||QuotationMark||',
';' : '||Semicolon||',
'!' : '||ExclamationMark||',
'?' : '||QuestionMark||',
'(' : '||LeftParentheses||',
')' : '||RightParentheses||',
'--' : '||Dash||',
'\n' : '||Return||'}
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_tokenize(token_lookup)
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around it. This separates each symbol into its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
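A small added sketch of how such a dictionary can be applied so that each symbol becomes its own word (in this project the actual replacement happens inside helper.preprocess_and_save_data):
# Sketch: surround every symbol with spaces using the lookup table
sample_text = 'Moe_Szyslak: Hey, what can I get you?'
for symbol, token in token_lookup().items():
    sample_text = sample_text.replace(symbol, ' {} '.format(token))
print(sample_text.split())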
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
#The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length).
#Here the "2" means the input and the target
#Each batch contains two elements:
# The first element is a single batch of input with the shape [batch size, sequence length]
# The second element is a single batch of targets with the shape [batch size, sequence length]
def get_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
# TODO: Implement Function
# The input shape should be (batch_size, seq_length)
input = tf.placeholder(tf.int32, shape=(None, None), name='input')
# The output shape should be (batch_size, vocab_size)
output = tf.placeholder(tf.int32, shape=(None, None), name='output')
learning_rate = tf.placeholder(tf.float32, shape=None, name='learning_rate')
return input, output, learning_rate
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_inputs(get_inputs)
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
# num LSTM layers
num_layers = 1
def get_init_cell(batch_size, rnn_size):
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
# TODO: Implement Function
cell = tf.contrib.rnn.BasicLSTMCell(num_units=rnn_size)
cells = tf.contrib.rnn.MultiRNNCell(num_layers * [cell])
initial_state = cells.zero_state(batch_size, tf.float32)
initial_state = tf.identity(initial_state, name='initial_state')
return cells, initial_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_init_cell(get_init_cell)
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The Rnn size should be set using rnn_size
- Initalize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
# calculate from input_data to embed output
def get_embed(input_data, vocab_size, embed_dim):
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
# TODO: Implement Function
embedding = tf.Variable(tf.truncated_normal(shape=[vocab_size, embed_dim], mean=0, stddev=1)) # create embedding weight matrix here
embed = tf.nn.embedding_lookup(embedding, input_data) # use tf.nn.embedding_lookup to get the hidden layer output
return embed
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_embed(get_embed)
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
# calculate from embed output to LSTM output, fully dynamic unrolling of sequence steps
def build_rnn(cell, inputs):
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
# TODO: Implement Function
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
final_state = tf.identity(final_state, name='final_state')
return outputs, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_rnn(build_rnn)
Explanation: Build RNN
You created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
# TODO: Implement Function
embed = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embed)
logits = tf.layers.dense(outputs, vocab_size)
return logits, final_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_build_nn(build_nn)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
def get_batches(int_text, batch_size, seq_length):
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
# TODO: Implement Function
segment_len = (len(int_text) - 1) // batch_size
num_seqs = segment_len // seq_length
segment_len = num_seqs * seq_length
# use_text_len = segment_len * batch_size + 1
batches = np.zeros(shape=(num_seqs, 2, batch_size, seq_length))
for s in range(num_seqs):
# for j in range(2):
for b in range(batch_size):
batches[s, 0, b, :] = int_text[b*segment_len+s*seq_length : b*segment_len+s*seq_length+seq_length]
batches[s, 1, b, :] = int_text[b*segment_len+s*seq_length+1 : b*segment_len+s*seq_length+seq_length+1]
return batches
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_batches(get_batches)
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15], 2, 3) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2 3], [ 7 8 9]],
# Batch of targets
[[ 2 3 4], [ 8 9 10]]
],
# Second Batch
[
# Batch of Input
[[ 4 5 6], [10 11 12]],
# Batch of targets
[[ 5 6 7], [11 12 13]]
]
]
```
End of explanation
# Number of Epochs
num_epochs = 80
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 128
# Embedding Dimension Size
embed_dim = 200
# Sequence Length
seq_length = 20
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 26
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
save_dir = './save'
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
Explanation: Checkpoint
End of explanation
def get_tensors(loaded_graph):
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
# TODO: Implement Function
return loaded_graph.get_tensor_by_name('input:0'), \
loaded_graph.get_tensor_by_name('initial_state:0'), \
loaded_graph.get_tensor_by_name('final_state:0'), \
loaded_graph.get_tensor_by_name('probs:0')
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_get_tensors(get_tensors)
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
def pick_word(probabilities, int_to_vocab):
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
# TODO: Implement Function
rnd_idx = np.random.choice(len(probabilities), p=probabilities)
return int_to_vocab[rnd_idx]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_pick_word(pick_word)
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
<END_TASK> |
261 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Output Containers and Layout Managers
Output containers are objects that hold a collection of other objects and display all of their contents, even when those contents are complex interactive objects or MIME-type outputs.
By default the contents are just stacked up on the page, but you can configure them to get tabs, a grid, or cycling.
Stacked Output Containers
Step1: Tabbed Output Containers
Step2: Grid Output Containers
Step3: Cycling Output Container | <ASSISTANT_TASK:>
Python Code:
from beakerx import *
o = OutputContainer()
o.addItem("simplest example")
o.addItem([2, 3, 5, 7])
o.addItem(HTML("<h1>title</h1>"))
o.addItem(None)
o
rates = pd.read_csv('../resources/data/interest-rates.csv')
c = Color(120, 120, 120, 100)
plot1 = Plot(initWidth= 300, initHeight= 400)
plot1.add(Points(x= rates.y1, y=rates.y30, size= 3, displayName="y1 vs y30"))
plot1.add(Line(x= rates.y1, y=rates.y30, color= c))
plot1.add(Points(x= rates.m3, y=rates.y5, size= 3, displayName="m3 vs y5"))
plot1.add(Line(x= rates.m3, y=rates.y5, color= c))
plot1.setShowLegend(False)
plot2 = SimpleTimePlot(rates, ["m3", "y1"], showLegend=False, initWidth= 300, initHeight= 400)
plot3 = SimpleTimePlot(rates, ["y5", "y10"], showLegend=False, initWidth= 300, initHeight= 400)
Explanation: Output Containers and Layout Managers
Output containers are objects that hold a collection of other objects and display all of their contents, even when those contents are complex interactive objects or MIME-type outputs.
By default the contents are just stacked up on the page, but you can configure them to get tabs, a grid, or cycling.
Stacked Output Containers
End of explanation
table = pd.DataFrame({'a' : [1, 2, 1, 5], 'b' : ["a", "ab", "b", "ababa"]})
l = TabbedOutputContainerLayoutManager()
l.setBorderDisplayed(False)
o = OutputContainer()
o.setLayoutManager(l)
o.addItem(plot1, "Scatter with History")
o.addItem(plot2, "Short Term")
o.addItem(plot3, "Long Term")
o.addItem(table, "Pandas Table")
o
Explanation: Tabbed Output Containers
End of explanation
bars = CategoryPlot(initWidth= 300, initHeight= 400)
bars.add(CategoryBars(value= [[1.1, 2.4, 3.8], [1, 3, 4]]))
lg = GridOutputContainerLayoutManager(3)
og = OutputContainer()
og.setLayoutManager(lg)
og.addItem(plot1, "Scatter with History")
og.addItem(plot2, "Short Term")
og.addItem(plot3, "Long Term1")
og.addItem(bars, "Bar Chart")
og.addItem(HTML("<center>some<b>thing</b></center>"))
og.addItem(table, "Pandas Table")
og
Explanation: Grid Output Containers
End of explanation
l = CyclingOutputContainerLayoutManager()
l.setPeriod(2345); # milliseconds
l.setBorderDisplayed(False);
o = OutputContainer()
o.setLayoutManager(l)
o.addItem(plot1, "Scatter with History")
o.addItem(plot2, "Short Term")
o.addItem(plot3, "Long Term")
o
Explanation: Cycling Output Container
End of explanation
<END_TASK> |
262 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Lesson 3
Step1: Loops - Repeat with "while" and Iterate with "for"
While loop
A loop statement allows us to execute a statement or group of statements multiple times
While loop is the simplest loop which executes a block of code as long as an expression is True
A 'break' command can be used to exit any loop prematurely
A 'continue' command can be used to skip ahead to the next iteration without exiting the loop
<img src="whileloop.png">
*Now let's improve the previous function "dice()" such that the user has 5 attempts at entering the number and getting the right dice value. *
Step2: Exercise
Step3: For loop
For loops are used to iterate over data
It makes it possible for you to traverse data structures without knowing how large they are
You can even iterate over datastreams in real time
Let us write a program that asks the user to enter a text sentence and counts the number of times a particular character occurs in the sentence.
Step4: Excercise | <ASSISTANT_TASK:>
Python Code:
from random import randint #From the random library we import the randint function.
def dice(user_input):
'''This function mimics a dice. It generates a random number between 1 and 6 and
sees if the value given by the user equals the dice value'''
dice_value = randint(1,6) #Generate a random number between 1 and 6
if int(user_input) == dice_value:
print('Congratulations ! The value entered is the dice value')
else:
print('Sorry, the dice showed: ', dice_value)
user_input = int(input('Input a number between 1 and 6: '))
#!!!Input always passes strings !!!
dice(user_input) #we pass the user_input as an argument to the function.
Explanation: Lesson 3: Conditions and Loops
Conditions - Compare with "if", "elif" and "else"
Evaluate expressions which produce TRUE or FALSE as outcome
Based on the outcome of the expression a specific block of code is executed
The expressions could be: equality (==) , inequality (!=), less/greater than (<)(>), less/greater than or equal
(<=) (>=), membership (in, not in)
Expressions can be combined using "and", "or" and "not" (a short combined-condition example follows this section)
<img src="ifcondition.jpg">
Let us write a small program that mimics a dice by generating a random number between 1 and 6. It then asks you to enter a number between 1 and 6. If what you enter equals the dice value you win or else you lose.
End of explanation
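A short extra example (not part of the original lesson) showing membership and combined conditions, which the dice program above does not use:
# Extra example: membership test combined with a comparison
vowels = 'aeiou'
letter = 'e'
number = 7
if letter in vowels and number > 5:
    print('letter is a vowel and number is greater than 5')
elif letter in vowels or number > 5:
    print('only one of the two conditions holds')
else:
    print('neither condition holds')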
from random import randint
def dice(user_input):
dice_value = randint(1,6) #Generate a random number between 1 and 6
if user_input == dice_value:
print('Congratulations ! The value entered is the dice value')
else:
print('Sorry, the dice showed: ', dice_value)
iterate = 0
while iterate < 5:
user_input = int(input('Input a number between 1 and 6: '))
dice(user_input)
iterate = iterate + 1
Explanation: Loops - Repeat with "while" and Iterate with "for"
While loop
A loop statement allows us to execute a statement or group of statements multiple times
While loop is the simplest loop which executes a block of code as long as an expression is True
A 'break' command can be used to exit any loop prematurely
A 'continue' command can be used to skip ahead to the next iteration without exiting the loop
<img src="whileloop.png">
*Now let's improve the previous function "dice()" such that the user has 5 attempts at entering the number and getting the right dice value. *
End of explanation
# Enter code here
Explanation: Exercise: Can you edit the previous code so that it stops asking the user to enter a number when the value entered matches the dice value? Hint: You will need to use the "break" command within the while loop. Also remember that you can use the "return" command to pass values back to the calling function.
End of explanation
def count(sentence, char):
'''This function counts the number of times a particular character occurs in a given
sentence'''
count = 0
for x_char in sentence:
if x_char == char:
count += 1
print("Number of times the character '",char, "' occurs is: ", count) # Careful with the quotes !
sentence = input('Input your sentence: ')
sentence = sentence.lower() #the lower() command converts the sentence to lower case
char = input('Input the character that needs to be counted: ').lower()
count(sentence, char)
Explanation: For loop
For loops are used to iterate over data
It makes it possible for you to traverse data structures without knowing how large they are
You can even iterate over datastreams in real time
Let us write a program that asks the user to enter a text sentence and counts the number of times a particular character occurs in the sentence.
End of explanation
# Enter code here
Explanation: Exercise: Can you use the for loop so that it counts the number of times a given word occurs in a sentence? Hint: Use the split() command to split the sentence into a list of words and then use the for loop to traverse through the list.
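One possible approach, for reference only:
def count_word(sentence, word):
    count = 0
    for w in sentence.split():      # split() turns the sentence into a list of words
        if w == word:
            count += 1
    print("Number of times the word '", word, "' occurs is: ", count)
sentence = input('Input your sentence: ').lower()
word = input('Input the word that needs to be counted: ').lower()
count_word(sentence, word)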
End of explanation
<END_TASK> |
263 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Traffic flow with an on-ramp
In this chapter we return to the LWR traffic model that we investigated in two earlier chapters. The LWR model involves a single length of one-way road; in this chapter we will think of this road as a highway. On a real highway, there are cars entering and leaving the highway from other roads. In general, real traffic flow must be modeled on a network of roads. the development of continuum traffic models based on LWR and other simple models is an important and very active area of research; see for instance <cite data-cite="holden1995mathematical"><a href="riemann.html#holden1995mathematical">(Holden, 1995)</a></cite> for an investigation of the Riemann problem at a junction, and <cite data-cite="garavello2006traffic"><a href="riemann.html#garavello2006traffic">(Garavello, 2006)</a></cite> for an overview of the area. Here we take a first step in that direction by considering the presence of a single on-ramp, where traffic enters the highway.
Let the flux of cars from the on-ramp be denoted by $D$; we assume that $D$ is constant in time but concentrated at a single point ($x=0$ in space). Our model equation then becomes
\begin{align} \label{TFR
Step1: Light traffic, little inflow
What happens when the on-ramp has a relatively small flux of cars, and the highway around the ramp is not congested? There will be a region of somewhat higher density starting at the ramp and propagating downstream. This is demonstrated in the example below.
Step2: In contrast to the LWR model without a ramp, here we see three constant states separated by two waves in the Riemann solution. The first is a stationary wave where the traffic density abruptly increases due to the cars entering from the ramp, as predicted by \eqref{TFR
Step3: The influx of cars from the ramp here causes a traffic jam that moves upstream. As we discuss further below, some real highways limit the influx in order to prevent this.
Experiment with the values of $\rho_r$ and $D$ in the example above. Can you give a precise condition that determines whether the shock will move left or right?
Light traffic, heavy inflow
Now we come to the most interesting case. Since the maximum flux is $1/4$, it follows that if $f(\rho_l) + D = 1/4$, then the oncoming traffic from the highway and the on-ramp can just fit onto the road at $x=0$. The smaller value of $\rho_l$ for which this equation holds is $\rho^* = 1/4 - \sqrt{D}$. If $\rho_l$ exceeds this value, then not all the cars arriving at $x=0$ can fit on the road there; since our model gives priority to the cars coming from the on-ramp, the road to the left of $x=0$ must suffer a traffic jam -- a shock wave moving to the left.
As long as $\rho_r < 1/2$, the value of $\rho^+$ will be exactly $1/2$, so as to maximize the flux through $x=0$. Downstream, a rarefaction will form as cars accelerate into the less-congested highway.
Step4: Notice that in the extreme case that $D=1/4$, the cars from the on-ramp completely block the cars coming from the left; those cars come to a complete stop and never pass $x=0$. This may seem surprising, since the density of cars to the right of $x=0$ is just $1/2$. However, since the flux must increase by $1/4$ at $x=0$, it follows that the flux just to the left of $x=0$ must be zero.
Counterintuitively, when two roads merge, limiting the influx of traffic from one or both of them can significantly increase the overall rate of traffic flow. Contrary to our model, the usual approach is to prioritize the cars already on the highway and restrict the influx of cars from an on-ramp. This is done in practice nowadays on many highway on-ramps in congested areas.
Congested upstream, uncongested downstream
Step5: Congested on both sides
Next let us consider what happens if the incoming traffic from the upstream highway and the on-ramp exceeds the maximum flux, but the road is also congested for $x>0$ (i.e., $\rho_r>1/2$). Then no waves can travel to the right, and a left-going shock will form. If downstream congestion is too high, then the traffic from the on-ramp will not all be able to enter the highway and no solution is possible in this model (see the second condition for existence, above).
Step6: Further examples
Step7: Interactive solver | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from clawpack import pyclaw
from clawpack import riemann
from ipywidgets import interact
from ipywidgets import widgets
from exact_solvers import traffic_ramps
from utils import riemann_tools
def c(rho, xi):
return (1.-2*rho)
def make_plot_function(rho_l,rho_r,D):
states, speeds, reval, wave_types = traffic_ramps.exact_riemann_solution(rho_l,rho_r,D)
def plot_function(t):
ax = riemann_tools.plot_riemann(states,speeds,reval,wave_types,t=t,t_pointer=0,
extra_axes=True,variable_names=['Density']);
# Characteristic plotting isn't working right for this problem
riemann_tools.plot_characteristics(reval,c,None,ax[0])
traffic_ramps.phase_plane_plot(rho_l,rho_r,D,axes=ax[2])
ax[1].set_ylim(0,1)
plt.show()
return plot_function
def riemann_solution(rho_l, rho_r, D):
plot_function = make_plot_function(rho_l,rho_r,D)
interact(plot_function, t=widgets.FloatSlider(value=0.1,min=0,max=.9));
Explanation: Traffic flow with an on-ramp
In this chapter we return to the LWR traffic model that we investigated in two earlier chapters. The LWR model involves a single length of one-way road; in this chapter we will think of this road as a highway. On a real highway, there are cars entering and leaving the highway from other roads. In general, real traffic flow must be modeled on a network of roads. the development of continuum traffic models based on LWR and other simple models is an important and very active area of research; see for instance <cite data-cite="holden1995mathematical"><a href="riemann.html#holden1995mathematical">(Holden, 1995)</a></cite> for an investigation of the Riemann problem at a junction, and <cite data-cite="garavello2006traffic"><a href="riemann.html#garavello2006traffic">(Garavello, 2006)</a></cite> for an overview of the area. Here we take a first step in that direction by considering the presence of a single on-ramp, where traffic enters the highway.
Let the flux of cars from the on-ramp be denoted by $D$; we assume that $D$ is constant in time but concentrated at a single point ($x=0$ in space). Our model equation then becomes
\begin{align} \label{TFR:balance_law}
\rho_t + \left(\rho(1-\rho)\right)_x & = D \delta(x),
\end{align}
where $\delta(x)$ is the Dirac delta function. Equation \eqref{TFR:balance_law} is our first example of a balance law. The term on the right hand side does not take the form of a flux, and the total mass of cars is not conserved. We refer to the right-hand-side term as a source term -- quite appropriately in the present context, since it represents a source of cars entering the highway. In a more realistic model, traffic on the on-ramp itself would also be modeled. However, our goal here is primarily to illustrate the effect of a source term like that in \eqref{TFR:balance_law} on the solution of the Riemann problem.
Typically, source terms have only an infinitesimal effect on the Riemann solution over short times, since they are distributed in space. The term considered here is an example of a singular source term; it has a non-negligible effect on the Riemann solution because it is concentrated at $x=0$.
Recall that the flux of cars in the LWR model is given by
$$f(\rho) = \rho(1-\rho)$$
where $0 \le \rho \le 1$. Thus the maximum flux is $f_\text{max} = 1/4$, achieved when $\rho=1/2$. We assume always that $D \le 1/4$, so that all the cars arriving from the on-ramp can enter the highway.
As discussed already in the chapter on traffic with a varying speed limit, the flux of cars must be continuous everywhere, and in particular at $x=0$. Let $\rho^-, \rho^+$ denote the density $\rho$ in the limit as $\xi \to 0$ from the left and right, respectively. Then this condition means that
\begin{align} \label{TFR:source_balance}
f(\rho^-) + D = f(\rho^+).
\end{align}
For $D\ne0$, this condition implies that a stationary jump exists at $x=0$, similar to the stationary jump we found in the case of a varying speed limit.
One approach to solving the Riemann problem is to focus on finding $\rho^-$ and $\rho^+$; the wave structure on either side of $x=0$ can then be deduced in the same way we have done for problems without a source term -- connecting $\rho_l$ to $\rho_-$ and $\rho_r$ to $\rho_+$ by entropy-satisfying shock or rarefaction waves. This approach was undertaken by Greenberg et. al. in <cite data-cite="greenberg1997analysis"><a href="riemann.html#greenberg1997analysis">(Greenberg, 1997)</a></cite> for Burgers' equation; the main results (Table 1 therein) can be transferred to the LWR model in a straightforward way. As they noted, there is typically more than one choice of $(\rho^+, \rho^-)$ that leads to an entropy-satisfying weak solution; some additional admissibility condition is required in order to choose one. Herein we will motivate the choice of $\rho^+, \rho^-$ based on physical considerations; the resulting values agree with those of Greenberg et. al. (see also <cite data-cite="isaacson1992nonlinear"><a href="riemann.html#isaacson1992nonlinear">(Isaacson, 1992)</a></cite> for yet another approach that yields the same admissibility conditions).
Spatially-varying fluxes and source terms
The similarity between the existence of an on-ramp at $x=0$ and a change in the speed limit at $x=0$ can be seen mathematically as follows. For the varying speed limit, we studied a conservation law of the form
$$\rho_t + f(\rho,x)_x =0.$$
Using the chain rule, this is equivalent to
$$\rho_t + f_\rho(\rho,x) \rho_x = - f_x(\rho,x).$$
Hence the variable-coefficient system can also be viewed as a balance law. If $f$ is discontinuous at $x=0$, then
$f_x$ is a delta function. Notice that the presence of an on-ramp (positive source term) corresponds to a decrease in the speed limit. This makes sense -- both of these have the effect of reducing the rate at which cars from upstream ($x<0$) can proceed downstream ($x>0$). Thus the Riemann solutions we find in this chapter will be similar to those found in the presence of a decrease in speed limit.
In the remainder of the chapter, we investigate the solution of the Riemann problem for this balance law.
Conditions for existence of a solution
In our model, cars entering from the on-ramp are always given priority. In a real-world scenario, traffic on the on-ramp could also back up and the flux $D$ from the ramp could be decreased. However, a much more complicated model would be required in order to account for this; see <cite data-cite="delle2014pde"><a href="riemann.html#delle2014pde">(delle Monache, 1992)</a></cite> for an example of such a model.
The flux $D$ from the on-ramp cannot raise the density above $\rho_\text{max}=1$ (representing bumper-to-bumper traffic). This leads to some restrictions on $D$ in order to guarantee existence of a solution to the Riemann problem:
$D \le 1/4$. This condition is necessary since otherwise the flux from the on-ramp would exceed the maximum flux of the highway, even without any other oncoming traffic.
If $\rho_r > 1/2$, then $D \le f(\rho_r)$. The reason for this is as follows: if $\rho_r > 1/2$, then characteristics to the right of $x=0$ go to the left. Hence there cannot be any right-going wave (a more detailed analysis shows that a right-going transonic shock is impossible), and it must be that $\rho^+ = \rho_r$. Thus $D = f(\rho_r) - f(\rho^-) \le f(\rho_r)$.
It turns out that these two conditions are also sufficient for the existence of a solution to the Riemann problem.
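These two conditions are easy to check in code; a minimal helper might look like the sketch below (it is not part of the exact_solvers.traffic_ramps module used in this notebook):
def riemann_solution_exists(rho_r, D):
    f = lambda rho: rho*(1.-rho)
    return (D <= 0.25) and ((rho_r <= 0.5) or (D <= f(rho_r)))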
End of explanation
rho_l = 0.2
rho_r = 0.2
D = 0.05
riemann_solution(rho_l, rho_r, D)
traffic_ramps.plot_car_trajectories(rho_l,rho_r,D)
Explanation: Light traffic, little inflow
What happens when the on-ramp has a relatively small flux of cars, and the highway around the ramp is not congested? There will be a region of somewhat higher density starting at the ramp and propagating downstream. This is demonstrated in the example below.
End of explanation
rho_l = 0.2
rho_r = 0.8
D = 0.05
riemann_solution(rho_l, rho_r, D)
traffic_ramps.plot_car_trajectories(rho_l,rho_r,D,xmax=0.2)
Explanation: In contrast to the LWR model without a ramp, here we see three constant states separated by two waves in the Riemann solution. The first is a stationary wave where the traffic density abruptly increases due to the cars entering from the ramp, as predicted by \eqref{TFR:source_balance}. Indeed that condition determines the middle state $\rho_m$ as the solution of
$$f(\rho_l) + D = f(\rho_m)$$
For given values of $\rho_l$ and $D$, this is a quadratic equation for $\rho_m$, with solution
\begin{align} \label{TFR:qm1}
\rho_m = \frac{1 \pm \sqrt{1-4(f(\rho_l)+D)}}{2}.
\end{align}
As in the case of varying speed limit, we can choose the physically relevant solution by applying the condition that the characteristic speed not change sign at $x=0$. This dictates that we choose the minus sign, so that $\rho_m<1/2$, since $\rho_l < 1/2$.
Downstream, there is a rarefaction as these cars accelerate and again spread out.
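As a quick numerical check of \eqref{TFR:qm1} (a small sketch using the values of the first example above, $\rho_l=0.2$ and $D=0.05$):
f = lambda rho: rho*(1.-rho)
rho_l, D = 0.2, 0.05
rho_m = (1. - np.sqrt(1. - 4.*(f(rho_l) + D)))/2.   # minus sign, so rho_m < 1/2
print(rho_m, f(rho_m), f(rho_l) + D)                # rho_m = 0.3 and both fluxes equal 0.21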
The solution just proposed will break down if either of the following occur:
If the downstream density $\rho_r$ is greater than $\rho_m$, then a shock wave will form rather than a rarefaction.
If the combined flux from upstream and from the ramp exceeds $f_\text{max}$, there will be a shock wave moving upstream due to congestion at the mouth of the on-ramp. This happens if $f(\rho_l) + D > 1/4$; notice that this is precisely the condition for the value of $\rho_m$ in \eqref{TFR:qm1} to become complex.
We consider each of these scenarios in the following sections.
Uncongested upstream, congested downstream: transonic shock
What if upstream traffic and flux from the on-ramp are light, but traffic is significantly heavier just after the on-ramp? In this case a shock wave will form, since if $\rho_r > \rho_l$, characteristics from the left and right regions must cross. The shock may move to the left or right, depending on how congested the downstream segment is. In either case, there will again be a stationary jump at $x=0$ due to the cars entering from the on-ramp.
End of explanation
rho_l = 0.2
rho_r = 0.2
D = 0.25
riemann_solution(rho_l, rho_r, D)
traffic_ramps.plot_car_trajectories(rho_l,rho_r,D)
Explanation: The influx of cars from the ramp here causes a traffic jam that moves upstream. As we discuss further below, some real highways limit the influx in order to prevent this.
Experiment with the values of $\rho_r$ and $D$ in the example above. Can you give a precise condition that determines whether the shock will move left or right?
Light traffic, heavy inflow
Now we come to the most interesting case. Since the maximum flux is $1/4$, it follows that if $f(\rho_l) + D = 1/4$, then the oncoming traffic from the highway and the on-ramp can just fit onto the road at $x=0$. The smaller value of $\rho_l$ for which this equation holds is $\rho^* = 1/4 - \sqrt{D}$. If $\rho_l$ exceeds this value, then not all the cars arriving at $x=0$ can fit on the road there; since our model gives priority to the cars coming from the on-ramp, the road to the left of $x=0$ must suffer a traffic jam -- a shock wave moving to the left.
As long as $\rho_r < 1/2$, the value of $\rho^+$ will be exactly $1/2$, so as to maximize the flux through $x=0$. Downstream, a rarefaction will form as cars accelerate into the less-congested highway.
End of explanation
rho_l = 0.6
rho_r = 0.2
D = 0.12
riemann_solution(rho_l, rho_r, D)
traffic_ramps.plot_car_trajectories(rho_l,rho_r,D)
Explanation: Notice that in the extreme case that $D=1/4$, the cars from the on-ramp completely block the cars coming from the left; those cars come to a complete stop and never pass $x=0$. This may seem surprising, since the density of cars to the right of $x=0$ is just $1/2$. However, since the flux must increase by $1/4$ at $x=0$, it follows that the flux just to the left of $x=0$ must be zero.
Counterintuitively, when two roads merge, limiting the influx of traffic from one or both of them can significantly increase the overall rate of traffic flow. Contrary to our model, the usual approach is to prioritize the cars already on the highway and restrict the influx of cars from an on-ramp. This is done in practice nowadays on many highway on-ramps in congested areas.
Congested upstream, uncongested downstream
End of explanation
rho_l = 0.6
rho_r = 0.8
D = 0.12
riemann_solution(rho_l, rho_r, D)
traffic_ramps.plot_car_trajectories(rho_l,rho_r,D)
Explanation: Congested on both sides
Next let us consider what happens if the incoming traffic from the upstream highway and the on-ramp exceeds the maximum flux, but the road is also congested for $x>0$ (i.e., $\rho_r>1/2$). Then no waves can travel to the right, and a left-going shock will form. If downstream congestion is too high, then the traffic from the on-ramp will not all be able to enter the highway and no solution is possible in this model (see the second condition for existence, above).
End of explanation
rho_l = 0.1
rho_r = 0.6
D = 0.08
riemann_solution(rho_l, rho_r, D)
traffic_ramps.plot_car_trajectories(rho_l,rho_r,D)
rho_l = 1.0
rho_r = 0.7
D = 0.1
riemann_solution(rho_l, rho_r, D)
traffic_ramps.plot_car_trajectories(rho_l,rho_r,D)
Explanation: Further examples
End of explanation
f = lambda q: q*(1-q)
def plot_all(rho_l,rho_r,D):
states, speeds, reval, wave_types = traffic_ramps.exact_riemann_solution(rho_l,rho_r,D)
ax = riemann_tools.plot_riemann(states,speeds,reval,wave_types,t=0.5,extra_axes=2);
riemann_tools.plot_characteristics(reval,c,None,ax[0])
traffic_ramps.phase_plane_plot(rho_l,rho_r,D,axes=ax[2],show=False)
traffic_ramps.plot_car_trajectories(rho_l,rho_r,D,axes=ax[3])
plt.show()
interact(plot_all,
rho_l = widgets.FloatSlider(min=0.,max=1.,step=0.01,value=0.4,description=r'$\rho_l$'),
rho_r = widgets.FloatSlider(min=0.,max=1.,step=0.01,value=0.7,description=r'$\rho_r$'),
D = widgets.FloatSlider(min=0.,max=0.25,step=0.01,value=0.1),
);
Explanation: Interactive solver
End of explanation
<END_TASK> |
264 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Title
Step1: Make a dataframe
Step2: Add a new column for elderly | <ASSISTANT_TASK:>
Python Code:
# Import required modules
import pandas as pd
import numpy as np
Explanation: Title: Create a Column Based on a Conditional in pandas
Slug: pandas_create_column_using_conditional
Summary: Create a Column Based on a Conditional in pandas
Date: 2016-05-01 12:00
Category: Python
Tags: Data Wrangling
Authors: Chris Albon
Preliminaries
End of explanation
data = {'name': ['Jason', 'Molly', 'Tina', 'Jake', 'Amy'],
'age': [42, 52, 36, 24, 73],
'preTestScore': [4, 24, 31, 2, 3],
'postTestScore': [25, 94, 57, 62, 70]}
df = pd.DataFrame(data, columns = ['name', 'age', 'preTestScore', 'postTestScore'])
df
Explanation: Make a dataframe
End of explanation
# Create a new column called df.elderly where the value is yes
# if df.age is greater than 50 and no if not
df['elderly'] = np.where(df['age']>=50, 'yes', 'no')
# View the dataframe
df
Explanation: Add a new column for elderly
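If you need more than two categories, np.select follows the same pattern; a small sketch (the 'age_group' column and its labels here are just illustrative, not part of the original recipe):
conditions = [df['age'] >= 65, df['age'] >= 50]
choices = ['senior', 'older']
df['age_group'] = np.select(conditions, choices, default='younger')
df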
End of explanation
<END_TASK> |
265 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Reading µs-ALEX data from Photon-HDF5 with h5py
In this notebook we show how to read a µs-ALEX smFRET measurement stored in
Photon-HDF5 format
using python and a few common scientific libraries (numpy, h5py, matplotlib).
Specifically, we show how to load timestamps, build an alternation histogram
and select photons in the donor and acceptor excitation periods.
See also a similar notebook using
pytables instead of h5py.
Step2: 1. Utility functions
Here we define an utility function to print HDF5 file contents
Step3: 2. Open the data file
Let assume we have a Photon-HDF5 file at the following location
Step4: We can open the file, as a normal HDF5 file
Step5: The object h5file is a pytables file reference. The root group is accessed with h5file.root.
3. Print the content
Let's start by taking a look at the file content
Step6: We see the typical Photon-HDF5 structure. In particular the field description provides a short description of the measurement and acquisition_duration tells that the acquisition lasted 600 seconds.
As an example let's take a look at the content of the sample group
Step7: Finally, we define a shortcut to the photon_data group to save some typing later
Step8: 4. Reading the data
First, we make sure the file contains the right type of measurement
Step9: OK, that's what we expect.
Now we can load all the timestamps (including timestamps unit) and detectors arrays
Step10: We may want to check the excitation wavelengths used in the measurement. This information is found in the setup group
Step11: Now, let's load the definitions of donor/acceptor channel and excitation periods
Step12: These numbers define the donor and acceptor alternation periods as shown below
Step13: Now that the data has been loaded we can plot an alternation histogram using matplotlib
Step14: 6. Timestamps in different excitation periods
We conclude by showing, as an example, how to create arrays of timestamps containing only donor or acceptor exitation photons. | <ASSISTANT_TASK:>
Python Code:
from __future__ import division, print_function # only needed on py2
%matplotlib inline
import numpy as np
import h5py
import matplotlib.pyplot as plt
Explanation: Reading µs-ALEX data from Photon-HDF5 with h5py
In this notebook we show how to read a µs-ALEX smFRET measurement stored in
Photon-HDF5 format
using python and a few common scientific libraries (numpy, h5py, matplotlib).
Specifically, we show how to load timestamps, build an alternation histogram
and select photons in the donor and acceptor excitation periods.
See also a similar notebook using
pytables instead of h5py.
End of explanation
def print_children(group):
Print all the sub-groups in `group` and leaf-nodes children of `group`.
Parameters:
data_file (h5py HDF5 file object): the data file to print
for name, value in group.items():
if isinstance(value, h5py.Group):
content = '(Group)'
else:
content = value[()]
print(name)
print(' Content: %s' % content)
print(' Description: %s\n' % value.attrs['TITLE'].decode())
Explanation: 1. Utility functions
Here we define a utility function to print HDF5 file contents:
End of explanation
filename = '../data/0023uLRpitc_NTP_20dT_0.5GndCl.hdf5'
Explanation: 2. Open the data file
Let's assume we have a Photon-HDF5 file at the following location:
End of explanation
h5file = h5py.File(filename, 'r')  # open the file read-only
Explanation: We can open the file, as a normal HDF5 file
End of explanation
print_children(h5file)
Explanation: The object h5file is an h5py File object. Groups and datasets are accessed like dictionary entries, e.g. h5file['photon_data']; the root group itself is h5file['/'].
3. Print the content
Let's start by taking a look at the file content:
End of explanation
print_children(h5file['sample'])
Explanation: We see the typical Photon-HDF5 structure. In particular the field description provides a short description of the measurement and acquisition_duration tells that the acquisition lasted 600 seconds.
As an example let's take a look at the content of the sample group:
End of explanation
photon_data = h5file['photon_data']
Explanation: Finally, we define a shortcut to the photon_data group to save some typing later:
End of explanation
photon_data['measurement_specs']['measurement_type'][()].decode()
Explanation: 4. Reading the data
First, we make sure the file contains the right type of measurement:
End of explanation
timestamps = photon_data['timestamps'][:]
timestamps_unit = photon_data['timestamps_specs']['timestamps_unit'][()]
detectors = photon_data['detectors'][:]
print('Number of photons: %d' % timestamps.size)
print('Timestamps unit: %.2e seconds' % timestamps_unit)
print('Detectors: %s' % np.unique(detectors))
Explanation: OK, that's what we expect.
Now we can load all the timestamps (including timestamps unit) and detectors arrays:
End of explanation
h5file['setup']['excitation_wavelengths'][:]
Explanation: We may want to check the excitation wavelengths used in the measurement. This information is found in the setup group:
End of explanation
donor_ch = photon_data['measurement_specs']['detectors_specs']['spectral_ch1'][()]
acceptor_ch = photon_data['measurement_specs']['detectors_specs']['spectral_ch2'][()]
print('Donor CH: %d Acceptor CH: %d' % (donor_ch, acceptor_ch))
alex_period = photon_data['measurement_specs']['alex_period'][()]
donor_period = photon_data['measurement_specs']['alex_excitation_period1'][()]
offset = photon_data['measurement_specs']['alex_offset'][()]
acceptor_period = photon_data['measurement_specs']['alex_excitation_period2'][()]
print('ALEX period: %d \nOffset: %4d \nDonor period: %s \nAcceptor period: %s' % \
(alex_period, offset, donor_period, acceptor_period))
Explanation: Now, let's load the definitions of donor/acceptor channel and excitation periods:
End of explanation
timestamps_donor = timestamps[detectors == donor_ch]
timestamps_acceptor = timestamps[detectors == acceptor_ch]
Explanation: These numbers define the donor and acceptor alternation periods as shown below:
$$2180 < \widetilde{t} < 3900 \qquad \textrm{donor period}$$
$$200 < \widetilde{t} < 1800 \qquad \textrm{acceptor period}$$
where $\widetilde{t}$ represents the (timestamps - offset) MODULO alex_period.
For more information
please refer to the measurements_specs section
of the Reference Documentation.
5. Plotting the alternation histogram
Let's start by separating timestamps from donor and acceptor channels:
End of explanation
fig, ax = plt.subplots()
ax.hist((timestamps_acceptor - offset) % alex_period, bins=100, alpha=0.8, color='red', label='donor')
ax.hist((timestamps_donor - offset) % alex_period, bins=100, alpha=0.8, color='green', label='acceptor')
ax.axvspan(donor_period[0], donor_period[1], alpha=0.3, color='green')
ax.axvspan(acceptor_period[0], acceptor_period[1], alpha=0.3, color='red')
ax.set_xlabel('(timestamps - offset) MOD alex_period')
ax.set_title('ALEX histogram')
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), frameon=False);
Explanation: Now that the data has been loaded we can plot an alternation histogram using matplotlib:
End of explanation
timestamps_mod = (timestamps - offset) % alex_period
donor_excitation = (timestamps_mod < donor_period[1])*(timestamps_mod > donor_period[0])
acceptor_excitation = (timestamps_mod < acceptor_period[1])*(timestamps_mod > acceptor_period[0])
timestamps_Dex = timestamps[donor_excitation]
timestamps_Aex = timestamps[acceptor_excitation]
fig, ax = plt.subplots()
ax.hist((timestamps_Dex - offset) % alex_period, bins=np.arange(0, alex_period, 40), alpha=0.8, color='green', label='D_ex')
ax.hist((timestamps_Aex - offset) % alex_period, bins=np.arange(0, alex_period, 40), alpha=0.8, color='red', label='A_ex')
ax.set_xlabel('(timestamps - offset) MOD alex_period')
ax.set_title('ALEX histogram (selected periods only)')
ax.legend(loc='center left', bbox_to_anchor=(1, 0.5), frameon=False);
#plt.close('all')
Explanation: 6. Timestamps in different excitation periods
We conclude by showing, as an example, how to create arrays of timestamps containing only donor or acceptor excitation photons.
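As a small extra check (an addition, not in the original notebook), we can print what fraction of all photons falls in each excitation window:
print('Donor-excitation fraction: %.2f' % (timestamps_Dex.size / timestamps.size))
print('Acceptor-excitation fraction: %.2f' % (timestamps_Aex.size / timestamps.size))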
End of explanation
<END_TASK> |
266 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
English Wikipedia page views, 2008 - 2017
For this assignment, your job is to analyze traffic on English Wikipedia over time, and then document your process and the resulting dataset and visualization according to best practices for open research that were outlined for you in class.
Example API request
You can use this example API request as a starting point for building your API queries. Note that the Legacy Pagecounts API has slightly different schema than the pageview API shown here.
This sample API request would get you all pageviews by web crawlers on the mobile website for English Wikipedia during the month of September, 2017.
Scrape Pageview Mobile Site traffic (current api)
Step1: Scrape Pageview Mobile App traffic (current api)
Step2: Scrape Pageview Desktop Site traffic (current api)
Step3: Scrape Pageview All Site traffic (current api)
Step4: Scrape Pagecount desktop Site traffic (legacy api)
Step5: Scrape Pagecount all Site traffic (legacy api)
Step6: Scrape Pagecount mobile Site traffic (legacy api)
Step7: Make a dictionary and add relevant info. timestamp = key, counts = value. Repeat for all | <ASSISTANT_TASK:>
Python Code:
#current
import requests
endpoint = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate/{project}/{access}/{agent}/{granularity}/{start}/{end}'
headers={'User-Agent' : 'https://github.com/your_github_username', 'From' : 'abhiv@uw.edu'}
params = {'project' : 'en.wikipedia.org',
'access' : 'mobile-web',
'agent' : 'user',
'granularity' : 'monthly',
'start' : '2015070100',
'end' : '2017091000'#use the first day of the following month to ensure a full month of data is collected
}
api_call = requests.get(endpoint.format(**params), headers=headers)
pageview_mobile_site = api_call.json()
print(pageview_mobile_site)
Explanation: English Wikipedia page views, 2008 - 2017
For this assignment, your job is to analyze traffic on English Wikipedia over time, and then document your process and the resulting dataset and visualization according to best practices for open research that were outlined for you in class.
Example API request
You can use this example API request as a starting point for building your API queries. Note that the Legacy Pagecounts API has slightly different schema than the pageview API shown here.
This sample API request would get you all pageviews by web crawlers on the mobile website for English Wikipedia during the month of September, 2017.
Scrape Pageview Mobile Site traffic (current api)
End of explanation
#current
import requests
endpoint = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate/{project}/{access}/{agent}/{granularity}/{start}/{end}'
headers={'User-Agent' : 'https://github.com/your_github_username', 'From' : 'abhiv@uw.edu'}
params = {'project' : 'en.wikipedia.org',
'access' : 'mobile-app',
'agent' : 'user',
'granularity' : 'monthly',
'start' : '2015070100',
'end' : '2017091000'#use the first day of the following month to ensure a full month of data is collected
}
api_call = requests.get(endpoint.format(**params), headers=headers)
pageview_mobile_app = api_call.json()
print(pageview_mobile_app)
Explanation: Scrape Pageview Mobile App traffic (current api)
End of explanation
#current
import requests
endpoint = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate/{project}/{access}/{agent}/{granularity}/{start}/{end}'
headers={'User-Agent' : 'https://github.com/your_github_username', 'From' : 'abhiv@uw.edu'}
params = {'project' : 'en.wikipedia.org',
'access' : 'desktop',
'agent' : 'user',
'granularity' : 'monthly',
'start' : '2015070100',
'end' : '2017091000'#use the first day of the following month to ensure a full month of data is collected
}
api_call = requests.get(endpoint.format(**params), headers=headers)
pageview_desktop_site = api_call.json()
print(pageview_desktop_site)
Explanation: Scrape Pageview Desktop Site traffic (current api)
End of explanation
#current
import requests
endpoint = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate/{project}/{access}/{agent}/{granularity}/{start}/{end}'
headers={'User-Agent' : 'https://github.com/your_github_username', 'From' : 'abhiv@uw.edu'}
params = {'project' : 'en.wikipedia.org',
'access' : 'all-access',
'agent' : 'user',
'granularity' : 'monthly',
'start' : '2015070100',
'end' : '2017091000'#use the first day of the following month to ensure a full month of data is collected
}
api_call = requests.get(endpoint.format(**params), headers=headers)
pageview_all_site = api_call.json()
print(pageview_all_site)
Explanation: Scrape Pageview All Site traffic (current api)
End of explanation
#Legacy
import requests
endpoint = 'https://wikimedia.org/api/rest_v1/metrics/legacy/pagecounts/aggregate/{project}/{access}/{granularity}/{start}/{end}'
headers={'User-Agent' : 'https://github.com/your_github_username', 'From' : 'abhiv@uw.edu'}
params = {'project' : 'en.wikipedia.org',
'access' : 'desktop-site',
'granularity' : 'monthly',
'start' : '2008010100',
'end' : '2016071000'#use the first day of the following month to ensure a full month of data is collected
}
api_call = requests.get(endpoint.format(**params), headers=headers)
pagecounts_desktop_site = api_call.json()
print(pagecounts_desktop_site)
Explanation: Scrape Pagecount desktop Site traffic (legacy api)
End of explanation
#Legacy
import requests
endpoint = 'https://wikimedia.org/api/rest_v1/metrics/legacy/pagecounts/aggregate/{project}/{access}/{granularity}/{start}/{end}'
headers={'User-Agent' : 'https://github.com/your_github_username', 'From' : 'abhiv@uw.edu'}
params = {'project' : 'en.wikipedia.org',
'access' : 'all-sites',
'granularity' : 'monthly',
'start' : '2008010100',
'end' : '2016071000'#use the first day of the following month to ensure a full month of data is collected
}
api_call = requests.get(endpoint.format(**params), headers=headers)
pagecounts_all_sites = api_call.json()
print(pagecounts_all_sites)
Explanation: Scrape Pagecount all Site traffic (legacy api)
End of explanation
#Legacy
import requests
endpoint = 'https://wikimedia.org/api/rest_v1/metrics/legacy/pagecounts/aggregate/{project}/{access}/{granularity}/{start}/{end}'
headers={'User-Agent' : 'https://github.com/your_github_username', 'From' : 'abhiv@uw.edu'}
params = {'project' : 'en.wikipedia.org',
'access' : 'mobile-site',
'granularity' : 'monthly',
'start' : '2008010100',
'end' : '2016071000'#use the first day of the following month to ensure a full month of data is collected
}
api_call = requests.get(endpoint.format(**params), headers=headers)
pagecounts_mobile_site = api_call.json()
print(pagecounts_mobile_site)
pagecounts_mobile_site['items']
pagecounts_mobile_site['items'][0]['count']
pageview_mobile_site['items'][3]['views']
pageview_mobile_site['items'][3]['timestamp']
Explanation: Scrape Pagecount mobile Site traffic (legacy api)
End of explanation
#pageview_mobile_site
dict_pageview_mobile_site = {}
for item in pageview_mobile_site['items']:
    dict_pageview_mobile_site[item['timestamp']] = item['views']
dict_pageview_mobile_site
#pageview_desktop_site
dict_pageview_desktop_site = {}
for item in pageview_desktop_site['items']:
    dict_pageview_desktop_site[item['timestamp']] = item['views']
dict_pageview_desktop_site
#pageview_all_site
dict_pageview_all_site = {}
for item in pageview_all_site['items']:
    dict_pageview_all_site[item['timestamp']] = item['views']
dict_pageview_all_site
#pagecount_all_site
dict_pagecounts_all_sites = {}
for item in pagecounts_all_sites['items']:
    dict_pagecounts_all_sites[item['timestamp']] = item['count']
dict_pagecounts_all_sites
dict_pagecounts_desktop_site = {}
for item in pagecounts_desktop_site['items']:
    dict_pagecounts_desktop_site[item['timestamp']] = item['count']
dict_pagecounts_mobile_site = {}
for item in pagecounts_mobile_site['items']:
    dict_pagecounts_mobile_site[item['timestamp']] = item['count']
dict_pageview_mobile_app = {}
for item in pageview_mobile_app['items']:
    dict_pageview_mobile_app[item['timestamp']] = item['views']
# combined mobile traffic = app + site (the two APIs return aligned monthly items)
dict_pageview_mobile = {}
for app_item, site_item in zip(pageview_mobile_app['items'], pageview_mobile_site['items']):
    dict_pageview_mobile[app_item['timestamp']] = app_item['views'] + site_item['views']
Explanation: Make a dictionary and add relevant info. timestamp = key, counts = value. Repeat for all
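A possible next step, sketched here only as a suggestion (the column names and output file name are illustrative, not part of the original cells), is to combine the dictionaries into one table and save it:
import pandas as pd
df = pd.DataFrame({'pageview_all_views': dict_pageview_all_site,
                   'pageview_desktop_views': dict_pageview_desktop_site,
                   'pageview_mobile_views': dict_pageview_mobile,
                   'pagecount_all_views': dict_pagecounts_all_sites,
                   'pagecount_desktop_views': dict_pagecounts_desktop_site,
                   'pagecount_mobile_views': dict_pagecounts_mobile_site})
df.index.name = 'timestamp'
df.to_csv('en-wikipedia_traffic.csv')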
End of explanation
<END_TASK> |
267 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<a href="https
Step1: Manipulating data without using TFDS
We first illustrate how to make streams of minibatches using vanilla numpy code. TFDS will then let us eliminate a lot of this boilerplate. As an example, let's package some small labeled datasets into two dictionaries, for train and test.
Step2: Now we make one pass (epoch) over the data, computing random minibatches of size 30. There are 100 examples total, but with a batch size of 30,
we don't use all the data. We can solve such "boundary effects" later.
Step3: Using TFDS
Step4: Using pre-packaged datasets
There are many standard datasets available from https
Step5: Streams and iterators
Step6: Worked example
For efficiently processing datastreams, see this webpage.
Step7: Data visualization
Step8: Graveyard
Here we store some code we don't need (for now). | <ASSISTANT_TASK:>
Python Code:
# Standard Python libraries
from __future__ import absolute_import, division, print_function, unicode_literals
from typing import Any, Iterator, Mapping, NamedTuple, Sequence, Tuple
import os
import time
import numpy as np
import glob
import matplotlib.pyplot as plt
import PIL
import imageio
from IPython import display
import sklearn
# TensorFlow ≥2.0 is required
import tensorflow as tf
from tensorflow import keras
assert tf.__version__ >= "2.0"
import tensorflow_datasets as tfds
print("tf version {}".format(tf.__version__))
import jax
from typing import Any, Callable, Sequence, Optional, Dict, Tuple
import jax.numpy as jnp
rng = jax.random.PRNGKey(0)
# Useful type aliases
Array = jnp.ndarray
PRNGKey = Array
Batch = Mapping[str, np.ndarray]
OptState = Any
Explanation: <a href="https://colab.research.google.com/github/probml/pyprobml/blob/master/book1/mlp/tfds_intro.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Introduction to tensorflow datasets
TFDS is a handy way to handle large datasets as a stream of minibatches, suitable for large scale training and parallel evaluation. It can be used by tensorflow and JAX code, as we illustrate below. (See the official colab for details.)
End of explanation
import sklearn
import sklearn.datasets
from sklearn.model_selection import train_test_split
def get_datasets_iris():
iris = sklearn.datasets.load_iris()
X = iris["data"]
y = iris["target"]
N, D = X.shape # 150, 4
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
train_ds = {"X": X_train, "y": y_train}
test_ds = {"X": X_test, "y": y_test}
return train_ds, test_ds
train_ds, test_ds = get_datasets_iris()
print(train_ds["X"].shape)
print(train_ds["y"].shape)
iris = sklearn.datasets.load_iris()
print(iris.feature_names)
print(iris.target_names)
Explanation: Manipulating data without using TFDS
We first illustrate how to make streams of minibatches using vanilla numpy code. TFDS will then let us eliminate a lot of this boilerplate. As an example, let's package some small labeled datasets into two dictionaries, for train and test.
End of explanation
def extract_batch(ds, ndx):
batch = {k: v[ndx, ...] for k, v in ds.items()}
# batch = {'X': ds['X'][ndx,:], 'y': ds['y'][ndx]}
return batch
def process_epoch(train_ds, batch_size, rng):
train_ds_size = len(train_ds["X"])
steps_per_epoch = train_ds_size // batch_size
perms = jax.random.permutation(rng, len(train_ds["X"]))
perms = perms[: steps_per_epoch * batch_size] # skip incomplete batch
perms = perms.reshape((steps_per_epoch, batch_size)) # perms[i,:] is list of data indices for step i
for step, perm in enumerate(perms):
batch = extract_batch(train_ds, perm)
print("processing batch {} X shape {}, y shape {}".format(step, batch["X"].shape, batch["y"].shape))
batch_size = 30
process_epoch(train_ds, batch_size, rng)
Explanation: Now we make one pass (epoch) over the data, computing random minibatches of size 30. There are 100 examples total, but with a batch size of 30,
we don't use all the data. We can solve such "boundary effects" later.
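One simple way to keep the final, smaller batch (shown only as a sketch; the rest of the notebook keeps the simpler version above) is to slice the permuted indices directly:
def process_epoch_full(train_ds, batch_size, rng):
    perm = np.asarray(jax.random.permutation(rng, len(train_ds["X"])))
    num_steps = int(np.ceil(len(perm) / batch_size))
    for step in range(num_steps):
        batch = extract_batch(train_ds, perm[step * batch_size:(step + 1) * batch_size])
        print("processing batch {} X shape {}, y shape {}".format(step, batch["X"].shape, batch["y"].shape))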
End of explanation
def load_dataset_iris(split: str, batch_size: int) -> Iterator[Batch]:
train_ds, test_ds = get_datasets_iris()
if split == tfds.Split.TRAIN:
ds = tf.data.Dataset.from_tensor_slices({"X": train_ds["X"], "y": train_ds["y"]})
elif split == tfds.Split.TEST:
ds = tf.data.Dataset.from_tensor_slices({"X": test_ds["X"], "y": test_ds["y"]})
ds = ds.shuffle(buffer_size=1 * batch_size)
ds = ds.batch(batch_size)
ds = ds.prefetch(buffer_size=5)
ds = ds.repeat() # make infinite stream of epochs
return iter(tfds.as_numpy(ds)) # python iterator
batch_size = 30
train_ds = load_dataset_iris(tfds.Split.TRAIN, batch_size)
valid_ds = load_dataset_iris(tfds.Split.TEST, batch_size)
print(train_ds)
training_steps = 5
for step in range(training_steps):
batch = next(train_ds)
print("processing batch {} X shape {}, y shape {}".format(step, batch["X"].shape, batch["y"].shape))
Explanation: Using TFDS
End of explanation
ds, info = tfds.load("binarized_mnist", split=tfds.Split.TRAIN, shuffle_files=True, with_info=True)
print(ds)
print(info)
train_ds, info = tfds.load("mnist", split=tfds.Split.TRAIN, shuffle_files=True, with_info=True)
print(train_ds)
print(info)
ds = tfds.load("mnist", split="train")
print(type(ds))
ds = ds.take(1) # Only take a single example
print(type(ds))
for example in ds: # example is `{'image': tf.Tensor, 'label': tf.Tensor}`
print(list(example.keys()))
image = example["image"]
label = example["label"]
print(image.shape, label)
def rename(batch):
d = {"inputs": batch["image"], "outputs": batch["label"]}
return d
ds = tfds.load("mnist", split="train")
ds = ds.map(rename)
i = 0
for d in ds:
print(d["inputs"].shape)
i += 1
if i > 2:
break
Explanation: Using pre-packaged datasets
There are many standard datasets available from https://www.tensorflow.org/datasets. We give some examples below.
End of explanation
ds = tfds.as_numpy(train_ds)
print(ds)
for i, batch in enumerate(ds):
print(type(batch))
X = batch["image"]
y = batch["label"]
print(X.shape)
print(y.shape)
i += 1
if i > 2:
break
ds = tfds.load("mnist", split="train")
ds = ds.take(100)
# ds = tfds.as_numpy(ds)
batches = ds.repeat(2).batch(batch_size)
print(type(batches))
print(batches)
batch_stream = batches.as_numpy_iterator()
print(type(batch_stream))
print(batch_stream)
b = next(batch_stream)
print(type(b))
print(b["image"].shape)
b = batch_stream.next()
print(type(b))
print(b["image"].shape)
ds = tfds.load("mnist", split="train")
batches = ds.repeat().batch(batch_size)
batch_stream = batches.as_numpy_iterator()
def process_stream(stream):
    # Rename the fields of every incoming batch to the canonical 'X'/'y' names.
    for b in stream:
        X = b["image"]
        y = b["label"]
        d = {"X": X, "y": y}
        yield d
my_stream = process_stream(batch_stream)
b = next(my_stream)
print(type(b))
print(b["X"].shape)
b = next(my_stream)
print(type(b))
print(b["X"].shape)
Explanation: Streams and iterators
End of explanation
def load_dataset_mnist(split: tfds.Split, batch_size: int) -> Iterator[Batch]:
ds, ds_info = tfds.load("mnist", split=split, with_info=True)
# For true randomness, we set the shuffle buffer to the full dataset size.
ds = ds.shuffle(ds_info.splits[split].num_examples)
# ds = ds.shuffle(buffer_size=10 * batch_size)
ds = ds.batch(batch_size)
ds = ds.prefetch(tf.data.experimental.AUTOTUNE)
ds = ds.repeat()
return iter(tfds.as_numpy(ds))
def preprocess_batch(batch: Batch, prng_key=None) -> Batch:
# Convert to X,y field names, optionally dequantize X, and convert to float
X = batch["image"].astype(np.float32)
y = batch["label"]
if prng_key is not None:
# Dequantize pixel values {0, 1, ..., 255} with uniform noise [0, 1).
X += jax.random.uniform(prng_key, X.shape)
X = X / 256.0 # Normalize pixel values from [0, 256) to [0, 1)
d = {"X": X, "y": y}
return d
batch_size = 30
train_ds = load_dataset_mnist(tfds.Split.TRAIN, batch_size)
print(type(train_ds))
training_steps = 5
for step in range(training_steps):
batch = next(train_ds)
batch = preprocess_batch(batch, rng)
X = batch["X"]
y = batch["y"]
print("processing batch {} X shape {}, y shape {}".format(step, X.shape, y.shape))
Explanation: Worked example
For efficiently processing datastreams, see this webpage.
End of explanation
import pandas as pd
pd.set_option("precision", 2) # 2 decimal places
pd.set_option("display.max_rows", 20)
pd.set_option("display.max_columns", 30)
pd.set_option("display.width", 100) # wide windows
# ds, info = tfds.load('mnist', split='train', with_info=True)
ds, info = tfds.load("iris", split="train", with_info=True)
print(info)
df = tfds.as_dataframe(ds.take(4), info)
print(type(df))
print(df)
df.head()
ds, info = tfds.load("mnist", split="train", with_info=True)
fig = tfds.show_examples(ds, info, rows=2, cols=5)
# This function is not well documented. But source code for show_examples is here:
# https://github.com/tensorflow/datasets/blob/v4.2.0/tensorflow_datasets/core/visualization/image_visualizer.py
ds, info = tfds.load("cifar10", split="train", with_info=True)
fig = tfds.show_examples(ds, info, rows=2, cols=5)
import tensorflow_data_validation
tfds.show_statistics(info)
Explanation: Data visualization
End of explanation
def get_datasets_mnist():
    ds_builder = tfds.builder("mnist")
    ds_builder.download_and_prepare()
    train_ds_all = tfds.as_numpy(ds_builder.as_dataset(split="train", batch_size=-1))
    test_ds_all = tfds.as_numpy(ds_builder.as_dataset(split="test", batch_size=-1))
    train_ds, test_ds = {}, {}
    num_train = len(train_ds_all["image"])
    train_ds["X"] = jnp.reshape(jnp.float32(train_ds_all["image"]) / 255.0, (num_train, -1))
    train_ds["y"] = train_ds_all["label"]
    num_test = len(test_ds_all["image"])
    test_ds["X"] = jnp.reshape(jnp.float32(test_ds_all["image"]) / 255.0, (num_test, -1))
    test_ds["y"] = test_ds_all["label"]
    return train_ds, test_ds
dataset = load_dataset_iris(tfds.Split.TRAIN, 30)
batches = dataset.repeat().batch(batch_size)
step = 0
num_minibatches = 5
for batch in batches:
if step >= num_minibatches:
break
X, y = batch["image"], batch["label"]
print("processing batch {} X shape {}, y shape {}".format(step, X.shape, y.shape))
step = step + 1
print("batchified version v2")
batch_stream = batches.as_numpy_iterator()
for step in range(num_minibatches):
batch = batch_stream.next()
X, y = batch["image"], batch["label"] # convert to canonical names
print("processing batch {} X shape {}, y shape {}".format(step, X.shape, y.shape))
step = step + 1
def sample_categorical(N, C):
p = (1 / C) * np.ones(C)
y = np.random.choice(C, size=N, p=p)
return y
def get_datasets_rnd():
Ntrain = 1000
Ntest = 1000
D = 5
C = 10
train_ds = {"X": np.random.randn(Ntrain, D), "y": sample_categorical(Ntrain, C)}
test_ds = {"X": np.random.randn(Ntest, D), "y": sample_categorical(Ntest, C)}
return train_ds, test_ds
def get_datasets_logreg(key):
Ntrain = 1000
Ntest = 1000
D = 5
C = 10
W = jax.random.normal(key, (D, C))
Xtrain = jax.random.normal(key, (Ntrain, D))
logits = jnp.dot(Xtrain, W)
ytrain = jax.random.categorical(key, logits)
Xtest = jax.random.normal(key, (Ntest, D))
logits = jnp.dot(Xtest, W)
ytest = jax.random.categorical(key, logits)
train_ds = {"X": Xtrain, "y": ytrain}
test_ds = {"X": Xtest, "y": ytest}
return train_ds, test_ds
Explanation: Graveyard
Here we store some code we don't need (for now).
End of explanation
<END_TASK> |
268 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Day 6 - pre-class assignment
Goals for today's pre-class assignment
Use the NumPy module to manipulate arrays of data
Write a program using if statements and Boolean logic
Create loops that use the for and while statements
Assignment instructions
Watch the videos below, read through the materials linked to below the videos as well as Section 4.1 through 4.3 of the Python Tutorial, and complete the assigned programming problems. Please get started early, and come to office hours if you have any questions!
Recall that to make notebook cells that have Python code in them do something, hold down the 'shift' key and then press the 'enter' key (you'll have to do this to get the YouTube videos to run). To edit a cell (to add answers, for example) you double-click on the cell, add your text, and then enter it by holding down 'shift' and pressing 'enter'
This assignment is due by 11
Step1: Some useful numpy references
The NumPy Quick Start Guide
An introduction to numpy
Question 1
Step2: Question 2
Step3: Potentially useful links
WikiBooks Python Programming - conditional statements
Learn Python the Hard Way - Boolean Logic
How To Think Like a Computer Scientist
Step4: Potentially useful links
WikiBooks Python Programming - loops
Hands-on Python - loops and sequences
Numpy - nditer reference
Question 4
Step6: Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment! | <ASSISTANT_TASK:>
Python Code:
# Imports the functionality that we need to display YouTube videos in
# a Jupyter Notebook.
# You need to run this cell before you run ANY of the YouTube videos.
from IPython.display import YouTubeVideo
# Don't forget to watch the video in full-screen mode!
YouTubeVideo("BTXyE3KLIOs",width=640,height=360) # numpy
Explanation: Day 6 - pre-class assignment
Goals for today's pre-class assignment
Use the NumPy module to manipulate arrays of data
Write a program using if statements and Boolean logic
Create loops that use the for and while statements
Assignment instructions
Watch the videos below, read through the materials linked to below the videos as well as Section 4.1 through 4.3 of the Python Tutorial, and complete the assigned programming problems. Please get started early, and come to office hours if you have any questions!
Recall that to make notebook cells that have Python code in them do something, hold down the 'shift' key and then press the 'enter' key (you'll have to do this to get the YouTube videos to run). To edit a cell (to add answers, for example) you double-click on the cell, add your text, and then enter it by holding down 'shift' and pressing 'enter'
This assignment is due by 11:59 p.m. the day before class, and should be uploaded into the "Pre-class assignments" dropbox folder for Day 6. Submission instructions can be found at the end of the notebook.
End of explanation
import numpy as np
# put your code here!
Explanation: Some useful numpy references
The NumPy Quick Start Guide
An introduction to numpy
Question 1: In the cell below, import the numpy module and then create two arrays with the same number of elements in each one (pick the number of elements and the value of each yourself, making sure they're numbers and not strings!). Then, add those arrays together and store it in a third array, and print it out. Sort the values of the third array, and print that out again.
End of explanation
%matplotlib inline
import matplotlib.pyplot as plt
# put your code here!
# Don't forget to watch the video in full-screen mode!
YouTubeVideo("cozbOliNwSs",width=640,height=360) # Boolean logic and if statements
Explanation: Question 2: Now let's use numpy and pyplot together (you learned about pyplot in a prior class). We're going to use numpy's linspace method (which you can learn about by typing this:
np.linspace?
in the cell below. Use linspace to create an array called x with 10 values ranging from x=0 to $4\pi$, and then a second array called y that is the sine of x. Then, plot x vs. y with a red line!
Hint 1: use np.sin(), which you can find more about by typing np.sin? in the cell below
Hint 2: look at the pre-class notebook from previous classes - what did you do to make plots there? Alternately, check out the Pyplot tutorial.
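One possible solution, for reference only (try it yourself before peeking):
x = np.linspace(0, 4*np.pi, 10)
y = np.sin(x)
plt.plot(x, y, 'r')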
End of explanation
# write your program here, using multiple cells if necessary (adding extra cells using
# the 'Cell' menu at the top of this notebook). Don't forget that you can execute
# your program by holding down 'shift' and pressing 'enter' in each cell!
# Don't forget to watch the video in full-screen mode!
YouTubeVideo("VnTN5sFIPD0",width=640,height=360) # loops in python
Explanation: Potentially useful links
WikiBooks Python Programming - conditional statements
Learn Python the Hard Way - Boolean Logic
How To Think Like a Computer Scientist: Conditionals
Python logical operators
Python comparison operators
Question 3: Write a program that compares three variables v1, v2, and v3 (not necessarily the variable names) and does the following using if statements:
If v1 is equal to v2, print "woohoo!"
If v1 has the smallest value of the three (i.e., v1 is less than both v2 and v3), print out "v1 is tiny!"
As with (2), but add a nested if statement to print out a statement saying whether v2 or v3 is the largest value.
Try several values of v1, v2 and v3 and ensure that you see the correct behavior.
End of explanation
# Program 1 here
# Program 2 here
# Program 3 here
Explanation: Potentially useful links
WikiBooks Python Programming - loops
Hands-on Python - loops and sequences
Numpy - nditer reference
Question 4: Write two small programs that do the following:
Using range() and a for loop, print out the numbers from 4 to 16 by steps of two (i.e., 4, 6, ... 16), adding up all of the numbers you have printed out.
Write a while loop that does exactly the same thing.
Create two numpy arrays with 10 elements, one using np.arange() and one using np.ones(), and use np.nditer() to iterate over the two arrays and print out their values.
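For part 3, np.nditer can walk over both arrays at once; a minimal sketch for reference:
a = np.arange(10)
b = np.ones(10)
for x, y in np.nditer([a, b]):
    print(x, y)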
End of explanation
from IPython.display import HTML
HTML(
<iframe
src="https://goo.gl/forms/F1MvFMDpIWPScchr2?embedded=true"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
)
Explanation: Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment!
End of explanation
<END_TASK> |
269 |
<SYSTEM_TASK:>
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
<END_TASK>
<USER_TASK:>
Problem:
LightGBM Custom Loss Function
| <ASSISTANT_TASK:>
Python Code::
import lightgbm as lgb
def custom_loss(y_pred, data):
y_true = data.get_label()
error = y_pred-y_true
#1st derivative of loss function
grad = 2 * error
#2nd derivative of loss function
hess = 0 * error + 2
return grad, hess
params = {"learning_rate" : 0.1}
training_data = lgb.Dataset(X_train , label = y_train)
model = lgb.train(train_set=training_data,
params=params,
fobj=custom_loss)
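# Optional sanity check (illustrative only; assumes a held-out X_test shaped like X_train):
preds = model.predict(X_test)
print(preds[:5])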
<END_TASK>
|
270 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
What is a dataset?
A dataset is a collection of information (or data) that can be used by a computer. A dataset typically has some number of examples, where each example has features associated with it. Some datasets also include labels, which is an identifying piece of information that is of interest.
What is an example?
An example is a single element of a dataset, typically a row (similar to a row in a table). Multiple examples are used to generalize trends about the dataset as a whole. When predicting the list price of a house, each house would be considered a single example.
Examples are often referred to with the letter $x$.
What is a feature?
A feature is a measurable characteristic that describes an example in a dataset. Features make up the information that a computer can use to learn and make predictions. If your examples are houses, your features might be
Step1: Import the dataset
Import the dataset and store it to a variable called iris. This dataset is similar to a python dictionary, with the keys
Step2: Visualizing the data
Visualizing the data can help us better understand the data and make use of it. The following block of code will create a plot of sepal length (x-axis) vs sepal width (y-axis). The colors of the datapoints correspond to the labeled species of iris for that example.
After plotting, look at the data. What do you notice about the way it is arranged?
Step3: Make your own plot
Below, try making your own plots. First, modify the previous code to create a similar plot, showing the petal width vs the petal length. You can start by copying and pasting the previous block of code to the cell below, and modifying it to work.
How is the data arranged differently? Do you think these additional features would be helpful in determining to which species of iris a new plant should be categorized?
What about plotting other feature combinations, like petal length vs sepal length?
Once you've plotted the data several different ways, think about how you would predict the species of a new iris plant, given only the length and width of its sepals and petals.
Step4: Training and Testing Sets
In order to evaluate our data properly, we need to divide our dataset into training and testing sets.
Training Set
A portion of the data, usually a majority, used to train a machine learning classifier. These are the examples that the computer will learn in order to try to predict data labels.
Testing Set
A portion of the data, smaller than the training set (usually about 30%), used to test the accuracy of the machine learning classifier. The computer does not "see" this data while learning, but tries to guess the data labels. We can then determine the accuracy of our method by determining how many examples it got correct.
Creating training and testing sets
Below, we create a training and testing set from the iris dataset using using the train_test_split() function. | <ASSISTANT_TASK:>
Python Code:
# Print figures in the notebook
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import datasets # Import datasets from scikit-learn
# Import patch for drawing rectangles in the legend
from matplotlib.patches import Rectangle
# Create color maps
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])
# Create a legend for the colors, using rectangles for the corresponding colormap colors
labelList = []
for color in cmap_bold.colors:
labelList.append(Rectangle((0, 0), 1, 1, fc=color))
Explanation: What is a dataset?
A dataset is a collection of information (or data) that can be used by a computer. A dataset typically has some number of examples, where each example has features associated with it. Some datasets also include labels, which is an identifying piece of information that is of interest.
What is an example?
An example is a single element of a dataset, typically a row (similar to a row in a table). Multiple examples are used to generalize trends about the dataset as a whole. When predicting the list price of a house, each house would be considered a single example.
Examples are often referred to with the letter $x$.
What is a feature?
A feature is a measurable characteristic that describes an example in a dataset. Features make up the information that a computer can use to learn and make predictions. If your examples are houses, your features might be: the square footage, the number of bedrooms, or the number of bathrooms. Some features are more useful than others. When predicting the list price of a house the number of bedrooms is a useful feature while the number of floorboards is not, even though they both describe the house.
Features are sometimes specified as a single element of an example, $x_i$
What is a label?
A label identifies a piece of information about an example that is of particular interest. In machine learning, the label is the information we want the computer to learn to predict. In our housing example, the label would be the list price of the house.
Labels can be continuous (e.g. price, length, width) or they can be a category label (e.g. color). They are typically specified by the letter $y$.
The Iris Dataset
Here, we use the Iris dataset, available through scikit-learn. Scikit-learn's explanation of the dataset is here.
This dataset contains information on three species of iris flowers (Setosa, Versicolour, and Virginica).
|<img src="Images/Setosa.jpg" width=200>|<img src="Images/Versicolor.jpg" width=200>|<img src="Images/Virginica.jpg" width=200>|
|:-------------------------------------:|:-----------------------------------------:|:----------------------------------------:|
| Iris Setosa source | Iris Versicolour source | Iris Virginica source |
Each example has four features (or measurements): sepal length, sepal width, petal length, and petal width. All measurements are in cm.
|<img src="Images/Petal-sepal.jpg" width=200>|
|:------------------------------------------:|
|Petal and sepal of a primrose plant. From wikipedia|
Examples
The datasets consists of 150 examples, 50 examples from each species of iris.
Features
The features are the columns of the dataset. In order from left to right (or 0-3) they are: sepal length, sepal width, petal length, and petal width
Our goal
The goal, for this dataset, is to train a computer to predict the species of a new iris plant, given only the measured length and width of its sepal and petal.
Setup
Tell matplotlib to print figures in the notebook. Then import numpy (for numerical data), pyplot (for plotting figures), and ListedColormap (for plotting colors), datasets.
Also create the color maps to use to color the plotted data, and "labelList", which is a list of colored rectangles to use in plotted legends
End of explanation
# Import some data to play with
iris = datasets.load_iris()
# List the data keys
print('Keys: ' + str(iris.keys()))
print('Label names: ' + str(iris.target_names))
print('Feature names: ' + str(iris.feature_names))
print('')
# Store the labels (y), label names, features (X), and feature names
y = iris.target # Labels are stored in y as numbers
labelNames = iris.target_names # Species names corresponding to labels 0, 1, and 2
X = iris.data
featureNames = iris.feature_names
# Show the first five examples
print(iris.data[1:5,:])
Explanation: Import the dataset
Import the dataset and store it to a variable called iris. This dataset is similar to a python dictionary, with the keys: ['DESCR', 'target_names', 'target', 'data', 'feature_names']
The data features are stored in iris.data, where each row is an example from a single flow, and each column is a single feature. The feature names are stored in iris.feature_names. Labels are stored as the numbers 0, 1, or 2 in iris.target, and the names of these labels are in iris.target_names.
End of explanation
# Plot the data
# Sepal length and width
X_sepal = X[:,:2]
# Get the minimum and maximum values with an additional 0.5 border
x_min, x_max = X_sepal[:, 0].min() - .5, X_sepal[:, 0].max() + .5
y_min, y_max = X_sepal[:, 1].min() - .5, X_sepal[:, 1].max() + .5
plt.figure(figsize=(8, 6))
# Plot the training points
plt.scatter(X_sepal[:, 0], X_sepal[:, 1], c=y, cmap=cmap_bold)
plt.xlabel('Sepal length (cm)')
plt.ylabel('Sepal width (cm)')
plt.title('Sepal width vs length')
# Set the plot limits
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.legend(labelList, labelNames)
plt.show()
Explanation: Visualizing the data
Visualizing the data can help us better understand the data and make use of it. The following block of code will create a plot of sepal length (x-axis) vs sepal width (y-axis). The colors of the datapoints correspond to the labeled species of iris for that example.
After plotting, look at the data. What do you notice about the way it is arranged?
End of explanation
# Put your code here!
# Plot the data
# Petal length and width
X_petal = X[:,2:]
# Get the minimum and maximum values with an additional 0.5 border
x_min, x_max = X_petal[:, 0].min() - .5, X_petal[:, 0].max() + .5
y_min, y_max = X_petal[:, 1].min() - .5, X_petal[:, 1].max() + .5
plt.figure(figsize=(8, 6))
# Plot the training points
plt.scatter(X_petal[:, 0], X_petal[:, 1], c=y, cmap=cmap_bold)
plt.xlabel('Petal length (cm)')
plt.ylabel('Petal width (cm)')
plt.title('Petal width vs length')
# Set the plot limits
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.legend(labelList, labelNames)
plt.show()
Explanation: Make your own plot
Below, try making your own plots. First, modify the previous code to create a similar plot, showing the petal width vs the petal length. You can start by copying and pasting the previous block of code to the cell below, and modifying it to work.
How is the data arranged differently? Do you think these additional features would be helpful in determining to which species of iris a new plant should be categorized?
What about plotting other feature combinations, like petal length vs sepal length?
Once you've plotted the data several different ways, think about how you would predict the species of a new iris plant, given only the length and width of its sepals and petals.
End of explanation
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
print('Original dataset size: ' + str(X.shape))
print('Training dataset size: ' + str(X_train.shape))
print('Test dataset size: ' + str(X_test.shape))
Explanation: Training and Testing Sets
In order to evaluate our data properly, we need to divide our dataset into training and testing sets.
Training Set
A portion of the data, usually a majority, used to train a machine learning classifier. These are the examples that the computer will learn in order to try to predict data labels.
Testing Set
A portion of the data, smaller than the training set (usually about 30%), used to test the accuracy of the machine learning classifier. The computer does not "see" this data while learning, but tries to guess the data labels. We can then determine the accuracy of our method by determining how many examples it got correct.
Creating training and testing sets
Below, we create a training and testing set from the iris dataset using the train_test_split() function.
End of explanation
<END_TASK> |
271 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
PMOD Grove PIR Motion Sensor
This example shows how to use the Grove PIR motion sensor.
This demo requires the Grove PIR motion sensor and PYNQ Grove Adapter.
1. Download overlay
Download the overlay. Instantiate Grove PIR object.
In this example, the Grove PIR sensor is assumed to be connected to G1 on the PYNQ Grove Adapter. The Grove Adapter is connected to PMODA interface on board.
Step1: 2. Detect motion
We can now start the motion detection loop.
When the cell below is executed, the LED will keep flashing until the PIR detects some movement.
Step2: 3. Clean-up
Delete objects. | <ASSISTANT_TASK:>
Python Code:
from time import sleep
from pynq import Overlay
from pynq.board import LED
from pynq.iop import Grove_PIR
from pynq.iop import PMODA
from pynq.iop import PMOD_GROVE_G1
ol1 = Overlay("base.bit")
ol1.download()
pir = Grove_PIR(PMODA,PMOD_GROVE_G1)
Explanation: PMOD Grove PIR Motion Sensor
This example shows how to use the Grove PIR motion sensor.
This demo requires the Grove PIR motion sensor and PYNQ Grove Adapter.
1. Download overlay
Download the overlay. Instantiate Grove PIR object.
In this example, the Grove PIR sensor is assumed to be connected to G1 on the PYNQ Grove Adapter. The Grove Adapter is connected to PMODA interface on board.
End of explanation
led = LED(0)
led.on()
if pir.read()==0:
print("Starting detection...")
while True:
led.on()
sleep(0.1)
led.off()
sleep(0.1)
if pir.read()==1:
print("Detected a motion.")
break
print("Ending detection...")
Explanation: 2. Detect motion
We can now start the motion detection loop.
When the cell below is executed, the LED will keep flashing until the PIR detects some movement.
End of explanation
del pir
del led
del ol1
Explanation: 3. Clean-up
Delete objects.
End of explanation
<END_TASK> |
272 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
True Values
The "true" values can be computed analytically in this case, so we did so.
We can also compute the distribution for weighting the errors.
Step1: Comparing the Errors
For each algorithm, we get the associated experiment, and calculate the errors at each timestep, averaged over the runs performed with that algorithm. | <ASSISTANT_TASK:>
Python Code:
num_states = 8
gamma = 0.9
true_values = gamma**np.arange(num_states)[::-1]
d_pi = np.ones(num_states)/num_states
D_pi = np.diag(d_pi)
print("True values:")
print(true_values)
print("On-policy distribution:")
print(d_pi)
def compute_value_dct(theta_lst, features):
return [{s: np.dot(theta, x) for s, x in features.items()} for theta in theta_lst]
def compute_values(theta_lst, X):
return [np.dot(X, theta) for theta in theta_lst]
def compute_errors(value_lst, error_func):
return [error_func(v) for v in value_lst]
def rmse_factory(true_values, d=None):
true_values = np.ravel(true_values)
# sensible default for weighting distribution
if d is None:
d = np.ones_like(true_values)
else:
d = np.ravel(d)
assert(len(d) == len(true_values))
# the actual root-mean square error
def func(v):
diff = true_values - v
return np.sqrt(np.mean(d*diff**2))
return func
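# A quick, illustrative use of the factory defined above: build an error function
# weighted by the on-policy distribution and evaluate an all-zero value estimate.
example_error = rmse_factory(true_values, d=d_pi)
print(example_error(np.zeros(num_states)))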
Explanation: True Values
The "true" values can be computed analytically in this case, so we did so.
We can also compute the distribution for weighting the errors.
End of explanation
# define the experiment
num_states = 8
num_features = 6
num_active = 3
num_runs = 50
max_steps = 5000
# set up environment
env = chicken.Chicken(num_states)
# Define the target policy
pol_pi = policy.FixedPolicy({s: {0: 1} for s in env.states})
# Define the behavior policy
pol_mu = policy.FixedPolicy({s: {0: 1} if s < 4 else {0: 0.5, 1: 0.5} for s in env.states})
# state-dependent gamma
gm_dct = {s: 0.9 for s in env.states}
gm_dct[0] = 0
gm_func = parametric.MapState(gm_dct)
gm_p_func = parametric.MapNextState(gm_dct)
# set up algorithm parameters
update_params = {
'alpha': 0.03,
'beta': 0.002,
'gm': gm_func,
'gm_p': gm_p_func,
'lm': 0.0,
'lm_p': 0.0,
'interest': 1.0,
}
# Run all available algorithms
data = dict()
for name, alg in algos.algo_registry.items():
print(name)
run_lst = []
for i in range(num_runs):
print("Run: %d"%i, end="\r")
episode_data = dict()
# Want to use random features
phi = features.RandomBinary(num_features, num_active)
episode_data['features'] = {s: phi(s) for s in env.states}
# Set up the agent
_update_params = update_params.copy()
if name == 'ETD':
_update_params['alpha'] = 0.002
agent = OffPolicyAgent(alg(phi.length), pol_pi, pol_mu, phi, _update_params)
# Run the experiment
episode_data['steps'] = run_contextual(agent, env, max_steps)
run_lst.append(episode_data)
data[name] = run_lst
baseline = rmse_factory(np.zeros(num_states))
# True values & associated stationary distribution
theta_ls = np.array([ 0.4782969, 0.531441 , 0.59049, 0.6561, 0.729, 0.81, 0.9, 1.])
d_pi = np.ones(num_states)/num_states
D_pi = np.diag(d_pi)
# define the error/objective function
err_func = rmse_factory(theta_ls, d=d_pi)
baseline = err_func(np.zeros(num_states))
for name, experiment in data.items():
print(name)
errors = []
for episode in experiment:
        feats = episode['features']
        X = np.array([feats[k] for k in sorted(feats.keys())])
        steps = episode['steps']
thetas = list(pluck('theta', steps))
# compute the values at each step
val_lst = compute_values(thetas, X)
# compute the errors at each step
err_lst = compute_errors(val_lst, err_func)
errors.append(err_lst)
# calculate the average error
clipped_errs = np.clip(errors, 0, 100)
avg_err = np.mean(clipped_errs, axis=0)
# plot the errors
fig, ax = plt.subplots()
ax.plot(avg_err)
# format the graph
ax.set_ylim(1e-2, 2)
ax.axhline(baseline, c='red')
ax.set_yscale('log')
plt.show()
Explanation: Comparing the Errors
For each algorithm, we get the associated experiment, and calculate the errors at each timestep, averaged over the runs performed with that algorithm.
End of explanation
<END_TASK> |
273 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Constraint Satisfaction Problems (CSPs)
This IPy notebook acts as supporting material for topics covered in Chapter 6 Constraint Satisfaction Problems of the book Artificial Intelligence
Step1: Review
CSPs are a special kind of search problems. Here we don't treat the space as a black box but the state has a particular form and we use that to our advantage to tweak our algorithms to be more suited to the problems. A CSP State is defined by a set of variables which can take values from corresponding domains. These variables can take only certain values in their domains to satisfy the constraints. A set of assignments which satisfies all constraints passes the goal test. Let us start by exploring the CSP class which we will use to model our CSPs. You can keep the popup open and read the main page to get a better idea of the code.
Step2: The _ init _ method parameters specify the CSP. Variable can be passed as a list of strings or integers. Domains are passed as dict where key specify the variables and value specify the domains. The variables are passed as an empty list. Variables are extracted from the keys of the domain dictionary. Neighbor is a dict of variables that essentially describes the constraint graph. Here each variable key has a list its value which are the variables that are constraint along with it. The constraint parameter should be a function f(A, a, B, b) that returns true if neighbors A, B satisfy the constraint when they have values A=a, B=b. We have additional parameters like nassings which is incremented each time an assignment is made when calling the assign method. You can read more about the methods and parameters in the class doc string. We will talk more about them as we encounter their use. Let us jump to an example.
Graph Coloring
We use the graph coloring problem as our running example for demonstrating the different algorithms in the csp module. The idea of map coloring problem is that the adjacent nodes (those connected by edges) should not have the same color throughout the graph. The graph can be colored using a fixed number of colors. Here each node is a variable and the values are the colors that can be assigned to them. Given that the domain will be the same for all our nodes we use a custom dict defined by the UniversalDict class. The UniversalDict Class takes in a parameter which it returns as value for all the keys of the dict. It is very similar to defaultdict in Python except that it does not support item assignment.
Step3: For our CSP we also need to define a constraint function f(A, a, B, b). In this what we need is that the neighbors must not have the same color. This is defined in the function different_values_constraint of the module.
Step4: The CSP class takes neighbors in the form of a Dict. The module specifies a simple helper function named parse_neighbors which allows to take input in the form of strings and return a Dict of the form compatible with the CSP Class.
Step5: The MapColoringCSP function creates and returns a CSP with the above constraint function and states. The variables our the keys of the neighbors dict and the constraint is the one specified by the different_values_constratint function. australia, usa and france are three CSPs that have been created using MapColoringCSP. australia corresponds to Figure 6.1 in the book.
Step6: NQueens
The N-queens puzzle is the problem of placing N chess queens on a N×N chessboard so that no two queens threaten each other. Here N is a natural number. Like the graph coloring, problem NQueens is also implemented in the csp module. The NQueensCSP class inherits from the CSP class. It makes some modifications in the methods to suit the particular problem. The queens are assumed to be placed one per column, from left to right. That means position (x, y) represents (var, val) in the CSP. The constraint that needs to be passed on the CSP is defined in the queen_constraint function. The constraint is satisfied (true) if A, B are really the same variable, or if they are not in the same row, down diagonal, or up diagonal.
Step7: The NQueensCSP method implements methods that support solving the problem via min_conflicts which is one of the techniques for solving CSPs. Because min_conflicts hill climbs the number of conflicts to solve the CSP assign and unassign are modified to record conflicts. More details about the structures rows, downs, ups which help in recording conflicts are explained in the docstring.
Step8: The _ init _ method takes only one parameter n the size of the problem. To create an instance we just pass the required n into the constructor.
Step9: Helper Functions
We will now implement few helper functions that will help us visualize the Coloring Problem. We will make some modifications to the existing Classes and Functions for additional book keeping. To begin with we modify the assign and unassign methods in the CSP to add a copy of the assignment to the assingment_history. We call this new class InstruCSP. This would allow us to see how the assignment evolves over time.
Step10: Next, we define make_instru which takes an instance of CSP and returns a InstruCSP instance.
Step11: We will now use a graph defined as a dictonary for plotting purposes in our Graph Coloring Problem. The keys are the nodes and their corresponding values are the nodes are they are connected to.
Step12: Now we are ready to create an InstruCSP instance for our problem. We are doing this for an instance of MapColoringProblem class which inherits from the CSP Class. This means that our make_instru function will work perfectly for it.
Step13: Backtracking Search
For solving a CSP the main issue with Naive search algorithms is that they can continue expanding obviously wrong paths. In backtracking search, we check constraints as we go. Backtracking is just the above idea combined with the fact that we are dealing with one variable at a time. Backtracking Search is implemented in the repository as the function backtracking_search. This is the same as Figure 6.5 in the book. The function takes as input a CSP and few other optional parameters which can be used to further speed it up. The function returns the correct assignment if it satisfies the goal. We will discuss these later. Let us solve our coloring_problem1 with backtracking_search.
Step14: Let us also check the number of assingments made.
Step15: Now let us check the total number of assingments and unassingments which is the lentgh ofour assingment history.
Step16: Now let us explore the optional keyword arguments that the backtracking_search function takes. These optional arguments help speed up the assignment further. Along with these, we will also point out to methods in the CSP class that help make this work.
The first of these is select_unassigned_variable. It takes in a function that helps in deciding the order in which variables will be selected for assignment. We use a heuristic called Most Restricted Variable which is implemented by the function mrv. The idea behind mrv is to choose the variable with the fewest legal values left in its domain. The intuition behind selecting the mrv or the most constrained variable is that it allows us to encounter failure quickly before going too deep into a tree if we have selected a wrong step before. The mrv implementation makes use of another function num_legal_values to sort out the variables by a number of legal values left in its domain. This function, in turn, calls the nconflicts method of the CSP to return such values.
Step17: Another ordering related parameter order_domain_values governs the value ordering. Here we select the Least Constraining Value which is implemented by the function lcv. The idea is to select the value which rules out the fewest values in the remaining variables. The intuition behind selecting the lcv is that it leaves a lot of freedom to assign values later. The idea behind selecting the mrc and lcv makes sense because we need to do all variables but for values, we might better try the ones that are likely. So for vars, we face the hard ones first.
Step18: Finally, the third parameter inference can make use of one of the two techniques called Arc Consistency or Forward Checking. The details of these methods can be found in the Section 6.3.2 of the book. In short the idea of inference is to detect the possible failure before it occurs and to look ahead to not make mistakes. mac and forward_checking implement these two techniques. The CSP methods support_pruning, suppose, prune, choices, infer_assignment and restore help in using these techniques. You can know more about these by looking up the source code.
Now let us compare the performance with these parameters enabled vs the default parameters. We will use the Graph Coloring problem instance usa for comparison. We will call the instances solve_simple and solve_parameters and solve them using backtracking and compare the number of assignments.
Step19: Graph Coloring Visualization
Next, we define some functions to create the visualisation from the assingment_history of coloring_problem1. The reader need not concern himself with the code that immediately follows as it is the usage of Matplotib with IPython Widgets. If you are interested in reading more about these visit ipywidgets.readthedocs.io. We will be using the networkx library to generate graphs. These graphs can be treated as the graph that needs to be colored or as a constraint graph for this problem. If interested you can read a dead simple tutorial here. We start by importing the necessary libraries and initializing matplotlib inline.
Step20: The ipython widgets we will be using require the plots in the form of a step function such that there is a graph corresponding to each value. We define the make_update_step_function which return such a function. It takes in as inputs the neighbors/graph along with an instance of the InstruCSP. This will be more clear with the example below. If this sounds confusing do not worry this is not the part of the core material and our only goal is to help you visualize how the process works.
Step21: Finally let us plot our problem. We first use the function above to obtain a step function.
Step22: Next we set the canvas size.
Step23: Finally our plot using ipywidget slider and matplotib. You can move the slider to experiment and see the coloring change. It is also possible to move the slider using arrow keys or to jump to the value by directly editing the number with a double click. The Visualize Button will automatically animate the slider for you. The Extra Delay Box allows you to set time delay in seconds upto one second for each time step.
Step24: NQueens Visualization
Just like the Graph Coloring Problem we will start with defining a few helper functions to help us visualize the assignments as they evolve over time. The make_plot_board_step_function behaves similar to the make_update_step_function introduced earlier. It initializes a chess board in the form of a 2D grid with alternating 0s and 1s. This is used by plot_board_step function which draws the board using matplotlib and adds queens to it. This function also calls the label_queen_conflicts which modifies the grid placing 3 in positions in a position where there is a conflict.
Step25: Now let us visualize a solution obtained via backtracking. We use of the previosuly defined make_instru function for keeping a history of steps.
Step26: Now finally we set some matplotlib parameters to adjust how our plot will look. The font is necessary because the Black Queen Unicode character is not a part of all fonts. You can move the slider to experiment and observe the how queens are assigned. It is also possible to move the slider using arrow keys or to jump to the value by directly editing the number with a double click.The Visualize Button will automatically animate the slider for you. The Extra Delay Box allows you to set time delay in seconds upto one second for each time step.
Step27: Now let us finally repeat the above steps for min_conflicts solution.
Step28: The visualization has same features as the above. But here it also highlights the conflicts by labeling the conflicted queens with a red background. | <ASSISTANT_TASK:>
Python Code:
from csp import *
Explanation: Constraint Satisfaction Problems (CSPs)
This IPy notebook acts as supporting material for topics covered in Chapter 6 Constraint Satisfaction Problems of the book Artificial Intelligence: A Modern Approach. We make use of the implementations in csp.py module. Even though this notebook includes a brief summary of the main topics familiarity with the material present in the book is expected. We will look at some visualizations and solve some of the CSP problems described in the book. Let us import everything from the csp module to get started.
End of explanation
%psource CSP
Explanation: Review
CSPs are a special kind of search problems. Here we don't treat the space as a black box but the state has a particular form and we use that to our advantage to tweak our algorithms to be more suited to the problems. A CSP State is defined by a set of variables which can take values from corresponding domains. These variables can take only certain values in their domains to satisfy the constraints. A set of assignments which satisfies all constraints passes the goal test. Let us start by exploring the CSP class which we will use to model our CSPs. You can keep the popup open and read the main page to get a better idea of the code.
End of explanation
s = UniversalDict(['R','G','B'])
s[5]
Explanation: The _ init _ method parameters specify the CSP. Variable can be passed as a list of strings or integers. Domains are passed as dict where key specify the variables and value specify the domains. The variables are passed as an empty list. Variables are extracted from the keys of the domain dictionary. Neighbor is a dict of variables that essentially describes the constraint graph. Here each variable key has a list its value which are the variables that are constraint along with it. The constraint parameter should be a function f(A, a, B, b) that returns true if neighbors A, B satisfy the constraint when they have values A=a, B=b. We have additional parameters like nassings which is incremented each time an assignment is made when calling the assign method. You can read more about the methods and parameters in the class doc string. We will talk more about them as we encounter their use. Let us jump to an example.
Graph Coloring
We use the graph coloring problem as our running example for demonstrating the different algorithms in the csp module. The idea of map coloring problem is that the adjacent nodes (those connected by edges) should not have the same color throughout the graph. The graph can be colored using a fixed number of colors. Here each node is a variable and the values are the colors that can be assigned to them. Given that the domain will be the same for all our nodes we use a custom dict defined by the UniversalDict class. The UniversalDict Class takes in a parameter which it returns as value for all the keys of the dict. It is very similar to defaultdict in Python except that it does not support item assignment.
End of explanation
%psource different_values_constraint
Explanation: For our CSP we also need to define a constraint function f(A, a, B, b). In this what we need is that the neighbors must not have the same color. This is defined in the function different_values_constraint of the module.
End of explanation
%pdoc parse_neighbors
Explanation: The CSP class takes neighbors in the form of a Dict. The module specifies a simple helper function named parse_neighbors which allows to take input in the form of strings and return a Dict of the form compatible with the CSP Class.
End of explanation
%psource MapColoringCSP
australia, usa, france
Explanation: The MapColoringCSP function creates and returns a CSP with the above constraint function and states. The variables are the keys of the neighbors dict and the constraint is the one specified by the different_values_constraint function. australia, usa and france are three CSPs that have been created using MapColoringCSP. australia corresponds to Figure 6.1 in the book.
End of explanation
%psource queen_constraint
Explanation: NQueens
The N-queens puzzle is the problem of placing N chess queens on a N×N chessboard so that no two queens threaten each other. Here N is a natural number. Like the graph coloring, problem NQueens is also implemented in the csp module. The NQueensCSP class inherits from the CSP class. It makes some modifications in the methods to suit the particular problem. The queens are assumed to be placed one per column, from left to right. That means position (x, y) represents (var, val) in the CSP. The constraint that needs to be passed on the CSP is defined in the queen_constraint function. The constraint is satisfied (true) if A, B are really the same variable, or if they are not in the same row, down diagonal, or up diagonal.
End of explanation
%psource NQueensCSP
Explanation: The NQueensCSP method implements methods that support solving the problem via min_conflicts which is one of the techniques for solving CSPs. Because min_conflicts hill climbs the number of conflicts to solve the CSP assign and unassign are modified to record conflicts. More details about the structures rows, downs, ups which help in recording conflicts are explained in the docstring.
End of explanation
eight_queens = NQueensCSP(8)
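# Illustrative sketch: the instance above can be solved directly with min_conflicts
# (imported from the csp module at the top), which returns a dict mapping each
# column (variable) to the row (value) chosen for its queen.
solution = min_conflicts(eight_queens)
print(solution)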
Explanation: The _ init _ method takes only one parameter n the size of the problem. To create an instance we just pass the required n into the constructor.
End of explanation
import copy
class InstruCSP(CSP):
def __init__(self, variables, domains, neighbors, constraints):
super().__init__(variables, domains, neighbors, constraints)
self.assingment_history = []
def assign(self, var, val, assignment):
super().assign(var,val, assignment)
self.assingment_history.append(copy.deepcopy(assignment))
def unassign(self, var, assignment):
super().unassign(var,assignment)
self.assingment_history.append(copy.deepcopy(assignment))
Explanation: Helper Functions
We will now implement few helper functions that will help us visualize the Coloring Problem. We will make some modifications to the existing Classes and Functions for additional book keeping. To begin with we modify the assign and unassign methods in the CSP to add a copy of the assignment to the assingment_history. We call this new class InstruCSP. This would allow us to see how the assignment evolves over time.
End of explanation
def make_instru(csp):
return InstruCSP(csp.variables, csp.domains, csp.neighbors,
csp.constraints)
Explanation: Next, we define make_instru which takes an instance of CSP and returns a InstruCSP instance.
End of explanation
neighbors = {
0: [6, 11, 15, 18, 4, 11, 6, 15, 18, 4],
1: [12, 12, 14, 14],
2: [17, 6, 11, 6, 11, 10, 17, 14, 10, 14],
3: [20, 8, 19, 12, 20, 19, 8, 12],
4: [11, 0, 18, 5, 18, 5, 11, 0],
5: [4, 4],
6: [8, 15, 0, 11, 2, 14, 8, 11, 15, 2, 0, 14],
7: [13, 16, 13, 16],
8: [19, 15, 6, 14, 12, 3, 6, 15, 19, 12, 3, 14],
9: [20, 15, 19, 16, 15, 19, 20, 16],
10: [17, 11, 2, 11, 17, 2],
11: [6, 0, 4, 10, 2, 6, 2, 0, 10, 4],
12: [8, 3, 8, 14, 1, 3, 1, 14],
13: [7, 15, 18, 15, 16, 7, 18, 16],
14: [8, 6, 2, 12, 1, 8, 6, 2, 1, 12],
15: [8, 6, 16, 13, 18, 0, 6, 8, 19, 9, 0, 19, 13, 18, 9, 16],
16: [7, 15, 13, 9, 7, 13, 15, 9],
17: [10, 2, 2, 10],
18: [15, 0, 13, 4, 0, 15, 13, 4],
19: [20, 8, 15, 9, 15, 8, 3, 20, 3, 9],
20: [3, 19, 9, 19, 3, 9]
}
Explanation: We will now use a graph defined as a dictionary for plotting purposes in our Graph Coloring Problem. The keys are the nodes and their corresponding values are the nodes they are connected to.
End of explanation
coloring_problem = MapColoringCSP('RGBY', neighbors)
coloring_problem1 = make_instru(coloring_problem)
Explanation: Now we are ready to create an InstruCSP instance for our problem. We are doing this for an instance of MapColoringProblem class which inherits from the CSP Class. This means that our make_instru function will work perfectly for it.
End of explanation
result = backtracking_search(coloring_problem1)
result # A dictionary of assignments.
Explanation: Backtracking Search
For solving a CSP the main issue with Naive search algorithms is that they can continue expanding obviously wrong paths. In backtracking search, we check constraints as we go. Backtracking is just the above idea combined with the fact that we are dealing with one variable at a time. Backtracking Search is implemented in the repository as the function backtracking_search. This is the same as Figure 6.5 in the book. The function takes as input a CSP and few other optional parameters which can be used to further speed it up. The function returns the correct assignment if it satisfies the goal. We will discuss these later. Let us solve our coloring_problem1 with backtracking_search.
End of explanation
coloring_problem1.nassigns
Explanation: Let us also check the number of assignments made.
End of explanation
len(coloring_problem1.assingment_history)
Explanation: Now let us check the total number of assignments and unassignments, which is the length of our assignment history.
End of explanation
%psource mrv
%psource num_legal_values
%psource CSP.nconflicts
Explanation: Now let us explore the optional keyword arguments that the backtracking_search function takes. These optional arguments help speed up the assignment further. Along with these, we will also point out to methods in the CSP class that help make this work.
The first of these is select_unassigned_variable. It takes in a function that helps in deciding the order in which variables will be selected for assignment. We use a heuristic called Most Restricted Variable which is implemented by the function mrv. The idea behind mrv is to choose the variable with the fewest legal values left in its domain. The intuition behind selecting the mrv or the most constrained variable is that it allows us to encounter failure quickly before going too deep into a tree if we have selected a wrong step before. The mrv implementation makes use of another function num_legal_values to sort out the variables by a number of legal values left in its domain. This function, in turn, calls the nconflicts method of the CSP to return such values.
End of explanation
%psource lcv
Explanation: Another ordering-related parameter, order_domain_values, governs the value ordering. Here we select the Least Constraining Value, which is implemented by the function lcv. The idea is to select the value which rules out the fewest values in the remaining variables. The intuition behind selecting the lcv is that it leaves a lot of freedom to assign values later. Combining mrv and lcv makes sense because every variable has to be assigned eventually, but only some values will ever be tried, so we face the hard variables first and try the most promising values first.
End of explanation
solve_simple = copy.deepcopy(usa)
solve_parameters = copy.deepcopy(usa)
backtracking_search(solve_simple)
backtracking_search(solve_parameters, order_domain_values=lcv, select_unassigned_variable=mrv, inference=mac )
solve_simple.nassigns
solve_parameters.nassigns
Explanation: Finally, the third parameter inference can make use of one of the two techniques called Arc Consistency or Forward Checking. The details of these methods can be found in the Section 6.3.2 of the book. In short the idea of inference is to detect the possible failure before it occurs and to look ahead to not make mistakes. mac and forward_checking implement these two techniques. The CSP methods support_pruning, suppose, prune, choices, infer_assignment and restore help in using these techniques. You can know more about these by looking up the source code.
Now let us compare the performance with these parameters enabled vs the default parameters. We will use the Graph Coloring problem instance usa for comparison. We will call the instances solve_simple and solve_parameters and solve them using backtracking and compare the number of assignments.
End of explanation
%matplotlib inline
import networkx as nx
import matplotlib.pyplot as plt
import matplotlib
import time
Explanation: Graph Coloring Visualization
Next, we define some functions to create the visualisation from the assingment_history of coloring_problem1. The reader need not concern himself with the code that immediately follows as it is the usage of Matplotib with IPython Widgets. If you are interested in reading more about these visit ipywidgets.readthedocs.io. We will be using the networkx library to generate graphs. These graphs can be treated as the graph that needs to be colored or as a constraint graph for this problem. If interested you can read a dead simple tutorial here. We start by importing the necessary libraries and initializing matplotlib inline.
End of explanation
def make_update_step_function(graph, instru_csp):
def draw_graph(graph):
# create networkx graph
G=nx.Graph(graph)
# draw graph
pos = nx.spring_layout(G,k=0.15)
return (G, pos)
G, pos = draw_graph(graph)
def update_step(iteration):
# here iteration is the index of the assingment_history we want to visualize.
current = instru_csp.assingment_history[iteration]
# We convert the particular assingment to a default dict so that the color for nodes which
# have not been assigned defaults to black.
current = defaultdict(lambda: 'Black', current)
# Now we use colors in the list and default to black otherwise.
colors = [current[node] for node in G.node.keys()]
# Finally drawing the nodes.
nx.draw(G, pos, node_color=colors, node_size=500)
labels = {label:label for label in G.node}
# Labels shifted by offset so as to not overlap nodes.
label_pos = {key:[value[0], value[1]+0.03] for key, value in pos.items()}
nx.draw_networkx_labels(G, label_pos, labels, font_size=20)
# show graph
plt.show()
return update_step # <-- this is a function
def make_visualize(slider):
''' Takes an input a slider and returns
callback function for timer and animation
'''
def visualize_callback(Visualize, time_step):
if Visualize is True:
for i in range(slider.min, slider.max + 1):
slider.value = i
time.sleep(float(time_step))
return visualize_callback
Explanation: The ipython widgets we will be using require the plots in the form of a step function such that there is a graph corresponding to each value. We define the make_update_step_function which return such a function. It takes in as inputs the neighbors/graph along with an instance of the InstruCSP. This will be more clear with the example below. If this sounds confusing do not worry this is not the part of the core material and our only goal is to help you visualize how the process works.
End of explanation
step_func = make_update_step_function(neighbors, coloring_problem1)
Explanation: Finally let us plot our problem. We first use the function above to obtain a step function.
End of explanation
matplotlib.rcParams['figure.figsize'] = (18.0, 18.0)
Explanation: Next we set the canvas size.
End of explanation
import ipywidgets as widgets
from IPython.display import display
iteration_slider = widgets.IntSlider(min=0, max=len(coloring_problem1.assingment_history)-1, step=1, value=0)
w=widgets.interactive(step_func,iteration=iteration_slider)
display(w)
visualize_callback = make_visualize(iteration_slider)
visualize_button = widgets.ToggleButton(description = "Visualize", value = False)
time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])
a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)
display(a)
Explanation: Finally, our plot using the ipywidget slider and matplotlib. You can move the slider to experiment and see the coloring change. It is also possible to move the slider using arrow keys or to jump to the value by directly editing the number with a double click. The Visualize Button will automatically animate the slider for you. The Extra Delay Box allows you to set the time delay in seconds, up to one second for each time step.
End of explanation
def label_queen_conflicts(assingment,grid):
''' Mark grid with queens that are under conflict. '''
for col, row in assingment.items(): # check each queen for conflict
row_conflicts = {temp_col:temp_row for temp_col,temp_row in assingment.items()
if temp_row == row and temp_col != col}
up_conflicts = {temp_col:temp_row for temp_col,temp_row in assingment.items()
if temp_row+temp_col == row+col and temp_col != col}
        down_conflicts = {temp_col:temp_row for temp_col,temp_row in assingment.items()
if temp_row-temp_col == row-col and temp_col != col}
# Now marking the grid.
for col, row in row_conflicts.items():
grid[col][row] = 3
for col, row in up_conflicts.items():
grid[col][row] = 3
for col, row in down_conflicts.items():
grid[col][row] = 3
return grid
def make_plot_board_step_function(instru_csp):
'''ipywidgets interactive function supports
single parameter as input. This function
creates and return such a function by taking
in input other parameters.
'''
n = len(instru_csp.variables)
def plot_board_step(iteration):
''' Add Queens to the Board.'''
data = instru_csp.assingment_history[iteration]
grid = [[(col+row+1)%2 for col in range(n)] for row in range(n)]
grid = label_queen_conflicts(data, grid) # Update grid with conflict labels.
# color map of fixed colors
cmap = matplotlib.colors.ListedColormap(['white','lightsteelblue','red'])
bounds=[0,1,2,3] # 0 for white 1 for black 2 onwards for conflict labels (red).
norm = matplotlib.colors.BoundaryNorm(bounds, cmap.N)
fig = plt.imshow(grid, interpolation='nearest', cmap = cmap,norm=norm)
plt.axis('off')
fig.axes.get_xaxis().set_visible(False)
fig.axes.get_yaxis().set_visible(False)
# Place the Queens Unicode Symbol
for col, row in data.items():
fig.axes.text(row, col, u"\u265B", va='center', ha='center', family='Dejavu Sans', fontsize=32)
plt.show()
return plot_board_step
Explanation: NQueens Visualization
Just like the Graph Coloring Problem we will start with defining a few helper functions to help us visualize the assignments as they evolve over time. The make_plot_board_step_function behaves similar to the make_update_step_function introduced earlier. It initializes a chess board in the form of a 2D grid with alternating 0s and 1s. This is used by the plot_board_step function, which draws the board using matplotlib and adds queens to it. This function also calls label_queen_conflicts, which modifies the grid by placing a 3 in positions where there is a conflict.
End of explanation
twelve_queens_csp = NQueensCSP(12)
backtracking_instru_queen = make_instru(twelve_queens_csp)
result = backtracking_search(backtracking_instru_queen)
backtrack_queen_step = make_plot_board_step_function(backtracking_instru_queen) # Step Function for Widgets
Explanation: Now let us visualize a solution obtained via backtracking. We use the previously defined make_instru function for keeping a history of steps.
End of explanation
matplotlib.rcParams['figure.figsize'] = (8.0, 8.0)
matplotlib.rcParams['font.family'].append(u'Dejavu Sans')
iteration_slider = widgets.IntSlider(min=0, max=len(backtracking_instru_queen.assingment_history)-1, step=0, value=0)
w=widgets.interactive(backtrack_queen_step,iteration=iteration_slider)
display(w)
visualize_callback = make_visualize(iteration_slider)
visualize_button = widgets.ToggleButton(description = "Visualize", value = False)
time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])
a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)
display(a)
Explanation: Now finally we set some matplotlib parameters to adjust how our plot will look. The font is necessary because the Black Queen Unicode character is not a part of all fonts. You can move the slider to experiment and observe how the queens are assigned. It is also possible to move the slider using arrow keys or to jump to the value by directly editing the number with a double click. The Visualize Button will automatically animate the slider for you. The Extra Delay Box allows you to set the time delay in seconds, up to one second for each time step.
End of explanation
conflicts_instru_queen = make_instru(twelve_queens_csp)
result = min_conflicts(conflicts_instru_queen)
conflicts_step = make_plot_board_step_function(conflicts_instru_queen)
Explanation: Now let us finally repeat the above steps for min_conflicts solution.
End of explanation
iteration_slider = widgets.IntSlider(min=0, max=len(conflicts_instru_queen.assingment_history)-1, step=0, value=0)
w=widgets.interactive(conflicts_step,iteration=iteration_slider)
display(w)
visualize_callback = make_visualize(iteration_slider)
visualize_button = widgets.ToggleButton(description = "Visualize", value = False)
time_select = widgets.ToggleButtons(description='Extra Delay:',options=['0', '0.1', '0.2', '0.5', '0.7', '1.0'])
a = widgets.interactive(visualize_callback, Visualize = visualize_button, time_step=time_select)
display(a)
Explanation: The visualization has same features as the above. But here it also highlights the conflicts by labeling the conflicted queens with a red background.
End of explanation
<END_TASK> |
274 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
The train_y_nll, valid_y_nll and valid_objective show massive overfitting.
Step1: Hard to see whether it is still learning... | <ASSISTANT_TASK:>
Python Code:
m.layer_names
channel = m.monitor.channels["valid_y_nll"]
hl.Curve(zip(channel.epoch_record, channel.val_record),label="valid_y_nll")
channel = m.monitor.channels["valid_y_nll"]
plt.plot(channel.epoch_record, channel.val_record)
Explanation: The train_y_nll, valid_y_nll and valid_objective show massive overfitting.
End of explanation
ch1 = m.monitor.channels["valid_y_nll"]
ch2 = m.monitor.channels["train_y_nll"]
hl.Curve(zip(ch1.epoch_record[-40:], ch1.val_record[-40:]),label="valid_y_nll")
hl.Curve(zip(ch2.epoch_record[-40:], ch2.val_record[-40:]),label="train_y_nll")
m = pylearn2.utils.serial.load(
"/disk/scratch/neuroglycerin/models/continue_hopeful_recent.pkl")
make_curves(m,"valid_objective","valid_y_nll","train_y_nll")
Explanation: Hard to see whether it is still learning...
End of explanation
<END_TASK> |
275 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
The MIT License (MIT)<br>
Copyright (c) 2017 Massachusetts Institute of Technology<br>
Author
Step1: Specify list of dates
Step2: Enter Research Data Archive (NCAR) credentials
Step3: Create data fetcher
Step4: Access data
Step5: Plot temperature data | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 150
from getpass import getpass
import pandas as pd
from skdaccess.framework.param_class import *
from skdaccess.geo.era_interim.cache import DataFetcher as EDF
Explanation: The MIT License (MIT)<br>
Copyright (c) 2017 Massachusetts Institute of Technology<br>
Author: Cody Rude<br>
This software has been created in projects supported by the US National<br>
Science Foundation and NASA (PI: Pankratius)<br>
Permission is hereby granted, free of charge, to any person obtaining a copy<br>
of this software and associated documentation files (the "Software"), to deal<br>
in the Software without restriction, including without limitation the rights<br>
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell<br>
copies of the Software, and to permit persons to whom the Software is<br>
furnished to do so, subject to the following conditions:<br>
The above copyright notice and this permission notice shall be included in<br>
all copies or substantial portions of the Software.<br>
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR<br>
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,<br>
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE<br>
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER<br>
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,<br>
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN<br>
THE SOFTWARE.<br>
Data Citation
European Centre for Medium-Range Weather Forecasts (2009): ERA-Interim Project. Research Data Archive at the National Center for Atmospheric Research, Computational and Information Systems Laboratory. https://doi.org/10.5065/D6CR5RD9.
End of explanation
date_list = pd.date_range('2015-06-06 00:00:00', '2015-06-06 06:00:00', freq='6H')
Explanation: Specify list of dates
End of explanation
username='Enter username'
password = getpass()
Explanation: Enter Research Data Archive (NCAR) credentials
End of explanation
edf = EDF(date_list=date_list, data_names=['Geopotential','Temperature'],
username=username, password=password)
Explanation: Create data fetcher
End of explanation
edw = edf.output()
iterator = edw.getIterator()
geo_label, geo_data = next(iterator)
temp_label, temp_data = next(iterator)
Explanation: Access data
End of explanation
plt.figure(figsize=(5,3.75));
plt.plot(temp_data[0,:,75,350], temp_data['pressure']);
plt.gca().invert_yaxis();
plt.ylabel('Pressure');
plt.xlabel('Temperature');
Explanation: Plot temperature data
End of explanation
<END_TASK> |
276 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<h1>2b. Machine Learning using tf.estimator </h1>
In this notebook, we will create a machine learning model using tf.estimator and evaluate its performance. The dataset is rather small (7700 samples), so we can do it all in-memory. We will also simply pass the raw data in as-is.
Step1: Read data created in the previous chapter.
Step2: <h2> Train and eval input functions to read from Pandas Dataframe </h2>
Step3: Our input function for predictions is the same except we don't provide a label
Step4: Create feature columns for estimator
Step5: <h3> Linear Regression with tf.Estimator framework </h3>
Step6: Evaluate on the validation data (we should defer using the test data to after we have selected a final model).
Step7: This is nowhere near our benchmark (RMSE of $6 or so on this data), but it serves to demonstrate what TensorFlow code looks like. Let's use this model for prediction.
Step8: This explains why the RMSE was so high -- the model essentially predicts the same amount for every trip. Would a more complex model help? Let's try using a deep neural network. The code to do this is quite straightforward as well.
<h3> Deep Neural Network regression </h3>
Step11: We are not beating our benchmark with either model ... what's up? Well, we may be using TensorFlow for Machine Learning, but we are not yet using it well. That's what the rest of this course is about!
But, for the record, let's say we had to choose between the two models. We'd choose the one with the lower validation error. Finally, we'd measure the RMSE on the test data with this chosen model.
<h2> Benchmark dataset </h2>
Let's do this on the benchmark dataset. | <ASSISTANT_TASK:>
Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.6
import tensorflow as tf
import pandas as pd
import numpy as np
import shutil
print(tf.__version__)
Explanation: <h1>2b. Machine Learning using tf.estimator </h1>
In this notebook, we will create a machine learning model using tf.estimator and evaluate its performance. The dataset is rather small (7700 samples), so we can do it all in-memory. We will also simply pass the raw data in as-is.
End of explanation
# In CSV, label is the first column, after the features, followed by the key
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
FEATURES = CSV_COLUMNS[1:len(CSV_COLUMNS) - 1]
LABEL = CSV_COLUMNS[0]
df_train = pd.read_csv('./taxi-train.csv', header = None, names = CSV_COLUMNS)
df_valid = pd.read_csv('./taxi-valid.csv', header = None, names = CSV_COLUMNS)
df_test = pd.read_csv('./taxi-test.csv', header = None, names = CSV_COLUMNS)
Explanation: Read data created in the previous chapter.
End of explanation
# TODO: Create an appropriate input_fn to read the training data
def make_train_input_fn(df, num_epochs):
return tf.compat.v1.estimator.inputs.pandas_input_fn(
#ADD CODE HERE
)
# TODO: Create an appropriate input_fn to read the validation data
def make_eval_input_fn(df):
return tf.compat.v1.estimator.inputs.pandas_input_fn(
#ADD CODE HERE
)
Explanation: <h2> Train and eval input functions to read from Pandas Dataframe </h2>
End of explanation
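# A hedged sketch of what such an input function might look like (illustrative only;
# `example_train_input_fn` is a hypothetical name, not the graded solution for the TODOs above):
def example_train_input_fn(df, num_epochs):
    return tf.compat.v1.estimator.inputs.pandas_input_fn(
        x = df,
        y = df[LABEL],
        batch_size = 128,
        num_epochs = num_epochs,
        shuffle = True,
        queue_capacity = 1000
    )
# An eval variant would typically pass shuffle = False and a single epoch.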
# TODO: Create an appropriate prediction_input_fn
def make_prediction_input_fn(df):
return tf.compat.v1.estimator.inputs.pandas_input_fn(
#ADD CODE HERE
)
Explanation: Our input function for predictions is the same except we don't provide a label
End of explanation
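# For predictions one would typically pass no labels and disable shuffling, e.g. (sketch only):
#   tf.compat.v1.estimator.inputs.pandas_input_fn(x = df, y = None, batch_size = 128,
#                                                 shuffle = False, num_epochs = 1)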
# TODO: Create feature columns
Explanation: Create feature columns for estimator
End of explanation
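# A minimal sketch of numeric feature columns (one common approach; the name
# `example_feature_columns` is hypothetical and not part of the exercise solution):
example_feature_columns = [tf.feature_column.numeric_column(k) for k in FEATURES]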
tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.INFO)
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
# TODO: Train a linear regression model
model = #ADD CODE HERE
model.train(#ADD CODE HERE
)
Explanation: <h3> Linear Regression with tf.Estimator framework </h3>
End of explanation
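# Illustrative sketch only (hypothetical names from the sketches above, written to a
# scratch directory so it does not interfere with the exercise's OUTDIR):
example_model = tf.estimator.LinearRegressor(
    feature_columns = example_feature_columns,
    model_dir = 'example_taxi_trained')
# example_model.train(input_fn = example_train_input_fn(df_train, num_epochs = 10))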
def print_rmse(model, df):
metrics = model.evaluate(input_fn = make_eval_input_fn(df))
print('RMSE on dataset = {}'.format(np.sqrt(metrics['average_loss'])))
print_rmse(model, df_valid)
Explanation: Evaluate on the validation data (we should defer using the test data to after we have selected a final model).
End of explanation
# TODO: Predict from the estimator model we trained using test dataset
Explanation: This is nowhere near our benchmark (RMSE of $6 or so on this data), but it serves to demonstrate what TensorFlow code looks like. Let's use this model for prediction.
End of explanation
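# A possible way to obtain predictions (sketch; it assumes the `model` and
# make_prediction_input_fn TODOs above have been completed):
#   preds = model.predict(input_fn = make_prediction_input_fn(df_test))
#   print([p['predictions'][0] for _, p in zip(range(5), preds)])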
# TODO: Copy your LinearRegressor estimator and replace with DNNRegressor. Remember to add a list of hidden units i.e. [32, 8, 2]
Explanation: This explains why the RMSE was so high -- the model essentially predicts the same amount for every trip. Would a more complex model help? Let's try using a deep neural network. The code to do this is quite straightforward as well.
<h3> Deep Neural Network regression </h3>
End of explanation
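# Sketch of the DNN variant (illustrative; the hidden_units values are just the ones
# suggested in the TODO, and `example_dnn` is a hypothetical name):
example_dnn = tf.estimator.DNNRegressor(
    hidden_units = [32, 8, 2],
    feature_columns = example_feature_columns,
    model_dir = 'example_taxi_dnn')
# example_dnn.train(input_fn = example_train_input_fn(df_train, num_epochs = 10))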
from google.cloud import bigquery
import numpy as np
import pandas as pd
def create_query(phase, EVERY_N):
    """
    phase: 1 = train 2 = valid
    """
    base_query = """
SELECT
(tolls_amount + fare_amount) AS fare_amount,
EXTRACT(DAYOFWEEK FROM pickup_datetime) * 1.0 AS dayofweek,
EXTRACT(HOUR FROM pickup_datetime) * 1.0 AS hourofday,
pickup_longitude AS pickuplon,
pickup_latitude AS pickuplat,
dropoff_longitude AS dropofflon,
dropoff_latitude AS dropofflat,
passenger_count * 1.0 AS passengers,
CONCAT(CAST(pickup_datetime AS STRING), CAST(pickup_longitude AS STRING), CAST(pickup_latitude AS STRING), CAST(dropoff_latitude AS STRING), CAST(dropoff_longitude AS STRING)) AS key
FROM
`nyc-tlc.yellow.trips`
WHERE
trip_distance > 0
AND fare_amount >= 2.5
AND pickup_longitude > -78
AND pickup_longitude < -70
AND dropoff_longitude > -78
AND dropoff_longitude < -70
AND pickup_latitude > 37
AND pickup_latitude < 45
AND dropoff_latitude > 37
AND dropoff_latitude < 45
    AND passenger_count > 0
    """
if EVERY_N == None:
if phase < 2:
# Training
query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 4)) < 2".format(base_query)
else:
# Validation
query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 4)) = {1}".format(base_query, phase)
else:
query = "{0} AND ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), {1})) = {2}".format(base_query, EVERY_N, phase)
return query
query = create_query(2, 100000)
df = bigquery.Client().query(query).to_dataframe()
print_rmse(model, df)
Explanation: We are not beating our benchmark with either model ... what's up? Well, we may be using TensorFlow for Machine Learning, but we are not yet using it well. That's what the rest of this course is about!
But, for the record, let's say we had to choose between the two models. We'd choose the one with the lower validation error. Finally, we'd measure the RMSE on the test data with this chosen model.
<h2> Benchmark dataset </h2>
Let's do this on the benchmark dataset.
End of explanation
<END_TASK> |
277 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Scientific Python Quickstart
John Stachurski
ANU
This is a fast-paced, hands-on introduction to scientific computing with Python, contained in a Jupyter notebook. The main focus will be on introducing Python's four most important scientific libraries
Step1: Basic NumPy
Perhaps the single most important scientific library for Python is NumPy. NumPy provides foundational data structures and routines on which many other libraries rely.
Step2: NumPy defines a basic data type called an array (actually a numpy.ndarray)
Step3: Note that array data must be homogeneous
The most important data types are
Step4: When we create an array such as
Step5: z is a "flat" array with no dimension--- neither row nor column vector
Step6: Here the shape tuple has only one element, which is the length of the array (tuples with one element end with a comma)
To give it dimension, we can change the shape attribute
For example, let's make it a column vector
Step7: Creating arrays
Creating empty arrays --- initializing memory
Step8: These are just garbage numbers --- whatever was in those memory slots
Here's how to make a regular gird sequence
Step9: Creating an array of ones
Step10: Arrays can be made from Python lists or tuples
Step11: Array indexing
Step12: Array methods
Step13: Operations on arrays
Standard arithmetic operations on arrays act elementwise
Step14: For Python $\geq 3.5$ and NumPy $\geq 1.1$ the @ operator also works.
Step15: I'll continue to use np.dot below for the benefit of those who are using older versions. But in my opinion the @ operator is much nicer.
Comparisons
Step16: Matplotlib
Matplotlib is an outstanding plotting and visualization library for Python that interacts nicely with NumPy. Here are a few quick examples. We'll see more below when we discuss the SciPy library.
Step17: Display figures in this browser window rather than having them open up separately
Step18: Create something to plot
Step19: Here's a slightly more complex plot
Step20: SciPy
Let's just cover some simple examples --- references for further reading are below
Statistics and distributions
Let's use scipy.stats to generate some data from the Beta distribution
Step21: Now let's histogram it and compare it to the original density
Step22: Other methods
Step23: Basic linear regression
Step24: Let's plot this with data and line of best fit
Step25: Roots and fixed points
Let's choose an arbitrary function to work with
Step26: Here we see that the algorithm gets it wrong --- newton is fast but not robust
Let's try a hybrid method
Step27: Note that the hybrid method is robust but still quite fast...
Numerical optimization and integration
Step28: Linear Algebra
Let's look at some of the most common routines from linear and matrix algebra
Step29: We'll experiment with matrices
$$
A = \begin{bmatrix} 2 & -1 \ 3 & 0 \end{bmatrix}
\quad \text{and} \quad
b = \begin{bmatrix} 1 \ 1 \end{bmatrix}
$$
Step30: Let's check that $Ax = b$
Step31: We can also invert directly
Step32: Let's compute the eigenvalues and eigenvectors
Step33: More information
linear algebra
Step34: Let's start by writing a test data set to the present working directory, so we can read it back in as a dataframe using pandas. We use an IPython magic to write the data from a cell to a file
Step35: Let's try that again but this time using the country as the index column
Step36: Let's drop the year since it's not very informative
Step37: Let's add a column for GDP per capita
Step38: Let's sort the whole data frame by GDP per capita
Step39: Now we'll plot per capital GDP using the dataframe's plot method
Step40: Exercises
Here are two exercises. Feel free to consult documentation such as can be found here. The solutions are below. The cell with "solution below" is mean to push them below your line of sight and save you from temptation.
Exercise 1
Generate 10000 data points from the exponential distribution with density
$$
f(x; \alpha) = \alpha \exp(-\alpha x)
\qquad
(x > 0, \alpha > 0)
$$
using scipy.stats and taking $\alpha = 0.5$. Then, after looking up the maximum likelihood estimator of $\alpha$, compute the estimate given your data and check that it is in fact close to $\alpha$.
Step41: Exercise 2
Using the same data set, implement maximum likelihood again, but this time pretending that you don't know the analytical expression for the maximum likelihood estimator. Set up the log likelihood function and maximize it numerically using a routine from scipy.optimize.
Step42: Solutions
Step43: Solution to Exercise 1
After checking the docs for the exponential distribution we proceed as follows
Step44: Let's check we've got the right distribution here
Step45: It's well-known that the MLE of $\alpha$ is $1/\bar x$ where $\bar x$ is the mean of the sample. Let's check that it is indeed close to $\alpha$.
Step46: Minimize over a reasonable parameter space | <ASSISTANT_TASK:>
Python Code:
import sys
print(sys.version)
Explanation: Scientific Python Quickstart
John Stachurski
ANU
This is a fast-paced, hands-on introduction to scientific computing with Python, contained in a Jupyter notebook. The main focus will be on introducing Python's four most important scientific libraries: NumPy, Scipy, Pandas and Matplotlib.
If you don't know how to use this notebook you need to first work through this page.
A slower, more detailed and more systematic treatment of Python for scientific applications can be found at quant-econ.net. But this notebook is a good place to start for those who like to learn by doing.
Here's some information on the version of Python that I'm using:
End of explanation
import numpy as np # Import library and give it alias np
print(np.__version__) # The version I'm using
Explanation: Basic NumPy
Perhaps the single most important scientific library for Python is NumPy. NumPy provides foundational data structures and routines on which many other libraries rely.
End of explanation
a = np.zeros(3) # Create an array of zeros
a # Print a
type(a)
Explanation: NumPy defines a basic data type called an array (actually a numpy.ndarray)
End of explanation
a = np.zeros(3)
type(a[1])
Explanation: Note that array data must be homogeneous
The most important data types are:
float64: 64 bit floating point number
float32: 32 bit floating point number
int64: 64 bit integer
int32: 32 bit integer
bool: 8 bit True or False
There are also dtypes to represent complex numbers, unsigned integers, etc
On most machines, the default dtype for arrays is float64
End of explanation
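# A small added illustration of explicit dtypes (not part of the original notebook):
np.zeros(3, dtype=int)                       # integer zeros instead of the float64 default
np.array([1, 2, 3], dtype='float32').dtype   # dtype('float32')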
z = np.zeros(10)
Explanation: When we create an array such as
End of explanation
z.shape
Explanation: z is a "flat" array with no dimension--- neither row nor column vector:
End of explanation
z.shape = (10, 1)
z
z = np.zeros(4)
z.shape = (2, 2)
z
Explanation: Here the shape tuple has only one element, which is the length of the array (tuples with one element end with a comma)
To give it dimension, we can change the shape attribute
For example, let's make it a column vector
End of explanation
z = np.empty(3)
z
Explanation: Creating arrays
Creating empty arrays --- initializing memory:
End of explanation
z = np.linspace(2, 4, 5) # From 2 to 4, with 5 elements
z
Explanation: These are just garbage numbers --- whatever was in those memory slots
Here's how to make a regular gird sequence
End of explanation
z = np.ones(3)
z
z = np.identity(2)
z
Explanation: Creating an array of ones
End of explanation
z = np.array([10, 20])
z
z = np.array((10, 20), dtype=float)
z
z = np.array([[1, 2], [3, 4]]) # 2D array from a list of lists
z
Explanation: Arrays can be made from Python lists or tuples
End of explanation
z = np.linspace(1, 2, 5)
z
z[0] # First element --- Python sequences are zero based, like C, Java, etc.
z[-1] # Special syntax for last element
z[0:2] # Meaning: Two elements, starting from element 0
z = np.array([[1, 2], [3, 4]])
z
z[0, 0]
z[0,:] # First row
z[:,0] # First column
z = np.linspace(2, 4, 5)
z
d = np.array([0, 1, 1, 0, 0], dtype=bool)
d
z[d]
Explanation: Array indexing
End of explanation
A = np.array((4, 3, 2, 1))
A
A.sort()
A
A.mean()
A.sum()
A.max()
A.cumsum()
A.var()
A.shape = (2, 2)
A
A.T # Transpose, equivalent to A.transpose()
Explanation: Array methods
End of explanation
a = np.array([1, 2, 3, 4])
b = np.array([5, 6, 7, 8])
a + b
a - b
a + 10
a.shape = 2, 2
b.shape = 2, 2
a
b
a * b # Pointwise multiplication!!
np.dot(a, b) # Matrix multiplication
Explanation: Operations on arrays
Standard arithmetic operations on arrays act elementwise
End of explanation
a @ b
Explanation: For Python $\geq 3.5$ and NumPy $\geq 1.1$ the @ operator also works.
End of explanation
z = np.array([2, 3])
y = np.array([2, 3])
z == y
y[0] = 3
z == y
z = np.linspace(0, 10, 5)
z
z > 3
z[z > 3] # Conditional extraction
Explanation: I'll continue to use np.dot below for the benefit of those who are using older versions. But in my opinion the @ operator is much nicer.
Comparisons
End of explanation
import matplotlib.pyplot as plt # Import main functionality
Explanation: Matplotlib
Matplotlib is an outstanding plotting and visualization library for Python that interacts nicely with NumPy. Here are a few quick examples. We'll see more below when we discuss the SciPy library.
End of explanation
%matplotlib inline
Explanation: Display figures in this browser window rather than having them open up separately:
End of explanation
x = np.linspace(-2, 2, 100)
y = x**2
fig, ax = plt.subplots() # Create axes and figure window
ax.plot(x, y, 'b-')
Explanation: Create something to plot
End of explanation
y3 = x**3
fig, ax = plt.subplots() # Create axes and figure window
ax.plot(x, y, 'b-', lw=2, alpha=0.8, label='$x^2$')
ax.plot(x, y3, 'g-', lw=2, alpha=0.8, label='$x^3$')
ax.legend(loc='lower right')
Explanation: Here's a slightly more complex plot
End of explanation
from scipy.stats import beta
q = beta(5, 5) # Beta(a, b), with a = b = 5
obs = q.rvs(2000) # 2000 observations
Explanation: SciPy
Let's just cover some simple examples --- references for further reading are below
Statistics and distributions
Let's use scipy.stats to generate some data from the Beta distribution
End of explanation
fig, ax = plt.subplots()
ax.hist(obs, bins=40, normed=True)
grid = np.linspace(0.01, 0.99, 100)
ax.plot(grid, q.pdf(grid), 'k-', linewidth=2)
Explanation: Now let's histogram it and compare it to the original density
End of explanation
type(q)
dir(q) # Let's see all its methods
q.cdf(0.5)
q.pdf(0.5)
q.mean()
Explanation: Other methods
End of explanation
from scipy.stats import linregress
n = 100
alpha, beta, sigma = 1, 2, 1.5
x = np.random.randn(n) # n standard normals
y = alpha + beta * x + sigma * np.random.randn(n)
beta_hat, alpha_hat, r_value, p_value, std_err = linregress(x, y)
print("gradient = {}".format(beta_hat))
print("intercept = {}".format(alpha_hat))
Explanation: Basic linear regression:
End of explanation
fig, ax = plt.subplots(figsize=(8, 5))
ax.plot(x, y, 'bo', alpha=0.6, label='observations')
xgrid = np.linspace(-3, 3, 2)
ax.plot(xgrid, alpha_hat + beta_hat * xgrid, 'k-', lw=2, alpha=0.8, label='best fit')
ax.grid()
ax.legend(loc='upper left')
Explanation: Let's plot this with data and line of best fit
End of explanation
fig, ax = plt.subplots()
def f(x):
return np.sin(4 * (x - 0.25)) + x + x**20 - 1
x = np.linspace(0, 1, 100)
ax.plot(x, f(x))
ax.plot(x, 0 * x)
from scipy.optimize import bisect # Bisection algorithm --- slow but robust
bisect(f, 0, 1)
from scipy.optimize import newton # Newton's method --- fast but less robust
newton(f, 0.2) # Start the search at initial condition x = 0.2
newton(f, 0.7) # Start the search at x = 0.7 instead
Explanation: Roots and fixed points
Let's choose an arbitrary function to work with
End of explanation
from scipy.optimize import brentq
brentq(f, 0, 1) # Hybrid method
timeit bisect(f, 0, 1)
timeit newton(f, 0.2)
timeit brentq(f, 0, 1)
Explanation: Here we see that the algorithm gets it wrong --- newton is fast but not robust
Let's try a hybrid method
End of explanation
from scipy.optimize import fminbound
fminbound(lambda x: x**2, -1, 2) # Search in [-1, 2]
from scipy.integrate import quad
integral, error = quad(lambda x: x**2, 0, 1)
integral
Explanation: Note that the hybrid method is robust but still quite fast...
Numerical optimization and integration
End of explanation
import scipy.linalg as la
Explanation: Linear Algebra
Let's look at some of the most common routines from linear and matrix algebra
End of explanation
A = [[2, -1],
[3, 0]]
A = np.array(A) # Convert from list to NumPy array
b = np.ones((2, 1)) # Shape is 2 x 1
A
b
x = la.solve(A, b) # Solve for x in Ax = b
print(x)
Explanation: We'll experiment with matrices
$$
A = \begin{bmatrix} 2 & -1 \ 3 & 0 \end{bmatrix}
\quad \text{and} \quad
b = \begin{bmatrix} 1 \ 1 \end{bmatrix}
$$
End of explanation
np.dot(A, x)
Explanation: Let's check that $Ax = b$
End of explanation
la.inv(A)
np.dot(A, la.inv(A)) # Should be the identity
Explanation: We can also invert directly
End of explanation
eigvals, eigvecs = la.eig(A)
print("eigenvalues = {}".format(eigvals))
print("first eigenvector = {}".format(eigvecs[:, 0]))
Explanation: Let's compute the eigenvalues and eigenvectors
End of explanation
import pandas as pd
Explanation: More information
linear algebra: http://docs.scipy.org/doc/scipy/reference/linalg.html
numerical integration: http://docs.scipy.org/doc/scipy/reference/integrate.html
interpolation: http://docs.scipy.org/doc/scipy/reference/interpolate.html
optimization: http://docs.scipy.org/doc/scipy/reference/optimize.html
distributions and random number generation: http://docs.scipy.org/doc/scipy/reference/stats.html
signal processing: http://docs.scipy.org/doc/scipy/reference/signal.html
Pandas
Pandas is a very popular library for working with data sets. In pandas, data is held in a dataframe, which is kind of like a spread sheet
End of explanation
%%file test_data.csv
"country","country isocode","year","POP","XRAT","tcgdp","cc","cg"
"Argentina","ARG","2000","37335.653","0.9995","295072.21869","75.716805379","5.5788042896"
"Australia","AUS","2000","19053.186","1.72483","541804.6521","67.759025993","6.7200975332"
"India","IND","2000","1006300.297","44.9416","1728144.3748","64.575551328","14.072205773"
"Israel","ISR","2000","6114.57","4.07733","129253.89423","64.436450847","10.266688415"
"Malawi","MWI","2000","11801.505","59.543808333","5026.2217836","74.707624181","11.658954494"
"South Africa","ZAF","2000","45064.098","6.93983","227242.36949","72.718710427","5.7265463933"
"United States","USA","2000","282171.957","1","9898700","72.347054303","6.0324539789"
"Uruguay","URY","2000","3219.793","12.099591667","25255.961693","78.978740282","5.108067988"
%ls ./*.csv # Check it's there
df = pd.read_csv('./test_data.csv')
df
Explanation: Let's start by writing a test data set to the present working directory, so we can read it back in as a dataframe using pandas. We use an IPython magic to write the data from a cell to a file:
End of explanation
df = pd.read_csv('./test_data.csv', index_col='country')
df
Explanation: Let's try that again but this time using the country as the index column
End of explanation
df.drop(['year'], axis=1, inplace=True)
df
Explanation: Let's drop the year since it's not very informative
End of explanation
df['GDP percap'] = df['tcgdp'] / df['POP']
df
Explanation: Let's add a column for GDP per capita
End of explanation
df.sort_values(by='GDP percap', inplace=True)
df
Explanation: Let's sort the whole data frame by GDP per capita
End of explanation
df['GDP percap'].plot(kind='bar')
Explanation: Now we'll plot per capital GDP using the dataframe's plot method
End of explanation
# Put your solution here
Explanation: Exercises
Here are two exercises. Feel free to consult documentation such as can be found here. The solutions are below. The cell with "solution below" is mean to push them below your line of sight and save you from temptation.
Exercise 1
Generate 10000 data points from the exponential distribution with density
$$
f(x; \alpha) = \alpha \exp(-\alpha x)
\qquad
(x > 0, \alpha > 0)
$$
using scipy.stats and taking $\alpha = 0.5$. Then, after looking up the maximum likelihood estimator of $\alpha$, compute the estimate given your data and check that it is in fact close to $\alpha$.
End of explanation
# Put your solution here
Explanation: Exercise 2
Using the same data set, implement maximum likelihood again, but this time pretending that you don't know the analytical expression for the maximum likelihood estimator. Set up the log likelihood function and maximize it numerically using a routine from scipy.optimize.
End of explanation
# Print some nonsense to partially hide solutions
filler_text = "solution below\n" * 25
print(filler_text)
Explanation: Solutions
End of explanation
from scipy.stats import expon
alpha = 0.5
n = 10000
ep = expon(scale=1.0/alpha) # scale controls the exponential parameter
x = ep.rvs(n)
Explanation: Solution to Exercise 1
After checking the docs for the exponential distribution we proceed as follows
End of explanation
fig, ax = plt.subplots(figsize=(8, 5))
xmin, xmax = 0.001, 10.0
ax.set_xlim(xmin, xmax)
ax.hist(x, normed=True, bins=40, alpha=0.3)
grid = np.linspace(xmin, xmax, 200)
ax.plot(grid, ep.pdf(grid), 'g-', lw=2, label='true density')
ax.legend()
Explanation: Let's check we've got the right distribution here
End of explanation
alpha_mle = 1.0 / x.mean()
print("max likelihood estimate of alpha is {}".format(alpha_mle))
s = x.sum()
def neg_loglike(a):
"Minus the log likelihood function for exponential"
return - n * np.log(a) + a * s
Explanation: It's well-known that the MLE of $\alpha$ is $1/\bar x$ where $\bar x$ is the mean of the sample. Let's check that it is indeed close to $\alpha$.
End of explanation
from scipy.optimize import fminbound
fminbound(neg_loglike, 0.01, 10.0)
Explanation: Minimize over a reasonable parameter space
End of explanation
<END_TASK> |
278 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
DDM for All Control Data
Parse all control data to a single CSV, then check if subject data can fit a hierarchical drift decision model for different stimuli types
Experimental stimuli are as follows
Step1: First stab at hddm model fit
Step2: Parameters of Interest
Mean of a = 2.39576 (std = 0.152745)
Mean of t = 0.576694 s
Drift rate (v) mean values
Step3: Plot posterior of drift rate for group means
Step4: Calculate the proportion of the posteriors in which the drift rate for one condition is greater than the other
Step5: Therefore
Step6: Models converge!
Explore Bias | <ASSISTANT_TASK:>
Python Code:
def parse_condition(stim_num):
if stim_num == '1':
return 'SS'
if stim_num == '2':
return 'CP'
if stim_num == '3':
return 'CS'
if stim_num == '4':
return 'US'
import csv
import glob
csv_dir = '../data/pilot_subjects/'
subjects = []
for csv_file in glob.glob(csv_dir + 'data*.csv'):
subject = []
subj_idx = csv_file[-9:-4] # Use id from filename
with open(csv_file, 'r') as f:
reader = csv.DictReader(f)
for trial in reader:
trial['subj_idx'] = subj_idx
trial['stim'] = parse_condition(trial['stim'])
subject.append(trial)
subjects.append(subject)
keys = subject[0].keys()
with open('../data/controls.csv', 'w') as out:
writer = csv.DictWriter(out, keys)
writer.writeheader()
for subj in subjects:
writer.writerows(subj)
Explanation: DDM for All Control Data
Parse all control data to a single CSV, then check if subject data can fit a hierarchical drift diffusion model (DDM) for different stimuli types
Experimental stimuli are as follows:
- Condition 1: Same stimuli (see cat / hear cat) -- SS
- Condition 2: Phonologically congruent (see cat / hear hat) -- CP
- Condition 3: Semantically congruent (see cat / hear dog) -- CS
- Condition 4: Unrelated stimuli (see house / hear cat) -- US
Parse all subject data to single csv
Generate a single csv for all data, using subject_idx as a unique patient identifier.
Also give conditions (stim) a 2-character descriptive instead of a number.
CSV fields are as follows:
- subj_idx: Unique subject identifier
- rt: Reaction time
- stim: Stimulus (SS/CP/CS/US)
- response: Response to stimulus (True(1)/False(0))
End of explanation
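# Quick sanity check of the combined file (added sketch; assumes the CSV written above):
import pandas as pd
print(pd.read_csv('../data/controls.csv').head())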
import hddm
data = hddm.load_csv('../data/all_subjects_clean.csv')
model = hddm.HDDM(data, depends_on={'v': 'stim'})
model.find_starting_values()
model.sample(6000, burn=20)
model.print_stats()
Explanation: First stab at hddm model fit
End of explanation
%matplotlib inline
model.plot_posteriors()
Explanation: Parameters of Interest
Mean of a = 2.39576 (std = 0.152745)
Mean of t = 0.576694 s
Drift rate (v) mean values:
CP = 1.47559
CS = 1.95786
SS = 2.37192
US = 2.28449
Plot posteriors
End of explanation
v_SS, v_CP, v_CS, v_US = model.nodes_db.node[['v(SS)', 'v(CP)', 'v(CS)', 'v(US)']]
hddm.analyze.plot_posterior_nodes([v_SS, v_CP, v_CS, v_US])
Explanation: Plot posterior of drift rate for group means
End of explanation
print('P(SS > US) = ' + str((v_SS.trace() > v_US.trace()).mean()))
print('P(CP > SS) = ' + str((v_CP.trace() > v_SS.trace()).mean()))
print('P(CS > SS) = ' + str((v_CS.trace() > v_SS.trace()).mean()))
print('P(CP > CS) = ' + str((v_CP.trace() > v_CS.trace()).mean()))
Explanation: Calculate the proportion of the posteriors in which the drift rate for one condition is greater than the other
End of explanation
models = []
for i in range(5):
m = hddm.HDDM(data, depends_on={'v': 'stim'})
m.find_starting_values()
m.sample(6000, burn=20)
models.append(m)
hddm.analyze.gelman_rubin(models)
Explanation: Therefore:
- The drift rate for CP is significantly lower than all other conditions
- The drift rate for CS is significantly lower than SS and US, but significantly higher than CP
- The drift rates for SS and US are not significantly different
Check for model convergence
End of explanation
model_bias = hddm.HDDM(data, depends_on={'v': 'stim', 'z': 'stim'}, bias=True)
model_bias.find_starting_values()
model_bias.sample(6000, burn=20)
model_bias.plot_posteriors()
model_bias.print_stats()
Explanation: Models converge!
Explore Bias
End of explanation
<END_TASK> |
279 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Exceptions
An exception is an event, which occurs during the execution of a program, that disrupts the normal flow of the program's instructions.
You've already seen some exceptions
Step1: Question
Step3: Generating Exceptions
Why generate exceptions? (Don't I have enough unintentional errors?) | <ASSISTANT_TASK:>
Python Code:
def divide1(numerator, denominator):
try:
result = numerator/denominator
print("result = %f" % result)
except:
print("You can't divide by 0!!")
divide1(1.0, 2)
divide1(1.0, 0)
divide1("x", 2)
Explanation: Exceptions
An exception is an event, which occurs during the execution of a program, that disrupts the normal flow of the program's instructions.
You've already seen some exceptions:
- syntax errors
- divide by 0
Many programs want to know about exceptions when they occur. For example, if the input to a program is a file path. If the user inputs an invalid or non-existent path, the program generates an exception. It may be desired to provide a response to the user in this case.
It may also be that programs will generate exceptions. This is a way of indicating that there is an error in the inputs provided. In general, this is the preferred style for dealing with invalid inputs or states inside a python function rather than having an error return.
Catching Exceptions
Python provides a way to detect when an exception occurs. This is done by the use of a block of code surrounded by a "try" and "except" statement.
End of explanation
#1/0
def divide2(numerator, denominator):
try:
result = numerator/denominator
print("result = %f" % result)
except (ZeroDivisionError, TypeError):
print("Got an exception")
divide2(1, "x")
# Why doesn't this catch the exception?
# How do we fix it?
divide2("x", 2)
# Exceptions in file handling
def read_safely(path):
error = None
try:
with open(path, "r") as fd:
lines = fd.readlines()
print('\n'.join(lines))
except FileNotFoundError as err:
print("File %s does not exist. Try again." % path)
read_safely("unknown.txt")
# Handle division by 0 by using a small number
SMALL_NUMBER = 1e-3
def divide2(numerator, denominator):
try:
result = numerator/denominator
except ZeroDivisionError:
result = numerator/SMALL_NUMBER
print("result = %f" % result)
divide2(1,0)
Explanation: Question: What do you do when you get an exception?
You can get information about exceptions.
End of explanation
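# Added illustration: the exception object itself carries information you can inspect.
try:
    1/0
except ZeroDivisionError as exc:
    print("type:", type(exc).__name__)
    print("message:", exc)
    print("args:", exc.args)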
import pandas as pd
def func(df):
    """
    :param pd.DataFrame df: should have a column named "hours"
    """
if not "hours" in df.columns:
raise ValueError("DataFrame should have a column named 'hours'.")
df = pd.DataFrame({'hours': range(10) })
func(df)
df = pd.DataFrame({'years': range(10) })
# Generates an exception
#func(df)
Explanation: Generating Exceptions
Why generate exceptions? (Don't I have enough unintentional errors?)
End of explanation
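# Added illustration: callers can catch the ValueError raised by func above.
try:
    func(pd.DataFrame({'years': range(10)}))
except ValueError as err:
    print("Caught:", err)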
<END_TASK> |
280 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described.
<END_TASK>
<USER_TASK:>
Description:
Divide a sorted array into K parts so that the sum of the (max - min) differences of the parts is minimized
Function to find the minimum sum of differences possible for the given array when the array is divided into K subarrays ; Array to store the differences between two adjacent elements ; Iterating through the array ; Appending differences to p ; Sorting p in descending order ; Sum of the first k - 1 values of p ; Computing the result ;
| <ASSISTANT_TASK:>
Python Code:
def calculate_minimum_split(a, k):
    p = []
    n = len(a)
    for i in range(1, n):
        p.append(a[i] - a[i - 1])
    p.sort(reverse=True)
    min_sum = sum(p[:k - 1])
    res = a[n - 1] - a[0] - min_sum
    return res

# Driver code
if __name__ == "__main__":
    arr = [4, 8, 15, 16, 23, 42]
    K = 3
    print(calculate_minimum_split(arr, K))
<END_TASK>
|
281 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
In this exercise, you will leverage what you've learned to tune a machine learning model with cross-validation.
Setup
The questions below will give you feedback on your work. Run the following cell to set up the feedback system.
Step1: You will work with the Housing Prices Competition for Kaggle Learn Users from the previous exercise.
Run the next code cell without changes to load the training and test data in X and X_test. For simplicity, we drop categorical variables.
Step2: Use the next code cell to print the first several rows of the data.
Step3: So far, you've learned how to build pipelines with scikit-learn. For instance, the pipeline below will use SimpleImputer() to replace missing values in the data, before using RandomForestRegressor() to train a random forest model to make predictions. We set the number of trees in the random forest model with the n_estimators parameter, and setting random_state ensures reproducibility.
Step4: You have also learned how to use pipelines in cross-validation. The code below uses the cross_val_score() function to obtain the mean absolute error (MAE), averaged across five different folds. Recall we set the number of folds with the cv parameter.
Step6: Step 1
Step7: Step 2
Step8: Use the next cell to visualize your results from Step 2. Run the code without changes.
Step9: Step 3 | <ASSISTANT_TASK:>
Python Code:
# Set up code checking
import os
if not os.path.exists("../input/train.csv"):
os.symlink("../input/home-data-for-ml-course/train.csv", "../input/train.csv")
os.symlink("../input/home-data-for-ml-course/test.csv", "../input/test.csv")
from learntools.core import binder
binder.bind(globals())
from learntools.ml_intermediate.ex5 import *
print("Setup Complete")
Explanation: In this exercise, you will leverage what you've learned to tune a machine learning model with cross-validation.
Setup
The questions below will give you feedback on your work. Run the following cell to set up the feedback system.
End of explanation
import pandas as pd
from sklearn.model_selection import train_test_split
# Read the data
train_data = pd.read_csv('../input/train.csv', index_col='Id')
test_data = pd.read_csv('../input/test.csv', index_col='Id')
# Remove rows with missing target, separate target from predictors
train_data.dropna(axis=0, subset=['SalePrice'], inplace=True)
y = train_data.SalePrice
train_data.drop(['SalePrice'], axis=1, inplace=True)
# Select numeric columns only
numeric_cols = [cname for cname in train_data.columns if train_data[cname].dtype in ['int64', 'float64']]
X = train_data[numeric_cols].copy()
X_test = test_data[numeric_cols].copy()
Explanation: You will work with the Housing Prices Competition for Kaggle Learn Users from the previous exercise.
Run the next code cell without changes to load the training and test data in X and X_test. For simplicity, we drop categorical variables.
End of explanation
X.head()
Explanation: Use the next code cell to print the first several rows of the data.
End of explanation
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
my_pipeline = Pipeline(steps=[
('preprocessor', SimpleImputer()),
('model', RandomForestRegressor(n_estimators=50, random_state=0))
])
Explanation: So far, you've learned how to build pipelines with scikit-learn. For instance, the pipeline below will use SimpleImputer() to replace missing values in the data, before using RandomForestRegressor() to train a random forest model to make predictions. We set the number of trees in the random forest model with the n_estimators parameter, and setting random_state ensures reproducibility.
End of explanation
from sklearn.model_selection import cross_val_score
# Multiply by -1 since sklearn calculates *negative* MAE
scores = -1 * cross_val_score(my_pipeline, X, y,
cv=5,
scoring='neg_mean_absolute_error')
print("Average MAE score:", scores.mean())
Explanation: You have also learned how to use pipelines in cross-validation. The code below uses the cross_val_score() function to obtain the mean absolute error (MAE), averaged across five different folds. Recall we set the number of folds with the cv parameter.
End of explanation
def get_score(n_estimators):
    """
    Return the average MAE over 3 CV folds of random forest model.

    Keyword argument:
    n_estimators -- the number of trees in the forest
    """
# Replace this body with your own code
pass
# Check your answer
step_1.check()
#%%RM_IF(PROD)%%
def get_score(n_estimators):
my_pipeline = Pipeline(steps=[
('preprocessor', SimpleImputer()),
('model', RandomForestRegressor(n_estimators, random_state=0))
])
scores = -1 * cross_val_score(my_pipeline, X, y,
cv=3,
scoring='neg_mean_absolute_error')
return scores.mean()
step_1.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_1.hint()
#_COMMENT_IF(PROD)_
step_1.solution()
Explanation: Step 1: Write a useful function
In this exercise, you'll use cross-validation to select parameters for a machine learning model.
Begin by writing a function get_score() that reports the average (over three cross-validation folds) MAE of a machine learning pipeline that uses:
- the data in X and y to create folds,
- SimpleImputer() (with all parameters left as default) to replace missing values, and
- RandomForestRegressor() (with random_state=0) to fit a random forest model.
The n_estimators parameter supplied to get_score() is used when setting the number of trees in the random forest model.
End of explanation
results = ____ # Your code here
# Check your answer
step_2.check()
#%%RM_IF(PROD)%%
results = {}
for i in range(1,9):
results[50*i] = get_score(50*i)
step_2.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_2.hint()
#_COMMENT_IF(PROD)_
step_2.solution()
Explanation: Step 2: Test different parameter values
Now, you will use the function that you defined in Step 1 to evaluate the model performance corresponding to eight different values for the number of trees in the random forest: 50, 100, 150, ..., 300, 350, 400.
Store your results in a Python dictionary results, where results[i] is the average MAE returned by get_score(i).
End of explanation
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(list(results.keys()), list(results.values()))
plt.show()
Explanation: Use the next cell to visualize your results from Step 2. Run the code without changes.
End of explanation
n_estimators_best = ____
# Check your answer
step_3.check()
#%%RM_IF(PROD)%%
n_estimators_best = min(results, key=results.get)
step_3.assert_check_passed()
#%%RM_IF(PROD)%%
n_estimators_best = 200
step_3.assert_check_passed()
# Lines below will give you a hint or solution code
#_COMMENT_IF(PROD)_
step_3.hint()
#_COMMENT_IF(PROD)_
step_3.solution()
Explanation: Step 3: Find the best parameter value
Given the results, which value for n_estimators seems best for the random forest model? Use your answer to set the value of n_estimators_best.
End of explanation
<END_TASK> |
282 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Assignment 04
Step1: Read and plot the data from 'TSI2.txt'
Step2: Fourier transform of the data
Step3: Analysis
In the amplitude plot, as the frequency moves away from zero, a maximum is observed at approximately $f\approx0.09$; using the formula $$T=\frac{1}{f}$$
this gives a period of $T\approx 11.11 [years]$, which is consistent with the solar activity cycle measured through sunspots, lasting approximately 11 years.
Improve the resolution in the frequency spectrum
Python Code:
import numpy as np
import matplotlib
import pylab as plt
import scipy.misc as pim
from scipy import stats
% matplotlib inline
font = {'weight' : 'bold',
'size' : 12}
matplotlib.rc('font', **font)
Explanation: Assignment 04: Fourier Analysis - Solar Cycles
Alejando Mesa and Yennifer Angarita
Cycles of the Sun
Procedure:
Read the file TSI2.txt and build a data matrix.
Plot the data. The first column corresponds to years, the second to W/m^2.
Compute the FT of the data. Plot the amplitude.
Identify the most important frequency or frequencies. Do not include the frequencies close to zero.
Determine the main solar cycle(s) in years.
Improve the resolution in the frequency spectrum and check whether new information appears.
Draw conclusions from the results.
End of explanation
x,y = np.loadtxt('TSI2.txt', usecols=[0,1], dtype='float', unpack='True',delimiter=None)
plt.figure(figsize=(16,8))
plt.plot(x,y,marker='.',color='blue',linewidth=1,markersize=6)
plt.title('Radiation cycles of the Sun', fontsize=24)
plt.xlabel('years', fontsize=18)
plt.ylabel('Radiation [w/m²]', fontsize=18)
plt.grid()
Explanation: Lectura y grafica de los datos de 'TSI2.tx'
End of explanation
dy = 1
fty = np.fft.fftshift(np.fft.fft(np.fft.fftshift(y)))*dy
v = np.arange(-1/(2*dy),1/(2*dy),1/(dy*x.size))
ti = 0
tf =0.5
plt.figure(figsize=(16,8))
plt.plot(v,abs(fty)**(1/4),marker='.',color='blue',linewidth=1,markersize=6)
plt.plot((0, 0.5), (2, 2), 'r-')
plt.title('FT Radiation cycles of the Sun', fontsize=24)
plt.xlabel('Freq.', fontsize=18)
plt.ylabel('Amp', fontsize=18)
plt.xlim(ti,tf)
plt.grid()
Explanation: Fourier transform of the data
End of explanation
yinv = y[::-1]    # Reverse the order of the data in the array
yinv = yinv[1:]   # Remove the first element of the new array
yinv = yinv[:-1]  # Remove the last element of the new array
yf = []
k = 0
while k < 50:  # Fill an array with the signal followed by its time-reversed copy, repeated as many times as desired
    for i in range(0, len(y)):
        yf.append(y[i])            # Signal
    # print len(data3)
    for i in range(0, len(yinv)):  # Time-reversed signal
        yf.append(yinv[i])
    k += 1
dy = 1
t1 = np.arange(0,len(yf),dy)
fty2 = np.fft.fftshift(np.fft.fft(np.fft.fftshift(yf)))*dy
v = np.arange(-1/(2*dy),1/(2*dy),1/(dy*t1.size))
ti = 0
tf =0.1
plt.figure(figsize=(16,8))
plt.plot(v,abs(fty2)**(1/4),marker='.',color='blue',linewidth=1,markersize=6)
plt.plot((0, 0.1), (5.5, 5.5), 'r-')
plt.title('More Resolution FT Radiation cycles of the Sun', fontsize=24)
plt.xlabel('Freq.', fontsize=18)
plt.ylabel('Amp', fontsize=18)
plt.xlim(ti,tf)
plt.grid()
Explanation: Analysis
In the amplitude plot, as the frequency moves away from zero, a maximum is observed at approximately $f\approx0.09$; using the formula $$T=\frac{1}{f}$$
this gives a period of $T\approx 11.11 [years]$, which is consistent with the solar activity cycle measured through sunspots, lasting approximately 11 years.
Improve the resolution in the frequency spectrum
End of explanation
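# Added rough numeric check of the dominant cycle (a sketch; the 0.02 cutoff for skipping
# near-zero trend frequencies is an arbitrary choice, and the frequency axis is rebuilt
# locally the same way as for the first plot so it matches fty):
v1 = np.arange(-1/(2*dy), 1/(2*dy), 1/(dy*x.size))
mask = v1 > 0.02
f_peak = v1[mask][np.argmax(np.abs(fty)[mask])]
print('dominant frequency ~ %.3f 1/yr -> period ~ %.1f yr' % (f_peak, 1.0 / f_peak))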
<END_TASK> |
283 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
version 1.0.2
Introduction to Machine Learning with Apache Spark
Predicting Movie Ratings
One of the most common uses of big data is to predict what users want. This allows Google to show you relevant ads, Amazon to recommend relevant products, and Netflix to recommend movies that you might like. This lab will demonstrate how we can use Apache Spark to recommend movies to a user. We will start with some basic techniques, and then use the Spark MLlib library's Alternating Least Squares method to make more sophisticated predictions.
For this lab, we will use a subset dataset of 500,000 ratings we have included for you into your VM (and on Databricks) from the movielens 10M stable benchmark rating dataset. However, the same code you write will work for the full dataset, or their latest dataset of 21 million ratings.
In this lab
Step3: Part 0
Step4: In this lab we will be examining subsets of the tuples we create (e.g., the top rated movies by users). Whenever we examine only a subset of a large dataset, there is the potential that the result will depend on the order we perform operations, such as joins, or how the data is partitioned across the workers. What we want to guarantee is that we always see the same results for a subset, independent of how we manipulate or store the data.
We can do that by sorting before we examine a subset. You might think that the most obvious choice when dealing with an RDD of tuples would be to use the sortByKey() method. However this choice is problematic, as we can still end up with different results if the key is not unique.
Note
Step6: Even though the two lists contain identical tuples, the difference in ordering sometimes yields a different ordering for the sorted RDD (try running the cell repeatedly and see if the results change or the assertion fails). If we only examined the first two elements of the RDD (e.g., using take(2)), then we would observe different answers - that is a really bad outcome as we want identical input data to always yield identical output. A better technique is to sort the RDD by both the key and value, which we can do by combining the key and value into a single string and then sorting on that string. Since the key is an integer and the value is a unicode string, we can use a function to combine them into a single unicode string (e.g., unicode('%.3f' % key) + ' ' + value) before sorting the RDD using sortBy().
Step7: If we just want to look at the first few elements of the RDD in sorted order, we can use the takeOrdered method with the sortFunction we defined.
Step9: Part 1
Step10: (1b) Movies with Highest Average Ratings
Now that we have a way to calculate the average ratings, we will use the getCountsAndAverages() helper function with Spark to determine movies with highest average ratings.
The steps you should perform are
Step11: (1c) Movies with Highest Average Ratings and more than 500 reviews
Now that we have an RDD of the movies with highest averge ratings, we can use Spark to determine the 20 movies with highest average ratings and more than 500 reviews.
Apply a single RDD transformation to movieNameWithAvgRatingsRDD to limit the results to movies with ratings from more than 500 people. We then use the sortFunction() helper function to sort by the average rating to get the movies in order of their rating (highest rating first). You will end up with an RDD of the form
Step12: Using a threshold on the number of reviews is one way to improve the recommendations, but there are many other good ways to improve quality. For example, you could weight ratings by the number of ratings.
Part 2
Step14: After splitting the dataset, your training set has about 293,000 entries and the validation and test sets each have about 97,000 entries (the exact number of entries in each dataset varies slightly due to the random nature of the randomSplit() transformation.
(2b) Root Mean Square Error (RMSE)
In the next part, you will generate a few different models, and will need a way to decide which model is best. We will use the Root Mean Square Error (RMSE) or Root Mean Square Deviation (RMSD) to compute the error of each model. RMSE is a frequently used measure of the differences between values (sample and population values) predicted by a model or an estimator and the values actually observed. The RMSD represents the sample standard deviation of the differences between predicted values and observed values. These individual differences are called residuals when the calculations are performed over the data sample that was used for estimation, and are called prediction errors when computed out-of-sample. The RMSE serves to aggregate the magnitudes of the errors in predictions for various times into a single measure of predictive power. RMSE is a good measure of accuracy, but only to compare forecasting errors of different models for a particular variable and not between variables, as it is scale-dependent.
The RMSE is the square root of the average value of the square of (actual rating - predicted rating) for all users and movies for which we have the actual rating. Versions of Spark MLlib beginning with Spark 1.4 include a RegressionMetrics modiule that can be used to compute the RMSE. However, since we are using Spark 1.3.1, we will write our own function.
Write a function to compute the sum of squared error given predictedRDD and actualRDD RDDs. Both RDDs consist of tuples of the form (UserID, MovieID, Rating)
Given two ratings RDDs, x and y of size n, we define RSME as follows
Step15: (2c) Using ALS.train()
In this part, we will use the MLlib implementation of Alternating Least Squares, ALS.train(). ALS takes a training dataset (RDD) and several parameters that control the model creation process. To determine the best values for the parameters, we will use ALS to train several models, and then we will select the best model and use the parameters from that model in the rest of this lab exercise.
The process we will use for determining the best model is as follows
Step16: (2d) Testing Your Model
So far, we used the trainingRDD and validationRDD datasets to select the best model. Since we used these two datasets to determine what model is best, we cannot use them to test how good the model is - otherwise we would be very vulnerable to overfitting. To decide how good our model is, we need to use the testRDD dataset. We will use the bestRank you determined in part (2c) to create a model for predicting the ratings for the test dataset and then we will compute the RMSE.
The steps you should perform are
Step17: (2e) Comparing Your Model
Looking at the RMSE for the results predicted by the model versus the values in the test set is one way to evalute the quality of our model. Another way to evaluate the model is to evaluate the error from a test set where every rating is the average rating for the training set.
The steps you should perform are
Step18: You now have code to predict how users will rate movies!
Part 3
Step19: The user ID 0 is unassigned, so we will use it for your ratings. We set the variable myUserID to 0 for you. Next, create a new RDD myRatingsRDD with your ratings for at least 10 movie ratings. Each entry should be formatted as (myUserID, movieID, rating) (i.e., each entry should be formatted in the same way as trainingRDD). As in the original dataset, ratings should be between 1 and 5 (inclusive). If you have not seen at least 10 of these movies, you can increase the parameter passed to take() in the above cell until there are 10 movies that you have seen (or you can also guess what your rating would be for movies you have not seen).
Step20: (3b) Add Your Movies to Training Dataset
Now that you have ratings for yourself, you need to add your ratings to the training dataset so that the model you train will incorporate your preferences. Spark's union() transformation combines two RDDs; use union() to create a new training dataset that includes your ratings and the data in the original training dataset.
Step21: (3c) Train a Model with Your Ratings
Now, train a model with your ratings added and the parameters you used in in part (2c)
Step22: (3d) Check RMSE for the New Model with Your Ratings
Compute the RMSE for this new model on the test set.
For the prediction step, we reuse testForPredictingRDD, consisting of (UserID, MovieID) pairs that you extracted from testRDD. The RDD has the form
Step23: (3e) Predict Your Ratings
So far, we have only used the predictAll method to compute the error of the model. Here, use the predictAll to predict what ratings you would give to the movies that you did not already provide ratings for.
The steps you should perform are
Step24: (3f) Predict Your Ratings
We have our predicted ratings. Now we can print out the 25 movies with the highest predicted ratings.
The steps you should perform are | <ASSISTANT_TASK:>
Python Code:
import sys
import os
from test_helper import Test
baseDir = os.path.join('data')
inputPath = os.path.join('cs100', 'lab4', 'small')
ratingsFilename = os.path.join(baseDir, inputPath, 'ratings.dat.gz')
moviesFilename = os.path.join(baseDir, inputPath, 'movies.dat')
Explanation: version 1.0.2
Introduction to Machine Learning with Apache Spark
Predicting Movie Ratings
One of the most common uses of big data is to predict what users want. This allows Google to show you relevant ads, Amazon to recommend relevant products, and Netflix to recommend movies that you might like. This lab will demonstrate how we can use Apache Spark to recommend movies to a user. We will start with some basic techniques, and then use the Spark MLlib library's Alternating Least Squares method to make more sophisticated predictions.
For this lab, we will use a subset dataset of 500,000 ratings we have included for you into your VM (and on Databricks) from the movielens 10M stable benchmark rating dataset. However, the same code you write will work for the full dataset, or their latest dataset of 21 million ratings.
In this lab:
Part 0: Preliminaries
Part 1: Basic Recommendations
Part 2: Collaborative Filtering
Part 3: Predictions for Yourself
As mentioned during the first Learning Spark lab, think carefully before calling collect() on any datasets. When you are using a small dataset, calling collect() and then using Python to get a sense for the data locally (in the driver program) will work fine, but this will not work when you are using a large dataset that doesn't fit in memory on one machine. Solutions that call collect() and do local analysis that could have been done with Spark will likely fail in the autograder and not receive full credit.
Code
This assignment can be completed using basic Python and pySpark Transformations and Actions. Libraries other than math are not necessary. With the exception of the ML functions that we introduce in this assignment, you should be able to complete all parts of this homework using only the Spark functions you have used in prior lab exercises (although you are welcome to use more features of Spark if you like!).
End of explanation
numPartitions = 2
rawRatings = sc.textFile(ratingsFilename).repartition(numPartitions)
rawMovies = sc.textFile(moviesFilename)
def get_ratings_tuple(entry):
    """
    Parse a line in the ratings dataset
    Args:
        entry (str): a line in the ratings dataset in the form of UserID::MovieID::Rating::Timestamp
    Returns:
        tuple: (UserID, MovieID, Rating)
    """
items = entry.split('::')
return int(items[0]), int(items[1]), float(items[2])
def get_movie_tuple(entry):
    """
    Parse a line in the movies dataset
    Args:
        entry (str): a line in the movies dataset in the form of MovieID::Title::Genres
    Returns:
        tuple: (MovieID, Title)
    """
items = entry.split('::')
return int(items[0]), items[1]
ratingsRDD = rawRatings.map(get_ratings_tuple).cache()
moviesRDD = rawMovies.map(get_movie_tuple).cache()
ratingsCount = ratingsRDD.count()
moviesCount = moviesRDD.count()
print 'There are %s ratings and %s movies in the datasets' % (ratingsCount, moviesCount)
print 'Ratings: %s' % ratingsRDD.take(3)
print 'Movies: %s' % moviesRDD.take(3)
assert ratingsCount == 487650
assert moviesCount == 3883
assert moviesRDD.filter(lambda (id, title): title == 'Toy Story (1995)').count() == 1
assert (ratingsRDD.takeOrdered(1, key=lambda (user, movie, rating): movie)
== [(1, 1, 5.0)])
Explanation: Part 0: Preliminaries
We read in each of the files and create an RDD consisting of parsed lines.
Each line in the ratings dataset (ratings.dat.gz) is formatted as:
UserID::MovieID::Rating::Timestamp
Each line in the movies (movies.dat) dataset is formatted as:
MovieID::Title::Genres
The Genres field has the format
Genres1|Genres2|Genres3|...
The format of these files is uniform and simple, so we can use Python split() to parse their lines.
Parsing the two files yields two RDDS
For each line in the ratings dataset, we create a tuple of (UserID, MovieID, Rating). We drop the timestamp because we do not need it for this exercise.
For each line in the movies dataset, we create a tuple of (MovieID, Title). We drop the Genres because we do not need them for this exercise.
End of explanation
tmp1 = [(1, u'alpha'), (2, u'alpha'), (2, u'beta'), (3, u'alpha'), (1, u'epsilon'), (1, u'delta')]
tmp2 = [(1, u'delta'), (2, u'alpha'), (2, u'beta'), (3, u'alpha'), (1, u'epsilon'), (1, u'alpha')]
oneRDD = sc.parallelize(tmp1)
twoRDD = sc.parallelize(tmp2)
oneSorted = oneRDD.sortByKey(True).collect()
twoSorted = twoRDD.sortByKey(True).collect()
print oneSorted
print twoSorted
assert set(oneSorted) == set(twoSorted) # Note that both lists have the same elements
assert twoSorted[0][0] < twoSorted.pop()[0] # Check that it is sorted by the keys
assert oneSorted[0:2] != twoSorted[0:2] # Note that the subset consisting of the first two elements does not match
Explanation: In this lab we will be examining subsets of the tuples we create (e.g., the top rated movies by users). Whenever we examine only a subset of a large dataset, there is the potential that the result will depend on the order we perform operations, such as joins, or how the data is partitioned across the workers. What we want to guarantee is that we always see the same results for a subset, independent of how we manipulate or store the data.
We can do that by sorting before we examine a subset. You might think that the most obvious choice when dealing with an RDD of tuples would be to use the sortByKey() method. However this choice is problematic, as we can still end up with different results if the key is not unique.
Note: It is important to use the unicode type instead of the string type as the titles are in unicode characters.
Consider the following example, and note that while the sets are equal, the printed lists are usually in different order by value, although they may randomly match up from time to time.
You can try running this multiple times. If the last assertion fails, don't worry about it: that was just the luck of the draw. And note that in some environments the results may be more deterministic.
End of explanation
def sortFunction(tuple):
Construct the sort string (does not perform actual sorting)
Args:
tuple: (rating, MovieName)
Returns:
sortString: the value to sort with, 'rating MovieName'
key = unicode('%.3f' % tuple[0])
value = tuple[1]
return (key + ' ' + value)
print oneRDD.sortBy(sortFunction, True).collect()
print twoRDD.sortBy(sortFunction, True).collect()
Explanation: Even though the two lists contain identical tuples, the difference in ordering sometimes yields a different ordering for the sorted RDD (try running the cell repeatedly and see if the results change or the assertion fails). If we only examined the first two elements of the RDD (e.g., using take(2)), then we would observe different answers - that is a really bad outcome as we want identical input data to always yield identical output. A better technique is to sort the RDD by both the key and value, which we can do by combining the key and value into a single string and then sorting on that string. Since the key is an integer and the value is a unicode string, we can use a function to combine them into a single unicode string (e.g., unicode('%.3f' % key) + ' ' + value) before sorting the RDD using sortBy().
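For instance (illustrative only -- the exact output depends on how the data is partitioned), taking just the first two elements after sortByKey() can already disagree between the two RDDs:
print oneRDD.sortByKey(True).take(2)   # may differ from the line below when keys repeat
print twoRDD.sortByKey(True).take(2)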
End of explanation
oneSorted1 = oneRDD.takeOrdered(oneRDD.count(),key=sortFunction)
twoSorted1 = twoRDD.takeOrdered(twoRDD.count(),key=sortFunction)
print 'one is %s' % oneSorted1
print 'two is %s' % twoSorted1
assert oneSorted1 == twoSorted1
Explanation: If we just want to look at the first few elements of the RDD in sorted order, we can use the takeOrdered method with the sortFunction we defined.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
# First, implement a helper function `getCountsAndAverages` using only Python
def getCountsAndAverages(IDandRatingsTuple):
Calculate average rating
Args:
IDandRatingsTuple: a single tuple of (MovieID, (Rating1, Rating2, Rating3, ...))
Returns:
tuple: a tuple of (MovieID, (number of ratings, averageRating))
pid = IDandRatingsTuple[0]
cnt = len(IDandRatingsTuple[1])
avg = float(sum(IDandRatingsTuple[1]))/cnt
return (pid, (cnt, avg))
# TEST Number of Ratings and Average Ratings for a Movie (1a)
Test.assertEquals(getCountsAndAverages((1, (1, 2, 3, 4))), (1, (4, 2.5)),
'incorrect getCountsAndAverages() with integer list')
Test.assertEquals(getCountsAndAverages((100, (10.0, 20.0, 30.0))), (100, (3, 20.0)),
'incorrect getCountsAndAverages() with float list')
Test.assertEquals(getCountsAndAverages((110, xrange(20))), (110, (20, 9.5)),
'incorrect getCountsAndAverages() with xrange')
Explanation: Part 1: Basic Recommendations
One way to recommend movies is to always recommend the movies with the highest average rating. In this part, we will use Spark to find the name, number of ratings, and the average rating of the 20 movies with the highest average rating and more than 500 reviews. We want to filter out movies with 500 or fewer reviews because movies with few reviews may not have broad appeal to everyone.
(1a) Number of Ratings and Average Ratings for a Movie
Using only Python, implement a helper function getCountsAndAverages() that takes a single tuple of (MovieID, (Rating1, Rating2, Rating3, ...)) and returns a tuple of (MovieID, (number of ratings, averageRating)). For example, given the tuple (100, (10.0, 20.0, 30.0)), your function should return (100, (3, 20.0))
End of explanation
# TODO: Replace <FILL IN> with appropriate code
# From ratingsRDD with tuples of (UserID, MovieID, Rating) create an RDD with tuples of
# the (MovieID, iterable of Ratings for that MovieID)
movieIDsWithRatingsRDD = (ratingsRDD
.map(lambda (u_id,m_id,rating) : (m_id,rating))
.groupByKey())
print 'movieIDsWithRatingsRDD: %s\n' % movieIDsWithRatingsRDD.take(3)
# Using `movieIDsWithRatingsRDD`, compute the number of ratings and average rating for each movie to
# yield tuples of the form (MovieID, (number of ratings, average rating))
movieIDsWithAvgRatingsRDD = movieIDsWithRatingsRDD.map(getCountsAndAverages)
print 'movieIDsWithAvgRatingsRDD: %s\n' % movieIDsWithAvgRatingsRDD.take(3)
# To `movieIDsWithAvgRatingsRDD`, apply RDD transformations that use `moviesRDD` to get the movie
# names for `movieIDsWithAvgRatingsRDD`, yielding tuples of the form
# (average rating, movie name, number of ratings)
movieNameWithAvgRatingsRDD = (moviesRDD
.join(movieIDsWithAvgRatingsRDD)
.map(lambda (id, (name, (num, avg))): (avg, name, num)))
print 'movieNameWithAvgRatingsRDD: %s\n' % movieNameWithAvgRatingsRDD.take(3)
# TEST Movies with Highest Average Ratings (1b)
Test.assertEquals(movieIDsWithRatingsRDD.count(), 3615,
'incorrect movieIDsWithRatingsRDD.count() (expected 3615)')
movieIDsWithRatingsTakeOrdered = movieIDsWithRatingsRDD.takeOrdered(3)
Test.assertTrue(movieIDsWithRatingsTakeOrdered[0][0] == 1 and
len(list(movieIDsWithRatingsTakeOrdered[0][1])) == 993,
'incorrect count of ratings for movieIDsWithRatingsTakeOrdered[0] (expected 993)')
Test.assertTrue(movieIDsWithRatingsTakeOrdered[1][0] == 2 and
len(list(movieIDsWithRatingsTakeOrdered[1][1])) == 332,
'incorrect count of ratings for movieIDsWithRatingsTakeOrdered[1] (expected 332)')
Test.assertTrue(movieIDsWithRatingsTakeOrdered[2][0] == 3 and
len(list(movieIDsWithRatingsTakeOrdered[2][1])) == 299,
'incorrect count of ratings for movieIDsWithRatingsTakeOrdered[2] (expected 299)')
Test.assertEquals(movieIDsWithAvgRatingsRDD.count(), 3615,
'incorrect movieIDsWithAvgRatingsRDD.count() (expected 3615)')
Test.assertEquals(movieIDsWithAvgRatingsRDD.takeOrdered(3),
[(1, (993, 4.145015105740181)), (2, (332, 3.174698795180723)),
(3, (299, 3.0468227424749164))],
'incorrect movieIDsWithAvgRatingsRDD.takeOrdered(3)')
Test.assertEquals(movieNameWithAvgRatingsRDD.count(), 3615,
'incorrect movieNameWithAvgRatingsRDD.count() (expected 3615)')
Test.assertEquals(movieNameWithAvgRatingsRDD.takeOrdered(3),
[(1.0, u'Autopsy (Macchie Solari) (1975)', 1), (1.0, u'Better Living (1998)', 1),
(1.0, u'Big Squeeze, The (1996)', 3)],
'incorrect movieNameWithAvgRatingsRDD.takeOrdered(3)')
Explanation: (1b) Movies with Highest Average Ratings
Now that we have a way to calculate the average ratings, we will use the getCountsAndAverages() helper function with Spark to determine movies with highest average ratings.
The steps you should perform are:
Recall that the ratingsRDD contains tuples of the form (UserID, MovieID, Rating). From ratingsRDD create an RDD with tuples of the form (MovieID, Python iterable of Ratings for that MovieID). This transformation will yield an RDD of the form: [(1, <pyspark.resultiterable.ResultIterable object at 0x7f16d50e7c90>), (2, <pyspark.resultiterable.ResultIterable object at 0x7f16d50e79d0>), (3, <pyspark.resultiterable.ResultIterable object at 0x7f16d50e7610>)]. Note that you will only need to perform two Spark transformations to do this step.
Using movieIDsWithRatingsRDD and your getCountsAndAverages() helper function, compute the number of ratings and average rating for each movie to yield tuples of the form (MovieID, (number of ratings, average rating)). This transformation will yield an RDD of the form: [(1, (993, 4.145015105740181)), (2, (332, 3.174698795180723)), (3, (299, 3.0468227424749164))]. You can do this step with one Spark transformation
We want to see movie names, instead of movie IDs. To moviesRDD, apply RDD transformations that use movieIDsWithAvgRatingsRDD to get the movie names for movieIDsWithAvgRatingsRDD, yielding tuples of the form (average rating, movie name, number of ratings). This set of transformations will yield an RDD of the form: [(1.0, u'Autopsy (Macchie Solari) (1975)', 1), (1.0, u'Better Living (1998)', 1), (1.0, u'Big Squeeze, The (1996)', 3)]. You will need to do two Spark transformations to complete this step: first use the moviesRDD with movieIDsWithAvgRatingsRDD to create a new RDD with Movie names matched to Movie IDs, then convert that RDD into the form of (average rating, movie name, number of ratings). These transformations will yield an RDD that looks like: [(3.6818181818181817, u'Happiest Millionaire, The (1967)', 22), (3.0468227424749164, u'Grumpier Old Men (1995)', 299), (2.882978723404255, u'Hocus Pocus (1993)', 94)]
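As an aside (a sketch only, not part of the graded solution): the same counts and averages can also be computed without groupByKey(), for example with aggregateByKey(), which avoids materialising the full list of ratings per movie:
# Alternative sketch: accumulate (count, sum) per movie, then derive the average.
movieIDsWithAvgRatingsAltRDD = (ratingsRDD
                                .map(lambda (userID, movieID, rating): (movieID, rating))
                                .aggregateByKey((0, 0.0),
                                                lambda acc, r: (acc[0] + 1, acc[1] + r),
                                                lambda a, b: (a[0] + b[0], a[1] + b[1]))
                                .mapValues(lambda (n, s): (n, s / n)))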
End of explanation
# TODO: Replace <FILL IN> with appropriate code
# Apply an RDD transformation to `movieNameWithAvgRatingsRDD` to limit the results to movies with
# ratings from more than 500 people. We then use the `sortFunction()` helper function to sort by the
# average rating to get the movies in order of their rating (highest rating first)
movieLimitedAndSortedByRatingRDD = (movieNameWithAvgRatingsRDD
.filter(lambda (avg, name, num) : num > 500)
.sortBy(sortFunction, False))
print 'Movies with highest ratings: %s' % movieLimitedAndSortedByRatingRDD.take(20)
# TEST Movies with Highest Average Ratings and more than 500 Reviews (1c)
Test.assertEquals(movieLimitedAndSortedByRatingRDD.count(), 194,
'incorrect movieLimitedAndSortedByRatingRDD.count()')
Test.assertEquals(movieLimitedAndSortedByRatingRDD.take(20),
[(4.5349264705882355, u'Shawshank Redemption, The (1994)', 1088),
(4.515798462852263, u"Schindler's List (1993)", 1171),
(4.512893982808023, u'Godfather, The (1972)', 1047),
(4.510460251046025, u'Raiders of the Lost Ark (1981)', 1195),
(4.505415162454874, u'Usual Suspects, The (1995)', 831),
(4.457256461232604, u'Rear Window (1954)', 503),
(4.45468509984639, u'Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb (1963)', 651),
(4.43953006219765, u'Star Wars: Episode IV - A New Hope (1977)', 1447),
(4.4, u'Sixth Sense, The (1999)', 1110), (4.394285714285714, u'North by Northwest (1959)', 700),
(4.379506641366224, u'Citizen Kane (1941)', 527), (4.375, u'Casablanca (1942)', 776),
(4.363975155279503, u'Godfather: Part II, The (1974)', 805),
(4.358816276202219, u"One Flew Over the Cuckoo's Nest (1975)", 811),
(4.358173076923077, u'Silence of the Lambs, The (1991)', 1248),
(4.335826477187734, u'Saving Private Ryan (1998)', 1337),
(4.326241134751773, u'Chinatown (1974)', 564),
(4.325383304940375, u'Life Is Beautiful (La Vita \ufffd bella) (1997)', 587),
(4.324110671936759, u'Monty Python and the Holy Grail (1974)', 759),
(4.3096, u'Matrix, The (1999)', 1250)], 'incorrect sortedByRatingRDD.take(20)')
Explanation: (1c) Movies with Highest Average Ratings and more than 500 reviews
Now that we have an RDD of the movies with highest average ratings, we can use Spark to determine the 20 movies with highest average ratings and more than 500 reviews.
Apply a single RDD transformation to movieNameWithAvgRatingsRDD to limit the results to movies with ratings from more than 500 people. We then use the sortFunction() helper function to sort by the average rating to get the movies in order of their rating (highest rating first). You will end up with an RDD of the form: [(4.5349264705882355, u'Shawshank Redemption, The (1994)', 1088), (4.515798462852263, u"Schindler's List (1993)", 1171), (4.512893982808023, u'Godfather, The (1972)', 1047)]
End of explanation
trainingRDD, validationRDD, testRDD = ratingsRDD.randomSplit([6, 2, 2], seed=0L)
print 'Training: %s, validation: %s, test: %s\n' % (trainingRDD.count(),
validationRDD.count(),
testRDD.count())
print trainingRDD.take(3)
print validationRDD.take(3)
print testRDD.take(3)
assert trainingRDD.count() == 292716
assert validationRDD.count() == 96902
assert testRDD.count() == 98032
assert trainingRDD.filter(lambda t: t == (1, 914, 3.0)).count() == 1
assert trainingRDD.filter(lambda t: t == (1, 2355, 5.0)).count() == 1
assert trainingRDD.filter(lambda t: t == (1, 595, 5.0)).count() == 1
assert validationRDD.filter(lambda t: t == (1, 1287, 5.0)).count() == 1
assert validationRDD.filter(lambda t: t == (1, 594, 4.0)).count() == 1
assert validationRDD.filter(lambda t: t == (1, 1270, 5.0)).count() == 1
assert testRDD.filter(lambda t: t == (1, 1193, 5.0)).count() == 1
assert testRDD.filter(lambda t: t == (1, 2398, 4.0)).count() == 1
assert testRDD.filter(lambda t: t == (1, 1035, 5.0)).count() == 1
Explanation: Using a threshold on the number of reviews is one way to improve the recommendations, but there are many other good ways to improve quality. For example, you could weight ratings by the number of ratings.
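One such weighting (a sketch only; the prior weight k and the use of the global mean are illustrative choices, not part of the lab) damps each movie's average toward the global mean so that sparsely rated movies are pulled toward it:
# Sketch of a damped (Bayesian-style) average; k = 25.0 is an arbitrary illustrative prior weight.
k = 25.0
globalMean = ratingsRDD.map(lambda (userID, movieID, rating): rating).mean()
dampedAvgRDD = movieIDsWithAvgRatingsRDD.map(
    lambda (movieID, (n, avg)): (movieID, (n * avg + k * globalMean) / (n + k)))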
Part 2: Collaborative Filtering
In this course, you have learned about many of the basic transformations and actions that Spark allows us to apply to distributed datasets. Spark also exposes some higher level functionality; in particular, Machine Learning using a component of Spark called MLlib. In this part, you will learn how to use MLlib to make personalized movie recommendations using the movie data we have been analyzing.
We are going to use a technique called collaborative filtering. Collaborative filtering is a method of making automatic predictions (filtering) about the interests of a user by collecting preferences or taste information from many users (collaborating). The underlying assumption of the collaborative filtering approach is that if a person A has the same opinion as a person B on an issue, A is more likely to have B's opinion on a different issue x than to have the opinion on x of a person chosen randomly. You can read more about collaborative filtering here.
The image below (from Wikipedia) shows an example of predicting a user's rating using collaborative filtering. At first, people rate different items (like videos, images, games). After that, the system makes predictions about a user's rating for an item the user has not rated yet. These predictions are built upon the existing ratings of other users who have similar ratings to the active user. For instance, in the image below the system has made a prediction that the active user will not like the video.
For movie recommendations, we start with a matrix whose entries are movie ratings by users (shown in red in the diagram below). Each column represents a user (shown in green) and each row represents a particular movie (shown in blue).
Since not all users have rated all movies, we do not know all of the entries in this matrix, which is precisely why we need collaborative filtering. For each user, we have ratings for only a subset of the movies. With collaborative filtering, the idea is to approximate the ratings matrix by factorizing it as the product of two matrices: one that describes properties of each user (shown in green), and one that describes properties of each movie (shown in blue).
We want to select these two matrices such that the error for the users/movie pairs where we know the correct ratings is minimized. The Alternating Least Squares algorithm does this by first randomly filling the users matrix with values and then optimizing the values in the movies matrix such that the error is minimized. Then, it holds the movies matrix constant and optimizes the values in the users matrix. This alternation between which matrix to optimize is the reason for the "alternating" in the name.
This optimization is what's being shown on the right in the image above. Given a fixed set of user factors (i.e., values in the users matrix), we use the known ratings to find the best values for the movie factors using the optimization written at the bottom of the figure. Then we "alternate" and pick the best user factors given fixed movie factors.
For a simple example of what the users and movies matrices might look like, check out the videos from Lecture 8 or the slides from Lecture 8
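As a toy illustration of this alternation (NumPy only, with a small fully observed matrix -- a simplification, since the real algorithm only fits the observed entries), regularised least squares can be alternated by hand:
# Toy ALS sketch (illustration only); R is a made-up dense ratings matrix.
import numpy as np
np.random.seed(0)
R = np.array([[5., 3., 1.], [4., 2., 1.], [1., 1., 5.]])
rank, lam = 2, 0.1
U = np.random.rand(R.shape[0], rank)   # user factors
M = np.random.rand(R.shape[1], rank)   # movie factors
for _ in range(20):
    U = np.linalg.solve(M.T.dot(M) + lam * np.eye(rank), M.T.dot(R.T)).T   # fix M, solve for U
    M = np.linalg.solve(U.T.dot(U) + lam * np.eye(rank), U.T.dot(R)).T     # fix U, solve for M
print np.round(U.dot(M.T), 2)   # should approximate R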
(2a) Creating a Training Set
Before we jump into using machine learning, we need to break up the ratingsRDD dataset into three pieces:
A training set (RDD), which we will use to train models
A validation set (RDD), which we will use to choose the best model
A test set (RDD), which we will use for our experiments
To randomly split the dataset into multiple groups, we can use the PySpark randomSplit() transformation. randomSplit() takes a list of split weights and a seed and returns multiple RDDs.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
import math
def computeError(predictedRDD, actualRDD):
Compute the root mean squared error between predicted and actual
Args:
predictedRDD: predicted ratings for each movie and each user where each entry is in the form
(UserID, MovieID, Rating)
actualRDD: actual ratings where each entry is in the form (UserID, MovieID, Rating)
Returns:
RMSE (float): computed RMSE value
# Transform predictedRDD into the tuples of the form ((UserID, MovieID), Rating)
predictedReformattedRDD = predictedRDD.map(lambda (u_Id, m_Id, rating): ((u_Id, m_Id), rating))
# Transform actualRDD into the tuples of the form ((UserID, MovieID), Rating)
actualReformattedRDD = actualRDD.map(lambda (u_Id, m_Id, rating): ((u_Id, m_Id), rating))
# Compute the squared error for each matching entry (i.e., the same (User ID, Movie ID) in each
# RDD) in the reformatted RDDs using RDD transformtions - do not use collect()
squaredErrorsRDD = (predictedReformattedRDD
.join(actualReformattedRDD)
.map(lambda (k, (v1,v2)): (k, (v1-v2)**2)))
# Compute the total squared error - do not use collect()
totalError = squaredErrorsRDD.reduce(lambda (k1,v1), (k2,v2): (k1,v1+v2))
# Count the number of entries for which you computed the total squared error
numRatings = squaredErrorsRDD.count()
# Using the total squared error and the number of entries, compute the RMSE
return math.sqrt(float(totalError[1])/numRatings)
# sc.parallelize turns a Python list into a Spark RDD.
testPredicted = sc.parallelize([
(1, 1, 5),
(1, 2, 3),
(1, 3, 4),
(2, 1, 3),
(2, 2, 2),
(2, 3, 4)])
testActual = sc.parallelize([
(1, 2, 3),
(1, 3, 5),
(2, 1, 5),
(2, 2, 1)])
testPredicted2 = sc.parallelize([
(2, 2, 5),
(1, 2, 5)])
testError = computeError(testPredicted, testActual)
print 'Error for test dataset (should be 1.22474487139): %s' % testError
testError2 = computeError(testPredicted2, testActual)
print 'Error for test dataset2 (should be 3.16227766017): %s' % testError2
testError3 = computeError(testActual, testActual)
print 'Error for testActual dataset (should be 0.0): %s' % testError3
# TEST Root Mean Square Error (2b)
Test.assertTrue(abs(testError - 1.22474487139) < 0.00000001,
'incorrect testError (expected 1.22474487139)')
Test.assertTrue(abs(testError2 - 3.16227766017) < 0.00000001,
'incorrect testError2 result (expected 3.16227766017)')
Test.assertTrue(abs(testError3 - 0.0) < 0.00000001,
'incorrect testActual result (expected 0.0)')
Explanation: After splitting the dataset, your training set has about 293,000 entries and the validation and test sets each have about 97,000 entries (the exact number of entries in each dataset varies slightly due to the random nature of the randomSplit() transformation).
(2b) Root Mean Square Error (RMSE)
In the next part, you will generate a few different models, and will need a way to decide which model is best. We will use the Root Mean Square Error (RMSE) or Root Mean Square Deviation (RMSD) to compute the error of each model. RMSE is a frequently used measure of the differences between values (sample and population values) predicted by a model or an estimator and the values actually observed. The RMSD represents the sample standard deviation of the differences between predicted values and observed values. These individual differences are called residuals when the calculations are performed over the data sample that was used for estimation, and are called prediction errors when computed out-of-sample. The RMSE serves to aggregate the magnitudes of the errors in predictions for various times into a single measure of predictive power. RMSE is a good measure of accuracy, but only to compare forecasting errors of different models for a particular variable and not between variables, as it is scale-dependent.
The RMSE is the square root of the average value of the square of (actual rating - predicted rating) for all users and movies for which we have the actual rating. Versions of Spark MLlib beginning with Spark 1.4 include a RegressionMetrics module that can be used to compute the RMSE. However, since we are using Spark 1.3.1, we will write our own function.
Write a function to compute the root mean squared error given predictedRDD and actualRDD RDDs. Both RDDs consist of tuples of the form (UserID, MovieID, Rating)
Given two ratings RDDs, x and y of size n, we define RMSE as follows: $ RMSE = \sqrt{\frac{\sum_{i = 1}^{n} (x_i - y_i)^2}{n}}$
To calculate RMSE, the steps you should perform are:
Transform predictedRDD into the tuples of the form ((UserID, MovieID), Rating). For example, tuples like [((1, 1), 5), ((1, 2), 3), ((1, 3), 4), ((2, 1), 3), ((2, 2), 2), ((2, 3), 4)]. You can perform this step with a single Spark transformation.
Transform actualRDD into the tuples of the form ((UserID, MovieID), Rating). For example, tuples like [((1, 2), 3), ((1, 3), 5), ((2, 1), 5), ((2, 2), 1)]. You can perform this step with a single Spark transformation.
Using only RDD transformations (you only need to perform two transformations), compute the squared error for each matching entry (i.e., the same (UserID, MovieID) in each RDD) in the reformatted RDDs - do not use collect() to perform this step. Note that not every (UserID, MovieID) pair will appear in both RDDs - if a pair does not appear in both RDDs, then it does not contribute to the RMSE. You will end up with an RDD with entries of the form $ (x_i - y_i)^2$. You might want to check out Python's math module to see how to compute these values.
Using an RDD action (but not collect()), compute the total squared error: $ SE = \sum_{i = 1}^{n} (x_i - y_i)^2 $
Compute n by using an RDD action (but not collect()), to count the number of pairs for which you computed the total squared error
Using the total squared error and the number of pairs, compute the RMSE. Make sure you compute this value as a float.
Note: Your solution must only use transformations and actions on RDDs. Do not call collect() on either RDD.
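For reference only, a sketch that assumes Spark 1.4 or later (so it is not usable in this lab's 1.3.1 environment; predictedRDD and actualRDD stand in for any two ratings RDDs):
# from pyspark.mllib.evaluation import RegressionMetrics
# predsAndObs = (predictedRDD.map(lambda (u, m, r): ((u, m), r))
#                .join(actualRDD.map(lambda (u, m, r): ((u, m), r)))
#                .values())
# print RegressionMetrics(predsAndObs).rootMeanSquaredError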
End of explanation
# TODO: Replace <FILL IN> with appropriate code
from pyspark.mllib.recommendation import ALS
validationForPredictRDD = validationRDD.map(lambda (userID, movieID, rating) : (userID, movieID))
seed = 5L
iterations = 5
regularizationParameter = 0.1
ranks = [4, 8, 12]
errors = [0, 0, 0]
err = 0
tolerance = 0.03
minError = float('inf')
bestRank = -1
bestIteration = -1
for rank in ranks:
model = ALS.train(trainingRDD, rank, seed=seed, iterations=iterations,
lambda_=regularizationParameter)
predictedRatingsRDD = model.predictAll(validationForPredictRDD)
error = computeError(predictedRatingsRDD, validationRDD)
errors[err] = error
err += 1
print 'For rank %s the RMSE is %s' % (rank, error)
if error < minError:
minError = error
bestRank = rank
print 'The best model was trained with rank %s' % bestRank
# TEST Using ALS.train (2c)
Test.assertEquals(trainingRDD.getNumPartitions(), 2,
'incorrect number of partitions for trainingRDD (expected 2)')
Test.assertEquals(validationForPredictRDD.count(), 96902,
'incorrect size for validationForPredictRDD (expected 96902)')
Test.assertEquals(validationForPredictRDD.filter(lambda t: t == (1, 1907)).count(), 1,
'incorrect content for validationForPredictRDD')
Test.assertTrue(abs(errors[0] - 0.883710109497) < tolerance, 'incorrect errors[0]')
Test.assertTrue(abs(errors[1] - 0.878486305621) < tolerance, 'incorrect errors[1]')
Test.assertTrue(abs(errors[2] - 0.876832795659) < tolerance, 'incorrect errors[2]')
Explanation: (2c) Using ALS.train()
In this part, we will use the MLlib implementation of Alternating Least Squares, ALS.train(). ALS takes a training dataset (RDD) and several parameters that control the model creation process. To determine the best values for the parameters, we will use ALS to train several models, and then we will select the best model and use the parameters from that model in the rest of this lab exercise.
The process we will use for determining the best model is as follows:
Pick a set of model parameters. The most important parameter to ALS.train() is the rank, which is the number of rows in the Users matrix (green in the diagram above) or the number of columns in the Movies matrix (blue in the diagram above). (In general, a lower rank will mean higher error on the training dataset, but a high rank may lead to overfitting.) We will train models with ranks of 4, 8, and 12 using the trainingRDD dataset.
Create a model using ALS.train(trainingRDD, rank, seed=seed, iterations=iterations, lambda_=regularizationParameter) with three parameters: an RDD consisting of tuples of the form (UserID, MovieID, rating) used to train the model, an integer rank (4, 8, or 12), a number of iterations to execute (we will use 5 for the iterations parameter), and a regularization coefficient (we will use 0.1 for the regularizationParameter).
For the prediction step, create an input RDD, validationForPredictRDD, consisting of (UserID, MovieID) pairs that you extract from validationRDD. You will end up with an RDD of the form: [(1, 1287), (1, 594), (1, 1270)]
Using the model and validationForPredictRDD, we can predict rating values by calling model.predictAll() with the validationForPredictRDD dataset, where model is the model we generated with ALS.train(). predictAll accepts an RDD with each entry in the format (userID, movieID) and outputs an RDD with each entry in the format (userID, movieID, rating).
Evaluate the quality of the model by using the computeError() function you wrote in part (2b) to compute the error between the predicted ratings and the actual ratings in validationRDD.
Which rank produces the best model, based on the RMSE with the validationRDD dataset?
Note: It is likely that this operation will take a noticeable amount of time (around a minute in our VM); you can observe its progress on the Spark Web UI. Probably most of the time will be spent running your computeError() function, since, unlike the Spark ALS implementation (and the Spark 1.4 RegressionMetrics module), this does not use a fast linear algebra library and needs to run some Python code for all 100k entries.
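The same loop generalises to a small grid if you also wanted to tune the regularisation (a sketch beyond what the lab asks for, kept commented out to avoid the extra runtime):
# for rank in [4, 8, 12]:
#     for reg in [0.01, 0.1, 1.0]:
#         model = ALS.train(trainingRDD, rank, seed=seed, iterations=iterations, lambda_=reg)
#         error = computeError(model.predictAll(validationForPredictRDD), validationRDD)
#         print 'rank %s, lambda %s: RMSE %s' % (rank, reg, error)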
End of explanation
# TODO: Replace <FILL IN> with appropriate code
myModel = ALS.train(trainingRDD, rank=bestRank, seed=seed, iterations=iterations, lambda_=regularizationParameter)
testForPredictingRDD = testRDD.map(lambda (userID, movieID, rating) : (userID, movieID))
predictedTestRDD = myModel.predictAll(testForPredictingRDD)
testRMSE = computeError(testRDD, predictedTestRDD)
print 'The model had a RMSE on the test set of %s' % testRMSE
# TEST Testing Your Model (2d)
Test.assertTrue(abs(testRMSE - 0.87809838344) < tolerance, 'incorrect testRMSE')
Explanation: (2d) Testing Your Model
So far, we used the trainingRDD and validationRDD datasets to select the best model. Since we used these two datasets to determine what model is best, we cannot use them to test how good the model is - otherwise we would be very vulnerable to overfitting. To decide how good our model is, we need to use the testRDD dataset. We will use the bestRank you determined in part (2c) to create a model for predicting the ratings for the test dataset and then we will compute the RMSE.
The steps you should perform are:
Train a model, using the trainingRDD, bestRank from part (2c), and the parameters you used in in part (2c): seed=seed, iterations=iterations, and lambda_=regularizationParameter - make sure you include all of the parameters.
For the prediction step, create an input RDD, testForPredictingRDD, consisting of (UserID, MovieID) pairs that you extract from testRDD. You will end up with an RDD of the form: [(1, 1287), (1, 594), (1, 1270)]
Use myModel.predictAll() to predict rating values for the test dataset.
For validation, use the testRDD and your computeError function to compute the RMSE between testRDD and the predictedTestRDD from the model.
Evaluate the quality of the model by using the computeError() function you wrote in part (2b) to compute the error between the predicted ratings and the actual ratings in testRDD.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
trainingAvgRating = trainingRDD.map(lambda (userID, movieID, rating): rating).reduce(lambda r1, r2: (r1+r2))/trainingRDD.count()
print 'The average rating for movies in the training set is %s' % trainingAvgRating
testForAvgRDD = testRDD.map(lambda (userID, movieID, rating): (userID, movieID, trainingAvgRating))
testAvgRMSE = computeError(testRDD, testForAvgRDD)
print 'The RMSE on the average set is %s' % testAvgRMSE
# TEST Comparing Your Model (2e)
Test.assertTrue(abs(trainingAvgRating - 3.57409571052) < 0.000001,
'incorrect trainingAvgRating (expected 3.57409571052)')
Test.assertTrue(abs(testAvgRMSE - 1.12036693569) < 0.000001,
'incorrect testAvgRMSE (expected 1.12036693569)')
Explanation: (2e) Comparing Your Model
Looking at the RMSE for the results predicted by the model versus the values in the test set is one way to evaluate the quality of our model. Another way to evaluate the model is to evaluate the error from a test set where every rating is the average rating for the training set.
The steps you should perform are:
Use the trainingRDD to compute the average rating across all movies in that training dataset.
Use the average rating that you just determined and the testRDD to create an RDD with entries of the form (userID, movieID, average rating).
Use your computeError function to compute the RMSE between the testRDD and the testForAvgRDD that you just created.
End of explanation
print 'Most rated movies:'
print '(average rating, movie name, number of reviews)'
for ratingsTuple in movieLimitedAndSortedByRatingRDD.take(50):
print ratingsTuple
Explanation: You now have code to predict how users will rate movies!
Part 3: Predictions for Yourself
The ultimate goal of this lab exercise is to predict what movies to recommend to yourself. In order to do that, you will first need to add ratings for yourself to the ratingsRDD dataset.
(3a) Your Movie Ratings
To help you provide ratings for yourself, we have included the following code to list the names and movie IDs of the 50 highest-rated movies from movieLimitedAndSortedByRatingRDD which we created in Part 1 of the lab.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
myUserID = 0
# Note that the movie IDs are the *last* number on each line. A common error was to use the number of ratings as the movie ID.
myRatedMovies = [
(0, 516, 5),
(0, 553, 5),
(0, 811, 4),
(0, 817, 2),
(0, 539, 3),
(0,848, 5),
(0, 1300, 3),
(0, 7895, 5),
(0, 551, 2),
(0, 750, 1)
# The format of each line is (myUserID, movie ID, your rating)
# For example, to give the movie "Star Wars: Episode IV - A New Hope (1977)" a five rating, you would add the following line:
# (myUserID, 260, 5),
]
myRatingsRDD = sc.parallelize(myRatedMovies)
print 'My movie ratings: %s' % myRatingsRDD.take(10)
Explanation: The user ID 0 is unassigned, so we will use it for your ratings. We set the variable myUserID to 0 for you. Next, create a new RDD myRatingsRDD with your ratings for at least 10 movie ratings. Each entry should be formatted as (myUserID, movieID, rating) (i.e., each entry should be formatted in the same way as trainingRDD). As in the original dataset, ratings should be between 1 and 5 (inclusive). If you have not seen at least 10 of these movies, you can increase the parameter passed to take() in the above cell until there are 10 movies that you have seen (or you can also guess what your rating would be for movies you have not seen).
End of explanation
# TODO: Replace <FILL IN> with appropriate code
trainingWithMyRatingsRDD = trainingRDD.union(myRatingsRDD)
print ('The training dataset now has %s more entries than the original training dataset' %
(trainingWithMyRatingsRDD.count() - trainingRDD.count()))
assert (trainingWithMyRatingsRDD.count() - trainingRDD.count()) == myRatingsRDD.count()
Explanation: (3b) Add Your Movies to Training Dataset
Now that you have ratings for yourself, you need to add your ratings to the training dataset so that the model you train will incorporate your preferences. Spark's union() transformation combines two RDDs; use union() to create a new training dataset that includes your ratings and the data in the original training dataset.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
myRatingsModel = ALS.train(trainingWithMyRatingsRDD, bestRank, seed=seed, iterations=iterations, lambda_=regularizationParameter)
Explanation: (3c) Train a Model with Your Ratings
Now, train a model with your ratings added and the parameters you used in in part (2c): bestRank, seed=seed, iterations=iterations, and lambda_=regularizationParameter - make sure you include all of the parameters.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
predictedTestMyRatingsRDD = myRatingsModel.predictAll(testForPredictingRDD)
testRMSEMyRatings = computeError(testRDD, predictedTestMyRatingsRDD)
print 'The model had a RMSE on the test set of %s' % testRMSEMyRatings
Explanation: (3d) Check RMSE for the New Model with Your Ratings
Compute the RMSE for this new model on the test set.
For the prediction step, we reuse testForPredictingRDD, consisting of (UserID, MovieID) pairs that you extracted from testRDD. The RDD has the form: [(1, 1287), (1, 594), (1, 1270)]
Use myRatingsModel.predictAll() to predict rating values for the testForPredictingRDD test dataset, set this as predictedTestMyRatingsRDD
For validation, use the testRDD and your computeError function to compute the RMSE between testRDD and the predictedTestMyRatingsRDD from the model.
End of explanation
# TODO: Replace <FILL IN> with appropriate code
# Use the Python list myRatedMovies to transform the moviesRDD into an RDD with entries that are pairs of the form (myUserID, Movie ID) and that does not contain any movies that you have rated.
myUnratedMoviesRDD = (moviesRDD
.map(lambda (id, title): (0, id))
.filter(lambda (myUserID, movieID): movieID not in [movieID for (myID, movieID, rating) in myRatedMovies]))
# Use the input RDD, myUnratedMoviesRDD, with myRatingsModel.predictAll() to predict your ratings for the movies
predictedRatingsRDD = myRatingsModel.predictAll(myUnratedMoviesRDD)
Explanation: (3e) Predict Your Ratings
So far, we have only used the predictAll method to compute the error of the model. Here, use the predictAll to predict what ratings you would give to the movies that you did not already provide ratings for.
The steps you should perform are:
Use the Python list myRatedMovies to transform the moviesRDD into an RDD with entries that are pairs of the form (myUserID, Movie ID) and that does not contain any movies that you have rated. This transformation will yield an RDD of the form: [(0, 1), (0, 2), (0, 3), (0, 4)]. Note that you can do this step with one RDD transformation.
For the prediction step, use the input RDD, myUnratedMoviesRDD, with myRatingsModel.predictAll() to predict your ratings for the movies.
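An optional refinement (a sketch, not required for the lab): collect the rated movie IDs into a Python set once, instead of rebuilding a list inside the filter for every movie:
# Sketch: precompute the set of rated movie IDs once, then filter against it.
myRatedMovieIds = set([movieID for (userID, movieID, rating) in myRatedMovies])
myUnratedMoviesAltRDD = (moviesRDD
                         .map(lambda (movieID, title): (myUserID, movieID))
                         .filter(lambda (userID, movieID): movieID not in myRatedMovieIds))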
End of explanation
# TODO: Replace <FILL IN> with appropriate code
# Transform movieIDsWithAvgRatingsRDD from part (1b), which has the form (MovieID, (number of ratings, average rating)), into and RDD of the form (MovieID, number of ratings)
movieCountsRDD = movieIDsWithAvgRatingsRDD.map(lambda (movieID, (num, avg)): (movieID, num))
# Transform predictedRatingsRDD into an RDD with entries that are pairs of the form (Movie ID, Predicted Rating)
predictedRDD = predictedRatingsRDD.map(lambda (myID, movieID, rating): (movieID, rating))
# Use RDD transformations with predictedRDD and movieCountsRDD to yield an RDD with tuples of the form (Movie ID, (Predicted Rating, number of ratings))
predictedWithCountsRDD = (predictedRDD
.join(movieCountsRDD))
# Use RDD transformations with PredictedWithCountsRDD and moviesRDD to yield an RDD with tuples of the form (Predicted Rating, Movie Name, number of ratings), for movies with more than 75 ratings
ratingsWithNamesRDD = (predictedWithCountsRDD
.join(moviesRDD)
.map(lambda (movieID,((PredictedRating, num), name)) : (PredictedRating, name, num))
.filter(lambda (rating, name, num): num>75))
predictedHighestRatedMovies = ratingsWithNamesRDD.takeOrdered(20, key=lambda x: -x[0])
print ('My highest rated movies as predicted (for movies with more than 75 reviews):\n%s' %
'\n'.join(map(str, predictedHighestRatedMovies)))
Explanation: (3f) Predict Your Ratings
We have our predicted ratings. Now we can print out the 20 movies with the highest predicted ratings.
The steps you should perform are:
From Parts (1b) and (1c), we know that we should look at movies with a reasonable number of reviews (e.g., more than 75 reviews). You can experiment with a lower threshold, but fewer ratings for a movie may yield higher prediction errors. Transform movieIDsWithAvgRatingsRDD from Part (1b), which has the form (MovieID, (number of ratings, average rating)), into an RDD of the form (MovieID, number of ratings): [(2, 332), (4, 71), (6, 442)]
We want to see movie names, instead of movie IDs. Transform predictedRatingsRDD into an RDD with entries that are pairs of the form (Movie ID, Predicted Rating): [(3456, -0.5501005376936687), (1080, 1.5885892024487962), (320, -3.7952255522487865)]
Use RDD transformations with predictedRDD and movieCountsRDD to yield an RDD with tuples of the form (Movie ID, (Predicted Rating, number of ratings)): [(2050, (0.6694097486155939, 44)), (10, (5.29762541533513, 418)), (2060, (0.5055259373841172, 97))]
Use RDD transformations with predictedWithCountsRDD and moviesRDD to yield an RDD with tuples of the form (Predicted Rating, Movie Name, number of ratings), for movies with more than 75 ratings. For example: [(7.983121900375243, u'Under Siege (1992)'), (7.9769201864261285, u'Fifth Element, The (1997)')]
End of explanation
<END_TASK> |
284 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cmcc', 'cmcc-cm2-vhr4', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: CMCC
Source ID: CMCC-CM2-VHR4
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:53:50
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmopshere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies on snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of the soil hydrology scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, describe the dependencies of the snow albedo calculation*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile vary with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between the river routing and atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupling with rivers, which quantities are exchanged between the lakes and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated (horizontal, vertical, etc.)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins that do not flow to the ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
<END_TASK> |
285 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<a href="https
Step1: Problem 0 - Accessing the GPU
0a) In order to run Numba functions using the GPU, we have to do a couple of things. First, go to the Runtime menu, click on 'Change Runtime Type', and in the pop-up box, under 'Hardware Accelerator', select 'GPU'. Save the Runtime.
0b) Ideally, that's all we should have to do. But in practice, even though the CUDA libraries are installed, for some reason Colab usually can't find them. So, we'll figure out where they are, and then point Colab to them.
Step2: Paste the location of the libraries into the following code box (if it's different, otherwise you can just run the code)
Step3: And that should do it! Okay, now that we've pointed Numba to the correct libraries, let's get going. To start, we are going to return to the first function we created yesterday - the vector add.
Problem 1 - Vector Addition on GPUs
The simplest way to access the GPU through Numba is to return to our vectorized ufunc from yesterday. As you may recall, Numpy Universal Functions operate on vectors, or arrays. If we specify the cuda target, Numba will automatically write a CUDA kernel for us, and run the function on the GPU! Let's try it out
Step4: Cool, it worked! But what actually just happened? Well, a lot of things. Numba automatically
Step5: 1b) Wow, the GPU is a LOT slower! Why might that be?
Try to think of several reasons.
Answer
Step6: Now, let's give it a bit more work to do
Step7: As we saw in the last problem, copying the data to and from the GPU for every function is not necessarily the most efficient way to use the GPU. To address this, Numba provides the to_device function in the cuda module to allocate and copy arrays to the GPU
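A minimal sketch of that step, assuming the x and y NumPy arrays defined earlier and the cuda module imported in the setup cell:
x_device = cuda.to_device(x)   # allocate GPU memory and copy x into it
y_device = cuda.to_device(y)   # same for y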
Step8: x_device and y_device are now Numba "device arrays" that are in many ways equivalent to Numpy ndarrays except that they live in the GPU's global memory, rather than on the CPU. These device arrays can be passed to Numba cuda functions just the way Numpy arrays can, but without the memory copying overhead.
2b) Try out your function using host vs device arrays. How does the time compare?
Step9: You should see a big performance improvement already, but we are still allocating a device array for the output of the ufunc and copying it back to the host. We can create an output buffer on the GPU with the numba.cuda.device_array() function
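For example, a matching output buffer could be allocated like this (a sketch, reusing the shape and dtype of the device arrays above):
out_device = cuda.device_array(shape=x_device.shape, dtype=x_device.dtype)   # uninitialised GPU buffer, no host copy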
Step10: And then we can use a special out keyword argument to the ufunc to specify the output buffer
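A sketch of that call, assuming the ufunc and the device arrays from the cells above:
add_ufunc(x_device, y_device, out=out_device)   # runs on the GPU with no implicit host<->device copies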
Step11: You should see an even bigger improvement. Once we've finished all of our calculations on the GPU, we can copy the array back from the device using the copy_to_host method
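For instance, again assuming the buffers above:
out_host = out_device.copy_to_host()   # returns a regular NumPy array on the CPU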
Step12: 2c) Remake a new version of the addition ufunc with 32bit floats that targets the cpu. Compare the resulting time to execute with the gpu version you just timed.
Step13: 2d) Now go back and try the two functions (gpu and cpu) with even larger arrays. When does the GPU start to win? Does the execution time on the GPU scale with the number of array elements the same way that the CPU version does?
If your result is like mine, you may have seen a slight timing advantage in the GPU version with a million array elements, but it was close (and that wasn't even counting the data transfer time). That's because we're still not giving the GPU enough work to keep all those cores busy all the time! By the time we hit 10 million, the GPU was clearly winning. The time it took the CPU function continued to increase linearly with array size, but the GPU function time increased much more slowly.
2e) Let's practice some more memory management. Given the following ufuncs
Step14: Convert the following code to use device allocations so that there are only host<->device copies at the beginning and end. Then benchmark the performance change.
Hint
Step15: Problem 3 - Writing Cuda Kernels
While targeting ufuncs with the cuda syntax is the most straightforward way to access the GPU with Numba, it may not be flexible enough for your needs. If you want to write a more detailed GPU program, at some point you are probably going to need to write CUDA kernels.
As discussed in the lecture, the CUDA programming model allows you to abstract the GPU hardware into a software model composed of a grid containing blocks of threads. These threads are the smallest individual unit in the programming model, and they execute together in groups (traditionally called warps, consisting of 32 threads each). Determining the best size for your grid of thread blocks is a complicated problem that often depends on the specific algorithm and hardware you're using, but here are a few good rules of thumb
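As a concrete illustration, here is a sketch of the kind of explicit-index addition kernel the next step refers to; the solutions notebook defines its own version, so treat this only as an example of the pattern:
@cuda.jit
def add_kernel(x, y, out):
    tx = cuda.threadIdx.x          # this thread's index within its block
    bx = cuda.blockIdx.x           # this block's index within the grid
    block_size = cuda.blockDim.x   # number of threads per block
    i = tx + bx * block_size       # absolute position of this thread in the grid
    if i < x.size:                 # guard: the grid may be larger than the array
        out[i] = x[i] + y[i]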
Step16: That's a lot more typing than our ufunc example, and it is much more limited
Step17: The calling syntax is designed to mimic the way CUDA kernels are launched in C, where the number of blocks per grid and threads per block are specified in the square brackets, and the arguments to the function are specified afterwards in parentheses.
Note that, unlike the ufunc, the arguments are passed to the kernel as full NumPy arrays. A thread within the kernel can access any element in the array it wants, regardless of its position in the thread grid. This is why CUDA kernels are significantly more powerful than ufuncs. (But with great power, comes a greater amount of typing...)
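A sketch of such a launch, assuming the add_kernel above and host arrays x, y, and out of equal size (math is imported in the setup cell):
threads_per_block = 128
blocks_per_grid = math.ceil(x.size / threads_per_block)    # enough blocks to cover every element
add_kernel[blocks_per_grid, threads_per_block](x, y, out)  # launch configuration in brackets, arguments in parentheses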
Numba has created some helper functions to cut down on the typing. We can write the previous kernel much more simply as
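For example, a simplified version might use the cuda.grid helper (a sketch, not the notebook's own cell):
@cuda.jit
def add_kernel(x, y, out):
    i = cuda.grid(1)    # shorthand for threadIdx.x + blockIdx.x * blockDim.x
    if i < x.size:
        out[i] = x[i] + y[i]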
Step18: As before, using NumPy arrays forces Numba to allocate GPU memory, copy the arguments to the GPU, run the kernel, then copy the argument arrays back to the host. This not very efficient, so you will often want to allocate device arrays.
3a) Allocate device arrays for x, y, and the output, then try out your new Cuda kernel using the pre-copied device arrays. Compare the time to a version without moving the data first.
Step19: Atomic Operations and avoiding Race Conditions
CUDA, like many general purpose parallel execution frameworks, makes it possible to have race conditions in your code. A race condition in CUDA arises when threads read or write a memory location that might be modified by another independent thread. Generally speaking, you need to worry about
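As an illustration of the safe pattern, here is a small sketch (not part of the original notebook) that uses an atomic add so that many threads can update the same counter without losing increments; counter is assumed to be a one-element integer array passed in from the host:
@cuda.jit
def count_positive(values, counter):
    i = cuda.grid(1)
    if i < values.size and values[i] > 0.0:
        cuda.atomic.add(counter, 0, 1)   # atomically increment counter[0]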
Step20: 3b) Let's practice writing a function that requires an atomic operation - a histogramming kernel. This will take an array of input data, a range and a number of bins, and count how many of the input data elements land in each bin. Below is an example CPU implementation of histogramming
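The CPU implementation referred to is not reproduced here, but it is along these lines (a sketch with assumed argument names):
def cpu_histogram(x, xmin, xmax, histogram_out):
    nbins = histogram_out.shape[0]
    bin_width = (xmax - xmin) / nbins
    for element in x:
        bin_number = np.int32((element - xmin) / bin_width)
        if 0 <= bin_number < nbins:
            histogram_out[bin_number] += 1   # serial increment is safe on the CPU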
Step21: In the space below, create a cuda version of this kernel, then run it to check that you get the same answer as the CPU version.
You can use much of the same syntax that we used in the cuda addition kernel.
Step23: Problem 4 - Return to the Fractals!
Yesterday we defined two functions to create an instance of the Julia set
Step25: In order to turn this into a GPU implementation, we'd like to have a kernel function (create_fractal) call another function (julia) on the device. Numba has a way of specifying functions that will be called from within a kernel by passing the cuda.jit decorator an argument
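A sketch of what such a device function could look like; the constant and the exact iteration rule are assumptions for illustration, since the original julia function from yesterday's notebook is not shown here:
@cuda.jit(device=True)
def julia(x, y, max_iters):
    creal, cimag = -0.8, 0.156            # assumed Julia constant, purely illustrative
    zreal, zimag = x, y
    for i in range(max_iters):
        zreal, zimag = zreal * zreal - zimag * zimag + creal, 2 * zreal * zimag + cimag
        if zreal * zreal + zimag * zimag >= 4.0:
            return i                      # escaped: return the iteration count
    return max_iters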
Step26: Multi-dimensional grids
For some problems, it makes sense to define a two- or three-dimensional grid of thread blocks. That way, when you're indexing a single thread, you can map it to, say, the pixel position in an image. Multi-dimensional grids are created by passing tuples to the kernel function. You can ensure that you launch a big enough grid by calculating the size of each dimension as a function of the array size and number of threads per block
Step27: Then, within a kernel, you can determine the absolute thread position by calling the grid helper function, as in x, y = cuda.grid(2).
4a) Modify the create_fractal function to launch as a kernel on the GPU and call your new device function, julia. Use a 2D grid of thread blocks to launch the kernel and determine which threads are responsible for each pixel in the image. | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import math
from numba import vectorize, cuda
from matplotlib import pyplot as plt
%matplotlib inline
Explanation: <a href="https://colab.research.google.com/github/evaneschneider/parallel-programming/blob/master/gpu_intro_solutions.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Introduction to GPU programming with Numba
This notebook borrows heavily from seibert's 2018 gtc numba tutorial. I highly recommend that tutorial in its entirety if you want more practice with Numba and GPUs.
Yesterday we discussed the principles of parallel programming, and explored the key aspects of using Numba - the @jit decorator, benchmarking, and the @vectorize decorator for Numpy UFuncs. Today we are going to expand on that basis and use Numba to do parallel calculations in python by taking advantage of Numba's GPU interface (and Google's free GPUs - thanks Colaboratory!).
End of explanation
!find / -iname 'libdevice'
!find / -iname 'libnvvm.so'
Explanation: Problem 0 - Accessing the GPU
0a) In order to run Numba functions using the GPU, we have to do a couple of things. First, go to the Runtime menu, click on 'Change Runtime Type', and in the pop-up box, under 'Hardware Accelerator', select 'GPU'. Save the Runtime.
0b) Ideally, that's all we should have to do. But in practice, even though the CUDA libraries are installed, for some reason Colab usually can't find them. So, we'll figure out where they are, and then point Colab to them.
End of explanation
import os
os.environ['NUMBAPRO_LIBDEVICE'] = "/usr/local/cuda-10.0/nvvm/libdevice"
os.environ['NUMBAPRO_NVVM'] = "/usr/local/cuda-10.0/nvvm/lib64/libnvvm.so"
Explanation: Paste the location of the libraries into the following code box (if it's different, otherwise you can just run the code):
End of explanation
@vectorize(['int64(int64, int64)'], target='cuda')
def add_ufunc(x, y):
return x + y
x = np.arange(10)
y = 2 * x
add_ufunc(x, y)
Explanation: And that should do it! Okay, now that we've pointed Numba to the correct libraries, let's get going. To start, we are going to return to the first function we created yesterday - the vector add.
Problem 1 - Vector Addition on GPUs
The simplest way to access the GPU through Numba is to return to our vectorized ufunc from yesterday. As you may recall, Numpy Universal Functions operate on vectors, or arrays. If we specify the cuda target, Numba will automatically write a CUDA kernel for us, and run the function on the GPU! Let's try it out:
End of explanation
%timeit add_ufunc(x, y) # add code here
@vectorize(['int64(int64, int64)'], target='cpu') # add code here
def add_ufunc_cpu(x, y):
return x + y
%timeit add_ufunc_cpu(x, y)
Explanation: Cool, it worked! But what actually just happened? Well, a lot of things. Numba automatically:
+ Compiled a CUDA kernel to execute the ufunc operation in parallel over all the input elements.
+ Allocated GPU memory for the inputs and the output.
+ Copied the input data to the GPU.
+ Executed the CUDA kernel with the correct kernel dimensions given the input sizes.
+ Copied the result back from the GPU to the CPU.
+ Returned the result as a NumPy array on the host.
1a) Determine how fast the CUDA addition function is. Compare that to a function compiled for the CPU. How does the GPU do?
You'll probably want to write two functions with separate names to compare them.
End of explanation
@vectorize(['float32(float32, float32)'], target='cuda') # add code here
def add_ufunc(x, y):
return x + y
Explanation: 1b) Wow, the GPU is a LOT slower! Why might that be?
Try to think of several reasons.
Answer: We have (deliberately) misused the GPU in several ways in this example:
+ Our inputs are too small: the GPU achieves performance through parallelism, operating on thousands of values at once. Our test inputs have only 10 integers. We need a much larger array to even keep the GPU busy.
+ Our calculation is too simple: Sending a calculation to the GPU involves quite a bit of overhead compared to calling a function on the CPU. If our calculation does not involve enough math operations ("arithmetic intensity"), then the GPU will spend most of its time waiting for data to move around.
+ We copy the data to and from the GPU: While including the copy time can be realistic for a single function, often we want to run several GPU operations in sequence. In those cases, it makes sense to send data to the GPU and keep it there until all of our processing is complete.
+ Our data types are larger than necessary: Our example uses int64 when we probably don't need it. Scalar code using data types that are 32 and 64-bit run basically the same speed on the CPU, but 64-bit data types have a significant performance cost on the GPU. Basic arithmetic on 64-bit floats can be anywhere from 2x (Pascal-architecture Tesla) to 24x (Maxwell-architecture GeForce) slower than 32-bit floats. NumPy defaults to 64-bit data types when creating arrays, so it is important to set the dtype attribute or use the ndarray.astype() method to pick 32-bit types when you need them.
Problem 2 - Memory Management
As we saw in the last problem, Numba can automatically handle transferring data to and from the GPU for us. However, that's not always what we want. Sometimes we will want to perform several functions in a row on the GPU without transferring the data back to the CPU in between.
2a) Remake the addition ufunc to operate on and return 32 bit floats, and target the GPU.
End of explanation
n = 100000
x = np.arange(n).astype(np.float32)
y = 2 * x
Explanation: Now, let's give it a bit more work to do:
End of explanation
x_device = cuda.to_device(x)
y_device = cuda.to_device(y)
print(x_device)
print(x_device.shape)
print(x_device.dtype)
Explanation: As we saw in the last problem, copying the data to and from the GPU for every function is not necessarily the most efficient way to use the GPU. To address this, Numba provides the to_device function in the cuda module to allocate and copy arrays to the GPU:
End of explanation
%timeit add_ufunc(x, y) # add code here
%timeit add_ufunc(x_device, y_device) # add code here
Explanation: x_device and y_device are now Numba "device arrays" that are in many ways equivalent to Numpy ndarrays except that they live in the GPU's global memory, rather than on the CPU. These device arrays can be passed to Numba cuda functions just the way Numpy arrays can, but without the memory copying overhead.
2b) Try out your function using host vs device arrays. How does the time compare?
End of explanation
out_device = cuda.device_array(shape=(n,), dtype=np.float32) # does not initialize the contents, much like np.empty()
Explanation: You should see a big performance improvement already, but we are still allocating a device array for the output of the ufunc and copying it back to the host. We can create an output buffer on the GPU with the numba.cuda.device_array() function:
End of explanation
%timeit add_ufunc(x_device, y_device, out=out_device)
Explanation: And then we can use a special out keyword argument to the ufunc to specify the output buffer:
End of explanation
out_host = out_device.copy_to_host()
print(out_host[:10])
Explanation: You should see an even bigger improvement. Once we've finished all of our calculations on the GPU, we can copy the array back from the device using the copy_to_host method:
End of explanation
@vectorize(['float32(float32, float32)'], target='cpu') # add code here
def add_ufunc_cpu(x, y):
return x + y
%timeit add_ufunc_cpu(x, y)
Explanation: 2c) Remake a new version of the addition ufunc with 32bit floats that targets the cpu. Compare the resulting time to execute with the gpu version you just timed.
End of explanation
@vectorize(['float32(float32, float32)'], target='cuda')
def add_ufunc(x, y):
return x + y
@vectorize(['float32(float32, float32, float32)'], target='cuda')
def make_pulses(i, period, amplitude):
return max(math.sin(i / period) - 0.3, 0.0) * amplitude
Explanation: 2d) Now go back and try the two functions (gpu and cpu) with even larger arrays. When does the GPU start to win? Does the execution time on the GPU scale with the number of array elements the same way that the CPU version does?
If your result is like mine, you may have seen a slight timing advantage in the GPU version with a million array elements, but it was close (and that wasn't even counting the data transfer time). That's because we're still not giving the GPU enough work to keep all those cores busy all the time! By the time we hit 10 million, the GPU was clearly winning. The time it took the CPU function continued to increase linearly with array size, but the GPU function time increased much more slowly.
2e) Let's practice some more memory management. Given the following ufuncs:
End of explanation
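Not part of the original notebook: a minimal sketch of how exercise 2d above might be run, reusing the float32 add_ufunc (GPU) and add_ufunc_cpu (CPU) ufuncs defined earlier; the array size is an arbitrary choice.
# hedged sketch for 2d: rerun the CPU/GPU comparison with a much larger array
n_big = 10_000_000
x_big = np.arange(n_big).astype(np.float32)
y_big = 2 * x_big
%timeit add_ufunc_cpu(x_big, y_big)  # CPU time keeps growing roughly linearly with n
%timeit add_ufunc(x_big, y_big)      # GPU time grows much more slowly once the device is kept busy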
n = 100000
noise = (np.random.normal(size=n) * 3).astype(np.float32)
t = np.arange(n, dtype=np.float32)
period = n / 23
pulses = make_pulses(t, period, 100.0)
waveform = add_ufunc(pulses, noise)
plt.plot(waveform)
t_device = cuda.to_device(t)
noise_device = cuda.to_device(noise)
pulses_device = cuda.device_array(shape=(n,), dtype=np.float32)
waveform_device = cuda.device_array(shape=(n,), dtype=np.float32)
%timeit make_pulses(t_device, period, 100.0, out=pulses_device)
%timeit add_ufunc(pulses_device, noise_device, out=waveform_device)
waveform = waveform_device.copy_to_host()
plt.plot(waveform)
Explanation: Convert the following code to use device allocations so that there are only host<->device copies at the beginning and end. Then benchmark the performance change.
Hint: how many arrays will you need on the device?
End of explanation
@cuda.jit
def add_kernel(x, y, out):
tidx = cuda.threadIdx.x # this is the unique thread ID within a 1D block
bidx = cuda.blockIdx.x # Similarly, this is the unique block ID within the 1D grid
block_dimx = cuda.blockDim.x # number of threads per block
grid_dimx = cuda.gridDim.x # number of blocks in the grid
start = tidx + bidx * block_dimx
stride = block_dimx * grid_dimx
# assuming x and y inputs are same length
for i in range(start, x.shape[0], stride):
out[i] = x[i] + y[i]
Explanation: Problem 3 - Writing Cuda Kernels
While targeting ufuncs with the cuda syntax is the most straightforward way to access the GPU with Numba, it may not be flexible enough for your needs. If you want to write a more detailed GPU program, at some point you are probably going to need to write CUDA kernels.
As discussed in the lecture, the CUDA programming model allows you to abstract the GPU hardware into a software model composed of a grid containing blocks of threads. These threads are the smallest individual unit in the programming model, and they execute together in groups (traditionally called warps, consisting of 32 threads each). Determining the best size for your grid of thread blocks is a complicated problem that often depends on the specific algorithm and hardware you're using, but here are a few good rules of thumb:
+ the size of a block should be a multiple of 32 threads, with typical block sizes between 128 and 512 threads per block.
+ the size of the grid should ensure the full GPU is utilized where possible. Launching a grid where the number of blocks is 2x-4x the number of "multiprocessors" on the GPU is a good starting place. Something in the range of 20 - 100 blocks is usually a good starting point.
+ The CUDA kernel launch overhead does depend on the number of blocks, so it may not be best to launch a grid where the number of threads equals the number of input elements when the input size is very big. We'll show a pattern for dealing with large inputs below.
As a first example, let's return to our vector addition function, but this time, we'll target it with the cuda.jit decorator:
End of explanation
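As a small aside (not from the original tutorial), the multiprocessor rule of thumb above can be expressed directly with Numba's device attributes; MULTIPROCESSOR_COUNT is assumed to be exposed on the device object.
# hedged sketch: size the grid from the GPU itself rather than from the data size
device = cuda.get_current_device()
blocks_per_grid = 2 * device.MULTIPROCESSOR_COUNT  # "2x-4x the number of multiprocessors"
threads_per_block = 128                            # a multiple of 32
print(blocks_per_grid, threads_per_block)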
n = 100000
x = np.arange(n).astype(np.float32)
y = 2 * x
out = np.empty_like(x)
threads_per_block = 128
blocks_per_grid = 30
add_kernel[blocks_per_grid, threads_per_block](x, y, out)
print(out[:10])
Explanation: That's a lot more typing than our ufunc example, and it is much more limited: it only works on 1D arrays, it doesn't verify input sizes match, etc. Most of the function is spent figuring out how to turn the block and grid indices and dimensions into unique offsets in the input arrays. The pattern of computing a starting index and a stride is a common way to ensure that your grid size is independent of the input size. The striding will maximize bandwidth by ensuring that threads with consecutive indices are accessing consecutive memory locations as much as possible. Thread indices beyond the length of the input (x.shape[0], since x is a NumPy array) automatically skip over the for loop.
Let's call the function now on some data:
End of explanation
@cuda.jit
def add_kernel(x, y, out):
start = cuda.grid(1) # the 1 argument means a one dimensional thread grid, this returns a single value
stride = cuda.gridsize(1) # ditto
# assuming x and y inputs are same length
for i in range(start, x.shape[0], stride):
out[i] = x[i] + y[i]
Explanation: The calling syntax is designed to mimic the way CUDA kernels are launched in C, where the number of blocks per grid and threads per block are specified in the square brackets, and the arguments to the function are specified afterwards in parentheses.
Note that, unlike the ufunc, the arguments are passed to the kernel as full NumPy arrays. A thread within the kernel can access any element in the array it wants, regardless of its position in the thread grid. This is why CUDA kernels are significantly more powerful than ufuncs. (But with great power, comes a greater amount of typing...)
Numba has created some helper functions to cut down on the typing. We can write the previous kernel much more simply as:
End of explanation
# add code here
x_device = cuda.to_device(x)
y_device = cuda.to_device(y)
out_device = cuda.device_array_like(x)
# add code here
%timeit add_kernel[blocks_per_grid, threads_per_block](x, y, out)
%timeit add_kernel[blocks_per_grid, threads_per_block](x_device, y_device, out_device); out = out_device.copy_to_host()
Explanation: As before, using NumPy arrays forces Numba to allocate GPU memory, copy the arguments to the GPU, run the kernel, then copy the argument arrays back to the host. This is not very efficient, so you will often want to allocate device arrays.
3a) Allocate device arrays for x, y, and the output, then try out your new Cuda kernel using the pre-copied device arrays. Compare the time to a version without moving the data first.
End of explanation
@cuda.jit
def thread_counter_race_condition(global_counter):
global_counter[0] += 1 # This is bad
@cuda.jit
def thread_counter_safe(global_counter):
cuda.atomic.add(global_counter, 0, 1) # Safely add 1 to offset 0 in global_counter array
# This gets the wrong answer
global_counter = cuda.to_device(np.array([0], dtype=np.int32))
thread_counter_race_condition[64, 64](global_counter)
print('Should be %d:' % (64*64), global_counter.copy_to_host())
# This works correctly
global_counter = cuda.to_device(np.array([0], dtype=np.int32))
thread_counter_safe[64, 64](global_counter)
print('Should be %d:' % (64*64), global_counter.copy_to_host())
Explanation: Atomic Operations and avoiding Race Conditions
CUDA, like many general purpose parallel execution frameworks, makes it possible to have race conditions in your code. A race condition in CUDA arises when threads read or write a memory location that might be modified by another independent thread. Generally speaking, you need to worry about:
+ read-after-write hazards: One thread is reading a memory location at the same time another thread might be writing to it.
+ write-after-write hazards: Two threads are writing to the same memory location, and only one write will be visible when the kernel is complete.
A common strategy to avoid both of these hazards is to organize your CUDA kernel algorithm such that each thread has exclusive responsibility for unique subsets of output array elements, and/or to never use the same array for both input and output in a single kernel call. (Iterative algorithms can use a double-buffering strategy if needed, and switch input and output arrays on each iteration.)
However, there are many cases where different threads need to combine results. Consider something very simple, like: "every thread increments a global counter." Implementing this in your kernel requires each thread to:
1. Read the current value of a global counter.
2. Compute counter + 1.
3. Write that value back to global memory.
However, there is no guarantee that another thread has not changed the global counter between steps 1 and 3. To resolve this problem, CUDA provides "atomic operations" which will read, modify and update a memory location in one, indivisible step. Numba supports several of these functions, described here.
As an example, let's make a thread counter kernel:
End of explanation
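As an extra, hedged illustration (not in the original notebook) of the same atomic family, cuda.atomic.max can keep a running maximum safely across threads; the grid size here is a plain ceiling division.
@cuda.jit
def max_kernel(x, result):
    i = cuda.grid(1)
    if i < x.shape[0]:
        cuda.atomic.max(result, 0, x[i])  # atomically keep the largest value seen in result[0]

values = np.random.normal(size=10000).astype(np.float32)
result = np.array([-np.inf], dtype=np.float32)
threads = 128
blocks = (values.size + threads - 1) // threads
max_kernel[blocks, threads](values, result)
print(result[0], values.max())  # the two numbers should agree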
def cpu_histogram(x, xmin, xmax, histogram_out):
'''Increment bin counts in histogram_out, given histogram range [xmin, xmax).'''
# Note that we don't have to pass in nbins explicitly, because the size of histogram_out determines it
nbins = histogram_out.shape[0]
bin_width = (xmax - xmin) / nbins
# This is a very slow way to do this with NumPy, but looks similar to what you will do on the GPU
for element in x:
bin_number = np.int32((element - xmin)/bin_width)
if bin_number >= 0 and bin_number < histogram_out.shape[0]:
# only increment if in range
histogram_out[bin_number] += 1
x = np.random.normal(size=10000, loc=0, scale=1).astype(np.float32)
xmin = np.float32(-4.0)
xmax = np.float32(4.0)
histogram_out = np.zeros(shape=10, dtype=np.int32)
cpu_histogram(x, xmin, xmax, histogram_out)
histogram_out
Explanation: 3b) Let's practice writing a function that requires an atomic operation - a histogramming kernel. This will take an array of input data, a range and a number of bins, and count how many of the input data elements land in each bin. Below is an example CPU implementation of histogramming:
End of explanation
@cuda.jit
def cuda_histogram(x, xmin, xmax, histogram_out):
'''Increment bin counts in histogram_out, given histogram range [xmin, xmax).'''
# add code here
start = cuda.grid(1)
stride = cuda.gridsize(1)
nbins = histogram_out.shape[0]
bin_width = (xmax - xmin) / nbins
for i in range(start, x.shape[0], stride):
bin_number = np.int32((x[i] - xmin)/bin_width)
if bin_number >= 0 and bin_number < histogram_out.shape[0]:
cuda.atomic.add(histogram_out, bin_number, 1)
threads_per_block = 128
blocks_per_grid = 30
histogram_out = np.zeros(shape=10, dtype=np.int32)
cuda_histogram[blocks_per_grid, threads_per_block](x, xmin, xmax, histogram_out)
histogram_out
Explanation: In the space below, create a cuda version of this kernel, then run it to check that you get the same answer as the CPU version.
You can use much of the same syntax that we used in the cuda addition kernel.
End of explanation
def julia(x, y, max_iters):
    '''
    Given the real and imaginary parts of a complex number,
    determine if it is a candidate for membership in the Julia
    set given a fixed number of iterations.
    '''
i = 0
c = complex(-0.8, 0.156)
a = complex(x,y)
for i in range(max_iters):
a = a*a + c
if (a.real*a.real + a.imag*a.imag) > 1000:
return 0
return 255
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
for x in range(width):
real = min_x + x * pixel_size_x
for y in range(height):
imag = min_y + y * pixel_size_y
color = julia(real, imag, iters)
image[y, x] = color
return image
image = np.zeros((500, 750), dtype=np.uint8)
create_fractal(-2.0, 2.0, -1.0, 1.0, image, 200)
plt.imshow(image)
plt.viridis()
plt.show()
Explanation: Problem 4 - Return to the Fractals!
Yesterday we defined two functions to create an instance of the Julia set:
End of explanation
@cuda.jit(device=True)
def julia(x, y, max_iters):
    '''
    Given the real and imaginary parts of a complex number,
    determine if it is a candidate for membership in the Julia
    set given a fixed number of iterations.
    '''
i = 0
c = complex(-0.8, 0.156)
a = complex(x,y)
for i in range(max_iters):
a = a*a + c
if (a.real*a.real + a.imag*a.imag) > 1000:
return 0
return 255
Explanation: In order to turn this into a GPU implementation, we'd like to have a kernel function (create_fractal) call another function (julia) on the device. Numba has a way of specifying functions that will be called from within a kernel by passing the cuda.jit decorator an argument:
End of explanation
threadsperblock = 16
xblocks = (image.shape[1] + (threadsperblock - 1)) // threadsperblock
yblocks = (image.shape[0] + (threadsperblock - 1)) // threadsperblock
Explanation: Multi-dimensional grids
For some problems, it makes sense to define a two- or three-dimensional grid of thread blocks. That way, when you're indexing a single thread, you can map it to, say, the pixel position in an image. Multi-dimensional grids are created by passing tuples to the kernel function. You can ensure that you launch a big enough grid by calculating the size of each dimension as a function of the array size and number of threads per block:
End of explanation
@cuda.jit
def create_fractal(min_x, max_x, min_y, max_y, image, iters):
height = image.shape[0]
width = image.shape[1]
pixel_size_x = (max_x - min_x) / width
pixel_size_y = (max_y - min_y) / height
x, y = cuda.grid(2)
if x < width and y < height:
real = min_x + x * pixel_size_x
imag = min_y + y * pixel_size_y
color = julia(real, imag, iters)
image[y, x] = color
image = np.zeros((500, 750), dtype=np.uint8)
threadsperblock = 16
threads_per_block = (16, 16)
xblocks = (image.shape[1] + (threadsperblock - 1)) // threadsperblock
yblocks = (image.shape[0] + (threadsperblock - 1)) // threadsperblock
blocks_per_grid = (xblocks, yblocks)
%timeit create_fractal[blocks_per_grid, threads_per_block](-2.0, 2.0, -1.0, 1.0, image, 200)
plt.imshow(image)
plt.viridis()
plt.show()
Explanation: Then, within a kernel, you can determine the absolute thread position by calling the grid helper function, as in x, y = cuda.grid(2).
4a) Modify the create_fractal function to launch as a kernel on the GPU and call your new device function, julia. Use a 2D grid of thread blocks to launch the kernel and determine which threads are responsible for each pixel in the image.
End of explanation
<END_TASK> |
286 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<p><font size="6"><b>Jupyter notebook INTRODUCTION </b></font></p>
DS Data manipulation, analysis and visualisation in Python
December, 2017
© 2016, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
Step1: <big><center>To run a cell
Step2: Writing code is what you will do most during this course!
Markdown
Text cells, using Markdown syntax. With the syntax, you can make text bold or italic, amongst many other things...
list
with
items
Link to interesting resources or images
Step3: Help
Step4: <div class="alert alert-success">
<b>EXERCISE</b>
Step5: edit mode to command mode
edit mode means you're editing a cell, i.e. with your cursor inside a cell to type content --> <font color="green">green colored side</font>
command mode means you're NOT editing(!), i.e. NOT with your cursor inside a cell to type content --> <font color="blue">blue colored side</font>
To start editing, click inside a cell or
<img src="../../img/enterbutton.png" alt="Key enter" style="width
Step6: %%timeit
Step7: %lsmagic
Step8: %whos
Step9: Let's get started! | <ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image(url='http://python.org/images/python-logo.gif')
Explanation: <p><font size="6"><b>Jupyter notebook INTRODUCTION </b></font></p>
DS Data manipulation, analysis and visualisation in Python
December, 2017
© 2016, Joris Van den Bossche and Stijn Van Hoey (jorisvandenbossche@gmail.com, stijnvanhoey@gmail.com). Licensed under CC BY 4.0 Creative Commons
End of explanation
# Code cell, then we are using python
print('Hello DS')
DS = 10
print(DS + 5) # Yes, we advise to use Python 3 (!)
Explanation: <big><center>To run a cell: push the start triangle in the menu or type SHIFT + ENTER/RETURN
Notebook cell types
We will work in Jupyter notebooks during this course. A notebook is a collection of cells, that can contain different content:
Code
End of explanation
import os
os.mkdir
my_very_long_variable_name = 3
Explanation: Writing code is what you will do most during this course!
Markdown
Text cells, using Markdown syntax. With the syntax, you can make text bold or italic, amongst many other things...
list
with
items
Link to interesting resources or images:
Blockquotes if you like them
This line is part of the same blockquote.
Mathematical formulas can also be incorporated (LaTeX it is...)
$$\frac{dBZV}{dt}=BZV_{in} - k_1 \cdot BZV$$
$$\frac{dOZ}{dt}=k_2 \cdot (OZ_{sat}-OZ) - k_1 \cdot BZV$$
Or tables:
course | points
--- | ---
Math | 8
Chemistry | 4
or tables with Latex..
Symbol | explanation
--- | ---
$BZV_{(t=0)}$ | initial biochemical oxygen demand (7.33 mg.l-1)
$OZ_{(t=0)}$ | initial dissolved oxygen (8.5 mg.l-1)
$BZV_{in}$ | BZV input (1 mg.l-1.min-1)
$OZ_{sat}$ | dissolved oxygen saturation concentration (11 mg.l-1)
$k_1$ | bacterial degradation rate (0.3 min-1)
$k_2$ | reaeration constant (0.4 min-1)
Code can also be incorporated, but then just to illustrate:
python
BOT = 12
print(BOT)
See also: https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet
HTML
You can also use HTML commands, just check this cell:
<h3> html-adapted title with <h3> </h3>
<p></p>
<b> Bold text </b> or <i> italic </i>
Headings of different sizes: section
subsection
subsubsection
Raw Text
Notebook handling ESSENTIALS
Completion: TAB
The TAB button is essential: it provides you with all possible actions you can do after loading in a library AND it is used for autocompletion:
End of explanation
round(3.2)
os.mkdir
# An alternative is to put a question mark behind the command
os.mkdir?
Explanation: Help: SHIFT + TAB
The SHIFT-TAB combination is ultra essential to get information/help about the current operation
End of explanation
import glob
glob.glob??
Explanation: <div class="alert alert-success">
<b>EXERCISE</b>: What happens if you put two question marks behind the command?
</div>
End of explanation
%psearch os.*dir
Explanation: edit mode to command mode
edit mode means you're editing a cell, i.e. with your cursor inside a cell to type content --> <font color="green">green colored side</font>
command mode means you're NOT editing(!), i.e. NOT with your cursor inside a cell to type content --> <font color="blue">blue colored side</font>
To start editing, click inside a cell or
<img src="../../img/enterbutton.png" alt="Key enter" style="width:150px">
To stop editing,
<img src="../../img/keyescape.png" alt="Key A" style="width:150px">
new cell A-bove
<img src="../../img/keya.png" alt="Key A" style="width:150px">
Create a new cell above with the key A... when in command mode
new cell B-elow
<img src="../../img/keyb.png" alt="Key B" style="width:150px">
Create a new cell below with the key B... when in command mode
CTRL + SHIFT + P
Just do it!
Trouble...
<div class="alert alert-danger">
<b>NOTE</b>: When you're stuck, or things do crash:
<ul>
<li> first try **Kernel** > **Interrupt** -> your cell should stop running
<li> if no success -> **Kernel** > **Restart** -> restart your notebook
</ul>
</div>
Overload?!?
<img src="../../img/toomuch.jpg" alt="Key A" style="width:500px">
<br><br>
<center>No stress, just go to </center>
<br>
<center><p style="font-size: 200%;text-align: center;margin:500">Help > Keyboard shortcuts</p></center>
Stackoverflow is really, really, really nice!
http://stackoverflow.com/questions/tagged/python
Google search is with you!
<big><center>REMEMBER: To run a cell: <strike>push the start triangle in the menu or</strike> type SHIFT + ENTER
some MAGIC...
%psearch
End of explanation
%%timeit
mylist = range(1000)
for i in mylist:
i = i**2
import numpy as np
%%timeit
np.arange(1000)**2
Explanation: %%timeit
End of explanation
%lsmagic
Explanation: %lsmagic
End of explanation
%whos
Explanation: %whos
End of explanation
from IPython.display import FileLink, FileLinks
FileLinks('.', recursive=False)
Explanation: Let's get started!
End of explanation
<END_TASK> |
287 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Memory-efficient embeddings for recommendation systems
Author
Step1: Prepare the data
Download and process data
Step2: Create train and eval data splits
Step3: Define dataset metadata and hyperparameters
Step4: Train and evaluate the model
Step5: Experiment 1
Step6: Implement the baseline model
Step7: Notice that the number of trainable parameters is 623,744
Step8: Experiment 2
Step9: Implement Mixed Dimension embedding as a layer
In the mixed dimension embedding technique, we train embedding vectors with full dimensions
for the frequently queried items, while train embedding vectors with reduced dimensions
for less frequent items, plus a projection weights matrix to bring low dimension embeddings
to the full dimensions.
More precisely, we define blocks of items of similar frequencies. For each block,
a block_vocab_size X block_embedding_dim embedding table and block_embedding_dim X full_embedding_dim
projection weights matrix are created. Note that, if block_embedding_dim equals full_embedding_dim,
the projection weights matrix becomes an identity matrix. Embeddings for a given batch of item
indices are generated via the following steps
Step10: Implement the memory-efficient model
In this experiment, we are going to use the Quotient-Remainder technique to reduce the
size of the user embeddings, and the Mixed Dimension technique to reduce the size of the
movie embeddings.
While in the paper, an alpha-power rule is used to determine
the dimensions of the embedding of each block, we simply set the number of blocks and the
dimensions of embeddings of each block based on the histogram visualization of movies popularity.
Step11: You can see that we can group the movies into three blocks, and assign them 64, 32, and 16
embedding dimensions, respectively. Feel free to experiment with different number of blocks
and dimensions.
Step12: Notice that the number of trainable parameters is 117,968, which is more than 5x less than
the number of parameters in the baseline model. | <ASSISTANT_TASK:>
Python Code:
import os
import math
from zipfile import ZipFile
from urllib.request import urlretrieve
import numpy as np
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers import StringLookup
import matplotlib.pyplot as plt
Explanation: Memory-efficient embeddings for recommendation systems
Author: Khalid Salama<br>
Date created: 2021/02/15<br>
Last modified: 2021/02/15<br>
Description: Using compositional & mixed-dimension embeddings for memory-efficient recommendation models.
Introduction
This example demonstrates two techniques for building memory-efficient recommendation models
by reducing the size of the embedding tables, without sacrificing model effectiveness:
Quotient-remainder trick, by Hao-Jun Michael Shi et al.,
which reduces the number of embedding vectors to store, yet produces unique embedding
vector for each item without explicit definition.
Mixed Dimension embeddings, by Antonio Ginart et al.,
which stores embedding vectors with mixed dimensions, where less popular items have
reduced dimension embeddings.
We use the 1M version of the Movielens dataset.
The dataset includes around 1 million ratings from 6,000 users on 4,000 movies.
Setup
End of explanation
urlretrieve("http://files.grouplens.org/datasets/movielens/ml-1m.zip", "movielens.zip")
ZipFile("movielens.zip", "r").extractall()
ratings_data = pd.read_csv(
"ml-1m/ratings.dat",
sep="::",
names=["user_id", "movie_id", "rating", "unix_timestamp"],
)
ratings_data["movie_id"] = ratings_data["movie_id"].apply(lambda x: f"movie_{x}")
ratings_data["user_id"] = ratings_data["user_id"].apply(lambda x: f"user_{x}")
ratings_data["rating"] = ratings_data["rating"].apply(lambda x: float(x))
del ratings_data["unix_timestamp"]
print(f"Number of users: {len(ratings_data.user_id.unique())}")
print(f"Number of movies: {len(ratings_data.movie_id.unique())}")
print(f"Number of ratings: {len(ratings_data.index)}")
Explanation: Prepare the data
Download and process data
End of explanation
random_selection = np.random.rand(len(ratings_data.index)) <= 0.85
train_data = ratings_data[random_selection]
eval_data = ratings_data[~random_selection]
train_data.to_csv("train_data.csv", index=False, sep="|", header=False)
eval_data.to_csv("eval_data.csv", index=False, sep="|", header=False)
print(f"Train data split: {len(train_data.index)}")
print(f"Eval data split: {len(eval_data.index)}")
print("Train and eval data files are saved.")
Explanation: Create train and eval data splits
End of explanation
csv_header = list(ratings_data.columns)
user_vocabulary = list(ratings_data.user_id.unique())
movie_vocabulary = list(ratings_data.movie_id.unique())
target_feature_name = "rating"
learning_rate = 0.001
batch_size = 128
num_epochs = 3
base_embedding_dim = 64
Explanation: Define dataset metadata and hyperparameters
End of explanation
def get_dataset_from_csv(csv_file_path, batch_size=128, shuffle=True):
return tf.data.experimental.make_csv_dataset(
csv_file_path,
batch_size=batch_size,
column_names=csv_header,
label_name=target_feature_name,
num_epochs=1,
header=False,
field_delim="|",
shuffle=shuffle,
)
def run_experiment(model):
# Compile the model.
model.compile(
optimizer=keras.optimizers.Adam(learning_rate),
loss=tf.keras.losses.MeanSquaredError(),
metrics=[keras.metrics.MeanAbsoluteError(name="mae")],
)
# Read the training data.
train_dataset = get_dataset_from_csv("train_data.csv", batch_size)
# Read the test data.
eval_dataset = get_dataset_from_csv("eval_data.csv", batch_size, shuffle=False)
# Fit the model with the training data.
history = model.fit(train_dataset, epochs=num_epochs, validation_data=eval_dataset,)
return history
Explanation: Train and evaluate the model
End of explanation
def embedding_encoder(vocabulary, embedding_dim, num_oov_indices=0, name=None):
return keras.Sequential(
[
StringLookup(
vocabulary=vocabulary, mask_token=None, num_oov_indices=num_oov_indices
),
layers.Embedding(
input_dim=len(vocabulary) + num_oov_indices, output_dim=embedding_dim
),
],
name=f"{name}_embedding" if name else None,
)
Explanation: Experiment 1: baseline collaborative filtering model
Implement embedding encoder
End of explanation
def create_baseline_model():
# Receive the user as an input.
user_input = layers.Input(name="user_id", shape=(), dtype=tf.string)
# Get user embedding.
user_embedding = embedding_encoder(
vocabulary=user_vocabulary, embedding_dim=base_embedding_dim, name="user"
)(user_input)
# Receive the movie as an input.
movie_input = layers.Input(name="movie_id", shape=(), dtype=tf.string)
# Get embedding.
movie_embedding = embedding_encoder(
vocabulary=movie_vocabulary, embedding_dim=base_embedding_dim, name="movie"
)(movie_input)
# Compute dot product similarity between user and movie embeddings.
logits = layers.Dot(axes=1, name="dot_similarity")(
[user_embedding, movie_embedding]
)
# Convert to rating scale.
prediction = keras.activations.sigmoid(logits) * 5
# Create the model.
model = keras.Model(
inputs=[user_input, movie_input], outputs=prediction, name="baseline_model"
)
return model
baseline_model = create_baseline_model()
baseline_model.summary()
Explanation: Implement the baseline model
End of explanation
history = run_experiment(baseline_model)
plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.title("model loss")
plt.ylabel("loss")
plt.xlabel("epoch")
plt.legend(["train", "eval"], loc="upper left")
plt.show()
Explanation: Notice that the number of trainable parameters is 623,744
End of explanation
class QREmbedding(keras.layers.Layer):
def __init__(self, vocabulary, embedding_dim, num_buckets, name=None):
super(QREmbedding, self).__init__(name=name)
self.num_buckets = num_buckets
self.index_lookup = StringLookup(
vocabulary=vocabulary, mask_token=None, num_oov_indices=0
)
self.q_embeddings = layers.Embedding(num_buckets, embedding_dim,)
self.r_embeddings = layers.Embedding(num_buckets, embedding_dim,)
def call(self, inputs):
# Get the item index.
embedding_index = self.index_lookup(inputs)
# Get the quotient index.
quotient_index = tf.math.floordiv(embedding_index, self.num_buckets)
# Get the reminder index.
remainder_index = tf.math.floormod(embedding_index, self.num_buckets)
# Lookup the quotient_embedding using the quotient_index.
quotient_embedding = self.q_embeddings(quotient_index)
# Lookup the remainder_embedding using the remainder_index.
remainder_embedding = self.r_embeddings(remainder_index)
# Use multiplication as a combiner operation
return quotient_embedding * remainder_embedding
Explanation: Experiment 2: memory-efficient model
Implement Quotient-Remainder embedding as a layer
The Quotient-Remainder technique works as follows. For a set of vocabulary and embedding size
embedding_dim, instead of creating a vocabulary_size X embedding_dim embedding table,
we create two num_buckets X embedding_dim embedding tables, where num_buckets
is much smaller than vocabulary_size.
An embedding for a given item index is generated via the following steps:
Compute the quotient_index as index // num_buckets.
Compute the remainder_index as index % num_buckets.
Lookup quotient_embedding from the first embedding table using quotient_index.
Lookup remainder_embedding from the second embedding table using remainder_index.
Return quotient_embedding * remainder_embedding.
This technique not only reduces the number of embedding vectors needs to be stored and trained,
but also generates a unique embedding vector for each item of size embedding_dim.
Note that q_embedding and r_embedding can be combined using other operations,
like Add and Concatenate.
End of explanation
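As an extra illustration (the toy vocabulary and sizes below are made up, not part of the original example), the layer can be exercised on its own to see the index decomposition at work.
# "user_e" has index 4; with 3 buckets: quotient 4 // 3 = 1, remainder 4 % 3 = 1
toy_qr = QREmbedding(
    vocabulary=["user_a", "user_b", "user_c", "user_d", "user_e", "user_f"],
    embedding_dim=4,
    num_buckets=3,
    name="toy_qr_embedding",
)
print(toy_qr(tf.constant(["user_e"])).shape)  # (1, 4): one full-dimension embedding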
class MDEmbedding(keras.layers.Layer):
def __init__(
self, blocks_vocabulary, blocks_embedding_dims, base_embedding_dim, name=None
):
super(MDEmbedding, self).__init__(name=name)
self.num_blocks = len(blocks_vocabulary)
# Create vocab to block lookup.
keys = []
values = []
for block_idx, block_vocab in enumerate(blocks_vocabulary):
keys.extend(block_vocab)
values.extend([block_idx] * len(block_vocab))
self.vocab_to_block = tf.lookup.StaticHashTable(
tf.lookup.KeyValueTensorInitializer(keys, values), default_value=-1
)
self.block_embedding_encoders = []
self.block_embedding_projectors = []
# Create block embedding encoders and projectors.
for idx in range(self.num_blocks):
vocabulary = blocks_vocabulary[idx]
embedding_dim = blocks_embedding_dims[idx]
block_embedding_encoder = embedding_encoder(
vocabulary, embedding_dim, num_oov_indices=1
)
self.block_embedding_encoders.append(block_embedding_encoder)
if embedding_dim == base_embedding_dim:
self.block_embedding_projectors.append(layers.Lambda(lambda x: x))
else:
self.block_embedding_projectors.append(
layers.Dense(units=base_embedding_dim)
)
def call(self, inputs):
# Get block index for each input item.
block_indicies = self.vocab_to_block.lookup(inputs)
# Initialize output embeddings to zeros.
embeddings = tf.zeros(shape=(tf.shape(inputs)[0], base_embedding_dim))
# Generate embeddings from blocks.
for idx in range(self.num_blocks):
# Lookup embeddings from the current block.
block_embeddings = self.block_embedding_encoders[idx](inputs)
# Project embeddings to base_embedding_dim.
block_embeddings = self.block_embedding_projectors[idx](block_embeddings)
# Create a mask to filter out embeddings of items that do not belong to the current block.
mask = tf.expand_dims(tf.cast(block_indicies == idx, tf.dtypes.float32), 1)
# Set the embeddings for the items not belonging to the current block to zeros.
block_embeddings = block_embeddings * mask
# Add the block embeddings to the final embeddings.
embeddings += block_embeddings
return embeddings
Explanation: Implement Mixed Dimension embedding as a layer
In the mixed dimension embedding technique, we train embedding vectors with full dimensions
for the frequently queried items, while train embedding vectors with reduced dimensions
for less frequent items, plus a projection weights matrix to bring low dimension embeddings
to the full dimensions.
More precisely, we define blocks of items of similar frequencies. For each block,
a block_vocab_size X block_embedding_dim embedding table and block_embedding_dim X full_embedding_dim
projection weights matrix are created. Note that, if block_embedding_dim equals full_embedding_dim,
the projection weights matrix becomes an identity matrix. Embeddings for a given batch of item
indices are generated via the following steps:
For each block, lookup the block_embedding_dim embedding vectors using indices, and
project them to the full_embedding_dim.
If an item index does not belong to a given block, an out-of-vocabulary embedding is returned.
Each block will return a batch_size X full_embedding_dim tensor.
A mask is applied to the embeddings returned from each block in order to convert the
out-of-vocabulary embeddings to vector of zeros. That is, for each item in the batch,
a single non-zero embedding vector is returned from the all block embeddings.
Embeddings retrieved from the blocks are combined using sum to produce the final
batch_size X full_embedding_dim tensor.
End of explanation
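The masking in step 3 above can be hard to picture, so here is a small stand-alone illustration with made-up values (not part of the original example).
block_indicies = tf.constant([0, 1, 0])             # block assignment of three items
block0_embeddings = tf.ones((3, 4))                 # pretend output of block 0's encoder + projector
mask = tf.expand_dims(tf.cast(block_indicies == 0, tf.dtypes.float32), 1)
print(block0_embeddings * mask)                     # the row of the item outside block 0 is zeroed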
movie_frequencies = ratings_data["movie_id"].value_counts()
movie_frequencies.hist(bins=10)
Explanation: Implement the memory-efficient model
In this experiment, we are going to use the Quotient-Remainder technique to reduce the
size of the user embeddings, and the Mixed Dimension technique to reduce the size of the
movie embeddings.
While in the paper, an alpha-power rule is used to determine
the dimensions of the embedding of each block, we simply set the number of blocks and the
dimensions of embeddings of each block based on the histogram visualization of movies popularity.
End of explanation
sorted_movie_vocabulary = list(movie_frequencies.keys())
movie_blocks_vocabulary = [
sorted_movie_vocabulary[:400], # high popularity movies block
sorted_movie_vocabulary[400:1700], # normal popularity movies block
sorted_movie_vocabulary[1700:], # low popularity movies block
]
movie_blocks_embedding_dims = [64, 32, 16]
user_embedding_num_buckets = len(user_vocabulary) // 50
def create_memory_efficient_model():
# Take the user as an input.
user_input = layers.Input(name="user_id", shape=(), dtype=tf.string)
# Get user embedding.
user_embedding = QREmbedding(
vocabulary=user_vocabulary,
embedding_dim=base_embedding_dim,
num_buckets=user_embedding_num_buckets,
name="user_embedding",
)(user_input)
# Take the movie as an input.
movie_input = layers.Input(name="movie_id", shape=(), dtype=tf.string)
# Get embedding.
movie_embedding = MDEmbedding(
blocks_vocabulary=movie_blocks_vocabulary,
blocks_embedding_dims=movie_blocks_embedding_dims,
base_embedding_dim=base_embedding_dim,
name="movie_embedding",
)(movie_input)
# Compute dot product similarity between user and movie embeddings.
logits = layers.Dot(axes=1, name="dot_similarity")(
[user_embedding, movie_embedding]
)
# Convert to rating scale.
prediction = keras.activations.sigmoid(logits) * 5
# Create the model.
model = keras.Model(
inputs=[user_input, movie_input], outputs=prediction, name="baseline_model"
)
return model
memory_efficient_model = create_memory_efficient_model()
memory_efficient_model.summary()
Explanation: You can see that we can group the movies into three blocks, and assign them 64, 32, and 16
embedding dimensions, respectively. Feel free to experiment with different number of blocks
and dimensions.
End of explanation
history = run_experiment(memory_efficient_model)
plt.plot(history.history["loss"])
plt.plot(history.history["val_loss"])
plt.title("model loss")
plt.ylabel("loss")
plt.xlabel("epoch")
plt.legend(["train", "eval"], loc="upper left")
plt.show()
Explanation: Notice that the number of trainable parameters is 117,968, which is more than 5x less than
the number of parameters in the baseline model.
End of explanation
<END_TASK> |
288 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Unit Tests
Overview and Principles
Testing is the process by which you exercise your code to determine if it performs as expected. The code you are testing is referred to as the code under test.
There are two parts to writing tests.
1. invoking the code under test so that it is exercised in a particular way;
1. evaluating the results of executing code under test to determine if it behaved as expected.
The collection of tests performed are referred to as the test cases. The fraction of the code under test that is executed as a result of running the test cases is referred to as test coverage.
For dynamic languages such as Python, it's extremely important to have a high test coverage. In fact, you should try to get 100% coverage. This is because little checking is done when the source code is read by the Python interpreter. For example, the code under test might contain a line that calls a function that is undefined. This would not be detected until that line of code is executed.
Test cases can be of several types. Below are listed some common classifications of test cases.
- Smoke test. This is an invocation of the code under test to see if there is an unexpected exception. It's useful as a starting point, but this doesn't tell you anything about the correctness of the results of a computation.
- One-shot test. In this case, you call the code under test with arguments for which you know the expected result.
- Edge test. The code under test is invoked with arguments that should cause an exception, and you evaluate if the expected exception occurs.
- Pattern test - Based on your knowledge of the calculation (not implementation) of the code under test, you construct a suite of test cases for which the results are known or there are known patterns in these results that are used to evaluate the results returned.
Another principle of testing is to limit what is done in a single test case. Generally, a test case should focus on one use of one function. Sometimes, this is a challenge since the function being tested may call other functions that you are testing. This means that bugs in the called functions may cause failures in the tests of the calling functions. Often, you sort this out by knowing the structure of the code and focusing first on failures in lower level tests. In other situations, you may use more advanced techniques called mocking. A discussion of mocking is beyond the scope of this course.
A best practice is to develop your tests while you are developing your code. Indeed, one school of thought in software engineering, called test-driven development, advocates that you write the tests before you implement the code under test so that the test cases become a kind of specification for what the code under test should do.
Examples of Test Cases
This section presents examples of test cases. The code under test is the calculation of entropy.
Entropy of a set of probabilities
$$
H = -\sum_i p_i \log(p_i)
$$
where $\sum_i p_i = 1$.
Step1: Suppose that all of the probability of a distribution is at one point. An example of this is a coin with two heads. Whenever you flip it, you always get heads. That is, the probability of a head is 1.
What is the entropy of such a distribution? From the calculation above, we see that the entropy should be $log(1)$, which is 0. This means that we have a test case where we know the result!
Step2: Question
Step3: Now let's consider a pattern test. Examining the structure of the calculation of $H$, we consider a situation in which there are $n$ equal probabilities. That is, $p_i = \frac{1}{n}$.
$$
H = -\sum_{i=1}^{n} p_i \log(p_i)
= -\sum_{i=1}^{n} \frac{1}{n} \log(\frac{1}{n})
= n (-\frac{1}{n} \log(\frac{1}{n}) )
= -\log(\frac{1}{n})
$$
For example, entropy([0.5, 0.5]) should be $-log(0.5)$.
Step4: You see that there are many, many cases to test. So far, we've been writing special codes for each test case. We can do better.
Unittest Infrastructure
There are several reasons to use a test infrastructure
Step6: Code for homework or your work should use test files. In this lesson, we'll show how to write test codes in a Jupyter notebook. This is done for pedidogical reasons. It is NOT not something you should do in practice, except as an intermediate exploratory approach.
As expected, the first test passes, but the second test fails.
Exercise
Rewrite the above one-shot test for entropy using the unittest infrastructure.
Step7: Testing For Exceptions
Edge test cases often involve handling exceptions. One approach is to code this directly.
Step8: unittest provides help with testing exceptions.
Step9: Test Files
Although I presented the elements of unittest in a notebook, your tests should be in a file. If the name of the module with the code under test is foo.py, then the name of the test file should be test_foo.py.
The structure of the test file will be very similar to cells above. You will import unittest. You must also import the module with the code under test. Take a look at test_prime.py in this directory to see an example.
Discussion
Question | <ASSISTANT_TASK:>
Python Code:
import numpy as np
# Code Under Test
def entropy(ps):
items = ps * np.log(ps)
if any(np.isnan(items)):
raise ValueError("Cannot compute log of ps!")
return -np.sum(items)
np.isnan([.1, .9])
# Smoke test
entropy([0.5, 0.5])
Explanation: Unit Tests
Overview and Principles
Testing is the process by which you exercise your code to determine if it performs as expected. The code you are testing is referred to as the code under test.
There are two parts to writing tests.
1. invoking the code under test so that it is exercised in a particular way;
1. evaluating the results of executing code under test to determine if it behaved as expected.
The collection of tests performed are referred to as the test cases. The fraction of the code under test that is executed as a result of running the test cases is referred to as test coverage.
For dynamic languages such as Python, it's extremely important to have a high test coverage. In fact, you should try to get 100% coverage. This is because little checking is done when the source code is read by the Python interpreter. For example, the code under test might contain a line that calls a function that is undefined. This would not be detected until that line of code is executed.
Test cases can be of several types. Below are listed some common classifications of test cases.
- Smoke test. This is an invocation of the code under test to see if there is an unexpected exception. It's useful as a starting point, but this doesn't tell you anything about the correctness of the results of a computation.
- One-shot test. In this case, you call the code under test with arguments for which you know the expected result.
- Edge test. The code under test is invoked with arguments that should cause an exception, and you evaluate if the expected exception occurs.
- Pattern test - Based on your knowledge of the calculation (not implementation) of the code under test, you construct a suite of test cases for which the results are known or there are known patterns in these results that are used to evaluate the results returned.
Another principle of testing is to limit what is done in a single test case. Generally, a test case should focus on one use of one function. Sometimes, this is a challenge since the function being tested may call other functions that you are testing. This means that bugs in the called functions may cause failures in the tests of the calling functions. Often, you sort this out by knowing the structure of the code and focusing first on failures in lower level tests. In other situations, you may use more advanced techniques called mocking. A discussion of mocking is beyond the scope of this course.
A best practice is to develop your tests while you are developing your code. Indeed, one school of thought in software engineering, called test-driven development, advocates that you write the tests before you implement the code under test so that the test cases become a kind of specification for what the code under test should do.
Examples of Test Cases
This section presents examples of test cases. The code under test is the calculation of entropy.
Entropy of a set of probabilities
$$
H = -\sum_i p_i \log(p_i)
$$
where $\sum_i p_i = 1$.
End of explanation
# One-shot test. Need to know the correct answer.
SMALL_VALUE = 1e-5
entropy([SMALL_VALUE, 1-SMALL_VALUE])
Explanation: Suppose that all of the probability of a distribution is at one point. An example of this is a coin with two heads. Whenever you flip it, you always get heads. That is, the probability of a head is 1.
What is the entropy of such a distribution? From the calculation above, we see that the entropy should be $log(1)$, which is 0. This means that we have a test case where we know the result!
End of explanation
# Edge test. This is something that should cause an exception.
entropy([-.1, .9])
Explanation: Question: What is an example of another one-shot test? (Hint: You need to know the expected result.)
One edge test of interest is to provide an input that is not a distribution in that probabilities don't sum to 1.
End of explanation
# Pattern test
print (entropy([0.5, 0.5]), entropy([1/3, 1/3, 1/3]), entropy(np.repeat(1/20, 20)))
Explanation: Now let's consider a pattern test. Examining the structure of the calculation of $H$, we consider a situation in which there are $n$ equal probabilities. That is, $p_i = \frac{1}{n}$.
$$
H = -\sum_{i=1}^{n} p_i \log(p_i)
= -\sum_{i=1}^{n} \frac{1}{n} \log(\frac{1}{n})
= n (-\frac{1}{n} \log(\frac{1}{n}) )
= -\log(\frac{1}{n})
$$
For example, entropy([0.5, 0.5]) should be $-log(0.5)$.
End of explanation
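A quick numeric check of this pattern (a sketch, not part of the original lesson) compares the computed entropy against $-\log(1/n)$ for a few values of n.
# pattern check: entropy of n equal probabilities should equal -log(1/n)
for n_vals in (2, 3, 20):
    print(entropy(np.repeat(1 / n_vals, n_vals)), -np.log(1 / n_vals))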
import unittest
# Define a class in which the tests will run
class UnitTests(unittest.TestCase):
# Each method in the class to execute a test
def test_success(self):
self.assertEqual(1, 1)
def test_success1(self):
self.assertTrue(1 == 1)
def test_failure(self):
        self.assertEqual(1, 2)  # deliberately fails
suite = unittest.TestLoader().loadTestsFromTestCase(UnitTests)
_ = unittest.TextTestRunner().run(suite)
# Function that handles test loading
#def test_setup(argument ?):
Explanation: You see that there are many, many cases to test. So far, we've been writing special codes for each test case. We can do better.
Unittest Infrastructure
There are several reasons to use a test infrastructure:
- If you have many test cases (which you should!), the test infrastructure will save you from writing a lot of code.
- The infrastructure provides a uniform way to report test results, and to handle test failures.
- A test infrastructure can tell you about coverage so you know what tests to add.
We'll be using the unittest framework. This is a separate Python package. Using this infrastructure requires the following:
1. import the unittest module
1. define a class that inherits from unittest.TestCase
1. write methods that run the code to be tested and check the outcomes.
The last item has two subparts. First, we must identify which methods in the class inheriting from unittest.TestCase are tests. You indicate that a method is to be run as a test by having the method name begin with "test".
Second, the "test methods" should communicate with the infrastructure the results of evaluating output from the code under test. This is done by using assert statements. For example, self.assertEqual takes two arguments. If these are objects for which == returns True, then the test passes. Otherwise, the test fails.
End of explanation
# Implementating a pattern test. Use functions in the test.
import unittest
# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):
def test_equal_probability(self):
        def test(count):
            """
            Invokes the entropy function for a number of values equal to count
            that have the same probability.
            :param int count:
            """
            raise RuntimeError("Not implemented.")
#
test(2)
test(20)
test(200)
suite = unittest.TestLoader().loadTestsFromTestCase(TestEntropy)
_ = unittest.TextTestRunner().run(suite)
Explanation: Code for homework or your work should use test files. In this lesson, we'll show how to write test code in a Jupyter notebook. This is done for pedagogical reasons. It is not something you should do in practice, except as an intermediate exploratory approach.
As expected, the first test passes, but the second test fails.
Exercise
Rewrite the above one-shot test for entropy using the unittest infrastructure.
End of explanation
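# A possible sketch of the exercise solution (my addition, not part of the
# original lesson). It assumes the entropy() function and the numpy import (np)
# defined earlier in this notebook; assertAlmostEqual avoids floating-point
# equality surprises.
import unittest

class TestEntropyExercises(unittest.TestCase):

    def test_certain_outcome(self):
        # One-shot test: all probability at a single point gives zero entropy.
        self.assertAlmostEqual(entropy([1.0]), 0.0)

    def test_equal_probability(self):
        # Pattern test: n equal probabilities give H = -log(1/n) = log(n).
        for count in (2, 20, 200):
            self.assertAlmostEqual(entropy(np.repeat(1.0 / count, count)),
                                   np.log(count))

suite = unittest.TestLoader().loadTestsFromTestCase(TestEntropyExercises)
_ = unittest.TextTestRunner().run(suite)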
import unittest
# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):
def test_invalid_probability(self):
try:
entropy([0.1, -0.5])
self.assertTrue(False)
except ValueError:
self.assertTrue(True)
suite = unittest.TestLoader().loadTestsFromTestCase(TestEntropy)
_ = unittest.TextTestRunner().run(suite)
Explanation: Testing For Exceptions
Edge test cases often involve handling exceptions. One approach is to code this directly.
End of explanation
import unittest
# Define a class in which the tests will run
class TestEntropy(unittest.TestCase):
def test_invalid_probability(self):
with self.assertRaises(ValueError):
            entropy([0.1, -0.5])
suite = unittest.TestLoader().loadTestsFromTestCase(TestEntropy)
_ = unittest.TextTestRunner().run(suite)
Explanation: unittest provides help with testing exceptions.
End of explanation
import unittest
# Define a class in which the tests will run
class TestGeomean(unittest.TestCase):
def test_oneshot(self):
self.assertEqual(geomean([1,1]), 1)
def test_oneshot2(self):
self.assertEqual(geomean([3, 3, 3]), 3)
Explanation: Test Files
Although I presented the elements of unittest in a notebook, your tests should be in a file. If the name of the module with the code under test is foo.py, then the name of the test file should be test_foo.py.
The structure of the test file will be very similar to cells above. You will import unittest. You must also import the module with the code under test. Take a look at test_prime.py in this directory to see an example.
Discussion
Question: What tests would you write for a plotting function?
Test Driven Development
Start by writing the tests. Then write the code.
We illustrate this by considering a function geomean that takes a list of numbers as input and produces the geometric mean on output.
End of explanation
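# A minimal sketch (my addition, not the original author's code) of a geomean
# implementation that the tests above are meant to drive out. It assumes the
# inputs are positive numbers and that numpy is imported as np.
def geomean(values):
    # Compute the geometric mean in log space for numerical stability.
    return np.exp(np.mean(np.log(values)))

# Caveat: with floating-point arithmetic an exact check such as
# assertEqual(geomean([3, 3, 3]), 3) can fail by a tiny amount, so
# assertAlmostEqual is the safer assertion for this function.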
<END_TASK> |
289 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
In this document I would like to go through some functional idioms in Python involving the use of iterators and highlight some parallels with the equivalent Haskell implementations.
Step1: The ubiquitous zip
I must admit I severely underestimated the importance of zip when I first started learning functional style of programming. It comes up in a wide range of patterns, however one of the more elegant applications in Python is given in the following example
Step2: In fact, it is mentioned in the documentation in the section on built-in functions. I guess, it is one of those bits that are easily skipped on first reading. The reason this example works the way it does, namely, partitioning the iterable into chunks of the given length, is that zip evaluates its arguments strictly left-to-right. Hence, all the copies of the iterator get advanced together and dereferenced one by one.
Now we take a deeper dive straight away. Haskell introduces the zipWith function which takes an additional argument, a function which is applied to the corresponding elements of the zipped iterables. Thus the output is in general no longer a list of tuples of the original elements.
A possible implementation in Python is given below. Note that we return a generator so that the result can be evaluated lazily as needed.
Step3: Another, more generic, implementation is possible using the starmap function.
Step4: Using zip_with we can express operations on sequences in a more functional way
Step5: Before we proceed, let's introduce another bit of syntactical convenience
Step6: @curry will be explained later, for now just think of it as a fancy way of applying the idea behind functools.partial to forge partial application in Python.
I discovered some of the less obvious applications of zipWith while working on the exercises from the canonical CIS194 course on Haskell. For example, in order to multiply every other element of a sequence by, say 2, we can generate a "masking" sequenc of 1s and 2s and zip it with the original sequence via multiplication
Step7: Lazy evaluation allows us to work with infinite lists without much trouble
Step8: In another amazing example which I first came across in a SO answer zip is used to rotate a (potentially empty or infinite) sequence. In order to emphasize the Haskell influence, let us first write it without all of the convenience functions defined above
Step9: Now we rewrite the same with more Haskell flavour
Step10: Contextual iterators and monads
Using a custom iterator for a class we can implement some fluent syntax for situations when operations on iterators need to be chained (inspiration taken from lodash.js).
Step11: We may notice that the resulting flow has certain traits of a composable contextual computation - something that monads were introduced to deal with. Indeed, we start by putting the original data in a "streaming context". Each public method then extracts the actual data, applies some transformation and wraps it back into the context before passing on. This document is not going to descend into yet another tutorial on monads. Instead we will use the PyMonad in a somewhat crude manner to demonstrate how the same goal can be achieved in a more functional way.
First we define a function that will wrap raw data in a context. If we were implementing our own monad, this would be the unit
Step12: Now express the operations performed by the methods as monadic functions.
Step13: At the end of the chain we will also need a way out of the context to continue working with the data. We do not intend to be 100% pure after all.
Step14: Now we can also partially apply our function for better readability and potential reuse.
Step15: Finally, execute the flow.
Step16: We can take a different approach and work with ordinary functions instead
Step17: We can partially apply them, as before, and then use the overloaded * operator to denote curried function composition.
Step18: Finally the transformation is applied to the "contextualized" data using the applicative style instead of the bind operator.
Step19: Splitting and unpacking
We finish with a brief example showcasing the use of the tee function to create independent iterators and * (splat) operator to unpack sequences with zip.
We shall consider the task of plotting a histogram for a somewhat unusual data model. Say we are given a sequence of results of some measurements (scores) for a number of observables identified by labels. Suppose also that there may be more than one occurence (count) of results with exactly the same score. We are going to partition scores into a given number of bins and aggregate counts within this bins, for each label separately.
Before we proceed we shall define a convenience function inspired by the eponimous example from Clojure (also this answer provided a nice idiom for an infinite generator)
Step20: Now let's produce some mock data. Although using effectful functions in comprehensions is generally frowned upon in Python, here it allows us to achieve a clean separation of the generating (repeatedly) and consuming (take) parts of the logic.
Step21: Now we define a function formatting labels for the plot. This is a typical example of using tee to traverse a sequence in a pairwise manner.
Step22: Here we aggregate data in the bins
Step23: Finally we put together a dictionary containing all the data for the plot | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from random import random, randint, choice
from itertools import cycle, ifilter, imap, islice, izip, starmap, tee
from collections import defaultdict
from operator import add, mul
from pymonad.Maybe import *
from pymonad.Reader import *
Explanation: In this document I would like to go through some functional idioms in Python involving the use of iterators and highlight some parallels with the equivalent Haskell implementations.
End of explanation
a = range(0, 9)
zip(*[iter(a)]*4)
Explanation: The ubiquitous zip
I must admit I severely underestimated the importance of zip when I first started learning functional style of programming. It comes up in a wide range of patterns, however one of the more elegant applications in Python is given in the following example:
End of explanation
def zip_with_(f, a, b):
return (f(*z) for z in izip(a, b))
Explanation: In fact, it is mentioned in the documentation in the section on built-in functions. I guess, it is one of those bits that are easily skipped on first reading. The reason this example works the way it does, namely, partitioning the iterable into chunks of the given length, is that zip evaluates its arguments strictly left-to-right. Hence, all the copies of the iterator get advanced together and dereferenced one by one.
Now we take a deeper dive straight away. Haskell introduces the zipWith function which takes an additional argument, a function which is applied to the corresponding elements of the zipped iterables. Thus the output is in general no longer a list of tuples of the original elements.
A possible implementation in Python is given below. Note that we return a generator so that the result can be evaluated lazily as needed.
End of explanation
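# A hedged aside (my addition): the chunking trick zip(*[iter(a)]*4) relies on
# the list holding the *same* iterator object four times, so every tuple that
# zip builds advances that single iterator by four elements.
it = iter(range(9))
zip(it, it, it, it) # equivalent to zip(*[it]*4) -> [(0, 1, 2, 3), (4, 5, 6, 7)]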
def zip_with(f, *a):
return starmap(f, izip(*a))
Explanation: Another, more generic, implementation is possible using the starmap function.
End of explanation
b = range(10, 20)
list(zip_with(add, a, reversed(b)))
Explanation: Using zip_with we can express operations on sequences in a more functional way:
End of explanation
@curry
def take(n, a):
return islice(a, None, n)
@curry
def drop(n, a):
return islice(a, n, None)
Explanation: Before we proceed, let's introduce another bit of syntactical convenience:
End of explanation
def double_every_other(a):
return zip_with(mul, a, cycle([1,2]))
Explanation: @curry will be explained later, for now just think of it as a fancy way of applying the idea behind functools.partial to forge partial application in Python.
I discovered some of the less obvious applications of zipWith while working on the exercises from the canonical CIS194 course on Haskell. For example, in order to multiply every other element of a sequence by, say, 2, we can generate a "masking" sequence of 1s and 2s and zip it with the original sequence via multiplication
End of explanation
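# A rough illustration (my addition, not the author's): functools.partial gives
# much the same effect as @curry here by pre-binding the first argument.
from functools import partial

def take_plain(n, a):
    return islice(a, None, n)

take5 = partial(take_plain, 5) # roughly what take(5) produces via @curry
list(take5(cycle([1, 2, 3]))) # -> [1, 2, 3, 1, 2]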
x = cycle([1, 2, 3])
take15 = take(15)
list(take15(x))
y = double_every_other(x)
list(take15(y))
Explanation: Lazy evaluation allows us to work with infinite lists without much trouble:
End of explanation
def rotate(n, a):
return (x for x, _ in izip(islice(cycle(a), n , None), a))
h = "_hello_lazy_world"
r = rotate(6, h)
''.join(list(r))
Explanation: In another amazing example which I first came across in a SO answer zip is used to rotate a (potentially empty or infinite) sequence. In order to emphasize the Haskell influence, let us first write it without all of the convenience functions defined above:
End of explanation
def const(x, y):
return x
def rotate1(n, a):
return zip_with(const, drop(n, cycle(a)), a)
r1 = rotate1(6, h)
''.join(list(r1))
Explanation: Now we rewrite the same with more Haskell flavour:
End of explanation
class Stream(object):
def __init__(self, data):
self.data = iter(data)
def __iter__(self):
class iterator(object):
def __init__(self, it):
self.it = it.data
def next(self):
return next(self.it)
return iterator(self)
def filter(self, pred):
return Stream(ifilter(pred, self))
def map(self, fn):
return Stream(imap(fn, self))
s = Stream(range(20)). \
filter(lambda x: x % 7 == 0). \
map(lambda x: x * x)
list(s)
Explanation: Contextual iterators and monads
Using a custom iterator for a class we can implement some fluent syntax for situations when operations on iterators need to be chained (inspiration taken from lodash.js).
End of explanation
def make_stream(data):
try:
return Just(iter(data))
except TypeError:
return Nothing
Explanation: We may notice that the resulting flow has certain traits of a composable contextual computation - something that monads were introduced to deal with. Indeed, we start by putting the original data in a "streaming context". Each public method then extracts the actual data, applies some transformation and wraps it back into the context before passing on. This document is not going to descend into yet another tutorial on monads. Instead we will use the PyMonad in a somewhat crude manner to demonstrate how the same goal can be achieved in a more functional way.
First we define a function that will wrap raw data in a context. If we were implementing our own monad, this would be the unit
End of explanation
@curry
def filter_stream(pred, stream):
return Just(ifilter(pred, stream))
@curry
def map_stream(fn, stream):
return Just(imap(fn, stream))
Explanation: Now express the operations performed by the methods as monadic functions.
End of explanation
def eval_stream(stream):
if isinstance(stream, Just):
return stream.value
else:
return ()
Explanation: At the end of the chain we will also need a way out of the context to continue working with the data. We do not intend to be 100% pure after all.
End of explanation
my_filter = filter_stream(lambda x: x % 7 == 0)
my_map = map_stream(lambda x: x * x)
Explanation: Now we can also partially apply our function for better readability and potential reuse.
End of explanation
st = make_stream(range(20))
st1 = st >> my_filter >> my_map
list(eval_stream(st1))
Explanation: Finally, execute the flow.
End of explanation
@curry
def filter_stream1(pred, stream):
return ifilter(pred, stream)
@curry
def map_stream1(fn, stream):
return imap(fn, stream)
Explanation: We can take a different approach and work with ordinary functions instead:
End of explanation
my_filter1 = filter_stream1(lambda x: x % 7 == 0)
my_map1 = map_stream1(lambda x: x * x)
my_transform = my_filter1 * my_map1
Explanation: We can partially apply them, as before, and then use the overloaded * operator to denote curried function composition.
End of explanation
st2 = make_stream(range(20))
st3 = my_transform * st2
list(eval_stream(st3))
Explanation: Finally the transformation is applied to the "contextualized" data using the applicative style instead of the bind operator.
End of explanation
def repeatedly(f, *args, **kwargs):
return (f(*args, **kwargs) for _ in iter(int, 1))
Explanation: Splitting and unpacking
We finish with a brief example showcasing the use of the tee function to create independent iterators and * (splat) operator to unpack sequences with zip.
We shall consider the task of plotting a histogram for a somewhat unusual data model. Say we are given a sequence of results of some measurements (scores) for a number of observables identified by labels. Suppose also that there may be more than one occurence (count) of results with exactly the same score. We are going to partition scores into a given number of bins and aggregate counts within this bins, for each label separately.
Before we proceed we shall define a convenience function inspired by the eponymous example from Clojure (also this answer provided a nice idiom for an infinite generator)
End of explanation
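# A small aside (my addition): iter(callable, sentinel) keeps calling the
# callable until it returns the sentinel. int() always returns 0 and never 1,
# so iter(int, 1) never terminates and repeatedly() yields values forever.
list(islice(repeatedly(lambda: 42), 5)) # -> [42, 42, 42, 42, 42]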
def generate_data():
return (choice(['a', 'b', 'c']),
float("{:.2f}".format(random())),
randint(1, 20))
data = take(100, repeatedly(generate_data))
sample = take(10, repeatedly(generate_data))
list(sample)
Explanation: Now let's produce some mock data. Although using effectful functions in comprehensions is generally frowned upon in Python, here it allows us to achieve a clean separation of the generating (repeatedly) and consuming (take) parts of the logic.
End of explanation
def bin_labels(bins, label_format=None):
a, b = tee(bins)
next(b)
label_format = label_format or '>= %.2f, < %.2f'
return [label_format % b for b in izip(a, b)]
Explanation: Now we define a function formatting labels for the plot. This is a typical example of using tee to traverse a sequence in a pairwise manner.
End of explanation
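# The pairwise idiom in isolation (my addition): tee two copies of the
# iterable, advance one of them by a single step, and zip them back together.
first, second = tee([1, 2, 3, 4])
next(second)
list(izip(first, second)) # -> [(1, 2), (2, 3), (3, 4)]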
def bin_data(x, y, bin_edges):
data_to_bins = izip(np.digitize(x, bin_edges), y)
bin_sums = defaultdict(int)
for index, count in data_to_bins:
bin_sums[index] += count
return [bin_sums.get(index, 0) for index in xrange(len(bin_edges))]
Explanation: Here we aggregate data in the bins
End of explanation
by_label = defaultdict(list)
d1, d2 = tee(data)
for label, score, count in d1:
by_label[label].append([score, count])
num_bins = 20
_, score, _ = izip(*d2)
bins = np.linspace(min(score), max(score), num_bins)
bin_lbl = bin_labels(bins)
series = []
for label, records in by_label.iteritems():
    # Use the scores and counts belonging to this label only, so the digitized
    # bins line up with the counts being aggregated.
    label_scores, label_counts = izip(*records)
    series.append({'label': label, 'data': bin_data(label_scores, label_counts, bins)})
result = {'series': series,
'bins': bin_lbl}
fig, ax = plt.subplots(figsize=(18,6))
bar_width = 0.2
colors = ['r', 'g', 'b']
for k, item in enumerate(result['series']):
index = np.arange(len(item['data']))
plt.bar(index + k * bar_width,
item['data'],
color = colors[k],
width = bar_width,
label = item['label'])
plt.xlabel('Scores')
plt.ylabel('Count')
plt.title('Count by scores')
plt.xticks(index + bar_width, result['bins'], rotation=70)
plt.legend()
plt.show()
Explanation: Finally we put together a dictionary containing all the data for the plot
End of explanation
<END_TASK> |
290 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Simulating data and power analysis
Tom Ellis, August 2017
Before committing to the time and cost of genotyping samples for a paternity study, it is always sensible to run simulations to test the likely statistical power of your data set. This can help with important questions regaridng study design, such as finding an appropriate balance between the number of families vs offspring per family, or identifying a minimum number of loci to type. Simulated data can also be useful in verifying the results of an analysis.
FAPS provides tools to run such simulations. In this notebook we look look at
Step1: There are multiple ways to mate adults to generate offspring. If you supply a set of adults and an integer number of offspring, make_offspring mates adults at random.
Step2: You can also supply an explicit list of dams and sires, in which case the adults are paired in the order they appear in each list.
Step3: Usually we really want to simulate half sib arrays. This can be done using make_sibships, which mates a single mother to a set of males.
Step4: For uneven sibship sizes, give a list of sizes for each family of the same length as sires.
Step5: Adding errors
Real data almost always contains errors. For SNP data, these take the form of
Step6: It is best to create the progeny before adding errors. Set the error rates and add errors at random.
Step7: mutations and dropouts make copies of the genotypeArray, so the original data remains unchanged. For example
Step8: Paternity and sibships
Create a paternityArray and cluster into sibships as usual (more information on these objects can be found here and here.
Step9: A very useful tool is the accuracy subfunction for sibshipCluster objects.
When the paternity and sibship structure are know (seldom the case in real life, but true for simulated data) this returns an array of handy information about the analysis
Step10: In this example, accuracy is high, but the probability of a missing sire is NaN because all the sires are present, and this number of calculated only for offspring whose sire was absent.
We can adjust the paternityArray to see how much this effects the results.
For example, if we remove the sire of the first family (i.e. the male indexed by 1), there is a drop in the accuracy for full-sibling relationships, although half-sibling relationships are unaffected.
Step11: In contrast, imagine we had an idea that selfing was strong. How would this affect things?
Step12: The results are identical to the unmodified case; FAPS has correctly identifed the correct partition structure in spite of the (incorrect) strong prior for high selfing.
Automation
It can be tedious to put together your own simulation for every analysis.
FAPS has an automated function that repeatedly creates genotype data, clusters into siblings and calls the accuracy function.
You can supply lists of variables and it will evaluate each combination.
For example, this code creates four families of five full siblings with a genotyping error rate of 0.0015.
It considers 30, 40 and 50 loci for 100, 250 or 500 candidate fathers.
Each parameter combination is replicated 10 times.
In reality you would want to do more than this; I have found that results tend to asymptote with 300 simulations.
Step13: For convenience, make_power provides a summary of the input parameters.
This can be turned off by setting verbose to False.
Similarly, the progress bar can be removed by setting progress to False.
This bar uses iPython widgets, and probably won't work outside of iPython, so it may be necessary to turn them off.
The results of make_power are basically the output from the accuracy function we saw before, but include information on simulation parameters, and the time taken to create the paternityArray and sibshipCluster objects. View them by inspecting eventab.
Arguments to set up the population work much like those to create genotypeArrays, and are quite flexible.
Have a look into the help file (run make_power? in Python) for more.
You can also take a look at the simulations in support of the main FAPS paper, which considered a range of contrasting demographic scenarios; the example above is adapted from there.
Error rates and missing candidates are important topics to get a handle on.
We can estimate these parameters (e.g. by genotyping some individuals twice and counting how many loci are different), but we can never completely be sure how close to reality we are.
With that in mind make_power allows you to simulate true values mu and the proportion of missing sires, but run the analysis with different values.
The idea is to estimate how wrong you could be before the analysis fails.
For example, this code would simulate the case where you thought that the error rate was 0.0015, and 5% of the candidates went unsampled, but in reality both parameters were double that amount.
Step14: If you want to perform downstream analysis, you can tell make_power to also export each paternity_Array and/or sibshipCluster object. This is done by setting return_paternities and return_clusters to True. For example, this code pulls out the distribution of family sizes from each sibshipArray, and plots it.
Step15: Custom simulations
Once you are familiar with the basic building blocks for generating data and running analysis, creating your own simulations if largely a case of setting up combinations of parameters, and looping over them.
Given the vast array of possible scenarios you could want to simulate, it is impossible to be comprehensive here, so it must suffice to given a couple of examples for inspiration.
Likelihood for missing sires
In this example is was interested in the performance of the likelihood estimator for a sire being absent.
This is the likelihood of generating the offspring genotype if paternal alleles come from population allele frequencies.
This is what the attribute lik_abset in a paternityArray tells you.
Ideally this likelihood should be below the likelihood of paternity for the true sire, but higher than that of the other candidates. I suspected this would not be the case when minor allele frequency is low and there are many candidates.
This cell sets up the simulation. I'm considering 50 loci, and mu=0.0015, but varying sample size and allele frequency.
Step16: This cell simulates genotype data and clusters the offspring into full sibships.
The code pulls out the mean probability that each sire is absent, and the rank of the likelihood for a missing sire among the likelihoods of paternity for the candidates.
Step17: There is a strong dependency on minor allele frequency. As MAF goes from zero to 0.5, the effectiveness of identifying a missing sire using this likelihood estimator goes from 'basically useless' to 'useful'.
Step18: In contrast, there is no effect of the number of adults. | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import faps as fp
import matplotlib.pylab as plt
import pandas as pd
from time import time, localtime, asctime
np.random.seed(37)
allele_freqs = np.random.uniform(0.2, 0.5, 50)
adults = fp.make_parents(10, allele_freqs, family_name='adult')
Explanation: Simulating data and power analysis
Tom Ellis, August 2017
Before committing to the time and cost of genotyping samples for a paternity study, it is always sensible to run simulations to test the likely statistical power of your data set. This can help with important questions regaridng study design, such as finding an appropriate balance between the number of families vs offspring per family, or identifying a minimum number of loci to type. Simulated data can also be useful in verifying the results of an analysis.
FAPS provides tools to run such simulations. In this notebook we look at:
Basic tools for simulating genotype data.
Automated tools for power analysis.
Crafting custom simulations for specialised purposes.
Simulations using emprical datasets (under construction).
It is worth noting that I relied on loops for a lot of these tools, for the purely selfish reason that it was easy to code. Loops are of course slow, so if you work with these tools a lot there is ample scope for speeding things up (see especially the functions make_offspring, make_sibships and make_power).
Simulation building blocks
Creating genotypeArray objects
Simulations are built using genotypeArrays. See the section on these here for more information.
make_parents generates a population of reproductive adults from population allele frequencies.
This example creates ten individuals.
Note that this population will be in Hardy-Weinberg equilibrium, but yours may not.
End of explanation
family1 = fp.make_offspring(parents = adults, noffs=5)
family1.parents
Explanation: There are multiple ways to mate adults to generate offspring. If you supply a set of adults and an integer number of offspring, make_offspring mates adults at random.
End of explanation
family2 = fp.make_offspring(parents = adults, dam_list=[7,1,8,8,0], sire_list=[2,6,3,0,7])
family2.parents
Explanation: You can also supply an explicit list of dams and sires, in which case the adults are paired in the order they appear in each list.
End of explanation
family3 = fp.make_sibships(parents=adults, dam=0, sires=[1,2,3,4], family_size=5)
family3.parents
Explanation: Usually we really want to simulate half sib arrays. This can be done using make_sibships, which mates a single mother to a set of males.
End of explanation
family4 = fp.make_sibships(parents=adults, dam=0, sires=[1,2,3,4], family_size=[5,4,3,2])
family4.parents
Explanation: For uneven sibship sizes, give a list of sizes for each family of the same length as sires.
End of explanation
np.random.seed(85)
allele_freqs = np.random.uniform(0.2, 0.5, 50)
adults = fp.make_parents(10, allele_freqs, family_name='adult')
progeny = fp.make_sibships(parents=adults, dam=0, sires=[1,2,3,4], family_size=5)
Explanation: Adding errors
Real data almost always contains errors. For SNP data, these take the form of:
Missing data, where a locus fails to amplify for some reason
Genotyping errors, when the observed genotype at a locus is not the actual genotype.
These are straightforward to include in simulated data. First generate some clean data again, and mate the parents.
End of explanation
d, mu= 0.01, 0.0015 # values for dropout and error rate.
# add genotyping errors
adults_mu = adults.mutations(mu)
progeny_mu = progeny.mutations(mu)
# add dropouts (to the mutated data)
adults_mu = adults_mu.dropouts(d)
progeny_mu = progeny_mu.dropouts(d)
Explanation: It is best to create the progeny before adding errors. Set the error rates and add errors at random.
End of explanation
print(adults.missing_data().mean())
print(adults_mu.missing_data().mean())
Explanation: mutations and dropouts make copies of the genotypeArray, so the original data remains unchanged. For example:
End of explanation
np.random.seed(85)
allele_freqs = np.random.uniform(0.4, 0.5, 50)
adults = fp.make_parents(10, allele_freqs, family_name='adult')
progeny = fp.make_sibships(parents=adults, dam=0, sires=[1,2,3,4], family_size=5)
mothers = adults.subset(progeny.parent_index('m', adults.names))
patlik = fp.paternity_array(progeny, mothers, adults, mu=0.0015)
sc = fp.sibship_clustering(patlik)
Explanation: Paternity and sibships
Create a paternityArray and cluster into sibships as usual (more information on these objects can be found here and here.
End of explanation
sc.accuracy(progeny, adults)
Explanation: A very useful tool is the accuracy subfunction for sibshipCluster objects.
When the paternity and sibship structure are know (seldom the case in real life, but true for simulated data) this returns an array of handy information about the analysis:
Binary indiciator for whether the true partition was included in the sample of partitions.
Difference in log likelihood for the maximum likelihood partition identified and the true partition. Positive values indicate that the ML partition had greater support than the true partition.
Posterior probability of the true number of families.
Mean probabilities that a pair of true full sibs are identified as full sibs.
Mean probabilities that a pair of true half sibs are identified as half sibs.
Mean probabilities that a pair of true half or full sibs are correctly assigned as such (i.e. overall accuracy of sibship reconstruction.
Mean (log) probability of paternity of the true sires for those sires who had been sampled (who had non-zero probability in the paternityArray).
Mean (log) probability that the sire had not been sampled for those individuals whose sire was truly absent.
End of explanation
patlik.prob_array = patlik.adjust_prob_array(purge = 1, missing_parents=0.25)
sc = fp.sibship_clustering(patlik)
sc.accuracy(progeny, adults)
Explanation: In this example, accuracy is high, but the probability of a missing sire is NaN because all the sires are present, and this number is calculated only for offspring whose sire was absent.
We can adjust the paternityArray to see how much this affects the results.
For example, if we remove the sire of the first family (i.e. the male indexed by 1), there is a drop in the accuracy for full-sibling relationships, although half-sibling relationships are unaffected.
End of explanation
patlik.prob_array = patlik.adjust_prob_array(selfing_rate=0.5)
sc = fp.sibship_clustering(patlik)
sc.accuracy(progeny, adults)
Explanation: In contrast, imagine we had an idea that selfing was strong. How would this affect things?
End of explanation
# Common simulation parameters
r = 10 # number of replicates
nloci = [30,40,50] # number of loci
allele_freqs = [0.25, 0.5] # draw allele frequencies
nadults = [100,250,500] # size of the adults population
mu = 0.0015 #genotype error rates
sires = 4
offspring = 5
np.random.seed(614)
eventab = fp.make_power(r, nloci, allele_freqs, nadults, sires, offspring, 0, mu)
Explanation: The results are identical to the unmodified case; FAPS has correctly identified the partition structure in spite of the (incorrect) strong prior for high selfing.
Automation
It can be tedious to put together your own simulation for every analysis.
FAPS has an automated function that repeatedly creates genotype data, clusters into siblings and calls the accuracy function.
You can supply lists of variables and it will evaluate each combination.
For example, this code creates four families of five full siblings with a genotyping error rate of 0.0015.
It considers 30, 40 and 50 loci for 100, 250 or 500 candidate fathers.
Each parameter combination is replicated 10 times.
In reality you would want to do more than this; I have found that results tend to asymptote with 300 simulations.
End of explanation
fp.make_power(r, nloci, allele_freqs, nadults, sires, offspring, 0,
mu_input= 0.003,
mu_real=0.0015,
unsampled_real=0.1,
unsampled_input = 0.05);
Explanation: For convenience, make_power provides a summary of the input parameters.
This can be turned off by setting verbose to False.
Similarly, the progress bar can be removed by setting progress to False.
This bar uses iPython widgets, and probably won't work outside of iPython, so it may be necessary to turn them off.
The results of make_power are basically the output from the accuracy function we saw before, but include information on simulation parameters, and the time taken to create the paternityArray and sibshipCluster objects. View them by inspecting eventab.
Arguments to set up the population work much like those to create genotypeArrays, and are quite flexible.
Have a look into the help file (run make_power? in Python) for more.
You can also take a look at the simulations in support of the main FAPS paper, which considered a range of contrasting demographic scenarios; the example above is adapted from there.
Error rates and missing candidates are important topics to get a handle on.
We can estimate these parameters (e.g. by genotyping some individuals twice and counting how many loci are different), but we can never completely be sure how close to reality we are.
With that in mind make_power allows you to simulate true values mu and the proportion of missing sires, but run the analysis with different values.
The idea is to estimate how wrong you could be before the analysis fails.
For example, this code would simulate the case where you thought that the error rate was 0.0015, and 5% of the candidates went unsampled, but in reality both parameters were double that amount.
End of explanation
eventab, evenclusters = fp.make_power(r, nloci, allele_freqs, nadults, sires, offspring, 0, mu, return_clusters=True, verbose=False)
even_famsizes = np.array([evenclusters[i].family_size() for i in range(len(evenclusters))])
plt.plot(even_famsizes.mean(0))
plt.show()
Explanation: If you want to perform downstream analysis, you can tell make_power to also export each paternity_Array and/or sibshipCluster object. This is done by setting return_paternities and return_clusters to True. For example, this code pulls out the distribution of family sizes from each sibshipArray, and plots it.
End of explanation
# Common simulation parameters
nreps = 10 # number of replicates
nloci = [50] # number of loci
allele_freqs = [0.1, 0.2, 0.3, 0.4, 0.5] # draw allele frequencies
nadults = [10, 100, 250, 500, 750, 1000] # size of the adults population
mu_list = [0.0015] #genotype error rates
nsims = nreps * len(nloci) * len(allele_freqs) * len(nadults) * len(mu_list) # total number of simulations to run
dt = np.zeros([nsims, 7]) # empty array to store data
Explanation: Custom simulations
Once you are familiar with the basic building blocks for generating data and running analysis, creating your own simulations if largely a case of setting up combinations of parameters, and looping over them.
Given the vast array of possible scenarios you could want to simulate, it is impossible to be comprehensive here, so it must suffice to given a couple of examples for inspiration.
Likelihood for missing sires
In this example I was interested in the performance of the likelihood estimator for a sire being absent.
This is the likelihood of generating the offspring genotype if paternal alleles come from population allele frequencies.
This is what the attribute lik_absent in a paternityArray tells you.
Ideally this likelihood should be below the likelihood of paternity for the true sire, but higher than that of the other candidates. I suspected this would not be the case when minor allele frequency is low and there are many candidates.
This cell sets up the simulation. I'm considering 50 loci, and mu=0.0015, but varying sample size and allele frequency.
End of explanation
t0 = time()
counter = 0
print("Beginning simulations on {}.".format(asctime(localtime(time()) )))
for r in range(nreps):
for l in range(len(nloci)):
for a in range(len(allele_freqs)):
for n in range(len(nadults)):
for m in range(len(mu_list)):
af = np.repeat(allele_freqs[a], nloci[l])
adults = fp.make_parents(nadults[n], af)
progeny = fp.make_offspring(adults, 100)
mi = progeny.parent_index('m', adults.names) # maternal index
mothers = adults.subset(mi)
patlik = fp.paternity_array(progeny, mothers, adults, mu_list[m])
# Find the rank of the missing term within the array.
rank = [np.where(np.sort(patlik.prob_array[i]) == patlik.prob_array[i,-1])[0][0] for i in range(progeny.size)]
rank = np.array(rank).mean() / nadults[n]
                    # get the posterior probability for the missing term.
                    prob_missing = np.exp(patlik.prob_array[:, -1]).mean()
                    # export data
                    dt[counter] = np.array([r, nloci[l], allele_freqs[a], nadults[n], mu_list[m], rank, prob_missing])
# update counters
counter += 1
print("Completed in {} hours.".format(round((time() - t0)/3600,2)))
head = ['rep', 'nloci', 'allele_freqs', 'nadults', 'mu', 'rank', 'prob_missing']
dt = pd.DataFrame(dt, columns=head)
Explanation: This cell simulates genotype data and clusters the offspring into full sibships.
The code pulls out the mean probability that each sire is absent, and the rank of the likelihood for a missing sire among the likelihoods of paternity for the candidates.
End of explanation
dt.groupby('allele_freqs').mean()
Explanation: There is a strong dependency on minor allele frequency. As MAF goes from zero to 0.5, the effectiveness of identifying a missing sire using this likelihood estimator goes from 'basically useless' to 'useful'.
End of explanation
dt.groupby('nadults').mean()
Explanation: In contrast, there is no effect of the number of adults.
End of explanation
<END_TASK> |
291 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Multigroup Mode Part II
Step1: We will begin by creating three materials for the fuel, water, and cladding of the fuel pins.
Step2: With our three materials, we can now create a Materials object that can be exported to an actual XML file.
Step3: Now let's move on to the geometry. This problem will be a square array of fuel pins and control rod guide tubes for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.
Step4: With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.
Step5: Likewise, we can construct a control rod guide tube with the same surfaces.
Step6: Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.
Step7: Next, we create a NumPy array of fuel pin and guide tube universes for the lattice.
Step8: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
Step9: Before proceeding lets check the geometry.
Step10: Looks good!
We now must create a geometry that is assigned a root universe and export it to XML.
Step11: With the geometry and materials finished, we now just need to define simulation parameters.
Step12: Create an MGXS Library
Now we are ready to generate multi-group cross sections! First, let's define a 2-group structure using the built-in EnergyGroups class.
Step13: Next, we will instantiate an openmc.mgxs.Library for the energy groups with our the fuel assembly geometry.
Step14: Now, we must specify to the Library which types of cross sections to compute. OpenMC's multi-group mode can accept isotropic flux-weighted cross sections or angle-dependent cross sections, as well as supporting anisotropic scattering represented by either Legendre polynomials, histogram, or tabular angular distributions. We will create the following multi-group cross sections needed to run an OpenMC simulation to verify the accuracy of our cross sections
Step15: Now we must specify the type of domain over which we would like the Library to compute multi-group cross sections. The domain type corresponds to the type of tally filter to be used in the tallies created to compute multi-group cross sections. At the present time, the Library supports "material", "cell", "universe", and "mesh" domain types. In this simple example, we wish to compute multi-group cross sections only for each material and therefore will use a "material" domain type.
NOTE
Step16: We will instruct the library to not compute cross sections on a nuclide-by-nuclide basis, and instead to focus on generating material-specific macroscopic cross sections.
NOTE
Step17: Now we will set the scattering order that we wish to use. For this problem we will use P3 scattering. A warning is expected telling us that the default behavior (a P0 correction on the scattering data) is over-ridden by our choice of using a Legendre expansion to treat anisotropic scattering.
Step18: Now that the Library has been setup let's verify that it contains the types of cross sections which meet the needs of OpenMC's multi-group solver. Note that this step is done automatically when writing the Multi-Group Library file later in the process (as part of mgxs_lib.write_mg_library()), but it is a good practice to also run this before spending all the time running OpenMC to generate the cross sections.
If no error is raised, then we have a good set of data.
Step19: Great, now we can use the Library to construct the tallies needed to compute all of the requested multi-group cross sections in each domain.
Step20: The tallies can now be exported to a "tallies.xml" input file for OpenMC.
NOTE
Step21: In addition, we instantiate a fission rate mesh tally that we will eventually use to compare with the corresponding multi-group results.
Step22: Time to run the calculation and get our results!
Step23: To make sure the results we need are available after running the multi-group calculation, we will now rename the statepoint and summary files.
Step24: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. Let's begin by loading the StatePoint file.
Step25: The statepoint is now ready to be analyzed by the Library. We simply have to load the tallies from the statepoint into the Library and our MGXS objects will compute the cross sections for us under-the-hood.
Step26: The next step will be to prepare the input for OpenMC to use our newly created multi-group data.
Multi-Group OpenMC Calculation
We will now use the Library to produce a multi-group cross section data set for use by the OpenMC multi-group solver.
Note that since this simulation included so few histories, it is reasonable to expect some data has not had any scores, and thus we could see division by zero errors. This will show up as a runtime warning in the following step. The Library class is designed to gracefully handle these scenarios.
Step27: OpenMC's multi-group mode uses the same input files as does the continuous-energy mode (materials, geometry, settings, plots, and tallies file). Differences would include the use of a flag to tell the code to use multi-group transport, a location of the multi-group library file, and any changes needed in the materials.xml and geometry.xml files to re-define materials as necessary. The materials and geometry file changes could be necessary if materials or their nuclide/element/macroscopic constituents need to be renamed.
In this example we have created macroscopic cross sections (by material), and thus we will need to change the material definitions accordingly.
First we will create the new materials.xml file.
Step28: No geometry file neeeds to be written as the continuous-energy file is correctly defined for the multi-group case as well.
Next, we can make the changes we need to the simulation parameters.
These changes are limited to telling OpenMC to run a multi-group vice contrinuous-energy calculation.
Step29: Lets clear the tallies file so it doesn't include tallies for re-generating a multi-group library, but then put back in a tally for the fission mesh.
Step30: Before running the calculation let's visually compare a subset of the newly-generated multi-group cross section data to the continuous-energy data. We will do this using the cross section plotting functionality built-in to the OpenMC Python API.
Step31: At this point, the problem is set up and we can run the multi-group calculation.
Step32: Results Comparison
Now we can compare the multi-group and continuous-energy results.
We will begin by loading the multi-group statepoint file we just finished writing and extracting the calculated keff.
Step33: Next, we can load the continuous-energy eigenvalue for comparison.
Step34: Lets compare the two eigenvalues, including their bias
Step35: This shows a small but nontrivial pcm bias between the two methods. Some degree of mismatch is expected simply to the very few histories being used in these example problems. An additional mismatch is always inherent in the practical application of multi-group theory due to the high degree of approximations inherent in that method.
Pin Power Visualizations
Next we will visualize the pin power results obtained from both the Continuous-Energy and Multi-Group OpenMC calculations.
First, we extract volume-integrated fission rates from the Multi-Group calculation's mesh fission rate tally for each pin cell in the fuel assembly.
Step36: We can now do the same for the Continuous-Energy results.
Step37: Now we can easily use Matplotlib to visualize the two fission rates side-by-side.
Step38: These figures really indicate that more histories are probably necessary when trying to achieve a fully converged solution, but hey, this is good enough for our example!
Scattering Anisotropy Treatments
We will next show how we can work with the scattering angular distributions. OpenMC's MG solver has the capability to use group-to-group angular distributions which are represented as any of the following
Step39: Now we can re-run OpenMC to obtain our results
Step40: And then get the eigenvalue differences from the Continuous-Energy and P3 MG solution
Step41: Mixed Scattering Representations
OpenMC's Multi-Group mode also includes a feature where not every data in the library is required to have the same scattering treatment. For example, we could represent the water with P3 scattering, and the fuel and cladding with P0 scattering. This series will show how this can be done.
First we will convert the data to P0 scattering, unless its water, then we will leave that as P3 data.
Step42: We can also use whatever scattering format that we want for the materials in the library. As an example, we will take this P0 data and convert zircaloy to a histogram anisotropic scattering format and the fuel to a tabular anisotropic scattering format
Step43: Finally we will re-set our max_order parameter of our openmc.Settings object to our maximum order so that OpenMC will use whatever scattering data is available in the library.
After we do this we can re-run the simulation.
Step44: For a final step we can again obtain the eigenvalue differences from this case and compare with the same from the P3 MG solution | <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import os
import openmc
%matplotlib inline
Explanation: Multigroup Mode Part II: MGXS Library Generation with OpenMC
The previous Notebook in this series used multi-group mode to perform a calculation with previously defined cross sections. However, in many circumstances the multi-group data is not given and one must instead generate the cross sections for the specific application (or at least verify the use of cross sections from another application).
This Notebook illustrates the use of the openmc.mgxs.Library class specifically for the calculation of MGXS to be used in OpenMC's multi-group mode. This example notebook is therefore very similar to the MGXS Part III notebook, except OpenMC is used as the multi-group solver instead of OpenMOC.
During this process, this notebook will illustrate the following features:
Calculation of multi-group cross sections for a fuel assembly
Automated creation and storage of MGXS with openmc.mgxs.Library
Steady-state pin-by-pin fission rates comparison between continuous-energy and multi-group OpenMC.
Modification of the scattering data in the library to show the flexibility of the multi-group solver
Generate Input Files
End of explanation
# 1.6% enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_element('U', 1., enrichment=1.6)
fuel.add_element('O', 2.)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_element('Zr', 1.)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_element('H', 4.9457e-2)
water.add_element('O', 2.4732e-2)
water.add_element('B', 8.0042e-6)
Explanation: We will begin by creating three materials for the fuel, water, and cladding of the fuel pins.
End of explanation
# Instantiate a Materials object
materials_file = openmc.Materials((fuel, zircaloy, water))
# Export to "materials.xml"
materials_file.export_to_xml()
Explanation: With our three materials, we can now create a Materials object that can be exported to an actual XML file.
End of explanation
# Create cylinders for the fuel and clad
# The x0 and y0 parameters (0. and 0.) are the default values for an
# openmc.ZCylinder object. We could therefore leave them out to no effect
fuel_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.39218)
clad_outer_radius = openmc.ZCylinder(x0=0.0, y0=0.0, R=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-10.71, boundary_type='reflective')
max_x = openmc.XPlane(x0=+10.71, boundary_type='reflective')
min_y = openmc.YPlane(y0=-10.71, boundary_type='reflective')
max_y = openmc.YPlane(y0=+10.71, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-10., boundary_type='reflective')
max_z = openmc.ZPlane(z0=+10., boundary_type='reflective')
Explanation: Now let's move on to the geometry. This problem will be a square array of fuel pins and control rod guide tubes for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.
End of explanation
# Create a Universe to encapsulate a fuel pin
fuel_pin_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
fuel_pin_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
fuel_pin_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
fuel_pin_universe.add_cell(moderator_cell)
Explanation: With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.
End of explanation
# Create a Universe to encapsulate a control rod guide tube
guide_tube_universe = openmc.Universe(name='Guide Tube')
# Create guide tube Cell
guide_tube_cell = openmc.Cell(name='Guide Tube Water')
guide_tube_cell.fill = water
guide_tube_cell.region = -fuel_outer_radius
guide_tube_universe.add_cell(guide_tube_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='Guide Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
guide_tube_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='Guide Tube Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
guide_tube_universe.add_cell(moderator_cell)
Explanation: Likewise, we can construct a control rod guide tube with the same surfaces.
End of explanation
# Create fuel assembly Lattice
assembly = openmc.RectLattice(name='1.6% Fuel Assembly')
assembly.pitch = (1.26, 1.26)
assembly.lower_left = [-1.26 * 17. / 2.0] * 2
Explanation: Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.
End of explanation
# Create array indices for guide tube locations in lattice
template_x = np.array([5, 8, 11, 3, 13, 2, 5, 8, 11, 14, 2, 5, 8,
11, 14, 2, 5, 8, 11, 14, 3, 13, 5, 8, 11])
template_y = np.array([2, 2, 2, 3, 3, 5, 5, 5, 5, 5, 8, 8, 8, 8,
8, 11, 11, 11, 11, 11, 13, 13, 14, 14, 14])
# Initialize an empty 17x17 array of the lattice universes
universes = np.empty((17, 17), dtype=openmc.Universe)
# Fill the array with the fuel pin and guide tube universes
universes[:, :] = fuel_pin_universe
universes[template_x, template_y] = guide_tube_universe
# Store the array of universes in the lattice
assembly.universes = universes
Explanation: Next, we create a NumPy array of fuel pin and guide tube universes for the lattice.
End of explanation
# Create root Cell
root_cell = openmc.Cell(name='root cell')
root_cell.fill = assembly
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
# Create root Universe
root_universe = openmc.Universe(name='root universe', universe_id=0)
root_universe.add_cell(root_cell)
Explanation: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
End of explanation
root_universe.plot(origin=(0., 0., 0.), width=(21.42, 21.42), pixels=(500, 500), color_by='material')
Explanation: Before proceeding, let's check the geometry.
End of explanation
# Create Geometry and set root universe
geometry = openmc.Geometry(root_universe)
# Export to "geometry.xml"
geometry.export_to_xml()
Explanation: Looks good!
We now must create a geometry that is assigned a root universe and export it to XML.
End of explanation
# OpenMC simulation parameters
batches = 600
inactive = 50
particles = 3000
# Instantiate a Settings object
settings_file = openmc.Settings()
settings_file.batches = batches
settings_file.inactive = inactive
settings_file.particles = particles
settings_file.output = {'tallies': False}
settings_file.run_mode = 'eigenvalue'
settings_file.verbosity = 4
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-10.71, -10.71, -10, 10.71, 10.71, 10.]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings_file.source = openmc.Source(space=uniform_dist)
# Export to "settings.xml"
settings_file.export_to_xml()
Explanation: With the geometry and materials finished, we now just need to define simulation parameters.
End of explanation
# Instantiate a 2-group EnergyGroups object
groups = openmc.mgxs.EnergyGroups([0., 0.625, 20.0e6])
Explanation: Create an MGXS Library
Now we are ready to generate multi-group cross sections! First, let's define a 2-group structure using the built-in EnergyGroups class.
End of explanation
# Initialize a 2-group MGXS Library for OpenMC
mgxs_lib = openmc.mgxs.Library(geometry)
mgxs_lib.energy_groups = groups
Explanation: Next, we will instantiate an openmc.mgxs.Library for the energy groups with our fuel assembly geometry.
End of explanation
# Specify multi-group cross section types to compute
mgxs_lib.mgxs_types = ['total', 'absorption', 'nu-fission', 'fission',
'nu-scatter matrix', 'multiplicity matrix', 'chi']
Explanation: Now, we must specify to the Library which types of cross sections to compute. OpenMC's multi-group mode can accept isotropic flux-weighted cross sections or angle-dependent cross sections, as well as supporting anisotropic scattering represented by either Legendre polynomials, histogram, or tabular angular distributions. We will create the following multi-group cross sections needed to run an OpenMC simulation to verify the accuracy of our cross sections: "total", "absorption", "nu-fission", "fission", "nu-scatter matrix", "multiplicity matrix", and "chi".
The "multiplicity matrix" type is a relatively rare cross section type. This data is needed to provide OpenMC's multi-group mode with additional information needed to accurately treat scattering multiplication (i.e., (n,xn) reactions), including how this multiplication varies depending on both incoming and outgoing neutron energies.
End of explanation
# Specify a "cell" domain type for the cross section tally filters
mgxs_lib.domain_type = "material"
# Specify the cell domains over which to compute multi-group cross sections
mgxs_lib.domains = geometry.get_all_materials().values()
Explanation: Now we must specify the type of domain over which we would like the Library to compute multi-group cross sections. The domain type corresponds to the type of tally filter to be used in the tallies created to compute multi-group cross sections. At the present time, the Library supports "material", "cell", "universe", and "mesh" domain types. In this simple example, we wish to compute multi-group cross sections only for each material and therefore will use a "material" domain type.
NOTE: By default, the Library class will instantiate MGXS objects for each and every domain (material, cell, universe, or mesh) in the geometry of interest. However, one may specify a subset of these domains to the Library.domains property.
End of explanation
# Do not compute cross sections on a nuclide-by-nuclide basis
mgxs_lib.by_nuclide = False
Explanation: We will instruct the library to not compute cross sections on a nuclide-by-nuclide basis, and instead to focus on generating material-specific macroscopic cross sections.
NOTE: The default value of the by_nuclide parameter is False, so the following step is not necessary but is included for illustrative purposes.
End of explanation
# Set the Legendre order to 3 for P3 scattering
mgxs_lib.legendre_order = 3
Explanation: Now we will set the scattering order that we wish to use. For this problem we will use P3 scattering. A warning is expected telling us that the default behavior (a P0 correction on the scattering data) is over-ridden by our choice of using a Legendre expansion to treat anisotropic scattering.
End of explanation
# Check the library - if no errors are raised, then the library is satisfactory.
mgxs_lib.check_library_for_openmc_mgxs()
Explanation: Now that the Library has been set up, let's verify that it contains the types of cross sections which meet the needs of OpenMC's multi-group solver. Note that this step is done automatically when writing the Multi-Group Library file later in the process (as part of mgxs_lib.write_mg_library()), but it is a good practice to also run this before spending all the time running OpenMC to generate the cross sections.
If no error is raised, then we have a good set of data.
End of explanation
# Construct all tallies needed for the multi-group cross section library
mgxs_lib.build_library()
Explanation: Great, now we can use the Library to construct the tallies needed to compute all of the requested multi-group cross sections in each domain.
End of explanation
# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
mgxs_lib.add_to_tallies_file(tallies_file, merge=True)
Explanation: The tallies can now be exported to a "tallies.xml" input file for OpenMC.
NOTE: At this point the Library has constructed nearly 100 distinct Tally objects. The overhead to tally in OpenMC scales as O(N) for N tallies, which can become a bottleneck for large tally datasets. To compensate for this, the Python API's Tally, Filter and Tallies classes allow for the smart merging of tallies when possible. The Library class supports this runtime optimization with the use of the optional merge parameter (False by default) for the Library.add_to_tallies_file(...) method, as shown below.
End of explanation
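# Optional check (not from the original notebook): the Tallies collection behaves
# like a Python list, so its length shows how many distinct tally objects remain
# after the merge performed above.
print('Number of tallies after merging:', len(tallies_file))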
# Instantiate a tally Mesh
mesh = openmc.RegularMesh()
mesh.dimension = [17, 17]
mesh.lower_left = [-10.71, -10.71]
mesh.upper_right = [+10.71, +10.71]
# Instantiate tally Filter
mesh_filter = openmc.MeshFilter(mesh)
# Instantiate the Tally
tally = openmc.Tally(name='mesh tally')
tally.filters = [mesh_filter]
tally.scores = ['fission']
# Add tally to collection
tallies_file.append(tally, merge=True)
# Export all tallies to a "tallies.xml" file
tallies_file.export_to_xml()
Explanation: In addition, we instantiate a fission rate mesh tally that we will eventually use to compare with the corresponding multi-group results.
End of explanation
# Run OpenMC
openmc.run()
Explanation: Time to run the calculation and get our results!
End of explanation
# Move the statepoint File
ce_spfile = './statepoint_ce.h5'
os.rename('statepoint.' + str(batches) + '.h5', ce_spfile)
# Move the Summary file
ce_sumfile = './summary_ce.h5'
os.rename('summary.h5', ce_sumfile)
Explanation: To make sure the results we need are available after running the multi-group calculation, we will now rename the statepoint and summary files.
End of explanation
# Load the statepoint file
sp = openmc.StatePoint(ce_spfile, autolink=False)
# Load the summary file in its new location
su = openmc.Summary(ce_sumfile)
sp.link_with_summary(su)
Explanation: Tally Data Processing
Our simulation ran successfully and created statepoint and summary output files. Let's begin by loading the StatePoint file.
End of explanation
# Initialize MGXS Library with OpenMC statepoint data
mgxs_lib.load_from_statepoint(sp)
Explanation: The statepoint is now ready to be analyzed by the Library. We simply have to load the tallies from the statepoint into the Library and our MGXS objects will compute the cross sections for us under-the-hood.
End of explanation
# Create a MGXS File which can then be written to disk
mgxs_file = mgxs_lib.create_mg_library(xs_type='macro', xsdata_names=['fuel', 'zircaloy', 'water'])
# Write the file to disk using the default filename of "mgxs.h5"
mgxs_file.export_to_hdf5()
Explanation: The next step will be to prepare the input for OpenMC to use our newly created multi-group data.
Multi-Group OpenMC Calculation
We will now use the Library to produce a multi-group cross section data set for use by the OpenMC multi-group solver.
Note that since this simulation included so few histories, it is reasonable to expect some data has not had any scores, and thus we could see division by zero errors. This will show up as a runtime warning in the following step. The Library class is designed to gracefully handle these scenarios.
End of explanation
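# Aside (not part of the original workflow): the harmless divide-by-zero
# RuntimeWarnings mentioned above could be silenced with a standard Python filter,
# for example:
# import warnings
# warnings.filterwarnings('ignore', category=RuntimeWarning)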
# Re-define our materials to use the multi-group macroscopic data
# instead of the continuous-energy data.
# 1.6% enriched fuel UO2
fuel_mg = openmc.Material(name='UO2', material_id=1)
fuel_mg.add_macroscopic('fuel')
# cladding
zircaloy_mg = openmc.Material(name='Clad', material_id=2)
zircaloy_mg.add_macroscopic('zircaloy')
# moderator
water_mg = openmc.Material(name='Water', material_id=3)
water_mg.add_macroscopic('water')
# Finally, instantiate our Materials object
materials_file = openmc.Materials((fuel_mg, zircaloy_mg, water_mg))
# Set the location of the cross sections file
materials_file.cross_sections = 'mgxs.h5'
# Export to "materials.xml"
materials_file.export_to_xml()
Explanation: OpenMC's multi-group mode uses the same input files as does the continuous-energy mode (materials, geometry, settings, plots, and tallies file). Differences would include the use of a flag to tell the code to use multi-group transport, a location of the multi-group library file, and any changes needed in the materials.xml and geometry.xml files to re-define materials as necessary. The materials and geometry file changes could be necessary if materials or their nuclide/element/macroscopic constituents need to be renamed.
In this example we have created macroscopic cross sections (by material), and thus we will need to change the material definitions accordingly.
First we will create the new materials.xml file.
End of explanation
# Set the energy mode
settings_file.energy_mode = 'multi-group'
# Export to "settings.xml"
settings_file.export_to_xml()
Explanation: No geometry file needs to be written, as the continuous-energy file is correctly defined for the multi-group case as well.
Next, we can make the changes we need to the simulation parameters.
These changes are limited to telling OpenMC to run a multi-group rather than a continuous-energy calculation.
End of explanation
# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
# Add fission and flux mesh to tally for plotting using the same mesh we've already defined
mesh_tally = openmc.Tally(name='mesh tally')
mesh_tally.filters = [openmc.MeshFilter(mesh)]
mesh_tally.scores = ['fission']
tallies_file.append(mesh_tally)
# Export to "tallies.xml"
tallies_file.export_to_xml()
Explanation: Let's clear the tallies file so it doesn't include the tallies used to generate the multi-group library, but then put back in a tally for the fission mesh.
End of explanation
# First lets plot the fuel data
# We will first add the continuous-energy data
fig = openmc.plot_xs(fuel, ['total'])
# We will now add in the corresponding multi-group data and show the result
openmc.plot_xs(fuel_mg, ['total'], plot_CE=False, mg_cross_sections='mgxs.h5', axis=fig.axes[0])
fig.axes[0].legend().set_visible(False)
plt.show()
plt.close()
# Then repeat for the zircaloy data
fig = openmc.plot_xs(zircaloy, ['total'])
openmc.plot_xs(zircaloy_mg, ['total'], plot_CE=False, mg_cross_sections='mgxs.h5', axis=fig.axes[0])
fig.axes[0].legend().set_visible(False)
plt.show()
plt.close()
# And finally repeat for the water data
fig = openmc.plot_xs(water, ['total'])
openmc.plot_xs(water_mg, ['total'], plot_CE=False, mg_cross_sections='mgxs.h5', axis=fig.axes[0])
fig.axes[0].legend().set_visible(False)
plt.show()
plt.close()
Explanation: Before running the calculation let's visually compare a subset of the newly-generated multi-group cross section data to the continuous-energy data. We will do this using the cross section plotting functionality built-in to the OpenMC Python API.
End of explanation
# Run the Multi-Group OpenMC Simulation
openmc.run()
Explanation: At this point, the problem is set up and we can run the multi-group calculation.
End of explanation
# Move the StatePoint File
mg_spfile = './statepoint_mg.h5'
os.rename('statepoint.' + str(batches) + '.h5', mg_spfile)
# Move the Summary file
mg_sumfile = './summary_mg.h5'
os.rename('summary.h5', mg_sumfile)
# Rename and then load the last statepoint file and keff value
mgsp = openmc.StatePoint(mg_spfile, autolink=False)
# Load the summary file in its new location
mgsu = openmc.Summary(mg_sumfile)
mgsp.link_with_summary(mgsu)
# Get keff
mg_keff = mgsp.k_combined
Explanation: Results Comparison
Now we can compare the multi-group and continuous-energy results.
We will begin by loading the multi-group statepoint file we just finished writing and extracting the calculated keff.
End of explanation
ce_keff = sp.k_combined
Explanation: Next, we can load the continuous-energy eigenvalue for comparison.
End of explanation
bias = 1.0E5 * (ce_keff - mg_keff)
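# (1 pcm = 1.0E-5 in k-effective, so the 1.0E5 factor converts the eigenvalue
# difference into per cent mille.)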
print('Continuous-Energy keff = {0:1.6f}'.format(ce_keff))
print('Multi-Group keff = {0:1.6f}'.format(mg_keff))
print('bias [pcm]: {0:1.1f}'.format(bias.nominal_value))
Explanation: Let's compare the two eigenvalues, including their bias.
End of explanation
# Get the OpenMC fission rate mesh tally data
mg_mesh_tally = mgsp.get_tally(name='mesh tally')
mg_fission_rates = mg_mesh_tally.get_values(scores=['fission'])
# Reshape array to 2D for plotting
mg_fission_rates.shape = (17,17)
# Normalize to the average pin power
mg_fission_rates /= np.mean(mg_fission_rates[mg_fission_rates > 0.])
Explanation: This shows a small but nontrivial pcm bias between the two methods. Some degree of mismatch is expected simply due to the very few histories being used in these example problems. An additional mismatch is always present in practical applications of multi-group theory due to the high degree of approximation inherent in that method.
Pin Power Visualizations
Next we will visualize the pin power results obtained from both the Continuous-Energy and Multi-Group OpenMC calculations.
First, we extract volume-integrated fission rates from the Multi-Group calculation's mesh fission rate tally for each pin cell in the fuel assembly.
End of explanation
# Get the OpenMC fission rate mesh tally data
ce_mesh_tally = sp.get_tally(name='mesh tally')
ce_fission_rates = ce_mesh_tally.get_values(scores=['fission'])
# Reshape array to 2D for plotting
ce_fission_rates.shape = (17,17)
# Normalize to the average pin power
ce_fission_rates /= np.mean(ce_fission_rates[ce_fission_rates > 0.])
Explanation: We can now do the same for the Continuous-Energy results.
End of explanation
# Force zeros to be NaNs so their values are not included when matplotlib calculates
# the color scale
ce_fission_rates[ce_fission_rates == 0.] = np.nan
mg_fission_rates[mg_fission_rates == 0.] = np.nan
# Plot the CE fission rates in the left subplot
fig = plt.subplot(121)
plt.imshow(ce_fission_rates, interpolation='none', cmap='jet')
plt.title('Continuous-Energy Fission Rates')
# Plot the MG fission rates in the right subplot
fig2 = plt.subplot(122)
plt.imshow(mg_fission_rates, interpolation='none', cmap='jet')
plt.title('Multi-Group Fission Rates')
Explanation: Now we can easily use Matplotlib to visualize the two fission rates side-by-side.
End of explanation
# Set the maximum scattering order to 0 (i.e., isotropic scattering)
settings_file.max_order = 0
# Export to "settings.xml"
settings_file.export_to_xml()
Explanation: These figures really indicate that more histories are probably necessary when trying to achieve a fully converged solution, but hey, this is good enough for our example!
Scattering Anisotropy Treatments
We will next show how we can work with the scattering angular distributions. OpenMC's MG solver has the capability to use group-to-group angular distributions which are represented as any of the following: a truncated Legendre series of up to the 10th order, a histogram distribution, and a tabular distribution. Any combination of these representations can be used by OpenMC during the transport process, so long as all constituents of a given material use the same representation. This means it is possible to have water represented by a tabular distribution and fuel represented by a Legendre if so desired.
Note: To have the highest runtime performance OpenMC natively converts Legendre series to a tabular distribution before the transport begins. This default functionality can be turned off with the tabular_legendre element of the settings.xml file (or for the Python API, the openmc.Settings.tabular_legendre attribute).
This section will examine the following:
- Re-run the MG-mode calculation with P0 scattering everywhere using the openmc.Settings.max_order attribute
- Re-run the problem with only the water represented with P3 scattering and P0 scattering for the remaining materials using the Python API's ability to convert between formats.
Global P0 Scattering
First we begin by re-running with P0 scattering (i.e., isotropic) everywhere. If a global maximum order is requested, the most effective way to do this is to use the max_order attribute of our openmc.Settings object.
End of explanation
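# Aside (assumption about the API, not exercised in this example): the conversion of
# Legendre data to a tabular form mentioned above is controlled by the
# Settings.tabular_legendre attribute, e.g.
# settings_file.tabular_legendre = {'enable': False}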
# Run the Multi-Group OpenMC Simulation
openmc.run()
Explanation: Now we can re-run OpenMC to obtain our results
End of explanation
# Move the statepoint File
mgp0_spfile = './statepoint_mg_p0.h5'
os.rename('statepoint.' + str(batches) + '.h5', mgp0_spfile)
# Move the Summary file
mgp0_sumfile = './summary_mg_p0.h5'
os.rename('summary.h5', mgp0_sumfile)
# Load the last statepoint file and keff value
mgsp_p0 = openmc.StatePoint(mgp0_spfile, autolink=False)
# Get keff
mg_p0_keff = mgsp_p0.k_combined
bias_p0 = 1.0E5 * (ce_keff - mg_p0_keff)
print('P3 bias [pcm]: {0:1.1f}'.format(bias.nominal_value))
print('P0 bias [pcm]: {0:1.1f}'.format(bias_p0.nominal_value))
Explanation: And then get the eigenvalue difference from the continuous-energy solution and compare it with the bias of the earlier P3 MG solution
End of explanation
# Convert the zircaloy and fuel data to P0 scattering
for i, xsdata in enumerate(mgxs_file.xsdatas):
if xsdata.name != 'water':
mgxs_file.xsdatas[i] = xsdata.convert_scatter_format('legendre', 0)
Explanation: Mixed Scattering Representations
OpenMC's Multi-Group mode also includes a feature where not every data set in the library is required to have the same scattering treatment. For example, we could represent the water with P3 scattering, and the fuel and cladding with P0 scattering. This series will show how this can be done.
First we will convert the data to P0 scattering, unless it is water, in which case we will leave it as P3 data.
End of explanation
# Convert the formats as discussed
for i, xsdata in enumerate(mgxs_file.xsdatas):
if xsdata.name == 'zircaloy':
mgxs_file.xsdatas[i] = xsdata.convert_scatter_format('histogram', 2)
elif xsdata.name == 'fuel':
mgxs_file.xsdatas[i] = xsdata.convert_scatter_format('tabular', 2)
mgxs_file.export_to_hdf5('mgxs.h5')
Explanation: We can also use whatever scattering format that we want for the materials in the library. As an example, we will take this P0 data and convert zircaloy to a histogram anisotropic scattering format and the fuel to a tabular anisotropic scattering format
End of explanation
settings_file.max_order = None
# Export to "settings.xml"
settings_file.export_to_xml()
# Run the Multi-Group OpenMC Simulation
openmc.run()
Explanation: Finally we will re-set our max_order parameter of our openmc.Settings object to our maximum order so that OpenMC will use whatever scattering data is available in the library.
After we do this we can re-run the simulation.
End of explanation
# Load the last statepoint file and keff value
mgsp_mixed = openmc.StatePoint('./statepoint.' + str(batches) + '.h5')
mg_mixed_keff = mgsp_mixed.k_combined
bias_mixed = 1.0E5 * (ce_keff - mg_mixed_keff)
print('P3 bias [pcm]: {0:1.1f}'.format(bias.nominal_value))
print('Mixed Scattering bias [pcm]: {0:1.1f}'.format(bias_mixed.nominal_value))
Explanation: For a final step we can again obtain the eigenvalue difference for this case and compare it with the bias of the P3 MG solution
End of explanation
<END_TASK> |
292 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
PDF Analysis Tutorial
Introduction
This tutorial demonstrates how to acquire a multidimensional pair distribution function (PDF) from both a flat field electron diffraction pattern and a scanning electron diffraction data set.
The data is from an open-source paper by Shanmugam et al. [1] that is used as a reference standard. It is an
amorphous 18 nm SiO2 film. The scanning electron diffraction data set is a scan of a polycrystalline gold reference standard with 128x128 real space pixels and 256x256 diffraction space pixels. The implementation also initially followed Shanmugam et al.
[1] Shanmugam, J., Borisenko, K. B., Chou, Y. J., & Kirkland, A. I. (2017). eRDF Analyser
Step1: <a id='loa'></a>
1. Loading and Inspection
Load the diffraction data line profile
Step2: For now, the code requires navigation dimensions in the reduced intensity signal, so two size-1 navigation dimensions are created.
Step3: Set the diffraction pattern calibration. Note that pyXem uses a calibration to $s = \frac{1}{d} = 2\frac{\sin{\theta}}{\lambda}$.
Step4: Plot the radial profile
Step5: <a id='ri'></a>
2. Acquiring a Reduced Intensity
Acquire a reduced intensity (also called a structure factor) from the radial profile. The structure factor is what will subsequently be transformed into a PDF through a Fourier transform.
The structure factor $\phi(s)$ is acquired by fitting a background scattering factor to the data, and then transforming the data by
Step6: We then fit an electron scattering factor to the profile. To do this, we need to define a list of elements and their respective atomic fractions.
Step7: Then we will fit a background scattering factor. The scattering factor parametrisation used here is that specified by Lobato and Van Dyck [2]. The plot_fit parameter ensures we check the fitted profile.
[2] Lobato, I., & Van Dyck, D. (2014). An accurate parameterization for scattering factors, electron densities and electrostatic potentials for neutral atoms that obey all physical constraints. Acta Crystallographica Section A
Step8: That's clearly a terrible fit! This is because we're trying to fit the beam stop. To avoid this, we specify to fit to the 'tail end' of the data by specifying a minimum and maximum scattering angle range. This is generally recommended, as electron scattering factors tend to not include inelastic scattering, which means the factors are rarely perfect fits.
Step9: That's clearly much much better. Always inspect your fit.
Finally, we calculate the reduced intensity itself.
Step10: If it seems like the reduced intensity is not oscillating around 0 at high s, you should try fitting with a larger s_min. This generally solves the issue.
<a id='dri'></a>
4. Damping the Reduced Intensity
The reduced intensity acquired above does not go to zero at high s as it should because the maximum acquired scattering vector is not very high.
This would result in significant oscillation in the PDF due to a discontinuity in the Fourier transformed data. To combat this, the reduced intensity is damped. In the X-ray community, common damping functions are the Lorch function and an exponential damping function. Both are supported here.
It is worth noting that damping does reduce the resolution in r in the PDF.
Step11: Additionally, it is recommended to damp the low s regime. We use an error function to do that
Step12: If the function ends up overdamped, you can simply reacquire the reduced intensity using
Step13: <a id='pdf'></a>
5. Acquiring a PDF
Finally, a PDF is acquired from the damped reduced intensity. This is done by a Fourier sine transform.
To ignore parts of the scattering data that are too noisy, you can set a minimum and maximum scattering angle for the transform.
First, we initialise a PDFGenerator1D object.
Step14: Specify a minimum and maximum scattering angle. The maximum must be equivalent to the Lorch function s_max if the Lorch function is used to damp. Otherwise the Lorch function damping can cause artifacts in the PDF.
Step15: Finally we get the PDF. r_max specifies the maximum real space distance we want to interpret.
Step16: The PDF can then be saved. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import hyperspy.api as hs
import pyxem as pxm
import numpy as np
Explanation: PDF Analysis Tutorial
Introduction
This tutorial demonstrates how to acquire a multidimensional pair distribution function (PDF) from both a flat field electron diffraction pattern and a scanning electron diffraction data set.
The data is from an open-source paper by Shanmugam et al. [1] that is used as a reference standard. It is an
amorphous 18 nm SiO2 film. The scanning electron diffraction data set is a scan of a polycrystalline gold reference standard with 128x128 real space pixels and 256x256 diffraction space pixels. The implementation also initially followed Shanmugam et al.
[1] Shanmugam, J., Borisenko, K. B., Chou, Y. J., & Kirkland, A. I. (2017). eRDF Analyser: An interactive GUI for electron reduced density function analysis. SoftwareX, 6, 185-192.
This functionality has been checked to run in pyxem-0.13.0 (March 2021). Bugs are always possible, do not trust the code blindly, and if you experience any issues please report them here: https://github.com/pyxem/pyxem-demos/issues
Contents
<a href='#loa'> Loading & Inspection</a>
<a href='#rad'> Acquiring a radial profile</a>
<a href='#ri'> Acquiring a Reduced Intensity</a>
<a href='#dri'> Damping the Reduced Intensity</a>
<a href='#pdf'> Acquiring a PDF</a>
Import pyXem and other required libraries
End of explanation
rp = hs.load('./data/08/amorphousSiO2.hspy')
rp.set_signal_type('electron_diffraction')
Explanation: <a id='loa'></a>
1. Loading and Inspection
Load the diffraction data line profile
End of explanation
rp = pxm.signals.ElectronDiffraction1D([[rp.data]])
Explanation: For now, the code requires navigation dimensions in the reduced intensity signal, so two size-1 navigation dimensions are created.
End of explanation
calibration = 0.00167
rp.set_diffraction_calibration(calibration=calibration)
Explanation: Set the diffraction pattern calibration. Note that pyXem uses a calibration to $s = \frac{1}{d} = 2\frac{\sin{\theta}}{\lambda}$.
End of explanation
rp.plot()
Explanation: Plot the radial profile
End of explanation
rigen = pxm.generators.ReducedIntensityGenerator1D(rp)
Explanation: <a id='ri'></a>
2. Acquiring a Reduced Intensity
Acquire a reduced intensity (also called a structure factor) from the radial profile. The structure factor is what will subsequently be transformed into a PDF through a Fourier transform.
The structure factor $\phi(s)$ is acquired by fitting a background scattering factor to the data, and then transforming the data by:
$$\phi(s) = \frac{I(s) - N\Delta c_{i}f_{i}^{2}}{N\Delta c_{i}^{2}f_{i}^{2}}$$
where s is the scattering vector, $c_{i}$ and $f_{i}$ the atomic fraction and scattering factor respectively of each element in the sample, and N is a fitted parameter to the intensity.
To acquire the reduced intensity, we first initialise a ReducedIntensityGenerator1D object.
End of explanation
elements = ['Si','O']
fracs = [0.333,0.667]
Explanation: We then fit an electron scattering factor to the profile. To do this, we need to define a list of elements and their respective atomic fractions.
End of explanation
rigen.fit_atomic_scattering(elements,fracs,scattering_factor='lobato',plot_fit=True,iterpath='serpentine')
Explanation: Then we will fit a background scattering factor. The scattering factor parametrisation used here is that specified by Lobato and Van Dyck [2]. The plot_fit parameter ensures we check the fitted profile.
[2] Lobato, I., & Van Dyck, D. (2014). An accurate parameterization for scattering factors, electron densities and electrostatic potentials for neutral atoms that obey all physical constraints. Acta Crystallographica Section A: Foundations and Advances, 70(6), 636-649.
End of explanation
rigen.set_s_cutoff(s_min=1.5,s_max=4)
rigen.fit_atomic_scattering(elements,fracs,scattering_factor='lobato',plot_fit=True,iterpath='serpentine')
Explanation: That's clearly a terrible fit! This is because we're trying to fit the beam stop. To avoid this, we specify to fit to the 'tail end' of the data by specifying a minimum and maximum scattering angle range. This is generally recommended, as electron scattering factors tend to not include inelastic scattering, which means the factors are rarely perfect fits.
End of explanation
ri = rigen.get_reduced_intensity()
ri.plot()
Explanation: That's clearly much much better. Always inspect your fit.
Finally, we calculate the reduced intensity itself.
End of explanation
ri.damp_exponential(b=0.1)
ri.plot()
ri.damp_lorch(s_max=4)
ri.plot()
Explanation: If it seems like the reduced intensity is not oscillating around 0 at high s, you should try fitting with a larger s_min. This generally solves the issue.
<a id='dri'></a>
4. Damping the Reduced Intensity
The reduced intensity acquired above does not go to zero at high s as it should because the maximum acquired scattering vector is not very high.
This would result in significant oscillation in the PDF due to a discontinuity in the Fourier transformed data. To combat this, the reduced intensity is damped. In the X-ray community, common damping functions are the Lorch function and an exponential damping function. Both are supported here.
It is worth noting that damping does reduce the resolution in r in the PDF.
End of explanation
ri.damp_low_q_region_erfc(offset=4)
ri.plot()
Explanation: Additionally, it is recommended to damp the low s regime. We use an error function to do that
End of explanation
ri = rigen.get_reduced_intensity()
Explanation: If the function ends up overdamped, you can simply reacquire the reduced intensity using:
End of explanation
pdfgen = pxm.generators.PDFGenerator1D(ri)
Explanation: <a id='pdf'></a>
5. Acquiring a PDF
Finally, a PDF is acquired from the damped reduced intensity. This is done by a Fourier sine transform.
To ignore parts of the scattering data that are too noisy, you can set a minimum and maximum scattering angle for the transform.
First, we initialise a PDFGenerator1D object.
End of explanation
s_min = 0.
s_max = 4.
Explanation: Specify a minimum and maximum scattering angle. The maximum must be equivalent to the Lorch function s_max if the Lorch function is used to damp. Otherwise the Lorch function damping can cause artifacts in the PDF.
End of explanation
pdf = pdfgen.get_pdf(s_min=s_min, s_max=s_max, r_max=10)
pdf.plot()
Explanation: Finally we get the PDF. r_max specifies the maximum real space distance we want to interpret.
End of explanation
pdf.save('Demo-PDF.hspy')
Explanation: The PDF can then be saved.
End of explanation
<END_TASK> |
293 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Discount generator
Objectives
Encourage new purchases by the customer at the store
Encourage the consumption of other products
Encourage the consumption of products with a higher profit margin
Inputs and Outputs
Input
Step5: Facts
Below we define the facts that the system will handle.
Step8: Objective 1
Encourage new purchases by the customer at the store
For this there is nothing better than the classic 2x1, 3x2, etc. promotions.
Implementation
Step9: Tests
We will use the watch function to see what the engine is doing during execution.
Step12: Objective 2
Encourage the consumption of other products
To achieve this objective we will generate coupons with discount packs. Example
Step13: Tests
Step14: If we buy both products of a pack, the promotion should not be generated, since in that case the store would lose profit.
Step17: Objective 3
Encourage the consumption of products with a higher profit margin
The trick to meeting this objective is to know the profit made on each product and, if a product of the same type with a higher profit exists, to generate a discount coupon for that product so we keep earning more.
Implementation
Step18: Tests
Step19: The system must not generate a coupon if the product with the highest profit has been bought
Step20: Putting it all together
Thanks to Python we can use multiple inheritance to combine our different engines into one and give it a better user interface. | <ASSISTANT_TASK:>
Python Code:
import re
from pyknow import *
Explanation: Discount generator
Objectives
Encourage new purchases by the customer at the store
Encourage the consumption of other products
Encourage the consumption of products with a higher profit margin
Inputs and Outputs
Input: List of items the customer has bought
Output: List of discount coupons to print together with the purchase receipt
End of explanation
class Producto(Fact):
Producto que ha comprado un cliente.
>>> Producto(nombre="pepsi", tipo="refresco de cola", cantidad=1)
pass
class Cupon(Fact):
Cupón a generar para la próxima compra del cliente.
>>> Cupon(tipo="2x1", producto="pepsi")
pass
class Promo(Fact):
Promoción vigente en el comercio.
>>> Promo(tipo="2x1", **depende_de_la_promo)
pass
class Beneficio(Fact):
Define los beneficios que obtiene el comercio por cada producto.
>>> Beneficio(nombre="pepsi", tipo="refresco de cola", ganancias=0.2)
pass
Explanation: Facts
Below we define the facts that the system will handle.
End of explanation
class OfertasNxM(KnowledgeEngine):
@DefFacts()
def carga_promociones_nxm(self):
Hechos iniciales.
Genera las promociones vigentes
yield Promo(tipo="2x1", producto="Dodot")
yield Promo(tipo="2x1", producto="Leche Pascual")
yield Promo(tipo="3x2", producto="Pilas AAA")
@Rule(Promo(tipo=MATCH.t & P(lambda t: re.match(r"\d+x\d+", t)),
producto=MATCH.p),
Producto(nombre=MATCH.p))
def oferta_nxm(self, t, p):
Sabemos que el cliente volverá para aprovechar
la promoción, ya que hoy ha comprado el producto.
self.declare(Cupon(tipo=t, producto=p))
Explanation: Objective 1
Encourage new purchases by the customer at the store
For this there is nothing better than the classic 2x1, 3x2, etc. promotions.
Implementation
End of explanation
watch('RULES', 'FACTS')
nxm = OfertasNxM()
nxm.reset()
nxm.declare(Producto(nombre="Dodot"))
nxm.declare(Producto(nombre="Agua Mineral"))
nxm.declare(Producto(nombre="Pilas AAA"))
nxm.run()
nxm.facts
Explanation: Tests
We will use the watch function to see what the engine is doing during execution.
End of explanation
class OfertasPACK(KnowledgeEngine):
@DefFacts()
def carga_promociones_pack(self):
Genera las promociones vigentes
yield Promo(tipo="PACK", producto1="Fregona ACME", producto2="Mopa ACME", descuento="25%")
yield Promo(tipo="PACK", producto1="Pasta Gallo", producto2="Tomate Frito", descuento="10%")
@Rule(Promo(tipo="PACK", producto1=MATCH.p1, producto2=MATCH.p2, descuento=MATCH.d),
OR(
AND(
NOT(Producto(nombre=MATCH.p1)),
Producto(nombre=MATCH.p2)
),
AND(
Producto(nombre=MATCH.p1),
NOT(Producto(nombre=MATCH.p2))
)
)
)
def pack(self, p1, p2, d):
El cliente querrá comprar un producto adicional en su próxima visita.
self.declare(Cupon(tipo="PACK", producto1=p1, producto2=p2, descuento=d))
Explanation: Objective 2
Encourage the consumption of other products
To achieve this objective we will generate coupons with discount packs. Example:
If you buy a mop ("Fregona") and a dust mop ("Mopa") at the same time, you get a 25% discount on both products
Implementation
End of explanation
pack = OfertasPACK()
pack.reset()
pack.declare(Producto(nombre="Tomate Frito"))
pack.declare(Producto(nombre="Fregona ACME"))
pack.run()
Explanation: Tests
End of explanation
pack.reset()
pack.declare(Producto(nombre="Fregona ACME"))
pack.declare(Producto(nombre="Mopa ACME"))
pack.run()
Explanation: If we buy both products of a pack, the promotion should not be generated, since in that case the store would lose profit.
End of explanation
class OfertasDescuento(KnowledgeEngine):
@DefFacts()
def carga_beneficios(self):
Define las beneficios por producto.
yield Beneficio(nombre="Mahou", tipo="Cerveza", ganancias=0.5)
yield Beneficio(nombre="Cerveza Hacendado", tipo="Cerveza", ganancias=0.9)
yield Beneficio(nombre="Pilas AAA Duracell", tipo="Pilas AAA", ganancias=1.5)
yield Beneficio(nombre="Pilas AAA Hacendado", tipo="Pilas AAA", ganancias=2)
@Rule(Producto(nombre=MATCH.p1),
Beneficio(nombre=MATCH.p1, tipo=MATCH.t, ganancias=MATCH.g1),
Beneficio(nombre=MATCH.p2, tipo=MATCH.t, ganancias=MATCH.g2),
TEST(lambda g1, g2: g2 > g1)
)
def descuento_producto_con_mayor_beneficio(self, p2, g1, g2, **_):
diferencia_ganancia = g2 - g1
self.declare(Cupon(tipo="DESCUENTO",
producto=p2,
cantidad=diferencia_ganancia / 2))
Explanation: Objective 3
Encourage the consumption of products with a higher profit margin
The trick to meeting this objective is to know the profit made on each product and, if a product of the same type with a higher profit exists, to generate a discount coupon for that product so we keep earning more.
Implementation
End of explanation
descuento = OfertasDescuento()
descuento.reset()
descuento.declare(Producto(nombre="Mahou"))
descuento.run()
Explanation: Tests
End of explanation
descuento.reset()
descuento.declare(Producto(nombre="Pilas AAA Hacendado"))
descuento.run()
Explanation: The system must not generate a coupon if the product with the highest profit has been bought
End of explanation
class GeneradorCupones(OfertasNxM, OfertasPACK, OfertasDescuento):
def generar_cupones(self, *nombre_productos):
# Reiniciamos el motor
self.reset()
# Declaramos los productos que ha comprado el cliente
for nombre in nombre_productos:
self.declare(Producto(nombre=nombre))
# Ejecutamos el motor
self.run()
# Extraemos las promociones generadas
for fact in self.facts.values():
if isinstance(fact, Cupon):
yield fact
ke = GeneradorCupones()
[cupon for cupon in ke.generar_cupones("Pilas AAA", "Mahou", "Tomate Frito")]
Explanation: Putting it all together
Thanks to Python we can use multiple inheritance to combine our different engines into one and give it a better user interface.
End of explanation
<END_TASK> |
294 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Advanced
Step1: Let's get started with some basic imports.
Step2: And then we'll build a synthetic "dataset" and initialize a new bundle with those data
Step3: mask_enabled and mask_phases parameters
Each dataset will have a mask_enabled parameter and, if mask_enabled=True, a mask_phases parameter.
Step4: This is True by default, but mask_phases is set to an empty list, meaning that no masking will actually be applied. This mask_enabled parameter just allows for setting a mask via mask_phases and then toggling that mask on and off without having to reset mask_phases.
Step5: We can now plot the observations to see the unmasked (fake) data.
Step6: If we wanted to only include the eclipses, we could set mask_phases as a list of tuples, where each tuple contains the phase range that we want to keep.
Step7: Note that b.run_compute computes the forward model ignoring that mask, whether compute_phases is set or not.
Step8: However, when calling b.run_solver, the mask will be applied when determining what times to compute the forward-model (see Advanced | <ASSISTANT_TASK:>
Python Code:
#!pip install -I "phoebe>=2.3,<2.4"
Explanation: Advanced: mask_phases and mask_enabled
Setup
Let's first make sure we have the latest version of PHOEBE 2.3 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
import phoebe
from phoebe import u # units
import numpy as np
b = phoebe.default_binary()
Explanation: Let's get started with some basic imports.
End of explanation
b = phoebe.default_binary()
b.add_dataset('lc', times=phoebe.linspace(0,5,1001))
b.run_compute()
times = b.get_value('times@model')
fluxes = b.get_value('fluxes@model')
sigmas = np.ones_like(times) * 0.01
b = phoebe.default_binary()
b.add_dataset('lc', compute_phases=phoebe.linspace(0,1,101),
times=times, fluxes=fluxes, sigmas=sigmas,
dataset='lc01')
Explanation: And then we'll build a synthetic "dataset" and initialize a new bundle with those data
End of explanation
b.get_parameter(qualifier='mask_enabled', dataset='lc01', context='dataset')
print(b.get_parameter(qualifier='mask_enabled', dataset='lc01', context='dataset').description)
Explanation: mask_enabled and mask_phases parameters
Each dataset will have a mask_enabled parameter and, if mask_enabled=True, a mask_phases parameter.
End of explanation
b.get_parameter(qualifier='mask_phases', dataset='lc01', context='dataset')
print(b.get_parameter(qualifier='mask_phases', dataset='lc01', context='dataset').description)
Explanation: This is True by default, but mask_phases is set to an empty list, meaning that no masking will actually be applied. This mask_enabled parameter just allows for setting a mask via mask_phases and then toggling that mask on and off without having to reset mask_phases.
End of explanation
_ = b.plot(show=True)
_ = b.plot(x='phases', show=True)
Explanation: We can now plot the observations to see the unmasked (fake) data.
End of explanation
b.set_value('mask_phases', [(-0.1, 0.1), (0.4,-0.4)])
_ = b.plot(show=True)
_ = b.plot(x='phases', show=True)
Explanation: If we wanted to only include the eclipses, we could set mask_phases as a list of tuples, where each tuple contains the phase range that we want to keep.
End of explanation
b.run_compute()
_ = b.plot(x='phases', show=True, marker={'model': '+'})
b.set_value('compute_phases', [])
b.run_compute()
_ = b.plot(x='phases', show=True, marker={'model': '+'})
Explanation: Note that b.run_compute computes the forward model ignoring that mask, whether compute_phases is set or not.
End of explanation
print(b.get_parameter('phases_t0').description)
print(b.get_parameter('phases_t0').choices)
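# For example (the choice string below is hypothetical -- pick one of the values
# printed above), the phasing could be tied to a different t0 via:
# b.set_value('phases_t0', 't0_perpass')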
Explanation: However, when calling b.run_solver, the mask will be applied when determining what times to compute the forward-model (see Advanced: Solver Times) and within the cost-function (see calculate_residuals, calculate_chi2, and calculate_lnlikelihood).
Phasing Options
Like the constraint between compute times and phases, phase-masking adheres to the per-dataset phases_t0 parameter to set which of the various t0s to use when mapping between time and phase.
And in the case of time-dependent systems, these will also adhere to phases_dpdt (in the case of dpdt) and/or phases_period (in the case of apsidal motion, dperdt).
End of explanation
<END_TASK> |
295 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<img style='float
Step1: Connect to server
Step2: <hr> Random binary network
A random graph where every node has the same degree, with default styling.
<br>
This and other graph plots take matrices (e.g. adjacency matrices) as inputs.
<br>
We use the networkx library to generate the graph, then get its adjacency matrix.
<br>
Double click on a point to see its neighbors (all should have 3).
<br>
Drag points to try and move the graph, it should remain tight because of the degree structure.
Step3: <hr> Random weighted network
The previous network was binary (all links either 0 or 1).
Here the links are weighted, which is reflected in the line widths.
Step4: <hr> Lobster network
The lobster graph, a backbone with some leaves, is colored here by node ordering.
<br>
We also set all nodes with degree less than 2 to gray.
Step5: <hr> Coloring by degree
Here we color points (and also change their size) to indicate their degree.
<br>
Click to confirm bigger points have more connections.
Step6: <hr> Graph of clustering
Graphs can be a useful way to look at data that doesn't necessarily come from a graph.
<br>
Here we create a graph from a thresholded correlation matrix on data drawn from a set of clusters.
<br>
The cluster identities are clear in the resulting graph. | <ASSISTANT_TASK:>
Python Code:
import os
from lightning import Lightning
from numpy import random, asarray, linspace, corrcoef
from colorsys import hsv_to_rgb
from sklearn import datasets
import networkx as nx
Explanation: <img style='float: left' src="http://lightning-viz.github.io/images/logo.png"> <br> <br> Force-directed graphs in <a href='http://lightning-viz.github.io/'><font color='#9175f0'>Lightning</font></a>
<hr> Setup
End of explanation
lgn = Lightning(ipython=True, host='http://public.lightning-viz.org')
Explanation: Connect to server
End of explanation
n = 100
G = nx.random_regular_graph(3,n)
mat = nx.adjacency_matrix(G).todense()
lgn.force(mat)
Explanation: <hr> Random binary network
A random graph where every node has the same degree, with default styling.
<br>
This and other graph plots take matrices (e.g. adjacency matrices) as inputs.
<br>
We use the networkx library to generate the graph, then get its adjacency matrix.
<br>
Double click on a point to see its neighbors (all should have 3).
<br>
Drag points to try and move the graph, it should remain tight because of the degree structure.
End of explanation
G = nx.random_regular_graph(3,100)
mat = asarray(nx.adjacency_matrix(G).todense()) * (((random.rand(100,100))*5) ** 2)
lgn.force(mat)
Explanation: <hr> Random weighted network
The previous network was binary (all links either 0 or 1).
Here the links are weighted, which is reflected in the line widths.
End of explanation
G = nx.random_lobster(60, 0.5, 0.0, seed=44)
mat = nx.adjacency_matrix(G).todense()
n = len(G.nodes())
c = [list(asarray(hsv_to_rgb(float(x) / n , 0.8, 1.0))*255) if y > 1 else [150,150,150] for (x,y) in G.degree_iter()]
lgn.force(mat, color=c)
Explanation: <hr> Lobster network
The lobster graph, a backbone with some leaves, is colored here by node ordering.
<br>
We also set all nodes with degree less than 2 to gray.
End of explanation
G = nx.random_geometric_graph(50, 0.1)
mat = nx.adjacency_matrix(G).todense()
g = G.degree().values()
lgn.force(mat, group=g, size=(asarray(g) + 1.5)*3)
Explanation: <hr> Coloring by degree
Here we color points (and also change their size) to indicate their degree.
<br>
Click to confirm bigger points have more connections.
End of explanation
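# Note (assumption, not from the original demo): degree_iter() and dict-style
# G.degree() above follow the networkx 1.x API. A version-agnostic way to get
# per-node degrees, which also works with networkx >= 2.0, is:
degrees = dict(G.degree())
print('max degree:', max(degrees.values()))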
d, g = datasets.make_blobs(n_features=5, n_samples=50, centers=5, cluster_std=2.0, random_state=100)
c = corrcoef(d)
c[c<0.9] = 0
lgn.force(c, group=g)
Explanation: <hr> Graph of clustering
Graphs can be a useful way to look at data that doesn't necessarily come from a graph.
<br>
Here we create a graph from a thresholded correlation matrix on data drawn from a set of clusters.
<br>
The cluster identities are clear in the resulting graph.
End of explanation
<END_TASK> |
296 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Weight clustering in Keras example
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Train a tf.keras model for MNIST without clustering
Step3: Evaluate the baseline model and save it for later usage
Step4: Fine-tune the pre-trained model with clustering
Apply the cluster_weights() API to a whole pre-trained model to demonstrate its effectiveness in reducing the model size after applying zip while keeping decent accuracy. For how best to balance the accuracy and compression rate for your use case, please refer to the per layer example in the comprehensive guide.
Define the model and apply the clustering API
Before you pass the model to the clustering API, make sure it is trained and shows some acceptable accuracy.
Step5: Fine-tune the model and evaluate the accuracy against baseline
Fine-tune the model with clustering for 1 epoch.
Step6: For this example, there is minimal loss in test accuracy after clustering, compared to the baseline.
Step7: Create 6x smaller models from clustering
Both strip_clustering and applying a standard compression algorithm (e.g. via gzip) are necessary to see the compression benefits of clustering.
First, create a compressible model for TensorFlow. Here, strip_clustering removes all variables (e.g. tf.Variable for storing the cluster centroids and the indices) that clustering only needs during training, which would otherwise add to model size during inference.
Step8: Then, create compressible models for TFLite. You can convert the clustered model to a format that's runnable on your targeted backend. TensorFlow Lite is an example you can use to deploy to mobile devices.
Step9: Define a helper function to actually compress the models via gzip and measure the zipped size.
Step10: Compare and see that the models are 6x smaller from clustering
Step11: Create an 8x smaller TFLite model from combining weight clustering and post-training quantization
You can apply post-training quantization to the clustered model for additional benefits.
Step12: See the persistence of accuracy from TF to TFLite
Define a helper function to evaluate the TFLite model on the test dataset.
Step13: You evaluate the model, which has been clustered and quantized, and then see the accuracy from TensorFlow persists to the TFLite backend. | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
! pip install -q tensorflow-model-optimization
import tensorflow as tf
from tensorflow import keras
import numpy as np
import tempfile
import zipfile
import os
Explanation: Weight clustering in Keras example
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/model_optimization/guide/clustering/clustering_example"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/clustering/clustering_example.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/model-optimization/blob/master/tensorflow_model_optimization/g3doc/guide/clustering/clustering_example.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/model-optimization/tensorflow_model_optimization/g3doc/guide/clustering/clustering_example.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Overview
Welcome to the end-to-end example for weight clustering, part of the TensorFlow Model Optimization Toolkit.
Other pages
For an introduction to what weight clustering is and to determine if you should use it (including what's supported), see the overview page.
To quickly find the APIs you need for your use case (beyond fully clustering a model with 16 clusters), see the comprehensive guide.
Contents
In the tutorial, you will:
Train a tf.keras model for the MNIST dataset from scratch.
Fine-tune the model by applying the weight clustering API and see the accuracy.
Create 6x smaller TF and TFLite models from clustering.
Create an 8x smaller TFLite model from combining weight clustering and post-training quantization.
See the persistence of accuracy from TF to TFLite.
Setup
You can run this Jupyter Notebook in your local virtualenv or colab. For details of setting up dependencies, please refer to the installation guide.
End of explanation
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 to 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Define the model architecture.
model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28)),
keras.layers.Reshape(target_shape=(28, 28, 1)),
keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation=tf.nn.relu),
keras.layers.MaxPooling2D(pool_size=(2, 2)),
keras.layers.Flatten(),
keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
validation_split=0.1,
epochs=10
)
Explanation: Train a tf.keras model for MNIST without clustering
End of explanation
_, baseline_model_accuracy = model.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
_, keras_file = tempfile.mkstemp('.h5')
print('Saving model to: ', keras_file)
tf.keras.models.save_model(model, keras_file, include_optimizer=False)
Explanation: Evaluate the baseline model and save it for later usage
End of explanation
import tensorflow_model_optimization as tfmot
cluster_weights = tfmot.clustering.keras.cluster_weights
CentroidInitialization = tfmot.clustering.keras.CentroidInitialization
clustering_params = {
'number_of_clusters': 16,
'cluster_centroids_init': CentroidInitialization.LINEAR
}
# Cluster a whole model
clustered_model = cluster_weights(model, **clustering_params)
# Use smaller learning rate for fine-tuning clustered model
opt = tf.keras.optimizers.Adam(learning_rate=1e-5)
clustered_model.compile(
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=opt,
metrics=['accuracy'])
clustered_model.summary()
Explanation: Fine-tune the pre-trained model with clustering
Apply the cluster_weights() API to a whole pre-trained model to demonstrate its effectiveness in reducing the model size after applying zip while keeping decent accuracy. For how best to balance the accuracy and compression rate for your use case, please refer to the per layer example in the comprehensive guide.
Define the model and apply the clustering API
Before you pass the model to the clustering API, make sure it is trained and shows some acceptable accuracy.
End of explanation
# Fine-tune model
clustered_model.fit(
train_images,
train_labels,
batch_size=500,
epochs=1,
validation_split=0.1)
Explanation: Fine-tune the model and evaluate the accuracy against baseline
Fine-tune the model with clustering for 1 epoch.
End of explanation
_, clustered_model_accuracy = clustered_model.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
print('Clustered test accuracy:', clustered_model_accuracy)
Explanation: For this example, there is minimal loss in test accuracy after clustering, compared to the baseline.
End of explanation
final_model = tfmot.clustering.keras.strip_clustering(clustered_model)
_, clustered_keras_file = tempfile.mkstemp('.h5')
print('Saving clustered model to: ', clustered_keras_file)
tf.keras.models.save_model(final_model, clustered_keras_file,
include_optimizer=False)
Explanation: Create 6x smaller models from clustering
Both strip_clustering and applying a standard compression algorithm (e.g. via gzip) are necessary to see the compression benefits of clustering.
First, create a compressible model for TensorFlow. Here, strip_clustering removes all variables (e.g. tf.Variable for storing the cluster centroids and the indices) that clustering only needs during training, which would otherwise add to model size during inference.
End of explanation
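# Optional sanity check (not in the original tutorial): stripping removes the
# training-only clustering variables, so the stripped model should carry fewer
# variables than the cluster-wrapped one.
print('clustered (training) variables:', len(clustered_model.weights))
print('stripped (inference) variables:', len(final_model.weights))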
clustered_tflite_file = '/tmp/clustered_mnist.tflite'
converter = tf.lite.TFLiteConverter.from_keras_model(final_model)
tflite_clustered_model = converter.convert()
with open(clustered_tflite_file, 'wb') as f:
f.write(tflite_clustered_model)
print('Saved clustered TFLite model to:', clustered_tflite_file)
Explanation: Then, create compressible models for TFLite. You can convert the clustered model to a format that's runnable on your targeted backend. TensorFlow Lite is an example you can use to deploy to mobile devices.
End of explanation
def get_gzipped_model_size(file):
# It returns the size of the gzipped model in bytes.
import os
import zipfile
_, zipped_file = tempfile.mkstemp('.zip')
with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
f.write(file)
return os.path.getsize(zipped_file)
Explanation: Define a helper function to actually compress the models via gzip and measure the zipped size.
End of explanation
print("Size of gzipped baseline Keras model: %.2f bytes" % (get_gzipped_model_size(keras_file)))
print("Size of gzipped clustered Keras model: %.2f bytes" % (get_gzipped_model_size(clustered_keras_file)))
print("Size of gzipped clustered TFlite model: %.2f bytes" % (get_gzipped_model_size(clustered_tflite_file)))
Explanation: Compare and see that the models are 6x smaller from clustering
End of explanation
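# Aside (editor's addition): the same comparison expressed as a single ratio,
# reusing the helper and file paths defined above.
baseline_zipped = get_gzipped_model_size(keras_file)
clustered_zipped = get_gzipped_model_size(clustered_keras_file)
print("Clustering shrinks the gzipped Keras model by %.1fx" % (baseline_zipped / float(clustered_zipped)))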
converter = tf.lite.TFLiteConverter.from_keras_model(final_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_quant_model = converter.convert()
_, quantized_and_clustered_tflite_file = tempfile.mkstemp('.tflite')
with open(quantized_and_clustered_tflite_file, 'wb') as f:
f.write(tflite_quant_model)
print('Saved quantized and clustered TFLite model to:', quantized_and_clustered_tflite_file)
print("Size of gzipped baseline Keras model: %.2f bytes" % (get_gzipped_model_size(keras_file)))
print("Size of gzipped clustered and quantized TFlite model: %.2f bytes" % (get_gzipped_model_size(quantized_and_clustered_tflite_file)))
Explanation: Create an 8x smaller TFLite model from combining weight clustering and post-training quantization
You can apply post-training quantization to the clustered model for additional benefits.
End of explanation
def eval_model(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
# Run predictions on every image in the "test" dataset.
prediction_digits = []
for i, test_image in enumerate(test_images):
if i % 1000 == 0:
print('Evaluated on {n} results so far.'.format(n=i))
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
print('\n')
# Compare prediction results with ground truth labels to calculate accuracy.
prediction_digits = np.array(prediction_digits)
accuracy = (prediction_digits == test_labels).mean()
return accuracy
Explanation: See the persistence of accuracy from TF to TFLite
Define a helper function to evaluate the TFLite model on the test dataset.
End of explanation
interpreter = tf.lite.Interpreter(model_content=tflite_quant_model)
interpreter.allocate_tensors()
test_accuracy = eval_model(interpreter)
print('Clustered and quantized TFLite test_accuracy:', test_accuracy)
print('Clustered TF test accuracy:', clustered_model_accuracy)
Explanation: You evaluate the model, which has been clustered and quantized, and then see that the accuracy measured in TensorFlow persists on the TFLite backend.
End of explanation
<END_TASK> |
297 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Reading BEM surfaces from a forward solution
Plot BEM surfaces used for forward solution generation.
Step1: Show result | <ASSISTANT_TASK:>
Python Code:
# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
fname = data_path + '/subjects/sample/bem/sample-5120-5120-5120-bem-sol.fif'
surfaces = mne.read_bem_surfaces(fname, patch_stats=True)
print("Number of surfaces : %d" % len(surfaces))
Explanation: Reading BEM surfaces from a forward solution
Plot BEM surfaces used for forward solution generation.
End of explanation
head_col = (0.95, 0.83, 0.83) # light pink
skull_col = (0.91, 0.89, 0.67)
brain_col = (0.67, 0.89, 0.91) # light blue
colors = [head_col, skull_col, brain_col]
# 3D source space
from mayavi import mlab # noqa
mlab.figure(size=(600, 600), bgcolor=(0, 0, 0))
for c, surf in zip(colors, surfaces):
points = surf['rr']
faces = surf['tris']
mlab.triangular_mesh(points[:, 0], points[:, 1], points[:, 2], faces,
color=c, opacity=0.3)
Explanation: Show result
End of explanation
<END_TASK> |
298 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
==================================================
Background on Independent Component Analysis (ICA)
==================================================
Step1: Now try to recover the sources | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
from sklearn.decomposition import FastICA, PCA
np.random.seed(0) # set seed for reproducible results
n_samples = 2000
time = np.linspace(0, 8, n_samples)
s1 = np.sin(2 * time) # Signal 1 : sinusoidal signal
s2 = np.sign(np.sin(3 * time)) # Signal 2 : square signal
s3 = signal.sawtooth(2 * np.pi * time) # Signal 3: sawtooth signal
S = np.c_[s1, s2, s3]
S += 0.2 * np.random.normal(size=S.shape) # Add noise
S /= S.std(axis=0) # Standardize data
# Mix data
A = np.array([[1, 1, 1], [0.5, 2, 1.0], [1.5, 1.0, 2.0]]) # Mixing matrix
X = np.dot(S, A.T) # Generate observations
Explanation: ==================================================
Background on Independent Component Analysis (ICA)
==================================================
:depth: 2
Many M/EEG signals including biological artifacts reflect non-Gaussian
processes. Therefore PCA-based artifact rejection will likely perform worse at
separating the signal from noise sources.
MNE-Python supports identifying artifacts and latent components using temporal ICA.
MNE-Python implements the :class:mne.preprocessing.ICA class that facilitates applying ICA
to MEG and EEG data. Here we discuss some
basics of ICA.
Concepts
ICA finds directions in the feature space corresponding to projections with high non-Gaussianity.
not necessarily orthogonal in the original feature space, but orthogonal in the whitened feature space.
In contrast, PCA finds orthogonal directions in the raw feature
space that correspond to directions accounting for maximum variance.
or differently, if data only reflect Gaussian processes ICA and PCA are equivalent.
Example: Imagine 3 instruments playing simultaneously and 3 microphones
recording mixed signals. ICA can be used to recover the sources ie. what is played by each instrument.
ICA employs a very simple model: $X = AS$ where $X$ is our observations, $A$ is the mixing matrix and $S$ is the vector of independent (latent) sources.
The challenge is to recover $A$ and $S$ from $X$.
First generate simulated data
End of explanation
# compute ICA
ica = FastICA(n_components=3)
S_ = ica.fit_transform(X) # Get the estimated sources
A_ = ica.mixing_ # Get estimated mixing matrix
# compute PCA
pca = PCA(n_components=3)
H = pca.fit_transform(X) # estimate PCA sources
plt.figure(figsize=(9, 6))
models = [X, S, S_, H]
names = ['Observations (mixed signal)',
'True Sources',
'ICA estimated sources',
'PCA estimated sources']
colors = ['red', 'steelblue', 'orange']
for ii, (model, name) in enumerate(zip(models, names), 1):
plt.subplot(4, 1, ii)
plt.title(name)
for sig, color in zip(model.T, colors):
plt.plot(sig, color=color)
plt.tight_layout()
Explanation: Now try to recover the sources
End of explanation
<END_TASK> |
299 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
模擬瀏覽器來抓
selenium (https
Step1: 找出圖片的網址
Step2: Q
直接用 urlopen 來找找看圖片網址
有了網址, 可以用 urllib 抓下來。 也可以用 spynner 來抓。
Step3: 如何換頁?
Step4: 開始回圈來抓圖吧
Step5: 已經可以抓了,還有什麼問題?
browser 瀏覽頁面時,已經顯示圖了。之後,又再 download 一次,浪費頻寬。
介面不夠美觀,無法看到進度。
頻寬問題
概念上,有兩個方向。 一是既然 browser 顯示了圖片,表示 browser 有這份圖,我們跟 browser 要就好了。另一個剛好相反,告訴瀏覽器,不要顯示圖片,把圖片的 url 交給我們即可。
這兩個方向各有利弊,以現在這個應用來說,我選擇第二個。原因有三:
QtWebKit 有選項讓你這樣做。
這樣可行。 browser 仍然會傳回正確的圖片 url。
可以順便擋住廣告圖片。
Step6: Now let's build an interface
Step7: Block even more ads
Step8: Use threads | <ASSISTANT_TASK:>
Python Code:
import spynner
import os, sys
from IPython.display import display, Image
# open a browser with spynner
browser = spynner.Browser(debug_level=spynner.ERROR, debug_stream=sys.stderr)
browser.show() # tell the browser to stay visible from now on
# fill in the proper site address in place of ??????
base_url = 'http://v.??????.com/online/comic-7340.html?ch='
book_no = 54
browser.load( base_url+str(book_no))
Explanation: Scrape by simulating a browser
selenium (https://github.com/tjwei/2048-NN/blob/master/my2048-rl-theano-n-tuple-Copy7.ipynb)
PhantomJS http://phantomjs.org/
control WebKit directly: pyqt/pygtk/Python-Webkit/ghost.py
spynner (http://weijr-note.blogspot.tw/2014/11/blog-post.html)
Mechanize/MechanicalSoup/RoboBrowser
End of explanation
browser.load_jquery(True) # spynner bundles jQuery; loading it via this method is convenient
img_url = browser.runjs('$("#TheImg").attr("src")')
print(img_url)
Image(img_url, width=200)
# of course it also works without jQuery
browser.runjs('document.getElementById("TheImg").getAttribute("src")')
Explanation: Find the image URL
End of explanation
img_data = browser.download(img_url) # contents of the jpg file
with open("cover.jpg", 'wb') as jpg_file:
jpg_file.write(img_data)
Image(data=img_data, width=200)
Explanation: Q
Try finding the image URL directly with urlopen
Once you have the URL you can download it with urllib, or fetch it with spynner.
End of explanation
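# Aside (editor's sketch answering the "Q" above, not in the original notebook): fetching
# the page with plain urlopen returns the HTML before any JavaScript runs, so you can
# check whether the <img id="TheImg"> src is already in the static page; the notebook's
# use of runjs suggests it is injected by JavaScript, which is why a browser is driven.
# The decode() arguments are a guess about the site's encoding; base_url still needs the
# real site filled in for this to work.
from urllib.request import urlopen
html = urlopen(base_url + str(book_no)).read().decode('utf-8', errors='ignore')
print('TheImg' in html)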
# total number of pages
total_pages = int(browser.runjs('ps'))
total_pages
Explanation: How to turn pages?
End of explanation
for page in range(1, 1+total_pages):
browser.load("{}{}-{}".format(base_url, book_no, page))
img_url = browser.runjs('document.getElementById("TheImg").getAttribute("src")')
print(page, img_url)
with open("{}-{}.jpg".format(book_no, page), "wb") as f:
browser.download(img_url, outfd=f)
print("File saved in", os.getcwd())
Explanation: Loop over the pages and grab the images
End of explanation
from PySide.QtWebKit import QWebSettings # used to configure QtWebKit
# use ipywidgets for the small UI below
import ipywidgets as W
# configure the browser not to load images automatically
settings = browser.webview.settings()
settings.setAttribute(QWebSettings.AutoLoadImages, False)
# you can see that the browser no longer loads the images
total_pages = int(browser.runjs('ps'))
for page in range(1, 1+10):
browser.load("{}{}-{}".format(base_url, book_no, page))
img_url = browser.runjs('document.getElementById("TheImg").getAttribute("src")')
print(page, img_url)
Explanation: We can already scrape, so what problems remain?
While the browser renders a page it already displays the images; downloading each one again afterwards wastes bandwidth.
The interface is plain and gives no sense of progress.
The bandwidth problem
Conceptually there are two directions. One: since the browser displays the image, it already has a copy, so we could simply ask the browser for it. The other is the opposite: tell the browser not to display images and just hand the image url over to us.
These two directions each have pros and cons; for this application I pick the second one, for three reasons:
QtWebKit has a setting that lets you do exactly this.
It works: the browser still returns the correct image url.
As a bonus, it also blocks ad images.
End of explanation
# create an Image widget to preview the downloaded pages
img = W.Image()
img.width = 300
# progress bar showing download progress
progress = W.IntProgress(min=1, value=1, max=total_pages)
display(img)
display(progress)
dir_name = os.path.join("download", "{:02d}".format(book_no))
if not os.path.exists(dir_name):
os.makedirs(dir_name)
print("Download to {}".format(os.path.join(os.getcwd(), dir_name)))
sys.stdout.flush()
# start downloading
for page in range(1, total_pages+1):
    # get the image url
browser.load("{}{}-{}".format(base_url, book_no, page))
img_url = browser.runjs('document.getElementById("TheImg").getAttribute("src")')
    # download the image
fn = os.path.join(dir_name, "{:03d}.jpg".format(page))
with open(fn, "wb") as f:
browser.download(img_url, outfd=f)
    # update the widgets' state
progress.description = "%d/%d"%(page, total_pages)
progress.value = page
img.value = Image(filename=fn).data
Explanation: Now let's build an interface
End of explanation
from PySide.QtNetwork import QNetworkAccessManager, QNetworkRequest # controls the browser's network connections
from PySide.QtCore import QUrl # Qt's Url class
# close the previous browser
browser.close()
browser = spynner.Browser(debug_level=spynner.ERROR, debug_stream=sys.stderr)
# create a webview
# this time we do not set AutoLoadImages=False, but add a few other settings
# they are not the main point here, but they suit this application
browser.create_webview()
settings = browser.webview.settings()
# settings.setAttribute(QWebSettings.AutoLoadImages, False)
settings.setAttribute(QWebSettings.JavaEnabled, False) # Java is not needed
settings.setAttribute(QWebSettings.DnsPrefetchEnabled, True) # try to save time spent on DNS lookups
settings.setAttribute(QWebSettings.PrivateBrowsingEnabled, True) # no browsing history needed
# create an empty url
BLANK_REQUEST = QNetworkRequest(QUrl())
# create an empty (1x1 transparent) image url
DUMMY_IMG_REQUEST = QNetworkRequest(QUrl("data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACwAAAAAAQABAAACAkQBADs="))
# since it is used only once, a long-winded name is fine
class EightComicNetworkAccessManager(QNetworkAccessManager):
    # we only need to override the createRequest method
def createRequest(self, op, request, device=None):
        url = str(request.url().toString()) # many parameters are available, but the url alone is enough
if 'comic' not in url[:20]:
            # a crude way to detect urls that are not from the 8comic site
            # replace the original url with the blank url
return QNetworkAccessManager.createRequest(self, self.GetOperation, BLANK_REQUEST)
elif not url.endswith('js') and not url.endswith('css') and '.html' not in url:
            # anything other than .js, .css and .html is replaced with the blank image url
return QNetworkAccessManager.createRequest(self, self.GetOperation, DUMMY_IMG_REQUEST)
else:
            # return the original url unchanged
return QNetworkAccessManager.createRequest(self, op, request, device)
# install the custom NetworkAccessManager on the browser
browser.webpage.setNetworkAccessManager(EightComicNetworkAccessManager())
browser.show()
browser.load(base_url+str(book_no))
total_pages = int(browser.runjs('ps'))
%%timeit -n 1 -r 1
# create an Image widget to preview the downloaded pages
img = W.Image()
img.width = 300
# progress bar showing download progress
progress = W.IntProgress(min=1, value=1, max=total_pages)
display(img)
display(progress)
dir_name = os.path.join("download", "{:02d}".format(book_no))
if not os.path.exists(dir_name):
os.makedirs(dir_name)
print("Download to {}".format(os.path.join(os.getcwd(), dir_name)))
sys.stdout.flush()
# start downloading
for page in range(1, total_pages+1):
    # get the image url
browser.load("{}{}-{}".format(base_url, book_no, page))
img_url = browser.runjs('document.getElementById("TheImg").getAttribute("src")')
    # download the image
fn = os.path.join(dir_name, "{:03d}.jpg".format(page))
with open(fn, "wb") as f:
browser.download(img_url, outfd=f)
    # update the widgets' state
progress.description = "%d/%d"%(page, total_pages)
progress.value = page
img.value = Image(filename=fn).data
Explanation: Block even more ads
End of explanation
from urllib.request import urlopen
from multiprocessing.pool import ThreadPool
%%timeit -n 1 -r 1
book_no = 63
browser.load(base_url+str(book_no))
total_pages = int(browser.runjs('ps'))
def save_img(img_url, page):
fn = os.path.join(dir_name, "{:03d}.jpg".format(page))
with urlopen(img_url) as img_src:
with open(fn, "wb") as f:
f.write(img_src.read())
    # update the widgets' state
progress.value += 1
progress.description = "img: %d/%d"%(progress.value, total_pages)
img.value = "<img src='{}' height=300 />".format(fn)
# create a widget (an HTML widget this time) to preview the downloaded pages
img = W.HTML()
# progress bar showing download progress
progress = W.IntProgress(min=1, value=1, max=total_pages)
display(progress)
display(img)
dir_name = os.path.join("download", "{:02d}".format(book_no))
if not os.path.exists(dir_name):
os.makedirs(dir_name)
print("Download to {}".format(os.path.join(os.getcwd(), dir_name)))
sys.stdout.flush()
pool = ThreadPool(5)
for page in range(1, total_pages+1):
    # get the image url
browser.load("{}{}-{}".format(base_url, book_no, page))
img_url = browser.runjs('document.getElementById("TheImg").getAttribute("src")')
pool.apply_async(save_img, (img_url, page))
pool.close()
pool.join()
Explanation: Use threads
End of explanation
<END_TASK> |