text_prompt (string, length 168 to 30.3k) | code_prompt (string, length 67 to 124k)
---|---
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1-D series
Step2: DataFrame
Step3: Data types
Step4: converting types
Step5: Filtering with Pandas
Step6: Creating a new index based not on the row values but on the 2-letter geo-code column
Step7: Row index with "iloc" method
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd
s = pd.Series([3, 5, 67, 2, 4])
s
s.name = "OneDArray"
s
s.index
s.values
s.sum()
s.min()
s.count()
s * 3
s.sort_values()
s.value_counts()
s.abs?
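# The trailing "?" is IPython help syntax: it displays the docstring for Series.abs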
eu = pd.read_csv('data/eu_revolving_loans.csv',
header=[1,2,3], index_col=0, skiprows=1)
eu.tail(4)
eu.index
eu.columns
eu.shape
eu.min(axis=1)
eu.min()
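# axis=1 gives the minimum across columns for each row; the default (axis=0) gives the per-column minimum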
eu * 3
%pylab inline
eu.plot(legend=False)
eu.dtypes
eu = pd.read_csv('data/eu_revolving_loans.csv',
header=[1,2,3],
index_col=0,
skiprows=1,
na_values=['-'])
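# '-' placeholders are now parsed as NaN, so the numeric columns get float dtypes instead of object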
eu.dtypes
trade = pd.read_csv('data/ext_lt_intratrd.tsv', sep='\t')
trade.dtypes
trade.columns
# expect a KeyError below due to extra spaces in the column names
trade['2013']
new_cols = dict([(col, col.strip()) for col in trade.columns])
new_cols
trade.rename(columns=new_cols)
trade = trade.rename(columns=new_cols)
trade['2013']
# selecting row 3
trade.iloc[3]
# selecting all rows of the column at position 0
trade.iloc[:, 0]
# split out the column with index 0 & assign to new column 'geo'
# representing the country 2 letter code
trade['geo'] = trade.iloc[:, 0].map(lambda row: row.split(',')[-1])
trade['geo'].head()
trade['geo'].isin(['UK', 'DE'])
trade[trade['geo'].isin(['UK', 'DE'])]
# boolean selecting with more complex boolean expressions
# - find all countries where there are continuous growth from 2012-2014
trade[(trade['2014'] > trade['2013']) &
(trade['2013'] > trade['2012'])]
# create a column that represents those with increase from 2012 - 2013
trade['2013inc'] = trade['2013'] > trade['2012']
trade['2013inc'].head()
trade = trade.set_index('geo')
# now filter based on the geo column
trade.loc['DE']
# now filter based on row index for the top 100 rows
trade.iloc[:100]
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Integrating MinDiff with MinDiffModel
Step2: First, download the data. For succinctness, the input preparation logic has been factored out into helper functions as described in the input preparation guide. You can read the full guide for details on this process.
Step3: Original Model
Step4: Training with a tf.data.Dataset
Step5: Integrating MinDiff for training
Step6: Wrap it in a MinDiffModel.
Step7: Compile it as you would without MinDiff.
Step8: Train it with the MinDiff dataset (train_with_min_diff_ds in this case).
Step9: Evaluation and Prediction with MinDiffModel
Step10: When calling predict, you can technically also pass in the dataset with the MinDiff data, but it will be ignored and will not affect the output.
Step11: Limitations of using MinDiffModel directly
Step12: For keras.Sequential or keras.Model, this is perfectly fine since they use the same functions.
Step13: However, if your model is a subclass of keras.Model, wrapping it with MinDiffModel will effectively lose the customization.
| <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!pip install --upgrade tensorflow-model-remediation
import tensorflow as tf
tf.get_logger().setLevel('ERROR') # Avoid TF warnings.
from tensorflow_model_remediation import min_diff
from tensorflow_model_remediation.tools.tutorials_utils import uci as tutorials_utils
# Original DataFrame for training, sampled at 0.3 for reduced runtimes.
train_df = tutorials_utils.get_uci_data(split='train', sample=0.3)
# Dataset needed to train with MinDiff.
train_with_min_diff_ds = (
tutorials_utils.get_uci_with_min_diff_dataset(split='train', sample=0.3))
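# The packed MinDiff dataset yields batches containing the original examples together with the
# MinDiff (sensitive/non-sensitive group) examples used to compute the MinDiff loss.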
model = tutorials_utils.get_uci_model()
model.compile(optimizer='adam', loss='binary_crossentropy')
df_without_target = train_df.drop(['target'], axis=1) # Drop 'target' for x.
_ = model.fit(
x=dict(df_without_target), # The model expects a dictionary of features.
y=train_df['target'],
batch_size=128,
epochs=1)
model = tutorials_utils.get_uci_model()
model.compile(optimizer='adam', loss='binary_crossentropy')
_ = model.fit(
tutorials_utils.df_to_dataset(train_df, batch_size=128), # Converted to Dataset.
epochs=1)
original_model = tutorials_utils.get_uci_model()
min_diff_model = min_diff.keras.MinDiffModel(
original_model=original_model,
loss=min_diff.losses.MMDLoss(),
loss_weight=1)
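# loss_weight scales the MinDiff penalty relative to the original training loss; 1 adds it unscaled.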
min_diff_model.compile(optimizer='adam', loss='binary_crossentropy')
_ = min_diff_model.fit(train_with_min_diff_ds, epochs=1)
_ = min_diff_model.evaluate(
tutorials_utils.df_to_dataset(train_df, batch_size=128))
# Calling with MinDiff data will include min_diff_loss in metrics.
_ = min_diff_model.evaluate(train_with_min_diff_ds)
_ = min_diff_model.predict(
tutorials_utils.df_to_dataset(train_df, batch_size=128))
_ = min_diff_model.predict(train_with_min_diff_ds) # Identical to results above.
print('MinDiffModel.fit == keras.Model.fit')
print(min_diff.keras.MinDiffModel.fit == tf.keras.Model.fit)
print('MinDiffModel.train_step == keras.Model.train_step')
print(min_diff.keras.MinDiffModel.train_step == tf.keras.Model.train_step)
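# MinDiffModel keeps the inherited keras.Model.fit but supplies its own train_step,
# which is what these equality checks illustrate.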
print('Sequential.fit == keras.Model.fit')
print(tf.keras.Sequential.fit == tf.keras.Model.fit)
print('tf.keras.Sequential.train_step == keras.Model.train_step')
print(tf.keras.Sequential.train_step == tf.keras.Model.train_step)
class CustomModel(tf.keras.Model):
def train_step(self, **kwargs):
pass # Custom implementation.
print('CustomModel.train_step == keras.Model.train_step')
print(CustomModel.train_step == tf.keras.Model.train_step)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will output to a static html file.
Step2: See many examples of configuring plot tools at http
Step3: Here we'll interact with Glue from the notebook.
Step4: Now we have access to the data collection in our notebook
Step5: Now go select the "Western arm" of the star-forming region (in Glue) and make a subset of it
Step6: We can add something to our catalog and it shows up in Glue.
Step7: We can define a new subset group here or in Glue
| <ASSISTANT_TASK:>
Python Code:
import bokeh
import numpy as np
from astropy.table import Table
sdss = Table.read('data/sdss_galaxies_qsos_50k.fits')
sdss
from bokeh.models import ColumnDataSource
from bokeh.plotting import figure, gridplot, output_notebook, output_file, show
umg = sdss['u'] - sdss['g']
gmr = sdss['g'] - sdss['r']
rmi = sdss['r'] - sdss['i']
imz = sdss['i'] - sdss['z']
# create a column data source for the plots to share
source = ColumnDataSource(data=dict(umg=umg, gmr=gmr, rmi=rmi,imz=imz))
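# Because both figures share this ColumnDataSource, box/poly selections made in one plot
# highlight the same rows in the other (linked brushing).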
output_file('sdss_color_color.html')
TOOLS = "pan,wheel_zoom,reset,box_select,poly_select,help"
# create a new plot and add a renderer
left = figure(tools=TOOLS, width=400, height=400, title='SDSS g-r vs u-g', webgl=True)
left.x('umg', 'gmr', source=source)
# create another new plot and add a renderer
right = figure(tools=TOOLS, width=400, height=400, title='SDSS i-z vs r-i')
right.x('rmi', 'imz', source=source)
p = gridplot([[left, right]])
show(p)
#import glue
# Quick way to launch Glue
#from glue import qglue
#qglue()
import astropy.io.fits as fits
hdu = fits.open('data/w5.fits')
hdu[0].header
from astropy.table import Table
w5catalog = Table.read('data/w5_psc.vot')
wisecat = Table.read('data/w5_wise.tbl', format='ipac')
%gui qt
#qglue(catalog=catalog, image=hdu, wisecat=wisecat)
from glue.core.data_factories import load_data
from glue.core import DataCollection
from glue.core.link_helpers import LinkSame
from glue.app.qt.application import GlueApplication
#load 2 datasets from files
image = load_data('data/w5.fits')
catalog = load_data('data/w5_psc.vot')
dc = DataCollection([image, catalog])
# link positional information
dc.add_link(LinkSame(image.id['Right Ascension'], catalog.id['RAJ2000']))
dc.add_link(LinkSame(image.id['Declination'], catalog.id['DEJ2000']))
#start Glue
app = GlueApplication(dc)
app.start()
dc
dc[0].components
dc[0].id['Right Ascension']
catalog = dc[1]
j_minus_h = catalog['Jmag'] - catalog['Hmag']
catalog['jmh'] = j_minus_h
hmag = catalog.id['Hmag']
jmag = catalog.id['Jmag']
jmhred = (jmag - hmag) > 1.5
dc.new_subset_group('j - h > 1.5', jmhred)
dc.subset_groups
dc.subset_groups[2].label
catalog.subsets
catalog.subsets[0]['Jmag']
mask = catalog.subsets[0].to_mask()
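# to_mask() returns a boolean array over the catalog rows, which can be used to index the original astropy Table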
new_catalog = w5catalog[mask]
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load data
Step2: Create the figure
| <ASSISTANT_TASK:>
Python Code:
import shapefile
import numpy as np
from matplotlib import cm, rcParams
import matplotlib.pyplot as plt
from mpl_toolkits.basemap import Basemap
rcParams.update({'font.size': 16}) # Increase font-size
# Load the topo file to get header information
etopo1name = 'data/basemap/etopo1_bedrock.asc'
topo_file = open(etopo1name, 'r')
# Read header (number of columns and rows, cell-size, and lower left coordinates)
ncols = int(topo_file.readline().split()[1])
nrows = int(topo_file.readline().split()[1])
xllcorner = float(topo_file.readline().split()[1])
yllcorner = float(topo_file.readline().split()[1])
cellsize = float(topo_file.readline().split()[1])
topo_file.close()
# Read in topography as a whole, disregarding first five rows (header)
etopo = np.loadtxt(etopo1name, skiprows=5)
# Data resolution is quite high. I decrease the data resolution
# to decrease the size of the final figure
dres = 2
# Swap the rows
etopo[:nrows+1, :] = etopo[nrows+1::-1, :]
etopo = etopo[::dres, ::dres]
# Create longitude and latitude vectors for etopo
lons = np.arange(xllcorner, xllcorner+cellsize*ncols, cellsize)[::dres]
lats = np.arange(yllcorner, yllcorner+cellsize*nrows, cellsize)[::dres]
fig = plt.figure(figsize=(8, 6))
# Create basemap, 870 km east-west, 659 km north-south,
# intermediate resolution, Transverse Mercator projection,
# centred around lon/lat 1°/58.5°
m = Basemap(width=870000, height=659000,
resolution='i', projection='tmerc',
lon_0=1, lat_0=58.5)
# Draw coast line
m.drawcoastlines(color='k')
# Draw continents and lakes
m.fillcontinents(lake_color='b', color='none')
# Draw a thick border around the whole map
m.drawmapboundary(linewidth=3)
# Convert etopo1 coordinates lon/lat in ° to x/y in m
# (From the basemap help: Calling a Basemap class instance with the arguments
# lon, lat will convert lon/lat (in degrees) to x/y map projection coordinates
# (in meters).)
rlons, rlats = m(*np.meshgrid(lons,lats))
# Draw etopo1, first for land and then for the ocean, with different colormaps
llevels = np.arange(-500,2251,100) # check etopo.ravel().max()
lcs = m.contourf(rlons, rlats, etopo, llevels, cmap=cm.terrain)
olevels = np.arange(-3500,1,100) # check etopo.ravel().min()
cso = m.contourf(rlons, rlats, etopo, olevels, cmap=cm.ocean)
# Draw parallels and meridians
m.drawparallels(np.arange(-56,63.,2.), color='.2', labels=[1,0,0,0])
m.drawparallels(np.arange(-55,63.,2.), color='.2', labels=[0,0,0,0])
m.drawmeridians(np.arange(-6.,12.,2.), color='.2', labels=[0,0,0,1])
m.drawmeridians(np.arange(-7.,12.,2.), color='.2', labels=[0,0,0,0])
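# The second drawparallels/drawmeridians calls are offset by 1 degree and unlabelled,
# giving a 1-degree grid that is labelled only every 2 degrees.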
# Draw Block 9 boundaries
m.plot([1, 2, 2, 1, 1], [59, 59, 60, 60, 59], 'b', linewidth=2, latlon=True)
plt.annotate('9', m(1.1, 59.7), color='b')
# Draw maritime boundaries
m.readshapefile('data/basemap/DECC_OFF_Median_Line', 'medline', linewidth=2)
# Add Harding, Edinburgh, Bergen
# 1. Convert coordinates
EDIx, EDIy = m(-3.188889, 55.953056)
BERx, BERy = m(5.33, 60.389444)
HARx, HARy = m(1.5, 59.29)
# 2. Plot symbol
plt.plot(HARx, HARy, mfc='r', mec='k', marker='s', markersize=10)
plt.plot(EDIx, EDIy, mfc='r', mec='k', marker='o', markersize=10)
plt.plot(BERx, BERy, mfc='r', mec='k', marker='o', markersize=10)
# 3. Plot name
plt.text(EDIx+50000, EDIy+10000,'Edinburgh', color='r')
plt.text(BERx-140000, BERy, 'Bergen', color='r')
plt.text(HARx-160000, HARy, 'Harding', color='r')
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Online Prediction with scikit-learn on AI Platform
Step2: Download the data
Step3: Part 1
Step4: Part 2
Step5: Note
Step6: Part 3
Step7: Part 4
Step8: Use the created YAML file to create a model version.
Step9: Part 5
Step10: Use gcloud to make online predictions
Step11: Test the model with an online prediction using the data of a person who makes >50K.
Step12: Use Python to make online predictions
Step13: [Optional] Part 6
Step14: Use a confusion matrix to create a visualization of the predicted results from the local model. These results should be identical to the results above.
Step15: Directly compare the two results
| <ASSISTANT_TASK:>
Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
%env PROJECT_ID PROJECT_ID
%env BUCKET_NAME BUCKET_NAME
%env MODEL_NAME census
%env VERSION_NAME v1
%env REGION us-central1
# Create a directory to hold the data
! mkdir census_data
# Download the data
! curl https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data --output census_data/adult.data
! curl https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.test --output census_data/adult.test
import googleapiclient.discovery
import json
import numpy as np
import os
import pandas as pd
import pickle
from sklearn.ensemble import RandomForestClassifier
from sklearn.externals import joblib
from sklearn.feature_selection import SelectKBest
from sklearn.pipeline import FeatureUnion
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import LabelBinarizer
# Define the format of your input data including unused columns (These are the columns from the census data files)
COLUMNS = (
'age',
'workclass',
'fnlwgt',
'education',
'education-num',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'capital-gain',
'capital-loss',
'hours-per-week',
'native-country',
'income-level'
)
# Categorical columns are columns that need to be turned into a numerical value to be used by scikit-learn
CATEGORICAL_COLUMNS = (
'workclass',
'education',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'native-country'
)
# Load the training census dataset
with open('./census_data/adult.data', 'r') as train_data:
raw_training_data = pd.read_csv(train_data, header=None, names=COLUMNS)
# Remove the column we are trying to predict ('income-level') from our features list
# Convert the DataFrame to a list of lists
train_features = raw_training_data.drop('income-level', axis=1).values.tolist()
# Create our training labels list, converting the DataFrame to a list of lists
train_labels = (raw_training_data['income-level'] == ' >50K').values.tolist()
# Load the test census dataset
with open('./census_data/adult.test', 'r') as test_data:
raw_testing_data = pd.read_csv(test_data, names=COLUMNS, skiprows=1)
# Remove the column we are trying to predict ('income-level') from our features list
# Convert the DataFrame to a list of lists
test_features = raw_testing_data.drop('income-level', axis=1).values.tolist()
# Create our test labels list, converting the DataFrame to a list of lists
test_labels = (raw_testing_data['income-level'] == ' >50K.').values.tolist()
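# Note: labels in adult.test carry a trailing period (' >50K.'), unlike adult.data,
# hence the different comparison string.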
# Since the census data set has categorical features, we need to convert
# them to numerical values. We'll use a list of pipelines to convert each
# categorical column and then use FeatureUnion to combine them before calling
# the RandomForestClassifier.
categorical_pipelines = []
# Each categorical column needs to be extracted individually and converted to a numerical value.
# To do this, each categorical column will use a pipeline that extracts one feature column via
# SelectKBest(k=1) and a LabelBinarizer() to convert the categorical value to a numerical one.
# A scores array (created below) will select and extract the feature column. The scores array is
# created by iterating over the COLUMNS and checking if it is a CATEGORICAL_COLUMN.
for i, col in enumerate(COLUMNS[:-1]):
if col in CATEGORICAL_COLUMNS:
# Create a scores array to get the individual categorical column.
# Example:
# data = [39, 'State-gov', 77516, 'Bachelors', 13, 'Never-married', 'Adm-clerical',
# 'Not-in-family', 'White', 'Male', 2174, 0, 40, 'United-States']
# scores = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
#
# Returns: [['State-gov']]
# Build the scores array.
scores = [0] * len(COLUMNS[:-1])
# This column is the categorical column we want to extract.
scores[i] = 1
skb = SelectKBest(k=1)
skb.scores_ = scores
# Convert the categorical column to a numerical value
lbn = LabelBinarizer()
r = skb.transform(train_features)
lbn.fit(r)
# Create the pipeline to extract the categorical feature
categorical_pipelines.append(
('categorical-{}'.format(i), Pipeline([
('SKB-{}'.format(i), skb),
('LBN-{}'.format(i), lbn)])))
# Create pipeline to extract the numerical features
skb = SelectKBest(k=6)
# From COLUMNS use the features that are numerical
skb.scores_ = [1, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0]
categorical_pipelines.append(('numerical', skb))
# Combine all the features using FeatureUnion
preprocess = FeatureUnion(categorical_pipelines)
# Create the classifier
classifier = RandomForestClassifier()
# Transform the features and fit them to the classifier
classifier.fit(preprocess.transform(train_features), train_labels)
# Create the overall model as a single pipeline
pipeline = Pipeline([
('union', preprocess),
('classifier', classifier)
])
# Export the model to a file
joblib.dump(pipeline, 'model.joblib')
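# AI Platform looks for the exported artifact under the fixed name model.joblib (or model.pkl)
# at the top level of the deployment directory.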
print('Model trained and saved')
! gcloud config set project $PROJECT_ID
! gsutil cp ./model.joblib gs://$BUCKET_NAME/model.joblib
! gcloud ml-engine models create $MODEL_NAME --regions $REGION
%%writefile ./config.yaml
deploymentUri: "gs://BUCKET_NAME/"
runtimeVersion: '1.4'
framework: "SCIKIT_LEARN"
pythonVersion: "3.5"
! gcloud ml-engine versions create $VERSION_NAME \
--model $MODEL_NAME \
--config config.yaml
# Get one person that makes <=50K and one that makes >50K to test our model.
print('Show a person that makes <=50K:')
print('\tFeatures: {0} --> Label: {1}\n'.format(test_features[0], test_labels[0]))
with open('less_than_50K.json', 'w') as outfile:
json.dump(test_features[0], outfile)
print('Show a person that makes >50K:')
print('\tFeatures: {0} --> Label: {1}'.format(test_features[3], test_labels[3]))
with open('more_than_50K.json', 'w') as outfile:
json.dump(test_features[3], outfile)
! gcloud ml-engine predict --model $MODEL_NAME --version $VERSION_NAME --json-instances less_than_50K.json
! gcloud ml-engine predict --model $MODEL_NAME --version $VERSION_NAME --json-instances more_than_50K.json
import googleapiclient.discovery
import os
import pandas as pd
PROJECT_ID = os.environ['PROJECT_ID']
VERSION_NAME = os.environ['VERSION_NAME']
MODEL_NAME = os.environ['MODEL_NAME']
service = googleapiclient.discovery.build('ml', 'v1')
name = 'projects/{}/models/{}'.format(PROJECT_ID, MODEL_NAME)
name += '/versions/{}'.format(VERSION_NAME)
# Due to the size of the data, it needs to be split in 2
first_half = test_features[:int(len(test_features)/2)]
second_half = test_features[int(len(test_features)/2):]
complete_results = []
for data in [first_half, second_half]:
responses = service.projects().predict(
name=name,
body={'instances': data}
).execute()
if 'error' in responses:
print(responses['error'])
else:
complete_results.extend(responses['predictions'])
# Print the first 10 responses
for i, response in enumerate(complete_results[:10]):
print('Prediction: {}\tLabel: {}'.format(response, test_labels[i]))
actual = pd.Series(test_labels, name='actual')
online = pd.Series(complete_results, name='online')
pd.crosstab(actual,online)
local_results = pipeline.predict(test_features)
local = pd.Series(local_results, name='local')
pd.crosstab(actual,local)
identical = 0
different = 0
for i in range(len(complete_results)):
if complete_results[i] == local_results[i]:
identical += 1
else:
different += 1
print('identical: {}, different: {}'.format(identical,different))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
value = 1.0
result = np.degrees(np.arcsin(value))
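# np.arcsin returns radians; np.degrees converts, so arcsin(1.0) = pi/2 gives result = 90.0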
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The following example loads a cube from iris-sample-data and displays it as follows
Step2: With HoloViews, you can quickly view the data in the cube interactively
| <ASSISTANT_TASK:>
Python Code:
import holoviews as hv
import holocube as hc
from cartopy import crs
from cartopy import feature as cf
hv.notebook_extension()
%%opts GeoFeature [projection=crs.Geostationary()]
coasts = hc.GeoFeature(cf.COASTLINE)
borders = hc.GeoFeature(cf.BORDERS)
ocean = hc.GeoFeature(cf.OCEAN)
ocean + borders + (ocean*borders).relabel("Overlay")
import iris
surface_temp = iris.load_cube(iris.sample_data_path('GloSea4', 'ensemble_001.pp'))
print(surface_temp.summary())
%%opts GeoImage [colorbar=True] (cmap='viridis')
(hc.HoloCube(surface_temp).groupby(['time'], group_type=hc.Image) * hc.GeoFeature(cf.COASTLINE))
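# groupby(['time'], group_type=hc.Image) produces one Image per time step, which the notebook
# renders with a time slider, overlaid with the coastline feature.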
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
| <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'sandbox-3', 'ocean')
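# DOC accumulates the documentation: each set_id call below selects a CMIP6 ocean property
# and the following set_value call(s) record its value(s).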
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating
Step2: You can also supply more extensive metadata
Step3: Note: When assigning new values to the fields of an
Step4: Creating
Step5: It is necessary to supply an "events" array in order to create an Epochs
Step6: More information about the event codes
Step7: Finally, we must specify the beginning of an epoch (the end will be inferred
Step8: Now we can create the
Step9: Creating
| <ASSISTANT_TASK:>
Python Code:
import mne
import numpy as np
# Create some dummy metadata
n_channels = 32
sampling_rate = 200
info = mne.create_info(n_channels, sampling_rate)
print(info)
# Names for each channel
channel_names = ['MEG1', 'MEG2', 'Cz', 'Pz', 'EOG']
# The type (mag, grad, eeg, eog, misc, ...) of each channel
channel_types = ['grad', 'grad', 'eeg', 'eeg', 'eog']
# The sampling rate of the recording
sfreq = 1000 # in Hertz
# The EEG channels use the standard naming strategy.
# By supplying the 'montage' parameter, approximate locations
# will be added for them
montage = 'standard_1005'
# Initialize required fields
info = mne.create_info(channel_names, sfreq, channel_types, montage)
# Add some more information
info['description'] = 'My custom dataset'
info['bads'] = ['Pz'] # Names of bad channels
print(info)
# Generate some random data
data = np.random.randn(5, 1000)
# Initialize an info structure
info = mne.create_info(
ch_names=['MEG1', 'MEG2', 'EEG1', 'EEG2', 'EOG'],
ch_types=['grad', 'grad', 'eeg', 'eeg', 'eog'],
sfreq=100
)
custom_raw = mne.io.RawArray(data, info)
print(custom_raw)
# Generate some random data: 10 epochs, 5 channels, 2 seconds per epoch
sfreq = 100
data = np.random.randn(10, 5, sfreq * 2)
# Initialize an info structure
info = mne.create_info(
ch_names=['MEG1', 'MEG2', 'EEG1', 'EEG2', 'EOG'],
ch_types=['grad', 'grad', 'eeg', 'eeg', 'eog'],
sfreq=sfreq
)
# Create an event matrix: 10 events with alternating event codes
events = np.array([
[0, 0, 1],
[1, 0, 2],
[2, 0, 1],
[3, 0, 2],
[4, 0, 1],
[5, 0, 2],
[6, 0, 1],
[7, 0, 2],
[8, 0, 1],
[9, 0, 2],
])
event_id = dict(smiling=1, frowning=2)
# Trials were cut from -0.1 to 1.0 seconds
tmin = -0.1
custom_epochs = mne.EpochsArray(data, info, events, tmin, event_id)
print(custom_epochs)
# We can treat the epochs object as we would any other
_ = custom_epochs['smiling'].average().plot()
# The averaged data
data_evoked = data.mean(0)
# The number of epochs that were averaged
nave = data.shape[0]
# A comment to describe to evoked (usually the condition name)
comment = "Smiley faces"
# Create the Evoked object
evoked_array = mne.EvokedArray(data_evoked, info, tmin,
comment=comment, nave=nave)
print(evoked_array)
_ = evoked_array.plot()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: LHC data
Step2: Utilities
Step3: Columns
Step4: And what if I want to print the columns one by one?
Step5: Remember
Step6: Splitting the data
Step7: Questions
Step8: Visualize!
Step9: Histograms
Step10: Scatter plots
Step11: Do you see any problem?
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np # numerical computing module
import matplotlib.pyplot as plt # plotting module
# this line makes plots appear inline in the notebook
import seaborn as sns
%matplotlib inline
df = pd.read_csv('files/mini-LHC.csv')
df.head()
print(df.shape)
print(len(df))
print(df.columns)
for col in df.columns:
print(col)
df['PRI_met']
boson_df = df[df['Label']=='s']
ruido_df = df[df['Label']=='b']
print (len(boson_df))
print (len(ruido_df))
sns.boxplot(x="Label", y="DER_mass_MMC",data=df)
plt.show()
sns.distplot(boson_df["DER_mass_MMC"],label='boson')
sns.distplot(ruido_df["DER_mass_MMC"],label='ruido')
plt.ylabel('Frecuencia')
plt.legend()
plt.title("Distribucion de DER_mass_MMC")
plt.show()
ejeX = "DER_mass_MMC"
ejeY = "PRI_tau_pt"
plt.scatter(df[ejeX],df[ejeY],alpha=0.5)
plt.xlabel(ejeX)
plt.ylabel(ejeY)
plt.show()
ejeX = "DER_mass_MMC"
ejeY = "PRI_tau_pt"
plt.scatter(boson_df[ejeX],boson_df[ejeY],c='r',alpha=0.9,s=20,label='boson',lw=0)
plt.scatter(ruido_df[ejeX],ruido_df[ejeY],c='g',alpha=0.1,s=10,label='ruido',lw=0)
plt.xlabel(ejeX)
plt.ylabel(ejeY)
plt.legend()
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating an ICsound object automatically starts the engine
Step2: You can set the properties of the Csound engine with parameters to the startEngine() function.
Step3: The engine runs in a separate thread, so it doesn't block execution of python.
Step4: Use the %%csound magic command to directly type csound language code in the cell and send it to the engine. The number after the magic command is optional; it references the slot where the engine is running. If omitted, slot#1 is assumed.
Step5: So where did it print?
Step6: By default, messages from Csound are not shown, but they are stored in an internal buffer. You can view them with the printLog() function. If the log is getting too long and confusing, use the clearLog() function.
Step7: Tables can be plotted in the usual matplotlib way, but ICsound provides a plotTable function which styles the graphs.
Step8: You can get the function table values from the csound instance
Step9: Tables can also be passed by their variable name in Csound
Step10: The following will create 320 tables with 720 points each
Step11: Sending instruments
Step12: Channels
Step13: You can also read the channels from Csound. These channels can be set from ICsound or within instruments with the outvalue/chnset opcodes
Step14: Recording the output
Step15: Remote engines
Step16: Now send notes and instruments from the client
Step17: And show the log in the server
Step18: Stopping the engine
Step19: If we don't need cs_client anymore, we can delete its slot with the %csound line magic (note the single % sign and the negative slot#). The python instance cs_client can then be deleted
Step20: Audification
Step21: Instrument to play back the earthquake data stored in a table
Step22: Listen
Step23: Slower
Step24: Quicker
Step25: Other tests
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%load_ext csoundmagics
cs = ICsound(port=12894)
help(cs.startEngine)
cs.startEngine()
%%csound 1
gkinstr init 1
%%csound
print i(gkinstr)
cs.printLog()
cs.fillTable(1, np.array([8, 7, 9, 1, 1, 1]))
cs.fillTable(2, [4, 5, 7, 0, 8, 7, 9, 6])
cs.plotTable(1)
cs.plotTable(2, reuse=True)
plt.grid()
cs.table(2)
cs.makeTable(2, 1024, 10, 1)
cs.makeTable(3, 1024, -10, 0.5, 1)
cs.plotTable(2)
cs.plotTable(3, reuse=True)
#ylim((-1.1,1.1))
cs.table(2)[100: 105]
%%csound 1
giHalfSine ftgen 0, 0, 1024, 9, .5, 1, 0
cs.plotTable('giHalfSine')
randsig = np.random.random((320, 720))
i = 0
for i, row in enumerate(randsig):
cs.fillTable(50 + i, row)
print(i, '..', end=' ')
cs.plotTable(104)
%%csound 1
instr 1
asig asds
%%csound 1
instr 1
asig oscil 0.5, 440
outs asig, asig
%%csound 1
instr 1
asig oscil 0.5, 440
outs asig, asig
endin
cs.setChannel("val", 20)
cs.channel("val")
cs.startRecord("out.wav")
cs.sendScore("i 1 0 1")
import time
time.sleep(1)
cs.stopRecord()
!aplay out.wav
cs_client = ICsound()
cs_client.startClient()
cs.clearLog()
cs_client.sendScore("i 1 0 1")
cs_client.sendCode("print i(gkinstr)")
cs.printLog()
cs.stopEngine()
cs
%csound -2
del cs_client
prefix = 'http://service.iris.edu/irisws/timeseries/1/query?'
SCNL_parameters = 'net=IU&sta=ANMO&loc=00&cha=BHZ&'
times = 'starttime=2005-01-01T00:00:00&endtime=2005-01-02T00:00:00&'
output = 'output=ascii'
import urllib
f = urllib.request.urlopen(prefix + SCNL_parameters + times + output)
timeseries = f.read()
import ctcsound
data = ctcsound.pstring(timeseries).split('\n')
dates = []
values = []
for line in data[1:-1]:
date, val = line.split()
dates.append(date)
values.append(float(val))
plt.plot(values)
cs.startEngine()
cs.fillTable(1, values)
%%csound 1
instr 1
idur = p3
itable = p4
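; poscil scans the data table (p4) exactly once over the note duration p3,
; scaling the raw seismometer counts by 1/8000 so the output stays in audio range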
asig poscil 1/8000, 1/p3, p4
outs asig, asig
endin
cs.sendScore('i 1 0 3 1')
cs.sendScore('i 1 0 7 1')
cs.sendScore('i 1 0 1 1')
ics = ICsound(bufferSize=64)
ics.listInterfaces()
%%csound 2
instr 1
asig oscil 0.5, 440
outs asig, asig
endin
ics.sendScore("i 1 0 0.5")
%csound -2
del ics
cs.stopEngine()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem 1
Step2: Generate a set of $50$ one-dimensional inputs regularly spaced between -5 and 5 and store them in a variable called x, then compute the covariance matrix for these inputs, for $A=\Gamma=1$, store the results in a variable called K, and display it using matplotlib's imshow function.
Step3: Problem 1b
Step4: Now draw 5 samples from the distribution and plot them.
Step5: Problem 1c
Step6: Execute the cell below to define a handful of observations
Step7: Evaluate and plot the mean and 95% confidence interval of the resulting posterior distribution, as well as a few samples, for a squared exponential GP with $A=\Gamma=1$, assuming the measurement uncertainty on each observation was 0.1
Step8: Some things to note
Step9: Try evaluating the likelihood of the model given the observations you defined in problem 1 by executing the cell below. Hopefully it will run without errors...
Step10: Now try changing the covariance parameters and the observational uncertainties, and see how that affects the likelihood. Does it behave as you would expect, given the way these parameters affected the predictive distribution?
Step11: Plot the data and the predictive distribution and samples for the best-fit hyper-parameters
Step12: That may not have worked quite as well as you might have liked -- it's normal
Step13: Problem 3a
Step14: Problem 3b
Step15: Now you are ready to fit for all the hyper-parameters simultaneously
Step16: NB
Step17: NB
Step18: Now try fitting the data using the LinearMean mean function and the M32Kernel covariance function.
Step19: How does the best fit likelihood compare to what you obtained using the SEKernel? Which kernel would you adopt if you had to chose between the two. Write your answer in the cell below.
Step20: Now evaluate the BIC in each case. Which model is preferred?
Step21: Thus the model with a non-zero mean function is strongly preferred (BIC differences $> 10$ are generally considered to represent very strong support for one model over the other).
Step22: As you can see, the predictive distributions are essentially indistinguishable in regions where we have lots of data, but the predictive ability of the model without a mean function is much poorer away from the data. Of course, this is as expected.
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import cdist
from numpy.random import multivariate_normal
from numpy.linalg import inv
from numpy.linalg import slogdet
from scipy.optimize import fmin
def SEKernel(par, x1, x2):
A, Gamma = par
D2 = cdist(x1.reshape(len(x1),1), x2.reshape(len(x2),1),
metric = 'sqeuclidean')
return A * np.exp(-Gamma*D2)
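For reference, the function above implements the squared exponential covariance between two inputs $x_1$ and $x_2$,
$$k_{\mathrm{SE}}(x_1,x_2) = A \exp\!\left[-\Gamma\,(x_1-x_2)^2\right],$$
where $A$ sets the output variance and $\Gamma$ is the inverse squared length scale.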
x = np.linspace(-5,5,50)
K = SEKernel([1.0,1.0],x,x)
plt.imshow(K,interpolation='none');
m = np.zeros(len(x))
sig = np.sqrt(np.diag(K))
plt.plot(x,m,'k-')
plt.fill_between(x,m+2*sig,m-2*sig,color='k',alpha=0.2)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title('Prior distribution');
samples = multivariate_normal(m,K,5)
plt.plot(x,samples.T)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title('Samples from prior distribution');
def Pred_GP(CovFunc, CovPar, xobs, yobs, eobs, xtest):
# evaluate the covariance matrix for pairs of observed inputs
K = CovFunc(CovPar, xobs, xobs)
# add white noise
K += np.identity(xobs.shape[0]) * eobs**2
# evaluate the covariance matrix for pairs of test inputs
Kss = CovFunc(CovPar, xtest, xtest)
# evaluate the cross-term
Ks = CovFunc(CovPar, xtest, xobs)
# invert K
Ki = inv(K)
# evaluate the predictive mean
m = np.dot(Ks, np.dot(Ki, yobs))
# evaluate the covariance
cov = Kss - np.dot(Ks, np.dot(Ki, Ks.T))
return m, cov
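The code above is a direct transcription of the standard GP predictive equations, with $K$ the covariance of the observed inputs (including the white noise term), $K_*$ the test/observed cross-covariance and $K_{**}$ the test-input covariance:
$$\mathbf{m} = K_* K^{-1}\mathbf{y}, \qquad C = K_{**} - K_* K^{-1} K_*^{\mathsf T}.$$
The explicit inverse mirrors the code; in practice a Cholesky solve would normally be preferred for numerical stability.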
xobs = np.array([-4,-2,0,1,2])
yobs = np.array([1.0,-1.0, -1.0, 0.7, 0.0])
eobs = 0.1
m,C=Pred_GP(SEKernel,[1.0,1.0],xobs,yobs,eobs,x)
sig = np.sqrt(np.diag(C))
samples = multivariate_normal(m,C,5)
plt.errorbar(xobs,yobs,yerr=2*eobs,capsize=0,fmt='k.')
plt.plot(x,m,'k-')
plt.fill_between(x,m+2*sig,m-2*sig,color='k',alpha=0.2)
plt.plot(x,samples.T,alpha=0.5)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title('Predictive distribution');
def NLL_GP(p,CovFunc,x,y,e):
# Evaluate the covariance matrix
K = CovFunc(p,x,x)
# Add the white noise term
K += np.identity(x.shape[0]) * e**2
# invert it
Ki = inv(K)
# evaluate each of the three terms in the NLL
term1 = 0.5 * np.dot(y,np.dot(Ki,y))
term2 = 0.5 * slogdet(K)[1]
term3 = 0.5 * len(y) * np.log(2*np.pi)
# return the total
return term1 + term2 + term3
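The three terms computed above are the usual negative log marginal likelihood of a zero-mean GP:
$$-\log\mathcal{L} = \tfrac{1}{2}\,\mathbf{y}^{\mathsf T}K^{-1}\mathbf{y} + \tfrac{1}{2}\log|K| + \tfrac{N}{2}\log 2\pi.$$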
print(NLL_GP([1.0,1.0],SEKernel,xobs,yobs,eobs))
p0 = [1.0,1.0]
p1 = fmin(NLL_GP,p0,args=(SEKernel,xobs,yobs,eobs))
print(p1)
m,C=Pred_GP(SEKernel,p1,xobs,yobs,eobs,x)
sig = np.sqrt(np.diag(C))
samples = multivariate_normal(m,C,5)
plt.errorbar(xobs,yobs,yerr=2*eobs,capsize=0,fmt='k.')
plt.plot(x,m,'k-')
plt.fill_between(x,m+2*sig,m-2*sig,color='k',alpha=0.2)
plt.plot(x,samples.T,alpha=0.5)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title('Maximum likelihood distribution');
xobs = np.linspace(-10,10,50)
linear_trend = 0.03 * xobs - 0.3
correlated_noise = multivariate_normal(np.zeros(len(xobs)),SEKernel([0.005,2.0],xobs,xobs),1).flatten()
eobs = 0.01
white_noise = np.random.normal(0,eobs,len(xobs))
yobs = linear_trend + correlated_noise + white_noise
plt.errorbar(xobs,yobs,yerr=eobs,fmt='k.',capsize=0)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$');
def LinearMean(p,x):
return p[0] * x + p[1]
pm0 = [0.03, -0.3]
m = LinearMean(pm0,xobs)
plt.errorbar(xobs,yobs,yerr=eobs,fmt='k.',capsize=0)
plt.plot(xobs,m,'r-')
plt.xlabel(r'$x$')
plt.ylabel(r'$y$');
def NLL_GP2(p,CovFunc,x,y,e, MeanFunc=None, nmp = 0):
if MeanFunc:
pc = p[:-nmp]
pm = p[-nmp:]
r = y - MeanFunc(pm,x)
else:
pc = p[:]
r = y[:]
# Evaluate the covariance matrix
K = CovFunc(pc,x,x)
# Add the white noise term
K += np.identity(x.shape[0]) * e**2
# invert it
Ki = inv(K)
# evaluate each of the three terms in the NLL
term1 = 0.5 * np.dot(r,np.dot(Ki,r))
term2 = 0.5 * slogdet(K)[1]
term3 = 0.5 * len(r) * np.log(2*np.pi)
# return the total
return term1 + term2 + term3
p0 = [0.005,2.0,0.03,-0.3]
print(NLL_GP2(p0,SEKernel,xobs,yobs,eobs,MeanFunc=LinearMean,nmp=2))
p1 = fmin(NLL_GP2,p0,args=(SEKernel,xobs,yobs,eobs,LinearMean,2))
print(p1)
# Generate test inputs (values at which we want to evaluate the predictive distribution)
x = np.linspace(-15,15,300)
# Evaluate mean function at observed inputs, and compute residuals
mobs = LinearMean(p1[-2:],xobs)
robs = yobs-mobs
# Evaluate stochastic component at test inputs
m,C = Pred_GP(SEKernel,p1[:2],xobs,robs,eobs,x)
# Evaluate mean function at test inputs
m += LinearMean(p1[-2:],x)
sig = np.sqrt(np.diag(C))
plt.errorbar(xobs,yobs,yerr=2*eobs,capsize=0,fmt='k.')
plt.plot(x,m,'k-')
plt.fill_between(x,m+2*sig,m-2*sig,color='k',alpha=0.2)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title('Maximum likelihood distribution');
def M32Kernel(par, x1, x2):
A, Gamma = par
R = cdist(x1.reshape(len(x1),1), x2.reshape(len(x2),1),
V = [1.0/float(Gamma)], metric = 'seuclidean')
return A * (1+np.sqrt(3)*R) * np.exp(-np.sqrt(3)*R)
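This is the Matérn 3/2 covariance; writing $r=\sqrt{\Gamma}\,|x_1-x_2|$ for the variance-scaled Euclidean distance computed by cdist above, it reads
$$k_{3/2}(x_1,x_2) = A\,(1+\sqrt{3}\,r)\,\exp(-\sqrt{3}\,r).$$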
p0 = [0.005,2.0,0.03,-0.3]
print(NLL_GP2(p0,M32Kernel,xobs,yobs,eobs,MeanFunc=LinearMean,nmp=2))
p1 = fmin(NLL_GP2,p0,args=(M32Kernel,xobs,yobs,eobs,LinearMean,2))
print(p1)
print(NLL_GP2(p1,M32Kernel,xobs,yobs,eobs,MeanFunc=LinearMean,nmp=2))
p0_mean = [0.005,2.0,0.03,-0.3]
p1_mean = fmin(NLL_GP2,p0_mean,args=(SEKernel,xobs,yobs,eobs,LinearMean,2))
NLL_mean = NLL_GP2(p1_mean,SEKernel,xobs,yobs,eobs,MeanFunc=LinearMean,nmp=2)
print(NLL_mean)
p0_no_mean = [0.005,2.0]
p1_no_mean = fmin(NLL_GP2,p0_no_mean,args=(SEKernel,xobs,yobs,eobs))
NLL_no_mean = NLL_GP2(p1_no_mean,SEKernel,xobs,yobs,eobs)
print(NLL_no_mean)
N = len(xobs)
BIC_mean = np.log(N) * len(p1_mean) + 2 * NLL_mean
print(BIC_mean)
BIC_no_mean = np.log(N) * len(p1_no_mean) + 2 * NLL_no_mean
print(BIC_no_mean)
# Plot the data
plt.errorbar(xobs,yobs,yerr=2*eobs,capsize=0,fmt='k.')
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title('Model comparison')
# Evaluate and plot the predictive distribution with a mean function
mobs = LinearMean(p1_mean[-2:],xobs)
robs = yobs-mobs
m,C = Pred_GP(SEKernel,p1_mean[:2],xobs,robs,eobs,x)
m += LinearMean(p1[-2:],x)
sig = np.sqrt(np.diag(C))
plt.plot(x,m,'b-')
plt.fill_between(x,m+2*sig,m-2*sig,color='b',alpha=0.2)
# Now do the same for the model without mean function
m,C = Pred_GP(SEKernel,p1_no_mean,xobs,yobs,eobs,x)
sig = np.sqrt(np.diag(C))
plt.plot(x,m,'r-')
plt.fill_between(x,m+2*sig,m-2*sig,color='r',alpha=0.2)
from astropy.table import Table
tab = Table.read('KIC2157356.txt',format='ascii')
qs = np.unique(tab['quarter'])
for q in qs:
t = tab[tab['quarter']==q]
plt.plot(t['time'],t['flux'],'.')
def QPKernel(par,x1,x2):
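# par = [A, P, Gamma1, Gamma2]: amplitude, period of the periodic term,
# inverse length scale of the periodic (sin^2) term, and inverse squared
# length scale of the squared-exponential decay term.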
A, P, Gamma1, Gamma2 = par
D = cdist(x1.reshape(len(x1),1), x2.reshape(len(x2),1),
metric = 'euclidean')
D2 = cdist(x1.reshape(len(x1),1), x2.reshape(len(x2),1),
metric = 'sqeuclidean')
return A * np.exp(-Gamma1*np.sin(D*np.pi/P)**2-Gamma2*D2)
for q in qs:
# select data for the quarter
t = tab[tab['quarter']==q]
xobs = t['time']
yobs = t['flux']
eobs = t['error']
# normalise
m = np.median(yobs)
yobs = yobs / m - 1
eobs /= m
# subsample to keep computing time reasonable
xtrain = xobs[::10]
ytrain = yobs[::10]
etrain = eobs[::10]
# fit for hyper-parameters
p0 = [np.var(yobs), 13.61, 1.0, 1e-4]
p1 = fmin(NLL_GP,p0,args=(QPKernel,xtrain,ytrain,etrain))
print(p1)
# test outputs for plots
x = np.linspace(xobs.min()-10,xobs.max()+10,1000)
plt.figure()
plt.plot(xobs,yobs,'k,')
plt.plot(xtrain,ytrain,'k.')
m,C = Pred_GP(QPKernel,p1,xtrain,ytrain,etrain,x)
sig = np.sqrt(np.diag(C))
plt.plot(x,m,'k-')
plt.fill_between(x,m+2*sig,m-2*sig,color='k',alpha=0.2)
plt.xlabel(r'$t$')
plt.ylabel(r'$y$')
plt.title('Quarter {}'.format(q));
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Ice Albedo
Step7: 1.4. Atmospheric Coupling Variables
Step8: 1.5. Oceanic Coupling Variables
Step9: 1.6. Prognostic Variables
Step10: 2. Key Properties --> Software Properties
Step11: 2.2. Code Version
Step12: 2.3. Code Languages
Step13: 3. Grid
Step14: 3.2. Adaptive Grid
Step15: 3.3. Base Resolution
Step16: 3.4. Resolution Limit
Step17: 3.5. Projection
Step18: 4. Glaciers
Step19: 4.2. Description
Step20: 4.3. Dynamic Areal Extent
Step21: 5. Ice
Step22: 5.2. Grounding Line Method
Step23: 5.3. Ice Sheet
Step24: 5.4. Ice Shelf
Step25: 6. Ice --> Mass Balance
Step26: 7. Ice --> Mass Balance --> Basal
Step27: 7.2. Ocean
Step28: 8. Ice --> Mass Balance --> Frontal
Step29: 8.2. Melting
Step30: 9. Ice --> Dynamics
Step31: 9.2. Approximation
Step32: 9.3. Adaptive Timestep
Step33: 9.4. Timestep
| <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bnu', 'sandbox-1', 'landice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.ice_albedo')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "function of ice age"
# "function of ice density"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.atmospheric_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.oceanic_coupling_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice velocity"
# "ice thickness"
# "ice temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.base_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.resolution_limit')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.grid.projection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.glaciers.dynamic_areal_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.grounding_line_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grounding line prescribed"
# "flux prescribed (Schoof)"
# "fixed grid size"
# "moving grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_sheet')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.ice_shelf')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.surface_mass_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.bedrock')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.basal.ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.mass_balance.frontal.melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.approximation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SIA"
# "SAA"
# "full stokes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.adaptive_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.landice.ice.dynamics.timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
| <ASSISTANT_TASK:>
Python Code:
# Author: Martin Luessi <mluessi@nmr.mgh.harvard.edu>
#
# License: BSD (3-clause)
import mne
from mne import io
from mne.connectivity import spectral_connectivity
from mne.datasets import sample
from mne.viz import plot_sensors_connectivity
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.read_events(event_fname)
# Add a bad channel
raw.info['bads'] += ['MEG 2443']
# Pick MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, stim=False, eog=True,
exclude='bads')
# Create epochs for the visual condition
event_id, tmin, tmax = 3, -0.2, 1.5 # need a long enough epoch for 5 cycles
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(grad=4000e-13, eog=150e-6))
# Compute connectivity for band containing the evoked response.
# We exclude the baseline period
fmin, fmax = 3., 9.
sfreq = raw.info['sfreq'] # the sampling frequency
tmin = 0.0 # exclude the baseline period
epochs.load_data().pick_types(meg='grad') # just keep MEG and no EOG now
con, freqs, times, n_epochs, n_tapers = spectral_connectivity(
epochs, method='pli', mode='multitaper', sfreq=sfreq, fmin=fmin, fmax=fmax,
faverage=True, tmin=tmin, mt_adaptive=False, n_jobs=1)
# Now, visualize the connectivity in 3D
plot_sensors_connectivity(epochs.info, con[:, :, 0])
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Constants
Step2: You probably noticed that DROPOUT_RATE has been set to 0.0. Dropout has been used
Step3: Implementing the DeiT variants of ViT
Step6: Now, we'll implement the MLP and Transformer blocks.
Step8: We'll now implement a ViTClassifier class building on top of the components we just
Step9: This class can be used standalone as ViT and is end-to-end trainable. Just remove the
Step10: Let's verify if the ViTDistilled class can be initialized and called as expected.
Step11: Implementing the trainer
Step12: Load the teacher model
Step13: Training through distillation
| <ASSISTANT_TASK:>
Python Code:
from typing import List
import tensorflow as tf
import tensorflow_addons as tfa
import tensorflow_datasets as tfds
import tensorflow_hub as hub
from tensorflow import keras
from tensorflow.keras import layers
tfds.disable_progress_bar()
tf.keras.utils.set_random_seed(42)
# Model
MODEL_TYPE = "deit_distilled_tiny_patch16_224"
RESOLUTION = 224
PATCH_SIZE = 16
NUM_PATCHES = (RESOLUTION // PATCH_SIZE) ** 2
LAYER_NORM_EPS = 1e-6
PROJECTION_DIM = 192
NUM_HEADS = 3
NUM_LAYERS = 12
MLP_UNITS = [
PROJECTION_DIM * 4,
PROJECTION_DIM,
]
DROPOUT_RATE = 0.0
DROP_PATH_RATE = 0.1
# Training
NUM_EPOCHS = 20
BASE_LR = 0.0005
WEIGHT_DECAY = 0.0001
# Data
BATCH_SIZE = 256
AUTO = tf.data.AUTOTUNE
NUM_CLASSES = 5
def preprocess_dataset(is_training=True):
def fn(image, label):
if is_training:
# Resize to a bigger spatial resolution and take the random
# crops.
image = tf.image.resize(image, (RESOLUTION + 20, RESOLUTION + 20))
image = tf.image.random_crop(image, (RESOLUTION, RESOLUTION, 3))
image = tf.image.random_flip_left_right(image)
else:
image = tf.image.resize(image, (RESOLUTION, RESOLUTION))
label = tf.one_hot(label, depth=NUM_CLASSES)
return image, label
return fn
def prepare_dataset(dataset, is_training=True):
if is_training:
dataset = dataset.shuffle(BATCH_SIZE * 10)
dataset = dataset.map(preprocess_dataset(is_training), num_parallel_calls=AUTO)
return dataset.batch(BATCH_SIZE).prefetch(AUTO)
train_dataset, val_dataset = tfds.load(
"tf_flowers", split=["train[:90%]", "train[90%:]"], as_supervised=True
)
num_train = train_dataset.cardinality()
num_val = val_dataset.cardinality()
print(f"Number of training examples: {num_train}")
print(f"Number of validation examples: {num_val}")
train_dataset = prepare_dataset(train_dataset, is_training=True)
val_dataset = prepare_dataset(val_dataset, is_training=False)
# Referred from: github.com:rwightman/pytorch-image-models.
class StochasticDepth(layers.Layer):
def __init__(self, drop_prop, **kwargs):
super().__init__(**kwargs)
self.drop_prob = drop_prop
def call(self, x, training=True):
if training:
keep_prob = 1 - self.drop_prob
shape = (tf.shape(x)[0],) + (1,) * (len(tf.shape(x)) - 1)
random_tensor = keep_prob + tf.random.uniform(shape, 0, 1)
random_tensor = tf.floor(random_tensor)
return (x / keep_prob) * random_tensor
return x
def mlp(x, dropout_rate: float, hidden_units: List):
"""FFN for a Transformer block."""
# Iterate over the hidden units and
# add Dense => Dropout.
for (idx, units) in enumerate(hidden_units):
x = layers.Dense(
units,
activation=tf.nn.gelu if idx == 0 else None,
)(x)
x = layers.Dropout(dropout_rate)(x)
return x
def transformer(drop_prob: float, name: str) -> keras.Model:
"""Transformer block with pre-norm."""
num_patches = NUM_PATCHES + 2 if "distilled" in MODEL_TYPE else NUM_PATCHES + 1
encoded_patches = layers.Input((num_patches, PROJECTION_DIM))
# Layer normalization 1.
x1 = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(encoded_patches)
# Multi Head Self Attention layer 1.
attention_output = layers.MultiHeadAttention(
num_heads=NUM_HEADS,
key_dim=PROJECTION_DIM,
dropout=DROPOUT_RATE,
)(x1, x1)
attention_output = (
StochasticDepth(drop_prob)(attention_output) if drop_prob else attention_output
)
# Skip connection 1.
x2 = layers.Add()([attention_output, encoded_patches])
# Layer normalization 2.
x3 = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)(x2)
# MLP layer 1.
x4 = mlp(x3, hidden_units=MLP_UNITS, dropout_rate=DROPOUT_RATE)
x4 = StochasticDepth(drop_prob)(x4) if drop_prob else x4
# Skip connection 2.
outputs = layers.Add()([x2, x4])
return keras.Model(encoded_patches, outputs, name=name)
class ViTClassifier(keras.Model):
"""Vision Transformer base class."""
def __init__(self, **kwargs):
super().__init__(**kwargs)
# Patchify + linear projection + reshaping.
self.projection = keras.Sequential(
[
layers.Conv2D(
filters=PROJECTION_DIM,
kernel_size=(PATCH_SIZE, PATCH_SIZE),
strides=(PATCH_SIZE, PATCH_SIZE),
padding="VALID",
name="conv_projection",
),
layers.Reshape(
target_shape=(NUM_PATCHES, PROJECTION_DIM),
name="flatten_projection",
),
],
name="projection",
)
# Positional embedding.
init_shape = (
1,
NUM_PATCHES + 1,
PROJECTION_DIM,
)
self.positional_embedding = tf.Variable(
tf.zeros(init_shape), name="pos_embedding"
)
# Transformer blocks.
dpr = [x for x in tf.linspace(0.0, DROP_PATH_RATE, NUM_LAYERS)]
self.transformer_blocks = [
transformer(drop_prob=dpr[i], name=f"transformer_block_{i}")
for i in range(NUM_LAYERS)
]
# CLS token.
initial_value = tf.zeros((1, 1, PROJECTION_DIM))
self.cls_token = tf.Variable(
initial_value=initial_value, trainable=True, name="cls"
)
# Other layers.
self.dropout = layers.Dropout(DROPOUT_RATE)
self.layer_norm = layers.LayerNormalization(epsilon=LAYER_NORM_EPS)
self.head = layers.Dense(
NUM_CLASSES,
name="classification_head",
)
def call(self, inputs, training=True):
n = tf.shape(inputs)[0]
# Create patches and project the patches.
projected_patches = self.projection(inputs)
# Append class token if needed.
cls_token = tf.tile(self.cls_token, (n, 1, 1))
cls_token = tf.cast(cls_token, projected_patches.dtype)
projected_patches = tf.concat([cls_token, projected_patches], axis=1)
# Add positional embeddings to the projected patches.
encoded_patches = (
self.positional_embedding + projected_patches
) # (B, number_patches, projection_dim)
encoded_patches = self.dropout(encoded_patches)
# Iterate over the number of layers and stack up blocks of
# Transformer.
for transformer_module in self.transformer_blocks:
# Add a Transformer block.
encoded_patches = transformer_module(encoded_patches)
# Final layer normalization.
representation = self.layer_norm(encoded_patches)
# Pool representation.
encoded_patches = representation[:, 0]
# Classification head.
output = self.head(encoded_patches)
return output
class ViTDistilled(ViTClassifier):
def __init__(self, regular_training=False, **kwargs):
super().__init__(**kwargs)
self.num_tokens = 2
self.regular_training = regular_training
# CLS and distillation tokens, positional embedding.
init_value = tf.zeros((1, 1, PROJECTION_DIM))
self.dist_token = tf.Variable(init_value, name="dist_token")
self.positional_embedding = tf.Variable(
tf.zeros(
(
1,
NUM_PATCHES + self.num_tokens,
PROJECTION_DIM,
)
),
name="pos_embedding",
)
# Head layers.
self.head = layers.Dense(
NUM_CLASSES,
name="classification_head",
)
self.head_dist = layers.Dense(
NUM_CLASSES,
name="distillation_head",
)
def call(self, inputs, training=True):
n = tf.shape(inputs)[0]
# Create patches and project the patches.
projected_patches = self.projection(inputs)
# Append the tokens.
cls_token = tf.tile(self.cls_token, (n, 1, 1))
dist_token = tf.tile(self.dist_token, (n, 1, 1))
cls_token = tf.cast(cls_token, projected_patches.dtype)
dist_token = tf.cast(dist_token, projected_patches.dtype)
projected_patches = tf.concat(
[cls_token, dist_token, projected_patches], axis=1
)
# Add positional embeddings to the projected patches.
encoded_patches = (
self.positional_embedding + projected_patches
) # (B, number_patches, projection_dim)
encoded_patches = self.dropout(encoded_patches)
# Iterate over the number of layers and stack up blocks of
# Transformer.
for transformer_module in self.transformer_blocks:
# Add a Transformer block.
encoded_patches = transformer_module(encoded_patches)
# Final layer normalization.
representation = self.layer_norm(encoded_patches)
# Classification heads.
x, x_dist = (
self.head(representation[:, 0]),
self.head_dist(representation[:, 1]),
)
if not training or self.regular_training:
# During standard train / finetune, inference average the classifier
# predictions.
return (x + x_dist) / 2
elif training:
# Only return separate classification predictions when training in distilled
# mode.
return x, x_dist
deit_tiny_distilled = ViTDistilled()
dummy_inputs = tf.ones((2, 224, 224, 3))
outputs = deit_tiny_distilled(dummy_inputs, training=False)
print(outputs.shape)
class DeiT(keras.Model):
# Reference:
# https://keras.io/examples/vision/knowledge_distillation/
def __init__(self, student, teacher, **kwargs):
super().__init__(**kwargs)
self.student = student
self.teacher = teacher
self.student_loss_tracker = keras.metrics.Mean(name="student_loss")
self.dist_loss_tracker = keras.metrics.Mean(name="distillation_loss")
@property
def metrics(self):
metrics = super().metrics
metrics.append(self.student_loss_tracker)
metrics.append(self.dist_loss_tracker)
return metrics
def compile(
self,
optimizer,
metrics,
student_loss_fn,
distillation_loss_fn,
):
super().compile(optimizer=optimizer, metrics=metrics)
self.student_loss_fn = student_loss_fn
self.distillation_loss_fn = distillation_loss_fn
def train_step(self, data):
# Unpack data.
x, y = data
# Forward pass of teacher
teacher_predictions = tf.nn.softmax(self.teacher(x, training=False), -1)
teacher_predictions = tf.argmax(teacher_predictions, -1)
with tf.GradientTape() as tape:
# Forward pass of student.
cls_predictions, dist_predictions = self.student(x / 255.0, training=True)
# Compute losses.
student_loss = self.student_loss_fn(y, cls_predictions)
distillation_loss = self.distillation_loss_fn(
teacher_predictions, dist_predictions
)
loss = (student_loss + distillation_loss) / 2
# Compute gradients.
trainable_vars = self.student.trainable_variables
gradients = tape.gradient(loss, trainable_vars)
# Update weights.
self.optimizer.apply_gradients(zip(gradients, trainable_vars))
# Update the metrics configured in `compile()`.
student_predictions = (cls_predictions + dist_predictions) / 2
self.compiled_metrics.update_state(y, student_predictions)
self.dist_loss_tracker.update_state(distillation_loss)
self.student_loss_tracker.update_state(student_loss)
# Return a dict of performance.
results = {m.name: m.result() for m in self.metrics}
return results
def test_step(self, data):
# Unpack the data.
x, y = data
# Compute predictions.
y_prediction = self.student(x / 255.0, training=False)
# Calculate the loss.
student_loss = self.student_loss_fn(y, y_prediction)
# Update the metrics.
self.compiled_metrics.update_state(y, y_prediction)
self.student_loss_tracker.update_state(student_loss)
# Return a dict of performance.
results = {m.name: m.result() for m in self.metrics}
return results
def call(self, inputs):
return self.student(inputs / 255.0, training=False)
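For reference, train_step above implements the hard-label distillation objective of DeiT, where $\psi$ is the softmax, $y$ the (smoothed) ground-truth label and $y_t$ the teacher's argmax prediction:
$$\mathcal{L} = \tfrac{1}{2}\,\mathcal{L}_{\mathrm{CE}}\big(\psi(Z_{\mathrm{cls}}),\, y\big) + \tfrac{1}{2}\,\mathcal{L}_{\mathrm{CE}}\big(\psi(Z_{\mathrm{dist}}),\, y_t\big).$$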
!wget -q https://github.com/sayakpaul/deit-tf/releases/download/v0.1.0/bit_teacher_flowers.zip
!unzip -q bit_teacher_flowers.zip
bit_teacher_flowers = keras.models.load_model("bit_teacher_flowers")
deit_tiny = ViTDistilled()
deit_distiller = DeiT(student=deit_tiny, teacher=bit_teacher_flowers)
lr_scaled = (BASE_LR / 512) * BATCH_SIZE
deit_distiller.compile(
optimizer=tfa.optimizers.AdamW(weight_decay=WEIGHT_DECAY, learning_rate=lr_scaled),
metrics=["accuracy"],
student_loss_fn=keras.losses.CategoricalCrossentropy(
from_logits=True, label_smoothing=0.1
),
distillation_loss_fn=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
_ = deit_distiller.fit(train_dataset, validation_data=val_dataset, epochs=NUM_EPOCHS)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step5: plotting a neuron
Step6: Testing function
Step7: Testing function
Step8: Testing seq2seq
Step9: We may have to write our own dense-to-sequence model with the Keras layers Dense() and LSTM().
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import McNeuron
from keras.models import Sequential
from keras.layers.core import Dense, Reshape, Activation
from keras.layers.recurrent import LSTM
import matplotlib.pyplot as plt
from copy import deepcopy
import os
%matplotlib inline
neuron_list = McNeuron.visualize.get_all_path(os.getcwd()+"/Data/Pyramidal/chen")
neuron = McNeuron.Neuron(file_format = 'swc', input_file=neuron_list[19])
McNeuron.visualize.plot_2D(neuron)
#tmp = neuron.subsample_main_nodes()
np.shape(neuron.parent_index)
from numpy import linalg as LA
def random_subsample(neuron, number_random_node):
"""Randomly select a number of nodes on the neuron and build a new neuron from them.
Consecutive selected nodes are connected by a straight line.
Parameters
----------
neuron: Neuron
number_random_node: int
number of nodes to be selected.
Returns
-------
The subsampled neuron.
"""
I = np.arange(neuron.n_soma, neuron.n_node)
np.random.shuffle(I)
selected_index = I[0:number_random_node]
selected_index = np.union1d(np.arange(neuron.n_soma), selected_index)
selected_index = selected_index.astype(int)
selected_index = np.unique(np.sort(selected_index))
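# For every selected node, walk up the original tree until another selected
# node is reached; that node becomes its parent in the subsampled neuron.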
parent_ind = np.array([],dtype = int)
for i in selected_index:
p = neuron.parent_index[i]
while(~np.any(selected_index == p)):
p = neuron.parent_index[p]
(ind,) = np.where(selected_index==p)
parent_ind = np.append(parent_ind, ind)
n_list = []
for i in range(selected_index.shape[0]):
n = McNeuron.Node()
n.xyz = neuron.nodes_list[selected_index[i]].xyz
n.r = neuron.nodes_list[selected_index[i]].r
n.type = neuron.nodes_list[selected_index[i]].type
n_list.append(n)
for i in np.arange(1,selected_index.shape[0]):
j = parent_ind[i]
n_list[i].parent = n_list[j]
n_list[j].add_child(n_list[i])
return McNeuron.Neuron(file_format = 'only list of nodes', input_file = n_list)
def mesoscale_subsample(neuron, number):
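# Greedy coarse-graining: at each step find the pair of sibling terminal nodes
# that are closest together, replace them by their midpoint (stored in their
# parent, which becomes a leaf), and repeat until only `number` nodes remain.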
main_point = neuron.subsample_main_nodes()
Nodes = main_point.nodes_list
num_rm = (main_point.n_node - number)/2.
for remove in range(int(num_rm)):
pair_list = []
Dis = np.array([])
for n in Nodes:
if n.parent is not None:
if n.parent.parent is not None:
a = n.parent.children
if(len(a)==2):
n1 = a[0]
n2 = a[1]
if(len(n1.children) == 0 and len(n2.children) == 0):
pair_list.append([n1 , n2])
dis = LA.norm(a[0].xyz - a[1].xyz,2)
Dis = np.append(Dis,dis)
(b,) = np.where(Dis == Dis.min())
b = pair_list[b[0]]
par = b[0].parent
loc = b[0].xyz + b[1].xyz
loc = loc/2
par.children = []
par.xyz = loc
Nodes.remove(b[1])
Nodes.remove(b[0])
return McNeuron.Neuron(file_format = 'only list of nodes', input_file = Nodes)
def reducing_data(swc_df, pruning_number=10):
"""
Parameters
----------
swc_df: dataframe
the original swc file
pruning_number: int
number of nodes remaining at the end of pruning
Returns
-------
pruned_df: dataframe
pruned dataframe
"""
L = []
for i in range(len(swc_df)):
L.append(mesoscale_subsample(McNeuron.Neuron(file_format = 'swc', input_file = swc_df[i]), pruning_number))
return L
def separate(list_of_neurons):
"""
Parameters
----------
list_of_neurons: List of Neurons
Returns
-------
geometry: array of shape (n-1, 3)
(x, y, z) coordinates of each shape assuming that soma is at (0, 0, 0)
morphology : array of shape (n-1,)
index of node - index of parent
"""
Geo = list()
Morph = list()
for n in range(len(list_of_neurons)):
neuron = list_of_neurons[n]
Geo.append(neuron.location)
Morph.append(neuron.parent_index)
return Geo, Morph
def geometry_generator(n_nodes=10):
"""Generator network: fully connected 2-layer network to generate locations.
Parameters
----------
n_nodes: int
number of nodes
Returns
-------
model: keras object
the generator model
"""
model = Sequential()
model.add(Dense(input_dim=100, output_dim=512))
model.add(Activation('tanh'))
model.add(Dense(input_dim=512, output_dim=512))
model.add(Activation('tanh'))
model.add(Dense(input_dim=512, output_dim=n_nodes * 3))
model.add(Reshape((n_nodes, 3)))
return model
def morphology_generator(n_nodes=10):
"""Generator network: sequence-to-sequence network to generate morphology (parent indices).
Parameters
----------
n_nodes: int
number of nodes
Returns
-------
model: keras object
the generator model
"""
model = Sequential()
# A keras seq to seq model, with the following characteristics:
# input length: 1
# input dimensionality: 100
# some hidden layers for encoding
# some hidden layers for decoding
# output length: n_nodes - 1
# output dimensionality: n_nodes - 1 (there will finally be a softmax on each output node)
return model
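The stub above only lists the intended characteristics. A minimal sketch of one way to realise it with plain Keras layers is given below; the layer sizes, the helper name and the exact import paths are assumptions for illustration (the Seq2Seq model imported later in this notebook could be used instead).

# Hypothetical sketch only: a dense encoder for the 100-d noise vector, an LSTM
# decoder unrolled for n_nodes - 1 steps, and a per-step softmax over the
# n_nodes - 1 candidate parent indices.
from keras.models import Sequential
from keras.layers.core import Dense, RepeatVector
from keras.layers.wrappers import TimeDistributed
from keras.layers.recurrent import LSTM

def morphology_generator_sketch(n_nodes=10):
    model = Sequential()
    # encode the 100-d noise vector
    model.add(Dense(input_dim=100, output_dim=512, activation='tanh'))
    # repeat the code so the decoder can unroll for n_nodes - 1 output steps
    model.add(RepeatVector(n_nodes - 1))
    # recurrent decoder
    model.add(LSTM(256, return_sequences=True))
    # per-step softmax over the n_nodes - 1 candidate parent indices
    model.add(TimeDistributed(Dense(n_nodes - 1, activation='softmax')))
    return model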
for i in range(4):
n_nodes = 10 + 30 * i
subsampled_neuron = mesoscale_subsample(deepcopy(neuron), n_nodes)
print 'Number of nodes: %d' % (n_nodes)
McNeuron.visualize.plot_2D(subsampled_neuron, size = 4)
McNeuron.visualize.plot_dedrite_tree(subsampled_neuron)
plt.show()
tmp = reducing_data(neuron_list[0:3], pruning_number=10)
geo, morph = separate(tmp)
print morph[0]
print morph[1]
print morph[2]
geo[2][0:3,9]
import seq2seq
from seq2seq.models import Seq2Seq
from keras.layers.core import Activation
model = Seq2Seq(input_shape=(100, 1), hidden_dim=100, output_length=11, output_dim=10, depth=2, dropout=0.4)
#model.add(Activation('softmax'))
model.compile(loss='mse', optimizer='rmsprop')
model.predict(np.random.randn(1, 100, 1))
ggm = geometry_generator(10)
tmp = ggm.predict(np.random.randn(5,100))
tmp.shape
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Model parameters (parameters of the Generalized Maxwell model)
Step2: Cantilever and general simulation paramters
Step3: Main portion of the static force spectroscopy simulation
Step4: Performing theoretical convolution
Step5: Comparing the simulation results with the theoretical convolution
Step6: Performing non-linear square optimization to retrieve properties
Step7: First a fit assuming force is linear in time
Step8: Without linear load assumption
| <ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('d:\github\pycroscopy')
from pycroscopy.simulation.afm_lib import sfs_genmaxwell_lr, compliance_maxwell
from pycroscopy.simulation.nls_fit import nls_fit, linear_fit_nob
from pycroscopy.simulation.rheology import chi_th, j_t, theta_v, theta_g
from pycroscopy.simulation.afm_calculations import derivative_cd, log_scale, av_dt, log_tw
from numba import jit
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
M = 5 #number of Maxwell arms for Generalized Maxwell model
Ge = 1.0e6 #Equilibrium modulus
G = np.zeros(M)
tau = np.zeros(M)
G[0] = 9.0e8
G[1] = 5.0e6
G[2] = 3.0e7
G[3] = 2.0e6
G[4] = 1.0e5
Gg = sum(G[:]) + Ge
tau[0] = 1.0e-3
tau[1] = 1.0e-2
tau[2] = 1.0e-1
tau[3] = 1.0e0
tau[4] = 1.0e1
k_m1 = 1.0 #first eigenmode stiffness
R = 2000.0e-9 #tip radius
alfa = 16.0*np.sqrt(R)/3.0 #cell constant (related to tip geometry)
y_dot = 100.0e-9 #approach speed
y_t_initial = 1.0e-10 #initial position of cantilever with respect to the sample
#1st, 2nd and 3rd eigenmode quality factors
Q1 = 100.0
Q2 = 200.0
Q3 = 300.0
fo1 = 1.0e4 #first eigenmode resonance frequency
period1 = 1.0/fo1 #fundamental period
dt = period1/10.0e3 #simulation timestep
simultime = y_t_initial/y_dot + 1.0 #total simulation time
printstep = 1.0e-5 #how often will the result be stored in the arrays (this should be larger than dt)
print('This cell may take a while to compute, it is performing the simulation')
jit_sfs = jit()(sfs_genmaxwell_lr) #accelerating the simulation with numba
%time t, tip, Fts, xb, defl, zs = jit_sfs(G, tau, R, dt, simultime, y_dot, y_t_initial, k_m1, fo1, Ge, Q1, printstep)
#obtaining the compliance of the generalized Maxwell model via simulation
jit_compliance = jit()(compliance_maxwell)
t_r, J_r = jit_compliance(G, tau, Ge, t[2]-t[1], t[len(t)-1]-t[0])
#performing the time derivative of force that will be convolved with the cree compliance
df_dt = derivative_cd(Fts, t-t[0])
#numerical convolution of the creep compliance with the time derivative of force as in the above equation
conv = np.convolve(J_r, df_dt[0:len(df_dt)-2], mode='full')*(t[2]-t[1])
conv = conv[range(len(J_r))]
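The comparison plotted next is the Lee and Radok relation used throughout, with $\delta(t)$ the tip penetration (the negative of tip) and $R$ the tip radius:
$$\frac{16\sqrt{R}}{3}\,\delta(t)^{3/2} = \int_0^t J(t-\zeta)\,\frac{dF(\zeta)}{d\zeta}\,d\zeta.$$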
plt.plot(t -t[0], 16.0/3.0*np.sqrt(R)*(-tip)**(3.0/2), 'y', lw=5, label = 'Lee&Radok simulation') #Lee Radok
plt.plot(t_r -t_r[0], conv, 'r--', lw=3, label = 'Theoretical convolution')
plt.xlabel(r'$time, \,s$', fontsize='20',fontweight='bold')
plt.ylabel(r'$\int_0^t J(t-\zeta) \frac{dF(\zeta)}{d\zeta} d\zeta$',fontsize='20',fontweight='bold')
plt.xscale('log')
plt.yscale('log')
plt.xlim(printstep, simultime)
plt.legend(loc=4)
t_res = 1.0e-4 #time resolution (inverse of sampling frequency)
t_exp = 1.0 #total experimental time
tip_log, t_log_sim = log_scale(tip, t-t[0], t_res, t_exp) #Weighting time and tip arrays in logarithmic scale
F_log, _ = log_scale(Fts, t -t[0], t_res, t_exp) #Weighting force array in logarithmic scale
Fdot = linear_fit_nob(t_log_sim, F_log) #Getting linear slope of force in time trace
chi_simul = alfa*pow(-tip_log,1.5)/Fdot #according to eq 19, relation between chi and tip when force is assumed to be linear
print('This cell may take a while to compute, it is performing the non-linear least square fitting')
method = 0 #the load is assumed to be linear in time
arms = 4 #number of Maxwell arms in the model
%time Jg_c, tau_c, J_c = nls_fit(t-t[0], -tip, Fts, R, t_res, t_exp, arms, method)
# defining time and frequency axes for plots
t_log = log_tw(t_res, t_exp)
omega = log_tw(1.0e-1, 1.0e5, 20)
#chi_theor = chi_th(t_th, Jg_v, J_v, tau_v)
chi_5 = chi_th(t_log, Jg_c, J_c, tau_c)
plt.plot(t_log_sim, chi_simul, 'r*', markersize=15, label=r'Simulation, see Eq.(14)')
plt.plot(t_log, chi_5, 'b', lw = 3.0, label=r'4-Voigt Fit, Eq. (9)')
plt.legend(loc=4, fontsize=13)
plt.xlabel(r'$time, \,s$', fontsize='20',fontweight='bold')
plt.ylabel(r'$\chi(t), \,Pa^{-1}s$',fontsize='20',fontweight='bold')
plt.xscale('log')
plt.yscale('log')
theta_th = theta_g(omega, G, tau, Ge)
theta_5 = theta_v(omega, Jg_c, J_c, tau_c)
plt.plot(omega, theta_th, 'y', lw = 5.0, label=r'Theoretical')
plt.plot(omega, theta_5, 'b', lw = 3.0, label=r'Fit, linear assumption')
plt.legend(loc='best', fontsize=13)
plt.xlabel(r'$\omega, \,rad/s$', fontsize='20',fontweight='bold')
plt.ylabel(r'$\theta(\omega),\,deg$',fontsize='20',fontweight='bold')
plt.xscale('log')
print('This cell may take a while to compute, it is performing the non-linear least square fitting')
method = 1 #the load is NOT assumed to be linear in time
arms = 3 #number of voigt units in the fitting model
%time Jg_nl, tau_nl, J_nl = nls_fit(t-t[0], -tip, Fts, R, t_res, t_exp, arms, method, Jg_c, J_c[1], tau_c[1], J_c[2], tau_c[2], J_c[3], tau_c[3])
theta_th = theta_g(omega, G, tau, Ge)
theta_5 = theta_v(omega, Jg_c, J_c, tau_c)
plt.plot(omega, theta_th, 'y', lw = 5.0, label=r'Theoretical')
plt.plot(omega, theta_5, 'b', lw = 3.0, label=r'Fit, linear assumption')
theta_5nl = theta_v(omega, Jg_nl, J_nl, tau_nl)
plt.plot(omega, theta_5nl, 'g', lw = 3.0, label=r'4-Voigt Fit, non linear')
plt.legend(loc='best', fontsize=13)
plt.xlabel(r'$\omega, \,rad/s$', fontsize='20',fontweight='bold')
plt.ylabel(r'$\theta(\omega),\,deg$',fontsize='20',fontweight='bold')
plt.xscale('log')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Alternative model
Step3: Study corpus
Step4: Not really zipfian so far. Maybe read that if we really care about that.
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
from dictlearn.generate_synthetic_data import FakeTextGenerator
V = 100
embedding_size = 50
markov_order = 6
temperature=1.0
sentence_size = 20
model = FakeTextGenerator(V, embedding_size, markov_order, temperature)
n_sentences=1000
sentences = model.create_corpus(n_sentences, 5, 10, 0.7, 0.1, 0.5)
import numpy as np
from dictlearn.generate_synthetic_data_alt import FakeTextGenerator
embedding_size = 20
markov_order = 3
temperature=1.0
sentence_size = 20
model = FakeTextGenerator(100, 400, embedding_size, markov_order, temperature)
n_sentences=1000
sentences = model.create_corpus(n_sentences, 5, 10, 0.7, 0.1)
import matplotlib.pyplot as plt
%matplotlib inline
plt.figure(figsize=(20, 20))
plt.imshow(model.features.T, interpolation='none')
plt.colorbar()
plt.show()
import matplotlib.pyplot as plt
%matplotlib inline
from collections import Counter
def summarize(sentences, V, label):
"""
sentences: list of list of characters
V: vocabulary size
"""
sentence_size = len(sentences[0])
# count tokens and their positions
#positions = np.zeros((V,sentence_size))
unigram_counts = Counter()
for sentence in sentences:
for i,tok in enumerate(sentence):
unigram_counts[tok] += 1
#positions[w, i] += 1
ordered_count = [c for _, c in unigram_counts.most_common()]
print ordered_count[:100]
print ordered_count[500:600]
print ordered_count[-100:]
total_word_count = sum(ordered_count)
# compute empirical frequency
ordered_freq = [float(oc)/total_word_count for oc in ordered_count]
print len(ordered_count), len(ordered_freq), V
plt.plot(range(len(ordered_freq)), ordered_freq)
plt.title("word frequency ordered by decreasing order of occurences (rank) on " + label)
plt.show()
plt.plot(np.log(range(len(ordered_freq))), np.log(ordered_count))
plt.title("log(word frequency) / log(rank) on " + label)
plt.show()
summarize(sentences, model.V, "corpus")
definitions = []
for defs in model.dictionary.values():
definitions += defs
summarize(definitions, V, "definitions")
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship
Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcome[i].
Step5: Tip
Step6: Question 1
Step7: Answer
Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction
Step10: Question 2
Step11: Answer
Step13: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction
Step14: Question 3
Step15: Answer
Step17: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Step18: Question 4
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
# RMS Titanic data visualization code
from titanic_visualizations import survival_stats
from IPython.display import display
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
def accuracy_score(truth, pred):
    """Returns accuracy score for input truth and predictions."""
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions)
def predictions_0(data):
    """Model with no features. Always predicts a passenger did not survive."""
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
print accuracy_score(outcomes, predictions)
survival_stats(data, outcomes, 'Sex')
def predictions_1(data):
    """
    Model with one feature:
    - Predict a passenger survived if they are female.
    """
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger.Sex == 'male':
predictions.append(0)
else:
predictions.append(1)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
print accuracy_score(outcomes, predictions)
survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
def predictions_2(data):
    """
    Model with two features:
    - Predict a passenger survived if they are female.
    - Predict a passenger survived if they are male and younger than 10.
    """
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if (passenger.Sex == 'female'):
predictions.append(1)
elif (passenger.Sex == 'male' and passenger.Age < 10):
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
print accuracy_score(outcomes, predictions)
survival_stats(data, outcomes, 'Embarked', ["Pclass == 3", "Age < 30", "Sex == 'female'", "SibSp == 2"])
def predictions_3(data):
    """Model with multiple features. Makes a prediction with an accuracy of at least 80%."""
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if (passenger.Sex == 'female' and passenger.Pclass <> 3):
predictions.append(1)
elif (passenger.Sex == 'female' and passenger.Pclass == 3 and passenger.Age < 28 and passenger.SibSp == 0):
predictions.append(1)
elif (passenger.Sex == 'male' and passenger.Pclass <> 3 and passenger.Age < 10):
predictions.append(1)
elif (passenger.Sex == 'male' and passenger.Pclass == 1 and passenger.Age > 31 and passenger.Age < 44 and passenger.Fare > 5.000):
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
print accuracy_score(outcomes, predictions)
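# Sketch: the same accuracy can be cross-checked with scikit-learn's metric,
# assuming scikit-learn is installed; both numbers should agree.
from sklearn.metrics import accuracy_score as sk_accuracy
print("sklearn accuracy: {:.2f}%".format(sk_accuracy(outcomes, predictions) * 100))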
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Even though the top code cell contains no instructions to draw, Vpython's way of integrating into Jupyter Notebook seems to be by adding a scene right after the first code cell. Look below for the code that made all of the above happen. Yes, that's a bit strange.
| <ASSISTANT_TASK:>
Python Code:
from vpython import *
class Vector:
def __init__(self, x, y, z):
self.v = vector(x, y, z)
def __add__(self, other):
v_sum = self.v + other.v
return Vector(*v_sum.value)
def __neg__(self):
return Vector(*((-self.v).value))
def __sub__(self, other):
V = (self + (-other))
return Vector(*V.v.value)
def __mul__(self, scalar):
V = scalar * self.v
return Vector(*V.value)
def norm(self):
v = norm(self.v)
return Vector(*v.value)
def length(self):
return mag(self.v)
def draw(self):
self.the_cyl = cylinder(pos=vector(0,0,0), axis=self.v, radius=0.1)
self.the_cyl.color = color.cyan
XBASIS = Vector(1,0,0)
YBASIS = Vector(0,1,0)
ZBASIS = Vector(0,0,1)
XNEG = -XBASIS
YNEG = -YBASIS
ZNEG = -ZBASIS
XYZ = [XBASIS, XNEG, YBASIS, YNEG, ZBASIS, ZNEG]
sphere(pos=vector(0,0,0), color = color.orange, radius=0.2)
for radial in XYZ:
radial.draw()
class Edge:
def __init__(self, v0, v1):
self.v0 = v0
self.v1 = v1
def draw(self):
        """cylinder wants a starting point, and a direction vector"""
pointer = (self.v1 - self.v0)
direction_v = norm(pointer) * pointer.length() # normalize then stretch
self.the_cyl = cylinder(pos = self.v0.v, axis=direction_v.v, radius=0.1)
self.the_cyl.color = color.green
class Polyhedron:
def __init__(self, faces, corners):
self.faces = faces
self.corners = corners
self.edges = self._get_edges()
def _get_edges(self):
        """
        take a list of face-tuples and distill
        all the unique edges,
        e.g. ((1,2,3)) => ((1,2),(2,3),(1,3))
        e.g. icosahedron has 20 faces and 30 unique edges
        ( = cubocta 24 + tetra's 6 edges to squares per
        jitterbug)
        """
uniqueset = set()
for f in self.faces:
edgetries = zip(f, f[1:]+ (f[0],))
for e in edgetries:
e = tuple(sorted(e)) # keeps out dupes
uniqueset.add(e)
return tuple(uniqueset)
def draw(self):
for edge in self.edges:
the_edge = Edge(Vector(*self.corners[edge[0]]),
Vector(*self.corners[edge[1]]))
the_edge.draw()
the_verts = \
{ 'A': (0.35355339059327373, 0.35355339059327373, 0.35355339059327373),
'B': (-0.35355339059327373, -0.35355339059327373, 0.35355339059327373),
'C': (-0.35355339059327373, 0.35355339059327373, -0.35355339059327373),
'D': (0.35355339059327373, -0.35355339059327373, -0.35355339059327373),
'E': (-0.35355339059327373, -0.35355339059327373, -0.35355339059327373),
'F': (0.35355339059327373, 0.35355339059327373, -0.35355339059327373),
'G': (0.35355339059327373, -0.35355339059327373, 0.35355339059327373),
'H': (-0.35355339059327373, 0.35355339059327373, 0.35355339059327373)}
the_faces = (('A','B','C'),('A','C','D'),('A','D','B'),('B','C','D'))
other_faces = (('E','F','G'), ('E','G','H'),('E','H','F'),('F','G','H'))
tetrahedron = Polyhedron(the_faces, the_verts)
inv_tetrahedron = Polyhedron(other_faces, the_verts)
print(tetrahedron._get_edges())
print(inv_tetrahedron._get_edges())
tetrahedron.draw()
inv_tetrahedron.draw()
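# Sketch: Euler's polyhedron formula V - E + F = 2 as a quick sanity check on the mesh;
# each tetrahedron has 4 vertices and 4 faces, so _get_edges should find 6 unique edges.
print('unique edges per tetrahedron:', len(tetrahedron.edges))
print('V - E + F =', 4 - len(tetrahedron.edges) + len(the_faces))  # expect 2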
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, let's define our potential by a $1\,\rm M_\odot$ point mass, and put our tracer particle initially at $1\,\rm AU$.
Step2: Let's also place our tracer on a circular orbit
Step3: Now that the potential and the initial conditions are set, we need to define a few of the orbital integration parameters. First, we need to set the type of potential, next the integrator, then how long we want to integrate the orbit, with how big time steps, and in which direction (+1 for forward in time, -1 for back in time).
Step6: To speed up the calculations in the streakline module, integrator and potential variables are assigned an integer ids, which will be input for the orbit integrator. Here are a couple of helper functions that do the translation.
Step7: So far, we've made use of astropy units, which simplifies calculations in python. However, the streakline code is written in c for performance, and expects all inputs in SI units.
Step8: Now we have all the input parameters for the orbit integrator. It is called by streakline.orbit(x_init, v_init, potential_params, potential_id, integrator_id, Nsteps, time_step, sign). This function returns a $6\times\rm N_{step}$ array, with the orbital evolution of a tracer particle. The columns of the array are
Step9: Let's check how well the integrator does by plotting the numerically integrated orbit (black) and the analytic solution (red).
Step10: The numerical orbit agrees with the analytic solution fairly well for this time step size. Explore what happens when you change it!
| <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import astropy.units as u
from astropy.constants import G
import streakline
%matplotlib inline
mpl.rcParams['figure.figsize'] = (8,8)
mpl.rcParams['font.size'] = 18
M = 1*u.Msun
x_ = np.array([1, 0, 0])*u.au
vc = np.sqrt(G*M/np.sqrt(np.sum(x_**2)))
vc.to(u.km/u.s)
v_ = np.array([0, vc.value, 0])*vc.unit
potential_ = 'point'
integrator_ = 'lf'
age = 1*u.yr
dt_ = 1*u.day
sign = 1.
def get_intid(integrator):
    """
    Assign integrator ID for a given integrator choice
    Parameter:
    integrator - either 'lf' for leap frog or 'rk' for Runge-Kutta
    """
integrator_dict = {'lf': 0, 'rk': 1}
return integrator_dict[integrator]
def get_potid(potential):
    """
    Assign potential ID for a given potential choice
    Parameter:
    potential - one of the following:
    'point' -- point mass
    'log' -- triaxial logarithmic halo
    'nfw' -- triaxial NFW halo
    'gal' -- Hernquist bulge + Miyamoto-Nagai disk + triaxial NFW halo
    """
potential_dict = {'point': 0, 'log': 2, 'nfw': 3, 'gal': 4}
return potential_dict[potential]
x = x_.si.value
v = v_.si.value
params = [M.si.value,]
potential = get_potid(potential_)
integrator = get_intid(integrator_)
N = int(age/dt_)
dt = dt_.si.value
orbit_ = streakline.orbit(x, v, params, potential, integrator, N, dt, sign)
orbit = {}
orbit['x'] = orbit_[:3]*u.m
orbit['v'] = orbit_[3:]*u.m/u.s
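# Sketch: for a point-mass potential the specific orbital energy E = v^2/2 - G*M/r
# should stay (nearly) constant, which gives a quick measure of integrator accuracy.
x_si = orbit['x'].si.value
v_si = orbit['v'].si.value
r = np.sqrt((x_si**2).sum(axis=0))
E = 0.5*(v_si**2).sum(axis=0) - (G*M).si.value/r
print('relative energy drift: {:.2e}'.format(abs((E[-1] - E[0])/E[0])))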
plt.figure()
plt.plot(orbit['x'][0].to(u.au), orbit['x'][1].to(u.au), 'k-', lw=4, zorder=0)
circle = mpl.patches.Circle((0,0), radius=1, lw=2, ec='r', fc='none', zorder=1)
plt.gca().add_artist(circle)
plt.xlim(-1, 1)
plt.ylim(-1, 1)
plt.xlabel("x (AU)");
plt.ylabel("y (AU)");
dt_ = 1*u.hr
N = int(age/dt_)
dt = dt_.si.value
print('{} timesteps'.format(N))
%timeit -n1000 orbit_ = streakline.orbit(x, v, params, potential, integrator, N, dt, sign)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This allows inline graphics in IPython (Jupyter) notebooks and imports the functions necessary for plotting as plt. In addition, we import numpy as np.
Step2: Plotting is as easy as this
Step3: Line style and labels are controlled in a way similar to Matlab
Step4: You can plot several individual lines at once
Step5: One more example
Step6: If you feel a bit playful (only in matplotlib > 1.3)
Step7: Following example is from matplotlib - 2D and 3D plotting in Python - great place to start for people interested in matplotlib.
Step8: When you going to plot something more or less complicated in Matplotlib, the first thing you do is open the Matplotlib example gallery and choose example closest to your case.
Step9: Maps ... using Basemap
Step10: Here we create netCDF variable objec for air (we would like to have acces to some of the attributes), but from lat and lon we import only data valies
Step11: Easiest way to look at the array is imshow
Step12: But we want some real map
Step13: Our coordinate variables are vectors
Step14: For the map we need 2d coordinate arrays. Convert lot lan to 2d
Step15: Import Basemap - library for plotting 2D data on maps
Step16: Create Basemap instance (with certain characteristics) and convert lon lat to map coordinates
Step17: Creating the map now is only two lines
Step18: We can make the map look prettier by adding couple of lines
Step19: You can change map characteristics by changin the Basemap instance
Step20: While the rest of the code might be the same
Step21: One more map exampe
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pylab as plt
import numpy as np
x = np.linspace(0,10,20)
y = x ** 2
plt.plot(x,y);
plt.plot(x, y, 'r--o')
plt.xlabel('x')
plt.ylabel('y')
plt.title('title');
plt.plot(x, y, 'r--o', x, y ** 1.1, 'bs', x, y ** 1.2, 'g^-' );
mu, sigma = 100, 15
x = mu + sigma * np.random.randn(10000)
# the histogram of the data
n, bins, patches = plt.hist(x, 50, normed=1, facecolor='g', alpha=0.75)
plt.xlabel('Smarts')
plt.ylabel('Probability')
plt.title('Histogram of IQ')
plt.text(60, .025, r'$\mu=100,\ \sigma=15$')
plt.axis([40, 160, 0, 0.03])
plt.grid(True)
with plt.xkcd():
x = np.linspace(0, 1)
y = np.sin(4 * np.pi * x) * np.exp(-5 * x)
plt.fill(x, y, 'r')
plt.grid(False)
n = np.array([0,1,2,3,4,5])
xx = np.linspace(-0.75, 1., 100)
x = np.linspace(0, 5, 10)
fig, axes = plt.subplots(1, 4, figsize=(12,3))
axes[0].scatter(xx, xx + 0.25*np.random.randn(len(xx)))
axes[1].step(n, n**2, lw=2)
axes[2].bar(n, n**2, align="center", width=0.5, alpha=0.5)
axes[3].fill_between(x, x**2, x**3, color="green", alpha=0.5);
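# Sketch: two common follow-ups -- tight_layout() fixes overlapping subplot labels and
# savefig() writes the figure to disk (the filename here is just an example).
fig.tight_layout()
fig.savefig("panels.png", dpi=150)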
# %load http://matplotlib.org/mpl_examples/pylab_examples/griddata_demo.py
from numpy.random import uniform, seed
from matplotlib.mlab import griddata
import matplotlib.pyplot as plt
import numpy as np
# make up data.
#npts = int(raw_input('enter # of random points to plot:'))
seed(0)
npts = 200
x = uniform(-2, 2, npts)
y = uniform(-2, 2, npts)
z = x*np.exp(-x**2 - y**2)
# define grid.
xi = np.linspace(-2.1, 2.1, 100)
yi = np.linspace(-2.1, 2.1, 200)
# grid the data.
zi = griddata(x, y, z, xi, yi, interp='linear')
# contour the gridded data, plotting dots at the nonuniform data points.
CS = plt.contour(xi, yi, zi, 15, linewidths=0.5, colors='k')
CS = plt.contourf(xi, yi, zi, 15, cmap=plt.cm.rainbow,
vmax=abs(zi).max(), vmin=-abs(zi).max())
plt.colorbar() # draw colorbar
# plot data points.
plt.scatter(x, y, marker='o', c='b', s=5, zorder=10)
plt.xlim(-2, 2)
plt.ylim(-2, 2)
plt.title('griddata test (%d points)' % npts)
plt.show()
from netCDF4 import Dataset
f =Dataset('air.sig995.2012.nc')
air = f.variables['air']
lat = f.variables['lat'][:]
lon = f.variables['lon'][:]
plt.imshow(air[0,:,:])
plt.colorbar();
air_c = air[:] - 273.15
lat.shape
lon2, lat2 = np.meshgrid(lon,lat)
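# Sketch: meshgrid expands the 1-D lon/lat vectors into matching 2-D arrays,
# one value per grid cell; both should have shape (len(lat), len(lon)).
print(lon2.shape, lat2.shape)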
from mpl_toolkits.basemap import Basemap
m = Basemap(projection='npstere',boundinglat=60,lon_0=0,resolution='l')
x, y = m(lon2, lat2)
m.drawcoastlines()
m.contourf(x,y,air_c[0,:,:])
fig = plt.figure(figsize=(15,7))
m.fillcontinents(color='gray',lake_color='gray')
m.drawcoastlines()
m.drawparallels(np.arange(-80.,81.,20.))
m.drawmeridians(np.arange(-180.,181.,20.))
m.drawmapboundary(fill_color='white')
m.contourf(x,y,air_c[0,:,:],40)
plt.title('Monthly mean SAT')
plt.colorbar()
m = Basemap(projection='ortho',lat_0=45,lon_0=-100,resolution='l')
x, y = m(lon2, lat2)
fig = plt.figure(figsize=(15,7))
#m.fillcontinents(color='gray',lake_color='gray')
m.drawcoastlines()
m.drawparallels(np.arange(-80.,81.,20.))
m.drawmeridians(np.arange(-180.,181.,20.))
m.drawmapboundary(fill_color='white')
cs = m.contourf(x,y,air_c[0,:,:],20)
plt.title('Monthly mean SAT')
m = Basemap(projection='cyl',llcrnrlat=-90,urcrnrlat=90,\
llcrnrlon=0,urcrnrlon=360,resolution='c')
x, y = m(lon2, lat2)
fig = plt.figure(figsize=(15,7))
#m.fillcontinents(color='gray',lake_color='gray')
m.drawcoastlines()
m.drawparallels(np.arange(-80.,81.,20.))
m.drawmeridians(np.arange(0.,360.,20.))
m.drawmapboundary(fill_color='white')
cs = m.contourf(x,y,air[0,:,:],20)
plt.title('Monthly mean SAT')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: To embed a website, IPython's display has support for showing an IFrame, where you can provide the address of a website
| <ASSISTANT_TASK:>
Python Code:
from IPython.display import YouTubeVideo
# a talk about IPython at Sage Days at U. Washington, Seattle.
# Video credit: William Stein.
YouTubeVideo('1j_HxD4iLn8')
from IPython.display import IFrame
IFrame('http://adessowiki.fee.unicamp.br/rnpi', width=700, height=350)
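# Sketch: the same display machinery handles other media types, e.g. images;
# 'figure.png' below is a hypothetical local file used only for illustration.
from IPython.display import Image
Image(filename='figure.png')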
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a id='pyenv'></a>
Step2: Creating a VE
Step3: Notice that the * is missing.
Step4: Automatically activating and deactivating VEs
| <ASSISTANT_TASK:>
Python Code:
%%bash
echo -n "System Python version: "
python --version
%%bash
which python
%%bash
echo -n "Trying to import pip-install-test ... "
python -c "
try:
import pip_install_test
except ModuleNotFoundError:
print('pip-install-test is not installed')
else:
print('pip-install-test is installed')"
%%bash
python -m venv /tmp/test_VE
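# Sketch: the same environment can also be created from Python itself with the
# standard-library venv module (Python 3.3+); the path below is just an example.
import venv
venv.create('/tmp/test_VE_py', with_pip=True)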
%%bash
tree /tmp/test_VE | head
ls -l /tmp/test_VE/lib/python3.8/site-packages/
%%bash
echo "Activating VE"
source /tmp/test_VE/bin/activate
echo "Python's path"
which python
ls -l `which python`
echo "Python's version"
python --version
echo "Trying to import pip-install-test"
python -c "
try:
import pip_install_test
except ModuleNotFoundError:
print('pip-install-test is not installed')
else:
print('pip-install-test is installed')"
echo "Installing pip-install-test"
pip install pip-install-test
echo "Trying to import pip-install-test"
python -c "
try:
import pip_install_test
except ModuleNotFoundError:
print('pip-install-test is not installed')
else:
print('pip-install-test is installed')"
echo "Deactivating VE"
deactivate
echo "Python's path"
which python
echo "Python's version"
python --version
echo "Trying to import pip-install-test"
python -c "
try:
import pip_install_test
except ModuleNotFoundError:
print('pip-install-test is not installed')
else:
print('pip-install-test is installed')"
echo "Deleting VE"
rm -rf /tmp/test_VE
%%bash
pyenv install 3.8.5
%%bash
pyenv versions
%%bash
pyenv which python
%%bash
pyenv global 3.8.5
%%bash
pyenv versions
%%bash
pyenv which python
%%bash
which python
%%bash
python --version
%%bash
echo "Trying to import pip-install-test"
python -c "
try:
import pip_install_test
except:
print('pip-install-test is not installed')
else:
print('pip-install-test is installed')"
echo "Which pip I'm using?"
which pip
echo "Installing pip-install-test"
pip install pip-install-test
echo "Trying to import pip-install-test"
python -c "
try:
import pip_install_test
except:
print('pip-install-test is not installed')
else:
print('pip-install-test is installed')"
%%bash
echo "Returning to System's Python"
pyenv global system
pyenv versions
echo "Pringing Python's version"
python --version
echo "Priting selected Python's version"
pyenv version
%%bash
pyenv uninstall -f 3.8.5
%%bash
pyenv install 3.8.5
%%bash
pyenv virtualenv 3.8.5 socket_programming__385
%%bash
pyenv virtualenvs
%%bash
eval "$(pyenv init -)" # <- This should be in .bashrc
pyenv activate socket_programming__385
pyenv virtualenvs
pyenv deactivate
%%bash
pyenv virtualenvs
%%bash
pyenv uninstall -f socket_programming__385
%%bash
echo "Showing versions of Python"
pyenv version
echo "Ensure the test VE does not exit"
pyenv uninstall -f my_python_project__system
echo "Create the test VE"
pyenv virtualenv 3.8.5 my_python_project__system
echo "Create and go into the test Python project"
rm -rf /tmp/my_python_project
mkdir /tmp/my_python_project
cd /tmp/my_python_project
echo "Hook the VE and the project"
pyenv local my_python_project__system
echo "Inside of the project the Python's version is ..."
python --version
echo "Outside of the project the Python's version is ..."
cd
python --version
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Manual analysis
Step2: The congressman who used the parliamentary quota the most totaled R\$ 516,027.24 in 2015, an average of slightly more than R$ 43,000.00 per month. Let's look at his largest expense
Step3: Is a payment of R$ 88,500.00 for publicizing parliamentary activity too high? Let's look at the 5 largest payments of this kind across all congressmen, ordered from largest to smallest
Step4: We then find that other members of congress spent even more to publicize their activities. At this point, your focus may have shifted from Jhonatan de Jesus's R\$ 88,500.00 to Arnaldo Faria de Sá's R\$ 189,600.00. Comparing the expenses in the table above, does the first place stand out enough for us to investigate that expense further? Note that we started with one idea
Step5: Notice that values around x = 0 are the most frequent and the frequency decreases toward the sides. You can find more details about the Normal Distribution on Wikipedia.
Step6: Quite different from the standard normal distribution, isn't it? Although invisible at this scale, there are a few very high expenses on the right. We also see many expenses close to zero and a sharp drop in the neighboring bar.
Step7: Now the expenses are much closer to the standard normal distribution. According to Andrew Ng, the distribution does not have to match the normal very closely to get good results. We can move on to the next step
Step8: The publicity expenses come first. The table above is the same as the last table of the manual approach and suffers from the same problem
Step9: NaN means that the value is missing from the data, but the name makes it easy to understand why.
Step10: The percentage is very close to the theoretical one. More than 99% of the food expenses are below R\$ 564.93, so it is no surprise that the expenses between 4 and 6 thousand are among the top 5 in the table above. Marllos Sampaio, for example, is part of the roughly 0.3\% who spent the most on food.
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd
ceap = pd.read_csv('dados/ceap2015.csv.zip')
linhas, colunas = ceap.shape
print('We have {} entries with {} columns each.'.format(linhas, colunas))
print('First entry:')
ceap.iloc[0]
colunas = ['txNomeParlamentar', 'sgPartido', 'sgUF', 'vlrLiquido']
grupo = ['txNomeParlamentar', 'sgPartido', 'sgUF']
ceap[colunas].groupby(grupo).sum().sort_values('vlrLiquido', ascending=False).head(3)
nome = "JHONATAN DE JESUS"
ceap[ceap.txNomeParlamentar == nome].sort_values('vlrLiquido', ascending=False).iloc[0]
colunas = ['vlrLiquido', 'txNomeParlamentar', 'sgPartido', 'sgUF', 'txtDescricao']
ceap.query('numSubCota == 5')[colunas].sort_values('vlrLiquido', ascending=False).head()
import matplotlib # plotting
import numpy as np # numerical computations
%matplotlib inline
matplotlib.style.use('ggplot')
positivos = ceap[ceap.vlrLiquido > 0].vlrLiquido
aleatorios = pd.Series(np.random.randn(len(positivos)), name='normal')
aleatorios.plot.hist(bins=75, ylim=(0, 35000));
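# Sketch: the "three-sigma rule" behind this kind of outlier analysis -- about 99.7%
# of a standard normal distribution lies within 3 standard deviations of the mean.
from scipy.stats import norm
print('P(|z| < 3) = {:.4f}'.format(norm.cdf(3) - norm.cdf(-3)))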
positivos.plot.hist(bins=75);
def log_zscores(valores):
positivos = valores[valores > 0].dropna()
logs = np.log(positivos)
return (logs - logs.mean()) / logs.std()
vlrLiquido_z = log_zscores(ceap.vlrLiquido)
pd.concat([aleatorios, vlrLiquido_z], axis=1).plot.hist(bins=75, alpha=0.6);
from scipy.stats import norm
def prob(valores):
probs = valores.copy()
probs[probs <= 0] = np.nan
z = log_zscores(probs)
probs[z.index] = norm.sf(z)
return probs
ceap['prob_geral'] = prob(ceap.vlrLiquido)
colunas = ['prob_geral', 'vlrLiquido', 'txNomeParlamentar', 'sgPartido', 'sgUF', 'txtDescricao']
ceap[colunas].sort_values('prob_geral').head()
colunas = ['numSubCota', 'vlrLiquido']
ceap['prob_grupo'] = ceap[colunas].groupby('numSubCota').transform(prob)
colunas = ['prob_grupo', 'vlrLiquido', 'txNomeParlamentar', 'sgPartido', 'sgUF', 'txtDescricao']
ceap[colunas].sort_values('prob_grupo').head()
alim = ceap.query('numSubCota == 13 and vlrLiquido > 0').vlrLiquido.dropna()
alim_log = np.log(alim)
média_log = alim_log.mean()
sigma_log = alim_log.std()
limite_log = média_log + 3 * sigma_log
limite = np.exp(limite_log)
print('Limit value = R$ {:.2f}:'.format(limite))
valores_abaixo = len(alim[alim < limite])
valores_totais = len(alim)
print('{} values below, out of a total of {} = {:.3f}%.'.format(
valores_abaixo, valores_totais, 100 * valores_abaixo/valores_totais))
ceap['prob_total'] = ceap.prob_geral * ceap.prob_grupo
colunas = ['prob_total', 'vlrLiquido', 'txNomeParlamentar', 'sgPartido', 'sgUF', 'txtDescricao']
ceap[colunas].sort_values('prob_total').head(10)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 2. Key Properties --> Resolution
Step9: 2.2. Canonical Horizontal Resolution
Step10: 2.3. Range Horizontal Resolution
Step11: 2.4. Number Of Vertical Levels
Step12: 2.5. High Top
Step13: 3. Key Properties --> Timestepping
Step14: 3.2. Timestep Shortwave Radiative Transfer
Step15: 3.3. Timestep Longwave Radiative Transfer
Step16: 4. Key Properties --> Orography
Step17: 4.2. Changes
Step18: 5. Grid --> Discretisation
Step19: 6. Grid --> Discretisation --> Horizontal
Step20: 6.2. Scheme Method
Step21: 6.3. Scheme Order
Step22: 6.4. Horizontal Pole
Step23: 6.5. Grid Type
Step24: 7. Grid --> Discretisation --> Vertical
Step25: 8. Dynamical Core
Step26: 8.2. Name
Step27: 8.3. Timestepping Type
Step28: 8.4. Prognostic Variables
Step29: 9. Dynamical Core --> Top Boundary
Step30: 9.2. Top Heat
Step31: 9.3. Top Wind
Step32: 10. Dynamical Core --> Lateral Boundary
Step33: 11. Dynamical Core --> Diffusion Horizontal
Step34: 11.2. Scheme Method
Step35: 12. Dynamical Core --> Advection Tracers
Step36: 12.2. Scheme Characteristics
Step37: 12.3. Conserved Quantities
Step38: 12.4. Conservation Method
Step39: 13. Dynamical Core --> Advection Momentum
Step40: 13.2. Scheme Characteristics
Step41: 13.3. Scheme Staggering Type
Step42: 13.4. Conserved Quantities
Step43: 13.5. Conservation Method
Step44: 14. Radiation
Step45: 15. Radiation --> Shortwave Radiation
Step46: 15.2. Name
Step47: 15.3. Spectral Integration
Step48: 15.4. Transport Calculation
Step49: 15.5. Spectral Intervals
Step50: 16. Radiation --> Shortwave GHG
Step51: 16.2. ODS
Step52: 16.3. Other Flourinated Gases
Step53: 17. Radiation --> Shortwave Cloud Ice
Step54: 17.2. Physical Representation
Step55: 17.3. Optical Methods
Step56: 18. Radiation --> Shortwave Cloud Liquid
Step57: 18.2. Physical Representation
Step58: 18.3. Optical Methods
Step59: 19. Radiation --> Shortwave Cloud Inhomogeneity
Step60: 20. Radiation --> Shortwave Aerosols
Step61: 20.2. Physical Representation
Step62: 20.3. Optical Methods
Step63: 21. Radiation --> Shortwave Gases
Step64: 22. Radiation --> Longwave Radiation
Step65: 22.2. Name
Step66: 22.3. Spectral Integration
Step67: 22.4. Transport Calculation
Step68: 22.5. Spectral Intervals
Step69: 23. Radiation --> Longwave GHG
Step70: 23.2. ODS
Step71: 23.3. Other Flourinated Gases
Step72: 24. Radiation --> Longwave Cloud Ice
Step73: 24.2. Physical Reprenstation
Step74: 24.3. Optical Methods
Step75: 25. Radiation --> Longwave Cloud Liquid
Step76: 25.2. Physical Representation
Step77: 25.3. Optical Methods
Step78: 26. Radiation --> Longwave Cloud Inhomogeneity
Step79: 27. Radiation --> Longwave Aerosols
Step80: 27.2. Physical Representation
Step81: 27.3. Optical Methods
Step82: 28. Radiation --> Longwave Gases
Step83: 29. Turbulence Convection
Step84: 30. Turbulence Convection --> Boundary Layer Turbulence
Step85: 30.2. Scheme Type
Step86: 30.3. Closure Order
Step87: 30.4. Counter Gradient
Step88: 31. Turbulence Convection --> Deep Convection
Step89: 31.2. Scheme Type
Step90: 31.3. Scheme Method
Step91: 31.4. Processes
Step92: 31.5. Microphysics
Step93: 32. Turbulence Convection --> Shallow Convection
Step94: 32.2. Scheme Type
Step95: 32.3. Scheme Method
Step96: 32.4. Processes
Step97: 32.5. Microphysics
Step98: 33. Microphysics Precipitation
Step99: 34. Microphysics Precipitation --> Large Scale Precipitation
Step100: 34.2. Hydrometeors
Step101: 35. Microphysics Precipitation --> Large Scale Cloud Microphysics
Step102: 35.2. Processes
Step103: 36. Cloud Scheme
Step104: 36.2. Name
Step105: 36.3. Atmos Coupling
Step106: 36.4. Uses Separate Treatment
Step107: 36.5. Processes
Step108: 36.6. Prognostic Scheme
Step109: 36.7. Diagnostic Scheme
Step110: 36.8. Prognostic Variables
Step111: 37. Cloud Scheme --> Optical Cloud Properties
Step112: 37.2. Cloud Inhomogeneity
Step113: 38. Cloud Scheme --> Sub Grid Scale Water Distribution
Step114: 38.2. Function Name
Step115: 38.3. Function Order
Step116: 38.4. Convection Coupling
Step117: 39. Cloud Scheme --> Sub Grid Scale Ice Distribution
Step118: 39.2. Function Name
Step119: 39.3. Function Order
Step120: 39.4. Convection Coupling
Step121: 40. Observation Simulation
Step122: 41. Observation Simulation --> Isscp Attributes
Step123: 41.2. Top Height Direction
Step124: 42. Observation Simulation --> Cosp Attributes
Step125: 42.2. Number Of Grid Points
Step126: 42.3. Number Of Sub Columns
Step127: 42.4. Number Of Levels
Step128: 43. Observation Simulation --> Radar Inputs
Step129: 43.2. Type
Step130: 43.3. Gas Absorption
Step131: 43.4. Effective Radius
Step132: 44. Observation Simulation --> Lidar Inputs
Step133: 44.2. Overlap
Step134: 45. Gravity Waves
Step135: 45.2. Sponge Layer
Step136: 45.3. Background
Step137: 45.4. Subgrid Scale Orography
Step138: 46. Gravity Waves --> Orographic Gravity Waves
Step139: 46.2. Source Mechanisms
Step140: 46.3. Calculation Method
Step141: 46.4. Propagation Scheme
Step142: 46.5. Dissipation Scheme
Step143: 47. Gravity Waves --> Non Orographic Gravity Waves
Step144: 47.2. Source Mechanisms
Step145: 47.3. Calculation Method
Step146: 47.4. Propagation Scheme
Step147: 47.5. Dissipation Scheme
Step148: 48. Solar
Step149: 49. Solar --> Solar Pathways
Step150: 50. Solar --> Solar Constant
Step151: 50.2. Fixed Value
Step152: 50.3. Transient Characteristics
Step153: 51. Solar --> Orbital Parameters
Step154: 51.2. Fixed Reference Date
Step155: 51.3. Transient Method
Step156: 51.4. Computation Method
Step157: 52. Solar --> Insolation Ozone
Step158: 53. Volcanos
Step159: 54. Volcanos --> Volcanoes Treatment
| <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ipsl', 'sandbox-3', 'atmos')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
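# Sketch of how the properties below are filled in (hypothetical example values):
# DOC.set_author("Jane Doe", "jane.doe@example.org")
# DOC.set_value("primitive equations")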
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "AGCM"
# "ARCM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "primitive equations"
# "non-hydrostatic"
# "anelastic"
# "Boussinesq"
# "hydrostatic"
# "quasi-hydrostatic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.resolution.high_top')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "modified"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.key_properties.orography.changes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "related to ice sheets"
# "related to tectonics"
# "modified mean"
# "modified variance if taken into account in model (cf gravity waves)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spectral"
# "fixed grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "finite elements"
# "finite volumes"
# "finite difference"
# "centered finite difference"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "second"
# "third"
# "fourth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "filter"
# "pole rotation"
# "artificial island"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gaussian"
# "Latitude-Longitude"
# "Cubed-Sphere"
# "Icosahedral"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "isobaric"
# "sigma"
# "hybrid sigma-pressure"
# "hybrid pressure"
# "vertically lagrangian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.timestepping_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Adams-Bashforth"
# "explicit"
# "implicit"
# "semi-implicit"
# "leap frog"
# "multi-step"
# "Runge Kutta fifth order"
# "Runge Kutta second order"
# "Runge Kutta third order"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface pressure"
# "wind components"
# "divergence/curl"
# "temperature"
# "potential temperature"
# "total water"
# "water vapour"
# "water liquid"
# "water ice"
# "total water moments"
# "clouds"
# "radiation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sponge layer"
# "radiation boundary condition"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "iterated Laplacian"
# "bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heun"
# "Roe and VanLeer"
# "Roe and Superbee"
# "Prather"
# "UTOPIA"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Eulerian"
# "modified Euler"
# "Lagrangian"
# "semi-Lagrangian"
# "cubic semi-Lagrangian"
# "quintic semi-Lagrangian"
# "mass-conserving"
# "finite volume"
# "flux-corrected"
# "linear"
# "quadratic"
# "quartic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "dry mass"
# "tracer mass"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Priestley algorithm"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "VanLeer"
# "Janjic"
# "SUPG (Streamline Upwind Petrov-Galerkin)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "2nd order"
# "4th order"
# "cell-centred"
# "staggered grid"
# "semi-staggered grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa D-grid"
# "Arakawa E-grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Angular momentum"
# "Horizontal momentum"
# "Enstrophy"
# "Mass"
# "Total energy"
# "Vorticity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "conservation fixer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.aerosols')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "sulphate"
# "nitrate"
# "sea salt"
# "dust"
# "ice"
# "organic"
# "BC (black carbon / soot)"
# "SOA (secondary organic aerosols)"
# "POM (particulate organic matter)"
# "polar stratospheric ice"
# "NAT (nitric acid trihydrate)"
# "NAD (nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particle)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "wide-band model"
# "correlated-k"
# "exponential sum fitting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "two-stream"
# "layer interaction"
# "bulk"
# "adaptive"
# "multi-stream"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CO2"
# "CH4"
# "N2O"
# "CFC-11 eq"
# "CFC-12 eq"
# "HFC-134a eq"
# "Explicit ODSs"
# "Explicit other fluorinated gases"
# "O3"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CFC-12"
# "CFC-11"
# "CFC-113"
# "CFC-114"
# "CFC-115"
# "HCFC-22"
# "HCFC-141b"
# "HCFC-142b"
# "Halon-1211"
# "Halon-1301"
# "Halon-2402"
# "methyl chloroform"
# "carbon tetrachloride"
# "methyl chloride"
# "methylene chloride"
# "chloroform"
# "methyl bromide"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HFC-134a"
# "HFC-23"
# "HFC-32"
# "HFC-125"
# "HFC-143a"
# "HFC-152a"
# "HFC-227ea"
# "HFC-236fa"
# "HFC-245fa"
# "HFC-365mfc"
# "HFC-43-10mee"
# "CF4"
# "C2F6"
# "C3F8"
# "C4F10"
# "C5F12"
# "C6F14"
# "C7F16"
# "C8F18"
# "c-C4F8"
# "NF3"
# "SF6"
# "SO2F2"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bi-modal size distribution"
# "ensemble of ice crystals"
# "mean projected area"
# "ice water path"
# "crystal asymmetry"
# "crystal aspect ratio"
# "effective crystal radius"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud droplet number concentration"
# "effective cloud droplet radii"
# "droplet size distribution"
# "liquid water path"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "geometric optics"
# "Mie theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Monte Carlo Independent Column Approximation"
# "Triplecloud"
# "analytic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "number concentration"
# "effective radii"
# "size distribution"
# "asymmetry"
# "aspect ratio"
# "mixing state"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "T-matrix"
# "geometric optics"
# "finite difference time domain (FDTD)"
# "Mie theory"
# "anomalous diffraction approximation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "scattering"
# "emission/absorption"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Mellor-Yamada"
# "Holtslag-Boville"
# "EDMF"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TKE prognostic"
# "TKE diagnostic"
# "TKE coupled with water"
# "vertical profile of Kz"
# "non-local diffusion"
# "Monin-Obukhov similarity"
# "Coastal Buddy Scheme"
# "Coupled with convection"
# "Coupled with gravity waves"
# "Depth capped at cloud base"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "adjustment"
# "plume ensemble"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "CAPE"
# "bulk"
# "ensemble"
# "CAPE/WFN based"
# "TKE/CIN based"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vertical momentum transport"
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "updrafts"
# "downdrafts"
# "radiative effect of anvils"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mass-flux"
# "cumulus-capped boundary layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "same as deep (unified)"
# "included in boundary layer turbulence"
# "separate diagnosis"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convective momentum transport"
# "entrainment"
# "detrainment"
# "penetrative convection"
# "re-evaporation of convective precipitation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "tuning parameter based"
# "single moment"
# "two moment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "liquid rain"
# "snow"
# "hail"
# "graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "mixed phase"
# "cloud droplets"
# "cloud ice"
# "ice nucleation"
# "water vapour deposition"
# "effect of raindrops"
# "effect of snow"
# "effect of graupel"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "atmosphere_radiation"
# "atmosphere_microphysics_precipitation"
# "atmosphere_turbulence_convection"
# "atmosphere_gravity_waves"
# "atmosphere_solar"
# "atmosphere_volcano"
# "atmosphere_cloud_simulator"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "entrainment"
# "detrainment"
# "bulk cloud"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "cloud amount"
# "liquid"
# "ice"
# "rain"
# "snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "random"
# "maximum"
# "maximum-random"
# "exponential"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "coupled with deep"
# "coupled with shallow"
# "not coupled with convection"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "no adjustment"
# "IR brightness"
# "visible optical depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "lowest altitude level"
# "highest altitude level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Inline"
# "Offline"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "surface"
# "space borne"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ice spheres"
# "ice non-spherical"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "max"
# "random"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.sponge_layer')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rayleigh friction"
# "Diffusive sponge layer"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "continuous spectrum"
# "discrete spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "effect on drag"
# "effect on lifting"
# "enhanced topography"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear mountain waves"
# "hydraulic jump"
# "envelope orography"
# "low level flow blocking"
# "statistical sub-grid scale variance"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "non-linear calculation"
# "more than two cardinal directions"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "includes boundary layer ducting"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "convection"
# "precipitation"
# "background spectrum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "spatially dependent"
# "temporally dependent"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "linear theory"
# "non-linear theory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "total wave"
# "single wave"
# "spectral"
# "linear"
# "wave saturation vs Richardson number"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_pathways.pathways')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "SW radiation"
# "precipitating energetic particles"
# "cosmic rays"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "transient"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Berger 1978"
# "Laskar 2004"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "high frequency solar constant anomaly"
# "stratospheric aerosols optical thickness"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Annotated Data
Step2: Params
Step3: Prep Data
Step4: Sklearn Experiments
Step5: No tfidf
Step6: With tfidf
Step7: TFIDF improves the ROC score for both types of ngram models although it gives a bigger boost for the char-ngram models.
Step8: LSTM
Step9: Conv LSTM
| <ASSISTANT_TASK:>
Python Code:
def get_best_estimator(cv):
params = cv.best_params_
model = cv.estimator
model = model.set_params(**params)
return model
def save_best_estimator(cv, directory, name):
model = get_best_estimator(cv)
save_pipeline(model, directory, name)
task = 'attack'
data = load_comments_and_labels(task)
path = '../../models/cv/'
n_max = 10000000
n_iter = 15
X_train, y_train_ohv = assemble_data(data, 'comments', 'plurality', splits = ['train'])
X_dev, y_dev_ohv = assemble_data(data, 'comments', 'plurality', splits = ['dev'])
_, y_train_ed = assemble_data(data, 'comments', 'empirical_dist', splits = ['train'])
_, y_dev_ed = assemble_data(data, 'comments', 'empirical_dist', splits = ['dev'])
y_train_ohm = one_hot(y_train_ed)
y_dev_ohm = one_hot(y_dev_ed)
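# Label variants: 'ohv' = one-hot labels from the plurality vote,
# 'ed' = empirical (soft) label distributions, and 'ohm' = one-hot labels
# derived from those empirical distributions with one_hot().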
X_train = X_train[:n_max]
X_dev = X_dev[:n_max]
y_train_ohv = y_train_ohv[:n_max]
y_dev_ohv = y_dev_ohv[:n_max]
y_train_ed = y_train_ed[:n_max]
y_dev_ed = y_dev_ed[:n_max]
y_train_ohm = y_train_ohm[:n_max]
y_dev_ohm = y_dev_ohm[:n_max]
results_list = []
max_features = (5000, 10000, 50000, 100000)
C = (0.0001, 0.001, 0.01, 0.1, 1, 10)
alg = Pipeline([
('vect', CountVectorizer()),
('clf', LogisticRegression()),
])
# linear char-gram, no tfidf
param_grid = {
'vect__max_features': max_features,
'vect__ngram_range': ((1,5),),
'vect__analyzer' : ('char',),
'clf__C' : C,
}
m = tune (X_train, y_train_ohv, X_dev, y_dev_ohv, alg, param_grid, n_iter, roc_scorer, n_jobs = 6, verbose = True)
# linear word-gram, no tfidf
param_grid = {
'vect__max_features': max_features,
'vect__ngram_range': ((1,2),),
'vect__analyzer' : ('word',),
'clf__C' : C,
}
m = tune (X_train, y_train_ohv, X_dev, y_dev_ohv, alg, param_grid, n_iter, roc_scorer, n_jobs = 6, verbose = True)
alg = Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', LogisticRegression()),
])
# linear char-gram, tfidf
param_grid = {
'vect__max_features': max_features,
'vect__ngram_range': ((1,5),),
'vect__analyzer' : ('char',),
'tfidf__sublinear_tf' : (True, False),
'tfidf__norm' : (None, 'l2'),
'clf__C' : C,
}
m = tune (X_train, y_train_ohv, X_dev, y_dev_ohv, alg, param_grid, n_iter, roc_scorer, n_jobs = 6, verbose = True)
# linear word-gram, tfidf
param_grid = {
'vect__max_features': max_features,
'vect__ngram_range': ((1,2),),
'vect__analyzer' : ('word',),
'tfidf__sublinear_tf' : (True, False),
'tfidf__norm' : (None, 'l2'),
'clf__C' : C,
}
m = tune (X_train, y_train_ohv, X_dev, y_dev_ohv, alg, param_grid, n_iter, roc_scorer, n_jobs = 6, verbose = True)
alg = Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('to_dense', DenseTransformer()),
('clf', KerasClassifier(build_fn=make_mlp, output_dim = 2, verbose=False)),
])
dependencies = [( 'vect__max_features', 'clf__input_dim')]
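# 'dependencies' ties the sampled vectorizer vocabulary size to the network's
# input dimension so each sampled configuration stays internally consistent.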
char_vec_params = {
'vect__max_features': (5000, 10000, 30000),
'vect__ngram_range': ((1,5),),
'vect__analyzer' : ('char',)
}
word_vect_params = {
'vect__max_features': (5000, 10000, 30000),
'vect__ngram_range': ((1,2),),
'vect__analyzer' : ('word',)
}
tfidf_params = {
'tfidf__sublinear_tf' : (True, False),
'tfidf__norm' : ('l2',),
}
linear_clf_params = {
'clf__alpha' : (0.000000001, 0.0000001, 0.00001, 0.001, 0.01),
'clf__hidden_layer_sizes' : ((),),
'clf__nb_epoch' : (2,4,8,16),
'clf__batch_size': (200,)
}
mlp_clf_params = {
'clf__alpha' : (0.000000001, 0.0000001, 0.00001, 0.001, 0.01),
'clf__hidden_layer_sizes' : ((50,), (50, 50), (50, 50, 50)),
'clf__nb_epoch' : (2,4,8,16),
'clf__batch_size': (200,)
}
for model in ['linear', 'mlp']:
for gram in ['word', 'char']:
for label in ['oh', 'ed']:
params = {}
if model == 'linear':
params.update(linear_clf_params)
else:
params.update(mlp_clf_params)
params.update(tfidf_params)
if gram == 'char':
params.update(char_vec_params)
else:
params.update(word_vect_params)
if label == 'oh':
y_train = y_train_ohm
y_dev = y_dev_ohm
else:
y_train = y_train_ed
y_dev = y_dev_ed
print('\n\n\n %s %s %s' % (model, gram, label))
cv = tune (X_train, y_train, X_dev, y_dev,
alg, params,
n_iter,
roc_scorer,
n_jobs = 1,
verbose = True,
dependencies = dependencies)
save_best_estimator(cv, path, '%s_%s_%s' % (model, gram, label))
est = get_best_estimator(cv)
est.fit(X_train, y_train)
best_spearman = spearman_scorer(est, X_dev, y_dev_ed) * 100
print ("\n best spearman: ", best_spearman)
best_roc = max(cv.grid_scores_, key=lambda x: x[1])[1] * 100
print ("\n best roc: ", best_roc)
results_list.append({'model_type': model,
'ngram_type': gram,
'label_type' : label,
'cv': cv.grid_scores_,
'best_roc': round(best_roc, 3),
'best_spearman': round(best_spearman, 3)
})
results_df = pd.DataFrame(results_list)
results_df
grid_scores = results_df['cv'][0]
grid_scores[0].mean_validation_score
max(grid_scores, key = lambda x: x.mean_validation_score).parameters
import json
def get_best_params(grid_scores):
return json.dumps(max(grid_scores, key = lambda x: x.mean_validation_score).parameters)
results_df['best_params'] = results_df['cv'].apply(get_best_params)
results_df.to_csv('cv_results.csv')
alg = Pipeline([
('seq', SequenceTransformer()),
('clf', KerasClassifier(build_fn=make_lstm, output_dim = 2, verbose=True)),
])
dependencies = [( 'seq__max_features', 'clf__max_features'),
( 'seq__max_len', 'clf__max_len')]
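# Keep the sequence transformer's vocabulary size and padded length in sync
# with the corresponding embedding parameters of the LSTM classifier.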
word_seq_params = {
'seq__max_features' : (5000, 10000, 30000),
'seq__max_len' : (100, 200, 500),
'seq__analyzer' : ('word',)
}
char_seq_params = {
'seq__max_features' : (100,),
'seq__max_len' : (200, 500, 1000),
'seq__analyzer' : ('char',)
}
clf_params = {
'clf__dropout' : (0.1, 0.2, 0.4),
'clf__embedding_size' : (64, 128),
'clf__lstm_output_size': (64, 128),
'clf__nb_epoch' : (2,3,4),
'clf__batch_size': (200,)
}
from pprint import pprint
model = 'lstm'
for gram in ['word', 'char']:
for label in ['oh', 'ed']:
params = {}
params.update(clf_params)
if gram == 'char':
params.update(char_seq_params)
else:
params.update(word_seq_params)
if label == 'oh':
y_train = y_train_ohm
y_dev = y_dev_ohm
else:
y_train = y_train_ed
y_dev = y_dev_ed
pprint(params)
print('\n\n\n %s %s %s' % (model, gram, label))
cv = tune (X_train, y_train, X_dev, y_dev,
alg, params,
n_iter,
roc_scorer,
n_jobs = 1,
verbose = True,
dependencies = dependencies)
save_best_estimator(cv, path, '%s_%s_%s' % (model, gram, label))
est = get_best_estimator(cv)
est.fit(X_train, y_train)
best_spearman = spearman_scorer(est, X_dev, y_dev_ed) * 100
print ("\n best spearman: ", best_spearman)
best_roc = max(cv.grid_scores_, key=lambda x: x[1])[1] * 100
print ("\n best roc: ", best_roc)
results_list.append({'model_type': model,
'ngram_type': gram,
'label_type' : label,
'cv': cv.grid_scores_,
'best_roc': round(best_roc, 3),
'best_spearman': round(best_spearman, 3)
})
alg = Pipeline([
('seq', SequenceTransformer()),
('clf', KerasClassifier(build_fn=make_conv_lstm, output_dim = 2, verbose=True)),
])
dependencies = [( 'seq__max_features', 'clf__max_features'),
( 'seq__max_len', 'clf__max_len')]
word_seq_params = {
'seq__max_features' : (5000, 10000, 30000),
'seq__max_len' : (100, 200, 500),
'seq__analyzer' : ('word',),
'clf__filter_length': (2, 4, 6),
'clf__pool_length' : (2, 4, 6)
}
char_seq_params = {
'seq__max_features' : (100,),
'seq__max_len' : (200, 500, 1000),
'seq__analyzer' : ('char',),
'clf__filter_length': (5, 10, 15),
'clf__pool_length' : (5, 10, 15)
}
clf_params = {
'clf__dropout' : (0.1, 0.2, 0.4),
'clf__embedding_size' : (64, 128),
'clf__lstm_output_size': (64, 128),
'clf__nb_epoch' : (2,3,4),
'clf__batch_size': (200,),
'clf__nb_filter' : (64, 128),
}
model = 'conv_lstm'
for gram in ['word', 'char']:
for label in ['oh', 'ed']:
params = {}
params.update(clf_params)
if gram == 'char':
params.update(char_seq_params)
else:
params.update(word_seq_params)
if label == 'oh':
y_train = y_train_ohm
y_dev = y_dev_ohm
else:
y_train = y_train_ed
y_dev = y_dev_ed
pprint(params)
print('\n\n\n %s %s %s' % (model, gram, label))
cv = tune (X_train, y_train, X_dev, y_dev,
alg, params,
n_iter,
roc_scorer,
n_jobs = 1,
verbose = True,
dependencies = dependencies)
save_best_estimator(cv, path, '%s_%s_%s' % (model, gram, label))
est = get_best_estimator(cv)
est.fit(X_train, y_train)
best_spearman = spearman_scorer(est, X_dev, y_dev_ed) * 100
print ("\n best spearman: ", best_spearman)
best_roc = max(cv.grid_scores_, key=lambda x: x[1])[1] * 100
print ("\n best roc: ", best_roc)
results_list.append({'model_type': model,
'ngram_type': gram,
'label_type' : label,
'cv': cv.grid_scores_,
'best_roc': round(best_roc, 3),
'best_spearman': round(best_spearman, 3)
})
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lower Dropout
Step2: It's definitely overfitting.
Step3: It takes longer to reach a slightly lower validation score, but does not overfit.
| <ASSISTANT_TASK:>
Python Code:
import pylearn2.utils
import pylearn2.config
import theano
import neukrill_net.dense_dataset
import neukrill_net.utils
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import holoviews as hl
%load_ext holoviews.ipython
import sklearn.metrics
m = pylearn2.utils.serial.load(
"/disk/scratch/neuroglycerin/models/8aug_extra_layers0p8_recent.pkl")
nll_channels = [c for c in m.monitor.channels.keys() if 'nll' in c]
def make_curves(model, *args):
curves = None
for c in args:
channel = model.monitor.channels[c]
c = c[0].upper() + c[1:]
if not curves:
curves = hl.Curve(zip(channel.example_record,channel.val_record),group=c)
else:
curves += hl.Curve(zip(channel.example_record,channel.val_record),group=c)
return curves
make_curves(m,*nll_channels)
mh = pylearn2.utils.serial.load(
"/disk/scratch/neuroglycerin/models/8aug_extra_layers0p5_recent.pkl")
make_curves(mh,*nll_channels)
cl = m.monitor.channels['valid_y_nll']
ch = mh.monitor.channels['valid_y_nll']
compare = []
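# For each point on the first model's valid_y_nll curve, record how many extra
# training examples the comparison model needs before its valid_y_nll drops
# below that value (clamped at zero).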
for t,v in zip(cl.example_record,cl.val_record):
for t2,v2 in zip(ch.example_record,ch.val_record):
if v2 < v:
compare.append((float(v),np.max([t2-t,0])))
break
plt.plot(*zip(*compare))
plt.xlabel("valid_y_nll")
plt.ylabel("time difference")
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This code sets up everything we need
Step2: Put your code below this!
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.display import display, clear_output
def calc_total_distance(table_of_distances, city_order):
'''
Calculates distances between a sequence of cities.
Inputs: N x N table containing distances between each pair of the N
cities, as well as an array of length N+1 containing the city order,
which starts and ends with the same city (ensuring that the path is
closed)
Returns: total path length for the closed loop.
'''
total_distance = 0.0
# loop over cities and sum up the path length between successive pairs
for i in range(city_order.size-1):
total_distance += table_of_distances[city_order[i]][city_order[i+1]]
return total_distance
def plot_cities(city_order,city_x,city_y):
'''
Plots cities and the path between them.
Inputs: ordering of cities, x and y coordinates of each city.
Returns: a plot showing the cities and the path between them.
'''
# first make x,y arrays
x = []
y = []
# put together arrays of x and y positions that show the order that the
# salesman traverses the cities
for i in range(0, city_order.size):
x.append(city_x[city_order[i]])
y.append(city_y[city_order[i]])
# append the first city onto the end so the loop is closed
x.append(city_x[city_order[0]])
y.append(city_y[city_order[0]])
#time.sleep(0.1)
clear_output(wait=True)
display(fig) # Reset display
fig.clear() # clear output for animation
plt.xlim(-0.2, 20.2) # give a little space around the edges of the plot
plt.ylim(-0.2, 20.2)
# plot city positions in blue, and path in red.
plt.plot(city_x,city_y, 'bo', x, y, 'r-')
# number of cities we'll use.
number_of_cities = 30
# seed for random number generator so we get the same value every time!
np.random.seed(2024561414)
# create random x,y positions for our current number of cities. (Distance scaling is arbitrary.)
city_x = np.random.random(size=number_of_cities)*20.0
city_y = np.random.random(size=number_of_cities)*20.0
# table of city distances - empty for the moment
city_distances = np.zeros((number_of_cities,number_of_cities))
# calculate distnace between each pair of cities and store it in the table.
# technically we're calculating 2x as many things as we need (as well as the
# diagonal, which should all be zeros), but whatever, it's cheap.
for a in range(number_of_cities):
for b in range(number_of_cities):
city_distances[a][b] = ((city_x[a]-city_x[b])**2 + (city_y[a]-city_y[b])**2 )**0.5
# create the array of cities in the order we're going to go through them
city_order = np.arange(city_distances.shape[0])
# tack on the first city to the end of the array, since that ensures a closed loop
city_order = np.append(city_order, city_order[0])
fig = plt.figure()
# Put your code here!
# number of steps we'll take
N_steps = 1000
step = [0]
distance = [calc_total_distance(city_distances,city_order)]
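# Greedy hill-climbing: repeatedly propose swapping two randomly chosen interior
# cities (the fixed start/end city is excluded) and keep the new ordering only
# if it shortens the total path length.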
for i in range(N_steps):
swap1 = np.random.randint(1,city_order.shape[0]-2)
swap2 = np.random.randint(1,city_order.shape[0]-2)
orig_distance = calc_total_distance(city_distances,city_order)
new_city_order = np.copy(city_order)
hold = new_city_order[swap1]
new_city_order[swap1] = new_city_order[swap2]
new_city_order[swap2] = hold
new_distance = calc_total_distance(city_distances,new_city_order)
if new_distance < orig_distance:
city_order = np.copy(new_city_order)
step.append(i)
distance.append(new_distance)
plot_cities(city_order,city_x,city_y)
plt.plot(step,distance)
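# The recorded distance only ever decreases, since a swap is kept only when it
# improves on the current total path length.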
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps
Step2: Inline Question #1
Step3: You should expect to see approximately 27% accuracy. Now let's try out a larger k, say k = 5
Step5: You should expect to see a slightly better performance than with k = 1.
Step6: Cross-validation
| <ASSISTANT_TASK:>
Python Code:
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print X_train.shape, X_test.shape
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print dists.shape
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Let's compare how fast the implementations are
def time_function(f, *args):
"""Call a function f with args and return the time (in seconds) that it took to execute."""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print 'Two loop version took %f seconds' % two_loop_time
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print 'One loop version took %f seconds' % one_loop_time
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print 'No loop version took %f seconds' % no_loop_time
# you should see significantly faster performance with the fully vectorized implementation
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = []
y_train_folds = []
################################################################################
# TODO: #
# Split up the training data into folds. After splitting, X_train_folds and #
# y_train_folds should each be lists of length num_folds, where #
# y_train_folds[i] is the label vector for the points in X_train_folds[i]. #
# Hint: Look up the numpy array_split function. #
################################################################################
from sklearn.cross_validation import KFold
X_val_folds = []
y_val_folds = []
kf = KFold(y_train.shape[0], n_folds=num_folds)
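# Note: rather than storing the raw folds, we store for each fold the training
# portion (all other folds) and its held-out validation portion, using the
# index pairs yielded by the old sklearn.cross_validation.KFold API.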
for train_index, val_index in kf:
X_train_fold, X_val_fold = X_train[train_index], X_train[val_index]
y_train_fold, y_val_fold = y_train[train_index], y_train[val_index]
X_train_folds.append(X_train_fold)
y_train_folds.append(y_train_fold)
X_val_folds.append(X_val_fold)
y_val_folds.append(y_val_fold)
################################################################################
# END OF YOUR CODE #
################################################################################
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
################################################################################
# TODO: #
# Perform k-fold cross validation to find the best value of k. For each #
# possible value of k, run the k-nearest-neighbor algorithm num_folds times, #
# where in each case you use all but one of the folds as training data and the #
# last fold as a validation set. Store the accuracies for all fold and all #
# values of k in the k_to_accuracies dictionary. #
################################################################################
k_to_accuracies = {}
classifier = KNearestNeighbor()
for k in k_choices:
for n_fold in range(num_folds):
classifier.train(X_train_folds[n_fold], y_train_folds[n_fold])
dists = classifier.compute_distances_no_loops(X_val_folds[n_fold])
y_val_pred = classifier.predict_labels(dists, k=k)
num_correct = np.sum(y_val_pred == y_val_folds[n_fold])
accuracy = float(num_correct) / y_val_folds[n_fold].shape[0]
if not k in k_to_accuracies:
k_to_accuracies[k] = []
k_to_accuracies[k].append(accuracy)
print "k = {}".format(k)
print 'Got %d / %d correct => accuracy: %f' % (num_correct, y_val_folds[n_fold].shape[0], accuracy)
################################################################################
# END OF YOUR CODE #
################################################################################
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print 'k = %d, accuracy = %f' % (k, accuracy)
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())])
accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
max_index = np.argmax(accuracies_mean)
print "Best k = {}, maximum value ={}".format(k_choices[max_index], accuracies_mean[max_index])
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 10
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note
Step2: Now, let's start with the ANTs normalization workflow!
Step3: Experiment parameters (ANTs)
Step4: Note that if you're not using the corresponding docker image, then the template file might not be in your data directory. To get mni_icbm152_nlin_asym_09c, either download it from this website, unpack it and move it to /data/ds000114/derivatives/fmriprep/ or run the following command in a cell
Step5: Specify input & output stream (ANTs)
Step6: Specify Workflow (ANTs)
Step7: Visualize the workflow (ANTs)
Step8: Run the Workflow (ANTs)
Step9: Normalization with SPM12
Step10: Experiment parameters (SPM12)
Step11: Specify Nodes (SPM12)
Step12: Specify input & output stream (SPM12)
Step13: Specify Workflow (SPM12)
Step14: Visualize the workflow (SPM12)
Step15: Run the Workflow (SPM12)
Step16: Comparison between ANTs and SPM normalization
Step17: First, let's compare the normalization of the anatomical images
Step18: And what about the contrast images for Finger > others?
| <ASSISTANT_TASK:>
Python Code:
%%bash
datalad get -J 4 -d /data/ds000114 /data/ds000114/derivatives/fmriprep/sub-0[2345789]/anat/*h5
!ls /data/ds000114/derivatives/fmriprep/sub-*/anat/*h5
from os.path import join as opj
from nipype import Workflow, Node, MapNode
from nipype.interfaces.ants import ApplyTransforms
from nipype.interfaces.utility import IdentityInterface
from nipype.interfaces.io import SelectFiles, DataSink
from nipype.interfaces.fsl import Info
experiment_dir = '/output'
output_dir = 'datasink'
working_dir = 'workingdir'
# list of subject identifiers (remember we use only right handed subjects)
subject_list = ['02', '03', '04', '05', '07', '08', '09']
# task name
task_name = "fingerfootlips"
# Smoothing widths used during preprocessing
fwhm = [4, 8]
# Template to normalize to
template = '/data/ds000114/derivatives/fmriprep/mni_icbm152_nlin_asym_09c/1mm_T1.nii.gz'
# Apply Transformation - applies the normalization matrix to contrast images
apply2con = MapNode(ApplyTransforms(args='--float',
input_image_type=3,
interpolation='BSpline',
invert_transform_flags=[False],
num_threads=1,
reference_image=template,
terminal_output='file'),
name='apply2con', iterfield=['input_image'])
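# MapNode runs ApplyTransforms once per contrast image (iterfield='input_image'),
# reusing the same subject-to-MNI transform for each of them.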
# Infosource - a function free node to iterate over the list of subject names
infosource = Node(IdentityInterface(fields=['subject_id', 'fwhm_id']),
name="infosource")
infosource.iterables = [('subject_id', subject_list),
('fwhm_id', fwhm)]
# SelectFiles - to grab the data (alternative to DataGrabber)
templates = {'con': opj(output_dir, '1stLevel',
'sub-{subject_id}/fwhm-{fwhm_id}', '???_00??.nii'),
'transform': opj('/data/ds000114/derivatives/fmriprep/', 'sub-{subject_id}', 'anat',
'sub-{subject_id}_t1w_space-mni152nlin2009casym_warp.h5')}
selectfiles = Node(SelectFiles(templates,
base_directory=experiment_dir,
sort_filelist=True),
name="selectfiles")
# Datasink - creates output folder for important outputs
datasink = Node(DataSink(base_directory=experiment_dir,
container=output_dir),
name="datasink")
# Use the following DataSink output substitutions
substitutions = [('_subject_id_', 'sub-')]
subjFolders = [('_fwhm_id_%ssub-%s' % (f, sub), 'sub-%s_fwhm%s' % (sub, f))
for f in fwhm
for sub in subject_list]
subjFolders += [('_apply2con%s/' % (i), '') for i in range(9)] # number of contrasts used in the 1st-level analysis
substitutions.extend(subjFolders)
datasink.inputs.substitutions = substitutions
# Initiation of the ANTs normalization workflow
antsflow = Workflow(name='antsflow')
antsflow.base_dir = opj(experiment_dir, working_dir)
# Connect up the ANTs normalization components
antsflow.connect([(infosource, selectfiles, [('subject_id', 'subject_id'),
('fwhm_id', 'fwhm_id')]),
(selectfiles, apply2con, [('con', 'input_image'),
('transform', 'transforms')]),
(apply2con, datasink, [('output_image', 'norm_ants.@con')]),
])
# Create ANTs normalization graph
antsflow.write_graph(graph2use='colored', format='png', simple_form=True)
# Visualize the graph
from IPython.display import Image
Image(filename=opj(antsflow.base_dir, 'antsflow', 'graph.png'))
antsflow.run('MultiProc', plugin_args={'n_procs': 4})
from os.path import join as opj
from nipype.interfaces.spm import Normalize12
from nipype.interfaces.utility import IdentityInterface
from nipype.interfaces.io import SelectFiles, DataSink
from nipype.algorithms.misc import Gunzip
from nipype import Workflow, Node
experiment_dir = '/output'
output_dir = 'datasink'
working_dir = 'workingdir'
# list of subject identifiers
subject_list = ['02', '03', '04', '05', '07', '08', '09']
# task name
task_name = "fingerfootlips"
# Smoothing widths used during preprocessing
fwhm = [4, 8]
template = '/opt/spm12-r7219/spm12_mcr/spm12/tpm/TPM.nii'
# Gunzip - unzip the anatomical image
gunzip = Node(Gunzip(), name="gunzip")
# Normalize - normalizes functional and structural images to the MNI template
normalize = Node(Normalize12(jobtype='estwrite',
tpm=template,
write_voxel_sizes=[1, 1, 1]),
name="normalize")
# Infosource - a function free node to iterate over the list of subject names
infosource = Node(IdentityInterface(fields=['subject_id', 'fwhm_id']),
name="infosource")
infosource.iterables = [('subject_id', subject_list),
('fwhm_id', fwhm)]
# SelectFiles - to grab the data (alternative to DataGrabber)
templates = {'con': opj(output_dir, '1stLevel',
'sub-{subject_id}/fwhm-{fwhm_id}', '???_00??.nii'),
'anat': opj('/data/ds000114/derivatives', 'fmriprep', 'sub-{subject_id}',
'anat', 'sub-{subject_id}_t1w_preproc.nii.gz')}
selectfiles = Node(SelectFiles(templates,
base_directory=experiment_dir,
sort_filelist=True),
name="selectfiles")
# Datasink - creates output folder for important outputs
datasink = Node(DataSink(base_directory=experiment_dir,
container=output_dir),
name="datasink")
# Use the following DataSink output substitutions
substitutions = [('_subject_id_', 'sub-')]
subjFolders = [('_fwhm_id_%ssub-%s' % (f, sub), 'sub-%s_fwhm%s' % (sub, f))
for f in fwhm
for sub in subject_list]
substitutions.extend(subjFolders)
datasink.inputs.substitutions = substitutions
# Specify Normalization-Workflow & Connect Nodes
spmflow = Workflow(name='spmflow')
spmflow.base_dir = opj(experiment_dir, working_dir)
# Connect up SPM normalization components
spmflow.connect([(infosource, selectfiles, [('subject_id', 'subject_id'),
('fwhm_id', 'fwhm_id')]),
(selectfiles, normalize, [('con', 'apply_to_files')]),
(selectfiles, gunzip, [('anat', 'in_file')]),
(gunzip, normalize, [('out_file', 'image_to_align')]),
(normalize, datasink, [('normalized_files', 'norm_spm.@files'),
('normalized_image', 'norm_spm.@image'),
]),
])
# Create SPM normalization graph
spmflow.write_graph(graph2use='colored', format='png', simple_form=True)
# Visualize the graph
from IPython.display import Image
Image(filename=opj(spmflow.base_dir, 'spmflow', 'graph.png'))
spmflow.run('MultiProc', plugin_args={'n_procs': 4})
from nilearn.plotting import plot_stat_map
%matplotlib inline
anatimg = '/data/ds000114/derivatives/fmriprep/mni_icbm152_nlin_asym_09c/1mm_T1.nii.gz'
plot_stat_map(
'/data/ds000114/derivatives/fmriprep/sub-02/anat/sub-02_t1w_space-mni152nlin2009casym_preproc.nii.gz',
title='anatomy - ANTs (normalized to ICBM152)', bg_img=anatimg,
threshold=200, display_mode='ortho', cut_coords=(-50, 0, -10));
plot_stat_map(
'/output/datasink/norm_spm/sub-02_fwhm4/wsub-02_t1w_preproc.nii',
title='anatomy - SPM (normalized to SPM\'s TPM)', bg_img=anatimg,
threshold=200, display_mode='ortho', cut_coords=(-50, 0, -10));
plot_stat_map(
'/output/datasink/norm_ants/sub-02_fwhm8/con_0005_trans.nii', title='contrast5 - fwhm=8 - ANTs',
bg_img=anatimg, threshold=2, vmax=5, display_mode='ortho', cut_coords=(-39, -37, 56));
plot_stat_map(
'/output/datasink/norm_spm/sub-02_fwhm8/wcon_0005.nii', title='contrast5 - fwhm=8 - SPM',
bg_img=anatimg, threshold=2, vmax=5, display_mode='ortho', cut_coords=(-39, -37, 56));
from nilearn.plotting import plot_glass_brain
plot_glass_brain(
'/output/datasink/norm_ants/sub-02_fwhm8/con_0005_trans.nii', colorbar=True,
threshold=3, display_mode='lyrz', black_bg=True, vmax=6, title='contrast5 - fwhm=8 - ANTs')
plot_glass_brain(
'/output/datasink/norm_spm/sub-02_fwhm8/wcon_0005.nii', colorbar=True,
threshold=3, display_mode='lyrz', black_bg=True, vmax=6, title='contrast5 - fwhm=8 - SPM');
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <h2> Input </h2>
Step2: <h2> Create features out of input data </h2>
Step3: <h2> train_and_evaluate </h2>
| <ASSISTANT_TASK:>
Python Code:
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.6
import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
CSV_COLUMNS = ['fare_amount', 'pickuplon','pickuplat','dropofflon','dropofflat','passengers', 'key']
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [-74.0], [40.0], [-74.0], [40.7], [1.0], ['nokey']]
def read_dataset(filename, mode, batch_size = 512):
def _input_fn():
def decode_csv(value_column):
columns = tf.compat.v1.decode_csv(value_column, record_defaults = DEFAULTS)
features = dict(zip(CSV_COLUMNS, columns))
label = features.pop(LABEL_COLUMN)
return features, label
# Create list of files that match pattern
file_list = tf.compat.v1.gfile.Glob(filename)
# Create dataset from file list
dataset = tf.compat.v1.data.TextLineDataset(file_list).map(decode_csv)
if mode == tf.estimator.ModeKeys.TRAIN:
num_epochs = None # indefinitely
dataset = dataset.shuffle(buffer_size = 10 * batch_size)
else:
num_epochs = 1 # end-of-input after this
dataset = dataset.repeat(num_epochs).batch(batch_size)
return dataset.make_one_shot_iterator().get_next()
return _input_fn
INPUT_COLUMNS = [
tf.feature_column.numeric_column('pickuplon'),
tf.feature_column.numeric_column('pickuplat'),
tf.feature_column.numeric_column('dropofflat'),
tf.feature_column.numeric_column('dropofflon'),
tf.feature_column.numeric_column('passengers'),
]
def add_more_features(feats):
# Nothing to add (yet!)
return feats
feature_cols = add_more_features(INPUT_COLUMNS)
def serving_input_fn():
feature_placeholders = {
'pickuplon' : tf.compat.v1.placeholder(tf.float32, [None]),
'pickuplat' : tf.compat.v1.placeholder(tf.float32, [None]),
'dropofflat' : tf.compat.v1.placeholder(tf.float32, [None]),
'dropofflon' : tf.compat.v1.placeholder(tf.float32, [None]),
'passengers' : tf.compat.v1.placeholder(tf.float32, [None]),
}
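# Each placeholder has shape [None]; expand_dims(..., -1) reshapes it to
# [None, 1] so the numeric feature columns receive one value per row.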
features = {
key: tf.expand_dims(tensor, -1)
for key, tensor in feature_placeholders.items()
}
return tf.estimator.export.ServingInputReceiver(features, feature_placeholders)
def train_and_evaluate(output_dir, num_train_steps):
estimator = tf.estimator.LinearRegressor(
model_dir = output_dir,
feature_columns = feature_cols)
train_spec=tf.estimator.TrainSpec(
input_fn = read_dataset('./taxi-train.csv', mode = tf.estimator.ModeKeys.TRAIN),
max_steps = num_train_steps)
exporter = tf.estimator.LatestExporter('exporter', serving_input_fn)
eval_spec=tf.estimator.EvalSpec(
input_fn = read_dataset('./taxi-valid.csv', mode = tf.estimator.ModeKeys.EVAL),
steps = None,
start_delay_secs = 1, # start evaluating after N seconds
throttle_secs = 10, # evaluate every N seconds
exporters = exporter)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
# Run training
OUTDIR = 'taxi_trained'
shutil.rmtree(OUTDIR, ignore_errors = True) # start fresh each time
train_and_evaluate(OUTDIR, num_train_steps = 5000)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fix the Contents sheet to correctly reflect the Worksheet names
Step2: Tidy up Data
Step3: One down, 31 to go...
Step4: Those '\n (Quarter 4 2021)' entries are unnecessary, so for this table, let's clear them
Step5: Table 2a
Step6: Table 2x
Step7: 6 down, 26 to go.
Step8: Table 4
Step9: Of note: a new offset for the header row at index 3 instead of index 1, due to lots of fluff at the start that is probably not going to be consistent between reports, so that will almost certainly mess up my day in a few months.
Step11: That's awkward enough to get its own function...
Step12: Table 5
Step13: For some reason, Mid-Ulster has a 'Standardised HPI' which throws off the above trick, so we gotta make it ugly...
Step15: We could turn this into a proper multiindex but it would mean pushing the Period/Year/Quarter columns into keys which would be inconsistent behaviour with the rest of the 'cleaned' dataset, so that can be a downstream problem; at least we've got the relevant metrics consistent!
Step16: Table 5a
Step18: df.iloc[1,2]=c
Step19: Table 6
Step21: Table 7
Step22: Table 8
Step23: Table 9
Step24: Table 9x
Step25: Table 10x
Step26: And We're Done!
| <ASSISTANT_TASK:>
Python Code:
from bs4 import BeautifulSoup
import pandas as pd
import requests
# Pull the latest pages of https://www.finance-ni.gov.uk/publications/ni-house-price-index-statistical-reports and extract links
base_url= 'https://www.finance-ni.gov.uk/publications/ni-house-price-index-statistical-reports'
base_content = requests.get(base_url).content
base_soup = BeautifulSoup(base_content)
for a in base_soup.find_all('a'):
if a.attrs.get('href','').endswith('xlsx'):
source_name, source_url = a.contents[1],a.attrs['href']
source_df = pd.read_excel(source_url, sheet_name = None) # Load all worksheets in
source_df.keys()
source_df['Contents']
new_header = source_df['Contents'].iloc[0]
source_df['Contents'] = source_df['Contents'][1:]
source_df['Contents'].columns = new_header
source_df['Contents'].columns = [*new_header[:-1],'Title']
[t for t in source_df['Contents']['Title'].values if t.startswith('Table')]
# Replace 'Figure' with 'Fig' in 'Worksheet Name'
with pd.option_context('mode.chained_assignment',None):
source_df['Contents']['Worksheet Name'] = source_df['Contents']['Worksheet Name'].str.replace('Figure','Fig')
source_df['Table 1']
def basic_cleanup(df:pd.DataFrame, offset=1)->pd.DataFrame:
df = df.copy()
# Re-header from row 1 (which was row 3 in excel)
new_header = df.iloc[offset]
df = df.iloc[offset+1:]
df.columns = new_header
# remove 'NaN' trailing columns
df = df[df.columns[pd.notna(df.columns)]]
# 'NI' is a usually hidden column that appears to be a checksum;
#if it's all there and all 100, remove it, otherwise, complain.
# (Note, need to change this 'if' logic to just 'if there's a
# column with all 100's, but cross that bridge later)
if 'NI' in df:
assert df['NI'].all() and df['NI'].mean() == 100, "Not all values in df['NI'] == 100"
df = df.drop('NI', axis=1)
# Strip rows below the first all-nan row, if there is one
# (Otherwise this truncates the tables as there is no
# idxmax in the table of all 'false's)
if any(df.isna().all(axis=1)):
idx_first_bad_row = df.isna().all(axis=1).idxmax()
df = df.loc[:idx_first_bad_row-1]
# By Inspection, other tables use 'Sale Year' and 'Sale Quarter'
if set(df.keys()).issuperset({'Sale Year','Sale Quarter'}):
df = df.rename(columns = {
'Sale Year':'Year',
'Sale Quarter': 'Quarter'
})
# For 'Year','Quarter' indexed pages, there is an implied Year
# in Q2/4, so fill it downwards
if set(df.keys()).issuperset({'Year','Quarter'}):
df['Year'] = df['Year'].astype(float).fillna(method='ffill').astype(int)
# In Pandas we can represent Y/Q combinations as proper datetimes
#https://stackoverflow.com/questions/53898482/clean-way-to-convert-quarterly-periods-to-datetime-in-pandas
df.insert(loc=0,
column='Period',
value=pd.PeriodIndex(df.apply(lambda r:f'{r.Year}-{r.Quarter}', axis=1), freq='Q')
)
# reset index, try to fix dtypes, etc, (this should be the last
# operation before returning!
df = df.reset_index(drop=True).infer_objects()
return df
df = basic_cleanup(source_df['Table 1'])
df
dest_df = {
'Table 1': basic_cleanup(source_df['Table 1'])
}
len([k for k in source_df.keys() if k.startswith('Table')])
df = basic_cleanup(source_df['Table 2'])
df
df.columns = [c.split('\n')[0] for c in df.columns]
df
dest_df['Table 2'] = df
df = basic_cleanup(source_df['Table 2a'])
df
dest_df['Table 2']['Property Type']
import re
table2s = re.compile('Table 2[a-z]')
assert table2s.match('Table 2') is None, 'Table 2 is matching itself!'
assert table2s.match('Table 20') is None, 'Table 2 is greedy!'
assert table2s.match('Table 2z') is not None, 'Table 2 is matching incorrectly!'
table2s = re.compile('Table 2[a-z]')
for table in source_df:
if table2s.match(table):
dest_df[table] = basic_cleanup(source_df[table])
len(dest_df), len([k for k in source_df.keys() if k.startswith('Table') and k not in dest_df])
df = basic_cleanup(source_df['Table 3'])
df.columns = [c.split('\n')[0] for c in df.columns] # Stolen from Table 2 Treatment
df
dest_df['Table 3'] = df
df = basic_cleanup(source_df['Table 3a'])
df
table3s = re.compile('Table 3[a-z]')
for table in source_df:
if table3s.match(table):
dest_df[table] = basic_cleanup(source_df[table])
len(dest_df), len([k for k in source_df.keys() if k.startswith('Table') and k not in dest_df])
df = source_df['Table 4']
df
df.iloc[:,1]=df.iloc[:,1].str.replace('Quarter ([1-4])',r'Q\1', regex=True)
df
df=df[~df.iloc[:,1].str.contains('Total').fillna(False)]
# Lose the year new-lines (needs astype because non str lines are
# correctly inferred to be ints, so .str methods nan-out
with pd.option_context('mode.chained_assignment',None):
df.iloc[:,0]=df.iloc[:,0].astype(str).str.replace('\n','')
df
basic_cleanup(df, offset=3)
def cleanup_table_4(df):
    """
    Table 4: Number of Verified Residential Property Sales
    * Regex 'Quarter X' to 'QX' in future 'Sales Quarter' column
    * Drop Year Total rows
    * Clear any Newlines from the future 'Sales Year' column
    * call `basic_cleanup` with offset=3
    """
df.iloc[:,1]=df.iloc[:,1].str.replace('Quarter ([1-4])',r'Q\1', regex=True)
df=df[~df.iloc[:,1].str.contains('Total').fillna(False)]
# Lose the year new-lines (needs astype because non str lines are
# correctly inferred to be ints, so .str methods nan-out
with pd.option_context('mode.chained_assignment',None):
df.iloc[:,0]=df.iloc[:,0].astype(str).str.replace('\n','')
return basic_cleanup(df, offset=3)
cleanup_table_4(source_df['Table 4'].copy())
dest_df['Table 4'] = cleanup_table_4(source_df['Table 4'])
len(dest_df), len([k for k in source_df.keys() if k.startswith('Table') and k not in dest_df])
df = basic_cleanup(source_df['Table 5'])
df
# Two inner-columns per LGD
lgds = df.columns[3:].str.replace(' HPI','').str.replace(' Standardised Price','').unique()
lgds
lgds = df.columns[3:].str.replace(' Standardised HPI',' HPI')\
.str.replace(' HPI','')\
.str.replace(' Standardised Price','').unique()
lgds
df.columns = [*df.columns[:3], *pd.MultiIndex.from_product([lgds,['Index','Price']], names=['LGD','Metric'])]
df
def cleanup_table_5(df):
    """
    Table 5: Standardised House Price & Index for each Local Government District Northern Ireland
    """
# Basic Cleanup first
df = basic_cleanup(df)
# Build multi-index of LGD / Metric [Index,Price]
# Two inner-columns per LGD
lgds = df.columns[3:].str.replace(' Standardised HPI',' HPI')\
.str.replace(' HPI','')\
.str.replace(' Standardised Price','')\
.unique()
df.columns = [*df.columns[:3], *pd.MultiIndex.from_product([lgds,['Index','Price']], names=['LGD','Metric'])]
return df
cleanup_table_5(source_df['Table 5'])
dest_df['Table 5']=cleanup_table_5(source_df['Table 5'])
len(dest_df), len([k for k in source_df.keys() if k.startswith('Table') and k not in dest_df])
df = source_df['Table 5a'].copy()
df
dates = df.iloc[:,0].str.extract('(Q[1-4]) ([0-9]{4})').rename(columns={0:'Quarter',1:'Year'})
for c in ['Quarter','Year']:# insert the dates in order, so they come out in reverse in the insert
df.insert(1,c,dates[c])
df.iloc[2,1]=c # Need to have the right colname for when `basic_cleanup` is called.
df.iloc[2,1]=c
df
df=df[~df.iloc[:,0].str.contains('Total').fillna(False)]
basic_cleanup(df,offset=2)
def cleanup_table_5a(df):
    """
    Table 5a: Number of Verified Residential Property Sales by Local Government District
    * Parse the 'Sale Year/Quarter' to two separate cols
    * Insert future-headers for Quarter and Year cols
    * Remove rows with 'total' in the first column
    * Disregard the 'Sale Year/Quarter' column
    * perform `basic_cleanup` with offset=2
    """
# Safety first
df=df.copy()
# Extract 'Quarter' and 'Year' columns from the future 'Sale Year/Quarter' column
dates = df.iloc[:,0].str.extract('(Q[1-4]) ([0-9]{4})').rename(columns={0:'Quarter',1:'Year'})
for c in ['Quarter','Year']:# insert the dates in order, so they come out in reverse in the insert
df.insert(1,c,dates[c])
df.iloc[2,1]=c # Need to have the right colname for when `basic_cleanup` is called.
# Remove 'total' rows from the future 'Sale Year/Quarter' column
df=df[~df.iloc[:,0].str.contains('Total').fillna(False)]
# Remove the 'Sale Year/Quarter' column all together
df = df.iloc[:,1:]
# Standard cleanup
df = basic_cleanup(df, offset=2)
return df
cleanup_table_5a(source_df['Table 5a'])
dest_df['Table 5a']=cleanup_table_5a(source_df['Table 5a'])
len(dest_df), len([k for k in source_df.keys() if k.startswith('Table') and k not in dest_df])
df = basic_cleanup(source_df['Table 6'])
df
dest_df['Table 6']=basic_cleanup(source_df['Table 6'])
len(dest_df), len([k for k in source_df.keys() if k.startswith('Table') and k not in dest_df])
df = source_df['Table 7'].copy()
df.head()
df.iloc[1,0] = 'Year'
df.iloc[1,1] = 'Quarter'
df.head()
basic_cleanup(df).head()
def cleanup_table_7(df):
    """
    Table 7: Standardised House Price & Index for Rural Areas of Northern Ireland by drive times
    * Insert Year/Quarter future-headers
    * Clean normally
    """
# TODO THIS MIGHT BE VALID FOR MULTIINDEXING ON DRIVETIME/[Index/Price]
df = df.copy()
df.iloc[1,0] = 'Year'
df.iloc[1,1] = 'Quarter'
df = basic_cleanup(df)
return df
cleanup_table_7(source_df['Table 7'])
dest_df['Table 7'] = cleanup_table_7(source_df['Table 7'])
len(dest_df), len([k for k in source_df.keys() if k.startswith('Table') and k not in dest_df])
cleanup_table_5a(source_df['Table 8']).head()
cleanup_table_8 = cleanup_table_5a
dest_df['Table 8'] = cleanup_table_8(source_df['Table 8'])
len(dest_df), len([k for k in source_df.keys() if k.startswith('Table') and k not in dest_df])
basic_cleanup(source_df['Table 9'])
dest_df['Table 9'] = basic_cleanup(source_df['Table 9'])
len(dest_df), len([k for k in source_df.keys() if k.startswith('Table') and k not in dest_df])
cleanup_table_7(source_df['Table 9a'])
cleanup_table_9x = cleanup_table_7
table9s = re.compile('Table 9[a-z]')
for table in source_df:
if table9s.match(table):
dest_df[table] = cleanup_table_9x(source_df[table])
len(dest_df), len([k for k in source_df.keys() if k.startswith('Table') and k not in dest_df])
source_df['Table 10a']
cleanup_table_5a(source_df['Table 10a'])
cleanup_table_10x = cleanup_table_5a
table10s = re.compile('Table 10[a-z]')
for table in source_df:
if table10s.match(table):
dest_df[table] = cleanup_table_10x(source_df[table])
len(dest_df), len([k for k in source_df.keys() if k.startswith('Table') and k not in dest_df])
dest_df['Contents'] = source_df['Contents'][source_df['Contents']['Worksheet Name'].str.startswith('Table')]
with pd.ExcelWriter('NI Housing Price Index.xlsx') as writer:
# Thankfully these are semantically sortable otherwise this would be a _massive_ pain
for k,df in sorted(dest_df.items()):
df.to_excel(writer, sheet_name=k)
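# Hedged sanity check (assumes the workbook above was just written to the working directory):
# read it back and confirm that every cleaned table ended up as its own sheet.
check = pd.read_excel('NI Housing Price Index.xlsx', sheet_name=None)
assert set(check.keys()) == set(dest_df.keys()), "sheet names do not match the cleaned tables"
len(check)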
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The test is conducted in a fully confined two-aquifer system. Both the pumping well and the observation piezometer are screened in the second aquifer.
Step2: Load data of two observation wells
Step3: Create single layer model (overlying aquifer and aquitard are excluded)
Step4: To improve the model's performance, rc and res are added
Step5: Create three-layer conceptual model
Step6: Try adding res & rc
Step7: Calibrate with fitted parameters for the upper aquifer
Step8: The optimized value of res is very close to its lower bound, so res has little effect on the performance of the model; res is therefore removed in this calibration.
Step9: Summary of values simulated by MLU
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from ttim import *
Q = 82.08 #constant discharge in m^3/d
zt0 = -46 #top boundary of upper aquifer in m
zb0 = -49 #bottom boundary of upper aquifer in m
zt1 = -52 #top boundary of lower aquifer in m
zb1 = -55 #bottom boundary of lower aquifer in m
rw = 0.05 #well radius in m
data1 = np.loadtxt('data/schroth_obs1.txt', skiprows = 1)
t1 = data1[:, 0]
h1 = data1[:, 1]
r1 = 0
data2 = np.loadtxt('data/schroth_obs2.txt', skiprows = 1)
t2 = data2[:, 0]
h2 = data2[:, 1]
r2 = 46 #distance between observation well2 and pumping well
ml_0 = ModelMaq(z=[zt1, zb1], kaq=10, Saq=1e-4, tmin=1e-4, tmax=1)
w_0 = Well(ml_0, xw=0, yw=0, rw=rw, tsandQ = [(0, Q), (1e+08, 0)])
ml_0.solve()
ca_0 = Calibrate(ml_0)
ca_0.set_parameter(name='kaq0', initial=10)
ca_0.set_parameter(name='Saq0', initial=1e-4)
ca_0.series(name='obs1', x=r1, y=0, t=t1, h=h1, layer=0)
ca_0.series(name='obs2', x=r2, y=0, t=t2, h=h2, layer=0)
ca_0.fit(report=True)
display(ca_0.parameters)
print('RMSE:', ca_0.rmse())
hm1_0 = ml_0.head(r1, 0, t1)
hm2_0 = ml_0.head(r2, 0, t2)
plt.figure(figsize = (8, 5))
plt.semilogx(t1, h1, '.', label='obs1')
plt.semilogx(t2, h2, '.', label='obs2')
plt.semilogx(t1, hm1_0[-1], label='ttim1')
plt.semilogx(t2, hm2_0[-1], label='ttim2')
plt.xlabel('time(d)')
plt.ylabel('head(m)')
plt.legend()
plt.savefig('C:/Users/DELL/Python Notebook/MT BE/Fig/schroth_one1.eps');
ml_1 = ModelMaq(z=[zt1, zb1], kaq=10, Saq=1e-4, tmin=1e-4, tmax=1)
w_1 = Well(ml_1, xw=0, yw=0, rw=rw, rc=0, res=5, tsandQ = [(0, Q), (1e+08, 0)])
ml_1.solve()
ca_1 = Calibrate(ml_1)
ca_1.set_parameter(name='kaq0', initial=10)
ca_1.set_parameter(name='Saq0', initial=1e-4)
ca_1.set_parameter_by_reference(name='rc', parameter=w_1.rc[:], initial=0.2)
ca_1.set_parameter_by_reference(name='res', parameter=w_1.res[:], initial=3)
ca_1.series(name='obs1', x=r1, y=0, t=t1, h=h1, layer=0)
ca_1.series(name='obs2', x=r2, y=0, t=t2, h=h2, layer=0)
ca_1.fit(report=True)
display(ca_1.parameters)
print('RMSE:', ca_1.rmse())
hm1_1 = ml_1.head(r1, 0, t1)
hm2_1 = ml_1.head(r2, 0, t2)
plt.figure(figsize = (8, 5))
plt.semilogx(t1, h1, '.', label='obs1')
plt.semilogx(t2, h2, '.', label='obs2')
plt.semilogx(t1, hm1_1[-1], label='ttim1')
plt.semilogx(t2, hm2_1[-1], label='ttim2')
plt.xlabel('time(d)')
plt.ylabel('head(m)')
plt.legend()
plt.savefig('C:/Users/DELL/Python Notebook/MT BE/Fig/schroth_one2.eps');
ml_2 = ModelMaq(kaq=[17.28, 2], z=[zt0, zb0, zt1, zb1], c=200, Saq=[1.2e-4, 1e-5],\
Sll=3e-5, topboundary='conf', tmin=1e-4, tmax=0.5)
w_2 = Well(ml_2, xw=0, yw=0, rw=rw, tsandQ = [(0, Q), (1e+08, 0)], layers=1)
ml_2.solve()
ca_2 = Calibrate(ml_2)
ca_2.set_parameter(name= 'kaq0', initial=20, pmin=0)
ca_2.set_parameter(name='kaq1', initial=1, pmin=0)
ca_2.set_parameter(name='Saq0', initial=1e-4, pmin=0)
ca_2.set_parameter(name='Saq1', initial=1e-5, pmin=0)
ca_2.set_parameter_by_reference(name='Sll', parameter=ml_2.aq.Sll[:],\
initial=1e-4, pmin=0)
ca_2.set_parameter(name='c1', initial=100, pmin=0)
ca_2.series(name='obs1', x=r1, y=0, t=t1, h=h1, layer=1)
ca_2.series(name='obs2', x=r2, y=0, t=t2, h=h2, layer=1)
ca_2.fit(report=True)
display(ca_2.parameters)
print('RMSE:',ca_2.rmse())
hm1_2 = ml_2.head(r1, 0, t1)
hm2_2 = ml_2.head(r2, 0, t2)
plt.figure(figsize = (8, 5))
plt.semilogx(t1, h1, '.', label='obs1')
plt.semilogx(t2, h2, '.', label='obs2')
plt.semilogx(t1, hm1_2[-1], label='ttim1')
plt.semilogx(t2, hm2_2[-1], label='ttim2')
plt.xlabel('time(d)')
plt.ylabel('head(m)')
plt.legend()
plt.savefig('C:/Users/DELL/Python Notebook/MT BE/Fig/schroth_three1.eps');
ml_3 = ModelMaq(kaq=[19, 2], z=[zt0, zb0, zt1, zb1], c=200, Saq=[4e-4, 1e-5],\
Sll=1e-4, topboundary='conf', tmin=1e-4, tmax=0.5)
w_3 = Well(ml_3, xw=0, yw=0, rw=rw, rc=None, res=0, tsandQ = [(0, Q), (1e+08, 0)], \
layers=1)
ml_3.solve()
ca_3 = Calibrate(ml_3)
ca_3.set_parameter(name= 'kaq0', initial=20, pmin=0)
ca_3.set_parameter(name='kaq1', initial=1, pmin=0)
ca_3.set_parameter(name='Saq0', initial=1e-4, pmin=0)
ca_3.set_parameter(name='Saq1', initial=1e-5, pmin=0)
ca_3.set_parameter_by_reference(name='Sll', parameter=ml_3.aq.Sll[:],\
initial=1e-4, pmin=0)
ca_3.set_parameter(name='c1', initial=100, pmin=0)
ca_3.set_parameter_by_reference(name='res', parameter=w_3.res[:], initial=0, pmin=0)
ca_3.set_parameter_by_reference(name='rc', parameter=w_3.rc[:], initial=0.2, pmin=0)
ca_3.series(name='obs1', x=r1, y=0, t=t1, h=h1, layer=1)
ca_3.series(name='obs2', x=r2, y=0, t=t2, h=h2, layer=1)
ca_3.fit(report=True)
display(ca_3.parameters)
print('RMSE:', ca_3.rmse())
hm1_3 = ml_3.head(r1, 0, t1)
hm2_3 = ml_3.head(r2, 0, t2)
plt.figure(figsize = (8, 5))
plt.semilogx(t1, h1, '.', label='obs1')
plt.semilogx(t2, h2, '.', label='obs2')
plt.semilogx(t1, hm1_3[-1], label='ttim1')
plt.semilogx(t2, hm2_3[-1], label='ttim2')
plt.xlabel('time(d)')
plt.ylabel('head(m)')
plt.legend()
plt.savefig('C:/Users/DELL/Python Notebook/MT BE/Fig/schroth_three2.eps');
ml_4 = ModelMaq(kaq=[17.28, 2], z=[zt0, zb0, zt1, zb1], c=200, Saq=[1.2e-4, 1e-5],\
Sll=3e-5, topboundary='conf', tmin=1e-4, tmax=0.5)
w_4 = Well(ml_4, xw=0, yw=0, rw=rw, rc=None, res=0, tsandQ = [(0, Q), (1e+08, 0)], \
layers=1)
ml_4.solve()
ca_4 = Calibrate(ml_4)
ca_4.set_parameter(name='kaq1', initial=1, pmin=0)
ca_4.set_parameter(name='Saq1', initial=1e-5, pmin=0)
ca_4.set_parameter(name='c1', initial=100, pmin=0)
ca_4.set_parameter_by_reference(name='rc', parameter=w_4.rc[:], initial=0.2, pmin=0)
ca_4.series(name='obs1', x=r1, y=0, t=t1, h=h1, layer=1)
ca_4.series(name='obs2', x=r2, y=0, t=t2, h=h2, layer=1)
ca_4.fit(report=True)
display(ca_4.parameters)
print('RMSE:', ca_4.rmse())
hm1_4 = ml_4.head(r1, 0, t1)
hm2_4 = ml_4.head(r2, 0, t2)
plt.figure(figsize = (8, 5))
plt.semilogx(t1, h1, '.', label='obs1')
plt.semilogx(t2, h2, '.', label='obs2')
plt.semilogx(t1, hm1_4[-1], label='ttim1')
plt.semilogx(t2, hm2_4[-1], label='ttim2')
plt.xlabel('time(d)')
plt.ylabel('head(m)')
plt.legend()
plt.savefig('C:/Users/DELL/Python Notebook/MT BE/Fig/schroth_three3.eps');
t = pd.DataFrame(columns=['k0[m/d]','k1[m/d]','Ss0[1/m]','Ss1[1/m]','Sll[1/m]','c[d]',\
'res', 'rc'], \
index=['MLU', 'MLU-fixed k1','ttim','ttim-rc','ttim-fixed upper'])
t.loc['ttim-rc'] = ca_3.parameters['optimal'].values
t.iloc[2,0:6] = ca_2.parameters['optimal'].values
t.iloc[4,5] = ca_4.parameters['optimal'].values[2]
t.iloc[4,7] = ca_4.parameters['optimal'].values[3]
t.iloc[4,0] = 17.28
t.iloc[4,1] = ca_4.parameters['optimal'].values[0]
t.iloc[4,2] = 1.2e-4
t.iloc[4,3] = ca_4.parameters['optimal'].values[1]
t.iloc[4,4] = 3e-5
t.iloc[0, 0:6] = [17.424, 6.027e-05, 1.747, 6.473e-06, 3.997e-05, 216]
t.iloc[1, 0:6] = [2.020e-04, 9.110e-04, 3.456, 6.214e-05, 7.286e-05, 453.5]
t['RMSE'] = [0.023452, 0.162596, ca_2.rmse(), ca_3.rmse(), ca_4.rmse()]
t
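# Hedged extra (illustrative only): compare the fits visually by plotting the RMSE column
# of the summary table built above.
ax = t['RMSE'].astype(float).plot(kind='bar')
ax.set_ylabel('RMSE (m)')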
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set Caffe to CPU mode, load the net in the test phase for inference, and configure input preprocessing.
Step2: Let's start with a simple classification. We'll set a batch of 50 to demonstrate batch processing, even though we'll only be classifying one image. (Note that the batch size can also be changed on-the-fly.)
Step3: Feed in the image (with some preprocessing) and classify with a forward pass.
Step4: What did the input look like?
Step5: Adorable, but was our classification correct?
Step6: Indeed! But how long did it take?
Step7: That's a while, even for a batch size of 50 images. Let's switch to GPU mode.
Step8: Much better. Now let's look at the net in more detail.
Step9: The parameters and their shapes. The parameters are net.params['name'][0] while biases are net.params['name'][1].
Step10: Helper functions for visualization
Step11: The input image
Step12: The first layer output, conv1 (rectified responses of the filters above, first 36 only)
Step13: The second layer filters, conv2
Step14: The second layer output, conv2 (rectified, only the first 36 of 256 channels)
Step15: The third layer output, conv3 (rectified, all 384 channels)
Step16: The fourth layer output, conv4 (rectified, all 384 channels)
Step17: The fifth layer output, conv5 (rectified, all 256 channels)
Step18: The fifth layer after pooling, pool5
Step19: The first fully connected layer, fc6 (rectified)
Step20: The second fully connected layer, fc7 (rectified)
Step21: The final probability output, prob
Step22: Let's see the top 5 predicted labels.
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# Make sure that caffe is on the python path:
caffe_root = '/home/ubuntu/digits/caffe/' # this file is expected to be in {caffe_root}/examples
import sys
sys.path.insert(0, caffe_root + 'python')
import caffe
plt.rcParams['figure.figsize'] = (10, 10)
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
import os
if not os.path.isfile(caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel'):
print("Downloading pre-trained CaffeNet model...")
# !../scripts/download_model_binary.py ../models/bvlc_reference_caffenet
caffe.set_mode_cpu()
net = caffe.Net(caffe_root + 'models/bvlc_reference_caffenet/deploy.prototxt',
caffe_root + 'models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel',
caffe.TEST)
# input preprocessing: 'data' is the name of the input blob == net.inputs[0]
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2,0,1))
transformer.set_mean('data', np.load(caffe_root + 'python/caffe/imagenet/ilsvrc_2012_mean.npy').mean(1).mean(1)) # mean pixel
transformer.set_raw_scale('data', 255) # the reference model operates on images in [0,255] range instead of [0,1]
transformer.set_channel_swap('data', (2,1,0)) # the reference model has channels in BGR order instead of RGB
mean = np.load(caffe_root + 'python/caffe/imagenet/ilsvrc_2012_mean.npy')
print mean.shape
print mean.mean(1).mean(1)
# set net to batch size of 50
net.blobs['data'].reshape(50,3,227,227)
net.blobs['data'].data[...] = transformer.preprocess('data', caffe.io.load_image(caffe_root + 'examples/images/fish-bike.jpg'))
out = net.forward()
print("Predicted class is #{}.".format(out['prob'].argmax()))
plt.imshow(transformer.deprocess('data', net.blobs['data'].data[0]))
# load labels
imagenet_labels_filename = caffe_root + 'data/ilsvrc12/synset_words.txt'
try:
labels = np.loadtxt(imagenet_labels_filename, str, delimiter='\t')
except:
!../data/ilsvrc12/get_ilsvrc_aux.sh
labels = np.loadtxt(imagenet_labels_filename, str, delimiter='\t')
# sort top k predictions from softmax output
top_k = net.blobs['prob'].data[0].flatten().argsort()[-1:-6:-1]
print labels[top_k]
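# Hedged extra: pair each of the top-5 labels with its softmax probability
# (reuses the 'net', 'labels' and 'top_k' objects computed above).
probs = net.blobs['prob'].data[0].flatten()
for idx in top_k:
    print probs[idx], labels[idx]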
# CPU mode
net.forward() # call once for allocation
%timeit net.forward()
# GPU mode
caffe.set_device(0)
caffe.set_mode_gpu()
net.forward() # call once for allocation
%timeit net.forward()
[(k, v.data.shape) for k, v in net.blobs.items()]
[(k, v[0].data.shape) for k, v in net.params.items()]
# take an array of shape (n, height, width) or (n, height, width, channels)
# and visualize each (height, width) thing in a grid of size approx. sqrt(n) by sqrt(n)
def vis_square(data, padsize=1, padval=0):
data -= data.min()
data /= data.max()
# force the number of filters to be square
n = int(np.ceil(np.sqrt(data.shape[0])))
padding = ((0, n ** 2 - data.shape[0]), (0, padsize), (0, padsize)) + ((0, 0),) * (data.ndim - 3)
data = np.pad(data, padding, mode='constant', constant_values=(padval, padval))
# tile the filters into an image
data = data.reshape((n, n) + data.shape[1:]).transpose((0, 2, 1, 3) + tuple(range(4, data.ndim + 1)))
data = data.reshape((n * data.shape[1], n * data.shape[3]) + data.shape[4:])
print data.shape
plt.imshow(data)
# the parameters are a list of [weights, biases]
filters = net.params['conv1'][0].data
vis_square(filters.transpose(0, 2, 3, 1))
feat = net.blobs['conv1'].data[0, :36]
vis_square(feat, padval=1)
filters = net.params['conv2'][0].data
vis_square(filters[:48].reshape(48**2, 5, 5))
feat = net.blobs['conv2'].data[0, :36]
vis_square(feat, padval=1)
feat = net.blobs['conv3'].data[0]
vis_square(feat, padval=0.5)
feat = net.blobs['conv4'].data[0]
vis_square(feat, padval=0.5)
feat = net.blobs['conv5'].data[0]
vis_square(feat, padval=0.5)
feat = net.blobs['pool5'].data[0]
vis_square(feat, padval=1)
feat = net.blobs['fc6'].data[0]
plt.subplot(2, 1, 1)
plt.plot(feat.flat)
plt.subplot(2, 1, 2)
_ = plt.hist(feat.flat[feat.flat > 0], bins=100)
feat = net.blobs['fc7'].data[0]
plt.subplot(2, 1, 1)
plt.plot(feat.flat)
plt.subplot(2, 1, 2)
_ = plt.hist(feat.flat[feat.flat > 0], bins=100)
feat = net.blobs['prob'].data[0]
plt.plot(feat.flat)
# load labels
imagenet_labels_filename = caffe_root + 'data/ilsvrc12/synset_words.txt'
try:
labels = np.loadtxt(imagenet_labels_filename, str, delimiter='\t')
except:
!../data/ilsvrc12/get_ilsvrc_aux.sh
labels = np.loadtxt(imagenet_labels_filename, str, delimiter='\t')
# sort top k predictions from softmax output
top_k = net.blobs['prob'].data[0].flatten().argsort()[-1:-6:-1]
print labels[top_k]
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preparing the data
Step2: Counting word frequency
Step3: Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.
Step4: What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
Step5: The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.
Step6: Text to vector function
Step7: If you do this right, the following code should return
Step8: Now, run through our entire review data set and convert each review to a word vector.
Step9: Train, Validation, Test sets
Step10: Building the network
Step11: Initializing the model
Step12: Training the network
Step13: Testing
Step14: Try out your own text!
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
from collections import Counter
total_counts = Counter()
for review in reviews[0]:
    for word in review.split(" "):
        total_counts[word] += 1
print("Total words in data set: ", len(total_counts))
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
print(vocab[-1], ': ', total_counts[vocab[-1]])
word2idx = {word: i for i, word in enumerate(vocab)}
def text_to_vector(text):
word_vector = np.zeros(len(vocab), dtype=np.int_)
for word in text.split(' '):
idx = word2idx.get(word, None)
if idx is None:
continue
else:
word_vector[idx] += 1
return np.array(word_vector)
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
net = tflearn.input_data([None, 10000]) # Input
net = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
model = build_model()
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=10)
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
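# A couple more hedged sanity checks on sentences of our own
# (any strings work here; these two are purely illustrative).
test_sentence("The acting was wooden and the plot made no sense.")
test_sentence("A heart-warming story with a brilliant cast and soundtrack.")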
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Compile and build.
Step2: We can wire up the GPIO pins to a logic analyzer to verify that our circuit produces the correct sine waveform.
Step3: TODO
| <ASSISTANT_TASK:>
Python Code:
import math
import numpy as np
def sine(x):
return np.sin(2 * math.pi * x)
x = np.linspace(0., 1., num=256, endpoint=False)
import magma as m
m.set_mantle_target('ice40')
import mantle
def DefineDDS(n, has_ce=False):
class _DDS(m.Circuit):
name = f'DDS{n}'
IO = ['I', m.In(m.UInt(n)), "O", m.Out(m.UInt(n))] + m.ClockInterface(has_ce=has_ce)
@classmethod
def definition(io):
reg = mantle.Register(n, has_ce=has_ce)
m.wire(reg(m.uint(reg.O) + io.I, CE=io.CE), io.O)
return _DDS
def DDS(n, has_ce=False):
return DefineDDS(n, has_ce)()
from loam.boards.icestick import IceStick
icestick = IceStick()
icestick.Clock.on()
for i in range(8):
icestick.J1[i].input().on()
icestick.J3[i].output().on()
main = icestick.main()
dds = DDS(16, True)
wavetable = 128 + 127 * sine(x)
wavetable = [int(x) for x in wavetable]
rom = mantle.Memory(height=256, width=16, rom=list(wavetable), readonly=True)
phase = m.concat(main.J1, m.bits(0,8))
# You can also hardcode a constant as the phase
# phase = m.concat(m.bits(32, 8), m.bits(0,8))
# Use counter COUT hooked up to CE of registers to slow everything down so we can see it on the LEDs
c = mantle.Counter(10)
addr = dds( phase, CE=c.COUT)
O = rom( addr[8:] )
m.wire( c.COUT, rom.RE )
m.wire( O[0:8], main.J3 )
m.EndCircuit()
m.compile('build/dds', main)
%%bash
cd build
cat sin.pcf
yosys -q -p 'synth_ice40 -top main -blif dds.blif' dds.v
arachne-pnr -q -d 1k -o dds.txt -p dds.pcf dds.blif
icepack dds.txt dds.bin
iceprog dds.bin
import csv
import magma as m
with open("data/dds-capture.csv") as sine_capture_csv:
csv_reader = csv.reader(sine_capture_csv)
next(csv_reader, None) # skip the headers
rows = [row for row in csv_reader]
timestamps = [float(row[0]) for row in rows]
values = [m.bitutils.seq2int(tuple(int(x) for x in row[1:])) for row in rows]
%matplotlib inline
import matplotlib.pyplot as plt
plt.plot(timestamps[:100], values[:100], "b.")
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: With NumPy arrays, all the same functionality you know and love from lists is still there.
Step2: These operations all work whether you're using Python lists or NumPy arrays.
Step3: To build the NumPy equivalent, you can basically just feed the Python list-matrix into the NumPy array method
Step4: The real difference, though, comes with actually indexing these elements. With Python lists, you can index individual elements only in this way
Step5: With NumPy arrays, you can use that same notation...or you can use comma-separated indices
Step6: It's not earth-shattering, but enough to warrant a heads-up.
Step7: Here's a great visual summary of slicing NumPy arrays, assuming you're starting from an array with shape (3, 3)
Step8: We know video is 3D because we can also access its ndim attribute.
Step9: Another example--to go straight to cutting-edge academic research--is 3D video microscope data of multiple tagged fluorescent markers. This would result in a five-axis NumPy object
Step10: We can also ask how many elements there are total, using the size attribute
Step11: These are extreme examples, but they're to illustrate how flexible NumPy arrays are.
Step12: Notice how the number "9", initially the third axis, steadily marches to the front as the axes before it are accessed.
Step13: how does Python know that you want to add the scalar value 10 to each element of the vector x? Because (in a word) broadcasting.
Step14: In this example, the scalar value 1 is broadcast to all the elements of zeros, converting the operation to element-wise addition.
Step15: But on some intuitive level, this hopefully makes sense
Step16: In this example, the shape of x is (3, 4). The shape of y is just 4. Their trailing axes are both 4, therefore the "smaller" array will be broadcast to fit the size of the larger array, and the operation (addition, in this case) is performed element-wise.
Step17: This is randomly generated data, yes, but it could easily be 7 data points in 4 dimensions. That is, we have 7 observations of variables with 4 descriptors. Perhaps it's
Step18: Just for your reference, here's the original data
Step19: Now, we can use our mask to access only the indices we want to set to 0.
Step20: voilà! Every negative number has been set to 0, and all the other values were left unchanged. Now we can continue with whatever analysis we may have had in mind.
Step21: Fancy Indexing
Step22: We have 8 rows and 4 columns, where each row is a 4-element vector of the same value repeated across the columns, and that value is the index of the row.
Step23: Ta-daaa! Pretty spiffy!
Step24: Ok, this will take a little explaining, bear with me
| <ASSISTANT_TASK:>
Python Code:
li = ["this", "is", "a", "list"]
print(li)
print(li[1:3]) # Print element 1 (inclusive) to 3 (exclusive)
print(li[2:]) # Print element 2 and everything after that
print(li[:-1]) # Print everything BEFORE element -1 (the last one)
import numpy as np
x = np.array([1, 2, 3, 4, 5])
print(x)
print(x[1:3])
print(x[2:])
print(x[:-1])
python_matrix = [ [1, 2, 3], [4, 5, 6], [7, 8, 9] ]
print(python_matrix)
numpy_matrix = np.array(python_matrix)
print(numpy_matrix)
print(python_matrix) # The full list-of-lists
print(python_matrix[0]) # The inner-list at the 0th position of the outer-list
print(python_matrix[0][0]) # The 0th element of the 0th inner-list
print(numpy_matrix)
print(numpy_matrix[0])
print(numpy_matrix[0, 0]) # Note the comma-separated format!
x = np.array([ [1, 2, 3], [4, 5, 6], [7, 8, 9] ])
print(x)
print(x[:, 1]) # Take ALL of axis 0, and one index of axis 1.
video = np.empty(shape = (1920, 1080, 5000))
print("Axis 0 length:", video.shape[0]) # How many rows?
print("Axis 1 length:", video.shape[1]) # How many columns?
print("Axis 2 length:", video.shape[2]) # How many frames?
print(video.ndim)
del video
tensor = np.empty(shape = (2, 640, 480, 360, 100))
print(tensor.shape)
# Axis 0: color channel--used to differentiate between fluorescent markers
# Axis 1: height--same as before
# Axis 2: width--same as before
# Axis 3: depth--capturing 3D depth at each time interval, like a 3D movie
# Axis 4: frame--same as before
print(tensor.size)
del tensor
example = np.empty(shape = (3, 5, 9))
print(example.shape)
sliced = example[0] # Indexed the first axis.
print(sliced.shape)
sliced_again = example[0, 0] # Indexed the first and second axes.
print(sliced_again.shape)
x = np.array([1, 2, 3, 4, 5])
x += 10
print(x)
zeros = np.zeros(shape = (3, 4))
print(zeros)
zeros += 1 # Just add 1.
print(zeros)
x = np.zeros(shape = (3, 3))
y = np.ones(4)
x + y
x = np.zeros(shape = (3, 4))
y = np.array([1, 2, 3, 4])
z = x + y
print(z)
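# A further hedged illustration of the broadcasting rule: a (3, 1) column
# and a (1, 4) row broadcast against each other to fill out a full (3, 4) table.
col = np.arange(3).reshape(3, 1)
row = np.arange(4).reshape(1, 4)
print(col + row)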
x = np.random.standard_normal(size = (7, 4))
print(x)
mask = x < 0
print(mask)
print(x)
x[mask] = 0
print(x)
mask = (x < 1) & (x > 0.5) # True for any value less than 1 but greater than 0.5
x[mask] = 99 # We're setting any value in this matrix < 1 but > 0.5 to 99
print(x)
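# Hedged alternative: np.where(condition, value_if_true, value_if_false) builds a new
# array instead of modifying x in place, which is handy when you want to keep the original.
print(np.where((x < 1) & (x > 0.5), 99, x))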
matrix = np.empty(shape = (8, 4))
for i in range(8):
matrix[i] = i # Broadcasting is happening here!
print(matrix)
indices = np.array([7, 0, 5, 2]) # Here's my "indexing" array--note the order of the numbers.
print(matrix[indices])
matrix = np.arange(32).reshape((8, 4))
print(matrix) # This 8x4 matrix has integer elements that increment by 1 column-wise, then row-wise.
indices = ( np.array([1, 7, 4]), np.array([3, 0, 1]) ) # This is a tuple of 2 NumPy arrays!
print(matrix[indices])
( np.array([1, 7, 4]), np.array([3, 0, 1]) )
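# Hedged spelled-out version of the same fancy index: the two arrays are paired up
# element-wise, so we get matrix[1, 3], matrix[7, 0] and matrix[4, 1].
print([matrix[i, j] for i, j in zip([1, 7, 4], [3, 0, 1])])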
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd
df = pd.DataFrame({'Sp': ['MM1', 'MM1', 'MM1', 'MM2', 'MM2', 'MM2', 'MM4', 'MM4', 'MM4'],
'Mt': ['S1', 'S1', 'S3', 'S3', 'S4', 'S4', 'S2', 'S2', 'S2'],
'Value': ['a', 'n', 'cb', 'mk', 'bg', 'dgd', 'rd', 'cb', 'uyi'],
'count': [3, 2, 5, 8, 10, 1, 2, 2, 7]})
def g(df):
return df[df.groupby(['Sp', 'Mt'])['count'].transform(min) == df['count']]
result = g(df.copy())
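# A hedged alternative that gives the same rows when each (Sp, Mt) group has a unique
# minimum count; note that idxmin keeps only one row per group, whereas the transform
# version above also keeps ties.
alternative = df.loc[df.groupby(['Sp', 'Mt'])['count'].idxmin()]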
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the Captured Streaming Data in Python
Step3: Load Data in SparkSQL
| <ASSISTANT_TASK:>
Python Code:
sc
sqlContext
import time
import simplejson as json
filename = '/home/anaconda/md0/data/2016_potus/stream/tweets.json'
langs = {}
start_time = time.time()
f_p = open(filename,'r')
for line in f_p:
tweet = json.loads(line)
if 'lang' in tweet:
if tweet['lang'] in langs:
langs[tweet['lang']] += 1
else:
langs[tweet['lang']] = 1
elapsed_time = time.time() - start_time
print "%02f seconds" % elapsed_time
# Pretty print langs as JSON
print "%s" % json.dumps(langs, indent=4)
import time
filename = '/home/anaconda/md0/data/2016_potus/stream/tweets.json'
start_time = time.time()
# Form a Spark dataframe and register a temp table
sdf = sqlContext.read.json(filename)
sdf.registerTempTable('tweets')
query = "select lang, count(*) as num from tweets group by lang order by num desc"
pdf = sqlContext.sql(query).toPandas()
elapsed_time = time.time() - start_time
print "%02f seconds" % elapsed_time
pdf
sdf.printSchema()
start_time = time.time()
query =
select
sq2.time_zone as time_zone,
sq2.mentions as mentions,
sq2.clinton_rank as clinton_rank,
sq2.trump_rank as trump_rank,
sq2.sanders_rank as sanders_rank
from (
select
sq.time_zone as time_zone,
sq.clinton + sq.trump + sq.sanders as mentions,
dense_rank() over (order by sq.clinton desc) as clinton_rank,
dense_rank() over (order by sq.trump desc) as trump_rank,
dense_rank() over (order by sq.sanders desc) as sanders_rank
from (
select
user.time_zone as time_zone,
sum(case
when lower(text) like '%clinton%' or lower(text) like '%hillary%'
then 1
else 0
end) as clinton,
sum(case
when lower(text) like '%trump%' or lower(text) like '%donald%'
then 1
else 0
end) as trump,
sum(case
when lower(text) like '%sanders%' or lower(text) like '%bernie%'
then 1
else 0
end) as sanders
from tweets
group by user.time_zone
) sq
) sq2
where
sq2.clinton_rank < 30
or sq2.trump_rank < 30
or sq2.sanders_rank < 30
order by sq2.mentions desc
pdf = sqlContext.sql(query).toPandas()
elapsed_time = time.time() - start_time
print "%02f seconds" % elapsed_time
pdf
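# Hedged follow-up: persist the time-zone ranking locally for later inspection
# (the file name is illustrative).
pdf.to_csv('timezone_candidate_ranks.csv', index=False)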
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 6.2 Updating dictionaries
Step2: An implication of this is that we can start with an empty dictionary and add keys as we go along.
Step3: 6.2.1 Dictionary exercise
Step4: 6.2.2 Using the .update method
Step5: 6.2.3 Creating dictionaries using kwargs
Step6: Note from the example above that when creating dictionaries, DO NOT enclose the keys within quotation marks. However, when accessing the value of a dictionary by its key, you MUST use quotation marks.
Step7: 7. If and boolean conditionals
Step8: Here is our first example using the if keyword.
Step9: The variable x has been assigned the value 300. When Python sees the statement if x == 300
Step10: The conditional checks whether the remainder of 13 when divided by 2 is 0. (Recall that's what % does.) Since 13 leaves a remainder of 1 when divided by 2, the condition is false. Hence the string 'This is an even number' is not printed.
Step11: Try re-executing the cell above with various values of y and see the effect on the output.
Step12: See how much more elegant this is instead of nested if else statements.
Step13: As before, play around with the various values of marks to make sure that the elif structure is working as intended. Notice that I phrased the conditional only to check against a lower bound. This is because Python will only execute the code block corresponding to the first True conditional in the elif sequence. Even if subsequent conditionals evaluate to true, their code is not run.
Step14: But Python allows us to write a more readable form of the for loop. So the following is equivalent to the above and is preferred.
Step15: Note that the "counter variable" staff_name is actually a variable containing the current item in the list as the loop progresses. I could have used any name I wanted for the variable - I could use i to represent the staff's name. But I chose staff_name for readability purposes. As the variable staff_name runs through each item of staff, the code block print is executed with the current value of staff_name. Once that is done, the variable is updated with the next item in the list and the block is executed once more. This proceeds until the end of the list is reached and the for loop terminates.
Step16: Here's a more mathematical usage of the for statement. Suppose we want to compute the decimal expansion of $\sqrt{2}$ accurate to 3 decimal places. After searching Wikipedia, I came up with this recursive formula $$\begin{align}a_0 &= 1 \\ a_{n+1} &= \frac{a_n}{2}+\frac{1}{a_n}\end{align}$$ Here's how we could implement this.
Step17: 8.2.1 Your mission, should you choose to accept it
| <ASSISTANT_TASK:>
Python Code:
# creating a dictionary and assigning it to a variable
staff = {'name': 'Andy', 'age': 28, 'email': 'andy@company.com' }
staff['name']
staff['age']
print(staff['email'])
# A dictionary is of class dict
print(type(staff))
# list of all keys, note the brackets at the end.
# .keys is a method associated to dictionaries
staff.keys()
# list of all values, in no particular order
staff.values()
# list all key-value pairings using .items
staff.items()
# Hey, Andy mistakenly keyed in his age. He is actually 29 years old!
staff['age'] = 29
print(staff)
# HR wants us to record down his staff ID.
staff['id'] = 12345
print(staff)
# Let's check the list of keys
staff.keys()
favourite_food = dict() # You could also type favourite_food = {}
print(favourite_food)
# update your dictionary here
# and print the dictionary
print(favourite_food)
staff.update({'salary': 980.15, 'department':'finance', 'colleagues': ['George', 'Liz']})
# Who are Andy's colleagues? Enter answer below
# Which department does he work in? Enter answer below
my_favourite_things = dict(food="Assam Laksa", music="classical", number = 2)
# my favourite number
my_favourite_things['number']
# An error
my_favourite_things[food]
# ...but this is correct
food = 'food'
my_favourite_things[food]
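# Hedged extra: the .get method avoids a KeyError when a key might be missing,
# returning a default value instead.
print(my_favourite_things.get('colour', 'no favourite colour recorded'))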
# examples of binary comparison
1 < 2
# compound statements
1 < 2 or 1 == 2
# using bitwise operators
1<2 & 1==2
x = 300
if x == 300:
print('This is Sparta!')
y = 13
print(y)
if y % 2 == 0:
print('This is an even number')
print("I guess it's odd then")
y = 22
if y%2 ==0:
print("{} is an even number".format(y))
else:
print("{} is an odd number".format(y))
y = 13
if y%2 ==0:
print("{} is an even number".format(y))
else:
print("{} is an odd number".format(y))
# Nested if else statements
y = 25
remainder = y%3
if remainder == 0:
print("{} is divisible by 3".format(y))
else:
print("{} is not divisible by 3".format(y))
if remainder ==1:
print("But has remainder {}".format(remainder))
else:
print("But has remainder {}".format(remainder))
y=25
remainder = y%3
if remainder == 0:
div = 'is'
s = 'Hence'
elif remainder == 1:
div = 'is not'
s = 'But'
elif remainder == 2:
div = 'is not'
s = 'But'
print('{} {} divisible by 3\n{} has remainder {}'.format(y, div, s, remainder))
marks = 78.35
if marks >= 80:
grade = 'A'
elif marks >= 70:
grade = 'B'
elif marks >= 60:
grade = 'C'
elif marks >= 50:
grade = 'D'
elif marks >= 45:
grade = 'E'
else:
grade = 'F'
print('Student obtained %.1f marks in the test and hence is awarded %s for this module' % (marks, grade))
staff = ['Lisa', 'Mark', 'Andy']
for i in range(0,3): # range(0,3) is a function that produces a sequence of numbers in the form of a list: [0,1,2]
print("Staff member "+staff[i])
for staff_name in staff:
print("Staff member "+ staff_name)
# A common programming interview task. Print 'foo' if x is divisible by 3 and 'bar' if it is divisible by 5 and 'baz'
# if x is divisible by both 3 and 5. Do this for numbers 1 to 15.
for num in range(1,16): # range(1,16) produces a list of numbers started from 1 and ending at 15.
if num % 3 == 0 and num % 5 !=0:
print('%d foo' % (num))
elif num % 5 == 0 and num % 3 != 0:
print('%d bar' % (num))
elif num % 5 == 0 and num % 3 == 0:
print('%d baz' % (num))
else:
print('%d' % (num))
max_iter = 10
a = 1
# Since _ is considered a valid variable name, we can use this to
# "suppress" counting indices.
for _ in range(0, max_iter):
a_next = a/2.0 + 1/a
if abs(a_next-a) < 1e-4: # You can use engineering format numbers in Python
print("Required accuracy found! Breaking out of the loop.")
break
a = a_next
print("Approximation of sqrt(2) is: %.3f" % (a))
# Answer
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set the structure
Step2: Result from VASP DFPT calculations using the supercell structure
Step3: Initialize phonopy and set the force constants obtained from VASP
Step4: Define the paths for plotting the bandstructure and set them in phonopy
Step5: Set the mesh in reciprocal space and plot DOS
| <ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import pymatgen as pmg
from pymatgen.io.vasp.outputs import Vasprun
from phonopy import Phonopy
from phonopy.structure.atoms import Atoms as PhonopyAtoms
%matplotlib inline
Si_primitive = PhonopyAtoms(symbols=['Si'] * 2,
scaled_positions=[(0, 0, 0), (0.75, 0.5, 0.75)],
cell=[[3.867422 ,0.000000, 0.000000],
[1.933711, 3.349287, 0.000000],
[-0.000000, -2.232856, 3.157737]])
# supercell size
scell = [[2,0,0],[0,2,0],[0,0,2]]
vrun = Vasprun(os.path.join(os.path.dirname(pmg.__file__), "..", 'test_files', "vasprun.xml.dfpt.phonon"))
phonon = Phonopy(Si_primitive, scell)
# negative sign to ensure consistency with phonopy convention
phonon.set_force_constants(-vrun.force_constants)
bands = []
# path 1
q_start = np.array([0.5, 0.5, 0.0])
q_end = np.array([0.0, 0.0, 0.0])
band = []
for i in range(51):
band.append(q_start + (q_end - q_start) / 50 * i)
bands.append(band)
# path 2
q_start = np.array([0.0, 0.0, 0.0])
q_end = np.array([0.5, 0.0, 0.0])
band = []
for i in range(51):
band.append(q_start + (q_end - q_start) / 50 * i)
bands.append(band)
phonon.set_band_structure(bands)
phonon.plot_band_structure().show()
mesh = [31, 31, 31]
phonon.set_mesh(mesh)
phonon.set_total_DOS()
phonon.plot_total_DOS().show()
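# Hedged extra (same older phonopy API as used above): thermal properties from the mesh sampling.
phonon.set_thermal_properties(t_min=0, t_max=1000, t_step=50)
phonon.plot_thermal_properties().show()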
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Second, let's create the motion correction and smoothing node. For an explanation about this step, see the Nodes and Interfaces tutorial.
Step2: Third, let's create the workflow that will contain those three nodes. For an explanation about this step, see the Workflow tutorial.
Step3: Now that everything is set up, let's run the preprocessing workflow.
Step4: After the execution of the workflow we have all the data hidden in the working directory 'working_dir'. Let's take a closer look at the content of this folder
Step5: Let's take a look at the output folder
Step6: Let's take a look at the new output folder structure
| <ASSISTANT_TASK:>
Python Code:
from nipype import SelectFiles, Node
# Create SelectFiles node
templates={'func': '{subject_id}/func/{subject_id}_task-flanker_run-1_bold.nii.gz'}
sf = Node(SelectFiles(templates),
name='selectfiles')
sf.inputs.base_directory = '/data/ds102'
sf.inputs.subject_id = 'sub-01'
from nipype.interfaces.fsl import MCFLIRT, IsotropicSmooth
# Create Motion Correction Node
mcflirt = Node(MCFLIRT(mean_vol=True,
save_plots=True),
name='mcflirt')
# Create Smoothing node
smooth = Node(IsotropicSmooth(fwhm=4),
name='smooth')
from nipype import Workflow
from os.path import abspath
# Create a preprocessing workflow
wf = Workflow(name="preprocWF")
wf.base_dir = 'working_dir'
# Connect the three nodes to each other
wf.connect([(sf, mcflirt, [("func", "in_file")]),
(mcflirt, smooth, [("out_file", "in_file")])])
wf.run()
from nipype.interfaces.io import DataSink
# Create DataSink object
sinker = Node(DataSink(), name='sinker')
# Name of the output folder
sinker.inputs.base_directory = 'output'
# Connect DataSink with the relevant nodes
wf.connect([(smooth, sinker, [('out_file', 'in_file')]),
(mcflirt, sinker, [('mean_img', 'mean_img'),
('par_file', 'par_file')]),
])
wf.run()
wf.connect([(smooth, sinker, [('out_file', 'preproc.@in_file')]),
(mcflirt, sinker, [('mean_img', 'preproc.@mean_img'),
('par_file', 'preproc.@par_file')]),
])
wf.run()
# Define substitution strings
substitutions = [('_task-flanker', ''),
('_bold_mcf', ''),
('.nii.gz_mean_reg', '_mean'),
('.nii.gz.par', '.par')]
# Feed the substitution strings to the DataSink node
sinker.inputs.substitutions = substitutions
# Run the workflow again with the substitutions in place
wf.run()
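# Hedged check of the resulting folder layout (walks the 'output' directory created above).
import os
for root, dirs, files in os.walk('output'):
    print(root, files)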
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this example, we will download some text from wikipedia, split it up into chunks and then plot it. We will use the wikipedia package to retrieve the wiki pages for 'dog' and 'cat'.
Step2: Below is a snippet of some of the text from the dog wikipedia page. As you can see, the word dog appears in many of the sentences, but also words related to dog like wolf and carnivore appear.
Step3: Now we will simply pass the text samples as a list to hyp.plot. By default hypertools will transform the text data using a topic model that was fit on a variety of wikipedia pages. Specifically, the text is vectorized using the scikit-learn CountVectorizer and then passed on to a LatentDirichletAllocation to estimate topics. As can be seen below, the 5 chunks of text from the dog/cat wiki pages cluster together, suggesting they are made up of distinct topics.
Step4: Now, let's add a third very different topic to the plot.
Step5: As you might expect, the cat and dog text chunks are closer to each other than to basketball in this topic space. Since cats and dogs are both animals, they share many more features (and thus are described with similar text) than basketball.
Step6: Visualizing Wikipedia pages
Step7: Visualizing State of the Union Addresses
Step8: Changing the reduction model
Step9: Defining a corpus
| <ASSISTANT_TASK:>
Python Code:
import hypertools as hyp
import wikipedia as wiki
%matplotlib inline
def chunk(s, count):
    # Split the string s into consecutive substrings of length `count`
    # (any trailing remainder shorter than `count` is dropped).
    return [''.join(x) for x in zip(*[list(s[z::count]) for z in range(count)])]
chunk_size = 5
dog_text = wiki.page('Dog').content
cat_text = wiki.page('Cat').content
dog = chunk(dog_text, int(len(dog_text)/chunk_size))
cat = chunk(cat_text, int(len(cat_text)/chunk_size))
dog[0][:1000]
hue=['dog']*chunk_size+['cat']*chunk_size
geo = hyp.plot(dog + cat, 'o', hue=hue, size=[8, 6])
bball_text = wiki.page('Basketball').content
bball = chunk(bball_text, int(len(bball_text)/chunk_size))
hue=['dog']*chunk_size+['cat']*chunk_size+['bball']*chunk_size
geo = hyp.plot(dog + cat + bball, 'o', hue=hue, labels=hue, size=[8, 6])
nips = hyp.load('nips')
nips.plot(size=[8, 6])
wiki = hyp.load('wiki')
wiki.plot(size=[8, 6])
sotus = hyp.load('sotus')
sotus.plot(size=[10,8])
sotus.plot(reduce='UMAP', size=[10, 8])
sotus.plot(reduce='UMAP', corpus='nips', size=[10, 8])
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Task
Step3: A toy example
Step4: See some of the co-occurrence statistics
Step5: This tells us that idea was seen together with time 258 times in the corpus I've used.
Step6: Task
Step7: Projecting word vectors from 2000 dimensions to 2
Step8: Now we have word vector embedding to a low dimensional space!
Step9: Task: Do the clusters you see align with your grouping of words?
Step11: Just an example to see what we've got there.
| <ASSISTANT_TASK:>
Python Code:
# This is a code cell. It can be executed by pressing CTRL+Enter
print('Hello')
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import pandas
pandas.options.display.max_columns = 11
pandas.options.display.max_rows = 5
import matplotlib
matplotlib.rcParams['font.size'] = 15
matplotlib.rcParams['figure.figsize'] = 15, 9
matplotlib.rcParams['savefig.dpi'] = 227
from random import sample
from urllib.request import urlretrieve
import pandas as pd
import seaborn as sns
import numpy as np
def get_space(url, key='space'):
    """Download the co-occurrence data."""
frame_file, _ = urlretrieve(url)
return pd.read_hdf(frame_file, key=key)
# Load the space into the memory
toy_space = get_space(
'http://www.eecs.qmul.ac.uk/~dm303/static/eecs_open14/space_frame_eecs14.h5'
)
# So far we are interested in just these words
interesting_words = ['idea', 'notion', 'boy', 'girl']
# Query the vector space for the words of interest
toy_space.loc[interesting_words]
# We are going to use pairwise_distances function from the sklearn package
from sklearn.metrics.pairwise import pairwise_distances
# Compute distances for the words of interest
distances = pairwise_distances(
toy_space.loc[interesting_words].values,
metric='cosine',
)
# Show the result
np.round(
pd.DataFrame(distances, index=interesting_words, columns=interesting_words),
3,
)
# np.exp(-distances) is a fancy way of converting distances to similarities
pd.DataFrame(np.exp(-distances), index=interesting_words, columns=interesting_words)
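# Hedged extra: rank every word in the toy space by its cosine similarity to 'idea'
# (reuses the toy_space frame and pairwise_distances imported above).
sims = 1 - pairwise_distances(toy_space.values, toy_space.loc[['idea']].values, metric='cosine')
pd.Series(sims.ravel(), index=toy_space.index).sort_values(ascending=False).head()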
from sklearn import manifold
from sklearn.preprocessing import MinMaxScaler
# clf will be able to "project" word vectors to 2 dimensions
clf = manifold.MDS(n_components=2, dissimilarity='precomputed')
# in X we store the projection results
X = MinMaxScaler().fit_transform( # Normalize the values between 0 and 1 so it's easier to plot.
clf.fit_transform(pairwise_distances(toy_space.values, metric='cosine'))
)
pd.DataFrame(X, index=toy_space.index)
import pylab as pl
pl.figure()
for word, (x, y) in zip(toy_space.index, X):
pl.text(x, y, word)
pl.tight_layout()
space = get_space(
'http://www.eecs.qmul.ac.uk/~dm303/static/data/bigo_matrix.h5.gz'
)
space.loc[
['John', 'Mary', 'girl', 'boy'],
['tree', 'car', 'face', 'England', 'France']
]
def plot(space, words, file_name=None):
    """Plot the `words` from the given `space`."""
cooc = space.loc[words]
missing_words = list(cooc[cooc.isnull().all(axis=1)].index)
assert not missing_words, '{0} are not in the space'.format(missing_words)
distances = pairwise_distances(cooc, metric='cosine')
clf = manifold.MDS(n_components=2, dissimilarity='precomputed', n_jobs=2)
X = MinMaxScaler().fit_transform(
clf.fit_transform(distances)
)
for word, (x, y) in zip(words, X):
pl.text(x, y, word)
pl.tight_layout()
if file_name is not None:
pl.savefig(file_name)
matplotlib.rcParams['font.size'] = 20
x= plot(
space,
(
'red orange pink green blue white yellow black '
'mother father son daughter aunt uncle '
'concept research theory '
'car bus tube road bicycle train '
'karate fight fencing '
'apple company fruit train set '
''.split()
)
)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Given an ordered binary tree $t$, the expression $t.\texttt{isEmpty}()$ checks whether $t$ is the empty tree.
Step2: Given an ordered binary tree $t$ and a key $k$, the expression $t.\texttt{member}(k)$ returns True if the key $k$ is stored in the tree $t$.
Step3: The method $\texttt{insert}()$ is specified via recursive equations.
Step4: The method $\texttt{self}.\texttt{delete}(k)$ removes the key $k$ from the tree $\texttt{self}$. It is defined as follows
Step5: The method $\texttt{self}.\texttt{delMin}()$ removes the smallest key from the given tree $\texttt{self}$
Step6: Given two ordered binary trees $s$ and $t$, the expression $s.\texttt{update}(t)$ overwrites the attributes of $s$ with the corresponding attributes of $t$.
Step7: The function $\texttt{restore}(\texttt{self})$ restores the balancing condition of the given binary tree
Step8: The function $\texttt{self}.\texttt{_setValues}(k, l, r)$ overwrites the member variables of the node $\texttt{self}$ with the given values.
Step9: The function $\texttt{createNode}(k, l, r)$ creates an AVL tree that has the key $k$ stored at its root and the trees $l$ and $r$ as its left and right subtree.
Step10: The method $t.\texttt{pop}()$ takes an AVL tree $t$ and removes and returns the smallest key that is present in $t$. It is specified as follows
Step11: Display Code
Step12: Given an ordered binary tree, this function renders the tree graphically using graphviz.
Step13: This method assigns a unique identifier with each node. The dictionary NodeDict maps these identifiers to the nodes where they occur.
Step14: Testing
Step15: Let's generate an ordered binary tree with random keys.
Step16: This tree looks more or less balanced. Let us try to create a tree by inserting sorted numbers, because that resulted in linear complexity for ordered binary trees.
Step17: Next, we compute the set of prime numbers $\leq 100$. Mathematically, this set is given as follows
| <ASSISTANT_TASK:>
Python Code:
class Set:
def __init__(self):
self.mKey = None
self.mLeft = None
self.mRight = None
self.mHeight = 0
def isEmpty(self):
return self.mKey == None
Set.isEmpty = isEmpty
def member(self, key):
if self.isEmpty():
return
elif self.mKey == key:
return True
elif key < self.mKey:
return self.mLeft.member(key)
else:
return self.mRight.member(key)
Set.member = member
def insert(self, key):
if self.isEmpty():
self.mKey = key
self.mLeft = Set()
self.mRight = Set()
self.mHeight = 1
elif self.mKey == key:
pass
elif key < self.mKey:
self.mLeft.insert(key)
self._restore()
else:
self.mRight.insert(key)
self._restore()
Set.insert = insert
def delete(self, key):
if self.isEmpty():
return
if key == self.mKey:
if self.mLeft.isEmpty():
self._update(self.mRight)
elif self.mRight.isEmpty():
self._update(self.mLeft)
else:
self.mRight, self.mKey = self.mRight._delMin()
elif key < self.mKey:
self.mLeft.delete(key)
else:
self.mRight.delete(key)
Set.delete = delete
def _delMin(self):
if self.mLeft.isEmpty():
return self.mRight, self.mKey
else:
ls, km = self.mLeft._delMin()
self.mLeft = ls
self._restore()
return self, km
Set._delMin = _delMin
def _update(self, t):
self.mKey = t.mKey
self.mLeft = t.mLeft
self.mRight = t.mRight
self.mHeight = t.mHeight
Set._update = _update
def _restore(self):
if abs(self.mLeft.mHeight - self.mRight.mHeight) <= 1:
self._restoreHeight()
return
if self.mLeft.mHeight > self.mRight.mHeight:
k1, l1, r1 = self.mKey, self.mLeft, self.mRight
k2, l2, r2 = l1.mKey, l1.mLeft, l1.mRight
if l2.mHeight >= r2.mHeight:
self._setValues(k2, l2, createNode(k1, r2, r1))
else:
k3, l3, r3 = r2.mKey, r2.mLeft, r2.mRight
self._setValues(k3, createNode(k2, l2, l3),
createNode(k1, r3, r1))
elif self.mRight.mHeight > self.mLeft.mHeight:
k1, l1, r1 = self.mKey, self.mLeft, self.mRight
k2, l2, r2 = r1.mKey, r1.mLeft, r1.mRight
if r2.mHeight >= l2.mHeight:
self._setValues(k2, createNode(k1, l1, l2), r2)
else:
k3, l3, r3 = l2.mKey, l2.mLeft, l2.mRight
self._setValues(k3, createNode(k1, l1, l3),
createNode(k2, r3, r2))
self._restoreHeight()
Set._restore = _restore
def _setValues(self, k, l, r):
self.mKey = k
self.mLeft = l
self.mRight = r
Set._setValues = _setValues
def _restoreHeight(self):
self.mHeight = max(self.mLeft.mHeight, self.mRight.mHeight) + 1
Set._restoreHeight = _restoreHeight
def createNode(key, left, right):
node = Set()
node.mKey = key
node.mLeft = left
node.mRight = right
node.mHeight = max(left.mHeight, right.mHeight) + 1
return node
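# Optional sanity check (a sketch, not part of the original notebook): verify the
# AVL balancing invariant |height(left) - height(right)| <= 1 on every node.
def isBalanced(t):
    if t.isEmpty():
        return True
    return (abs(t.mLeft.mHeight - t.mRight.mHeight) <= 1 and
            isBalanced(t.mLeft) and isBalanced(t.mRight))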
def pop(self):
if self.mKey == None:
raise KeyError
if self.mLeft.mKey == None:
key = self.mKey
self._update(self.mRight)
return key
return self.mLeft.pop()
Set.pop = pop
import graphviz as gv
def toDot(self):
Set.sNodeCount = 0 # this is a static variable of the class Set
dot = gv.Digraph(node_attr={'shape': 'record', 'style': 'rounded'})
NodeDict = {}
self._assignIDs(NodeDict)
for n, t in NodeDict.items():
if t.mKey != None:
dot.node(str(n), label=str(t.mKey))
else:
dot.node(str(n), label='', shape='point')
for n, t in NodeDict.items():
if not t.mLeft == None:
dot.edge(str(n), str(t.mLeft.mID))
if not t.mRight == None:
dot.edge(str(n), str(t.mRight.mID))
return dot
Set.toDot = toDot
def _assignIDs(self, NodeDict):
Set.sNodeCount += 1
self.mID = Set.sNodeCount
NodeDict[self.mID] = self
if self.isEmpty():
return
self.mLeft ._assignIDs(NodeDict)
self.mRight._assignIDs(NodeDict)
Set._assignIDs = _assignIDs
def demo():
m = Set()
m.insert("anton")
m.insert("hugo")
m.insert("gustav")
m.insert("jens")
m.insert("hubert")
m.insert("andre")
m.insert("philipp")
m.insert("rene")
return m
t = demo()
t.toDot()
while not t.isEmpty():
print(t.pop())
display(t.toDot())
import random as rnd
t = Set()
for k in range(30):
k = rnd.randrange(100)
t.insert(k)
display(t.toDot())
while not t.isEmpty():
print(t.pop(), end=' ')
display(t.toDot())
t = Set()
for k in range(30):
t.insert(k)
display(t.toDot())
while not t.isEmpty():
print(t.pop(), end=' ')
display(t.toDot())
S = Set()
for k in range(2, 101):
S.insert(k)
display(S.toDot())
for i in range(2, 101):
for j in range(2, 101):
S.delete(i * j)
display(S.toDot())
while not S.isEmpty():
print(S.pop(), end=' ')
display(S.toDot())
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Indefinite integrals
Step2: Integral 1
Step3: Integral 2
Step4: Integral 3
Step5: Integral 4
Step6: Integral 5
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy import integrate
def integrand(x, a):
return 1.0/(x**2 + a**2)
def integral_approx(a):
# Use the args keyword argument to feed extra arguments to your integrand
I, e = integrate.quad(integrand, 0, np.inf, args=(a,))
return I
def integral_exact(a):
return 0.5*np.pi/a
print("Numerical: ", integral_approx(1.0))
print("Exact : ", integral_exact(1.0))
assert True # leave this cell to grade the above integral
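# Side note (a sketch): integrate.quad also returns an estimate of the absolute
# error, which the helper above discards; it can be inspected directly.
I, err = integrate.quad(integrand, 0, np.inf, args=(1.0,))
print("estimate =", I, " reported error bound =", err)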
def integrand(x,a,b):
return np.sin(a*x)/np.sinh(b*x)
def integrate_approx(a,b):
I,e=integrate.quad(integrand,0,np.inf, args=(a,b))
return I
def integrate_exact(a,b):
return np.pi/(2*b)*np.tanh(a*np.pi/(2*b))
print('Numerical:', integrate_approx(1.0,2.0))
print('Exact:', integrate_exact(1.0,2.0))
assert True # leave this cell to grade the above integral
def integrand(x,a,b):
return np.exp(-a*x)*np.cos(b*x)
def integrate_approx(a,b):
I,e=integrate.quad(integrand,0,np.inf, args=(a,b))
return I
def integrate_exact(a,b):
return a/(a**2+b**2)
print('Numerical:', integrate_approx(1.0,2.0))
print('Exact:', integrate_exact(1.0,2.0))
assert True # leave this cell to grade the above integral
def integrand(x,p):
return (1-np.cos(p*x))/x**2
def integrate_approx(p):
I,e=integrate.quad(integrand,0,np.inf, args=(p))
return I
def integrate_exact(p):
return p*np.pi/2
print('Numerical:', integrate_approx(4.0))
print('Exact:', integrate_exact(4.0))
assert True # leave this cell to grade the above integral
def integrand(x,a,b):
return np.log(a**2+x**2)/(b**2+x**2)
def integrate_approx(a,b):
I,e=integrate.quad(integrand,0,np.inf, args=(a,b))
return I
def integrate_exact(a,b):
return np.pi/b*np.log(a+b)
print('Numerical:', integrate_approx(3.0,4.0))
print('Exact:', integrate_exact(3.0,4.0))
assert True # leave this cell to grade the above integral
def integrand(x,a,b):
return np.sqrt(a**2-x**2)
def integrate_approx(a,b):
I,e=integrate.quad(integrand,0,a, args=(a,b))
return I
def integrate_exact(a,b):
return np.pi*a**2/4
print('Numerical:', integrate_approx(1.0,2.0))
print('Exact:', integrate_exact(1.0,2.0))
assert True # leave this cell to grade the above integral
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Euler's method
Step4: The midpoint method is another numerical method for solving the above differential equation. In general it is more accurate than the Euler method. It uses the update equation
Step6: You are now going to solve the following differential equation
Step7: In the following cell you are going to solve the above ODE using four different algorithms
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
def solve_euler(derivs, y0, x):
Solve a 1d ODE using Euler's method.
Parameters
----------
derivs : function
The derivative of the diff-eq with the signature deriv(y,x) where
y and x are floats.
y0 : float
The initial condition y[0] = y(x[0]).
x : np.ndarray, list, tuple
The array of times at which of solve the diff-eq.
Returns
-------
y : np.ndarray
Array of solutions y[i] = y(x[i])
h=x[-1]-x[-2]
data = [y0]
for t in x[1:]:
data.append(data[-1]+h*derivs(data[-1],t))
return data
assert np.allclose(solve_euler(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
def solve_midpoint(derivs, y0, x):
Solve a 1d ODE using the Midpoint method.
Parameters
----------
derivs : function
The derivative of the diff-eq with the signature deriv(y,x) where y
and x are floats.
y0 : float
The initial condition y[0] = y(x[0]).
x : np.ndarray, list, tuple
The array of times at which of solve the diff-eq.
Returns
-------
y : np.ndarray
Array of solutions y[i] = y(x[i])
h=x[-1]-x[-2]
data = [y0]
for t in x[1:]:
data.append(data[-1]+h*derivs(data[-1]+h/2*derivs(data[-1],t),t+h/2))
return data
assert np.allclose(solve_midpoint(lambda y, x: 1, 0, [0,1,2]), [0,1,2])
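# A quick order-of-accuracy check (a sketch, using dy/dx = y with y(0) = 1, whose
# exact value at x = 1 is e): halving the step size should roughly halve the
# Euler error and roughly quarter the midpoint error.
for n in (11, 21):
    xs = np.linspace(0, 1, n)
    err_euler = abs(solve_euler(lambda y, x: y, 1.0, xs)[-1] - np.e)
    err_mid = abs(solve_midpoint(lambda y, x: y, 1.0, xs)[-1] - np.e)
    print("h = %.2f  Euler error = %.4f  midpoint error = %.5f" % (xs[1] - xs[0], err_euler, err_mid))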
def solve_exact(x):
compute the exact solution to dy/dx = x + 2y.
Parameters
----------
x : np.ndarray
Array of x values to compute the solution at.
Returns
-------
y : np.ndarray
Array of solutions at y[i] = y(x[i]).
data = np.array(.25*np.exp(2*x)-.5*x-.25)
return data
assert np.allclose(solve_exact(np.array([0,1,2])),np.array([0., 1.09726402, 12.39953751]))
x = np.linspace(0,1,11)
def derivs(y, x):
dy = x+2*y
return dy
euler_error = np.array(solve_euler(derivs, 0, x))-solve_exact(x)
midpoint_error = np.array(solve_midpoint(derivs, 0, x))-solve_exact(x)
odeint_error = np.array(odeint(derivs, 0, x)).flatten()-solve_exact(x)
f = plt.figure(figsize = (9,6))
ax = plt.subplot(211)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.plot(x,solve_euler(derivs, 0, x), label="Euler")
plt.plot(x,solve_midpoint(derivs, 0, x), label="Midpoint")
plt.plot(x,solve_exact(x), label="Exact")
plt.plot(x,odeint(derivs, 0, x), label="ODEInt")
plt.ylabel("y(x)")
plt.xlabel("x")
plt.title(r"Numerical Solutions to $\frac{dy}{dx}=x+2y$")
plt.legend(loc = "best")
ax = plt.subplot(212)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.yaxis.set_ticks_position('left')
ax.xaxis.set_ticks_position('bottom')
plt.plot(x,abs(euler_error), label = "Euler Error")
plt.plot(x,abs(midpoint_error), label = "Midpoint Error")
plt.plot(x,abs(odeint_error), label = "ODEInt Error")
plt.ylabel("Errors")
plt.xlabel("x")
plt.title(r"Errors of numerical solutions to $\frac{dy}{dx}=x+2y$")
plt.legend(loc = "best")
plt.tight_layout()
assert True # leave this for grading the plots
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Survey
Step2: Trivariate
| <ASSISTANT_TASK:>
Python Code:
import dit
from dit.pid.helpers import compare_measures
from dit.pid.distributions import bivariates, trivariates
dit.ditParams['print.exact'] = dit.ditParams['repr.print'] = True
dit.ditParams['text.font'] = 'linechar'
for name, dist in bivariates.items():
compare_measures(dist, name=name)
for name, dist in trivariates.items():
compare_measures(dist, name=name)
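# The same comparison can be run on a hand-built distribution (a sketch); here a
# two-input XOR gate, for which the classic expectation is purely synergistic
# information.
xor = dit.Distribution(['000', '011', '101', '110'], [1 / 4] * 4)
compare_measures(xor, name='xor')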
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1.3 Reading the domain data
Step2: 2 Exploration by Principal Component Analysis (PCA)
Step3: The function defined below displays a scatter plot of the observations in a factorial plane.
Step4: Computation of the matrix of principal components. This is also a change of basis (a transformation), from the canonical basis to the basis of eigenvectors.
Step5: 2.2 Eigenvalues, i.e. variances of the principal components
Step6: A more explicit graphic describes the distributions of these components with box plots; only the first ones are shown.
Step7: Comment on the decay of the variances and on the possible choice of a dimension, i.e. the number of components to keep out of the 561.
Step8: Q Comment on the separation of the two types of situation by the first axis.
Step9: The graph is unreadable if the labels are written out in full. Only a * is shown for each variable.
Step10: Identification of the variables contributing most to the first axis. It is no clearer! Only the representation of the individuals finally provides some insight.
Step11: 3 Exploration by Factorial Discriminant Analysis (AFD)
Step12: 3.2 Representation of the individuals
Step13: Q What can be said about the separation of the classes? Are they all pairwise separable?
Step14: 4.2. Predicting the activity for the test sample
Step15: Q Which classes remain difficult to discriminate?
Step16: 5.2. Predicting the activity for the test sample
| <ASSISTANT_TASK:>
Python Code:
# Import the main libraries and
# display the plots inside the notebook
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import time
# Read the training data
# Beware: the file may use several spaces as a separator
Xtrain = pd.read_table("X_train.txt", sep = '\s+', header = None)
Xtrain.head()
# Target variable
ytrain = pd.read_table("y_train.txt", sep = '\s+', header = None, names = list('y'))
# The DataFrame type is unnecessary, and even a hindrance, for what follows
ytrain = ytrain["y"]
# Read the test data
Xtest = pd.read_table("X_test.txt", sep = '\s+', header = None)
Xtest.shape
ytest = pd.read_table("y_test.txt", sep = '\s+', header = None, names = list('y'))
ytest = ytest["y"]
# Meaning of the y codes
label_dic = {1 : "Marcher", 2 : "Monter escalier", 3 : "Descendre escalier",
4 : "Assis", 5 : "Debout", 6 : "Couche"}
labels = label_dic.values()
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale
def plot_pca(X_R, fig, ax, nbc, nbc2):
for i in range(6):
xs = X_R[ytrain == i + 1, nbc - 1]
ys = X_R[ytrain == i + 1, nbc2 - 1]
label = label_dic[i + 1]
color = cmaps(i)
ax.scatter(xs, ys, color = color, alpha = .8, s = 1, label = label)
ax.set_xlabel("PC%d : %.2f %%" %(nbc, pca.explained_variance_ratio_[nbc - 1] * 100), fontsize = 10)
ax.set_ylabel("PC%d : %.2f %%" %(nbc2, pca.explained_variance_ratio_[nbc2 - 1] * 100), fontsize = 10)
pca = PCA()
X_r = pca.fit_transform(Xtrain)
plt.plot(pca.explained_variance_ratio_[0:10])
plt.show()
plt.boxplot(X_r[:,0:10])
plt.show()
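# A sketch to help choose the number of components to keep: cumulative share of
# the variance explained by the leading principal components.
cumvar = np.cumsum(pca.explained_variance_ratio_)
print(np.round(cumvar[:10], 3))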
cmaps = plt.get_cmap("Accent")
fig = plt.figure(figsize = (20, 20))
count = 0
for nbc, nbc2,count in [(1, 2, 1), (1, 3, 2), (1, 4, 3), (2, 3, 5), (2, 4, 6), (3, 4, 9)] :
ax = fig.add_subplot(3, 3, count)
plot_pca(X_r, fig, ax, nbc, nbc2)
plt.legend(loc = 'upper right', bbox_to_anchor = (1.8, 0.5), markerscale = 10)
plt.show()
with open('features.txt', 'r') as content_file:
featuresNames = content_file.read()
columnsNames = list(map(lambda x : x.split(" ")[1], featuresNames.split("\n")[:-1]))
# coordinates of the variables (correlation circle)
coord1 = pca.components_[0] * np.sqrt(pca.explained_variance_[0])
coord2 = pca.components_[1] * np.sqrt(pca.explained_variance_[1])
fig = plt.figure(figsize = (8,8))
ax = fig.add_subplot(1, 1, 1)
for i, j in zip(coord1, coord2, ):
plt.text(i, j, "*")
plt.arrow(0, 0, i, j, color = 'r')
plt.axis((-1.2, 1.2, -1.2, 1.2))
# unit circle
c = plt.Circle((0,0), radius = 1, color = 'b', fill = False)
ax.add_patch(c)
plt.show()
np.array(columnsNames)[abs(coord1) > .6]
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
method = LinearDiscriminantAnalysis()
lda = method.fit(Xtrain, ytrain)
X_r2 = lda.transform(Xtrain)
fig = plt.figure(figsize= (20, 20))
count = 0
for nbc, nbc2,count in [(1, 2, 1), (1, 3, 2), (1, 4, 3), (2, 3, 5), (2, 4, 6), (3, 4, 9)] :
ax = fig.add_subplot(3, 3, count)
plot_pca(X_r2, fig, ax, nbc, nbc2)
plt.legend(loc = 'upper right', bbox_to_anchor = (1.8, 0.5), markerscale = 10)
plt.show()
method = LinearDiscriminantAnalysis()
ts = time.time()
method.fit(Xtrain, ytrain)
scoreLDA = method.score(Xtest, ytest)
ypredLDA = method.predict(Xtest)
te = time.time()
from sklearn.metrics import confusion_matrix
print("Score : %f, time running : %d secondes" %(scoreLDA, te - ts))
pd.DataFrame(confusion_matrix(ytest, ypredLDA), index = labels, columns=labels)
from sklearn.linear_model import LogisticRegression
ts = time.time()
method = LogisticRegression()
method.fit(Xtrain, ytrain)
scoreLR = method.score(Xtest, ytest)
ypredLR = method.predict(Xtest)
te = time.time()
from sklearn.metrics import confusion_matrix
print("Score : %f, time running : %d secondes" %(scoreLR, te-ts))
pd.DataFrame(confusion_matrix(ytest, ypredLR), index = labels, columns=labels)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoding Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Sentence to Sequence
Step48: Translate
| <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
source_id_text = []
target_id_text = []
for sentence in source_text.split('\n'):
source_ids_words = [source_vocab_to_int[word] for word in sentence.split()]
source_id_text.append(source_ids_words)
for sentence in target_text.split('\n'):
sentence = sentence + ' <EOS>'
target_ids_words = [target_vocab_to_int[word] for word in sentence.split()]
target_id_text.append(target_ids_words)
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
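# A tiny illustration of text_to_ids (a sketch; this toy vocabulary is made up,
# not the real vocab_to_int dictionaries built by the helper functions).
toy_vocab = {'<PAD>': 0, '<EOS>': 1, '<UNK>': 2, '<GO>': 3, 'new': 4, 'jersey': 5}
print(text_to_ids('new jersey', 'new jersey', toy_vocab, toy_vocab))
# -> ([[4, 5]], [[4, 5, 1]]) : the target sentence gets the <EOS> id appended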
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# TODO: Implement Function
input_data = tf.placeholder(tf.int32, [None, None],name="input")
targets = tf.placeholder(tf.int32, [None, None], name="targets")
learning_rate = tf.placeholder(tf.float32, name="learning_rate")
keep_prob = tf.placeholder(tf.float32, name="keep_prob")
return input_data, targets, learning_rate, keep_prob
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return dec_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
# Encoder
enc_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
_, enc_state = tf.nn.dynamic_rnn(tf.contrib.rnn.DropoutWrapper(enc_cell, keep_prob), rnn_inputs, dtype=tf.float32)
return enc_state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
# Training Decoder
train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)
train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
tf.contrib.rnn.DropoutWrapper(dec_cell, keep_prob), train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)
# Apply output function
train_logits = output_fn(train_pred)
return train_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length - 1, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(tf.contrib.rnn.DropoutWrapper(dec_cell, keep_prob), infer_decoder_fn, scope=decoding_scope)
return inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
start_of_sequence_id = target_vocab_to_int['<GO>']
end_of_sequence_id = target_vocab_to_int['<EOS>']
cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.DropoutWrapper(tf.contrib.rnn.BasicLSTMCell(rnn_size), output_keep_prob=keep_prob)] * num_layers)
with tf.variable_scope("decoding") as decoding_scope:
output_fn = lambda logits: tf.contrib.layers.fully_connected(logits, vocab_size, None, scope=decoding_scope)
train_logits = decoding_layer_train(encoder_state, cell, dec_embed_input, sequence_length,
decoding_scope, output_fn, keep_prob)
with tf.variable_scope("decoding", reuse=True) as infer_scope:
infer_logits = decoding_layer_infer(encoder_state, cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, sequence_length, vocab_size, infer_scope,
output_fn, keep_prob)
return train_logits, infer_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# Encoder embedding
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)
encoder_state = encoding_layer(enc_embed_input, rnn_size, num_layers, keep_prob)
dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)
# Decoder Embedding
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
train_logits, infer_logits = decoding_layer(dec_embed_input, dec_embeddings, encoder_state, target_vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)
return train_logits, infer_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 20
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 128
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 227
decoding_embedding_size = 227
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.8
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
return [vocab_to_int.get(word, vocab_to_int['<UNK>']) for word in sentence.lower().split()]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem 1
Step2: Generate a set of $50$ one-dimensional inputs regularly spaced between -5 and 5 and store them in a variable called x, then compute the covariance matrix for these inputs, for $A=\Gamma=1$, store the results in a variable called K, and display it using matplotlib's imshow function.
Step3: Problem 1b
Step4: Now draw 5 samples from the distribution and plot them.
Step5: Problem 1c
Step6: Execute the cell below to define a handful of observations
Step7: Evaluate and plot the mean and 95% confidence interval of the resulting posterior distribution, as well as a few samples, for a squared exponential GP with $A=\Gamma=1$, assuming the measurement uncertainty on each observation was 0.1
Step8: Some things to note
Step9: Try evaluating the likelihood of the model given the observations you defined in problem 1 by executing the cell below. Hopefully it will run without errors...
Step10: Now try changing the covariance parameters and the observational uncertainties, and see how that affects the likelihood. Does it behave as you would expect, given the way these parameters affected the predictive distribution?
Step11: Plot the data and the predictive distribution and samples for the best-fit hyper-parameters
Step12: That may not have worked quite as well as you might have liked -- it's normal
Step13: Problem 3a
Step14: Problem 3b
Step15: Now you are ready to fit for all the hyper-parameters simultaneously
Step16: NB
Step17: NB
Step18: Now try fitting the data using the LinearMean mean function and the M32Kernel covariance function.
Step19: How does the best-fit likelihood compare to what you obtained using the SEKernel? Which kernel would you adopt if you had to choose between the two? Write your answer in the cell below.
Step20: Now evaluate the BIC in each case. Which model is preferred?
Step21: Thus the model with a non-zero mean function is strongly preferred (BIC differences $> 10$ are generally considered to represent very strong support for one model over the other).
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import cdist
from numpy.random import multivariate_normal
from numpy.linalg import inv
from numpy.linalg import slogdet
from scipy.optimize import fmin
def SEKernel(par, x1, x2):
A, Gamma = par
D2 = cdist(# complete
return # complete
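# One possible completion, for reference only (the blanks above are deliberately
# left as an exercise, and the intended scaling convention may differ): the
# standard squared-exponential kernel A * exp(-Gamma * |x1 - x2|^2), with D2 the
# matrix of squared pairwise distances.
def SEKernel_example(par, x1, x2):
    A, Gamma = par
    D2 = cdist(x1.reshape(-1, 1), x2.reshape(-1, 1), metric='sqeuclidean')
    return A * np.exp(-Gamma * D2)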
x = np.linspace(# complete
K = # complete
plt.imshow(K,interpolation='none');
m = # complete
sig = # complete
plt.plot(x,m,'k-')
plt.fill_between(x,m+2*sig,m-2*sig,color='k',alpha=0.2)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title('Prior distribution');
samples = multivariate_normal(# complete
plt.plot(x,samples.T)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title('Samples from prior distribution');
def Pred_GP(CovFunc, CovPar, xobs, yobs, eobs, xtest):
# evaluate the covariance matrix for pairs of observed inputs
K = # complete
# add white noise
K += np.identity(# complete
# evaluate the covariance matrix for pairs of test inputs
Kss = # complete
# evaluate the cross-term
Ks = # complete
# invert K
Ki = inv(K)
# evaluate the predictive mean
m = np.dot(# complete
# evaluate the covariance
cov = # complete
return m, cov
xobs = np.array([-4,-2,0,1,2])
yobs = np.array([1.0,-1.0, -1.0, 0.7, 0.0])
eobs = 0.1
m,C=Pred_GP(# complete
sig = # complete
samples = multivariate_normal(# complete
plt.errorbar(xobs,yobs,yerr=2*eobs,capsize=0,fmt='k.')
plt.plot(x,m,'k-')
plt.fill_between(x,m+2*sig,m-2*sig,color='k',alpha=0.2)
plt.plot(x,samples.T,alpha=0.5)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title('Predictive distribution');
def NLL_GP(p,CovFunc,x,y,e):
# Evaluate the covariance matrix
K = # complete
# Add the white noise term
K += # complete
# invert it
Ki = inv(K)
# evaluate each of the three terms in the NLL
term1 = # complete
term2 = # complete
term3 = # complete
# return the total
return term1 + term2 + term3
print(NLL_GP(# complete
p0 = [1.0,1.0]
p1 = fmin(NLL_GP,p0,args=(# complete
print(p1)
# You can reuse code from Problem 1c almost exactly here...
xobs = np.linspace(-10,10,50)
linear_trend = 0.03 * xobs - 0.3
correlated_noise = multivariate_normal(np.zeros(len(xobs)),SEKernel([0.005,2.0],xobs,xobs),1).flatten()
eobs = 0.01
white_noise = np.random.normal(0,eobs,len(xobs))
yobs = linear_trend + correlated_noise + white_noise
plt.errorbar(xobs,yobs,yerr=eobs,fmt='k.',capsize=0)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$');
def LinearMean(p,x):
return # complete
pm0 = [0.03, -0.3]
m = # complete
plt.errorbar(xobs,yobs,yerr=eobs,fmt='k.',capsize=0)
plt.plot(xobs,m,'r-')
plt.xlabel(r'$x$')
plt.ylabel(r'$y$');
def NLL_GP2(p,CovFunc,x,y,e, MeanFunc=None, nmp = 0):
if MeanFunc:
pc = p[# complete
pm = p[# complete
r = y - # complete
else:
pc = p[:]
r = y[:]
# Evaluate the covariance matrix
K = # complete
# Add the white noise term
K += # complete
# invert it
Ki = inv(K)
# evaluate each of the three terms in the NLL
term1 = # complete
term2 = # complete
term3 = # complete
# return the total
return term1 + term2 + term3
p0 = [0.005,2.0,0.03,-0.3]
print(NLL_GP2# complete
p1 = fmin(# complete
print(p1)
# Generate test inputs (values at which we ant to evaluate the predictive distribution)
x = np.linspace(# complete
# Evaluate mean function at observed inputs, and compute residuals
mobs = # complete
robs = yobs-mobs
# Evaluate stochastic component at test inputs
m,C = Pred_GP(# complete
# Evaluate mean function at test inputs
m += # complete
sig = # complete
plt.errorbar(xobs,yobs,yerr=2*eobs,capsize=0,fmt='k.')
plt.plot(x,m,'k-')
plt.fill_between(x,m+2*sig,m-2*sig,color='k',alpha=0.2)
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title('Maximum likelihood distribution');
def M32Kernel(par, x1, x2):
A, Gamma = par
R = cdist(# complete
return # complete
p0 = [0.005,2.0,0.03,-0.3]
print(NLL_GP2(# complete
p1 = fmin(# complete
print(p1)
print(NLL_GP2(# complete
# Copy and paste your answer to the previous problem and modify it as needed
N = len(xobs)
BIC_mean = # complete
print(BIC_mean)
BIC_no_mean = # complete
print(BIC_no_mean)
# Plot the data
plt.errorbar(xobs,yobs,yerr=2*eobs,capsize=0,fmt='k.')
plt.xlabel(r'$x$')
plt.ylabel(r'$y$')
plt.title('Model comparison')
# Evaluate and plot the predictive distribution with a mean function
mobs = # complete
robs = yobs-mobs
m,C = Pred_GP(# complete
m += # complete
sig = # complete
plt.plot(x,m,'b-')
plt.fill_between(x,m+2*sig,m-2*sig,color='b',alpha=0.2)
# Now do the same for the model without mean function
m,C = Pred_GP(# complete
sig = # complete
plt.plot(x,m,'r-')
plt.fill_between(x,m+2*sig,m-2*sig,color='r',alpha=0.2)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
| <ASSISTANT_TASK:>
Python Code:
def get_row(lst, x):
coords = [(i, j) for i in range(len(lst)) for j in range(len(lst[i])) if lst[i][j] == x]
return sorted(sorted(coords, key=lambda x: x[1], reverse=True), key=lambda x: x[0])
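# Example usage (sketch): matches are returned ordered by row, and right-to-left
# within each row.
print(get_row([[1, 2, 3], [1, 1, 3]], 1))   # [(0, 0), (1, 1), (1, 0)]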
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loss under a Mixture of Gaussians model
Step2: We use autograd for functions that deliver gradients of those losses
Step3: Just a pretty display
Step4: Learning, starting from random weights and bias.
| <ASSISTANT_TASK:>
Python Code:
def sigmoid(phi):
return 1.0/(1.0 + np.exp(-phi))
def calc_prob_class1(params):
# Sigmoid perceptron ('logistic regression')
tildex = X - params['mean']
W = params['wgts']
phi = np.dot(tildex, W)
return sigmoid(phi) # Sigmoid perceptron ('logistic regression')
def calc_membership(params):
# NB. this is just a helper function for training_loss really.
tildex = X - params['mean']
W, r2, R2 = params['wgts'], params['r2'], params['R2']
Dr2 = np.power(np.dot(tildex, W), 2.0)
L2X = (np.power(tildex, 2.0)).sum(1)
DR2 = L2X - Dr2
dist2 = (Dr2/r2) + (DR2/R2) # rescaled 'distance' to the shifted 'origin'
membership = np.exp(-0.5*dist2)
#print(membership)
return np.array(membership)
def classification_loss(params):
membership = calc_membership(params)
Y = calc_prob_class1(params)
return np.sum(membership*(Targ*np.log(Y) + (1-Targ)*np.log(1-Y)))
def MoG_loss(params):
membership = calc_membership(params)
return np.sum(membership)
classification_gradient = grad(classification_loss)
MoG_gradient = grad(MoG_loss)
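# A minimal illustration of what autograd's grad gives us (a sketch; it assumes
# np is autograd.numpy, as required for the gradients above to work).
toy_grad = grad(lambda w: np.sum(w ** 2))
print(toy_grad(np.array([1.0, 2.0, 3.0])))   # expect [ 2.  4.  6.]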
# Be able to show the current solution, against the data in 2D.
def show_result(params, X, Targ):
print("Parameters:")
for key in params.keys():
print(key,'\t', params[key])
print("Loss:", training_loss(params))
membership = calc_membership(params)
Y = calc_prob_class1(params)
pl.clf()
marksize = 8
cl ={0:'red', 1:'black'}
for i, x in enumerate(X):
pl.plot(x[0],x[1],'x',color=cl[int(Targ[i])],alpha=.4,markersize=marksize)
pl.plot(x[0],x[1],'o',color=cl[int(Targ[i])],alpha=1.-float(abs(Targ[i]-Y[i])),markersize=marksize)
pl.axis('equal')
s = X.ravel().max() - X.ravel().min()
m, w = params['mean'], params['wgts']
# Show the mean in blue
#pl.arrow(0, 0, m[0], m[1], head_width=0.25, head_length=0.5, fc='b', ec='b', linewidth=1, alpha=.95)
# Show the perceptron decision boundary, in green
pl.arrow(m[0]-w[0], m[1]-w[1], w[0], w[1], head_width=s, head_length=s/5, fc='g', ec='g', linewidth=3, alpha=.5)
pl.show()
def do_one_learning_step(params,X,Targ,rate):
grads = classification_gradient(params)
params['wgts'] = params['wgts'] + rate * grads['wgts'] # one step of learning
params['mean'] = params['mean'] + rate * grads['mean'] # one step of learning
return (params)
init_w = rng.normal(0,1,size=(Nins))
init_m = 4.*rng.normal(0,1,size=(Nins))
rate = 0.5 / Npats
params = {'wgts':init_w, 'mean':init_m, 'r2':1000.0, 'R2':1000.0}
for t in range(250):
params = do_one_learning_step(params,X,Targ,rate)
show_result(params, X, Targ)
Y = sigmoid(np.dot(X-params['mean'], params['wgts']))
print('vanilla loss: ', np.sum(Targ*np.log(Y) + (1-Targ)*np.log(1-Y)))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What's the Fast Inverse Root Method you ask?
Step2: Close Enough!
Step3: Here, $$\begin{align}
Step4: A Tale of two Variables
Step5: Recall that the log-log plot for $y = M \cdot x^C$ is linear because
Step6: Notice how as $C$ gets closer to $-\frac{1}{2}$, the 0x5f3759df line also gets closer to $x^C$.
Step7: Graphing Calculator Woes
Step8: Hmm, weren't we expecting 0x5f3759df instead of 0x5f400000?
Step9: ```c
Step10: Hey, that actually looks pretty good! But what about the errors?
Step11: An error of around $10\%$? That's like nothing!
| <ASSISTANT_TASK:>
Python Code:
setup_html = r'''
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/github-fork-ribbon-css/0.2.0/gh-fork-ribbon.min.css" />
<!--[if lt IE 9]>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/github-fork-ribbon-css/0.2.0/gh-fork-ribbon.ie.min.css" />
<![endif]-->
<style>
.github-fork-ribbon::after {line-height: initial !important;padding: initial !important;}
.github-fork-ribbon {font-size: 14px;}
.navigate-up, .navigate-down {
display: none !important;
}
</style>
<script>
$(document).ready(function() {
$("body").append('<a class="github-fork-ribbon" href="http://www.bullshitmath.lol" title="Bullshit Math">Bullshit Math</a>')
});
</script>
'''
# IPython.display.display_html(setup_html, raw=True)
hide_code_in_slideshow()
%matplotlib inline
from struct import pack, unpack
import numpy as np
import matplotlib.pyplot as plt
@np.vectorize
def sharp(x):
return unpack('I', pack('f', x))[0]
@np.vectorize
def flat(y):
return unpack('f', pack('I', int(y) & 0xffffffff))[0]
star_long_star_amp = sharp;
star_float_star_amp = flat;
hide_code_in_slideshow();
@np.vectorize
def rsqrt(x): # float rsqrt(float x) {
i = star_long_star_amp(x); # long i = * ( long * ) &x;
i = 0x5f3759df - ( i >> 1 ); # i = 0x5f3759df - ( i >> 1 );
return star_float_star_amp(i); # return * ( float * ) &i;
# }
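# A quick numeric sanity check (a sketch): compare the bit-hack against the true
# inverse square root for a few inputs.
for v in (1.0, 4.0, 100.0):
    approx, exact = float(rsqrt(v)), 1.0 / np.sqrt(v)
    print("x = %6.1f  qsqrt = %.5f  1/sqrt = %.5f  rel. err = %.3f%%"
          % (v, approx, exact, 100 * abs(approx - exact) / exact))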
# Construct a plot
fig = plt.figure(figsize=(16,8));
ax = plt.axes();
# Plot the approximation and the actual inverse sqrt function
x = np.linspace(1, 50, 5000);
approximation, = ax.plot(x, rsqrt(x))
actual, = ax.plot(x, 1/np.sqrt(x))
fig.legend(handles=[approximation, actual], labels=[r'qsqrt(x)', r'$\frac{1}{\sqrt{x}}$'], fontsize=20);
fig.suptitle(r"$\frac{1}{\sqrt{x}}$ versus qsqrt(x)", fontsize=26);
hide_code_in_slideshow()
from struct import pack, unpack
to_long = lambda hole: unpack('i', hole)[0] # y = (long*) x
to_float = lambda hole: unpack('f', hole)[0] # y = (float*) x
from_long = lambda hole: pack('i', int(hole) % 0x80000000) # long* y = &x
from_float = lambda hole: pack('f', float(hole)) # float* y = &x
hide_code_in_slideshow()
@np.vectorize
def f2l(x):
return to_long(from_float(x))
@np.vectorize
def l2f(y):
return to_float(from_long(y))
int( l2f(f2l(1) + f2l(1)) ) # 1 + 1 is ...
def foobar(M, C):
return np.vectorize(lambda x: l2f(M + C * f2l(x)))
# rsqrt(x) is instantiated with M = 0x5f3759df and C = -1/2
rsqrt = foobar(0x5f3759df, -1.0/2.0)
import matplotlib
matplotlib.rcParams['text.usetex'] = False
matplotlib.rcParams['text.latex.unicode'] = False
x = np.linspace(1, 1000, 5000)
allM = (1 << 26, 1 << 28, 0x5f3759df)
properties = {
(0, 0): {'M': allM, 'C': -2},
(1, 0): {'M': allM, 'C': 8},
(0, 1): {'M': allM, 'C': 0.3},
(1, 1): {'M': allM, 'C': -0.6},
}
fig, axarr = plt.subplots(2, 2, figsize=(14,8));
for key, property in properties.items():
C = property['C']
axarr[key].set_ylim(1e-39, 1e41)
handle, = axarr[key].loglog(x, x ** C, linestyle='dotted');
handles = [handle]
for M in property['M']:
baz = foobar(M, C)
kwargs = {'ls' : 'dashed'} if M == 0x5f3759df else {}
handle, = axarr[key].loglog(x, np.abs(baz(x)), **kwargs)
handles.append(handle)
axarr[key].set_title(r'For slope C = $%s$, ${\rm foobar}_{M,%s}(x)$' % (C, C))
axarr[key].legend(
handles,
[
r'$x^{%s}$' % C,
r'$M = 2^{26}$',
r'$M = 2^{28}$',
r'$M = {\rm 0x5f3759df}$'
], loc=4)
hide_code_in_slideshow()
from IPython.display import HTML
from matplotlib import animation
animation.Animation._repr_html_ = lambda anim: anim.to_html5_video()
x = np.linspace(1, 1000, 5000)
allM = (1 << 26, 1 << 28, 0x5f3759df)
fig = plt.figure(figsize=(14,8))
ax = plt.axes(ylim=(1e-39, 1e41))
def plotSomeMagic(C, fig, ax, handles=None):
if not handles:
handle, = ax.loglog(x, x ** C, linestyle='dotted');
handles = [handle]
for M in allM:
baz = foobar(M, C)
kwargs = {'ls' : 'dashed'} if M == 0x5f3759df else {}
handle, = ax.loglog(x, np.abs(baz(x)), **kwargs)
handles.append(handle)
else:
handles[0].set_data(x, x ** C)
baz = foobar(allM[0], C)
handles[1].set_data(x, np.abs(baz(x)))
baz = foobar(allM[1], C)
handles[2].set_data(x, np.abs(baz(x)))
baz = foobar(allM[2], C)
handles[3].set_data(x, np.abs(baz(x)))
ax.set_title(r'For slope C = $%s$, ${\rm foobar}_{M,%s}(x)$' % (C, C))
ax.legend(
handles,
[
r'$x^{%s}$' % C,
r'$M = 2^{26}$',
r'$M = 2^{28}$',
r'$M = {\rm 0x5f3759df}$'
], loc=4)
return tuple(handles)
handles = plotSomeMagic(0, fig, ax)
# initialization function: plot the background of each frame
def init():
return plotSomeMagic(1, fig, ax, handles)
# animation function. This is called sequentially
def animate(i):
return plotSomeMagic(i, fig, ax, handles)
hide_code_in_slideshow()
video = animation.FuncAnimation(fig, animate, init_func=init, frames=np.arange(-2,8,0.10), interval=100, blit=True)
plt.close();
video
# What is 1#?
display(Latex(r'Just $\textsf{f2l}(1) = \textsf{%s}$.' % hex(f2l(1))))
# What about inverse square-root?
display(Latex(r'For the inverse square-root, its magical constant should be \
$$\left(1 - \frac{-1}{2}\right)\textsf{f2l}(1) = \textsf{%s}$$'
% hex(3 * f2l(1) // 2)))
hide_code_in_slideshow()
def qexp(C):
# (1 - C) * f2l(1) + C * f2l(x)
return np.vectorize(lambda x: l2f((1 - C) * f2l(1) + C * f2l(x)))
x = np.linspace(1, 1000, 5000)
properties = {
(0, 0): {'M': allM, 'C': -1},
(1, 0): {'M': allM, 'C': 2},
(0, 1): {'M': allM, 'C': 0.3},
(1, 1): {'M': allM, 'C': -0.6},
}
fig, axarr = plt.subplots(2, 2, figsize=(14,8));
for key, property in properties.items():
C = property['C']
handle, = axarr[key].plot(x, x ** C);
handles = [handle]
baz = qexp(C)
handle, = axarr[key].plot(x, baz(x))
handles.append(handle)
# axarr[key].set_title(r'For slope C = $%s$, ${\rm foobar}_{M,%s}(x)$' % (C, C))
axarr[key].legend(
handles,
[
r'$x^{%s}$' % C,
r'$M^* = $ %s' % hex(int(C * sharp(1))),
], loc=4)
hide_code_in_slideshow()
from matplotlib.ticker import FuncFormatter
def to_percent(y, position):
# Ignore the passed in position. This has the effect of scaling the default
# tick locations.
s = str(int(100 * y))
# The percent symbol needs escaping in latex
if matplotlib.rcParams['text.usetex'] is True:
return s + r'$\%$'
else:
return s + '%'
# Create the formatter using the function to_percent. This multiplies all the
# default labels by 100, making them all percentages
formatter = FuncFormatter(to_percent)
# ax.yaxis.set_major_formatter(formatter)
hide_code_in_slideshow()
x = np.linspace(1, 1000, 5000)
properties = {
(0, 0): {'C': -1},
(1, 0): {'C': 2},
(0, 1): {'C': 0.3},
(1, 1): {'C': -0.6},
}
fig, axarr = plt.subplots(2, 2, figsize=(14,8));
for key, property in properties.items():
axarr[key].set_ylim(0, 0.5)
axarr[key].yaxis.set_major_formatter(formatter)
C = property['C']
baz = qexp(C)
handle, = axarr[key].plot(x, np.abs(x ** C - baz(x))/(x ** C));
axarr[key].set_title(r'Relative error for $x^{%s}$' % C)
axarr[key].legend(
[handle],
[r'Relative error for $x^{%s}$' % C])
hide_code_in_slideshow()
%%html
<div id="meh">
<small style="font-size: 8px;">[Double Click for Code]</small>
<style>
.hide-in-slideshow-meh {
display: None ! important;
}
</style>
</div>
<script type="text/javascript">
var show_meh = function() {
var p = $("#meh");
var orig = p;
if (p.length==0) return;
while (!p.hasClass("cell")) {
p=p.parent();
if (p.prop("tagName") =="body") return;
}
var cell = p;
cell.dblclick(function() {
if (!orig.hasClass("hide-in-slideshow-meh")) {
cell.find(".input").removeClass("hide-in-slideshow-meh");
orig.addClass("hide-in-slideshow-meh");
} else {
cell.find(".input").addClass("hide-in-slideshow-meh");
orig.removeClass("hide-in-slideshow-meh");
}
});
cell.find(".input").addClass("hide-in-slideshow-meh");
}
show_meh();
</script>
<pre id="wee" class="language-c cm-s-ipython highlight">
// For x^(-0.5)
float qpow(float x) {
long i = * ( long * ) &x;
i = 0x5f400000 + -0.5 * i;
return * ( float * ) &i;
}
</pre>
<p>
<input type="text" id="pown" val="-0.5"/>
</p>
<script type="text/javascript">
require.config({
paths: {
"highlight": "https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.4.0/highlight.min",
}
});
require(["highlight"], function(hljs) {
hljs.configure({
classPrefix: 'cm-' // don't append class prefix
});
$('#wee').each(function(i, block) {
hljs.highlightBlock(block);
});
$("#pown").keyup(function() {
var C = Number($("#pown").val());
if (Number.isNaN(C)) return;
var M = Math.floor(1065353216 * (1 - C)) % 4294967296;
var sign = M >= 0 ? '' : '-';
var code = "// For x^(" + C + ")\nfloat qpow(float x) {\n long i = * ( long * ) &x;\n i = " + sign + "0x" + Math.abs(M).toString(16) + " + " + C + " * i;\n return * ( float * ) &i;\n}\n";
$("#wee").html(code);
$('#wee').each(function(i, block) {
hljs.highlightBlock(block);
});
});
});
</script>
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 1
Step2: The first two sentences have very similar content, and as such the SCM should be large. Before we compute the SCM, we want to remove stopwords ("the", "to", etc.), as these do not contribute a lot to the information in the sentences.
Step3: Now, as we mentioned earlier, we will be using some downloaded pre-trained embeddings. Note that the embeddings we have chosen here require a lot of memory. We will use the embeddings to construct a term similarity matrix that will be used by the softcossim function.
Step4: So let's compute SCM using the softcossim function.
Step5: Let's try the same thing with two completely unrelated sentences. Notice that the similarity is smaller.
Step6: Part 2
Step7: Using the corpus we have just built, we will now construct a dictionary, a TF-IDF model, a word2vec model, and a term similarity matrix.
Step8: Evaluation
Step9: Finally, we will perform an evaluation to compare three unsupervised similarity measures – the Soft Cosine Measure, two different implementations of the Word Mover's Distance, and standard cosine similarity. We will use the Mean Average Precision (MAP) as an evaluation measure and 10-fold cross-validation to get an estimate of the variance of MAP for each similarity measure.
Step10: The table below shows the pointwise estimates of means and standard variances for MAP scores and elapsed times. Baselines and winners for each year are displayed in bold. We can see that the Soft Cosine Measure gives a strong performance on both the 2016 and the 2017 dataset.
| <ASSISTANT_TASK:>
Python Code:
# Initialize logging.
import logging
logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)
sentence_obama = 'Obama speaks to the media in Illinois'.lower().split()
sentence_president = 'The president greets the press in Chicago'.lower().split()
sentence_orange = 'Oranges are my favorite fruit'.lower().split()
# Import and download stopwords from NLTK.
from nltk.corpus import stopwords
from nltk import download
download('stopwords') # Download stopwords list.
# Remove stopwords.
stop_words = stopwords.words('english')
sentence_obama = [w for w in sentence_obama if w not in stop_words]
sentence_president = [w for w in sentence_president if w not in stop_words]
sentence_orange = [w for w in sentence_orange if w not in stop_words]
# Prepare a dictionary and a corpus.
from gensim import corpora
documents = [sentence_obama, sentence_president, sentence_orange]
dictionary = corpora.Dictionary(documents)
corpus = [dictionary.doc2bow(document) for document in documents]
# Convert the sentences into bag-of-words vectors.
sentence_obama = dictionary.doc2bow(sentence_obama)
sentence_president = dictionary.doc2bow(sentence_president)
sentence_orange = dictionary.doc2bow(sentence_orange)
%%time
import gensim.downloader as api
w2v_model = api.load("glove-wiki-gigaword-50")
similarity_matrix = w2v_model.similarity_matrix(dictionary)
from gensim.matutils import softcossim
similarity = softcossim(sentence_obama, sentence_president, similarity_matrix)
print('similarity = %.4f' % similarity)
similarity = softcossim(sentence_obama, sentence_orange, similarity_matrix)
print('similarity = %.4f' % similarity)
%%time
from itertools import chain
import json
from re import sub
from os.path import isfile
import gensim.downloader as api
from gensim.utils import simple_preprocess
from nltk.corpus import stopwords
from nltk import download
download("stopwords") # Download stopwords list.
stopwords = set(stopwords.words("english"))
def preprocess(doc):
doc = sub(r'<img[^<>]+(>|$)', " image_token ", doc)
doc = sub(r'<[^<>]+(>|$)', " ", doc)
doc = sub(r'\[img_assist[^]]*?\]', " ", doc)
doc = sub(r'http[s]?://(?:[a-zA-Z]|[0-9]|[$-_@.&+]|[!*\(\),]|(?:%[0-9a-fA-F][0-9a-fA-F]))+', " url_token ", doc)
return [token for token in simple_preprocess(doc, min_len=0, max_len=float("inf")) if token not in stopwords]
corpus = list(chain(*[
chain(
[preprocess(thread["RelQuestion"]["RelQSubject"]), preprocess(thread["RelQuestion"]["RelQBody"])],
[preprocess(relcomment["RelCText"]) for relcomment in thread["RelComments"]])
for thread in api.load("semeval-2016-2017-task3-subtaskA-unannotated")]))
print("Number of documents: %d" % len(documents))
%%time
from gensim.corpora import Dictionary
from gensim.models import TfidfModel
from gensim.models import Word2Vec
from multiprocessing import cpu_count
dictionary = Dictionary(corpus)
tfidf = TfidfModel(dictionary=dictionary)
w2v_model = Word2Vec(corpus, workers=cpu_count(), min_count=5, size=300, seed=12345)
similarity_matrix = w2v_model.wv.similarity_matrix(dictionary, tfidf, nonzero_limit=100)
print("Number of unique words: %d" % len(dictionary))
datasets = api.load("semeval-2016-2017-task3-subtaskBC")
from math import isnan
from time import time
from gensim.similarities import MatrixSimilarity, WmdSimilarity, SoftCosineSimilarity
import numpy as np
from sklearn.model_selection import KFold
from wmd import WMD
def produce_test_data(dataset):
for orgquestion in datasets[dataset]:
query = preprocess(orgquestion["OrgQSubject"]) + preprocess(orgquestion["OrgQBody"])
documents = [
preprocess(thread["RelQuestion"]["RelQSubject"]) + preprocess(thread["RelQuestion"]["RelQBody"])
for thread in orgquestion["Threads"]]
relevance = [
thread["RelQuestion"]["RELQ_RELEVANCE2ORGQ"] in ("PerfectMatch", "Relevant")
for thread in orgquestion["Threads"]]
yield query, documents, relevance
def cossim(query, documents):
# Compute cosine similarity between the query and the documents.
query = tfidf[dictionary.doc2bow(query)]
index = MatrixSimilarity(
tfidf[[dictionary.doc2bow(document) for document in documents]],
num_features=len(dictionary))
similarities = index[query]
return similarities
def softcossim(query, documents):
# Compute Soft Cosine Measure between the query and the documents.
query = tfidf[dictionary.doc2bow(query)]
index = SoftCosineSimilarity(
tfidf[[dictionary.doc2bow(document) for document in documents]],
similarity_matrix)
similarities = index[query]
return similarities
def wmd_gensim(query, documents):
# Compute Word Mover's Distance as implemented in PyEMD by William Mayner
# between the query and the documents.
index = WmdSimilarity(documents, w2v_model)
similarities = index[query]
return similarities
def wmd_relax(query, documents):
# Compute Word Mover's Distance as implemented in WMD by Source{d}
# between the query and the documents.
words = [word for word in set(chain(query, *documents)) if word in w2v_model.wv]
indices, words = zip(*sorted((
(index, word) for (index, _), word in zip(dictionary.doc2bow(words), words))))
query = dict(tfidf[dictionary.doc2bow(query)])
query = [
(new_index, query[dict_index])
for new_index, dict_index in enumerate(indices)
if dict_index in query]
documents = [dict(tfidf[dictionary.doc2bow(document)]) for document in documents]
documents = [[
(new_index, document[dict_index])
for new_index, dict_index in enumerate(indices)
if dict_index in document] for document in documents]
embeddings = np.array([w2v_model.wv[word] for word in words], dtype=np.float32)
nbow = dict(((index, list(chain([None], zip(*document)))) for index, document in enumerate(documents)))
nbow["query"] = (None, *zip(*query))
distances = WMD(embeddings, nbow, vocabulary_min=1).nearest_neighbors("query")
similarities = [-distance for _, distance in sorted(distances)]
return similarities
strategies = {
"cossim" : cossim,
"softcossim": softcossim,
"wmd-gensim": wmd_gensim,
"wmd-relax": wmd_relax}
def evaluate(split, strategy):
# Perform a single round of evaluation.
results = []
start_time = time()
for query, documents, relevance in split:
similarities = strategies[strategy](query, documents)
assert len(similarities) == len(documents)
precision = [
(num_correct + 1) / (num_total + 1) for num_correct, num_total in enumerate(
num_total for num_total, (_, relevant) in enumerate(
sorted(zip(similarities, relevance), reverse=True)) if relevant)]
average_precision = np.mean(precision) if precision else 0.0
results.append(average_precision)
return (np.mean(results) * 100, time() - start_time)
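# Worked mini-example (a sketch) of the average-precision formula used above:
# with relevance flags [True, False, True] the relevant documents sit at ranks
# 1 and 3, so AP = mean(1/1, 2/3), about 0.8333.
toy_relevance = [True, False, True]
toy_precision = [
    (num_correct + 1) / (num_total + 1) for num_correct, num_total in enumerate(
        rank for rank, relevant in enumerate(toy_relevance) if relevant)]
print("toy average precision = %.4f" % np.mean(toy_precision))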
def crossvalidate(args):
# Perform a cross-validation.
dataset, strategy = args
test_data = np.array(list(produce_test_data(dataset)))
kf = KFold(n_splits=10)
samples = []
for _, test_index in kf.split(test_data):
samples.append(evaluate(test_data[test_index], strategy))
return (np.mean(samples, axis=0), np.std(samples, axis=0))
%%time
from multiprocessing import Pool
args_list = [
(dataset, technique)
for dataset in ("2016-test", "2017-test")
for technique in ("softcossim", "wmd-gensim", "wmd-relax", "cossim")]
with Pool() as pool:
results = pool.map(crossvalidate, args_list)
from IPython.display import display, Markdown
output = []
baselines = [
(("2016-test", "**Winner (UH-PRHLT-primary)**"), ((76.70, 0), (0, 0))),
(("2016-test", "**Baseline 1 (IR)**"), ((74.75, 0), (0, 0))),
(("2016-test", "**Baseline 2 (random)**"), ((46.98, 0), (0, 0))),
(("2017-test", "**Winner (SimBow-primary)**"), ((47.22, 0), (0, 0))),
(("2017-test", "**Baseline 1 (IR)**"), ((41.85, 0), (0, 0))),
(("2017-test", "**Baseline 2 (random)**"), ((29.81, 0), (0, 0)))]
table_header = ["Dataset | Strategy | MAP score | Elapsed time (sec)", ":---|:---|:---|---:"]
for row, ((dataset, technique), ((mean_map_score, mean_duration), (std_map_score, std_duration))) \
in enumerate(sorted(chain(zip(args_list, results), baselines), key=lambda x: (x[0][0], -x[1][0][0]))):
if row % (len(strategies) + 3) == 0:
output.extend(chain(["\n"], table_header))
map_score = "%.02f ±%.02f" % (mean_map_score, std_map_score)
duration = "%.02f ±%.02f" % (mean_duration, std_duration) if mean_duration else ""
output.append("%s|%s|%s|%s" % (dataset, technique, map_score, duration))
display(Markdown('\n'.join(output)))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating an ndarray
Step2: The first things to check on an ndarray: shape and dtype
Step3: Sometimes it helps to look at a plot
Step4: There are many other ways to create arrays
Step5: Here is a pile of data
Step6: Q0
Step7: Q1
Step8: Indexing
Step9: Q2
Step10: ndarrays can do this too
Step11: Q3
Step12: Try out the results below
Step13: Q4
Step14: Practice with images
Step15: Q
Step16: Q
Step17: Other uses of indexing
Step18: Reshaping
Step19: Stacking arrays together
Step20: Functions that act on a whole array or along an axis
Step21: Using an operation in more than one way: horizontal averages combined with vertical averages
Step22: Tensor multiplication
Step23: Matrix multiplication
| <ASSISTANT_TASK:>
Python Code:
# The usual opening move
import numpy as np
np.array([1,2,3,4])
x = _
y = np.array([[1.,2,3],[4,5,6]])
y
x.shape
y.shape
x.dtype
y.dtype
# import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
# Draw a plot
plt.plot(x, 'x');
# Create arrays of zeros
np.zeros_like(y)
np.zeros((10,10))
# Works much like range
x = np.arange(0, 10, 0.1)
# Random numbers
y = np.random.uniform(-1,1, size=x.shape)
plt.plot(x, y)
x = np.linspace(0, 2* np.pi, 1000)
plt.plot(x, np.sin(x))
# You can run the reference example with %run -i
%run -i q0.py
# Or take a look at the reference example
#%load q0.py
# Reference answer
#%load q1.py
a = np.arange(30)
a
a[5]
a[3:7]
# List all the odd entries
a[1::2]
# Slicing can also be used to assign values
a[1::2] = -1
a
# Or equivalently
a[1::2] = -a[::2]-1
a
%run -i q2.py
#%load q2.py
b = np.array([[1,2,3], [4,5,6], [7,8,9]])
b
b[1][2]
b[1,2]
b[1]
b = np.random.randint(0,99, size=(5,10))
b
b[[1,3]]
b[(1,3)]
b[[1,2], [3,4]]
b[[(1,2),(3,4)]]
b[[True, False, False, True, False]]
# Reference example
%run -i q4.py
# Remember this from earlier
from PIL import Image
img = Image.open('img/Green-Rolling-Hills-Landscape-800px.png')
img_array = np.array(img)
Image.fromarray(img_array)
# A helper function for displaying images
from IPython.display import display
def show(img_array):
display(Image.fromarray(img_array))
# Shrink the image to half its size
%run -i q_half.py
# Enlarge the image
%run -i q_scale2.py
# Flip the image upside down
show(img_array[::-1])
%run -i q_paste.py
%run -i q_grayscale.py
# Draw a circle with a loop
%run -i q_slow_circle.py
# Draw a circle with fancy indexing
%run -i q_fast_circle.py
# We can also blur the image
a = img_array.astype(float)
for i in range(10):
a[1:,1:] = (a[1:,1:]+a[:-1,1:]+a[1:,:-1]+a[:-1,:-1])/4
show(a.astype('uint8'))
# Find the edges
a = img_array.astype(float)
a = a @ [0.299, 0.587, 0.114, 0]
a = np.abs((a[1:]-a[:-1]))*2
show(a.astype('uint8'))
# An application of reshaping
R,G,B,A = img_array.reshape(-1,4).T
plt.hist((R,G,B,A), color="rgby");
# Example
show(np.hstack([img_array, img_array2]))
# Example
np.concatenate([img_array, img_array2], axis=2).shape
np.max([1,2,3,4])
np.sum([1,2,3,4])
np.mean([1,2,3,4])
np.min([1,2,3,4])
x_mean = img_array.astype(float).min(axis=0, keepdims=True)
print(x_mean.dtype, x_mean.shape)
y_mean = img_array.astype(float).min(axis=1, keepdims=True)
print(y_mean.dtype, y_mean.shape)
# Automatic broadcasting
xy_combined = ((x_mean+y_mean)/2).astype('uint8')
show(xy_combined)
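# Note (a sketch): the cell above works because of broadcasting -- an array of
# shape (1, W, C) and an array of shape (H, 1, C) broadcast to a full
# (H, W, C) image when added together.
print(x_mean.shape, y_mean.shape, (x_mean + y_mean).shape)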
# = 1*4 + 2*5 + 3*6 = 32
np.dot([1,2,3], [4,5,6])
u=np.array([1,2,3])
v=np.array([4,5,6])
print( u@v )
print( (u*v).sum() )
A=np.random.randint(0,10, size=(5,3))
A
B=np.random.randint(0,10, size=(3,7))
B
A.dot(B)
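# Quick check (a sketch): the @ operator and .dot() give the same matrix product.
print(np.allclose(A @ B, A.dot(B)))
print((A @ B).shape)  # (5, 3) @ (3, 7) -> (5, 7)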
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 1
Step2: Notes
| <ASSISTANT_TASK:>
Python Code:
from collections import defaultdict, Counter
from itertools import zip_longest
import json
import os
import re
import sys
import urllib
import numpy as np
import requests
from eva_cttv_pipeline.clinvar_xml_io.clinvar_xml_io import *
%matplotlib inline
import matplotlib.pyplot as plt
from eva_cttv_pipeline.clinvar_xml_io.clinvar_xml_io.hgvs_variant import HgvsVariant
PROJECT_ROOT = '/home/april/projects/opentargets/'
clinvar_path = os.path.join(PROJECT_ROOT, 'ClinVarFullRelease_00-latest.xml.gz')
# clinvar_path = os.path.join(PROJECT_ROOT, 'clinvar-sample.xml.gz')
dataset = ClinVarDataset(clinvar_path)
def vcv(record):
if record.measure:
return find_mandatory_unique_element(record.rcv, './MeasureSet').attrib['Acc']
limit = None
# for each HGVS that structural variants pipeline would process, how many rcvs/vcvs are associated?
# and would any of them potentially get annotated by the simple VEP pipeline? (assuming repeats override complex)
complex_hgvs_to_complex_rcv = defaultdict(list)
complex_hgvs_to_complex_vcv = defaultdict(list)
complex_hgvs_to_other_rcv = defaultdict(list)
complex_hgvs_to_other_vcv = defaultdict(list)
i = 0
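# NOTE (assumption): `pipeline` used below is not imported in the cells shown
# here; it is taken to be the structural variants pipeline module (imported
# further down as `structural_pipeline`) or an equivalent object from an
# earlier cell of the original notebook.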
for r in dataset:
if pipeline.can_process(r):
complex_hgvs = [h for h in r.measure.current_hgvs if h is not None]
for h in complex_hgvs:
complex_hgvs_to_complex_rcv[h].append(r.accession)
complex_hgvs_to_complex_vcv[h].append(vcv(r))
else:
if r.measure and r.measure.current_hgvs:
other_hgvs = [h for h in r.measure.current_hgvs if h is not None]
for h in other_hgvs:
if h in complex_hgvs_to_complex_rcv:
complex_hgvs_to_other_rcv[h].append(r.accession)
complex_hgvs_to_other_vcv[h].append(vcv(r))
i += 1
if limit and i > limit:
break
from eva_cttv_pipeline.clinvar_xml_io.clinvar_xml_io.hgvs_variant import SequenceType
problem_rcvs = []
for h in complex_hgvs_to_other_rcv.keys():
if HgvsVariant(h).sequence_type == SequenceType.GENOMIC:
problem_rcvs.extend(complex_hgvs_to_other_rcv[h])
problem_rcvs.extend(complex_hgvs_to_complex_rcv[h])
problem_rcvs = set(problem_rcvs)
problem_rcvs # includes both complex and "other" rcvs
for r in dataset:
if r.accession in problem_rcvs:
print(r.accession)
print(vcv(r))
print(r.measure.current_hgvs)
print(r.measure.vcf_full_coords)
print('\n=========\n')
for h, vcvs in complex_hgvs_to_complex_vcv.items():
num_vcvs = len(set(vcvs))
if num_vcvs > 1 and HgvsVariant(h).sequence_type == SequenceType.GENOMIC:
print(h)
print(set(vcvs))
print('\n========\n')
# for two sets of HGVS identifiers associated with two different VCVs, what's the intersection & set difference?
with_coordinates = {'NM_000080.4:c.1327delG', 'LRG_1254t1:c.1327del', 'LRG_1254:g.9185del', 'NG_028005.1:g.70553del', 'NG_008029.2:g.9185del', 'NC_000017.11:g.4898892del', 'NC_000017.10:g.4802186del', None, None, 'p.Glu443Lysfs*64', 'NP_000071.1:p.Glu443LysfsTer64'}
no_coordinates = {'LRG_1254t1:c.1327del', 'NM_000080.4:c.1327del', 'LRG_1254:g.9185del', 'NG_028005.1:g.70553del', 'NG_008029.2:g.9185del', 'NC_000017.11:g.4898892del', None, 'LRG_1254p1:p.Glu443fs', 'NP_000071.1:p.Glu443fs'}
with_coordinates & no_coordinates
with_coordinates - no_coordinates
no_coordinates - with_coordinates
import pandas as pd
pd.set_option('display.max_colwidth', None)
from consequence_prediction.structural_variants import pipeline as structural_pipeline
from consequence_prediction.vep_mapping_pipeline.consequence_mapping import colon_based_id_to_vep_id, process_variants
problem_path = os.path.join(PROJECT_ROOT, 'complex-events/rcvs_sharing_hgvs.xml.gz')
problem_dataset = ClinVarDataset(problem_path)
# convert VEP pipeline to be more usable...
IUPAC_AMBIGUOUS_SEQUENCE = re.compile(r'[^ACGT]')
def vep_pipeline_main(clinvar_xml):
variants = []
for clinvar_record in ClinVarDataset(clinvar_xml):
if clinvar_record.measure is None or not clinvar_record.measure.has_complete_coordinates:
continue
m = clinvar_record.measure
if IUPAC_AMBIGUOUS_SEQUENCE.search(m.vcf_ref + m.vcf_alt):
continue
variants.append(f'{m.chr}:{m.vcf_pos}:{m.vcf_ref}:{m.vcf_alt}')
variants_to_query = [colon_based_id_to_vep_id(v) for v in variants]
variant_results = process_variants(variants_to_query)
variant_data = []
for variant_id, gene_id, gene_symbol, consequence_term, distance in variant_results:
variant_data.append((variant_id, '1', gene_id, gene_symbol, consequence_term, distance))
consequences = pd.DataFrame(variant_data, columns=('VariantID', 'PlaceholderOnes', 'EnsemblGeneID',
'EnsemblGeneName', 'ConsequenceTerm', 'Distance'))
return consequences
vep_consequences = vep_pipeline_main(problem_path)
vep_consequences
struct_consequences = structural_pipeline.main(problem_path)
# haven't implemented the single base deletion case as it's not a range, but I think we'd get the following
# https://rest.ensembl.org/vep/human/region/NC_000017.11:4898892-4898892:1/DEL?content-type=application/json
struct_consequences = struct_consequences.append(
pd.DataFrame(
[['NC_000017.11 4898892 4898892 DEL +', 1, 'ENSG00000108556', 'CHRNE', 'frameshift_variant', 0]],
columns=('VariantID', 'PlaceholderOnes', 'EnsemblGeneID', 'EnsemblGeneName', 'ConsequenceTerm', 'Distance')
)
)
struct_consequences
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Toy Network
Step2: Centrality
Step3: Eigenvector Centrality
Step4: Betweenness Centrality
Step5: Centrality Measures Are Different
Step6: Transitivity
Step7: Measure Transitivity
Step8: Clustering Coefficient
Step9: Community Detection
Step10: Real Network
Step11: Subset the Data
Step12: Subset the Data
Step13: Create the network (two ways)
Step14: Set edge weights for Network Object
Step15: Thresholding
Step16: Look at the network
Step17: Take out the singletons to get a clearer picture
Step18: Look at the degree distribution
Step19: Look at party in the network
Step20: Prepare the Visualization
Step21: Visualize the network by party
Step22: Do it again with a lower threshold
Step23: Modularity
Step24: Visualize the Communities
Step25: How did we do?
Step26: Pretty, but now what?
Step27: Merge in some network data
Step28: Degree is not significant
Step29: Betweenness is!
Step40: Questions?
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import networkx as nx
import numpy as np
import scipy as sp
import itertools
import matplotlib.pyplot as plt
import statsmodels.api as sm
%matplotlib inline
G = nx.Graph()
G.add_nodes_from(['A','B','C','D','E','F','G'])
G.add_edges_from([('A','B'),('A','C'),
('A','D'),('A','F'),
('B','E'),('C','E'),
('F','G')])
nx.draw_networkx(G, with_labels=True)
deg = nx.degree_centrality(G)
print(deg)
eig_c = nx.eigenvector_centrality_numpy(G)
toy_adj = nx.adjacency_matrix(G)
print(eig_c)
val,vec = np.linalg.eig(toy_adj.toarray())
print(val)
vec[:,0]
betw = nx.betweenness_centrality(G)
print(betw)
cent_scores = pd.DataFrame({'deg':deg,'eig_c':eig_c,'betw':betw})
print(cent_scores.corr())
cent_scores
G_trans = G.copy()
G_trans.add_edge('A','E')
G_trans.add_edge('F','D')
nx.draw_networkx(G_trans, with_labels=True)
print("Transitivity:")
print(nx.transitivity(G))
print(nx.transitivity(G_trans))
print("Triangles:")
print(nx.triangles(G))
print(nx.triangles(G_trans))
print("Clustering coefficient")
print(nx.clustering(G))
print(nx.clustering(G_trans))
print("Average Clustering")
print(nx.average_clustering(G))
print(nx.average_clustering(G_trans))
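# Hand check (a sketch): the local clustering coefficient of node A is the
# number of edges among A's neighbours divided by the number of possible
# neighbour pairs.
neighbours = list(G_trans.neighbors('A'))
links = sum(1 for u, v in itertools.combinations(neighbours, 2) if G_trans.has_edge(u, v))
possible = len(neighbours) * (len(neighbours) - 1) / 2
print("C(A) by hand = %.3f vs networkx = %.3f" % (links / possible, nx.clustering(G_trans, 'A')))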
coms = nx.algorithms.community.centrality.girvan_newman(G)
i = 2
for com in itertools.islice(coms,4):
print(i, ' communities')
i+=1
print(tuple(c for c in com))
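# Optional check (a sketch): score a candidate two-community split with
# networkx's modularity function; higher values indicate stronger community structure.
from networkx.algorithms.community.quality import modularity
print("Modularity of the two-community split: %.3f" %
      modularity(G, [{'A', 'B', 'C', 'D', 'E'}, {'F', 'G'}]))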
edges = []
with open('cosponsors.txt') as d:
for line in d:
edges.append(line.split())
dates = pd.read_csv('Dates.txt',sep='-',header=None)
dates.columns = ['year','month','day']
index_loc = np.where(dates.year==2004)
edges_04 = [edges[i] for i in index_loc[0]]
# Get nodes
senate = pd.read_csv('senate.csv')
senators = senate.loc[senate.congress==108,['id','party']]
# Creae adjacency matrix
adj_mat = np.zeros([len(senators),len(senators)])
senators = pd.DataFrame(senators)
senators['adj_ind']=range(len(senators))
# Create Graph Object
senateG= nx.Graph()
senateG.add_nodes_from(senators.id)
party_dict = dict(zip(senators.id,senators.party))
nx.set_node_attributes(senateG, name='party',values=party_dict)
for bill in edges_04:
if bill[0] == "NA": continue
bill = [int(i) for i in bill]
if bill[0] not in list(senators.id): continue
combos = list(itertools.combinations(bill,2))
senateG.add_edges_from(combos)
for pair in combos:
i = senators.loc[senators.id == int(pair[0]), 'adj_ind']
j = senators.loc[senators.id == int(pair[1]), 'adj_ind']
adj_mat[i,j]+=1
adj_mat[j,i]+=1
for row in range(len(adj_mat)):
cols = np.where(adj_mat[row,:])[0]
i = senators.loc[senators.adj_ind==row,'id']
i = int(i)
for col in cols:
j = senators.loc[senators.adj_ind==col,'id']
j = int(j)
senateG[i][j]['bills']=adj_mat[row,col]
bill_dict = nx.get_edge_attributes(senateG,'bills')
elarge=[(i,j) for (i,j) in bill_dict if bill_dict[(i,j)] >40]
nx.draw_spring(senateG, edgelist = elarge,with_labels=True)
senateGt= nx.Graph()
senateGt.add_nodes_from(senateG.nodes)
senateGt.add_edges_from(elarge)
deg = senateGt.degree()
rem = [n[0] for n in deg if n[1]==0]
senateGt_all = senateGt.copy()
senateGt.remove_nodes_from(rem)
nx.draw_spring(senateGt,with_labels=True)
foo=pd.DataFrame({'tup':deg})
deg = senateGt.degree()
foo = pd.DataFrame(foo)
foo[['grp','deg']]=foo['tup'].apply(pd.Series)
foo.deg.plot.hist()
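# A few summary numbers (a sketch) for the thresholded network.
print("Nodes:", senateGt.number_of_nodes())
print("Edges:", senateGt.number_of_edges())
print("Density: %.3f" % nx.density(senateGt))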
party = nx.get_node_attributes(senateG,'party')
dems = []
gop = []
for i in party:
if party[i]==100: dems.append(i)
else: gop.append(i)
pos = nx.spring_layout(senateGt)
pos_all = nx.circular_layout(senateG)
dem_dict={}
gop_dict={}
dem_lone = {}
gop_lone= {}
for n in dems:
if n in rem: dem_lone[n]=pos_all[n]
else:dem_dict[n] = pos[n]
for n in gop:
if n in rem: gop_lone[n]=pos_all[n]
else:gop_dict[n] = pos[n]
dems = list(set(dems)-set(rem))
gop = list(set(gop)-set(rem))
nx.draw_networkx_nodes(senateGt, pos=dem_dict, nodelist = dems,node_color='b',node_size = 100)
nx.draw_networkx_nodes(senateGt, pos=gop_dict, nodelist = gop,node_color='r', node_size = 100)
nx.draw_networkx_nodes(senateG, pos=dem_lone, nodelist = list(dem_lone.keys()),node_color='b',node_size = 200)
nx.draw_networkx_nodes(senateG, pos=gop_lone, nodelist = list(gop_lone.keys()),node_color='r', node_size = 200)
nx.draw_networkx_edges(senateGt,pos=pos, edgelist=elarge)
dems = list(set(dems)-set(rem))
gop = list(set(gop)-set(rem))
nx.draw_networkx_nodes(senateGt, pos=dem_dict, nodelist = dems,node_color='b',node_size = 100)
nx.draw_networkx_nodes(senateGt, pos=gop_dict, nodelist = gop,node_color='r', node_size = 100)
nx.draw_networkx_nodes(senateGt_all, pos=dem_lone, nodelist = list(dem_lone.keys()),node_color='b',node_size = 100)
nx.draw_networkx_nodes(senateGt_all, pos=gop_lone, nodelist = list(gop_lone.keys()),node_color='r', node_size = 100)
nx.draw_networkx_edges(senateGt,pos=pos, edgelist=elarge)
colors = greedy_modularity_communities(senateGt, weight = 'bills')
pos = nx.spring_layout(senateGt)
pos0={}
pos1={}
for n in colors[0]:
pos0[n] = pos[n]
for n in colors[1]:
pos1[n] = pos[n]
nx.draw_networkx_nodes(senateGt, pos=pos0, nodelist = colors[0],node_color='r')
nx.draw_networkx_nodes(senateGt, pos=pos1, nodelist = colors[1],node_color='b')
nx.draw_networkx_edges(senateGt,pos=pos, edgelist=elarge)
print('gop misclassification')
for i in colors[1]:
if i in dems: print(i,len(senateGt[i]))
print('dem misclassification')
for i in colors[0]:
if i in gop: print(i,len(senateGt[i]))
sh = pd.read_csv('SH.tab',sep='\t')
sh['dem']= sh.party==100
sh['dem']=sh.dem*1
model_data = sh.loc[
(sh.congress == 108) & (sh.chamber=='S'),
['ids','dem','pb','pa']
]
model_data['passed']=model_data.pb+model_data.pa
model_data.set_index('ids',inplace=True)
bet_cent = nx.betweenness_centrality(senateG,weight='bills')
bet_cent = pd.Series(bet_cent)
deg_cent = nx.degree_centrality(senateGt)
deg_cent = pd.Series(deg_cent)
model_data['between']=bet_cent
model_data['degree']=deg_cent
y =model_data.loc[:,'passed']
x =model_data.loc[:,['degree','dem']]
x['c'] = 1
ols_model1 = sm.OLS(y,x,missing='drop')
results = ols_model1.fit()
print(results.summary())
y =model_data.loc[:,'passed']
x =model_data.loc[:,['between','dem']]
x['c'] = 1
ols_model1 = sm.OLS(y,x,missing='drop')
results = ols_model1.fit()
print(results.summary())
# Some functions from the NetworkX package
import heapq
class MappedQueue(object):
"""The MappedQueue class implements an efficient minimum heap. The
smallest element can be popped in O(1) time, new elements can be pushed
in O(log n) time, and any element can be removed or updated in O(log n)
time. The queue cannot contain duplicate elements and an attempt to push an
element already in the queue will have no effect.
MappedQueue complements the heapq package from the python standard
library. While MappedQueue is designed for maximum compatibility with
heapq, it has slightly different functionality.
Examples
--------
A `MappedQueue` can be created empty or optionally given an array of
initial elements. Calling `push()` will add an element and calling `pop()`
will remove and return the smallest element.
>>> q = MappedQueue([916, 50, 4609, 493, 237])
>>> q.push(1310)
True
>>> x = [q.pop() for i in range(len(q.h))]
>>> x
[50, 237, 493, 916, 1310, 4609]
Elements can also be updated or removed from anywhere in the queue.
>>> q = MappedQueue([916, 50, 4609, 493, 237])
>>> q.remove(493)
>>> q.update(237, 1117)
>>> x = [q.pop() for i in range(len(q.h))]
>>> x
[50, 916, 1117, 4609]
References
----------
.. [1] Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2001).
Introduction to algorithms second edition.
.. [2] Knuth, D. E. (1997). The art of computer programming (Vol. 3).
Pearson Education.
"""
def __init__(self, data=[]):
"""Priority queue class with updatable priorities."""
self.h = list(data)
self.d = dict()
self._heapify()
def __len__(self):
return len(self.h)
def _heapify(self):
"""Restore heap invariant and recalculate map."""
heapq.heapify(self.h)
self.d = dict([(elt, pos) for pos, elt in enumerate(self.h)])
if len(self.h) != len(self.d):
raise AssertionError("Heap contains duplicate elements")
def push(self, elt):
"""Add an element to the queue."""
# If element is already in queue, do nothing
if elt in self.d:
return False
# Add element to heap and dict
pos = len(self.h)
self.h.append(elt)
self.d[elt] = pos
# Restore invariant by sifting down
self._siftdown(pos)
return True
def pop(self):
"""Remove and return the smallest element in the queue."""
# Remove smallest element
elt = self.h[0]
del self.d[elt]
# If elt is last item, remove and return
if len(self.h) == 1:
self.h.pop()
return elt
# Replace root with last element
last = self.h.pop()
self.h[0] = last
self.d[last] = 0
# Restore invariant by sifting up, then down
pos = self._siftup(0)
self._siftdown(pos)
# Return smallest element
return elt
def update(self, elt, new):
"""Replace an element in the queue with a new one."""
# Replace
pos = self.d[elt]
self.h[pos] = new
del self.d[elt]
self.d[new] = pos
# Restore invariant by sifting up, then down
pos = self._siftup(pos)
self._siftdown(pos)
def remove(self, elt):
"""Remove an element from the queue."""
# Find and remove element
try:
pos = self.d[elt]
del self.d[elt]
except KeyError:
# Not in queue
raise
# If elt is last item, remove and return
if pos == len(self.h) - 1:
self.h.pop()
return
# Replace elt with last element
last = self.h.pop()
self.h[pos] = last
self.d[last] = pos
# Restore invariant by sifting up, then down
pos = self._siftup(pos)
self._siftdown(pos)
def _siftup(self, pos):
"""Move element at pos down to a leaf by repeatedly moving the smaller
child up."""
h, d = self.h, self.d
elt = h[pos]
# Continue until element is in a leaf
end_pos = len(h)
left_pos = (pos << 1) + 1
while left_pos < end_pos:
# Left child is guaranteed to exist by loop predicate
left = h[left_pos]
try:
right_pos = left_pos + 1
right = h[right_pos]
# Out-of-place, swap with left unless right is smaller
if right < left:
h[pos], h[right_pos] = right, elt
pos, right_pos = right_pos, pos
d[elt], d[right] = pos, right_pos
else:
h[pos], h[left_pos] = left, elt
pos, left_pos = left_pos, pos
d[elt], d[left] = pos, left_pos
except IndexError:
# Left leaf is the end of the heap, swap
h[pos], h[left_pos] = left, elt
pos, left_pos = left_pos, pos
d[elt], d[left] = pos, left_pos
# Update left_pos
left_pos = (pos << 1) + 1
return pos
def _siftdown(self, pos):
"""Restore invariant by repeatedly replacing out-of-place element with
its parent."""
h, d = self.h, self.d
elt = h[pos]
# Continue until element is at root
while pos > 0:
parent_pos = (pos - 1) >> 1
parent = h[parent_pos]
if parent > elt:
# Swap out-of-place element with parent
h[parent_pos], h[pos] = elt, parent
parent_pos, pos = pos, parent_pos
d[elt] = pos
d[parent] = parent_pos
else:
# Invariant is satisfied
break
return pos
from __future__ import division
import networkx as nx
from networkx.algorithms.community.quality import modularity
def greedy_modularity_communities(G, weight=None):
"""Find communities in graph using Clauset-Newman-Moore greedy modularity
maximization. This method currently supports the Graph class and does not
consider edge weights.
Greedy modularity maximization begins with each node in its own community
and joins the pair of communities that most increases modularity until no
such pair exists.
Parameters
----------
G : NetworkX graph
Returns
-------
Yields sets of nodes, one for each community.
Examples
--------
>>> from networkx.algorithms.community import greedy_modularity_communities
>>> G = nx.karate_club_graph()
>>> c = list(greedy_modularity_communities(G))
>>> sorted(c[0])
[8, 14, 15, 18, 20, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33]
References
----------
.. [1] M. E. J Newman 'Networks: An Introduction', page 224
Oxford University Press 2011.
.. [2] Clauset, A., Newman, M. E., & Moore, C.
"Finding community structure in very large networks."
Physical Review E 70(6), 2004.
"""
# Count nodes and edges
N = len(G.nodes())
m = sum([d.get('weight', 1) for u, v, d in G.edges(data=True)])
q0 = 1.0 / (2.0*m)
# Map node labels to contiguous integers
label_for_node = dict((i, v) for i, v in enumerate(G.nodes()))
node_for_label = dict((label_for_node[i], i) for i in range(N))
# Calculate degrees
k_for_label = G.degree(G.nodes(), weight=weight)
k = [k_for_label[label_for_node[i]] for i in range(N)]
# Initialize community and merge lists
communities = dict((i, frozenset([i])) for i in range(N))
merges = []
# Initial modularity
partition = [[label_for_node[x] for x in c] for c in communities.values()]
q_cnm = modularity(G, partition)
# Initialize data structures
# CNM Eq 8-9 (Eq 8 was missing a factor of 2 (from A_ij + A_ji)
# a[i]: fraction of edges within community i
# dq_dict[i][j]: dQ for merging community i, j
# dq_heap[i][n] : (-dq, i, j) for communitiy i nth largest dQ
# H[n]: (-dq, i, j) for community with nth largest max_j(dQ_ij)
a = [k[i]*q0 for i in range(N)]
dq_dict = dict(
(i, dict(
(j, 2*q0 - 2*k[i]*k[j]*q0*q0)
for j in [
node_for_label[u]
for u in G.neighbors(label_for_node[i])]
if j != i))
for i in range(N))
dq_heap = [
MappedQueue([
(-dq, i, j)
for j, dq in dq_dict[i].items()])
for i in range(N)]
H = MappedQueue([
dq_heap[i].h[0]
for i in range(N)
if len(dq_heap[i]) > 0])
# Merge communities until we can't improve modularity
while len(H) > 1:
# Find best merge
# Remove from heap of row maxes
# Ties will be broken by choosing the pair with lowest min community id
try:
dq, i, j = H.pop()
except IndexError:
break
dq = -dq
# Remove best merge from row i heap
dq_heap[i].pop()
# Push new row max onto H
if len(dq_heap[i]) > 0:
H.push(dq_heap[i].h[0])
# If this element was also at the root of row j, we need to remove the
# dupliate entry from H
if dq_heap[j].h[0] == (-dq, j, i):
H.remove((-dq, j, i))
# Remove best merge from row j heap
dq_heap[j].remove((-dq, j, i))
# Push new row max onto H
if len(dq_heap[j]) > 0:
H.push(dq_heap[j].h[0])
else:
# Duplicate wasn't in H, just remove from row j heap
dq_heap[j].remove((-dq, j, i))
# Stop when change is non-positive
if dq <= 0:
break
# Perform merge
communities[j] = frozenset(communities[i] | communities[j])
del communities[i]
merges.append((i, j, dq))
# New modularity
q_cnm += dq
# Get list of communities connected to merged communities
i_set = set(dq_dict[i].keys())
j_set = set(dq_dict[j].keys())
all_set = (i_set | j_set) - set([i, j])
both_set = i_set & j_set
# Merge i into j and update dQ
for k in all_set:
# Calculate new dq value
if k in both_set:
dq_jk = dq_dict[j][k] + dq_dict[i][k]
elif k in j_set:
dq_jk = dq_dict[j][k] - 2.0*a[i]*a[k]
else:
# k in i_set
dq_jk = dq_dict[i][k] - 2.0*a[j]*a[k]
# Update rows j and k
for row, col in [(j, k), (k, j)]:
# Save old value for finding heap index
if k in j_set:
d_old = (-dq_dict[row][col], row, col)
else:
d_old = None
# Update dict for j,k only (i is removed below)
dq_dict[row][col] = dq_jk
# Save old max of per-row heap
if len(dq_heap[row]) > 0:
d_oldmax = dq_heap[row].h[0]
else:
d_oldmax = None
# Add/update heaps
d = (-dq_jk, row, col)
if d_old is None:
# We're creating a new nonzero element, add to heap
dq_heap[row].push(d)
else:
# Update existing element in per-row heap
dq_heap[row].update(d_old, d)
# Update heap of row maxes if necessary
if d_oldmax is None:
# No entries previously in this row, push new max
H.push(d)
else:
# We've updated an entry in this row, has the max changed?
if dq_heap[row].h[0] != d_oldmax:
H.update(d_oldmax, dq_heap[row].h[0])
# Remove row/col i from matrix
i_neighbors = dq_dict[i].keys()
for k in i_neighbors:
# Remove from dict
dq_old = dq_dict[k][i]
del dq_dict[k][i]
# Remove from heaps if we haven't already
if k != j:
# Remove both row and column
for row, col in [(k, i), (i, k)]:
# Check if replaced dq is row max
d_old = (-dq_old, row, col)
if dq_heap[row].h[0] == d_old:
# Update per-row heap and heap of row maxes
dq_heap[row].remove(d_old)
H.remove(d_old)
# Update row max
if len(dq_heap[row]) > 0:
H.push(dq_heap[row].h[0])
else:
# Only update per-row heap
dq_heap[row].remove(d_old)
del dq_dict[i]
# Mark row i as deleted, but keep placeholder
dq_heap[i] = MappedQueue()
# Merge i into j and update a
a[j] += a[i]
a[i] = 0
communities = [
frozenset([label_for_node[i] for i in c])
for c in communities.values()]
return sorted(communities, key=len, reverse=True)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simulating A/B testing to build intuition
Step2: Each test is like flipping a fair coin N times
Step3: Run the cell above a few times.
Step4: Rewards of the actions might be different
Step5: Add treatment B to make it more interesting
Step6: Simulating multiple arms with different payoffs
Step7: Why worry about the total reward? I thought we wanted to know if A > B?
Step8: Now is the time to jump to your power and significance testing expertise.
Step9: More Arms
Step10: Quick aside
Step11: 2. Practicalities of testing and operation
Step12: 3. Optimizing outcomes with multiple options with different payoffs
Step13: Annealing Softmax
Step14: UCB2
| <ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image(filename='img/treat_aud_reward.jpg')
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd
from numpy.random import binomial
from ggplot import *
import random
import sys
plt.figure(figsize=(6,6),dpi=80);
%matplotlib inline
Image(filename='img/ab.jpg')
Image(filename='img/a.jpg')
# This is A/ testing!
# This is the result of 1 arm, 100 trials
df = pd.DataFrame({"coin_toss":binomial(1,0.5,100)})
df.hist()
# Everyone got the same treatment, this is the distribution of the outcome
# reward is the total height of the right-hand bar
plt.show()
# every sample is 0/1, heads or tails
df.head()
# now with a high probability of heads
df = pd.DataFrame({"coin_toss":binomial(1,0.6,100)})
df.hist()
plt.show()
# Compare the variability across many different experiments
# of 100 flips each (variability of the mean)
df = pd.DataFrame({"coin_%i"%i:binomial(1,0.5,100) for i in range(20)})
df.hist()
plt.show()
# Can we distinguish a small differce in probability?
df = pd.DataFrame({"coin_%i"%i:binomial(1,0.52,100) for i in range(20)})
df.hist()
plt.show()
# 1 arm
payoff = [-0.1,0.5]
a = np.bincount(binomial(1,0.5,100))
print "Number of 0s and 1s:", a
print "Total reward with pay off specified =", np.dot(a, payoff)
# 2-arm, equal unity reward per coin
# (4 outcomes but 1,0=0,1 with this payoff vector)
payoff = [0,1,2]
a = np.bincount(binomial(2,0.5,100))
print a
print np.dot(a, payoff)
payoff1=[0,1]
reward1 = np.dot(np.bincount(binomial(1,0.5,100)), payoff1)
print "Arm A reward = ", reward1
payoff2=[0,1.05]
reward2 = np.dot(np.bincount(binomial(1,0.5,100)), payoff2)
print "Arm B reward = ", reward2
total_reward = reward1 + reward2
print "Total reward for arms A and B = ", total_reward
def a_b_test(one_payoff=[1, 1.01]):
# assume payoff for outcome 0 is 0
reward1 = np.bincount(binomial(1,0.5,100))[1] * one_payoff[0]
reward2 = np.bincount(binomial(1,0.5,100))[1] * one_payoff[1]
return reward1, reward2, reward1 + reward2, reward1-reward2
n_tests = 1000
sim = np.array([a_b_test() for i in range(n_tests)])
df = pd.DataFrame(sim, columns=["t1", "t2", "tot", "diff"])
print "Number of tests in which Arm B won (expect > {} because of payoff) = {}".format(
n_tests/2
, len(df[df["diff"] <= 0.0]))
df.hist()
plt.show()
def a_b_test(ps=[0.5, 0.51], one_payoff=[1, 1]):
reward1 = np.bincount(binomial(1,ps[0],100))[1] * one_payoff[0]
reward2 = np.bincount(binomial(1,ps[1],100))[1] * one_payoff[1]
return reward1, reward2, reward1 + reward2, reward1-reward2
n_tests= 100
sim = np.array([a_b_test() for i in range(n_tests)])
df = pd.DataFrame(sim, columns=["t1", "t2", "tot", "diff"])
print "Number of tests in which Arm B won (expect > {} because of probability) = {}".format(
n_tests/2
, len(df[df["diff"] <= 0.0]))
df.hist()
plt.show()
Image(filename='img/abcd.jpg')
# repeating what did before with equal equal payoff, more arms
# remember the degenerate outcomes
df = pd.DataFrame({"tot_reward":binomial(2,0.5,100)})
df.hist()
plt.show()
# ok, now with 4
df = pd.DataFrame({"tot_reward":binomial(4,0.5,100)})
df.hist()
plt.show()
# a little more practice with total reward distribution
trials = 100
probabilities = [0.1, 0.1, 0.9]
reward = np.zeros(trials)
for m in probabilities:
# equal rewards of 1 or 0
reward += binomial(1,m,trials)
df = pd.DataFrame({"reward":reward, "fair__uniform_reward":binomial(3,0.5,trials)})
df.hist()
plt.show()
sys.path.append('../../BanditsBook/python')
from core import *
random.seed(1)
# Mean (arm probabilities) (Bernoulli)
means = [0.1, 0.1, 0.1, 0.1, 0.9]
# Mulitple arms!
n_arms = len(means)
random.shuffle(means)
arms = map(lambda (mu): BernoulliArm(mu), means)
print("Best arm is " + str(ind_max(means)))
t_horizon = 250
n_sims = 1000
data = []
for epsilon in [0.1, 0.2, 0.3, 0.4, 0.5]:
algo = EpsilonGreedy(epsilon, [], [])
algo.initialize(n_arms)
# results are column oriented
# simulation_num, time, chosen arm, reward, cumulative reward
results = test_algorithm(algo, arms, n_sims, t_horizon)
results.append([epsilon]*len(results[0]))
data.extend(np.array(results).T)
df = pd.DataFrame(data
, columns = ["Sim"
, "T"
, "ChosenArm"
, "Reward"
, "CumulativeReward"
, "Epsilon"])
df.head()
a=df.groupby(["Epsilon", "T"]).mean().reset_index()
a.head()
ggplot(aes(x="T",y="Reward", color="Epsilon"), data=a) + geom_line()
ggplot(aes(x="T",y="CumulativeReward", color="Epsilon"), data=a) + geom_line()
t_horizon = 250
n_sims = 1000
algo = AnnealingSoftmax([], [])
algo.initialize(n_arms)
data = np.array(test_algorithm(algo, arms, n_sims, t_horizon)).T
df = pd.DataFrame(data)
#df.head()
df.columns = ["Sim", "T", "ChosenArm", "Reward", "CumulativeReward"]
df.head()
a=df.groupby(["T"]).mean().reset_index()
a.head()
ggplot(aes(x="T",y="Reward", color="Sim"), data=a) + geom_line()
ggplot(aes(x="T",y="CumulativeReward", color="Sim"), data=a) + geom_line()
t_horizon = 250
n_sims = 1000
data = []
for alpha in [0.1, 0.3, 0.5, 0.7, 0.9]:
algo = UCB2(alpha, [], [])
algo.initialize(n_arms)
results = test_algorithm(algo, arms, n_sims, t_horizon)
results.append([alpha]*len(results[0]))
data.extend(np.array(results).T)
df = pd.DataFrame(data, columns = ["Sim", "T", "ChosenArm", "Reward", "CumulativeReward", "Alpha"])
df.head()
a=df.groupby(["Alpha", "T"]).mean().reset_index()
a.head()
ggplot(aes(x="T",y="Reward", color="Alpha"), data=a) + geom_line()
ggplot(aes(x="T",y="CumulativeReward", color="Alpha"), data=a) + geom_line()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').
Step2: Great! We'll be using the numbers list you created above in the next few problems.
Step3: In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output
Step4: Problem set #2
Step5: Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a diameter greater than four earth radii. Expected output
Step6: In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output
Step7: Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output
Step8: EXTREME BONUS ROUND
Step9: In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.
Step10: ['Then took the other, as just as fair,',
Step11: Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.
Step12: Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint
Step13: Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.
Step14: You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.
| <ASSISTANT_TASK:>
Python Code:
numbers_str = '496,258,332,550,506,699,7,985,171,581,436,804,736,528,65,855,68,279,721,120'
number_list = numbers_str.split(",")
numbers = [int(item) for item in number_list]
max(numbers)
#len(numbers)
sorted(numbers)[10:]
from math import sqrt
squared = []
for item in numbers:
if item < 100:
numbers_squared = sqrt(item)
squared.append(numbers_squared)
squared
planets = [
{'diameter': 0.382,
'mass': 0.06,
'moons': 0,
'name': 'Mercury',
'orbital_period': 0.24,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.949,
'mass': 0.82,
'moons': 0,
'name': 'Venus',
'orbital_period': 0.62,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 1.00,
'mass': 1.00,
'moons': 1,
'name': 'Earth',
'orbital_period': 1.00,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 0.532,
'mass': 0.11,
'moons': 2,
'name': 'Mars',
'orbital_period': 1.88,
'rings': 'no',
'type': 'terrestrial'},
{'diameter': 11.209,
'mass': 317.8,
'moons': 67,
'name': 'Jupiter',
'orbital_period': 11.86,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 9.449,
'mass': 95.2,
'moons': 62,
'name': 'Saturn',
'orbital_period': 29.46,
'rings': 'yes',
'type': 'gas giant'},
{'diameter': 4.007,
'mass': 14.6,
'moons': 27,
'name': 'Uranus',
'orbital_period': 84.01,
'rings': 'yes',
'type': 'ice giant'},
{'diameter': 3.883,
'mass': 17.2,
'moons': 14,
'name': 'Neptune',
'orbital_period': 164.8,
'rings': 'yes',
'type': 'ice giant'}]
[item['name'] for item in planets if item['diameter'] > 2]
sum([item['mass'] for item in planets])
import re
planet_with_giant= [item['name'] for item in planets if re.search(r'\bgiant\b', item['type'])]
planet_with_giant
import re
poem_lines = ['Two roads diverged in a yellow wood,',
'And sorry I could not travel both',
'And be one traveler, long I stood',
'And looked down one as far as I could',
'To where it bent in the undergrowth;',
'',
'Then took the other, as just as fair,',
'And having perhaps the better claim,',
'Because it was grassy and wanted wear;',
'Though as for that the passing there',
'Had worn them really about the same,',
'',
'And both that morning equally lay',
'In leaves no step had trodden black.',
'Oh, I kept the first for another day!',
'Yet knowing how way leads on to way,',
'I doubted if I should ever come back.',
'',
'I shall be telling this with a sigh',
'Somewhere ages and ages hence:',
'Two roads diverged in a wood, and I---',
'I took the one less travelled by,',
'And that has made all the difference.']
[item for item in poem_lines if re.search(r'\b[a-zA-Z]{4}\b \b[a-zA-Z]{4}\b', item)]
[item for item in poem_lines if re.search(r'\b[a-zA-Z]{5}\b.?$',item)]
all_lines = " ".join(poem_lines)
re.findall(r'[I] (\b\w+\b)', all_lines)
entrees = [
"Yam, Rosemary and Chicken Bowl with Hot Sauce $10.95",
"Lavender and Pepperoni Sandwich $8.49",
"Water Chestnuts and Peas Power Lunch (with mayonnaise) $12.95 - v",
"Artichoke, Mustard Green and Arugula with Sesame Oil over noodles $9.95 - v",
"Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce $19.95",
"Rutabaga And Cucumber Wrap $8.49 - v"
]
menu = []
for item in entrees:
entrees_dictionary= {}
match = re.search(r'(.*) .(\d*\d\.\d{2})\ ?( - v+)?$', item)
if match:
name = match.group(1)
price= match.group(2)
if match.group(3):
entrees_dictionary['vegetarian']= True
else:
entrees_dictionary['vegetarian']= False
entrees_dictionary['name']= name
entrees_dictionary['price']= price
menu.append(entrees_dictionary)
menu
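# Quick sanity check (a sketch): count how many parsed dishes were flagged vegetarian.
print(sum(1 for dish in menu if dish['vegetarian']))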
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise A.1.1
Step2: Exercise A.1.2
Step3: Exercise A.1.3
Step4: Exercise A.1.4
Step5: Exercise A.1.5
Step6: Exercise A.1.6
Step7: <a id='Numpy'></a>
Step8: Let's imagine that we want to change to 2 classes instead by combining classes A with B and C with D. Use np.reshape and np.sum to create a new vector Y1. Hint
Step9: Exercise A.2.2
Step10: Exercise A.2.3
Step11: <a id='h5py'></a>
Step12: Each row represents a LHC collision event. Each column contains some observable from that event. The variable names are
Step13: Some of these variables represent the "raw" kinematics of the observed final state particles, while others are "features" that are derived from these raw quantities
Step14: <a id='Pandas'></a>
Step15: Now we can read the data into a pandas dataframe. It's a ~GB file, so be patient.
Step16: Another nice feature of pandas is that you can see the data in Jupyter by just evaluating the dataframe
Step17: The first column stores the "truth" label of whether an event was signal or background. Pandas makes it easy to create dataframes that store only the signal or background events
Step18: The following example plots the signal and background distributions of every variable. Note that we use VarNames[1:] so that the signal label itself is not plotted.
Step19: <a id='Scikit-learn'></a>
Step20: Let's separate the data into inputs (X) vs outputs (Y) and training vs testing samples
Step21: We can train the classifier as follows
Step22: We can plot the output, comparing signal and background
Step23: And we can make a ROC curve and evaluate the AUC
Step24: Exercise B.3.1
Step25: Exercise B.3.2
Step26: Answer the following questions
Step27: Now let's define training and test samples. Note that DNNs take very long to train, so for testing purposes we will use only about 10% of the 5 million events in the training/validation sample. Once you get everything working, you can go back and make the final version of your plots with the full sample.
Step28: <a id='Keras'></a>
Step29: The model has to be compiled. At this time we set the loss function and the optimizer too
Step30: Now we train. We are running only 10 epochs in this example. Models may need hundreds of epochs before they stop improving.
Step31: The model history keeps track of the loss and accuracy for each epoch. Note that the training above was setup to run on the validation sample at the end of each epoch
Step32: You can plot the loss versus epoch
Step33: Exercise C.1.1
Step34: We can evaluate how the trained model does on the test sample as follows
Step35: And we can make ROC curves as before
Step36: Exercise C.1.2
Step37: Exercise C.1.3
Step38: Exercise C.1.4
| <ASSISTANT_TASK:>
Python Code:
import random
x=random.random()
print ("The Value of x is", x)
# Skeleton
def GenerateData(N,min,max):
out = []
### BEGIN SOLUTION
# Fill in your solution here
### END SOLUTION
return out
Data=GenerateData(1000,-10,10)
print ("Data Type:", type(Data))
print ("Data Length:", len(Data))
if len(Data)>0:
print ("Type of Data Contents:", type(Data[0]))
print ("Data Minimum:", min(Data))
print ("Data Maximum:", max(Data))
# Skeleton
def mean(Data):
m=0
### BEGIN SOLUTION
# Fill in your solution here
### END SOLUTION
return m
print ("Mean of Data:", mean(Data))
def where(mylist,myfunc):
out= []
### BEGIN SOLUTION
# Fill in your solution here
### END SOLUTION
return out
def inrange(mymin,mymax):
def testrange(x):
return x<mymax and x>=mymin
return testrange
# Examples:
F1=inrange(0,10)
F2=inrange(10,20)
print (F1(0), F1(1), F1(10), F1(15), F1(20))
print (F2(0), F2(1), F2(10), F2(15), F2(20))
print ("Number of Entries passing F1:", len(where(Data,F1)))
print ("Number of Entries passing F2:", len(where(Data,F2)))
### BEGIN SOLUTION
# Fill in your solution here
### END SOLUTION
def GenerateDataFromFunction(N,mymin,mymax,myfunc):
out = []
### BEGIN SOLUTION
# Fill in your solution here
### END SOLUTION
return out
import math
def gaussian(mean, sigma):
def f(x):
return (1/math.sqrt(2*math.pi*sigma**2))*math.exp(-( (x-mean)**2)/(2*(sigma**2) ))
return f
# Example Instantiation
g1=gaussian(0,1)
g2=gaussian(10,3)
### BEGIN SOLUTION
# Fill in your solution here
### END SOLUTION
import numpy as np
Y=np.array( [ [0, 1, 0, 0], # Class B
[1, 0, 0, 0], # Class A
[0, 0, 0, 1], # Class C
[0, 0, 1, 0] # Class D
])
print ("Shape of Y:", Y.shape)
print ("Transpose:", np.transpose(Y))
print ("Reshape 8,2:", np.transpose(Y).reshape((8,2)))
print ("Sum:", np.sum(np.transpose(Y).reshape((8,2)),axis=1))
Y1= np.sum(np.transpose(Y)
.reshape((8,2)),axis=1).reshape(4,2)
print ("Answer: ",Y1)
X=np.random.normal(4,10,1000)
print(np.mean(X))
print(np.min(X))
print(np.max(X))
print(np.var(X))
import math
X1=(X-np.mean(X))/math.sqrt(np.var(X)) # Replace X with your answer
print(np.mean(X1))
print(np.var(X1))
X0=np.random.random(1000)
def CheckFlatness(D,steps=10):
maxD=np.max(D)
minD=np.min(D)
i=minD
stepsize=(maxD-minD)/steps
while i<maxD:
print (i,i+stepsize,":",np.shape(np.where((D<=(i+stepsize)) & (D>i) )))
i+=stepsize
CheckFlatness(X0)
CheckFlatness(X)
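# Equivalent check (a sketch) using numpy's histogram directly: for the uniform
# sample X0 the ten bin counts should each be close to 100.
counts, bin_edges = np.histogram(X0, bins=10)
print(counts)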
filename="SUSY.csv"
# print out the first 5 lines using unix head command (note in jupyter ! => shell command)
!head -5 "SUSY.csv"
VarNames=["signal", "l_1_pT", "l_1_eta","l_1_phi", "l_2_pT", "l_2_eta", "l_2_phi", "MET", "MET_phi", "MET_rel", "axial_MET", "M_R", "M_TR_2", "R", "MT2", "S_R", "M_Delta_R", "dPhi_r_b", "cos_theta_r1"]
RawNames=["l_1_pT", "l_1_eta","l_1_phi", "l_2_pT", "l_2_eta", "l_2_phi"]
FeatureNames=[ "MET", "MET_phi", "MET_rel", "axial_MET", "M_R", "M_TR_2", "R", "MT2", "S_R", "M_Delta_R", "dPhi_r_b", "cos_theta_r1"]
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
df = pd.read_csv(filename, dtype='float64', names=VarNames)
df
df_sig=df[df.signal==1]
df_bkg=df[df.signal==0]
for var in VarNames[1:]:
print(var)
plt.figure()
plt.hist(df_sig[var],bins=100,histtype="step", color="red",label="background",stacked=True)
plt.hist(df_bkg[var],bins=100,histtype="step", color="blue", label="signal",stacked=True)
plt.legend(loc='upper right')
plt.show()
import sklearn.discriminant_analysis as DA
Fisher=DA.LinearDiscriminantAnalysis()
N_Train=4000000
Train_Sample=df[:N_Train]
Test_Sample=df[N_Train:]
X_Train=Train_Sample[VarNames[1:]]
y_Train=Train_Sample["signal"]
X_Test=Test_Sample[VarNames[1:]]
y_Test=Test_Sample["signal"]
Test_sig=Test_Sample[Test_Sample.signal==1]
Test_bkg=Test_Sample[Test_Sample.signal==0]
Fisher.fit(X_Train,y_Train)
plt.figure()
plt.hist(Fisher.decision_function(Test_sig[VarNames[1:]]),bins=100,histtype="step", color="blue", label="signal",stacked=True)
plt.hist(Fisher.decision_function(Test_bkg[VarNames[1:]]),bins=100,histtype="step", color="red", label="background",stacked=True)
plt.legend(loc='upper right')
plt.show()
from sklearn.metrics import roc_curve, auc
fpr, tpr, _ = roc_curve(y_Test, Fisher.decision_function(X_Test))
roc_auc = auc(fpr, tpr)
plt.plot(fpr,tpr,color='darkorange',label='ROC curve (area = %0.2f)' % roc_auc)
plt.legend(loc="lower right")
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.show()
X_Train_Raw=Train_Sample[RawNames]
X_Test_Raw=Test_Sample[RawNames]
X_Train_Features=Train_Sample[FeatureNames]
X_Test_Features=Test_Sample[FeatureNames]
def TrainFisher(X_Train,X_Test,y_Train):
Fisher=DA.LinearDiscriminantAnalysis()
Fisher.fit(X_Train,y_Train)
fpr, tpr, _ = roc_curve(y_Test, Fisher.decision_function(X_Test))
roc_auc = auc(fpr, tpr)
plt.plot(fpr,tpr,color='darkorange',label='ROC curve (area = %0.2f)' % roc_auc)
plt.legend(loc="lower right")
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.show()
return Fisher
RawFisher=TrainFisher(X_Train_Raw,X_Test_Raw,y_Train)
FeatureFisher=TrainFisher(X_Train_Features,X_Test_Features,y_Train)
def PlotSignificance(N_S,N_B, N_S_min=1):
plt.figure()
eff_sig,bins_sig,p_sig=plt.hist(Fisher.decision_function(Test_sig[VarNames[1:]]),bins=100,histtype="step", color="blue", label="signal",cumulative=-1,stacked=True,density=True)
eff_bkg,bins_bkg,p_bkg=plt.hist(Fisher.decision_function(Test_bkg[VarNames[1:]]),bins=100,histtype="step", color="red", label="background",cumulative=-1,stacked=True,density=True)
plt.legend(loc='upper right')
plt.show()
good_bins = np.where(eff_sig*N_S>=N_S_min)
print(len(good_bins[0]))
if len(good_bins[0])<1:
print ("Insufficient Signal.")
return 0,0,0
significance=(N_S*eff_sig)/np.sqrt((N_B*eff_bkg)+(N_S*eff_sig))
plt.figure()
plt.plot(bins_sig[:-1],significance)
max_sign=np.max(significance[good_bins])
max_signI=np.argmax(significance[good_bins])
plt.show()
print ("Max significance at ", bins_sig[max_signI], " of", max_sign)
return bins_sig[max_signI],max_sign, max_signI
PlotSignificance(1000000,1e11)
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
filename="SUSY.csv"
VarNames=["signal", "l_1_pT", "l_1_eta","l_1_phi", "l_2_pT", "l_2_eta", "l_2_phi", "MET", "MET_phi", "MET_rel", "axial_MET", "M_R", "M_TR_2", "R", "MT2", "S_R", "M_Delta_R", "dPhi_r_b", "cos_theta_r1"]
RawNames=["l_1_pT", "l_1_eta","l_1_phi", "l_2_pT", "l_2_eta", "l_2_phi"]
FeatureNames=[ "MET", "MET_phi", "MET_rel", "axial_MET", "M_R", "M_TR_2", "R", "MT2", "S_R", "M_Delta_R", "dPhi_r_b", "cos_theta_r1"]
df = pd.read_csv(filename, dtype='float64', names=VarNames)
N_Max=550000
N_Train=500000
Train_Sample=df[:N_Train]
Test_Sample=df[N_Train:N_Max]
X_Train=np.array(Train_Sample[VarNames[1:]])
y_Train=np.array(Train_Sample["signal"])
X_Test=np.array(Test_Sample[VarNames[1:]])
y_Test=np.array(Test_Sample["signal"])
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(12, input_dim=X_Train.shape[1], kernel_initializer='random_uniform', activation='relu'))
model.add(Dense(8, kernel_initializer='random_uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='random_uniform', activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
history=model.fit(X_Train, y_Train, validation_data=(X_Test,y_Test), epochs=10, batch_size=2048)
print(history.history)
loss_history=history.history["loss"]
plt.plot(range(len(loss_history)),loss_history)
## Your Solution Here
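# One possible sketch: overlay training and validation loss from the Keras
# history object (key names "loss"/"val_loss" assumed for this Keras version,
# since validation_data was passed to fit above).
plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("Epoch")
plt.ylabel("Loss")
plt.legend()
plt.show()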
scores = model.evaluate(X_Test, y_Test)
print(scores)
from sklearn.metrics import roc_curve, auc
fpr, tpr, _ = roc_curve(y_Test, model.predict(X_Test))
roc_auc = auc(fpr, tpr)
plt.plot(fpr,tpr,color='darkorange',label='ROC curve (area = %0.2f)' % roc_auc)
plt.legend(loc="lower right")
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.show()
## Your solution here
## Your solution here
## Your solution here
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Linear regression
Step2: Quick exploration
Step3: You can also do quick plotting of the data. The results are not aesthetically the best, but it is useful for a quick visual of the data
Step4: Much nicer results can be obtained using a dedicated plotter like seaborn.
Step5: Performing the regression
Step6: ANOVA analysis
Step7: Each row in the above data is one group.
| <ASSISTANT_TASK:>
Python Code:
# Imports assumed from an earlier, unshown cell:
from scipy import stats
import pandas as pd
import seaborn as sns

# Cumulative probability P(X<120) where X ~ N(100, 10^2)
print("P(X<120) where X ~ N(100, 10^2) = %.3f" % stats.norm.cdf(120, loc=100, scale=10))
# Calculate value
print("x for which P(X < x = 0.97) = %.1f" % stats.norm.ppf(0.97, loc=100, scale=10))
# Cumulative probability P(X<120) where X ~ N(100, 10^2)
print("P(X<120) where X ~ t with df = 10, mean = 100 and sigma = 10) = %.3f" % stats.t.cdf(120, df=10, loc=100, scale=10))
# Calculate value
print("x for which P(X < x = 0.97) = %.1f" % stats.t.ppf(0.97, df=10, loc=100, scale=10))
df = pd.read_csv("co2_temp_yr.csv", delimiter=",")
print(df)
df.describe()
ax = df.plot(x="CO2 ppm", y="Global Temp", style='o')
ax = sns.regplot(x="CO2 ppm", y="Global Temp", data=df)
res = stats.linregress(df["CO2 ppm"], df["Global Temp"])
print("Slope = %.3f" % res.slope)
print("Intercept = %.3f" % res.intercept)
print("R = %.3f" % res.rvalue)
print("Std error = %.3f" % res.stderr)
df = pd.read_table("polymer.csv", delimiter=",", index_col=0)
print(df)
print(stats.f_oneway(*df.as_matrix()))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Chemistry Scheme Scope
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables Form
Step9: 1.6. Number Of Tracers
Step10: 1.7. Family Approach
Step11: 1.8. Coupling With Chemical Reactivity
Step12: 2. Key Properties --> Software Properties
Step13: 2.2. Code Version
Step14: 2.3. Code Languages
Step15: 3. Key Properties --> Timestep Framework
Step16: 3.2. Split Operator Advection Timestep
Step17: 3.3. Split Operator Physical Timestep
Step18: 3.4. Split Operator Chemistry Timestep
Step19: 3.5. Split Operator Alternate Order
Step20: 3.6. Integrated Timestep
Step21: 3.7. Integrated Scheme Type
Step22: 4. Key Properties --> Timestep Framework --> Split Operator Order
Step23: 4.2. Convection
Step24: 4.3. Precipitation
Step25: 4.4. Emissions
Step26: 4.5. Deposition
Step27: 4.6. Gas Phase Chemistry
Step28: 4.7. Tropospheric Heterogeneous Phase Chemistry
Step29: 4.8. Stratospheric Heterogeneous Phase Chemistry
Step30: 4.9. Photo Chemistry
Step31: 4.10. Aerosols
Step32: 5. Key Properties --> Tuning Applied
Step33: 5.2. Global Mean Metrics Used
Step34: 5.3. Regional Metrics Used
Step35: 5.4. Trend Metrics Used
Step36: 6. Grid
Step37: 6.2. Matches Atmosphere Grid
Step38: 7. Grid --> Resolution
Step39: 7.2. Canonical Horizontal Resolution
Step40: 7.3. Number Of Horizontal Gridpoints
Step41: 7.4. Number Of Vertical Levels
Step42: 7.5. Is Adaptive Grid
Step43: 8. Transport
Step44: 8.2. Use Atmospheric Transport
Step45: 8.3. Transport Details
Step46: 9. Emissions Concentrations
Step47: 10. Emissions Concentrations --> Surface Emissions
Step48: 10.2. Method
Step49: 10.3. Prescribed Climatology Emitted Species
Step50: 10.4. Prescribed Spatially Uniform Emitted Species
Step51: 10.5. Interactive Emitted Species
Step52: 10.6. Other Emitted Species
Step53: 11. Emissions Concentrations --> Atmospheric Emissions
Step54: 11.2. Method
Step55: 11.3. Prescribed Climatology Emitted Species
Step56: 11.4. Prescribed Spatially Uniform Emitted Species
Step57: 11.5. Interactive Emitted Species
Step58: 11.6. Other Emitted Species
Step59: 12. Emissions Concentrations --> Concentrations
Step60: 12.2. Prescribed Upper Boundary
Step61: 13. Gas Phase Chemistry
Step62: 13.2. Species
Step63: 13.3. Number Of Bimolecular Reactions
Step64: 13.4. Number Of Termolecular Reactions
Step65: 13.5. Number Of Tropospheric Heterogenous Reactions
Step66: 13.6. Number Of Stratospheric Heterogenous Reactions
Step67: 13.7. Number Of Advected Species
Step68: 13.8. Number Of Steady State Species
Step69: 13.9. Interactive Dry Deposition
Step70: 13.10. Wet Deposition
Step71: 13.11. Wet Oxidation
Step72: 14. Stratospheric Heterogeneous Chemistry
Step73: 14.2. Gas Phase Species
Step74: 14.3. Aerosol Species
Step75: 14.4. Number Of Steady State Species
Step76: 14.5. Sedimentation
Step77: 14.6. Coagulation
Step78: 15. Tropospheric Heterogeneous Chemistry
Step79: 15.2. Gas Phase Species
Step80: 15.3. Aerosol Species
Step81: 15.4. Number Of Steady State Species
Step82: 15.5. Interactive Dry Deposition
Step83: 15.6. Coagulation
Step84: 16. Photo Chemistry
Step85: 16.2. Number Of Reactions
Step86: 17. Photo Chemistry --> Photolysis
Step87: 17.2. Environmental Conditions
| <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'bnu', 'sandbox-3', 'atmoschem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: BERT Question Answer with TensorFlow Lite Model Maker
Step2: Import the required packages.
Step3: The "End-to-End Overview" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail.
Step4: Load Input Data Specific to an On-device ML App and Preprocess the Data
Step5: You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar.
Step6: Customize the TensorFlow Model
Step7: Have a look at the detailed model structure.
Step8: Evaluate the Customized Model
Step9: Export to TensorFlow Lite Model
Step10: You can use the TensorFlow Lite model file in the bert_qa reference app using BertQuestionAnswerer API in TensorFlow Lite Task Library by downloading it from the left sidebar on Colab.
Step11: You can also evaluate the tflite model with the evaluate_tflite method. This step is expected to take a long time.
Step12: Advanced Usage
| <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
!sudo apt -y install libportaudio2
!pip install -q tflite-model-maker-nightly
import numpy as np
import os
import tensorflow as tf
assert tf.__version__.startswith('2')
from tflite_model_maker import model_spec
from tflite_model_maker import question_answer
from tflite_model_maker.config import ExportFormat
from tflite_model_maker.question_answer import DataLoader
spec = model_spec.get('mobilebert_qa_squad')
train_data_path = tf.keras.utils.get_file(
fname='triviaqa-web-train-8000.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json')
validation_data_path = tf.keras.utils.get_file(
fname='triviaqa-verified-web-dev.json',
origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json')
train_data = DataLoader.from_squad(train_data_path, spec, is_training=True)
validation_data = DataLoader.from_squad(validation_data_path, spec, is_training=False)
model = question_answer.create(train_data, model_spec=spec)
model.summary()
model.evaluate(validation_data)
model.export(export_dir='.')
model.export(export_dir='.', export_format=ExportFormat.VOCAB)
model.evaluate_tflite('model.tflite', validation_data)
new_spec = model_spec.get('mobilebert_qa')
new_spec.seq_len = 512
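# Hedged continuation (not shown in this excerpt): after changing the spec, the
# data would have to be reloaded with the new spec before retraining, e.g.:
# new_train_data = DataLoader.from_squad(train_data_path, new_spec, is_training=True)
# new_model = question_answer.create(new_train_data, model_spec=new_spec)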
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Question 1
Step2: We can then print the Sum of the costs of all those rows. (The cost column is named total_cost.)
Step3: Question 2
Step4: We then use the aggregate function to sum the total_cost column for each table in the group. The resulting values are collapsed into a new table, totals, which has a row for each county and a column named total_cost_sum containing the new total.
Step5: Finally, we sort the counties by their total cost, limit the results to the top 20 and then print the results as a text bar chart.
| <ASSISTANT_TASK:>
Python Code:
import agate
table = agate.Table.from_csv('examples/realdata/ks_1033_data.csv')
print(table)
kansas_city = table.where(lambda r: r['county'] in ('JACKSON', 'CLAY', 'CASS', 'PLATTE'))
print(len(table.rows))
print(len(kansas_city.rows))
print('$%d' % kansas_city.aggregate(agate.Sum('total_cost')))
# Group by county
counties = table.group_by('county')
print(counties.keys())
# Aggregate totals for all counties
totals = counties.aggregate([
('total_cost_sum', agate.Sum('total_cost'),)
])
print(totals.column_names)
totals.order_by('total_cost_sum', reverse=True).limit(20).print_bars('county', 'total_cost_sum', width=100)
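# Optional extra step (an added example, not part of the original exercise):
# persist the aggregated totals to disk for later use.
totals.order_by('total_cost_sum', reverse=True).to_csv('county_totals.csv')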
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import torch
# load_data() is assumed to be defined elsewhere and to return an (N, C) tensor
# of per-class (softmax) probabilities.
softmax_output = load_data()
y = torch.argmax(softmax_output, dim=1).view(-1, 1)
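# Minimal self-contained illustration with synthetic data (added example):
demo_probs = torch.softmax(torch.randn(4, 3), dim=1)
demo_labels = torch.argmax(demo_probs, dim=1).view(-1, 1)
print(demo_labels.shape)  # torch.Size([4, 1])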
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading data from BigQuery with TFX and Vertex Pipelines
Step2: Did you restart the runtime?
Step3: Log in to Google for this notebook
Step4: If you are on AI Platform Notebooks, authenticate with Google Cloud before running the cell below.
Step5: Set up variables
Step6: Set gcloud to use your project.
Step7: By default, Vertex Pipelines uses the project's default GCE VM service account, which needs permission to use BigQuery.
Step8: Please see
Step9: All features were already normalized to 0~1 except species, which is the label.
Step13: Write model code.
Step14: Copy the module file to GCS which can be accessed from the pipeline components.
Step16: Write a pipeline definition
Step17: Run the pipeline on Vertex Pipelines.
Step18: The generated definition file can be submitted using kfp client.
| <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Use the latest version of pip.
!pip install --upgrade pip
!pip install --upgrade "tfx[kfp]<2"
# docs_infra: no_execute
import sys
if not 'google.colab' in sys.modules:
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
import sys
if 'google.colab' in sys.modules:
from google.colab import auth
auth.authenticate_user()
import tensorflow as tf
print('TensorFlow version: {}'.format(tf.__version__))
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
import kfp
print('KFP version: {}'.format(kfp.__version__))
GOOGLE_CLOUD_PROJECT = '' # <--- ENTER THIS
GOOGLE_CLOUD_PROJECT_NUMBER = '' # <--- ENTER THIS
GOOGLE_CLOUD_REGION = '' # <--- ENTER THIS
GCS_BUCKET_NAME = '' # <--- ENTER THIS
if not (GOOGLE_CLOUD_PROJECT and GOOGLE_CLOUD_PROJECT_NUMBER and GOOGLE_CLOUD_REGION and GCS_BUCKET_NAME):
from absl import logging
logging.error('Please set all required parameters.')
!gcloud config set project {GOOGLE_CLOUD_PROJECT}
PIPELINE_NAME = 'penguin-bigquery'
# Path to various pipeline artifact.
PIPELINE_ROOT = 'gs://{}/pipeline_root/{}'.format(
GCS_BUCKET_NAME, PIPELINE_NAME)
# Paths for users' Python module.
MODULE_ROOT = 'gs://{}/pipeline_module/{}'.format(
GCS_BUCKET_NAME, PIPELINE_NAME)
# Paths for users' data.
DATA_ROOT = 'gs://{}/data/{}'.format(GCS_BUCKET_NAME, PIPELINE_NAME)
# This is the path where your model will be pushed for serving.
SERVING_MODEL_DIR = 'gs://{}/serving_model/{}'.format(
GCS_BUCKET_NAME, PIPELINE_NAME)
print('PIPELINE_ROOT: {}'.format(PIPELINE_ROOT))
!gcloud projects add-iam-policy-binding {GOOGLE_CLOUD_PROJECT} \
--member=serviceAccount:{GOOGLE_CLOUD_PROJECT_NUMBER}-compute@developer.gserviceaccount.com \
--role=roles/bigquery.user
# docs_infra: no_execute
%%bigquery --project {GOOGLE_CLOUD_PROJECT}
SELECT *
FROM `tfx-oss-public.palmer_penguins.palmer_penguins`
LIMIT 5
QUERY = "SELECT * FROM `tfx-oss-public.palmer_penguins.palmer_penguins`"
_trainer_module_file = 'penguin_trainer.py'
%%writefile {_trainer_module_file}
# Copied from https://www.tensorflow.org/tfx/tutorials/tfx/penguin_simple
from typing import List
from absl import logging
import tensorflow as tf
from tensorflow import keras
from tensorflow_transform.tf_metadata import schema_utils
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
from tensorflow_metadata.proto.v0 import schema_pb2
_FEATURE_KEYS = [
'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'
]
_LABEL_KEY = 'species'
_TRAIN_BATCH_SIZE = 20
_EVAL_BATCH_SIZE = 10
# Since we're not generating or creating a schema, we will instead create
# a feature spec. Since there are a fairly small number of features this is
# manageable for this dataset.
_FEATURE_SPEC = {
**{
feature: tf.io.FixedLenFeature(shape=[1], dtype=tf.float32)
for feature in _FEATURE_KEYS
},
_LABEL_KEY: tf.io.FixedLenFeature(shape=[1], dtype=tf.int64)
}
def _input_fn(file_pattern: List[str],
data_accessor: tfx.components.DataAccessor,
schema: schema_pb2.Schema,
batch_size: int) -> tf.data.Dataset:
  """Generates features and label for training.

  Args:
    file_pattern: List of paths or patterns of input tfrecord files.
    data_accessor: DataAccessor for converting input to RecordBatch.
    schema: schema of the input data.
    batch_size: representing the number of consecutive elements of returned
      dataset to combine in a single batch

  Returns:
    A dataset that contains (features, indices) tuple where features is a
    dictionary of Tensors, and indices is a single Tensor of label indices.
  """
return data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=_LABEL_KEY),
schema=schema).repeat()
def _make_keras_model() -> tf.keras.Model:
  """Creates a DNN Keras model for classifying penguin data.

  Returns:
    A Keras Model.
  """
# The model below is built with Functional API, please refer to
# https://www.tensorflow.org/guide/keras/overview for all API options.
inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]
d = keras.layers.concatenate(inputs)
for _ in range(2):
d = keras.layers.Dense(8, activation='relu')(d)
outputs = keras.layers.Dense(3)(d)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.Adam(1e-2),
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.summary(print_fn=logging.info)
return model
# TFX Trainer will call this function.
def run_fn(fn_args: tfx.components.FnArgs):
  """Train the model based on given args.

  Args:
    fn_args: Holds args used to train the model as name/value pairs.
  """
# This schema is usually either an output of SchemaGen or a manually-curated
# version provided by pipeline author. A schema can also derived from TFT
# graph if a Transform component is used. In the case when either is missing,
# `schema_from_feature_spec` could be used to generate schema from very simple
# feature_spec, but the schema returned would be very primitive.
schema = schema_utils.schema_from_feature_spec(_FEATURE_SPEC)
train_dataset = _input_fn(
fn_args.train_files,
fn_args.data_accessor,
schema,
batch_size=_TRAIN_BATCH_SIZE)
eval_dataset = _input_fn(
fn_args.eval_files,
fn_args.data_accessor,
schema,
batch_size=_EVAL_BATCH_SIZE)
model = _make_keras_model()
model.fit(
train_dataset,
steps_per_epoch=fn_args.train_steps,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps)
# The result of the training should be saved in `fn_args.serving_model_dir`
# directory.
model.save(fn_args.serving_model_dir, save_format='tf')
!gsutil cp {_trainer_module_file} {MODULE_ROOT}/
from typing import List, Optional
def _create_pipeline(pipeline_name: str, pipeline_root: str, query: str,
module_file: str, serving_model_dir: str,
beam_pipeline_args: Optional[List[str]],
) -> tfx.dsl.Pipeline:
  """Creates a TFX pipeline using BigQuery."""
# NEW: Query data in BigQuery as a data source.
example_gen = tfx.extensions.google_cloud_big_query.BigQueryExampleGen(
query=query)
# Uses user-provided Python function that trains a model.
trainer = tfx.components.Trainer(
module_file=module_file,
examples=example_gen.outputs['examples'],
train_args=tfx.proto.TrainArgs(num_steps=100),
eval_args=tfx.proto.EvalArgs(num_steps=5))
# Pushes the model to a file destination.
pusher = tfx.components.Pusher(
model=trainer.outputs['model'],
push_destination=tfx.proto.PushDestination(
filesystem=tfx.proto.PushDestination.Filesystem(
base_directory=serving_model_dir)))
components = [
example_gen,
trainer,
pusher,
]
return tfx.dsl.Pipeline(
pipeline_name=pipeline_name,
pipeline_root=pipeline_root,
components=components,
# NEW: `beam_pipeline_args` is required to use BigQueryExampleGen.
beam_pipeline_args=beam_pipeline_args)
# docs_infra: no_execute
import os
# We need to pass some GCP related configs to BigQuery. This is currently done
# using `beam_pipeline_args` parameter.
BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS = [
'--project=' + GOOGLE_CLOUD_PROJECT,
'--temp_location=' + os.path.join('gs://', GCS_BUCKET_NAME, 'tmp'),
]
PIPELINE_DEFINITION_FILE = PIPELINE_NAME + '_pipeline.json'
runner = tfx.orchestration.experimental.KubeflowV2DagRunner(
config=tfx.orchestration.experimental.KubeflowV2DagRunnerConfig(),
output_filename=PIPELINE_DEFINITION_FILE)
_ = runner.run(
_create_pipeline(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
query=QUERY,
module_file=os.path.join(MODULE_ROOT, _trainer_module_file),
serving_model_dir=SERVING_MODEL_DIR,
beam_pipeline_args=BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS))
# docs_infra: no_execute
from google.cloud import aiplatform
from google.cloud.aiplatform import pipeline_jobs
import logging
logging.getLogger().setLevel(logging.INFO)
aiplatform.init(project=GOOGLE_CLOUD_PROJECT, location=GOOGLE_CLOUD_REGION)
job = pipeline_jobs.PipelineJob(template_path=PIPELINE_DEFINITION_FILE,
display_name=PIPELINE_NAME)
job.submit()
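# Hedged follow-up (not part of the original tutorial): the submitted run can be
# monitored in the Cloud Console under Vertex AI > Pipelines, or the notebook can
# block until it finishes (assuming the client exposes this method):
# job.wait()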
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: There is much, much more to know, but these few operations are fundamental to what we'll do in this tutorial.
Step2: The CSR representation can be very efficient for computations, but it is not
Step3: Often, once an LIL matrix is created, it is useful to convert it to a CSR format
Step4: There are several other sparse formats that can be useful for various problems
Step5: There are many, many more plot types available. One useful way to explore these is by looking at the matplotlib gallery.
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
# Generating a random array
X = np.random.random((3, 5)) # a 3 x 5 array
print(X)
# Accessing elements
# get a single element
print(X[0, 0])
# get a row
print(X[1])
# get a column
print(X[:, 1])
# Transposing an array
print(X.T)
# Turning a row vector into a column vector
y = np.linspace(0, 12, 5)
print(y)
# make into a column vector
print(y[:, np.newaxis])
# getting the shape or reshaping an array
print(X.shape)
print(X.reshape(5, 3))
# indexing by an array of integers (fancy indexing)
indices = np.array([3, 1, 0])
print(indices)
X[:, indices]
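# A closely related idiom (added example): boolean masks select the elements
# that satisfy a condition.
mask = X > 0.5
print(mask.shape)  # same shape as X
print(X[mask])     # 1-D array of the selected values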
from scipy import sparse
# Create a random array with a lot of zeros
X = np.random.random((10, 5))
print(X)
# set the majority of elements to zero
X[X < 0.7] = 0
print(X)
# turn X into a csr (Compressed-Sparse-Row) matrix
X_csr = sparse.csr_matrix(X)
print(X_csr)
# convert the sparse matrix to a dense array
print(X_csr.toarray())
# Create an empty LIL matrix and add some items
X_lil = sparse.lil_matrix((5, 5))
for i, j in np.random.randint(0, 5, (15, 2)):
X_lil[i, j] = i + j
print(X_lil)
print(X_lil.toarray())
print(X_lil.tocsr())
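# One more format as an added illustration: COO ("coordinate") matrices are handy
# for assembling a sparse matrix from explicit (row, col, value) triplets before
# converting to CSR for fast arithmetic.
row = np.array([0, 1, 3])
col = np.array([2, 0, 4])
data = np.array([1.0, 2.0, 3.0])
X_coo = sparse.coo_matrix((data, (row, col)), shape=(5, 5))
print(X_coo.tocsr().toarray())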
%matplotlib inline
import matplotlib.pyplot as plt
# plotting a line
x = np.linspace(0, 10, 100)
plt.plot(x, np.sin(x))
# scatter-plot points
x = np.random.normal(size=500)
y = np.random.normal(size=500)
plt.scatter(x, y)
# showing images
x = np.linspace(1, 12, 100)
y = x[:, np.newaxis]
im = y * np.sin(x) * np.cos(y)
print(im.shape)
# imshow - note that origin is at the top-left by default!
plt.imshow(im)
# Contour plot - note that origin here is at the bottom-left by default!
plt.contour(im)
# 3D plotting
from mpl_toolkits.mplot3d import Axes3D
ax = plt.axes(projection='3d')
xgrid, ygrid = np.meshgrid(x, y.ravel())
ax.plot_surface(xgrid, ygrid, im, cmap=plt.cm.jet, cstride=2, rstride=2, linewidth=0)
# %load http://matplotlib.org/mpl_examples/pylab_examples/ellipse_collection.py
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.collections import EllipseCollection
x = np.arange(10)
y = np.arange(15)
X, Y = np.meshgrid(x, y)
XY = np.hstack((X.ravel()[:,np.newaxis], Y.ravel()[:,np.newaxis]))
ww = X/10.0
hh = Y/15.0
aa = X*9
fig, ax = plt.subplots()
ec = EllipseCollection(ww, hh, aa, units='x', offsets=XY,
transOffset=ax.transData)
ec.set_array((X+Y).ravel())
ax.add_collection(ec)
ax.autoscale_view()
ax.set_xlabel('X')
ax.set_ylabel('y')
cbar = plt.colorbar(ec)
cbar.set_label('X+Y')
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 2
Step2: Part 3
Step4: Assignment wrapup
| <ASSISTANT_TASK:>
Python Code:
# put your code here!
myfile = open('oshea.txt','w')
myfile.write('I love pizza.\n')
myfile.write('I have two children, both of whom are girls.\n')
myfile.write('I also have two dogs.\n')
myfile.write('Arguably, I spend far too much time on the computer.\n')
myfile.close()
myfilenew = open('oshea.txt','r')
for line in myfilenew:
words = line.split(' ')
print(words[0])
myfilenew.close()
# put your code here!
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import csv
# writing
x = np.arange(-10.0,10.0,0.1)
y = np.sin(x)
csvfile = open('my_sine_wave.csv','w',newline='')
csvwriter = csv.writer(csvfile,delimiter=',')
for i in range(x.size):
csvwriter.writerow([x[i], y[i]])
csvfile.close()
# reading
combined_arrays = np.loadtxt('my_sine_wave.csv',delimiter=',',unpack=True)
plt.plot(combined_arrays[0],combined_arrays[1])
# put your code here!
# writing
np.savez('my_sine_wave.npz',xvals=x,sinewave=y)
# reading
all_data = np.load('my_sine_wave.npz')
plt.plot(all_data['xvals'],all_data['sinewave'],'r-')
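# Small extra check (added example): an .npz archive behaves like a dictionary,
# so the stored array names can be listed before loading them.
print(all_data.files)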
from IPython.display import HTML
HTML(
"""
<iframe
src="https://goo.gl/forms/cGV5yNRzgxzx6naf2?embedded=true"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
"""
)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Detection
Step2: Tokenization
Step3: Part of Speech Tagging
Step4: Named Entity Recognition
Step5: Polarity
Step6: Embeddings
Step7: Morphology
Step8: Transliteration
| <ASSISTANT_TASK:>
Python Code:
import polyglot
from polyglot.text import Text, Word
text = Text("Bonjour, Mesdames.")
print("Language Detected: Code={}, Name={}\n".format(text.language.code, text.language.name))
zen = Text("Beautiful is better than ugly. "
"Explicit is better than implicit. "
"Simple is better than complex.")
print(zen.words)
print(zen.sentences)
text = Text(u"O primeiro uso de desobediência civil em massa ocorreu em setembro de 1906.")
print("{:<16}{}".format("Word", "POS Tag")+"\n"+"-"*30)
for word, tag in text.pos_tags:
print(u"{:<16}{:>2}".format(word, tag))
text = Text(u"In Großbritannien war Gandhi mit dem westlichen Lebensstil vertraut geworden")
print(text.entities)
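# Added example: each detected entity is a chunk that carries its tag
# (e.g. I-PER for persons, I-LOC for locations).
for entity in text.entities:
    print(entity.tag, entity)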
print("{:<16}{}".format("Word", "Polarity")+"\n"+"-"*30)
for w in zen.words[:6]:
print("{:<16}{:>2}".format(w, w.polarity))
word = Word("Obama", language="en")
print("Neighbors (Synonms) of {}".format(word)+"\n"+"-"*30)
for w in word.neighbors:
print("{:<16}".format(w))
print("\n\nThe first 10 dimensions out the {} dimensions\n".format(word.vector.shape[0]))
print(word.vector[:10])
word = Text("Preprocessing is an essential step.").words[0]
print(word.morphemes)
from polyglot.transliteration import Transliterator
transliterator = Transliterator(source_lang="en", target_lang="ru")
print(transliterator.transliterate(u"preprocessing"))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 2. Build map
Step4: 2.2. HTML for popups
Step5: 2.3. Create map
| <ASSISTANT_TASK:>
Python Code:
# The imports below, and the popup helper 'HTML' used further down, are assumed
# to come from an earlier (omitted) cell of the notebook.
import folium
import pandas as pd
df = pd.read_csv('toc_trends_long_format.csv')
df.dropna(subset=['latitude', 'longitude'], inplace=True)
df = df.query('(analysis_period == "1990-2016") and (non_missing > 0)')
base = "http://77.104.141.195/~icpwater/wp-content/core_plots/trends_plots_1990-2016/"
fname = df['station_id'].astype(str) + '_' + df['par_id'] + '_' + df['data_period'] + '.png'
df['link'] = base + fname
df.head()
from branca.element import Template, MacroElement
template = """
{% macro html(this, kwargs) %}
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<title>jQuery UI Draggable - Default functionality</title>
<link rel="stylesheet" href="//code.jquery.com/ui/1.12.1/themes/base/jquery-ui.css">
<script src="https://code.jquery.com/jquery-1.12.4.js"></script>
<script src="https://code.jquery.com/ui/1.12.1/jquery-ui.js"></script>
<script>
$( function() {
$( "#maplegend" ).draggable({
start: function (event, ui) {
$(this).css({
right: "auto",
top: "auto",
bottom: "auto"
});
}
});
});
</script>
</head>
<body>
<div id='maplegend' class='maplegend'
style='position: absolute; z-index:9999; border:2px solid grey; background-color:rgba(255, 255, 255, 0.8);
border-radius:6px; padding: 10px; font-size:14px; right: 20px; bottom: 20px;'>
<div class='legend-title'>Legend (draggable)</div>
<div class='legend-scale'>
<ul class='legend-labels'>
<li><span style='background:red;opacity:0.7;'></span>Increasing</li>
<li><span style='background:yellow;opacity:0.7;'></span>No trend</li>
<li><span style='background:green;opacity:0.7;'></span>Decreasing</li>
</ul>
</div>
</div>
</body>
</html>
<style type='text/css'>
.maplegend .legend-title {
text-align: left;
margin-bottom: 5px;
font-weight: bold;
font-size: 90%;
}
.maplegend .legend-scale ul {
margin: 0;
margin-bottom: 5px;
padding: 0;
float: left;
list-style: none;
}
.maplegend .legend-scale ul li {
font-size: 80%;
list-style: none;
margin-left: 0;
line-height: 18px;
margin-bottom: 2px;
}
.maplegend ul.legend-labels li span {
display: block;
float: left;
height: 16px;
width: 30px;
margin-right: 5px;
margin-left: 0;
border: 1px solid #999;
}
.maplegend .legend-source {
font-size: 80%;
color: #777;
clear: both;
}
.maplegend a {
color: #777;
}
</style>
{% endmacro %}
"""
macro = MacroElement()
macro._template = Template(template)
# HTML for popup styling
html = """
<center><h3>{par_id} at {station_name}, {country}</h3></center>
<center><table>
<tr>
<td><b>ICPW ID:</b></td>
<td>{station_id}</td>
</tr>
<tr>
<td><b>ICPW code:</b></td>
<td>{station_code}</td>
</tr>
<tr>
<td><b>NFC code:</b></td>
<td>{nfc_code}</td>
</tr>
<tr>
<td><b>Number of years with data:</b></td>
<td>{non_missing}</td>
</tr>
<tr>
<td><b>Median:</b></td>
<td>{median:.3f}</td>
</tr>
<tr>
<td><b>Standard deviation:</b></td>
<td>{std_dev:.3f}</td>
</tr>
<tr>
<td><b>Mann-Kendall p-value:</b></td>
<td>{mk_p_val:.3f}</td>
</tr>
<tr>
<td><b>Trend:</b></td>
<td>{trend}</td>
</tr>
<tr>
<td><b>Theil-Sen slope:</b></td>
<td>{sen_slp:.3f}</td>
</tr>
</table></center>
<center><img src={link} height="300"></center>
"""
# Create basemap
m = folium.Map(location=[55, -35], zoom_start=3)
# Add Google aerial imagery
folium.raster_layers.TileLayer(tiles='http://{s}.google.com/vt/lyrs=s&x={x}&y={y}&z={z}',
attr='google',
name='Google satellite',
max_zoom=20,
subdomains=['mt0', 'mt1', 'mt2', 'mt3'],
overlay=False,
control=True).add_to(m)
# Loop over parameters
for par in df['par_id'].unique():
# Add feature group. Show TOC by default
if par == 'TOC':
fg = folium.FeatureGroup(name=par, show=True)
else:
fg = folium.FeatureGroup(name=par, show=False)
m.add_child(fg)
# Get data
par_df = df.query('par_id == @par')
rec_list = par_df.to_dict(orient='records')
# Get station summary data
for rec in rec_list:
popup = HTML(html.format(**rec))
# Get marker colour
trend = rec['trend']
if trend == 'increasing':
colour = 'red'
elif trend == 'decreasing':
colour = 'green'
else:
colour = 'yellow'
cm = folium.CircleMarker(location=[rec['latitude'], rec['longitude']],
radius=6,
popup=popup,
parse_html=True,
fill=True,
fill_color=colour,
color='black',
fill_opacity=0.7,
)
fg.add_child(cm)
folium.LayerControl().add_to(m)
m.get_root().add_child(macro)
m.save("icpw_map.html")
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Simulation setup
Step2: Create the coordinate systems
Step3: Step 1
Step4: Step 2
Step5: Now compute the point-spread function via
Step6: Step 3
Step7: Compute the cumulative distribution
Step8: Interpolate the cumulative distribution
| <ASSISTANT_TASK:>
Python Code:
import sys
%pylab inline
import scipy.special
from scipy.interpolate import interp1d
from scipy.interpolate import RectBivariateSpline
print('Python {}\n'.format(sys.version))
print('NumPy\t\t{}'.format(np.__version__))
print('matplotlib\t{}'.format(matplotlib.__version__))
print('SciPy\t\t{}'.format(scipy.__version__))
# Image properties
# Size of the PSF array, pixels
size_x = 256
size_y = 256
size_z = 1
# Precision control
num_basis = 100 # Number of rescaled Bessels that approximate the phase function
num_samples = 1000 # Number of pupil samples along radial direction
oversampling = 2 # Defines the upsampling ratio on the image space grid for computations
# Microscope parameters
NA = 1.4
wavelength = 0.610 # microns
M = 100 # magnification
ns = 1.33 # specimen refractive index (RI)
ng0 = 1.5 # coverslip RI design value
ng = 1.5 # coverslip RI experimental value
ni0 = 1.5 # immersion medium RI design value
ni = 1.5 # immersion medium RI experimental value
ti0 = 150 # microns, working distance (immersion medium thickness) design value
tg0 = 170 # microns, coverslip thickness design value
tg = 170 # microns, coverslip thickness experimental value
resPSF = 0.02 # microns (resPSF in the Java code)
resLateral = 0.1 # microns (resLateral in the Java code)
res_axial = 0.25 # microns
pZ = 2 # microns, particle distance from coverslip
z = [-2] # microns, stage displacement away from best focus
# Scaling factors for the Fourier-Bessel series expansion
min_wavelength = 0.436 # microns
scaling_factor = NA * (3 * np.arange(1, num_basis + 1) - 2) * min_wavelength / wavelength
# Place the origin at the center of the final PSF array
x0 = (size_x - 1) / 2
y0 = (size_y - 1) / 2
# Find the maximum possible radius coordinate of the PSF array by finding the distance
# from the center of the array to a corner
max_radius = round(sqrt((size_x - x0) * (size_x - x0) + (size_y - y0) * (size_y - y0))) + 1;
# Radial coordinates, image space
r = resPSF * np.arange(0, oversampling * max_radius) / oversampling
# Radial coordinates, pupil space
a = min([NA, ns, ni, ni0, ng, ng0]) / NA
rho = np.linspace(0, a, num_samples)
# Convert z to array
z = np.array(z)
# Define the wavefront aberration
OPDs = pZ * np.sqrt(ns * ns - NA * NA * rho * rho) # OPD in the sample
OPDi = (z.reshape(-1,1) + ti0) * np.sqrt(ni * ni - NA * NA * rho * rho) - ti0 * np.sqrt(ni0 * ni0 - NA * NA * rho * rho) # OPD in the immersion medium
OPDg = tg * np.sqrt(ng * ng - NA * NA * rho * rho) - tg0 * np.sqrt(ng0 * ng0 - NA * NA * rho * rho) # OPD in the coverslip
W = 2 * np.pi / wavelength * (OPDs + OPDi + OPDg)
# Sample the phase
# Shape is (number of z samples by number of rho samples)
phase = np.cos(W) + 1j * np.sin(W)
# Define the basis of Bessel functions
# Shape is (number of basis functions by number of rho samples)
J = scipy.special.jv(0, scaling_factor.reshape(-1, 1) * rho)
# Compute the approximation to the sampled pupil phase by finding the least squares
# solution to the complex coefficients of the Fourier-Bessel expansion.
# Shape of C is (number of basis functions by number of z samples).
# Note the matrix transposes to get the dimensions correct.
C, residuals, _, _ = np.linalg.lstsq(J.T, phase.T)
b = 2 * np. pi * r.reshape(-1, 1) * NA / wavelength
# Convenience functions for J0 and J1 Bessel functions
J0 = lambda x: scipy.special.jv(0, x)
J1 = lambda x: scipy.special.jv(1, x)
# See equation 5 in Li, Xue, and Blu
denom = scaling_factor * scaling_factor - b * b
R = (scaling_factor * J1(scaling_factor * a) * J0(b * a) * a - b * J0(scaling_factor * a) * J1(b * a) * a)
R /= denom
# The transpose places the axial direction along the first dimension of the array, i.e. rows
# This is only for convenience.
PSF_rz = (np.abs(R.dot(C))**2).T
# Create the fleshed-out xy grid of radial distances from the center
xy = np.mgrid[0:size_y, 0:size_x]
r_pixel = np.sqrt((xy[1] - x0) * (xy[1] - x0) + (xy[0] - y0) * (xy[0] - y0)) * resPSF
PSF = np.zeros((size_y, size_x, size_z))
for z_index in range(PSF.shape[2]):
# Interpolate the radial PSF function
PSF_interp = interp1d(r, PSF_rz[z_index, :])
# Evaluate the PSF at each value of r_pixel
PSF[:,:, z_index] = PSF_interp(r_pixel.ravel()).reshape(size_y, size_x)
# Normalize to the area
norm_const = np.sum(np.sum(PSF[:,:,0])) * resPSF**2
PSF /= norm_const
plt.imshow(PSF[:,:,0])
plt.show()
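# Added check (not in the original walkthrough): plot the radial PSF profile at
# the single z position on a log scale.
plt.semilogy(r, PSF_rz[0, :])
plt.xlabel('r, microns')
plt.ylabel('PSF intensity (a.u.)')
plt.show()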
cdf = np.cumsum(PSF[:,:,0], axis=1) * resPSF
cdf = np.cumsum(cdf, axis=0) * resPSF
print('Min: {:.4f}'.format(np.min(cdf)))
print('Max: {:.4f}'.format(np.max(cdf)))
plt.imshow(cdf)
plt.show()
x = (resPSF * (xy[1] - x0))[0]
y = (resPSF * (xy[0] - y0))[:,0]
# Compute the interpolated CDF
f = RectBivariateSpline(x, y, cdf)
def generatePixelSignature(pX, pY, eX, eY, eZ):
value = f((pX - eX + 0.5) * resLateral, (pY - eY + 0.5) * resLateral) + \
f((pX - eX - 0.5) * resLateral, (pY - eY - 0.5) * resLateral) - \
f((pX - eX + 0.5) * resLateral, (pY - eY - 0.5) * resLateral) - \
f((pX - eX - 0.5) * resLateral, (pY - eY + 0.5) * resLateral)
return value
generatePixelSignature(0, 0, 0, -1, 0)
generatePixelSignature(1, 1, 1, 1, 0)
generatePixelSignature(2, 1, 1, 1, 0)
generatePixelSignature(0, 1, 1, 1, 0)
generatePixelSignature(1, 2, 1, 1, 0)
generatePixelSignature(1, 0, 1, 1, 0)
generatePixelSignature(-1, 1, 1, 1, 0)
generatePixelSignature(3, 1, 1, 1, 0)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This was developed using Python 3.5.2 (Anaconda) and the TensorFlow version shown below.
Step2: The VGG-16 model is downloaded from the internet. This is the default directory where you want to save the data-files. The directory will be created if it does not exist.
Step3: Helper-functions for image manipulation
Step4: Save an image as a jpeg-file. The image is given as a numpy array with pixel-values between 0 and 255.
Step5: This function plots a large image. The image is given as a numpy array with pixel-values between 0 and 255.
Step9: Loss Functions
Step10: Example
Step11: Then we load the style-image which has the colours and textures we want in the mixed-image.
Step12: Then we define a list of integers which identify the layers in the neural network that we want to use for matching the content-image. These are indices into the layers in the neural network. For the VGG16 model, the 5th layer (index 4) seems to work well as the sole content-layer.
Step13: Then we define another list of integers for the style-layers.
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
import PIL.Image
tf.__version__
import vgg16
# vgg16.data_dir = 'vgg16/'
vgg16.maybe_download()
def load_image(filename, max_size=None):
image = PIL.Image.open(filename)
if max_size is not None:
# Calculate the appropriate rescale-factor for
# ensuring a max height and width, while keeping
# the proportion between them.
factor = max_size / np.max(image.size)
# Scale the image's height and width.
size = np.array(image.size) * factor
# The size is now floating-point because it was scaled.
# But PIL requires the size to be integers.
size = size.astype(int)
# Resize the image.
image = image.resize(size, PIL.Image.LANCZOS)
print(image)
# Convert to numpy floating-point array.
return np.float32(image)
def save_image(image, filename):
# Ensure the pixel-values are between 0 and 255.
image = np.clip(image, 0.0, 255.0)
# Convert to bytes.
image = image.astype(np.uint8)
# Write the image-file in jpeg-format.
with open(filename, 'wb') as file:
PIL.Image.fromarray(image).save(file, 'jpeg')
def plot_image_big(image):
# Ensure the pixel-values are between 0 and 255.
image = np.clip(image, 0.0, 255.0)
# Convert pixels to bytes.
image = image.astype(np.uint8)
# Convert to a PIL-image and display it.
display(PIL.Image.fromarray(image))
def plot_images(content_image, style_image, mixed_image):
# Create figure with sub-plots.
fig, axes = plt.subplots(1, 3, figsize=(10, 10))
# Adjust vertical spacing.
fig.subplots_adjust(hspace=0.1, wspace=0.1)
# Use interpolation to smooth pixels?
smooth = True
# Interpolation type.
if smooth:
interpolation = 'sinc'
else:
interpolation = 'nearest'
# Plot the content-image.
# Note that the pixel-values are normalized to
# the [0.0, 1.0] range by dividing with 255.
ax = axes.flat[0]
ax.imshow(content_image / 255.0, interpolation=interpolation)
ax.set_xlabel("Content")
# Plot the mixed-image.
ax = axes.flat[1]
ax.imshow(mixed_image / 255.0, interpolation=interpolation)
ax.set_xlabel("Mixed")
# Plot the style-image
ax = axes.flat[2]
ax.imshow(style_image / 255.0, interpolation=interpolation)
ax.set_xlabel("Style")
# Remove ticks from all the plots.
for ax in axes.flat:
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
def mean_squared_error(a, b):
return tf.reduce_mean(tf.square(a - b))
def create_content_loss(session, model, content_image, layer_ids):
    """
    Create the loss-function for the content-image.

    Parameters:
    session: An open TensorFlow session for running the model's graph.
    model: The model, e.g. an instance of the VGG16-class.
    content_image: Numpy float array with the content-image.
    layer_ids: List of integer id's for the layers to use in the model.
    """
# Create a feed-dict with the content-image.
feed_dict = model.create_feed_dict(image=content_image)
# Get references to the tensors for the given layers.
layers = model.get_layer_tensors(layer_ids)
# Calculate the output values of those layers when
# feeding the content-image to the model.
values = session.run(layers, feed_dict=feed_dict)
# Set the model's graph as the default so we can add
# computational nodes to it. It is not always clear
# when this is necessary in TensorFlow, but if you
# want to re-use this code then it may be necessary.
with model.graph.as_default():
# Initialize an empty list of loss-functions.
layer_losses = []
# For each layer and its corresponding values
# for the content-image.
for value, layer in zip(values, layers):
# These are the values that are calculated
# for this layer in the model when inputting
# the content-image. Wrap it to ensure it
# is a const - although this may be done
# automatically by TensorFlow.
value_const = tf.constant(value)
# The loss-function for this layer is the
# Mean Squared Error between the layer-values
# when inputting the content- and mixed-images.
# Note that the mixed-image is not calculated
# yet, we are merely creating the operations
# for calculating the MSE between those two.
loss = mean_squared_error(layer, value_const)
# Add the loss-function for this layer to the
# list of loss-functions.
layer_losses.append(loss)
# The combined loss for all layers is just the average.
# The loss-functions could be weighted differently for
# each layer. You can try it and see what happens.
total_loss = tf.reduce_mean(layer_losses)
return total_loss
def gram_matrix(tensor):
shape = tensor.get_shape()
# Get the number of feature channels for the input tensor,
# which is assumed to be from a convolutional layer with 4-dim.
num_channels = int(shape[3])
# Reshape the tensor so it is a 2-dim matrix. This essentially
# flattens the contents of each feature-channel.
matrix = tf.reshape(tensor, shape=[-1, num_channels])
# Calculate the Gram-matrix as the matrix-product of
# the 2-dim matrix with itself. This calculates the
# dot-products of all combinations of the feature-channels.
gram = tf.matmul(tf.transpose(matrix), matrix)
return gram
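# For a conv-layer tensor of shape [batch, height, width, channels] this returns a
# [channels, channels] matrix of dot-products between the flattened feature-channels;
# the style-loss below compares these Gram-matrices for the style- and mixed-images.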
def create_style_loss(session, model, style_image, layer_ids):
"""
Create the loss-function for the style-image.
Parameters:
session: An open TensorFlow session for running the model's graph.
model: The model, e.g. an instance of the VGG16-class.
style_image: Numpy float array with the style-image.
layer_ids: List of integer id's for the layers to use in the model.
"""
# Create a feed-dict with the style-image.
feed_dict = model.create_feed_dict(image=style_image)
# Get references to the tensors for the given layers.
layers = model.get_layer_tensors(layer_ids)
layerIdCount=len(layer_ids)
print('count of layer ids:',layerIdCount)
# Set the model's graph as the default so we can add
# computational nodes to it. It is not always clear
# when this is necessary in TensorFlow, but if you
# want to re-use this code then it may be necessary.
with model.graph.as_default():
# Construct the TensorFlow-operations for calculating
# the Gram-matrices for each of the layers.
gram_layers = [gram_matrix(layer) for layer in layers]
# Calculate the values of those Gram-matrices when
# feeding the style-image to the model.
values = session.run(gram_layers, feed_dict=feed_dict)
# Initialize an empty list of loss-functions.
layer_losses = []
# For each Gram-matrix layer and its corresponding values.
for value, gram_layer in zip(values, gram_layers):
# These are the Gram-matrix values that are calculated
# for this layer in the model when inputting the
# style-image. Wrap it to ensure it is a const,
# although this may be done automatically by TensorFlow.
value_const = tf.constant(value)
# The loss-function for this layer is the
# Mean Squared Error between the Gram-matrix values
# for the content- and mixed-images.
# Note that the mixed-image is not calculated
# yet, we are merely creating the operations
# for calculating the MSE between those two.
loss = mean_squared_error(gram_layer, value_const)
# Add the loss-function for this layer to the
# list of loss-functions.
layer_losses.append(loss)
# The combined loss for all layers is just the average.
# The loss-functions could be weighted differently for
# each layer. You can try it and see what happens.
total_loss = tf.reduce_mean(layer_losses)
return total_loss
def create_denoise_loss(model):
loss = tf.reduce_sum(tf.abs(model.input[:,1:,:,:] - model.input[:,:-1,:,:])) + \
tf.reduce_sum(tf.abs(model.input[:,:,1:,:] - model.input[:,:,:-1,:]))
return loss
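# This is a total-variation style loss: it sums the absolute differences between
# neighbouring pixel values of the mixed-image, which penalizes high-frequency noise.
# It is defined here but left commented out in style_transfer() below.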
def style_transfer(content_image, style_image,
content_layer_ids, style_layer_ids,
weight_content=1.5, weight_style=10.0,
weight_denoise=0.3,
num_iterations=120, step_size=10.0):
"""
Use gradient descent to find an image that minimizes the
loss-functions of the content-layers and style-layers. This
should result in a mixed-image that resembles the contours
of the content-image, and resembles the colours and textures
of the style-image.
Parameters:
content_image: Numpy 3-dim float-array with the content-image.
style_image: Numpy 3-dim float-array with the style-image.
content_layer_ids: List of integers identifying the content-layers.
style_layer_ids: List of integers identifying the style-layers.
weight_content: Weight for the content-loss-function.
weight_style: Weight for the style-loss-function.
weight_denoise: Weight for the denoising-loss-function.
num_iterations: Number of optimization iterations to perform.
step_size: Step-size for the gradient in each iteration.
"""
# Create an instance of the VGG16-model. This is done
# in each call of this function, because we will add
# operations to the graph so it can grow very large
# and run out of RAM if we keep using the same instance.
model = vgg16.VGG16()
# Create a TensorFlow-session.
session = tf.InteractiveSession(graph=model.graph)
# Print the names of the content-layers.
print("Content layers:")
print(model.get_layer_names(content_layer_ids))
print('Content Layers:',content_layer_ids)
print()
# Print the names of the style-layers.
print("Style layers:")
print(model.get_layer_names(style_layer_ids))
print('Style Layers:',style_layer_ids)
print()
# Printing the input parameters of the function
print('Weight Content:',weight_content)
print('Weight Style:',weight_style)
# Commented by Shreyas..........
#print('Weight Denoise:',weight_denoise)
print('Number of Iterations:',num_iterations)
print('Step Size:',step_size)
print()
# Create the loss-function for the content-layers and -image.
loss_content = create_content_loss(session=session,
model=model,
content_image=content_image,
layer_ids=content_layer_ids)
# Create the loss-function for the style-layers and -image.
loss_style = create_style_loss(session=session,
model=model,
style_image=style_image,
layer_ids=style_layer_ids)
# Create the loss-function for the denoising of the mixed-image.
#loss_denoise = create_denoise_loss(model)
# Create TensorFlow variables for adjusting the values of
# the loss-functions. This is explained below.
adj_content = tf.Variable(1e-10, name='adj_content')
adj_style = tf.Variable(1e-10, name='adj_style')
#adj_denoise = tf.Variable(1e-10, name='adj_denoise')
# Initialize the adjustment values for the loss-functions.
#session.run([adj_content.initializer,
# adj_style.initializer,
# adj_denoise.initializer])
session.run([adj_content.initializer,
adj_style.initializer])
# Create TensorFlow operations for updating the adjustment values.
# These are basically just the reciprocal values of the
# loss-functions, with a small value 1e-10 added to avoid the
# possibility of division by zero.
update_adj_content = adj_content.assign(1.0 / (loss_content + 1e-10))
update_adj_style = adj_style.assign(1.0 / (loss_style + 1e-10))
#update_adj_denoise = adj_denoise.assign(1.0 / (loss_denoise + 1e-10))
# This is the weighted loss-function that we will minimize
# below in order to generate the mixed-image.
# Because we multiply the loss-values with their reciprocal
# adjustment values, we can use relative weights for the
# loss-functions that are easier to select, as they are
# independent of the exact choice of style- and content-layers.
#loss_combined = weight_content * adj_content * loss_content + \
# weight_style * adj_style * loss_style + \
# weight_denoise * adj_denoise * loss_denoise
loss_combined = weight_content * adj_content * loss_content + \
weight_style * adj_style * loss_style
#loss_combined = loss_combined/3
# Use TensorFlow to get the mathematical function for the
# gradient of the combined loss-function with regard to
# the input image.
gradient = tf.gradients(loss_combined, model.input)
# List of tensors that we will run in each optimization iteration.
#run_list = [gradient, update_adj_content, update_adj_style, \
# update_adj_denoise]
run_list = [gradient, update_adj_content, update_adj_style]
# The mixed-image is initialized with random noise.
# It is the same size as the content-image.
mixed_image = np.random.rand(*content_image.shape) + 128
for i in range(num_iterations):
# Create a feed-dict with the mixed-image.
feed_dict = model.create_feed_dict(image=mixed_image)
# Use TensorFlow to calculate the value of the
# gradient, as well as updating the adjustment values.
#grad, adj_content_val, adj_style_val, adj_denoise_val \
#= session.run(run_list, feed_dict=feed_dict)
grad, adj_content_val, adj_style_val \
= session.run(run_list, feed_dict=feed_dict)
# Reduce the dimensionality of the gradient.
grad = np.squeeze(grad)
# Scale the step-size according to the gradient-values.
step_size_scaled = step_size / (np.std(grad) + 1e-8)
# Update the image by following the gradient.
mixed_image -= grad * step_size_scaled
# Ensure the image has valid pixel-values between 0 and 255.
mixed_image = np.clip(mixed_image, 0.0, 255.0)
# Print a little progress-indicator.
print(". ", end="")
# Display status once every 10 iterations, and the last.
if (i % 10 == 0) or (i == num_iterations - 1):
print()
print("Iteration:", i)
# Print adjustment weights for loss-functions.
#msg = "Weight Adj. for Content: {0:.2e}, Style: {1:.2e}, Denoise: {2:.2e}"
#print(msg.format(adj_content_val, adj_style_val, adj_denoise_val))
msg = "Weight Adj. for Content: {0:.2e}, Style: {1:.2e}"
print(msg.format(adj_content_val, adj_style_val))
# Plot the content-, style- and mixed-images.
plot_images(content_image=content_image,
style_image=style_image,
mixed_image=mixed_image)
#Saving the mixed image after every 10 iterations
filename='images/outputs_StyleTransfer/Mixed_Iteration' + str(i) +'.jpg'
print(filename)
save_image(mixed_image, filename)
print()
print("Final image:")
plot_image_big(mixed_image)
# Close the TensorFlow session to release its resources.
session.close()
# Return the mixed-image.
return mixed_image
content_filename = 'images/eiffel.jpg'
content_image = load_image(content_filename, max_size=None)
filenamecontent='images/outputs_StyleTransfer/Content.jpg'
print(filenamecontent)
save_image(content_image, filenamecontent)
style_filename = 'images/style26.jpg'
style_image = load_image(style_filename, max_size=None)
filenamestyle='images/outputs_StyleTransfer/Style.jpg'
print(filenamestyle)
save_image(style_image, filenamestyle)
content_layer_ids = [4,6]
# The VGG16-model has 13 convolutional layers.
# This selects all those layers as the style-layers.
# This is somewhat slow to optimize.
style_layer_ids = list(range(13))
# You can also select a sub-set of the layers, e.g. like this:
# style_layer_ids = [1, 2, 3, 4]
%%time
img = style_transfer(content_image=content_image,
style_image=style_image,
content_layer_ids=content_layer_ids,
style_layer_ids=style_layer_ids,
weight_content=1.5,
weight_style=10.0,
weight_denoise=0.3,
num_iterations=150,
step_size=10.0)
# Function for printing mixed output image
filename='images/outputs_StyleTransfer/Mixed.jpg'
save_image(img, filename)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
| <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'test-institute-2', 'sandbox-3', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
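# For example (hypothetical values - replace with the real document authors):
# DOC.set_author("Jane Doe", "jane.doe@example.org")
# DOC.set_contributor("John Doe", "john.doe@example.org")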
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
from sklearn.preprocessing import MinMaxScaler
a = np.array([[-1, 2], [-0.5, 6]])
scaler = MinMaxScaler()
a_one_column = a.reshape(-1, 1)
result_one_column = scaler.fit_transform(a_one_column)
result = result_one_column.reshape(a.shape)
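# For this input the flattened values [-1, 2, -0.5, 6] are rescaled to [0, 1],
# so result should be approximately [[0., 0.4286], [0.0714, 1.]].
print(result)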
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The speech commands
Step2: The code below generates a label ID for a command. The ID is -1 for any command not in the to_keep list; otherwise it is the index of the keyword in this list.
Step3: The feature
Step4: The final feature is the ZCR computed on a 1-second segment and then filtered. We use a sliding window and apply a Hann window to each frame.
Step5: The patterns
Step6: The following code gives the number of speech samples for each keyword.
Step7: The following code generates the patterns used for training the ML model.
Step8: The code below extracts the training and test patterns.
Step9: Testing on a signal
Step10: Simple function to display a spectrogram. It is adapted from a SciPy example.
Step11: Display of the feature to compare with the spectrogram.
Step12: Patterns for training
Step13: Logistic Regression
Step14: We are using the best estimator found during the randomized search
Step15: The confusion matrix is generated from the test patterns to check the behavior of the classifier
Step16: We compute the final score. 0.8 is really the minimum acceptable value for this kind of demo.
Step17: We can now save the model so that next time we want to play with the notebook and test the CMSIS-DSP implementation we do not have to retrain the model
Step18: And we can reload the saved model
Step19: Reference implementation with Matrix
Step20: And like in the code above with scikit-learn, we are checking the result with the confusion matrix and the score. It should give the same results
Step21: CMSIS-DSP implementation
Step22: For the FIR, CMSIS-DSP is using a FIR instance structure and thus we need to define it
Step23: Let's check that the feature gives the same result as the reference implementation using linear algebra.
Step24: The feature code is working, so now we can implement the predict
Step25: And finally we can check the CMSIS-DSP behavior of the test patterns
Step26: We are getting very similar results to the reference implementation. Now let's explore fixed point.
Step27: Now we can implement the zcr and feature in Q31.
Step28: Let's check the feature on the data to compare with the F32 version and check it is working
Step29: The Q31 feature is very similar to the F32 one so now we can implement the predict
Step30: Now we can check the Q31 implementation on the test patterns
Step31: The score is as good as the F32 implementation.
Step32: Q15 version is as good as other versions so we are selecting this implementation to run on the Arduino (once it has been converted to C).
Step33: To describe our compute graph, we need to describe the nodes which are used in this graph.
Step34: We need some parameters. Those parameters need to be coherent with the values defined in the features in the above code.
Step36: Below function is
Step37: Next line is generating sched.py which is the Python implementation of the compute graph and its static scheduling. This file is describing the FIFOs connecting the nodes and describing how the nodes are scheduled.
Step38: Next line is generating the C++ schedule that we will need for the Arduino implementation
Step39: Now we'd like to test the Q15 classifier and the static schedule on a real pattern.
Step40: Let's plot the signal to check we have the right one
Step41: Now we can run our static schedule on this file.
Step42: The code is working. We get more printed 'Yes' detections than occurrences of 'Yes' in the pattern because we slide by 0.5 second between recognitions, so the same word can be recognized several times.
Step43: Once the model is loaded, we extract the values and convert them to Q15
Step45: Now we need to generate C arrays for the ML model parameters. Those parameters are generated into kws/coef.cpp
Step46: Generation of the coef code
Step47: The implementation of the nodes is in kws/AppNodes.h. It is very similar to the appnodes.py but using the CMSIS-DSP C API.
Step48: The first time the below command is executed, it will take a very long time. The full CMSIS-DSP library has to be rebuilt for the Arduino.
Step49: Testing
| <ASSISTANT_TASK:>
Python Code:
import cmsisdsp as dsp
import cmsisdsp.fixedpoint as fix
import numpy as np
import os.path
import glob
import pathlib
import random
import soundfile as sf
import matplotlib.pyplot as plt
from IPython.display import display,Audio,HTML
import scipy.signal
from numpy.lib.stride_tricks import sliding_window_view
from scipy.signal.windows import hann
from sklearn import svm
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import uniform
from sklearn.linear_model import LogisticRegression
import pickle
MINISPEECH="mini_speech_commands"
commands=np.array([os.path.basename(f) for f in glob.glob(os.path.join(MINISPEECH,"mini_speech_commands","*"))])
commands=commands[commands != "README.md"]
# Any other word will be recognized as unknown
to_keep=['yes']
UNKNOWN_CLASS = -1
def get_label(name):
return(pathlib.PurePath(name).parts[-2])
def get_label_id(name):
label=get_label(name)
if label in to_keep:
return(to_keep.index(label))
else:
return(UNKNOWN_CLASS)
def zcr(w):
w = w-np.mean(w)
f=w[:-1]
g=w[1:]
k=np.count_nonzero(np.logical_and(f*g<0, g>f))
return(1.0*k/len(f))
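# Quick illustrative sanity check: this zcr counts only rising zero crossings,
# so a 100 Hz sine sampled at 16 kHz should give roughly 100/16000 = 0.00625
# crossings per sample.
t_demo = np.arange(16000)/16000.0
print(zcr(np.sin(2*np.pi*100*t_demo)))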
def feature(data):
samplerate=16000
input_len = 16000
# The speech pattern is padded to ensure it has a duration of 1 second
waveform = data[:input_len]
zero_padding = np.zeros(
16000 - waveform.shape[0],
dtype=np.float32)
signal = np.hstack([waveform, zero_padding])
# We decompose the input signal into overlapping windows. The signal in each window
# is premultiplied by a Hann window of the right size.
# Warning : if you change the window duration and audio offset, you'll need to change the value
# in the scripts used for the scheduling of the compute graph later.
winDuration=25e-3
audioOffsetDuration=10e-3
winLength=int(np.floor(samplerate*winDuration))
audioOffset=int(np.floor(samplerate*audioOffsetDuration))
overlap=winLength-audioOffset
window=hann(winLength,sym=False)
reta=[zcr(x*window) for x in sliding_window_view(signal,winLength)[::audioOffset,:]]
# The final signal is filtered. We have tested several variations on the feature. This filtering is
# improving the recognition
reta=scipy.signal.lfilter(np.ones(10)/10.0,[1],reta)
return(np.array(reta))
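# For reference: with a 25 ms window (400 samples), a 10 ms offset (160 samples)
# and a 16000-sample signal, this produces floor((16000-400)/160)+1 = 98 frames,
# which matches the blockSize of 98 used for the CMSIS-DSP FIR further below.
print(feature(np.zeros(16000)).shape)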
class Pattern:
def __init__(self,p):
global UNKNOWN_CLASS
if isinstance(p, str):
self._isFile=True
self._filename=p
self._label=get_label_id(p)
data, samplerate = sf.read(self._filename)
self._feature = feature(data)
else:
self._isFile=False
self._noiseLevel=p
self._label=UNKNOWN_CLASS
noise=np.random.randn(16000)*p
self._feature=feature(noise)
@property
def label(self):
return(self._label)
@property
def feature(self):
return(self._feature)
# Only useful for plotting
# The random pattern will be different each time
@property
def signal(self):
if not self._isFile:
return(np.random.randn(16000)*self._noiseLevel)
else:
data, samplerate = sf.read(self._filename)
return(data)
files_per_command=len(glob.glob(os.path.join(MINISPEECH,"mini_speech_commands",commands[0],"*")))
files_per_command
# Add patterns we want to detect
filenames=[]
for f in to_keep:
filenames+=glob.glob(os.path.join(MINISPEECH,"mini_speech_commands",f,"*"))
random.shuffle(filenames)
# Add remaining patterns
remaining_words=list(set(commands)-set(to_keep))
nb_noise=0
remaining=[]
for f in remaining_words:
remaining+=glob.glob(os.path.join(MINISPEECH,"mini_speech_commands",f,"*"))
random.shuffle(remaining)
filenames += remaining[0:files_per_command-nb_noise]
patterns=[Pattern(x) for x in filenames]
for i in range(nb_noise):
patterns.append(Pattern(np.abs(np.random.rand(1)*0.05)[0]))
random.shuffle(patterns)
print(len(patterns))
patterns=np.array(patterns)
nb_patterns = len(patterns)
nb_train= int(np.floor(0.8 * nb_patterns))
nb_tests=nb_patterns-nb_train
train_patterns = patterns[:nb_train]
test_patterns = patterns[-nb_tests:]
nbpat=50
data = patterns[nbpat].signal
samplerate=16000
plt.plot(data)
plt.show()
audio=Audio(data=data,rate=samplerate,autoplay=False)
audio
def get_spectrogram(waveform,fs):
# Zero-padding for an audio waveform with less than 16,000 samples.
input_len = 16000
waveform = waveform[:input_len]
zero_padding = np.zeros(
16000 - waveform.shape[0],
dtype=np.float32)
mmax=np.max(np.abs(waveform))
equal_length = np.hstack([waveform, zero_padding])
f, t, Zxx = scipy.signal.stft(equal_length, fs, nperseg=1000)
plt.pcolormesh(t, f, np.abs(Zxx), vmin=0, vmax=mmax/100, shading='gouraud')
plt.title('STFT Magnitude')
plt.ylabel('Frequency [Hz]')
plt.xlabel('Time [sec]')
plt.show()
get_spectrogram(data,16000)
feat=feature(data)
plt.plot(feat)
plt.show()
X=np.array([x.feature for x in train_patterns])
X.shape
y=np.array([x.label for x in train_patterns])
y.shape
y_test = [x.label for x in test_patterns]
X_test = [x.feature for x in test_patterns]
distributionsb = dict(C=uniform(loc=1, scale=1000)
)
reg = LogisticRegression(penalty="l1", solver="saga", tol=0.1)
clfb=RandomizedSearchCV(reg, distributionsb,random_state=0,n_iter=50).fit(X, y)
clfb.best_estimator_
y_pred = clfb.predict(X_test)
labels=["Unknown"] + to_keep
ConfusionMatrixDisplay.from_predictions(y_test, y_pred,display_labels=labels)
clfb.score(X_test, y_test)
with open("logistic.pickle","wb") as f:
s = pickle.dump(clfb,f)
with open("logistic.pickle","rb") as f:
clfb=pickle.load(f)
def predict(feat):
coef=clfb.best_estimator_.coef_
intercept=clfb.best_estimator_.intercept_
res=np.dot(coef,feat) + intercept
if res<0:
return(-1)
else:
return(0)
y_pred_ref = [predict(x) for x in X_test]
labels=["Unknown"] + to_keep
ConfusionMatrixDisplay.from_predictions(y_test, y_pred_ref,display_labels=labels)
np.count_nonzero(np.equal(y_test,y_pred_ref))/len(y_test)
coef_f32=clfb.best_estimator_.coef_
intercept_f32=clfb.best_estimator_.intercept_
def dsp_zcr(w):
m = dsp.arm_mean_f32(w)
m = -m
w = dsp.arm_offset_f32(w,m)
f=w[:-1]
g=w[1:]
k=np.count_nonzero(np.logical_and(f*g<0, g>f))
return(1.0*k/len(f))
firf32 = dsp.arm_fir_instance_f32()
def dsp_feature(data):
samplerate=16000
input_len = 16000
waveform = data[:input_len]
zero_padding = np.zeros(
16000 - waveform.shape[0],
dtype=np.float32)
signal = np.hstack([waveform, zero_padding])
winDuration=25e-3
audioOffsetDuration=10e-3
winLength=int(np.floor(samplerate*winDuration))
audioOffset=int(np.floor(samplerate*audioOffsetDuration))
overlap=winLength -audioOffset
window=hann(winLength,sym=False)
reta=[dsp_zcr(dsp.arm_mult_f32(x,window)) for x in sliding_window_view(signal,winLength)[::audioOffset,:]]
# Reset state and filter
# We want to start with a clean filter each time we filter a new feature.
# So the filter state is reset each time.
blockSize=98
numTaps=10
stateLength = numTaps + blockSize - 1
dsp.arm_fir_init_f32(firf32,10,np.ones(10)/10.0,np.zeros(stateLength))
reta=dsp.arm_fir_f32(firf32,reta)
return(np.array(reta))
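# Illustrative cross-check (not part of the original flow): the CMSIS-DSP float32
# feature should closely match the NumPy/SciPy reference implementation above.
# The FIR taps are all equal, so coefficient ordering does not matter here.
print("max |dsp_feature - feature| =", np.max(np.abs(dsp_feature(data) - feature(data))))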
feat=dsp_feature(data)
plt.plot(feat)
plt.show()
def dsp_predict(feat):
res=dsp.arm_dot_prod_f32(coef_f32,feat)
res = res + intercept_f32
if res[0]<0:
return(-1)
else:
return(0)
y_pred_ref = [dsp_predict(dsp_feature(x.signal)) for x in test_patterns]
labels=["Unknown"] + to_keep
ConfusionMatrixDisplay.from_predictions(y_test, y_pred_ref,display_labels=labels)
np.count_nonzero(np.equal(y_test,y_pred_ref))/len(y_test)
scaled_coef=clfb.best_estimator_.coef_
coef_shift=0
while np.max(np.abs(scaled_coef)) > 1:
scaled_coef = scaled_coef / 2.0
coef_shift = coef_shift + 1
coef_q31=fix.toQ31(scaled_coef)
scaled_intercept = clfb.best_estimator_.intercept_
intercept_shift = 0
while np.abs(scaled_intercept) > 1:
scaled_intercept = scaled_intercept / 2.0
intercept_shift = intercept_shift + 1
intercept_q31=fix.toQ31(scaled_intercept)
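# Illustrative check of the fixed-point scaling (assumes fix.Q31toF32 inverts
# fix.toQ31, as used further below): undoing the quantization and the shift
# should approximately recover the original float coefficients.
print(np.max(np.abs(fix.Q31toF32(coef_q31) * 2.0**coef_shift - clfb.best_estimator_.coef_)))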
def dsp_zcr_q31(w):
m = dsp.arm_mean_q31(w)
# Negate can saturate, so we use the CMSIS-DSP function, which works on arrays (our scalar is wrapped in a one-element array)
m = dsp.arm_negate_q31(np.array([m]))[0]
w = dsp.arm_offset_q31(w,m)
f=w[:-1]
g=w[1:]
k=np.count_nonzero(np.logical_and(np.logical_or(np.logical_and(f>0,g<0), np.logical_and(f<0,g>0)),g>f))
# k < len(f) so shift should be 0 except when k == len(f)
# When k==len(f) normally quotient is 0x40000000 and shift 1 and we convert
# this to 0x7FFFFFFF and shift 0
status,quotient,shift_val=dsp.arm_divide_q31(k,len(f))
if shift_val==1:
return(dsp.arm_shift_q31(np.array([quotient]),shift_val)[0])
else:
return(quotient)
firq31 = dsp.arm_fir_instance_q31()
def dsp_feature_q31(data):
samplerate=16000
input_len = 16000
waveform = data[:input_len]
zero_padding = np.zeros(
16000 - waveform.shape[0],
dtype=np.int32)
signal = np.hstack([waveform, zero_padding])
winDuration=25e-3
audioOffsetDuration=10e-3
winLength=int(np.floor(samplerate*winDuration))
audioOffset=int(np.floor(samplerate*audioOffsetDuration))
overlap=winLength-audioOffset
window=fix.toQ31(hann(winLength,sym=False))
reta=[dsp_zcr_q31(dsp.arm_mult_q31(x,window)) for x in sliding_window_view(signal,winLength)[::audioOffset,:]]
# Reset state and filter
blockSize=98
numTaps=10
stateLength = numTaps + blockSize - 1
dsp.arm_fir_init_q31(firq31,10,fix.toQ31(np.ones(10)/10.0),np.zeros(stateLength,dtype=np.int32))
reta=dsp.arm_fir_q31(firq31,reta)
return(np.array(reta))
feat=fix.Q31toF32(dsp_feature_q31(fix.toQ31(data)))
plt.plot(feat)
plt.show()
def dsp_predict_q31(feat):
res=dsp.arm_dot_prod_q31(coef_q31,feat)
# Before adding the res and the intercept we need to ensure they are in the same Qx.x format
# The scaling applied to the coefs and to the intercept is different so we need to scale
# the intercept to take this into account
scaled=dsp.arm_shift_q31(np.array([intercept_q31]),intercept_shift-coef_shift)[0]
# Because dot prod output is in Q16.48
# and the result is on 64 bits
scaled = np.int64(scaled) << 17
res = res + scaled
if res<0:
return(-1)
else:
return(0)
y_pred_ref = [dsp_predict_q31(dsp_feature_q31(fix.toQ31(x.signal))) for x in test_patterns]
labels=["Unknown"] + to_keep
ConfusionMatrixDisplay.from_predictions(y_test, y_pred_ref,display_labels=labels)
np.count_nonzero(np.equal(y_test,y_pred_ref))/len(y_test)
scaled_coef=clfb.best_estimator_.coef_
coef_shift=0
while np.max(np.abs(scaled_coef)) > 1:
scaled_coef = scaled_coef / 2.0
coef_shift = coef_shift + 1
coef_q15=fix.toQ15(scaled_coef)
scaled_intercept = clfb.best_estimator_.intercept_
intercept_shift = 0
while np.abs(scaled_intercept) > 1:
scaled_intercept = scaled_intercept / 2.0
intercept_shift = intercept_shift + 1
intercept_q15=fix.toQ15(scaled_intercept)
def dsp_zcr_q15(w):
m = dsp.arm_mean_q15(w)
# Negate can saturate, so we use the CMSIS-DSP function, which works on arrays (our scalar is wrapped in a one-element array)
m = dsp.arm_negate_q15(np.array([m]))[0]
w = dsp.arm_offset_q15(w,m)
f=w[:-1]
g=w[1:]
k=np.count_nonzero(np.logical_and(np.logical_or(np.logical_and(f>0,g<0), np.logical_and(f<0,g>0)),g>f))
# k < len(f) so shift should be 0 except when k == len(f)
# When k==len(f) normally quotient is 0x4000 and shift 1 and we convert
# this to 0x7FFF and shift 0
status,quotient,shift_val=dsp.arm_divide_q15(k,len(f))
if shift_val==1:
return(dsp.arm_shift_q15(np.array([quotient]),shift_val)[0])
else:
return(quotient)
firq15 = dsp.arm_fir_instance_q15()
def dsp_feature_q15(data):
samplerate=16000
input_len = 16000
waveform = data[:input_len]
zero_padding = np.zeros(
16000 - waveform.shape[0],
dtype=np.int16)
signal = np.hstack([waveform, zero_padding])
winDuration=25e-3
audioOffsetDuration=10e-3
winLength=int(np.floor(samplerate*winDuration))
audioOffset=int(np.floor(samplerate*audioOffsetDuration))
overlap=winLength - audioOffset
window=fix.toQ15(hann(winLength,sym=False))
reta=[dsp_zcr_q15(dsp.arm_mult_q15(x,window)) for x in sliding_window_view(signal,winLength)[::audioOffset,:]]
# Reset state and filter
blockSize=98
numTaps=10
stateLength = numTaps + blockSize - 1
dsp.arm_fir_init_q15(firq15,10,fix.toQ15(np.ones(10)/10.0),np.zeros(stateLength,dtype=np.int16))
reta=dsp.arm_fir_q15(firq15,reta)
return(np.array(reta))
feat=fix.Q15toF32(dsp_feature_q15(fix.toQ15(data)))
plt.plot(feat)
plt.show()
def dsp_predict_q15(feat):
res=dsp.arm_dot_prod_q15(coef_q15,feat)
scaled=dsp.arm_shift_q15(np.array([intercept_q15]),intercept_shift-coef_shift)[0]
# Because dot prod output is in Q34.30
# and the result is on 64 bits
scaled = np.int64(scaled) << 15
res = res + scaled
if res<0:
return(-1)
else:
return(0)
y_pred_ref = [dsp_predict_q15(dsp_feature_q15(fix.toQ15(x.signal))) for x in test_patterns]
labels=["Unknown"] + to_keep
ConfusionMatrixDisplay.from_predictions(y_test, y_pred_ref,display_labels=labels)
np.count_nonzero(np.equal(y_test,y_pred_ref))/len(y_test)
from cmsisdsp.sdf.scheduler import *
class Source(GenericSource):
def __init__(self,name,inLength):
GenericSource.__init__(self,name)
q15Type=CType(Q15)
self.addOutput("o",q15Type,inLength)
@property
def typeName(self):
return "Source"
class Sink(GenericSink):
def __init__(self,name,outLength):
GenericSink.__init__(self,name)
q15Type=CType(Q15)
self.addInput("i",q15Type,outLength)
@property
def typeName(self):
return "Sink"
class Feature(GenericNode):
def __init__(self,name,inLength):
GenericNode.__init__(self,name)
q15Type=CType(Q15)
self.addInput("i",q15Type,inLength)
self.addOutput("o",q15Type,1)
@property
def typeName(self):
return "Feature"
class FIR(GenericNode):
def __init__(self,name,inLength,outLength):
GenericNode.__init__(self,name)
q15Type=CType(Q15)
self.addInput("i",q15Type,inLength)
self.addOutput("o",q15Type,outLength)
@property
def typeName(self):
return "FIR"
class KWS(GenericNode):
def __init__(self,name,inLength):
GenericNode.__init__(self,name)
q15Type=CType(Q15)
self.addInput("i",q15Type,inLength)
self.addOutput("o",q15Type,1)
@property
def typeName(self):
return "KWS"
q15Type=CType(Q15)
FS=16000
winDuration=25e-3
audioOffsetDuration=10e-3
winLength=int(np.floor(FS*winDuration))
audio_input_length=int(np.floor(FS*audioOffsetDuration))
AUDIO_INTERRUPT_LENGTH = audio_input_length
def gen_sched(python_code=True):
src=Source("src",AUDIO_INTERRUPT_LENGTH)
# For Python code, the input is a numpy array which is passed
# as argument of the node
if python_code:
src.addVariableArg("input_array")
sink=Sink("sink",1)
feature=Feature("feature",winLength)
feature.addVariableArg("window")
sliding_audio=SlidingBuffer("audioWin",q15Type,winLength,winLength-audio_input_length)
FEATURE_LENGTH=98 # for one second
FEATURE_OVERLAP = 49 # We slide feature by 0.5 second
sliding_feature=SlidingBuffer("featureWin",q15Type,FEATURE_LENGTH,FEATURE_OVERLAP)
kws=KWS("kws",FEATURE_LENGTH)
# Parameters of the ML model used by the node.
kws.addVariableArg("coef_q15")
kws.addVariableArg("coef_shift")
kws.addVariableArg("intercept_q15")
kws.addVariableArg("intercept_shift")
fir=FIR("fir",FEATURE_LENGTH,FEATURE_LENGTH)
# Description of the compute graph
g = Graph()
g.connect(src.o, sliding_audio.i)
g.connect(sliding_audio.o, feature.i)
g.connect(feature.o, sliding_feature.i)
g.connect(sliding_feature.o, fir.i)
g.connect(fir.o, kws.i)
g.connect(kws.o, sink.i)
# For Python we run for only around 13 seconds of input signal.
# Without this, it would run forever.
conf=Configuration()
if python_code:
conf.debugLimit=13
# We compute the scheduling
sched = g.computeSchedule(conf)
print("Schedule length = %d" % sched.scheduleLength)
print("Memory usage %d bytes" % sched.memory)
# We generate the scheduling code for a Python and C++ implementations
if python_code:
conf.pyOptionalArgs="input_array,window,coef_q15,coef_shift,intercept_q15,intercept_shift"
sched.pythoncode(".",config=conf)
with open("test.dot","w") as f:
sched.graphviz(f)
else:
conf.cOptionalArgs="""const q15_t *window,
const q15_t *coef_q15,
const int coef_shift,
const q15_t intercept_q15,
const int intercept_shift"""
conf.memoryOptimization=True
# When schedule is long
conf.codeArray=True
sched.ccode("kws",config=conf)
with open("kws/test.dot","w") as f:
sched.graphviz(f)
gen_sched(True)
gen_sched(False)
from urllib.request import urlopen
import io
import soundfile as sf
test_pattern_url="https://github.com/ARM-software/VHT-SystemModeling/blob/main/EchoCanceller/sounds/yesno.wav?raw=true"
f = urlopen(test_pattern_url)
filedata = f.read()
data, samplerate = sf.read(io.BytesIO(filedata))
if len(data.shape)>1:
data=data[:,0]
plt.plot(data)
plt.show()
import sched as s
from importlib import reload
import appnodes
appnodes= reload(appnodes)
s = reload(s)
dataQ15=fix.toQ15(data)
windowQ15=fix.toQ15(hann(winLength,sym=False))
nb,error = s.scheduler(dataQ15,windowQ15,coef_q15,coef_shift,intercept_q15,intercept_shift)
with open("logistic.pickle","rb") as f:
clfb=pickle.load(f)
scaled_coef=clfb.best_estimator_.coef_
coef_shift=0
while np.max(np.abs(scaled_coef)) > 1:
scaled_coef = scaled_coef / 2.0
coef_shift = coef_shift + 1
coef_q15=fix.toQ15(scaled_coef)
scaled_intercept = clfb.best_estimator_.intercept_
intercept_shift = 0
while np.abs(scaled_intercept) > 1:
scaled_intercept = scaled_intercept / 2.0
intercept_shift = intercept_shift + 1
intercept_q15=fix.toQ15(scaled_intercept)
def carray(a):
s="{"
k=0
for x in a:
s = s + ("%d," % (x,))
k = k + 1
if k == 10:
k=0;
s = s + "\n"
s = s + "}"
return(s)
ccode="""#include "arm_math.h"
#include "coef.h"
const q15_t fir_coefs[NUMTAPS]=%s;
const q15_t coef_q15[%d]=%s;
const q15_t intercept_q15 = %d;
const int coef_shift=%d;
const int intercept_shift=%d;
const q15_t window[%d]=%s;
"""
def gen_coef_code():
fir_coef = carray(fix.toQ15(np.ones(10)/10.0))
winq15=carray(fix.toQ15(hann(winLength,sym=False)))
res = ccode % (fir_coef,
len(coef_q15[0]),
carray(coef_q15[0]),
intercept_q15,
coef_shift,
intercept_shift,
winLength,
winq15
)
with open(os.path.join("kws","coef.cpp"),"w") as f:
print(res,file=f)
gen_coef_code()
!arduino-cli board list
!arduino-cli config init
!arduino-cli lib install Arduino_CMSIS-DSP
!arduino-cli compile -b arduino:mbed_nano:nano33ble kws
!arduino-cli upload -b arduino:mbed_nano:nano33ble -p COM5 kws
import serial
import ipywidgets as widgets
import time
import threading
STOPSERIAL=False
def stop_action(btn):
global STOPSERIAL
STOPSERIAL=True
out = widgets.Output(layout={'border': '1px solid black','height':'40px'})
button = widgets.Button(
description='Stop',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Click me'
)
button.on_click(stop_action)
out.clear_output()
display(widgets.VBox([out,button]))
STOPSERIAL = False
def get_serial():
try:
with serial.Serial('COM6', 115200, timeout=1) as ser:
ser.reset_input_buffer()
global STOPSERIAL
while not STOPSERIAL:
data=ser.readline()
if (len(data)>0):
with out:
out.clear_output()
res=data.decode('ascii').rstrip()
if res=="Yes":
display(HTML("<p style='color:#00AA00';>YES</p>"))
else:
print(res)
with out:
out.clear_output()
print("Communication closed")
except Exception as inst:
with out:
out.clear_output()
print(inst)
t = threading.Thread(target=get_serial)
t.start()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: In order for apsidal motion to be apparent, we need an eccentric system that is precessing.
Step3: Let's set a very noticeable rate of precession.
Step4: We'll add lc and orb datasets to see how the apsidal motion affects each. We'll need to sample over several orbits of the binary (which has a period of 3 days, by default).
Step5: Influence on Orbits (positions)
Step6: Influence on Light Curves (fluxes)
| <ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.1,<2.2"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b['ecc'] = 0.2
b['dperdt'] = 2.0 * u.deg/u.d
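# With dperdt = 2 deg/day, the argument of periastron sweeps a full 360 degrees
# in 180 days, so the precession is easily visible over a handful of 3-day orbits.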
b.add_dataset('lc', times=np.linspace(0,1,101), dataset='lc01')
b.add_dataset('lc', times=np.linspace(4,5,101), dataset='lc02')
b.add_dataset('orb', times=np.linspace(0,5,401), dataset='orb01')
b.run_compute(irrad_method='none')
afig, mplfig = b['orb01@model'].plot(y='ws', time=[0,1,2,3,4,5], show=True)
afig, mplfig = b['lc01@model'].plot()
afig, mplfig = b['lc02@model'].plot(time=[0,1,4,5], show=True)
afig, mplfig = b['lc01@model'].plot(x='phases')
afig, mplfig = b['lc02@model'].plot(x='phases', show=True)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Error plots for MiniZephyr vs. the AnalyticalHelmholtz response
Step2: Relative error of the MiniZephyr solution (in %)
| <ASSISTANT_TASK:>
Python Code:
import sys
sys.path.append('../')
import numpy as np
from zephyr.backend import MiniZephyr, SparseKaiserSource, AnalyticalHelmholtz
import matplotlib.pyplot as plt
import matplotlib.cm as cm
import matplotlib
%matplotlib inline
from IPython.display import set_matplotlib_formats
set_matplotlib_formats('png')
matplotlib.rcParams['savefig.dpi'] = 150 # Change this to adjust figure size
systemConfig = {
'dx': 1., # m
'dz': 1., # m
'c': 2500., # m/s
'rho': 1000., # kg/m^3
'nx': 100, # count
'nz': 200, # count
'freq': 2e2, # Hz
}
nx = systemConfig['nx']
nz = systemConfig['nz']
dx = systemConfig['dx']
dz = systemConfig['dz']
MZ = MiniZephyr(systemConfig)
AH = AnalyticalHelmholtz(systemConfig)
SKS = SparseKaiserSource(systemConfig)
xs, zs = 25, 25
sloc = np.array([xs, zs]).reshape((1,2))
q = SKS(sloc)
uMZ = MZ*q
uAH = AH(sloc)
clip = 100
plotopts = {
'vmin': -np.pi,
'vmax': np.pi,
'extent': [0., dx * nx, dz * nz, 0.],
'cmap': cm.bwr,
}
fig = plt.figure()
ax1 = fig.add_subplot(1,4,1)
plt.imshow(np.angle(uAH.reshape((nz, nx))), **plotopts)
plt.title('AH Phase')
ax2 = fig.add_subplot(1,4,2)
plt.imshow(np.angle(uMZ.reshape((nz, nx))), **plotopts)
plt.title('MZ Phase')
plotopts.update({
'vmin': -clip,
'vmax': clip,
})
ax3 = fig.add_subplot(1,4,3)
plt.imshow(uAH.reshape((nz, nx)).real, **plotopts)
plt.title('AH Real')
ax4 = fig.add_subplot(1,4,4)
plt.imshow(uMZ.reshape((nz, nx)).real, **plotopts)
plt.title('MZ Real')
fig.tight_layout()
fig = plt.figure()
ax = fig.add_subplot(1,1,1, aspect=0.1)
plt.plot(uAH.real.reshape((nz, nx))[:,xs], label='AnalyticalHelmholtz')
plt.plot(uMZ.real.reshape((nz, nx))[:,xs], label='MiniZephyr')
plt.legend(loc=4)
plt.title('Real part of response through xs=%d'%xs)
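# A single scalar summary of the agreement can be useful alongside the plots
# (illustrative): relative L2 error of the MiniZephyr field vs. the analytical one.
relL2 = np.linalg.norm(uMZ - uAH) / np.linalg.norm(uAH)
print('Relative L2 error: %.3e' % relL2)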
uMZr = uMZ.reshape((nz, nx))
uAHr = uAH.reshape((nz, nx))
plotopts.update({
'cmap': cm.jet,
'vmin': 0.,
'vmax': 20.,
})
fig = plt.figure()
ax1 = fig.add_subplot(1,2,1)
plt.imshow(abs(uAHr - uMZr)/(abs(uAHr)+1e-15) * 100, **plotopts)
cb = plt.colorbar()
cb.set_label('Percent error')
plotopts.update({'vmax': 5.})
ax2 = fig.add_subplot(1,2,2)
plt.imshow(abs(uAHr - uMZr)/(abs(uAHr)+1e-15) * 100, **plotopts)
cb = plt.colorbar()
cb.set_label('Percent error')
fig.tight_layout()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create datasets
Step2: Feature Space
Step3: We'll be using the popular data manipulation framework pandas.
Step4: We can use head() to get a quick look at the contents of each table
Step5: This is very representative of a typical industry dataset.
Step6: Data Cleaning / Feature Engineering
Step7: Turn state Holidays to Bool
Step8: Define function for joining tables on specific fields.
Step9: Join weather/state names.
Step10: In pandas you can add new columns to a dataframe by simply defining it. We'll do this for googletrends by extracting dates and state names from the given data and adding those columns.
Step11: The following extracts particular date fields from a complete datetime for the purpose of constructing categoricals.
Step12: We'll add to every table w/ a date field.
Step13: Now we can outer join all of our data into a single dataframe.
Step14: Next we'll fill in missing values to avoid complications w/ na's.
Step15: Next we'll extract features "CompetitionOpenSince" and "CompetitionDaysOpen". Note the use of apply() in mapping a function across dataframe values.
Step16: We'll replace some erroneous / outlying data.
Step17: Added "CompetitionMonthsOpen" field, limit the maximum to 2 years to limit number of unique embeddings.
Step18: Same process for Promo dates.
Step19: Durations
Step20: We've defined a class elapsed for cumulative counting across a sorted dataframe.
Step21: And a function for applying said class across dataframe rows and adding values to a new column.
Step22: Let's walk through an example.
Step23: We'll do this for two more fields.
Step24: We're going to set the active index to Date.
Step25: Then set null values from elapsed field calculations to 0.
Step26: Next we'll demonstrate window functions in pandas to calculate rolling quantities.
Step27: Next we want to drop the Store indices grouped together in the window function.
Step28: Now we'll merge these values onto the df.
Step29: It's usually a good idea to back up large tables of extracted / wrangled features before you join them onto another one, that way you can go back to it easily if you need to make changes to it.
Step30: We'll back this up as well.
Step31: We now have our final set of engineered features.
Step32: While these steps were explicitly outlined in the paper, these are all fairly typical feature engineering steps for dealing with time series data and are practical in any similar setting.
Step33: This dictionary maps categories to embedding dimensionality. In general, categories we might expect to be conceptually more complex have larger dimension.
Step35: Name categorical variables
Step37: Likewise for continuous
Step38: Replace nulls w/ 0 for continuous, "" for categorical.
Step39: Here we create a list of tuples, each containing a variable and an instance of a transformer for that variable.
Step40: The same instances need to be used for the test set as well, so values are mapped/standardized appropriately.
Step41: Example of first five rows of zeroth column being transformed appropriately.
Step42: We can also pickle these mappings, which is great for portability!
Step43: Sample data
Step44: We speculate that this may have cost them a higher standing in the competition. One reason this may be the case is that a little EDA reveals that there are often periods where stores are closed, typically for refurbishment. Before and after these periods, there are naturally spikes in sales that one might expect. By omitting this data from their training, the authors gave up the ability to leverage information about these periods to predict this otherwise volatile behavior.
Step45: We're going to run on a sample.
Step46: In time series data, cross-validation is not random. Instead, our holdout data is always the most recent data, as it would be in real application.
Step47: Here's a preprocessor for our categoricals using our instance mapper.
Step48: Same for continuous.
Step49: Grab our targets.
Step50: Finally, the authors modified the target values by applying a logarithmic transformation and normalizing to unit scale by dividing by the maximum log value.
Step52: Note
Step53: Root-mean-squared percent error is the metric Kaggle used for this competition.
Step54: These undo the target transformations.
Step56: Create models
Step57: Helper function for getting categorical name and dim.
Step58: Helper function for constructing embeddings. Notice the commented-out code; there are several different ways to compute embeddings at play.
Step59: Helper function for continuous inputs.
Step60: Let's build them.
Step61: Now we can put them together. Given the inputs, continuous and categorical embeddings, we're going to concatenate all of them.
Step62: Start training
Step63: Result on validation data
Step64: Using 3rd place data
Step66: Neural net
Step67: XGBoost
Step68: Competition distance is easily the most important feature, while events are not important at all.
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import math, keras, datetime, pandas as pd, numpy as np, keras.backend as K
import matplotlib.pyplot as plt, xgboost, operator, random, pickle
from utils2 import *
np.set_printoptions(threshold=50, edgeitems=20)
limit_mem()
from isoweek import Week
from pandas_summary import DataFrameSummary
%cd data/rossman/
def concat_csvs(dirname):
os.chdir(dirname)
filenames=glob.glob("*.csv")
wrote_header = False
with open("../"+dirname+".csv","w") as outputfile:
for filename in filenames:
name = filename.split(".")[0]
with open(filename) as f:
line = f.readline()
if not wrote_header:
wrote_header = True
outputfile.write("file,"+line)
for line in f:
outputfile.write(name + "," + line)
outputfile.write("\n")
os.chdir("..")
# concat_csvs('googletrend')
# concat_csvs('weather')
table_names = ['train', 'store', 'store_states', 'state_names',
'googletrend', 'weather', 'test']
tables = [pd.read_csv(fname+'.csv', low_memory=False) for fname in table_names]
from IPython.display import HTML
for t in tables: display(t.head())
for t in tables: display(DataFrameSummary(t).summary())
train, store, store_states, state_names, googletrend, weather, test = tables
len(train),len(test)
train.StateHoliday = train.StateHoliday!='0'
test.StateHoliday = test.StateHoliday!='0'
def join_df(left, right, left_on, right_on=None):
if right_on is None: right_on = left_on
return left.merge(right, how='left', left_on=left_on, right_on=right_on,
suffixes=("", "_y"))
weather = join_df(weather, state_names, "file", "StateName")
googletrend['Date'] = googletrend.week.str.split(' - ', expand=True)[0]
googletrend['State'] = googletrend.file.str.split('_', expand=True)[2]
googletrend.loc[googletrend.State=='NI', "State"] = 'HB,NI'
def add_datepart(df):
df.Date = pd.to_datetime(df.Date)
df["Year"] = df.Date.dt.year
df["Month"] = df.Date.dt.month
df["Week"] = df.Date.dt.week
df["Day"] = df.Date.dt.day
add_datepart(weather)
add_datepart(googletrend)
add_datepart(train)
add_datepart(test)
trend_de = googletrend[googletrend.file == 'Rossmann_DE']
store = join_df(store, store_states, "Store")
len(store[store.State.isnull()])
joined = join_df(train, store, "Store")
len(joined[joined.StoreType.isnull()])
joined = join_df(joined, googletrend, ["State","Year", "Week"])
len(joined[joined.trend.isnull()])
joined = joined.merge(trend_de, 'left', ["Year", "Week"], suffixes=('', '_DE'))
len(joined[joined.trend_DE.isnull()])
joined = join_df(joined, weather, ["State","Date"])
len(joined[joined.Mean_TemperatureC.isnull()])
joined_test = test.merge(store, how='left', left_on='Store', right_index=True)
len(joined_test[joined_test.StoreType.isnull()])
joined.CompetitionOpenSinceYear = joined.CompetitionOpenSinceYear.fillna(1900).astype(np.int32)
joined.CompetitionOpenSinceMonth = joined.CompetitionOpenSinceMonth.fillna(1).astype(np.int32)
joined.Promo2SinceYear = joined.Promo2SinceYear.fillna(1900).astype(np.int32)
joined.Promo2SinceWeek = joined.Promo2SinceWeek.fillna(1).astype(np.int32)
joined["CompetitionOpenSince"] = pd.to_datetime(joined.apply(lambda x: datetime.datetime(
x.CompetitionOpenSinceYear, x.CompetitionOpenSinceMonth, 15), axis=1).astype(pd.datetime))
joined["CompetitionDaysOpen"] = joined.Date.subtract(joined["CompetitionOpenSince"]).dt.days
joined.loc[joined.CompetitionDaysOpen<0, "CompetitionDaysOpen"] = 0
joined.loc[joined.CompetitionOpenSinceYear<1990, "CompetitionDaysOpen"] = 0
joined["CompetitionMonthsOpen"] = joined["CompetitionDaysOpen"]//30
joined.loc[joined.CompetitionMonthsOpen>24, "CompetitionMonthsOpen"] = 24
joined.CompetitionMonthsOpen.unique()
joined["Promo2Since"] = pd.to_datetime(joined.apply(lambda x: Week(
x.Promo2SinceYear, x.Promo2SinceWeek).monday(), axis=1).astype(pd.datetime))
joined["Promo2Days"] = joined.Date.subtract(joined["Promo2Since"]).dt.days
joined.loc[joined.Promo2Days<0, "Promo2Days"] = 0
joined.loc[joined.Promo2SinceYear<1990, "Promo2Days"] = 0
joined["Promo2Weeks"] = joined["Promo2Days"]//7
joined.loc[joined.Promo2Weeks<0, "Promo2Weeks"] = 0
joined.loc[joined.Promo2Weeks>25, "Promo2Weeks"] = 25
joined.Promo2Weeks.unique()
columns = ["Date", "Store", "Promo", "StateHoliday", "SchoolHoliday"]
class elapsed(object):
def __init__(self, fld):
self.fld = fld
self.last = pd.to_datetime(np.nan)
self.last_store = 0
def get(self, row):
if row.Store != self.last_store:
self.last = pd.to_datetime(np.nan)
self.last_store = row.Store
if (row[self.fld]): self.last = row.Date
return row.Date-self.last
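# Illustrative behaviour of the class above: rows before a store's first event get
# NaT (no elapsed time yet); after an event, each row gets the time since that event;
# the counter resets whenever a new Store id is encountered in the sorted frame.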
df = train[columns]
def add_elapsed(fld, prefix):
sh_el = elapsed(fld)
df[prefix+fld] = df.apply(sh_el.get, axis=1)
fld = 'SchoolHoliday'
df = df.sort_values(['Store', 'Date'])
add_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
add_elapsed(fld, 'Before')
fld = 'StateHoliday'
df = df.sort_values(['Store', 'Date'])
add_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
add_elapsed(fld, 'Before')
fld = 'Promo'
df = df.sort_values(['Store', 'Date'])
add_elapsed(fld, 'After')
df = df.sort_values(['Store', 'Date'], ascending=[True, False])
add_elapsed(fld, 'Before')
display(df.head())
df = df.set_index("Date")
columns = ['SchoolHoliday', 'StateHoliday', 'Promo']
for o in ['Before', 'After']:
for p in columns:
a = o+p
df[a] = df[a].fillna(pd.Timedelta(0)).dt.days
bwd = df[['Store']+columns].sort_index().groupby("Store").rolling(7, min_periods=1).sum()
fwd = df[['Store']+columns].sort_index(ascending=False
).groupby("Store").rolling(7, min_periods=1).sum()
bwd.drop('Store',1,inplace=True)
bwd.reset_index(inplace=True)
fwd.drop('Store',1,inplace=True)
fwd.reset_index(inplace=True)
df.reset_index(inplace=True)
df = df.merge(bwd, 'left', ['Date', 'Store'], suffixes=['', '_bw'])
df = df.merge(fwd, 'left', ['Date', 'Store'], suffixes=['', '_fw'])
df.drop(columns,1,inplace=True)
df.head()
df.to_csv('df.csv')
df = pd.read_csv('df.csv', index_col=0)
df["Date"] = pd.to_datetime(df.Date)
df.columns
joined = join_df(joined, df, ['Store', 'Date'])
joined.to_csv('joined.csv')
joined = pd.read_csv('joined.csv', index_col=0)
joined["Date"] = pd.to_datetime(joined.Date)
joined.columns
from sklearn_pandas import DataFrameMapper
from sklearn.preprocessing import LabelEncoder, Imputer, StandardScaler
cat_var_dict = {'Store': 50, 'DayOfWeek': 6, 'Year': 2, 'Month': 6,
'Day': 10, 'StateHoliday': 3, 'CompetitionMonthsOpen': 2,
'Promo2Weeks': 1, 'StoreType': 2, 'Assortment': 3, 'PromoInterval': 3,
'CompetitionOpenSinceYear': 4, 'Promo2SinceYear': 4, 'State': 6,
'Week': 2, 'Events': 4, 'Promo_fw': 1,
'Promo_bw': 1, 'StateHoliday_fw': 1,
'StateHoliday_bw': 1, 'SchoolHoliday_fw': 1,
'SchoolHoliday_bw': 1}
cat_vars = [o[0] for o in
sorted(cat_var_dict.items(), key=operator.itemgetter(1), reverse=True)]
cat_vars = ['Store', 'DayOfWeek', 'Year', 'Month', 'Day', 'StateHoliday',
'StoreType', 'Assortment', 'Week', 'Events', 'Promo2SinceYear',
'CompetitionOpenSinceYear', 'PromoInterval', 'Promo', 'SchoolHoliday', 'State']
# mean/max wind; min temp; cloud; min/mean humid;
contin_vars = ['CompetitionDistance',
'Max_TemperatureC', 'Mean_TemperatureC', 'Min_TemperatureC',
'Max_Humidity', 'Mean_Humidity', 'Min_Humidity', 'Max_Wind_SpeedKm_h',
'Mean_Wind_SpeedKm_h', 'CloudCover', 'trend', 'trend_DE',
'AfterStateHoliday', 'BeforeStateHoliday', 'Promo', 'SchoolHoliday']
contin_vars = ['CompetitionDistance', 'Max_TemperatureC', 'Mean_TemperatureC',
'Max_Humidity', 'trend', 'trend_DE', 'AfterStateHoliday', 'BeforeStateHoliday']
for v in contin_vars: joined.loc[joined[v].isnull(), v] = 0
for v in cat_vars: joined.loc[joined[v].isnull(), v] = ""
cat_maps = [(o, LabelEncoder()) for o in cat_vars]
contin_maps = [([o], StandardScaler()) for o in contin_vars]
cat_mapper = DataFrameMapper(cat_maps)
cat_map_fit = cat_mapper.fit(joined)
cat_cols = len(cat_map_fit.features)
cat_cols
contin_mapper = DataFrameMapper(contin_maps)
contin_map_fit = contin_mapper.fit(joined)
contin_cols = len(contin_map_fit.features)
contin_cols
cat_map_fit.transform(joined)[0,:5], contin_map_fit.transform(joined)[0,:5]
pickle.dump(contin_map_fit, open('contin_maps.pickle', 'wb'))
pickle.dump(cat_map_fit, open('cat_maps.pickle', 'wb'))
[len(o[1].classes_) for o in cat_map_fit.features]
joined_sales = joined[joined.Sales!=0]
n = len(joined_sales)
n
samp_size = 100000
np.random.seed(42)
idxs = sorted(np.random.choice(n, samp_size, replace=False))
joined_samp = joined_sales.iloc[idxs].set_index("Date")
samp_size = n
joined_samp = joined_sales.set_index("Date")
train_ratio = 0.9
train_size = int(samp_size * train_ratio)
train_size
joined_valid = joined_samp[train_size:]
joined_train = joined_samp[:train_size]
len(joined_valid), len(joined_train)
def cat_preproc(dat):
return cat_map_fit.transform(dat).astype(np.int64)
cat_map_train = cat_preproc(joined_train)
cat_map_valid = cat_preproc(joined_valid)
def contin_preproc(dat):
return contin_map_fit.transform(dat).astype(np.float32)
contin_map_train = contin_preproc(joined_train)
contin_map_valid = contin_preproc(joined_valid)
y_train_orig = joined_train.Sales
y_valid_orig = joined_valid.Sales
max_log_y = np.max(np.log(joined_samp.Sales))
y_train = np.log(y_train_orig)/max_log_y
y_valid = np.log(y_valid_orig)/max_log_y
#y_train = np.log(y_train)
ymean=y_train_orig.mean()
ystd=y_train_orig.std()
y_train = (y_train_orig-ymean)/ystd
#y_valid = np.log(y_valid)
y_valid = (y_valid_orig-ymean)/ystd
def rmspe(y_pred, targ = y_valid_orig):
pct_var = (targ - y_pred)/targ
return math.sqrt(np.square(pct_var).mean())
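# Quick check of the metric (illustrative): predicting 110 when the target is 100
# is a 10% error, so rmspe should return 0.1.
assert np.isclose(rmspe(np.array([110.]), np.array([100.])), 0.1)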
def log_max_inv(preds, mx = max_log_y):
return np.exp(preds * mx)
# - This can be used if ymean and ystd are calculated above (they are currently commented out)
def normalize_inv(preds):
return preds * ystd + ymean
1 97s - loss: 0.0104 - val_loss: 0.0083
2 93s - loss: 0.0076 - val_loss: 0.0076
3 90s - loss: 0.0071 - val_loss: 0.0076
4 90s - loss: 0.0068 - val_loss: 0.0075
5 93s - loss: 0.0066 - val_loss: 0.0075
6 95s - loss: 0.0064 - val_loss: 0.0076
7 98s - loss: 0.0063 - val_loss: 0.0077
8 97s - loss: 0.0062 - val_loss: 0.0075
9 95s - loss: 0.0061 - val_loss: 0.0073
0 101s - loss: 0.0061 - val_loss: 0.0074
def split_cols(arr):
return np.hsplit(arr,arr.shape[1])
# - This gives the correct list length for the model
# - (one array per categorical variable plus one array holding all continuous features)
map_train = split_cols(cat_map_train) + [contin_map_train]
map_valid = split_cols(cat_map_valid) + [contin_map_valid]
len(map_train)
# map_train = split_cols(cat_map_train) + split_cols(contin_map_train)
# map_valid = split_cols(cat_map_valid) + split_cols(contin_map_valid)
def cat_map_info(feat): return feat[0], len(feat[1].classes_)
cat_map_info(cat_map_fit.features[1])
# - In Keras 2 the "initializations" module is not available.
# - To keep here the custom initializer the code from Keras 1 "uniform" initializer is exploited
def my_init(scale):
# return lambda shape, name=None: initializations.uniform(shape, scale=scale, name=name)
return lambda shape, name=None: K.variable(np.random.uniform(low=-scale, high=scale, size=shape), name=name)
# - In Keras 2 the "initializations" module is not available.
# - To keep here the custom initializer the code from Keras 1 "uniform" initializer is exploited
def emb_init(shape, name=None):
# return initializations.uniform(shape, scale=2/(shape[1]+1), name=name)
return K.variable(np.random.uniform(low=-2/(shape[1]+1), high=2/(shape[1]+1), size=shape),
name=name)
def get_emb(feat):
name, c = cat_map_info(feat)
#c2 = cat_var_dict[name]
c2 = (c+1)//2
if c2>50: c2=50
inp = Input((1,), dtype='int64', name=name+'_in')
# , kernel_regularizer=l2(1e-6) # Keras 2
u = Flatten(name=name+'_flt')(Embedding(c, c2, input_length=1, embeddings_initializer=emb_init)(inp)) # Keras 2
# u = Flatten(name=name+'_flt')(Embedding(c, c2, input_length=1)(inp))
return inp,u
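# Sizing rule example (illustrative): 'Store' has 1115 levels in this dataset,
# so c2 = (1115+1)//2 = 558, which the cap then reduces to 50, matching the
# value listed for 'Store' in cat_var_dict above.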
def get_contin(feat):
name = feat[0][0]
inp = Input((1,), name=name+'_in')
return inp, Dense(1, name=name+'_d', kernel_initializer=my_init(1.))(inp) # Keras 2
contin_inp = Input((contin_cols,), name='contin')
contin_out = Dense(contin_cols*10, activation='relu', name='contin_d')(contin_inp)
#contin_out = BatchNormalization()(contin_out)
embs = [get_emb(feat) for feat in cat_map_fit.features]
#conts = [get_contin(feat) for feat in contin_map_fit.features]
#contin_d = [d for inp,d in conts]
x = concatenate([emb for inp,emb in embs] + [contin_out]) # Keras 2
#x = concatenate([emb for inp,emb in embs] + contin_d) # Keras 2
x = Dropout(0.02)(x)
x = Dense(1000, activation='relu', kernel_initializer='uniform')(x)
x = Dense(500, activation='relu', kernel_initializer='uniform')(x)
x = Dropout(0.2)(x)
x = Dense(1, activation='sigmoid')(x)
model = Model([inp for inp,emb in embs] + [contin_inp], x)
#model = Model([inp for inp,emb in embs] + [inp for inp,d in conts], x)
model.compile('adam', 'mean_absolute_error')
#model.compile(Adam(), 'mse')
%%time
hist = model.fit(map_train, y_train, batch_size=128, epochs=25,
verbose=1, validation_data=(map_valid, y_valid))
hist.history
plot_train(hist)
preds = np.squeeze(model.predict(map_valid, 1024))
log_max_inv(preds)
# - This will work if ymean and ystd are calculated in the "Data" section above (in this case uncomment)
# normalize_inv(preds)
pkl_path = '/data/jhoward/github/entity-embedding-rossmann/'
def load_pickle(fname):
return pickle.load(open(pkl_path+fname + '.pickle', 'rb'))
[x_pkl_orig, y_pkl_orig] = load_pickle('feature_train_data')
max_log_y_pkl = np.max(np.log(y_pkl_orig))
y_pkl = np.log(y_pkl_orig)/max_log_y_pkl
pkl_vars = ['Open', 'Store', 'DayOfWeek', 'Promo', 'Year', 'Month', 'Day',
'StateHoliday', 'SchoolHoliday', 'CompetitionMonthsOpen', 'Promo2Weeks',
'Promo2Weeks_L', 'CompetitionDistance',
'StoreType', 'Assortment', 'PromoInterval', 'CompetitionOpenSinceYear',
'Promo2SinceYear', 'State', 'Week', 'Max_TemperatureC', 'Mean_TemperatureC',
'Min_TemperatureC', 'Max_Humidity', 'Mean_Humidity', 'Min_Humidity', 'Max_Wind_SpeedKm_h',
'Mean_Wind_SpeedKm_h', 'CloudCover','Events', 'Promo_fw', 'Promo_bw',
'StateHoliday_fw', 'StateHoliday_bw', 'AfterStateHoliday', 'BeforeStateHoliday',
'SchoolHoliday_fw', 'SchoolHoliday_bw', 'trend_DE', 'trend']
x_pkl = np.array(x_pkl_orig)
gt_enc = StandardScaler()
gt_enc.fit(x_pkl[:,-2:])
x_pkl[:,-2:] = gt_enc.transform(x_pkl[:,-2:])
x_pkl.shape
x_pkl = x_pkl[idxs]
y_pkl = y_pkl[idxs]
x_pkl_trn, x_pkl_val = x_pkl[:train_size], x_pkl[train_size:]
y_pkl_trn, y_pkl_val = y_pkl[:train_size], y_pkl[train_size:]
x_pkl_trn.shape
xgb_parms = {'learning_rate': 0.1, 'subsample': 0.6,
'colsample_bylevel': 0.6, 'silent': True, 'objective': 'reg:linear'}
xdata_pkl = xgboost.DMatrix(x_pkl_trn, y_pkl_trn, feature_names=pkl_vars)
xdata_val_pkl = xgboost.DMatrix(x_pkl_val, y_pkl_val, feature_names=pkl_vars)
xgb_parms['seed'] = random.randint(0,1e9)
model_pkl = xgboost.train(xgb_parms, xdata_pkl)
model_pkl.eval(xdata_val_pkl)
#0.117473
importance = model_pkl.get_fscore()
importance = sorted(importance.items(), key=operator.itemgetter(1))
df = pd.DataFrame(importance, columns=['feature', 'fscore'])
df['fscore'] = df['fscore'] / df['fscore'].sum()
df.plot(kind='barh', x='feature', y='fscore', legend=False, figsize=(6, 10))
plt.title('XGBoost Feature Importance')
plt.xlabel('relative importance');
#np.savez_compressed('vars.npz', pkl_cats, pkl_contins)
#np.savez_compressed('deps.npz', y_pkl)
pkl_cats = np.stack([x_pkl[:,pkl_vars.index(f)] for f in cat_vars], 1)
pkl_contins = np.stack([x_pkl[:,pkl_vars.index(f)] for f in contin_vars], 1)
co_enc = StandardScaler().fit(pkl_contins)
pkl_contins = co_enc.transform(pkl_contins)
pkl_contins_trn, pkl_contins_val = pkl_contins[:train_size], pkl_contins[train_size:]
pkl_cats_trn, pkl_cats_val = pkl_cats[:train_size], pkl_cats[train_size:]
y_pkl_trn, y_pkl_val = y_pkl[:train_size], y_pkl[train_size:]
def get_emb_pkl(feat):
name, c = cat_map_info(feat)
c2 = (c+2)//3
if c2>50: c2=50
inp = Input((1,), dtype='int64', name=name+'_in')
u = Flatten(name=name+'_flt')(Embedding(c, c2, input_length=1, embeddings_initializer=emb_init)(inp)) # Keras 2
return inp,u
n_pkl_contin = pkl_contins_trn.shape[1]
contin_inp = Input((n_pkl_contin,), name='contin')
contin_out = BatchNormalization()(contin_inp)
map_train_pkl = split_cols(pkl_cats_trn) + [pkl_contins_trn]
map_valid_pkl = split_cols(pkl_cats_val) + [pkl_contins_val]
def train_pkl(bs=128, ne=10):
return model_pkl.fit(map_train_pkl, y_pkl_trn, batch_size=bs, epochs=ne, # Keras 2: epochs replaces nb_epoch
verbose=0, validation_data=(map_valid_pkl, y_pkl_val))
def get_model_pkl():
embs = [get_emb_pkl(feat) for feat in cat_map_fit.features]
x = concatenate([emb for inp,emb in embs] + [contin_out]) # Keras 2
x = Dropout(0.02)(x)
x = Dense(1000, activation='relu', kernel_initializer='uniform')(x)
x = Dense(500, activation='relu', kernel_initializer='uniform')(x)
x = Dense(1, activation='sigmoid')(x)
model_pkl = Model([inp for inp,emb in embs] + [contin_inp], x)
model_pkl.compile('adam', 'mean_absolute_error')
#model.compile(Adam(), 'mse')
return model_pkl
model_pkl = get_model_pkl()
train_pkl(128, 10).history['val_loss']
K.set_value(model_pkl.optimizer.lr, 1e-4)
train_pkl(128, 5).history['val_loss']
1 97s - loss: 0.0104 - val_loss: 0.0083
2 93s - loss: 0.0076 - val_loss: 0.0076
3 90s - loss: 0.0071 - val_loss: 0.0076
4 90s - loss: 0.0068 - val_loss: 0.0075
5 93s - loss: 0.0066 - val_loss: 0.0075
6 95s - loss: 0.0064 - val_loss: 0.0076
7 98s - loss: 0.0063 - val_loss: 0.0077
8 97s - loss: 0.0062 - val_loss: 0.0075
9 95s - loss: 0.0061 - val_loss: 0.0073
0 101s - loss: 0.0061 - val_loss: 0.0074
plot_train(hist)
preds = np.squeeze(model_pkl.predict(map_valid_pkl, 1024))
y_orig_pkl_val = log_max_inv(y_pkl_val, max_log_y_pkl)
rmspe(log_max_inv(preds, max_log_y_pkl), y_orig_pkl_val)
X_train = np.concatenate([cat_map_train, contin_map_train], axis=1)
X_valid = np.concatenate([cat_map_valid, contin_map_valid], axis=1)
all_vars = cat_vars + contin_vars
xgb_parms = {'learning_rate': 0.1, 'subsample': 0.6,
'colsample_bylevel': 0.6, 'silent': True, 'objective': 'reg:linear'}
xdata = xgboost.DMatrix(X_train, y_train, feature_names=all_vars)
xdata_val = xgboost.DMatrix(X_valid, y_valid, feature_names=all_vars)
xgb_parms['seed'] = random.randint(0,1e9)
model = xgboost.train(xgb_parms, xdata)
model.eval(xdata_val)
model.eval(xdata_val)
importance = model.get_fscore()
importance = sorted(importance.items(), key=operator.itemgetter(1))
df = pd.DataFrame(importance, columns=['feature', 'fscore'])
df['fscore'] = df['fscore'] / df['fscore'].sum()
df.plot(kind='barh', x='feature', y='fscore', legend=False, figsize=(6, 10))
plt.title('XGBoost Feature Importance')
plt.xlabel('relative importance');
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: EXPLORE THE DATA
Step2: SUBSET THE DATA
Step3: STANDARDIZE THE DATA
Step5: K-MEANS ANALYSIS - INITIAL CLUSTER SET
Step6: Interpret 2 cluster solution
Step7: BEGIN multiple steps to merge cluster assignment with clustering variables to examine cluster variable means by cluster
Step8: calculate clustering variable means by cluster
Step9: Validate clusters in the training data by examining cluster differences in CLASS using ANOVA. First we have to merge CLASS of the poker hand with the clustering variables and cluster assignment data.
| <ASSISTANT_TASK:>
Python Code:
# read training and test data from the url link and save the file to your working directory
url = "http://archive.ics.uci.edu/ml/machine-learning-databases/poker/poker-hand-training-true.data"
urllib.request.urlretrieve(url, "poker_train.csv")
url2 = "http://archive.ics.uci.edu/ml/machine-learning-databases/poker/poker-hand-testing.data"
urllib.request.urlretrieve(url2, "poker_test.csv")
# read the data in and add column names
data_train = pd.read_csv("poker_train.csv", header=None,
names=['S1', 'C1', 'S2', 'C2', 'S3', 'C3','S4', 'C4', 'S5', 'C5', 'CLASS'])
data_test = pd.read_csv("poker_test.csv", header=None,
names=['S1', 'C1', 'S2', 'C2', 'S3', 'C3','S4', 'C4', 'S5', 'C5', 'CLASS'])
# summary statistics including counts, mean, stdev, quartiles for the training dataset
data_train.head(n=5)
data_train.dtypes # data types of each variable
data_train.describe()
# subset clustering variables
cluster=data_train[['S1', 'C1', 'S2', 'C2', 'S3', 'C3','S4', 'C4', 'S5', 'C5']]
# standardize clustering variables to have mean=0 and sd=1 so that card suit and
# rank are on the same scale as to have the variables equally contribute to the analysis
clustervar=cluster.copy() # create a copy
clustervar['S1']=preprocessing.scale(clustervar['S1'].astype('float64'))
clustervar['C1']=preprocessing.scale(clustervar['C1'].astype('float64'))
clustervar['S2']=preprocessing.scale(clustervar['S2'].astype('float64'))
clustervar['C2']=preprocessing.scale(clustervar['C2'].astype('float64'))
clustervar['S3']=preprocessing.scale(clustervar['S3'].astype('float64'))
clustervar['C3']=preprocessing.scale(clustervar['C3'].astype('float64'))
clustervar['S4']=preprocessing.scale(clustervar['S4'].astype('float64'))
clustervar['C4']=preprocessing.scale(clustervar['C4'].astype('float64'))
clustervar['S5']=preprocessing.scale(clustervar['S5'].astype('float64'))
clustervar['C5']=preprocessing.scale(clustervar['C5'].astype('float64'))
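# Quick check (illustrative): after standardization every column should have a
# mean of ~0 and a standard deviation of ~1.
print(clustervar.describe().loc[['mean', 'std']].round(3))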
# The data has already been split into train and test sets
clus_train = clustervar
# k-means cluster analysis for 1-10 clusters due to the 10 possible class outcomes for poker hands
from scipy.spatial.distance import cdist
clusters=range(1,11)
meandist=[]
# loop through each cluster and fit the model to the train set
# generate the predicted cluster assignment and append the mean distance by taking the sum divided by the shape
for k in clusters:
model=KMeans(n_clusters=k)
model.fit(clus_train)
clusassign=model.predict(clus_train)
meandist.append(sum(np.min(cdist(clus_train, model.cluster_centers_, 'euclidean'), axis=1))
/ clus_train.shape[0])
Plot average distance from observations from the cluster centroid
to use the Elbow Method to identify number of clusters to choose
plt.plot(clusters, meandist)
plt.xlabel('Number of clusters')
plt.ylabel('Average distance')
plt.title('Selecting k with the Elbow Method') # pick the fewest number of clusters that reduces the average distance
model3=KMeans(n_clusters=2)
model3.fit(clus_train) # has cluster assignments based on using 2 clusters
clusassign=model3.predict(clus_train)
# plot clusters
''' Canonical Discriminant Analysis for variable reduction:
1. creates a smaller number of variables
2. linear combination of clustering variables
3. Canonical variables are ordered by proportion of variance accounted for
4. most of the variance will be accounted for in the first few canonical variables
'''
from sklearn.decomposition import PCA # CA from PCA function
pca_2 = PCA(2) # return 2 first canonical variables
plot_columns = pca_2.fit_transform(clus_train) # fit CA to the train dataset
plt.scatter(x=plot_columns[:,0], y=plot_columns[:,1], c=model3.labels_,) # plot 1st canonical variable on x axis, 2nd on y-axis
plt.xlabel('Canonical variable 1')
plt.ylabel('Canonical variable 2')
plt.title('Scatterplot of Canonical Variables for 2 Clusters')
plt.show() # close or overlapping clusters indicate correlated variables with low in-class variance but poor separation; a different number of clusters might work better.
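# Optionally report how much variance the two canonical components capture
# (illustrative; available on any fitted sklearn PCA object).
print(pca_2.explained_variance_ratio_)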
# create a unique identifier variable from the index for the
# cluster training data to merge with the cluster assignment variable
clus_train.reset_index(level=0, inplace=True)
# create a list that has the new index variable
cluslist=list(clus_train['index'])
# create a list of cluster assignments
labels=list(model3.labels_)
# combine index variable list with cluster assignment list into a dictionary
newlist=dict(zip(cluslist, labels))
newlist
# convert newlist dictionary to a dataframe
newclus=DataFrame.from_dict(newlist, orient='index')
newclus
# rename the cluster assignment column
newclus.columns = ['cluster']
# now do the same for the cluster assignment variable create a unique identifier variable from the index for the
# cluster assignment dataframe to merge with cluster training data
newclus.reset_index(level=0, inplace=True)
# merge the cluster assignment dataframe with the cluster training variable dataframe
# by the index variable
merged_train=pd.merge(clus_train, newclus, on='index')
merged_train.head(n=100)
# cluster frequencies
merged_train.cluster.value_counts()
clustergrp = merged_train.groupby('cluster').mean()
print ("Clustering variable means by cluster")
print(clustergrp)
# split into test / train for class
pokerhand_train=data_train['CLASS']
pokerhand_test=data_test['CLASS']
# put into a pandas dataFrame
pokerhand_train=pd.DataFrame(pokerhand_train)
pokerhand_test=pd.DataFrame(pokerhand_test)
pokerhand_train.reset_index(level=0, inplace=True) # reset index
merged_train_all=pd.merge(pokerhand_train, merged_train, on='index') # merge the pokerhand train with merged clusters
sub1 = merged_train_all[['CLASS', 'cluster']].dropna()
import statsmodels.formula.api as smf
import statsmodels.stats.multicomp as multi
# respone formula
pokermod = smf.ols(formula='CLASS ~ cluster', data=sub1).fit()
print (pokermod.summary())
print ('means for Poker hands by cluster')
m1= sub1.groupby('cluster').mean()
print (m1)
print ('standard deviations for Poker hands by cluster')
m2= sub1.groupby('cluster').std()
print (m2)
mc1 = multi.MultiComparison(sub1['CLASS'], sub1['cluster'])
res1 = mc1.tukeyhsd()
print(res1.summary())
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The goal of this notebook is to illustrate how a model can be wrapped and used in different tasks in Emukit.
Step2: Define the objective function
Step3: Define the parameter space
Step4: Collect some observations of f
Step5: Fit and wrap a model to the collected data
Step6: 2. Load the package components, run the decision loop (if needed), solve your problem
Step7: Bayesian optimization
Step8: Experimental design
Step9: Bayesian Quadrature
Step10: Sensitivity analysis
| <ASSISTANT_TASK:>
Python Code:
# General imports and parameters of figures should be loaded at the beginning of the overview
import numpy as np
from emukit.test_functions import branin_function
from emukit.core import ParameterSpace, ContinuousParameter
from emukit.core.initial_designs import RandomDesign
from GPy.models import GPRegression
from emukit.model_wrappers import GPyModelWrapper
from emukit.model_wrappers.gpy_quadrature_wrappers import BaseGaussianProcessGPy, RBFGPy
import warnings
warnings.filterwarnings('ignore')
f, _ = branin_function()
parameter_space = ParameterSpace([ContinuousParameter('x1', -5, 10), ContinuousParameter('x2', 0, 15)])
num_data_points = 30
design = RandomDesign(parameter_space)
X = design.get_samples(num_data_points)
Y = f(X)
model_gpy = GPRegression(X,Y)
model_gpy.optimize()
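# Optionally inspect the fitted GP hyperparameters (illustrative); printing a GPy
# model shows a summary of the kernel variance, lengthscale and Gaussian noise.
print(model_gpy)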
model_emukit = GPyModelWrapper(model_gpy)
# Decision loops
from emukit.experimental_design import ExperimentalDesignLoop
from emukit.bayesian_optimization.loops import BayesianOptimizationLoop
from emukit.quadrature.loop import VanillaBayesianQuadratureLoop
# Acquisition functions
from emukit.bayesian_optimization.acquisitions import ExpectedImprovement
from emukit.experimental_design.acquisitions import ModelVariance
from emukit.quadrature.acquisitions import IntegralVarianceReduction
# Acquistion optimizers
from emukit.core.optimization import GradientAcquisitionOptimizer
# Stopping conditions
from emukit.core.loop import FixedIterationsStoppingCondition
from emukit.core.loop import ConvergenceStoppingCondition
# Bayesian quadrature kernel and model
from emukit.quadrature.kernels import QuadratureRBFLebesgueMeasure
from emukit.quadrature.methods import VanillaBayesianQuadrature
from emukit.quadrature.measures import LebesgueMeasure
# Load core elements for Bayesian optimization
expected_improvement = ExpectedImprovement(model = model_emukit)
optimizer = GradientAcquisitionOptimizer(space = parameter_space)
# Create the Bayesian optimization object
bayesopt_loop = BayesianOptimizationLoop(model = model_emukit,
space = parameter_space,
acquisition = expected_improvement,
batch_size = 5)
# Run the loop and extract the optimum
# Run the loop until we either complete 10 steps or converge
stopping_condition = FixedIterationsStoppingCondition(i_max = 10) | ConvergenceStoppingCondition(eps=0.01)
bayesopt_loop.run_loop(f, stopping_condition)
# Load core elements for Experimental design
model_variance = ModelVariance(model = model_emukit)
optimizer = GradientAcquisitionOptimizer(space = parameter_space)
# Create the Experimental design object
expdesign_loop = ExperimentalDesignLoop(space = parameter_space,
model = model_emukit,
acquisition = model_variance,
update_interval = 1,
batch_size = 5)
# Run the loop
stopping_condition = FixedIterationsStoppingCondition(i_max = 10)
expdesign_loop.run_loop(f, stopping_condition)
# Define the lower and upper bounds of the integral.
integral_bounds = [(-5, 10), (0, 15)]
# Load core elements for Bayesian quadrature
emukit_measure = LebesgueMeasure.from_bounds(integral_bounds)
emukit_qrbf = QuadratureRBFLebesgueMeasure(RBFGPy(model_gpy.kern), emukit_measure)
emukit_model = BaseGaussianProcessGPy(kern=emukit_qrbf, gpy_model=model_gpy)
emukit_method = VanillaBayesianQuadrature(base_gp=emukit_model, X=X, Y=Y)
# Create the Bayesian quadrature object
bq_loop = VanillaBayesianQuadratureLoop(model=emukit_method)
# Run the loop and extract the integral estimate
num_iter = 5
bq_loop.run_loop(f, stopping_condition=num_iter)
integral_mean, integral_variance = bq_loop.model.integrate()
from emukit.sensitivity.monte_carlo import MonteCarloSensitivity
# No loop here, compute Sobol indices
senstivity_analysis = MonteCarloSensitivity(model = model_emukit, input_domain = parameter_space)
main_effects, total_effects, _ = senstivity_analysis.compute_effects(num_monte_carlo_points = 10000)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Interactive Line Plotting of Data Frames
Step2: Plotting independent series
Step3: This does not affect filtering or pivoting in any way
Step4: Interactive Line Plotting of Traces
Step5: You can also change the drawstyle to "steps-post" for step plots. These are suited if the data is discrete
Step6: Synchronized zoom in multiple plots
Step7: EventPlot
Step8: Lane names can also be specified as strings (or hashable objects that have an str representation) as follows
Step9: TracePlot
| <ASSISTANT_TASK:>
Python Code:
import sys,os
sys.path.append("..")
import numpy.random
import pandas as pd
import shutil
import tempfile
import trappy
trace_thermal = "./trace.txt"
trace_sched = "../tests/raw_trace.dat"
TEMP_BASE = "/tmp"
def setup_thermal():
tDir = tempfile.mkdtemp(dir="/tmp", prefix="trappy_doc", suffix = ".tempDir")
shutil.copyfile(trace_thermal, os.path.join(tDir, "trace.txt"))
return tDir
def setup_sched():
tDir = tempfile.mkdtemp(dir="/tmp", prefix="trappy_doc", suffix = ".tempDir")
shutil.copyfile(trace_sched, os.path.join(tDir, "trace.dat"))
return tDir
temp_thermal_location = setup_thermal()
trace1 = trappy.FTrace(temp_thermal_location)
trace2 = trappy.FTrace(temp_thermal_location)
trace2.thermal.data_frame["temp"] = trace1.thermal.data_frame["temp"] * 2
trace2.cpu_out_power.data_frame["power"] = trace1.cpu_out_power.data_frame["power"] * 2
columns = ["tick", "tock"]
df = pd.DataFrame(numpy.random.randn(1000, 2), columns=columns).cumsum()
trappy.ILinePlot(df, column=columns).view()
columns = ["tick", "tock", "bang"]
df_len = 1000
df1 = pd.DataFrame(numpy.random.randn(df_len, 3), columns=columns, index=range(df_len)).cumsum()
df2 = pd.DataFrame(numpy.random.randn(df_len, 3), columns=columns, index=(numpy.arange(0.5, df_len, 1))).cumsum()
trappy.ILinePlot([df1, df2], column="tick").view()
df1["bang"] = df1["bang"].apply(lambda x: numpy.random.randint(0, 4))
df2["bang"] = df2["bang"].apply(lambda x: numpy.random.randint(0, 4))
trappy.ILinePlot([df1, df2], column="tick", filters = {'bang' : [2]}, title="tick column values for which bang is 2").view()
trappy.ILinePlot([df1, df2], column="tick", pivot="bang", title="tick column pivoted on bang column").view()
map_label = {
"00000000,00000006" : "A57",
"00000000,00000039" : "A53",
}
l = trappy.ILinePlot(
trace1, # TRAPpy FTrace Object
trappy.cpu_power.CpuInPower, # TRAPpy Event (maps to a unique word in the Trace)
column=[ # Column(s)
"dynamic_power",
"load1"],
filters={ # Filter the data
"cdev_state": [
1,
0]},
pivot="cpus", # One plot for each pivot will be created
map_label=map_label, # Optionally, provide an alternative label for pivots
per_line=1) # Number of graphs per line
l.view()
l = trappy.ILinePlot(
trace1, # TRAPpy FTrace Object
trappy.cpu_power.CpuInPower, # TRAPpy Event (maps to a unique word in the Trace)
column=[ # Column(s)
"dynamic_power",
"load1"],
filters={ # Filter the data
"cdev_state": [
1,
0]},
pivot="cpus", # One plot for each pivot will be created
per_line=1, # Number of graphs per line
drawstyle="steps-post")
l.view()
trappy.ILinePlot(
trace1,
signals=["cpu_in_power:dynamic_power", "cpu_in_power:load1"],
pivot="cpus",
group="synchronized",
sync_zoom=True
).view()
A = [
[0, 3, 0],
[4, 5, 2],
]
B = [
[0, 2, 1],
[2, 3, 3],
[3, 4, 0],
]
C = [
[0, 2, 3],
[2, 3, 2],
[3, 4, 1],
]
EVENTS = {}
EVENTS["A"] = A
EVENTS["B"] = B
EVENTS["C"] = C
trappy.EventPlot(EVENTS,
                 keys=EVENTS.keys(), # Names of the Process Elements
lane_prefix="LANE: ", # Name of Each TimeLine
num_lanes=4, # Number of Timelines
domain=[0,5] # Time Domain
).view()
A = [
[0, 3, "zero"],
[4, 5, "two"],
]
B = [
[0, 2, 1],
[2, 3, "three"],
[3, 4, "zero"],
]
C = [
[0, 2, "three"],
[2, 3, "two"],
[3, 4, 1],
]
EVENTS = {}
EVENTS["A"] = A
EVENTS["B"] = B
EVENTS["C"] = C
trappy.EventPlot(EVENTS,
                 keys=EVENTS.keys(), # Names of the Process Elements
lanes=["zero", 1, "two", "three"],
domain=[0,5] # Time Domain
).view()
f = setup_sched()
trappy.plotter.plot_trace(f)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The driver is used to execute the harmonization. It will handle the data formatting needed to execute the harmonization operation and store the harmonized results until they are needed.
Step2: Since the default function that chooses which method to use does not apply the budget method, we specify overrides so that the budget method is used for all the variables in the model data.
Step3: All data of interest is combined in order to easily view it. We will specifically investigate output for the World in this example. A few operations are performed in order to get the data into a plotting-friendly format.
Step4: Calculation details
Step5: We calculate the carbon budget from the model and historical data by estimating the integral between discrete data points using the Riemann trapezoidal sum.
Step6: Harmonization via Optimization
Step7: The model itself minimizes the $L_2$ norm of the rates of change
Step8: The historical value must match
Step9: And the carbon budget must be maintained, using a trapezoidal-rule Riemann sum,
Step10: The model is solved with IPOPT and compared with the original trajectory.
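Putting Steps 7–10 together, the harmonized trajectory $x_t$ solves the following problem (a sketch in the notation of the code below, with $m_t$ the model trajectory, $h$ the last historical value and $\Delta t$ the year spacing):
$$\min_{x}\ \sum_t \left(\frac{\Delta m_t - \Delta x_t}{\Delta t}\right)^2 \quad \text{s.t.} \quad x_{t_0} = h, \qquad \sum_t \Delta t\,\frac{x_{t-1}+x_t}{2} \;=\; \sum_t \Delta t\,\frac{m_{t-1}+m_t}{2}$$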
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import pyomo.environ as pyomo
import aneris
from aneris.tutorial import load_data
%matplotlib inline
model, hist, driver = load_data()
driver.overrides = model[["Model", "Scenario", "Region", "Variable", "Unit"]].assign(Method="budget")
driver.overrides.head()
for scenario in driver.scenarios():
driver.harmonize(scenario)
harmonized, metadata, diagnostics = driver.harmonized_results()
data = pd.concat([hist, model, harmonized], sort=True)
df = data[data.Region == 'World']
df = pd.melt(df, id_vars=aneris.iamc_idx, value_vars=aneris.numcols(df),
var_name='Year', value_name='Emissions')
df['Label'] = df['Model'] + ' ' + df['Variable']
df.head()
sns.lineplot(x='Year', y='Emissions', hue='Label', data=df.assign(Year=df.Year.astype(int)))
plt.legend(bbox_to_anchor=(1.05, 1))
ms = df.loc[(df.Variable == 'prefix|Emissions|BC|suffix') & (df.Model == 'model')].set_index('Year')['Emissions'].dropna()
hs = df.loc[(df.Variable == 'prefix|Emissions|BC|suffix') & (df.Model == 'History')].set_index('Year')['Emissions'].dropna()
def calc_budget(data):
    # trapezoidal-rule Riemann sum
dx = data.index.to_series().astype(int).diff()
y1 = data
dy = data.diff()
budget = (dx * (y1 - .5 * dy)).iloc[1:].sum()
return budget
calc_budget(ms) / 1e3 # in Gt BC
years = ms.index[ms.index.astype(int) >= int(hs.index[-1])]
model_vals = ms.loc[years]
hist_val = hs.iloc[-1]
budget = calc_budget(model_vals)
model = pyomo.ConcreteModel()
model.x = pyomo.Var(list(years), initialize=0, domain=pyomo.Reals)
x = pd.Series([model.x[y] for y in years], years)
delta_years = years.to_series().astype(int).diff()
delta_x = x.diff()
delta_m = model_vals.diff()
def l2_norm():
return pyomo.quicksum(((delta_m / delta_years - delta_x / delta_years) ** 2).dropna())
model.obj = pyomo.Objective(expr=l2_norm(), sense=pyomo.minimize)
model.hist_val = pyomo.Constraint(expr=model.x[years[0]] == hist_val)
model.budget = pyomo.Constraint(expr=calc_budget(x) == budget)
solver = pyomo.SolverFactory('ipopt')
solver.solve(model)
data = pd.concat(
dict(
model=ms,
history=hs,
model_harmonized=pd.Series([pyomo.value(model.x[y]) for y in years], years)
),
axis=1,
sort=True
)
data.index = data.index.astype(int)
data.plot.line()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: OLS Analysis Using Full PSU dataset
Step3: Partitioning a dataset into training and test sets
Step4: Determine Feature Importances
Step5: Test Prediction Results
| <ASSISTANT_TASK:>
Python Code:
#Import required packages
import pandas as pd
import numpy as np
import datetime
import matplotlib.pyplot as plt
def format_date(df_date):
    """Splits Meeting Times and Dates into datetime objects where applicable using regex."""
df_date['Days'] = df_date['Meeting_Times'].str.extract('([^\s]+)', expand=True)
df_date['Start_Date'] = df_date['Meeting_Dates'].str.extract('([^\s]+)', expand=True)
df_date['Year'] = df_date['Term'].astype(str).str.slice(0,4)
df_date['Quarter'] = df_date['Term'].astype(str).str.slice(4,6)
df_date['Term_Date'] = pd.to_datetime(df_date['Year'] + df_date['Quarter'], format='%Y%m')
df_date['End_Date'] = df_date['Meeting_Dates'].str.extract('(?<=-)(.*)(?= )', expand=True)
df_date['Start_Time'] = df_date['Meeting_Times'].str.extract('(?<= )(.*)(?=-)', expand=True)
df_date['Start_Time'] = pd.to_datetime(df_date['Start_Time'], format='%H%M')
df_date['End_Time'] = df_date['Meeting_Times'].str.extract('((?<=-).*$)', expand=True)
df_date['End_Time'] = pd.to_datetime(df_date['End_Time'], format='%H%M')
df_date['Duration_Hr'] = ((df_date['End_Time'] - df_date['Start_Time']).dt.seconds)/3600
return df_date
def format_xlist(df_xl):
    """revises % capacity calculations by using Max Enrollment instead of room capacity."""
df_xl['Cap_Diff'] = np.where(df_xl['Xlst'] != '',
df_xl['Max_Enrl'].astype(int) - df_xl['Actual_Enrl'].astype(int),
df_xl['Room_Capacity'].astype(int) - df_xl['Actual_Enrl'].astype(int))
df_xl = df_xl.loc[df_xl['Room_Capacity'].astype(int) < 999]
return df_xl
pd.set_option('display.max_rows', None)
df = pd.read_csv('data/PSU_master_classroom_91-17.csv', dtype={'Schedule': object, 'Schedule Desc': object})
df = df.fillna('')
df = format_date(df)
# Avoid classes that only occur on a single day
df = df.loc[df['Start_Date'] != df['End_Date']]
#terms = [199104, 199204, 199304, 199404, 199504, 199604, 199704, 199804, 199904, 200004, 200104, 200204, 200304, 200404, 200504, 200604, 200704, 200804, 200904, 201004, 201104, 201204, 201304, 201404, 201504, 201604]
terms = [200604, 200704, 200804, 200904, 201004, 201104, 201204, 201304, 201404, 201504, 201604]
df = df.loc[df['Term'].isin(terms)]
df = df.loc[df['Online Instruct Method'] != 'Fully Online']
# Calculate number of days per week and treat Sunday condition
df['Days_Per_Week'] = df['Days'].str.len()
df['Room_Capacity'] = df['Room_Capacity'].apply(lambda x: x if (x != 'No Data Available') else 0)
df['Building'] = df['ROOM'].str.extract('([^\s]+)', expand=True)
df_cl = format_xlist(df)
df_cl['%_Empty'] = df_cl['Cap_Diff'].astype(float) / df_cl['Room_Capacity'].astype(float)
# Normalize the results
df_cl['%_Empty'] = df_cl['Actual_Enrl'].astype(np.float32)/df_cl['Room_Capacity'].astype(np.float32)
df_cl = df_cl.replace([np.inf, -np.inf], np.nan).dropna()
from sklearn.preprocessing import LabelEncoder
df_cl = df_cl.sample(n = 15000)
# Save as a 1D array. Otherwise will throw errors.
y = np.asarray(df_cl['%_Empty'], dtype="|S6")
df_cl = df_cl[['Dept', 'Class', 'Days', 'Start_Time', 'ROOM', 'Term', 'Room_Capacity', 'Building']]
cat_columns = ['Dept', 'Class', 'Days', 'Start_Time', 'ROOM', 'Building']
for column in cat_columns:
room_mapping = {label: idx for idx, label in enumerate(np.unique(df_cl['{0}'.format(column)]))}
df_cl['{0}'.format(column)] = df_cl['{0}'.format(column)].map(room_mapping)
from distutils.version import LooseVersion as Version
from sklearn import __version__ as sklearn_version
if Version(sklearn_version) < '0.18':
from sklearn.cross_validation import train_test_split
else:
from sklearn.model_selection import train_test_split
X = df_cl.iloc[:, 1:].values
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=0.3, random_state=0)
from sklearn.ensemble import RandomForestClassifier
feat_labels = df_cl.columns[1:]
forest = RandomForestClassifier(n_estimators=100,
random_state=0,
n_jobs=-1) # -1 sets n_jobs=n_CPU cores
forest.fit(X_train, y_train)
importances = forest.feature_importances_
indices = np.argsort(importances)[::-1]
for f in range(X_train.shape[1]):
print("%2d) %-*s %f" % (f + 1, 30,
feat_labels[indices[f]],
importances[indices[f]]))
plt.title('Feature Importances')
plt.bar(range(X_train.shape[1]),
importances[indices],
color='lightblue',
align='center')
plt.xticks(range(X_train.shape[1]),
feat_labels[indices], rotation=90)
plt.xlim([-1, X_train.shape[1]])
plt.tight_layout()
plt.show()
# Random Forest Classification
from sklearn import model_selection
kfold = model_selection.KFold(n_splits=10, random_state=0)
model = RandomForestClassifier(n_estimators=20, random_state=0, n_jobs=-1)
results = model_selection.cross_val_score(model, X, y, cv=kfold)
print(results.mean())
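# Also report the spread across the 10 folds (illustrative addition, not in the original)
print(results.std())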
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The command %matplotlib inline is not a Python command, but an IPython command. When using the console or the notebook, it makes the plots appear inline. You do not want to use this in a plain Python script.
Step2: We have defined two sequences - in this case lists, but tuples would also work. One contains the $x$-axis coordinates, the other the data points to appear on the $y$-axis. A basic plot is produced using the plot command of pyplot. However, this plot will not automatically appear on the screen, as after plotting the data you may wish to add additional information. Nothing will actually happen until you either save the figure to a file (using pyplot.savefig(<filename>)) or explicitly ask for it to be displayed (with the show command). When the plot is displayed the program will typically pause until you dismiss the plot.
Step3: Whilst most of the commands are self-explanatory, a note should be made of strings like r'$x$'. These strings are in LaTeX format, which is the standard typesetting method for professional-level mathematics. The $ symbols surround mathematics. The r before the definition of the string is Python notation, not LaTeX. It says that the following string will be "raw": backslashes are not treated as escape characters, which matters for LaTeX commands.
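A minimal sketch of the save-to-file path mentioned in Step 2 (the filename here is purely illustrative):
from matplotlib import pyplot
pyplot.plot([0, 1, 2], [0, 1, 4])
pyplot.savefig('basic_plot.png') # writes the figure to a file instead of displaying it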
| <ASSISTANT_TASK:>
Python Code:
from matplotlib import pyplot
%matplotlib inline
from math import sin, pi
x = []
y = []
for i in range(201):
x.append(0.01*i)
y.append(sin(pi*x[-1])**2)
pyplot.plot(x, y)
pyplot.show()
from math import sin, pi
x = []
y = []
for i in range(201):
x.append(0.01*i)
y.append(sin(pi*x[-1])**2)
pyplot.plot(x, y, marker='+', markersize=8, linestyle=':',
linewidth=3, color='b', label=r'$\sin^2(\pi x)$')
pyplot.legend(loc='lower right')
pyplot.xlabel(r'$x$')
pyplot.ylabel(r'$y$')
pyplot.title('A basic plot')
pyplot.show()
from math import sin, pi, exp, log
x = []
y1 = []
y2 = []
for i in range(201):
x.append(1.0+0.01*i)
y1.append(exp(sin(pi*x[-1])))
y2.append(log(pi+x[-1]*sin(x[-1])))
pyplot.loglog(x, y1, linestyle='--', linewidth=4,
color='k', label=r'$y_1=e^{\sin(\pi x)}$')
pyplot.loglog(x, y2, linestyle='-.', linewidth=4,
color='r', label=r'$y_2=\log(\pi+x\sin(x))$')
pyplot.legend(loc='lower right')
pyplot.xlabel(r'$x$')
pyplot.ylabel(r'$y$')
pyplot.title('A basic logarithmic plot')
pyplot.show()
from math import sin, pi, exp, log
x = []
y1 = []
y2 = []
for i in range(201):
x.append(1.0+0.01*i)
y1.append(exp(sin(pi*x[-1])))
y2.append(log(pi+x[-1]*sin(x[-1])))
pyplot.semilogy(x, y1, linestyle='None', marker='o',
color='g', label=r'$y_1=e^{\sin(\pi x)}$')
pyplot.semilogy(x, y2, linestyle='None', marker='^',
color='r', label=r'$y_2=\log(\pi+x\sin(x))$')
pyplot.legend(loc='lower right')
pyplot.xlabel(r'$x$')
pyplot.ylabel(r'$y$')
pyplot.title('A different logarithmic plot')
pyplot.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Load and prepare the data
Step3: Tokenize
Step5: Padding
Step7: Preprocess pipeline
Step8: Split the data into training and test sets
Step10: Ids Back to Text
Step12: 2. Recurrent neural network
Step13: Train the model
Step14: Evaluate the model
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import helper
import keras
helper.info_gpu()
np.random.seed(9)
%matplotlib inline
%load_ext autoreload
%autoreload 2
with open('data/small_vocab_en', "r") as f:
english_sentences = f.read().split('\n')
with open('data/small_vocab_fr', "r") as f:
french_sentences = f.read().split('\n')
print("Number of sentences: {}\n".format(len(english_sentences)))
for i in range(2):
print("sample {}:".format(i))
print("{} \n{} \n".format(english_sentences[i], french_sentences[i]))
import collections
words = dict()
words["English"] = [word for sentence in english_sentences for word in sentence.split()]
words["French"] = [word for sentence in french_sentences for word in sentence.split()]
for key, value in words.items():
print("{}: {} words, {} unique words".format(key,
len(value), len(collections.Counter(value))))
from keras.preprocessing.text import Tokenizer
def tokenize(x):
    """
    :param x: List of sentences/strings to be tokenized
    :return: Tuple of (tokenized x data, tokenizer used to tokenize x)
    """
tokenizer = Tokenizer()
tokenizer.fit_on_texts(x)
tokens = tokenizer.texts_to_sequences(x)
return tokens, tokenizer
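# Illustrative (hypothetical) example of the mapping produced by tokenize:
# tokenize(["the cat sat", "the dog ran"]) -> ([[1, 2, 3], [1, 4, 5]], <Tokenizer>)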
from keras.preprocessing.sequence import pad_sequences
def pad(x, length=None):
    """
    :param x: List of sequences.
    :param length: Length to pad the sequence to. If None, longest sequence length in x.
    :return: Padded numpy array of sequences
    """
return pad_sequences(x, maxlen=length, padding='post')
def preprocess(x, y, length=None):
    """
    :param x: Feature List of sentences
    :param y: Label List of sentences
    :return: Tuple of (Preprocessed x, Preprocessed y, x tokenizer, y tokenizer)
    """
preprocess_x, x_tk = tokenize(x)
preprocess_y, y_tk = tokenize(y)
preprocess_x = pad(preprocess_x, length)
preprocess_y = pad(preprocess_y, length)
# Keras's sparse_categorical_crossentropy function requires the labels to be in 3 dims
preprocess_y = preprocess_y.reshape(*preprocess_y.shape, 1)
return preprocess_x, preprocess_y, x_tk, y_tk
x, y, x_tk, y_tk = preprocess(english_sentences, french_sentences)
print('Data Preprocessed')
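# Quick sanity check of the preprocessed data (illustrative addition)
print('x shape:', x.shape, 'y shape:', y.shape)
print('English vocab size:', len(x_tk.word_index), 'French vocab size:', len(y_tk.word_index))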
# Only the 10 last translations will be predicted
x_train, y_train = x[:-10], y[:-10]
x_test, y_test = x[-10:-1], y[-10:-1] # last sentence removed
test_english_sentences, test_french_sentences = english_sentences[-10:], french_sentences[-10:]
def logits_to_text(logits, tokenizer, show_pad=True):
    """Turn logits from a neural network into text using the tokenizer
    :param logits: Logits from a neural network
    :param tokenizer: Keras Tokenizer fit on the labels
    :return: String that represents the text of the logits
    """
index_to_words = {id: word for word, id in tokenizer.word_index.items()}
index_to_words[0] = '<PAD>' if show_pad else ''
return ' '.join([index_to_words[prediction] for prediction in np.argmax(logits, 1)])
from keras.models import Sequential
from keras.layers import GRU, Dense, TimeDistributed, LSTM, Bidirectional, RepeatVector
from keras.layers.embeddings import Embedding
from keras.layers.core import Dropout
from keras.losses import sparse_categorical_crossentropy
def rnn_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
    """Build a model with embedding, encoder-decoder, and bidirectional RNN
    :param input_shape: Tuple of input shape
    :param output_sequence_length: Length of output sequence
    :param english_vocab_size: Number of unique English words in the dataset
    :param french_vocab_size: Number of unique French words in the dataset
    :return: Keras model built, but not trained
    """
learning_rate = 0.01
model = Sequential()
vector_size = english_vocab_size // 10
model.add(
Embedding(
english_vocab_size, vector_size, input_shape=input_shape[1:], mask_zero=False))
model.add(Bidirectional(GRU(output_sequence_length)))
model.add(Dense(128, activation='relu'))
model.add(RepeatVector(output_sequence_length))
model.add(Bidirectional(GRU(128, return_sequences=True)))
model.add(TimeDistributed(Dense(french_vocab_size, activation="softmax")))
print(model.summary())
model.compile(
loss=sparse_categorical_crossentropy,
optimizer=keras.optimizers.adam(learning_rate),
metrics=['accuracy'])
return model
model = rnn_model(x_train.shape, y_train.shape[1], len(x_tk.word_index) + 1, len(y_tk.word_index) + 1)  # +1 so the padding index 0 and the highest word index both fit the embedding/output sizes
print('Training...')
callbacks = [keras.callbacks.EarlyStopping(monitor='val_acc', patience=3, verbose=1)]
%time history = model.fit(x_train, y_train, batch_size=1024, epochs=50, verbose=0, \
validation_split=0.2, callbacks=callbacks)
helper.show_training(history)
score = model.evaluate(x_test, y_test, verbose=0)
print("Test Accuracy: {:.2f}\n".format(score[1]))
y = model.predict(x_test)
for idx, value in enumerate(y):
print('Sample: {}'.format(test_english_sentences[idx]))
print('Actual: {}'.format(test_french_sentences[idx]))
print('Predicted: {}\n'.format(logits_to_text(value, y_tk, show_pad=False)))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Login to and initialize wandb. You will need to use your wandb API key to run this demo.
Step2: Dataset and Dataloader for Custom Object Detection
Step3: Visualizing sample data from train split
| <ASSISTANT_TASK:>
Python Code:
import torch
import numpy as np
import wandb
import label_utils
from torch.utils.data import DataLoader
from torchvision import transforms
from PIL import Image
wandb.login()
config = {
"num_workers": 4,
"pin_memory": True,
"batch_size": 32,
"dataset": "drinks",
"train_split": "drinks/labels_train.csv",
"test_split": "drinks/labels_test.csv",}
run = wandb.init(project="dataloader-project", entity="upeee", config=config)
test_dict, test_classes = label_utils.build_label_dictionary(
config['test_split'])
train_dict, train_classes = label_utils.build_label_dictionary(
config['train_split'])
class ImageDataset(torch.utils.data.Dataset):
def __init__(self, dictionary, transform=None):
self.dictionary = dictionary
self.transform = transform
def __len__(self):
return len(self.dictionary)
def __getitem__(self, idx):
# retrieve the image filename
key = list(self.dictionary.keys())[idx]
# retrieve all bounding boxes
boxes = self.dictionary[key]
# open the file as a PIL image
img = Image.open(key)
# apply the necessary transforms
# transforms like crop, resize, normalize, etc
if self.transform:
img = self.transform(img)
# return a list of images and corresponding labels
return img, boxes
train_split = ImageDataset(train_dict, transforms.ToTensor())
test_split = ImageDataset(test_dict, transforms.ToTensor())
# This is approx 95/5 split
print("Train split len:", len(train_split))
print("Test split len:", len(test_split))
# We do not have a validation split
def collate_fn(batch):
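    """Stack the images of a batch and zero-pad each sample's boxes up to the largest box count in the batch."""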
maxlen = max([len(x[1]) for x in batch])
images = []
boxes = []
for i in range(len(batch)):
img, box = batch[i]
images.append(img)
# pad with zeros if less than maxlen
if len(box) < maxlen:
box = np.concatenate(
(box, np.zeros((maxlen-len(box), box.shape[-1]))), axis=0)
box = torch.from_numpy(box)
boxes.append(box)
return torch.stack(images, 0), torch.stack(boxes, 0)
train_loader = DataLoader(train_split,
batch_size=config['batch_size'],
shuffle=True,
num_workers=config['num_workers'],
pin_memory=config['pin_memory'],
collate_fn=collate_fn)
test_loader = DataLoader(test_split,
batch_size=config['batch_size'],
shuffle=False,
num_workers=config['num_workers'],
pin_memory=config['pin_memory'],
collate_fn=collate_fn)
# sample one mini-batch
images, boxes = next(iter(train_loader))
# map of label to class name
class_labels = {i: label_utils.index2class(i) for i in train_classes}
run.display(height=1000)
table = wandb.Table(columns=['Image'])
# we use wandb to visualize the objects and bounding boxes
for image, box in zip(images, boxes):
dict = []
for i in range(box.shape[0]):
if box[i, -1] == 0:
continue
dict_item = {}
dict_item["position"] = {
"minX": box[i, 0].item(),
"maxX": box[i, 1].item(),
"minY": box[i, 2].item(),
"maxY": box[i, 3].item(),
}
dict_item["domain"] = "pixel"
dict_item["class_id"] = (int)(box[i, 4].item())
dict_item["box_caption"] = label_utils.index2class(
dict_item["class_id"])
dict.append(dict_item)
img = wandb.Image(image, boxes={
"ground_truth": {
"box_data": dict,
"class_labels": class_labels
}
})
table.add_data(img)
wandb.log({"train_loader": table})
wandb.finish()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Digits Dataset
Step2: Create Pipeline
Step3: Create k-Fold Cross-Validation
Step4: Conduct k-Fold Cross-Validation
Step5: Calculate Mean Performance Score
| <ASSISTANT_TASK:>
Python Code:
# Load libraries
import numpy as np
from sklearn import datasets
from sklearn import metrics
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
# Load the digits dataset
digits = datasets.load_digits()
# Create the features matrix
X = digits.data
# Create the target vector
y = digits.target
# Create standardizer
standardizer = StandardScaler()
# Create logistic regression
logit = LogisticRegression()
# Create a pipeline that standardizes, then runs logistic regression
pipeline = make_pipeline(standardizer, logit)
# Create k-Fold cross-validation
kf = KFold(n_splits=10, shuffle=True, random_state=1)
# Do k-fold cross-validation
cv_results = cross_val_score(pipeline, # Pipeline
X, # Feature matrix
y, # Target vector
cv=kf, # Cross-validation technique
                             scoring="accuracy", # Scoring metric
                             n_jobs=-1) # Use all CPU cores
# Calculate mean
cv_results.mean()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: That's all we need to create and train a model
Step2: Movielens 100k
Step3: Here are some benchmarks on the same dataset for the popular Librec system for collaborative filtering. They show best results based on an RMSE of 0.91, which corresponds to an MSE of 0.91**2 = 0.83.
Step4: Movie bias
Step5: Movie weights
| <ASSISTANT_TASK:>
Python Code:
# Imports assumed from the original fastai (v1) lesson notebook; they are not shown in this excerpt
from fastai.collab import *
from fastai.tabular import *
from operator import itemgetter

user,item,title = 'userId','movieId','title'
path = untar_data(URLs.ML_SAMPLE)
path
ratings = pd.read_csv(path/'ratings.csv')
ratings.head()
data = CollabDataBunch.from_df(ratings, seed=42)
y_range = [0,5.5]
learn = collab_learner(data, n_factors=50, y_range=y_range)
learn.fit_one_cycle(3, 5e-3)
path=Config.data_path()/'ml-100k'
ratings = pd.read_csv(path/'u.data', delimiter='\t', header=None,
names=[user,item,'rating','timestamp'])
ratings.head()
movies = pd.read_csv(path/'u.item', delimiter='|', encoding='latin-1', header=None,
names=[item, 'title', 'date', 'N', 'url', *[f'g{i}' for i in range(19)]])
movies.head()
len(ratings)
rating_movie = ratings.merge(movies[[item, title]])
rating_movie.head()
data = CollabDataBunch.from_df(rating_movie, seed=42, valid_pct=0.1, item_name=title)
data.show_batch()
y_range = [0,5.5]
learn = collab_learner(data, n_factors=40, y_range=y_range, wd=1e-1)
learn.lr_find()
learn.recorder.plot(skip_end=15)
learn.fit_one_cycle(5, 5e-3)
learn.save('dotprod')
learn.load('dotprod');
learn.model
g = rating_movie.groupby(title)['rating'].count()
top_movies = g.sort_values(ascending=False).index.values[:1000]
top_movies[:10]
movie_bias = learn.bias(top_movies, is_item=True)
movie_bias.shape
mean_ratings = rating_movie.groupby(title)['rating'].mean()
movie_ratings = [(b, i, mean_ratings.loc[i]) for i,b in zip(top_movies,movie_bias)]
item0 = lambda o:o[0]
sorted(movie_ratings, key=item0)[:15]
sorted(movie_ratings, key=lambda o: o[0], reverse=True)[:15]
movie_w = learn.weight(top_movies, is_item=True)
movie_w.shape
movie_pca = movie_w.pca(3)
movie_pca.shape
fac0,fac1,fac2 = movie_pca.t()
movie_comp = [(f, i) for f,i in zip(fac0, top_movies)]
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
movie_comp = [(f, i) for f,i in zip(fac1, top_movies)]
sorted(movie_comp, key=itemgetter(0), reverse=True)[:10]
sorted(movie_comp, key=itemgetter(0))[:10]
idxs = np.random.choice(len(top_movies), 50, replace=False)
idxs = list(range(50))
X = fac0[idxs]
Y = fac2[idxs]
plt.figure(figsize=(15,15))
plt.scatter(X, Y)
for i, x, y in zip(top_movies[idxs], X, Y):
plt.text(x,y,i, color=np.random.rand(3)*0.7, fontsize=11)
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 1
Step2: Exercise 2
Step3: Exercise 3
| <ASSISTANT_TASK:>
Python Code:
# Import all necessary libraries, this is a configuration step for the exercise.
# Please run it before the simulation code!
import numpy as np
import matplotlib.pyplot as plt
# Show the plots in the Notebook.
plt.switch_backend("nbagg")
def fourier_derivative(f, dx):
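    """Return the spectral (Fourier) derivative of f, assuming a periodic grid with spacing dx."""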
# Length of vector f
nx = np.size(f)
# Initialize k vector up to Nyquist wavenumber
kmax = np.pi/dx
dk = kmax/(nx/2)
k = np.arange(float(nx))
k[: int(nx/2)] = k[: int(nx/2)] * dk
k[int(nx/2) :] = k[: int(nx/2)] - kmax
# Fourier derivative
ff = np.fft.fft(f); ff = 1j*k*ff
df_num = np.real(np.fft.ifft(ff))
return df_num
# Basic parameters
# ---------------------------------------------------------------
nx = 128
x, dx = np.linspace(2*np.pi/nx, 2*np.pi, nx, retstep=True)
sigma = 0.5
xo = np.pi
# Initialize Gauss function
f = np.exp(-1/sigma**2 * (x - xo)**2)
# Numerical derivative
df_num = fourier_derivative(f, dx)
# Analytical derivative
df_ana = -2*(x-xo)/sigma**2 * np.exp(-1/sigma**2 * (x-xo)**2)
# To make the error visible, it is multiplied by 10^13
df_err = 1e13*(df_ana - df_num)
# Error between analytical and numerical solution
err = np.sum((df_num - df_ana)**2) / np.sum(df_ana**2) * 100
print('Error: %s' %err)
# Plot analytical and numerical derivatives
# ---------------------------------------------------------------
plt.subplot(2,1,1)
plt.plot(x, f, "g", lw = 1.5, label='Gaussian')
plt.legend(loc='upper right', shadow=True)
plt.xlabel('$x$')
plt.ylabel('$f(x)$')
plt.axis([2*np.pi/nx, 2*np.pi, 0, 1])
plt.subplot(2,1,2)
plt.plot(x, df_ana, "b", lw = 1.5, label='Analytical')
plt.plot(x, df_num, 'k--', lw = 1.5, label='Numerical')
plt.plot(x, df_err, "r", lw = 1.5, label='Difference')
plt.legend(loc='upper right', shadow=True)
plt.xlabel('$x$')
plt.ylabel('$\partial_x f(x)$')
plt.axis([2*np.pi/nx, 2*np.pi, -2, 2])
plt.show()
#plt.savefig('Fig_5.9.png')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read in train & val data
Step2: Extract X and Y matrices
Step4: Convert to SystemML Matrices
Step6: Trigger Caching (Optional)
Step8: Save Matrices (Optional)
Step10: Softmax Classifier
Step12: Train
Step14: Eval
Step16: LeNet-like ConvNet
Step18: Hyperparameter Search
Step20: Train
Step22: Eval
| <ASSISTANT_TASK:>
Python Code:
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os
import matplotlib.pyplot as plt
import numpy as np
from pyspark.sql.functions import col, max
import systemml # pip3 install systemml
from systemml import MLContext, dml
plt.rcParams['figure.figsize'] = (10, 6)
ml = MLContext(sc)
# Settings
size=64
grayscale = True
c = 1 if grayscale else 3
p = 0.01
tr_sample_filename = os.path.join("data", "train_{}_sample_{}{}.parquet".format(p, size, "_grayscale" if grayscale else ""))
val_sample_filename = os.path.join("data", "val_{}_sample_{}{}.parquet".format(p, size, "_grayscale" if grayscale else ""))
train_df = sqlContext.read.load(tr_sample_filename)
val_df = sqlContext.read.load(val_sample_filename)
train_df, val_df
tc = train_df.count()
vc = val_df.count()
tc, vc, tc + vc
train_df.select(max(col("__INDEX"))).show()
train_df.groupBy("tumor_score").count().show()
val_df.groupBy("tumor_score").count().show()
# Note: Must use the row index column, or X may not
# necessarily correspond correctly to Y
X_df = train_df.select("__INDEX", "sample")
X_val_df = val_df.select("__INDEX", "sample")
y_df = train_df.select("__INDEX", "tumor_score")
y_val_df = val_df.select("__INDEX", "tumor_score")
X_df, X_val_df, y_df, y_val_df
script = """
# Scale images to [-1,1]
X = X / 255
X_val = X_val / 255
X = X * 2 - 1
X_val = X_val * 2 - 1
# One-hot encode the labels
num_tumor_classes = 3
n = nrow(y)
n_val = nrow(y_val)
Y = table(seq(1, n), y, n, num_tumor_classes)
Y_val = table(seq(1, n_val), y_val, n_val, num_tumor_classes)
"""
outputs = ("X", "X_val", "Y", "Y_val")
script = dml(script).input(X=X_df, X_val=X_val_df, y=y_df, y_val=y_val_df).output(*outputs)
X, X_val, Y, Y_val = ml.execute(script).get(*outputs)
X, X_val, Y, Y_val
# script =
# # Trigger conversions and caching
# # Note: This may take a while, but will enable faster iteration later
# print(sum(X))
# print(sum(Y))
# print(sum(X_val))
# print(sum(Y_val))
#
# script = dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val)
# ml.execute(script)
# script =
# write(X, "data/X_"+p+"_sample_binary", format="binary")
# write(Y, "data/Y_"+p+"_sample_binary", format="binary")
# write(X_val, "data/X_val_"+p+"_sample_binary", format="binary")
# write(Y_val, "data/Y_val_"+p+"_sample_binary", format="binary")
#
# script = dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val, p=p)
# ml.execute(script)
script = """
source("softmax_clf.dml") as clf
# Hyperparameters & Settings
lr = 1e-2 # learning rate
mu = 0.9 # momentum
decay = 0.999 # learning rate decay constant
batch_size = 50
epochs = 500
log_interval = 1
n = 200 # sample size for overfitting sanity check
# Train
[W, b] = clf::train(X[1:n,], Y[1:n,], X[1:n,], Y[1:n,], lr, mu, decay, batch_size, epochs, log_interval)
"""
outputs = ("W", "b")
script = dml(script).input(X=X, Y=Y, X_val=X_val, Y_val=Y_val).output(*outputs)
W, b = ml.execute(script).get(*outputs)
W, b
script = """
source("softmax_clf.dml") as clf
# Hyperparameters & Settings
lr = 5e-7 # learning rate
mu = 0.5 # momentum
decay = 0.999 # learning rate decay constant
batch_size = 50
epochs = 1
log_interval = 10
# Train
[W, b] = clf::train(X, Y, X_val, Y_val, lr, mu, decay, batch_size, epochs, log_interval)
"""
outputs = ("W", "b")
script = dml(script).input(X=X, Y=Y, X_val=X_val, Y_val=Y_val).output(*outputs)
W, b = ml.execute(script).get(*outputs)
W, b
script = """
source("softmax_clf.dml") as clf
# Eval
probs = clf::predict(X, W, b)
[loss, accuracy] = clf::eval(probs, Y)
probs_val = clf::predict(X_val, W, b)
[loss_val, accuracy_val] = clf::eval(probs_val, Y_val)
"""
outputs = ("loss", "accuracy", "loss_val", "accuracy_val")
script = dml(script).input(X=X, Y=Y, X_val=X_val, Y_val=Y_val, W=W, b=b).output(*outputs)
loss, acc, loss_val, acc_val = ml.execute(script).get(*outputs)
loss, acc, loss_val, acc_val
script = """
source("convnet.dml") as clf
# Hyperparameters & Settings
lr = 1e-2 # learning rate
mu = 0.9 # momentum
decay = 0.999 # learning rate decay constant
lambda = 0 #5e-04
batch_size = 50
epochs = 300
log_interval = 1
dir = "models/lenet-cnn/sanity/"
n = 200 # sample size for overfitting sanity check
# Train
[Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2] = clf::train(X[1:n,], Y[1:n,], X[1:n,], Y[1:n,], C, Hin, Win, lr, mu, decay, lambda, batch_size, epochs, log_interval, dir)
"""
outputs = ("Wc1", "bc1", "Wc2", "bc2", "Wc3", "bc3", "Wa1", "ba1", "Wa2", "ba2")
script = (dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val,
C=c, Hin=size, Win=size)
.output(*outputs))
Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2 = ml.execute(script).get(*outputs)
Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2
script = """
source("convnet.dml") as clf
dir = "models/lenet-cnn/hyperparam-search/"
# TODO: Fix `parfor` so that it can be efficiently used for hyperparameter tuning
j = 1
while(j < 2) {
#parfor(j in 1:10000, par=6) {
# Hyperparameter Sampling & Settings
lr = 10 ^ as.scalar(rand(rows=1, cols=1, min=-7, max=-1)) # learning rate
mu = as.scalar(rand(rows=1, cols=1, min=0.5, max=0.9)) # momentum
decay = as.scalar(rand(rows=1, cols=1, min=0.9, max=1)) # learning rate decay constant
lambda = 10 ^ as.scalar(rand(rows=1, cols=1, min=-7, max=-1)) # regularization constant
batch_size = 50
epochs = 1
log_interval = 10
trial_dir = dir + "j/"
# Train
[Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2] = clf::train(X, Y, X_val, Y_val, C, Hin, Win, lr, mu, decay, lambda, batch_size, epochs, log_interval, trial_dir)
# Eval
#probs = clf::predict(X, C, Hin, Win, Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2)
#[loss, accuracy] = clf::eval(probs, Y)
probs_val = clf::predict(X_val, C, Hin, Win, Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2)
[loss_val, accuracy_val] = clf::eval(probs_val, Y_val)
# Save hyperparams
str = "lr: " + lr + ", mu: " + mu + ", decay: " + decay + ", lambda: " + lambda + ", batch_size: " + batch_size
name = dir + accuracy_val + "," + j #+","+accuracy+","+j
write(str, name)
j = j + 1
}
"""
script = (dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val, C=c, Hin=size, Win=size))
ml.execute(script)
script = """
source("convnet.dml") as clf
# Hyperparameters & Settings
lr = 0.00205 # learning rate
mu = 0.632 # momentum
decay = 0.99 # learning rate decay constant
lambda = 0.00385
batch_size = 50
epochs = 1
log_interval = 10
dir = "models/lenet-cnn/train/"
# Train
[Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2] = clf::train(X, Y, X_val, Y_val, C, Hin, Win, lr, mu, decay, lambda, batch_size, epochs, log_interval, dir)
"""
outputs = ("Wc1", "bc1", "Wc2", "bc2", "Wc3", "bc3", "Wa1", "ba1", "Wa2", "ba2")
script = (dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val,
C=c, Hin=size, Win=size)
.output(*outputs))
Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2 = ml.execute(script).get(*outputs)
Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2
script = """
source("convnet.dml") as clf
# Eval
probs = clf::predict(X, C, Hin, Win, Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2)
[loss, accuracy] = clf::eval(probs, Y)
probs_val = clf::predict(X_val, C, Hin, Win, Wc1, bc1, Wc2, bc2, Wc3, bc3, Wa1, ba1, Wa2, ba2)
[loss_val, accuracy_val] = clf::eval(probs_val, Y_val)
"""
outputs = ("loss", "accuracy", "loss_val", "accuracy_val")
script = (dml(script).input(X=X, X_val=X_val, Y=Y, Y_val=Y_val,
C=c, Hin=size, Win=size,
Wc1=Wc1, bc1=bc1,
Wc2=Wc2, bc2=bc2,
Wc3=Wc3, bc3=bc3,
Wa1=Wa1, ba1=ba1,
Wa2=Wa2, ba2=ba2)
.output(*outputs))
loss, acc, loss_val, acc_val = ml.execute(script).get(*outputs)
loss, acc, loss_val, acc_val
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data
Step2: Modal analysis
Step3: Ordering of the modes
Step4: Normalization of the vibration modes with respect to the mass matrix
Step5: Modal mass and stiffness
Step6: Dynamic excitation
Step7: Helper function for computing the modal response
Step8: Classical solution
Step9: Alternative solution
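The helper function mentioned above implements the standard steady-state modal response under harmonic loading — a sketch, with $\beta = \omega_p/\omega_n$ the frequency ratio and $\zeta_n$ the damping ratio used in the code:
$$q_n(t) = \frac{G_n}{K_n}\,\frac{(1-\beta^2)\sin(\omega_p t) - 2\zeta_n\beta\cos(\omega_p t)}{(1-\beta^2)^2 + (2\zeta_n\beta)^2}$$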
| <ASSISTANT_TASK:>
Python Code:
import sys
import math
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
print('System: {}'.format(sys.version))
for package in (np, mpl):
print('Package: {} {}'.format(package.__name__, package.__version__))
MM = np.matrix(np.diag([1.,2.,3.]))
KK = np.matrix([[2,-1,0],[-1,2,-1],[0,-1,1]])*1000.
print(MM)
print(KK)
W2, F1 = np.linalg.eig(KK.I@MM)
print(W2)
print(F1)
ix = np.argsort(W2)[::-1]
W2 = 1./W2[ix]
F1 = F1[:,ix]
Wn = np.sqrt(W2)
print(W2)
print(Wn)
print(F1)
Fn = F1/np.sqrt(np.diag(F1.T@MM@F1))
print(Fn)
Mn = np.diag(Fn.T@MM@Fn)
Kn = np.diag(Fn.T@KK@Fn)
print(Mn)
print(Kn)
sp = np.matrix([[0.], [1.], [0.]])
wp = 2.*np.pi*4.
print(sp)
print(wp)
tt = np.arange(1000)*0.005
pt = np.sin(wp*tt)
plt.figure()
plt.plot(tt, pt)
plt.xlabel('Tempo (s)')
plt.ylabel('Força (kN)')
plt.show()
def qn(amplitude, wp, beta, zn, tt):
    """Compute the modal response of mode n."""
qn_t = amplitude * ((1.-beta**2)*np.sin(wp*tt)-2.*zn*beta*np.cos(wp*tt))/((1.-beta**2)**2+(2.*zn*beta)**2)
return qn_t
Gn = np.diag(1./Mn)@Fn.T@sp
print(Gn)
qn_t = []
plt.figure()
for n in range(3):
an = Gn[n]/Kn[n]
bn = wp/Wn[n]
q = qn(an[0,0], wp, bn, 0.05, tt)
qn_t.append(q)
plt.plot(tt, q, label='an={:.2e},bn={:.2f}'.format(an[0,0], bn))
plt.legend()
plt.xlabel('Tempo (s)')
plt.ylabel('Deslocamento modal (m)')
plt.show()
plt.figure()
u_t = Fn@qn_t
for n in range(3):
plt.plot(tt, u_t[n].T, label='{:.2e}'.format(np.max(u_t[n])))
plt.legend()
plt.xlabel('Tempo (s)')
plt.ylabel('Deslocamento (m)')
plt.show()
ust = KK.I@sp
print(ust)
qn_st_t = []
plt.figure()
for n in range(3):
an = Fn.T[n]@MM@ust/Mn[n]
bn = wp/Wn[n]
qst = qn(an[0,0], wp, bn, 0.05, tt)
qn_st_t.append(qst)
plt.plot(tt, qst, label='an={:.2e},bn={:.2f}'.format(an[0,0], bn))
plt.legend()
plt.xlabel('Tempo (s)')
plt.ylabel('Deslocamento modal (m)')
plt.show()
plt.figure()
u_t = Fn@qn_st_t
for n in range(3):
plt.plot(tt, u_t[n].T, label='{:.2e}'.format(np.max(u_t[n])))
plt.legend()
plt.xlabel('Tempo (s)')
plt.ylabel('Deslocamento (m)')
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
| <ASSISTANT_TASK:>
Python Code::
import tensorflow as tf
from tensorflow.keras.losses import MeanAbsoluteError
y_true = [1., 0.]
y_pred = [2., 3.]
mae_loss = MeanAbsoluteError()
loss = mae_loss(y_true, y_pred).numpy()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Date string for filenames
Step2: Load data
Step3: Because the 2017 facility dataframe only includes annually reporting facilities I'm going to duplicate the plant id, lat/lon, and state information from 2016.
Step4: Load known NERC labels from EIA-860
Step5: I want to assign NERC regions for every year. We have data for 2012 onward from the EIA-860 files. For the purpose of this analysis I'll assume that all years from 2001-2011 are the same NERC as 2012.
Step6: Look for plants listed with different NERC labels
Step7: Some plants in EIA-860 don't have NERC labels. Drop them now.
Step8: Load EIA-860m for some info on recent facilities
Step9: Make lists of plant codes for SPP and TRE facilities
Step10: Append my 2017 SPP and TRE guesses to the full nerc dataframe
Step11: Clean and prep data for KNN
Step12: Checked to make sure the type of merge doesn't matter once rows without nerc values are dropped
Step13: Drop plants that don't have lat/lon data (using just lon to check), and then drop duplicates. If any plants have kept the same plant id but moved over time (maybe a diesel generator?) or switched NERC they will show up twice.
Step14: Separate out the list of plants where we don't have NERC labels from EIA-860.
Step15: Create X and y matrices
Step16: GridSearch to find the best parameters in a RandomForest Classifier
Step17: Accuracy score by region
Step18: F1 score by region
Step19: Plants without lat/lon
Step20: Encode state names as numbers for use in sklearn
Step21: Accuracy score by region
Step22: F1 score by region
Step23: Use best RandomForest parameters to predict NERC for unknown plants
Step24: Ensuring that no plants in Alaska or Hawaii are assigned to continental NERCs, or the other way around.
Step25: Export plants with lat/lon, state, and nerc
Step26: There are 7 facilities that don't show up in my labeled data.
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import os
from os.path import join
import pandas as pd
from sklearn import neighbors, metrics
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split, GridSearchCV
from collections import Counter
from copy import deepcopy
cwd = os.getcwd()
data_path = join(cwd, '..', 'Data storage')
file_date = '2018-03-06'
most_recent_860_year = 2016
path = os.path.join(data_path, 'Derived data',
'Facility gen fuels and CO2 {}.csv'.format(file_date))
facility_df = pd.read_csv(path)
facility_df['state'] = facility_df['geography'].str[-2:]
plants = facility_df.loc[:, ['plant id', 'year', 'lat', 'lon', 'state']]
plants.drop_duplicates(inplace=True)
# make a copy of 2016 (or most recent annual data year) and change the year to
plants_2017 = plants.loc[plants['year'] == most_recent_860_year, :].copy()
plants_2017.loc[:, 'year'] += 1
plants = pd.concat([plants.loc[plants['year']<=most_recent_860_year, :], plants_2017])
(set(plants.loc[plants.year==2016, 'plant id']) - set(plants.loc[plants.year==2017, 'plant id']))
eia_base_path = join(data_path, 'EIA downloads')
file_860_info = {
# 2011: {'io': join(eia_base_path, 'eia8602011', 'Plant.xlsx'),
# 'skiprows': 0,
# 'parse_cols': 'B,J'},
2012: {'io': join(eia_base_path, 'eia8602012', 'PlantY2012.xlsx'),
'skiprows': 0,
'usecols': 'B,J'},
2013: {'io': join(eia_base_path, 'eia8602013', '2___Plant_Y2013.xlsx'),
'skiprows': 0,
'usecols': 'C,L'},
2014: {'io': join(eia_base_path, 'eia8602014', '2___Plant_Y2014.xlsx'),
'skiprows': 0,
'usecols': 'C,L'},
2015: {'io': join(eia_base_path, 'eia8602015', '2___Plant_Y2015.xlsx'),
'skiprows': 0,
'usecols': 'C,L'},
2016: {'io': join(eia_base_path, 'eia8602016', '2___Plant_Y2016.xlsx'),
'skiprows': 0,
'usecols': 'C,L'}
}
eia_nercs = {}
for key, args in file_860_info.items():
eia_nercs[key] = pd.read_excel(**args)
eia_nercs[key].columns = ['plant id', 'nerc']
eia_nercs[key]['year'] = key
for year in range(2001, 2012):
# the pandas .copy() method is deep by default but I'm not sure in this case
df = deepcopy(eia_nercs[2012])
df['year'] = year
eia_nercs[year] = df
df = deepcopy(eia_nercs[2016])
df['year'] = 2017
eia_nercs[2017] = df
eia_nercs.keys()
eia_nercs[2001].head()
nercs = pd.concat(eia_nercs.values())
nercs.sort_values('year', inplace=True)
nercs.head()
(set(nercs.loc[(nercs.nerc == 'MRO') &
(nercs.year == 2016), 'plant id'])
- set(nercs.loc[(nercs.nerc == 'MRO') &
(nercs.year == 2017), 'plant id']))
nercs.year.unique()
for df_ in list(eia_nercs.values()) + [nercs]:
print('{} total records'.format(len(df_)))
print('{} unique plants'.format(len(df_['plant id'].unique())))
dup_plants = nercs.loc[nercs['plant id'].duplicated(keep=False), 'plant id'].unique()
dup_plants
region_list = []
for plant in dup_plants:
regions = nercs.loc[nercs['plant id'] == plant, 'nerc'].unique()
# regions = regions.tolist()
region_list.append(regions)
Counter(tuple(x) for x in region_list)
(facility_df.loc[facility_df['plant id'].isin(dup_plants), :]
.groupby('year')['generation (MWh)'].sum()
/ facility_df.loc[:, :]
.groupby('year')['generation (MWh)'].sum())
nan_plants = {}
all_nan = []
years = nercs.year.unique()
for year in years:
nan_plants[year] = nercs.loc[(nercs.year == year) &
(nercs.isnull().any(axis=1)), 'plant id'].tolist()
all_nan.extend(nan_plants[year])
# number of plants that don't have a nerc in at least one year
len(all_nan)
# drop all the rows without a nerc value
nercs.dropna(inplace=True)
nan_plants[2017]
path = join(data_path, 'EIA downloads', 'december_generator2017.xlsx')
# Check the excel file columns if there is a read error. They should match
# the plant id, plant state, operating year, and balancing authority code.
_m860 = pd.read_excel(path, sheet_name='Operating',skip_footer=1,
usecols='C,F,P,AE', skiprows=0)
_m860.columns = _m860.columns.str.lower()
# most_recent_860_year is defined at the top of this notebook
# The goal here is to only look at plants that started operating after
# the most recent annual data. So only include units starting after
# the last annual data and that don't have plant ids in the nercs
# dataframe
m860 = _m860.loc[(_m860['operating year'] > most_recent_860_year)].copy() #&
# (~_m860['plant id'].isin(nercs['plant id'].unique()))].copy()
m860.tail()
m860.loc[(m860['plant state'].isin(['TX', 'OK'])) &
(m860['balancing authority code'] == 'SWPP'), 'nerc'] = 'SPP'
m860.loc[(m860['plant state'].isin(['TX'])) &
(m860['balancing authority code'] == 'ERCO'), 'nerc'] = 'TRE'
# Drop all rows except the ones I've labeled as TRE or SPP
m860.dropna(inplace=True)
m860.head()
nercs.head()
# Create additional dataframes with 2017 SPP and TRE plants.
# Use these to fill in values for 2017 plants
m860_spp_plants = (m860.loc[m860['nerc'] == 'SPP', 'plant id']
.drop_duplicates()
.reset_index(drop=True))
additional_spp = pd.DataFrame(m860_spp_plants.copy())
# additional_spp['plant id'] = m860_spp_plants
additional_spp['nerc'] = 'SPP'
additional_spp['year'] = 2017
m860_tre_plants = (m860.loc[m860['nerc'] == 'TRE', 'plant id']
.drop_duplicates()
.reset_index(drop=True))
additional_tre = pd.DataFrame(m860_tre_plants)
# additional_tre['plant id'] = m860_tre_plants
additional_tre['nerc'] = 'TRE'
additional_tre['year'] = 2017
additional_spp
additional_tre
nercs = pd.concat([nercs, additional_spp, additional_tre])
plants.head()
nercs.tail()
df = pd.merge(plants, nercs, on=['plant id', 'year'], how='left')
omitted = set(df['plant id'].unique()) - set(nercs['plant id'].unique())
df.head()
df.tail()
df.columns
cols = ['plant id', 'lat', 'lon', 'nerc', 'state', 'year']
df_slim = (df.loc[:, cols].dropna(subset=['lon'])
.drop_duplicates(subset=['plant id', 'year', 'nerc']))
df_slim.tail()
unknown = df_slim.loc[df_slim.nerc.isnull()].copy()
print("{} plants/years don't have NERC labels\n".format(len(unknown)))
print(unknown.head())
unknown.tail()
X = df_slim.loc[df_slim.notnull().all(axis=1), ['lat', 'lon', 'year']]
y = df_slim.loc[df_slim.notnull().all(axis=1), 'nerc']
len(X)
# Make sure that unknown and X include all records from df_slim
len(X) + len(unknown) - len(df_slim)
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.33, random_state=42)
from sklearn.ensemble import RandomForestClassifier
rf = RandomForestClassifier()
params = dict(
n_estimators = [5, 10, 25, 50],
min_samples_split = [2, 5, 10],
min_samples_leaf = [1, 3, 5],
)
clf_rf = GridSearchCV(rf, params, n_jobs=-1, iid=False, verbose=1)
clf_rf.fit(X_train, y_train)
clf_rf.best_estimator_, clf_rf.best_score_
clf_rf.score(X_test, y_test)
nerc_labels = nercs.nerc.dropna().unique()
for region in nerc_labels:
mask = y_test == region
X_masked = X_test[mask]
y_hat_masked = clf_rf.predict(X_masked)
y_test_masked = y_test[mask]
accuracy = metrics.accuracy_score(y_test_masked, y_hat_masked)
print('{} : {}'.format(region, accuracy))
y_hat = clf_rf.predict(X_test)
for region in nerc_labels:
f1 = metrics.f1_score(y_test, y_hat, labels=[region], average='macro')
print('{} : {}'.format(region, f1))
cols = ['plant id', 'nerc', 'state', 'year', 'lon']
df_state_slim = (df.loc[:, cols].dropna(subset=['state']).copy())
df_state_slim.head()
len(df_state_slim)
le = LabelEncoder()
df_state_slim.loc[:, 'enc state'] = le.fit_transform(df_state_slim.loc[:, 'state'].tolist())
len(df_state_slim)
unknown_state = df_state_slim.loc[(df_state_slim.nerc.isnull()) &
(df_state_slim.lon.isnull())].copy()
len(unknown_state), len(unknown)
X_state = df_state_slim.loc[df_state_slim.notnull().all(axis=1), ['enc state', 'year']].copy()
y_state = df_state_slim.loc[df_state_slim.notnull().all(axis=1), 'nerc'].copy()
X_state_train, X_state_test, y_state_train, y_state_test = train_test_split(
X_state, y_state, test_size=0.33, random_state=42)
rf = RandomForestClassifier()
params = dict(
n_estimators = [5, 10, 25, 50],
min_samples_split = [2, 5, 10],
min_samples_leaf = [1, 3, 5],
)
clf_rf_state = GridSearchCV(rf, params, n_jobs=-1, iid=False, verbose=1)
clf_rf_state.fit(X_state_train, y_state_train)
clf_rf_state.best_estimator_, clf_rf_state.best_score_
clf_rf_state.score(X_state_test, y_state_test)
nerc_labels = nercs.nerc.dropna().unique()
for region in nerc_labels:
mask = y_state_test == region
X_state_masked = X_state_test[mask]
y_state_hat_masked = clf_rf_state.predict(X_state_masked)
y_state_test_masked = y_state_test[mask]
accuracy = metrics.accuracy_score(y_state_test_masked, y_state_hat_masked)
print('{} : {}'.format(region, accuracy))
y_state_hat = clf_rf_state.predict(X_state_test)
for region in nerc_labels:
f1 = metrics.f1_score(y_state_test, y_state_hat, labels=[region], average='macro')
print('{} : {}'.format(region, f1))
unknown.loc[:, 'nerc'] = clf_rf.predict(unknown.loc[:, ['lat', 'lon', 'year']])
unknown_state.loc[:, 'nerc'] = clf_rf_state.predict(unknown_state.loc[:, ['enc state', 'year']])
print(unknown.loc[unknown.state.isin(['AK', 'HI']), 'nerc'].unique())
print(unknown.loc[unknown.nerc.isin(['HICC', 'ASCC']), 'state'].unique())
Counter(unknown['nerc'])
unknown.head()
unknown_state.head()
nercs.tail()
unknown.head()
unknown_state.tail()
len(unknown_state['plant id'].unique())
df_slim.head()
labeled = pd.concat([df_slim.loc[df_slim.notnull().all(axis=1)],
unknown,
unknown_state.loc[:, ['plant id', 'nerc', 'state', 'year']]])
labeled.tail()
labeled.loc[labeled.nerc.isnull()]
facility_df.loc[~facility_df['plant id'].isin(labeled['plant id']), 'plant id'].unique()
len(labeled), len(nercs)
nerc_labels
mro_2016 = set(labeled.loc[(labeled.nerc == 'MRO') &
(labeled.year == 2016), 'plant id'])
mro_2017 = set(labeled.loc[(labeled.nerc == 'MRO') &
(labeled.year == 2017), 'plant id'])
(set(nercs.loc[(nercs.nerc=='MRO') &
(nercs.year==2017),'plant id'])
- mro_2017)
for nerc in nerc_labels:
l = len((set(labeled.loc[(labeled.nerc == nerc) &
(labeled.year == 2016), 'plant id'])
- set(labeled.loc[(labeled.nerc == nerc) &
(labeled.year == 2017), 'plant id'])))
print('{} plants dropped in {}'.format(l, nerc))
(set(labeled.loc[(labeled.nerc == 'MRO') &
(labeled.year == 2016), 'plant id'])
- set(labeled.loc[(labeled.nerc == 'MRO') &
(labeled.year == 2017), 'plant id']))
path = join(data_path, 'Facility labels', 'Facility locations_RF.csv')
labeled.to_csv(path, index=False)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Noisy Data
Step2: Model Fitting
Step3: Fit result from an lmfit Model can be inspected with methods such as fit_report() and params.pretty_print()
Step4: These methods are convenient, but extracting the data for further processing is easier with pybroom
Step5: The glance function returns one row summarizing the whole fit
Step6: The tidy function returns one row for each parameter.
Step7: Augment
Step8: The augment function returns one row for each data point.
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
from numpy import sqrt, pi, exp, linspace
from lmfit import Model
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format='retina' # for hi-dpi displays
import lmfit
print('lmfit: %s' % lmfit.__version__)
import pybroom as br
x = np.linspace(-10, 10, 101)
peak1 = lmfit.models.GaussianModel(prefix='p1_')
peak2 = lmfit.models.GaussianModel(prefix='p2_')
model = peak1 + peak2
params = model.make_params(p1_amplitude=1, p2_amplitude=1,
p1_sigma=1, p2_sigma=1)
y_data = model.eval(x=x, p1_center=-1, p2_center=2, p1_sigma=0.5, p2_sigma=1, p1_amplitude=1, p2_amplitude=2)
y_data.shape
y_data += np.random.randn(*y_data.shape)/10
plt.plot(x, y_data)
params = model.make_params(p1_center=0, p2_center=3,
p1_sigma=0.5, p2_sigma=1,
p1_amplitude=1, p2_amplitude=2)
result = model.fit(y_data, x=x, params=params)
print(result.fit_report())
result.params.pretty_print()
dg = br.glance(result)
dg.drop('model', 1).drop('message', 1)
dt = br.tidy(result)
dt
dt.loc[dt.name == 'p1_center']
da = br.augment(result)
da.head()
d = br.augment(result)
fig, ax = plt.subplots(2, 1, figsize=(7, 8))
ax[1].plot('x', 'data', data=d, marker='o', ls='None')
ax[1].plot('x', "Model(gaussian, prefix='p1_')", data=d, lw=2, ls='--')
ax[1].plot('x', "Model(gaussian, prefix='p2_')", data=d, lw=2, ls='--')
ax[1].plot('x', 'best_fit', data=d, lw=2)
ax[0].plot('x', 'residual', data=d);
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: aws cli
Step2: IAM (identity and access management)
Step3: Find cheapest prices
Step4: Finding AMIs
| <ASSISTANT_TASK:>
Python Code:
import boto3
s3=boto3.client('s3')
list=s3.list_objects(Bucket='mert01')['Contents']
list[1:3]
# https://stackoverflow.com/questions/3337912/quick-way-to-list-all-files-in-amazon-s3-bucket
import boto
s3 = boto.connect_s3()
bucket = s3.get_bucket('mert01')
#bl = bucket.list()
#for key in bucket.list():
# print(key.name)
import itertools
for el in itertools.islice(bl, 0, 3):
print(el.name)
# len(bl)
i = 0
for key in bucket.list():
i = i + 1
print(i)
bl2 = bucket.list(prefix="201")
for el in itertools.islice(bl2, 0, 3):
print(el.name)
# https://stackoverflow.com/questions/10054985/how-to-delete-files-recursively-from-an-s3-bucket#18698235
result = bucket.delete_keys([key.name for key in bl2])
result
!aws configure help
!aws configure list
!cat ~/.aws/config
!export AWS_DEFAULT_REGION=us-west-1
!aws ec2 describe-volumes
!aws iam list-groups
!aws iam list-attached-group-policies --group-name iterative
!aws iam list-users
!awespottr c4.xlarge
!export AWS_DEFAULT_REGION=us-east-2
!aws ec2 describe-spot-price-history --availability-zone "${AWS_DEFAULT_REGION}b" --product-description "Linux/UNIX" --instance-types c4.xlarge --start-time `date -u --date="7 days ago" +'%Y-%m-%dT%H:%M:00'` | jq -r -c '.SpotPriceHistory[] | (.SpotPrice)' | head -n 20
!aws ec2 describe-regions
!aws ec2 describe-availability-zones --region us-east-2
!aws ec2 describe-images --owners self amazon --filters "Name=root-device-type,Values=ebs" > data/ex_aws01.json
!cat data/ex_aws01.json | head -n 30
!jq '{ami: .Images[].ImageId}' data/ex_aws01.json | head -n 8
!jq '{ami: [.Images[].ImageId]}' data/ex_aws01.json | head -n 5
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: And the distribution of time between goals is given by the exponential distribution
Step2: I use the variable lam because lambda is a reserved keyword in Python
Step3: Figure 7.1
Step4: I chose the upper bound, 10, because the probability of scoring more than 10 goals in a game is negligible
Step5: Figure 7.2
Step6: The subtraction operator invokes Pmf.__sub__, which enumerates pairs of values and computes the distribution of their differences
Step7: With the distributions from the previous section, p_win is 46%, with the loss and tie probabilities computed alongside it below
Step8: high is the upper bound of the distribution. In this case I chose 2, since the chance of going more than two games between goals is negligible
Step9: Figure 7.3 below shows the resulting distributions. For time values
Step10: For the Bruins, the probability of winning in overtime is 52%.
Step11: Figure 7.3
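For reference (a sketch): the two helper functions below implement the Poisson pmf and the exponential pdf, and the subtraction in Step 6 forms the distribution of the goal difference:
$$P(k\mid\lambda)=\frac{\lambda^k e^{-\lambda}}{k!},\qquad f(x\mid\lambda)=\lambda e^{-\lambda x},\qquad P(D=d)=\sum_x P_1(x)\,P_2(x-d)$$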
| <ASSISTANT_TASK:>
Python Code:
import math
import thinkbayes  # used below for MakePoissonPmf, MakeExponentialPmf, MakeMixture and PmfProbLess

def EvalPoissonPmf(k, lam):
return (lam)**k * math.exp(-lam) / math.factorial(k)
def EvalExponentialPdf(x, lam):
return lam * math.exp(-lam * x)
from hockey import *
import thinkplot
suite1 = Hockey('bruins')
suite1.UpdateSet([0, 2, 8, 4])
suite2 = Hockey('canucks')
suite2.UpdateSet([1, 3, 1, 0])
thinkplot.PrePlot(num=2)
thinkplot.Pmf(suite1)
thinkplot.Pmf(suite2)
lam = 3.4
goal_dist = thinkbayes.MakePoissonPmf(lam, 10)
goal_dist1 = MakeGoalPmf(suite1)
goal_dist2 = MakeGoalPmf(suite2)
thinkplot.Clf()
thinkplot.PrePlot(num=2)
thinkplot.Pmf(goal_dist1)
thinkplot.Pmf(goal_dist2)
goal_dist1 = MakeGoalPmf(suite1)
goal_dist2 = MakeGoalPmf(suite2)
diff = goal_dist1 - goal_dist2
p_win = diff.ProbGreater(0)
p_loss = diff.ProbLess(0)
p_tie = diff.Prob(0)
print(p_win)
print(p_loss)
print(p_tie)
lam = 3.4
time_dist = thinkbayes.MakeExponentialPmf(lam, high=2, n=101)
def MakeGoalTimePmf(suite):
metapmf = thinkbayes.Pmf()
for lam, prob in suite.Items():
pmf = thinkbayes.MakeExponentialPmf(lam, high=2, n=2001)
metapmf.Set(pmf, prob)
mix = thinkbayes.MakeMixture(metapmf, name=suite.name)
return mix
time_dist1 = MakeGoalTimePmf(suite1)
time_dist2 = MakeGoalTimePmf(suite2)
p_overtime = thinkbayes.PmfProbLess(time_dist1, time_dist2)
p_tie = diff.Prob(0)
p_overtime = thinkbayes.PmfProbLess(time_dist1, time_dist2)
p_win = diff.ProbGreater(0) + p_tie * p_overtime
import matplotlib.pyplot as plt
thinkplot.PrePlot(num=2)
thinkplot.Pmf(time_dist1)
thinkplot.Pmf(time_dist2)
plt.legend();
# win the next two
p_series = p_win**2
# split the next two, win the third
p_series += 2 * p_win * (1-p_win) * p_win
p_series
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We explore the Lorenz system of differential equations
Step2: For the default set of parameters, we see the trajectories swirling around two points, called attractors.
Step3: After interacting with the system, we can take the result and perform further computations. In this case, we compute the average positions in \(x\), \(y\) and \(z\).
Step4: Histograms of the average positions (across different trajectories) show that, on average, the trajectories swirl about the attractors.
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from ipywidgets import interactive, fixed
from lorenz import solve_lorenz
w=interactive(solve_lorenz,sigma=(0.0,50.0),rho=(0.0,50.0))
w
t, x_t = w.result
w.kwargs
xyz_avg = x_t.mean(axis=1)
xyz_avg.shape
from matplotlib import pyplot as plt
plt.hist(xyz_avg[:,0])
plt.title('Average $x(t)$');
plt.hist(xyz_avg[:,1])
plt.title('Average $y(t)$');
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The poly1d class represents one-dimensional polynomials in terms of their coefficients. Consider the polynomial $p(x) = 6 x^2 + x - 2$
Step2: The value of $p(x)$ can be evaluated
Step3: The order of the polynomial can be determined with
Step4: and its roots
Step5: And what about a polynomial with complex roots, $p2(x) = 2 x^2 -3 x +7$
Step6: The polynomial can be plotted
Step7: Curve fitting with polynomials
Step8: Given a set of x, y values, the polynomial of best fit can be determined
Step9: Plot the observations and the best-fit polynomial
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
p = np.poly1d([6., 1., -2.])
p(0), p(1), p(10)
p.order
p.roots
# Checking the roots
p(p.roots)
np.poly1d([2.,-3.,7.]).roots
import matplotlib.pyplot as plt
xvalues = np.linspace(-10.,10.,100)
plt.plot(xvalues, p(xvalues), '-')
plt.show()
y = p(xvalues) + np.random.randn(xvalues.size)*p(xvalues).std()/2
# Try to fit a degree-3 polynomial
np.polyfit(xvalues,y,deg=3)
# Try to fit a degree-2 polynomial
p3 = np.poly1d( np.polyfit(xvalues,y,deg=2) )
p3
plt.plot(xvalues, y, 'xr', xvalues, p3(xvalues), '-b')
plt.show()
<END_TASK> |