text_prompt | code_prompt
---|---|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Statistical analysis on Allsides bias rating
Step2: from the stats page "For normally distributed data, the skewness should be about 0. A skewness value > 0 means that there is more weight in the left tail of the distribution. The function skewtest can be used to determine if the skewness value is close enough to 0, statistically speaking."
Step3: Donald Trump Image Box images versus Baseline images source bias according to Facebook bias ratings
Step4: The Kolmogorov-Smirnov analysis shows that the distribution of political representation across image sources is different between the baseline images and those found in the image box.
Step5: Prepare data for chi squared test
Step6: Remove Unknown / unreliable row
Step7: Calculate percentages for plotting purposes
Step8: Test Hillary Clinton Image Box images against Baseline images
Step9: Test Donald Trump Image Box images against Baseline images
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
%matplotlib inline
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (15, 3)
plt.rcParams['font.family'] = 'sans-serif'
pd.set_option('display.width', 5000)
pd.set_option('display.max_columns', 60)
HC_baseline = pd.read_csv('./BASELINE/HC_baseline_full_ratings.csv')
DT_baseline = pd.read_csv('./BASELINE/DT_baseline_full_ratings.csv')
HC_imagebox = pd.read_csv('./IMAGE_BOX/HC_imagebox_full_ratings.csv')
DT_imagebox = pd.read_csv('./IMAGE_BOX/DT_imagebox_full_ratings.csv')
print("Baseline skew: ", stats.skew(HC_baseline.facebookbias_rating[HC_baseline.facebookbias_rating<3]))
print("Image Box skew: ", stats.skew(HC_imagebox.facebookbias_rating[HC_imagebox.facebookbias_rating<3]))
print("Baseline skew: ", stats.skewtest(HC_baseline.facebookbias_rating[HC_baseline.facebookbias_rating<3]))
print("Image Box skew: ", stats.skewtest(HC_imagebox.facebookbias_rating[HC_imagebox.facebookbias_rating<3]))
stats.ks_2samp(HC_baseline.facebookbias_rating[HC_baseline.facebookbias_rating<3],
HC_imagebox.facebookbias_rating[HC_imagebox.facebookbias_rating<3])
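# Hedged illustration (my addition, not part of the original notebook): the KS result can be
# unpacked into its statistic and p-value; a small p-value supports the Step 4 claim that the
# two bias distributions differ.
ks_stat, ks_pvalue = stats.ks_2samp(HC_baseline.facebookbias_rating[HC_baseline.facebookbias_rating<3],
                                    HC_imagebox.facebookbias_rating[HC_imagebox.facebookbias_rating<3])
print("KS statistic: {:.3f}, p-value: {:.3g}".format(ks_stat, ks_pvalue))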
HC_imagebox.facebookbias_rating.plot.hist(alpha=0.5, bins=20, range=(-1,1), color='blue')
HC_baseline.facebookbias_rating.plot.hist(alpha=0.5, bins=20, range=(-1,1), color='green')
print("Baseline skew: ", stats.skew(DT_baseline.facebookbias_rating[DT_baseline.facebookbias_rating<3]))
print("Image Box skew: ", stats.skew(DT_imagebox.facebookbias_rating[DT_imagebox.facebookbias_rating<3]))
stats.ks_2samp(DT_baseline.facebookbias_rating[DT_baseline.facebookbias_rating<3],
DT_imagebox.facebookbias_rating[DT_imagebox.facebookbias_rating<3])
DT_imagebox.facebookbias_rating.plot.hist(alpha=0.5, bins=20, range=(-1,1), color='red')
DT_baseline.facebookbias_rating.plot.hist(alpha=0.5, bins=20, range=(-1,1), color='green')
print("Number of missing ratings for Hillary Clinton Baseline data: ", len(HC_baseline[HC_baseline.facebookbias_rating == 999]))
print("Number of missing ratings for Hillary Clinton Image Box data: ", len(HC_imagebox[HC_imagebox.facebookbias_rating == 999]))
print("Number of missing ratings for Donald Trump Baseline data: ", len(DT_baseline[DT_baseline.facebookbias_rating == 999]))
print("Number of missing ratings for Donald Trump Image Box data: ", len(DT_baseline[DT_imagebox.facebookbias_rating == 999]))
def convert_to_ints(col):
if col == 'Left':
return -1
elif col == 'Center':
return 0
elif col == 'Right':
return 1
else:
return np.nan
HC_imagebox['final_rating_ints'] = HC_imagebox.final_rating.apply(convert_to_ints)
DT_imagebox['final_rating_ints'] = DT_imagebox.final_rating.apply(convert_to_ints)
HC_baseline['final_rating_ints'] = HC_baseline.final_rating.apply(convert_to_ints)
DT_baseline['final_rating_ints'] = DT_baseline.final_rating.apply(convert_to_ints)
HC_imagebox.final_rating_ints.value_counts()
DT_imagebox.final_rating_ints.value_counts()
HC_baseline_counts = HC_baseline.final_rating.value_counts()
HC_imagebox_counts = HC_imagebox.final_rating.value_counts()
DT_baseline_counts = DT_baseline.final_rating.value_counts()
DT_imagebox_counts = DT_imagebox.final_rating.value_counts()
HC_baseline_counts.head()
normalised_bias_ratings = pd.DataFrame({'HC_ImageBox':HC_imagebox_counts,
'HC_Baseline' : HC_baseline_counts,
'DT_ImageBox': DT_imagebox_counts,
'DT_Baseline': DT_baseline_counts} )
normalised_bias_ratings
normalised_bias_ratings = normalised_bias_ratings[:3]
normalised_bias_ratings.loc[:,'HC_Baseline_pcnt'] = normalised_bias_ratings.HC_Baseline/normalised_bias_ratings.HC_Baseline.sum()*100
normalised_bias_ratings.loc[:,'HC_ImageBox_pcnt'] = normalised_bias_ratings.HC_ImageBox/normalised_bias_ratings.HC_ImageBox.sum()*100
normalised_bias_ratings.loc[:,'DT_Baseline_pcnt'] = normalised_bias_ratings.DT_Baseline/normalised_bias_ratings.DT_Baseline.sum()*100
normalised_bias_ratings.loc[:,'DT_ImageBox_pcnt'] = normalised_bias_ratings.DT_ImageBox/normalised_bias_ratings.DT_ImageBox.sum()*100
normalised_bias_ratings
normalised_bias_ratings.columns
HC_percentages = normalised_bias_ratings[['HC_Baseline_pcnt', 'HC_ImageBox_pcnt']]
DT_percentages = normalised_bias_ratings[['DT_Baseline_pcnt', 'DT_ImageBox_pcnt']]
stats.chisquare(f_exp=normalised_bias_ratings.HC_Baseline,
f_obs=normalised_bias_ratings.HC_ImageBox)
HC_percentages.plot.bar()
stats.chisquare(f_exp=normalised_bias_ratings.DT_Baseline,
f_obs=normalised_bias_ratings.DT_ImageBox)
DT_percentages.plot.bar()
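# Hedged aside (my addition, an assumption about test setup rather than the author's method):
# the chi-squared test assumes the expected frequencies sum to the same total as the observed
# ones, so one option is to rescale the baseline counts before calling stats.chisquare.
expected_HC = (normalised_bias_ratings.HC_Baseline /
               normalised_bias_ratings.HC_Baseline.sum() *
               normalised_bias_ratings.HC_ImageBox.sum())
stats.chisquare(f_obs=normalised_bias_ratings.HC_ImageBox, f_exp=expected_HC)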
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Line plot of sunspot data
Step2: Use np.loadtxt to read the data into a NumPy array called data. Then create two new 1d NumPy arrays named years and ssc that have the sequence of year and sunspot counts.
Step3: Make a line plot showing the sunspot count as a function of year.
Step4: Describe the choices you have made in building this visualization and how they make it effective.
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import os
assert os.path.isfile('yearssn.dat')
data = np.loadtxt('yearssn.dat')
years = np.array(data[:, 0])
ssc = np.array(data[:, 1])
assert len(years)==315
assert years.dtype==np.dtype(float)
assert len(ssc)==315
assert ssc.dtype==np.dtype(float)
plt.figure(figsize=(20,1.5))
plt.plot(years,ssc,'r-')
plt.ylabel('sunspots')
plt.xlabel('year')
plt.title('Sunspots')
plt.xlim(1700,2015)
plt.ylim(0,200)
assert True # leave for grading
plt.figure(figsize=(12,12))
# One subplot per century; the slices are aligned so that no year is dropped.
plt.subplot(4,1,1)
plt.plot(years[:100], ssc[:100], 'r-')
plt.ylim(0,200)
plt.title('Sunspots/Year')
plt.subplot(4,1,2)
plt.plot(years[100:200], ssc[100:200], 'r-')
plt.ylim(0,200)
plt.subplot(4,1,3)
plt.plot(years[200:300], ssc[200:300], 'r-')
plt.ylim(0,200)
plt.subplot(4,1,4)
plt.plot(years[300:], ssc[300:], 'r-')
plt.ylim(0,200)
plt.xlim(2000,2100)
plt.tight_layout()
assert True # leave for grading
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sparse 2d interpolation
Step2: The following plot should show the points on the boundary and the single point in the interior
Step3: Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain
Step4: Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful.
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
sns.set_style('white')
from scipy.interpolate import griddata
# x-coordinates: the single interior point at the origin, then the top/bottom edges
# (x = -4..4, each appearing twice), then the left (x = -5) and right (x = 5) edges.
x = np.empty((1,), dtype=int)
x[0] = 0
for i in range(-4, 5):
    x = np.hstack((x, np.array((i, i))))
x = np.hstack((x, np.array([-5]*11)))
x = np.hstack((x, np.array([5]*11)))
# Matching y-coordinates: 0 for the interior point, alternating +/-5 for the top/bottom
# edges, then y = -5..5 for each of the left and right edges.
y = np.empty((1,), dtype=int)
y[0] = 0
y = np.hstack((y, np.array((5, -5)*9)))
for i in range(-5, 6):
    y = np.hstack((y, np.array((i))))
for i in range(-5, 6):
    y = np.hstack((y, np.array((i))))
# f is zero everywhere on the boundary and 1 at the single interior point (0, 0).
f = np.zeros_like(y)
f[0] = 1
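# Hedged aside (my addition, not the assignment's required solution): an equivalent and
# easier-to-read way to build the same 41 points is to enumerate the boundary of the
# 11x11 grid explicitly, plus the single interior point at the origin.
pts = [(0, 0)] + [(i, j) for i in range(-5, 6) for j in range(-5, 6)
                  if abs(i) == 5 or abs(j) == 5]
assert len(pts) == 41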
plt.scatter(x, y);
assert x.shape==(41,)
assert y.shape==(41,)
assert f.shape==(41,)
assert np.count_nonzero(f)==1
xnew = np.linspace(-5, 5, 100)
ynew = np.linspace(-5, 5, 100)
Xnew, Ynew = np.meshgrid(xnew, ynew)
Fnew = griddata((x,y), f, (Xnew, Ynew), method='cubic', fill_value=0.0)
assert xnew.shape==(100,)
assert ynew.shape==(100,)
assert Xnew.shape==(100,100)
assert Ynew.shape==(100,100)
assert Fnew.shape==(100,100)
plt.contourf(Xnew,Ynew,Fnew,cmap='jet');
plt.colorbar(shrink=.8);
plt.xlabel('x value');
plt.ylabel('y value');
plt.title('Contour Plot of Interpolated Function');
assert True # leave this to grade the plot
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next we create a DisplayPort object and configure it with the resolution and pixel format we would like.
Step2: We can now test the output by setting the blue pixel of the frame in a loop and outputting the loop. From this we can see that the loop easily runs at 60 frames per second.
Step3: It's far more exciting though to show an image from a webcam. We can do this using OpenCV. First we need to instantiate the VideoCapture device and set the resolution to match our frame.
Step4: Then we can do a similar loop and record the framerate. This is slower due to the overhead in capturing frames from the camera.
Step5: Finally we need to close the devices.
| <ASSISTANT_TASK:>
Python Code:
from pynq.lib.video import *
displayport = DisplayPort()
displayport.configure(VideoMode(1280, 720, 24), PIXEL_RGB)
import time
start = time.time()
for i in range(600):
frame = displayport.newframe()
frame[:,:,0] = i % 256
displayport.writeframe(frame)
end = time.time()
duration = end - start
print(f"Took {duration} seconds at {600 / duration} FPS")
import cv2
capture = cv2.VideoCapture(0)
capture.set(3, 1280)  # 3 == cv2.CAP_PROP_FRAME_WIDTH
capture.set(4, 720)   # 4 == cv2.CAP_PROP_FRAME_HEIGHT
number_frames = 150
start = time.time()
for _ in range(number_frames):
frame = displayport.newframe()
capture.read(frame)
displayport.writeframe(frame)
end = time.time()
duration = end - start
print(f"Took {duration} seconds at {number_frames / duration} FPS")
capture.release()
displayport.close()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Feature Exploration, Engineering and Cleaning
Step2: Well it is no surprise that our task is to somehow extract the information out of the categorical variables
Step3: All right, so now having cleaned the features, extracted the relevant information and dropped the categorical columns, our features should all be numeric, a format suitable to feed into our Machine Learning models. However, before we proceed, let us generate some simple correlation and distribution plots of our transformed dataset to observe how the features relate to one another.
Step4: Takeaway from the Plots
Step5: Ensembling & Stacking models
Step6: Bear with me for those who already know this but for people who have not created classes or objects in Python before, let me explain what the code given above does. In creating my base classifiers, I will only use the models already present in the Sklearn library and therefore only extend the class for that.
Step7: Generating our Base First-Level Models
Step8: Furthermore, since having mentioned about Objects and classes within the OOP framework, let us now create 5 objects that represent our 5 learning models via our Helper Sklearn Class we defined earlier.
Step9: Creating NumPy arrays out of our train and test sets
Step10: Output of the First level Predictions
Step11: Feature importances generated from the different classifiers
Step12: So I have not yet figured out how to assign and store the feature importances outright. Therefore I'll print out the values from the code above and then simply copy and paste into Python lists as below (sorry for the lousy hack)
Step13: Create a dataframe from the lists containing the feature importance data for easy plotting via the Plotly package.
Step14: Interactive feature importances via Plotly scatterplots
Step15: Now let us calculate the mean of all the feature importances and store it as a new column in the feature importance dataframe
Step16: Plotly Barplot of Average Feature Importances
Step17: Second-Level Predictions from the First-level Output
Step18: Correlation Heatmap of the Second Level Training set
Step19: There have been quite a few articles and Kaggle competition winner stories about the merits of having trained models that are more uncorrelated with one another producing better scores.
Step20: Having now concatenated and joined both the first-level train and test predictions as x_train and x_test, we can now fit a second-level learning model.
Step21: Just a quick run down of the XGBoost parameters used in the model
| <ASSISTANT_TASK:>
Python Code:
# Load in our libraries
import pandas as pd
import numpy as np
import re
import sklearn
import xgboost as xgb
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
import plotly.offline as py
py.init_notebook_mode(connected=True)
import plotly.graph_objs as go
import plotly.tools as tls
import warnings
warnings.filterwarnings('ignore')
# Going to use these 5 base models for the stacking
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier, ExtraTreesClassifier
from sklearn.svm import SVC
from sklearn.cross_validation import KFold
from sklearn.preprocessing import StandardScaler
from sklearn.learning_curve import validation_curve
from sklearn.grid_search import GridSearchCV
# Load in the train and test datasets
train = pd.read_csv('./input/train.csv')
test = pd.read_csv('./input/test.csv')
# Store our passenger ID for easy access
PassengerId = test['PassengerId']
train.head(5)
def get_Cabin_Class(name):
if(type(name) == float):
name = 'None'
title_search = re.search('[A-Z]', name)
if title_search:
return title_search.group(0)
return 'None'
train.Cabin.apply(get_Cabin_Class).value_counts().to_dict()
#train[train['Cabin'] == 'F G73']
full_data = [train, test]
# Some features of my own that I have added in
# Gives the length of the name
train['Name_length'] = train['Name'].apply(len)
test['Name_length'] = test['Name'].apply(len)
# Feature that tells whether a passenger had a cabin on the Titanic
train['Has_Cabin'] = train["Cabin"].apply(lambda x: 0 if type(x) == float else 1)
test['Has_Cabin'] = test["Cabin"].apply(lambda x: 0 if type(x) == float else 1)
# Feature engineering steps taken from Sina
# Create new feature FamilySize as a combination of SibSp and Parch
for dataset in full_data:
dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1
# Create new feature IsAlone from FamilySize
for dataset in full_data:
dataset['IsAlone'] = 0
dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1
# Remove all NULLS in the Embarked column
for dataset in full_data:
dataset['Embarked'] = dataset['Embarked'].fillna('S')
# Remove all NULLS in the Fare column and create a new feature CategoricalFare
for dataset in full_data:
dataset['Fare'] = dataset['Fare'].fillna(train['Fare'].median())
# Create a New feature CategoricalAge
for dataset in full_data:
age_avg = dataset['Age'].mean()
age_std = dataset['Age'].std()
age_null_count = dataset['Age'].isnull().sum()
age_null_random_list = np.random.randint(age_avg - age_std, age_avg + age_std, size=age_null_count)
dataset['Age'][np.isnan(dataset['Age'])] = age_null_random_list
dataset['Age'] = dataset['Age'].astype(int)
# Define function to extract titles from passenger names
def get_title(name):
title_search = re.search(' ([A-Za-z]+)\.', name)
# If the title exists, extract and return it.
if title_search:
return title_search.group(1)
return ""
# Create a new feature Title, containing the titles of passenger names
for dataset in full_data:
dataset['Title'] = dataset['Name'].apply(get_title)
# def get_Cabin_Class(name):
# if(type(name) == float):
# return 'None'
# title_search = re.search('[A-Z]', name).group(0)
# if (title_search):
# if(title_search == 'T'):
# return 'None'
# return title_search
# return 'None'
# for dataset in full_data:
# dataset['Cabin'] = dataset['Cabin'].apply(get_Cabin_Class)
# Group all non-common titles into one single grouping "Rare"
for dataset in full_data:
dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess','Capt', 'Col','Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')
dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')
dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')
dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')
def data_mapping(dataset):
#Mapping Cabin
#cabin = pd.get_dummies(dataset['Cabin'], prefix='Cabin')
# Mapping Sex
#dataset['Sex'] = dataset['Sex'].map( {'female': 0, 'male': 1} ).astype(int)
sex = pd.get_dummies(dataset['Sex'],prefix='Sex')
# Mapping titles
# title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}
# dataset['Title'] = dataset['Title'].map(title_mapping)
# dataset['Title'] = dataset['Title'].fillna(0)
title = pd.get_dummies(dataset['Title'],prefix='Title')
# Mapping Embarked
#dataset['Embarked'] = dataset['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int)
embarked = pd.get_dummies(dataset['Embarked'],prefix='Embarked')
# Mapping Fare
dataset.loc[ dataset['Fare'] <= 7.91, 'Fare'] = 0
dataset.loc[(dataset['Fare'] > 7.91) & (dataset['Fare'] <= 14.454), 'Fare'] = 1
dataset.loc[(dataset['Fare'] > 14.454) & (dataset['Fare'] <= 31), 'Fare'] = 2
dataset.loc[ dataset['Fare'] > 31, 'Fare'] = 3
dataset['Fare'] = dataset['Fare'].astype(int)
#dataset['CategoricalFare'] = pd.qcut(train['Fare'], 4) #Lu's comment: Mapping base on cut result on train set
fare = pd.get_dummies(dataset['Fare'],prefix='Fare')
# Mapping Age
dataset.loc[ dataset['Age'] <= 16, 'Age'] = 0
dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1
dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2
dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3
dataset.loc[ dataset['Age'] > 64, 'Age'] = 4;
dataset['Age'] = dataset['Age'].astype(int)
#dataset['Age'] = pd.cut(dataset['Age'], 5) #Lu's comment: Mapping base on cut result on train set
age = pd.get_dummies(dataset['Age'],prefix='Age')
# Mapping Pclass
pclass = pd.get_dummies(dataset['Pclass'],prefix='Pclass')
#dataset.join([sex,title,embarked,fare,age])
dataset = pd.concat([dataset,sex,title,embarked,fare,age,pclass],axis= 1)
dataset.drop(['Sex','Title','Embarked','Fare','Age','Pclass'], axis=1, inplace=True)
return dataset
train = data_mapping(train)
test = data_mapping(test)
#print(dataset)
# Feature selection
drop_elements = ['PassengerId', 'Name', 'Ticket', 'Cabin', 'SibSp']
train = train.drop(drop_elements, axis = 1)
#train = train.drop(['CategoricalAge', 'CategoricalFare'], axis = 1)
test = test.drop(drop_elements, axis = 1)
train.columns.size
test.columns.size
colormap = plt.cm.coolwarm
plt.figure(figsize=(22,22))
plt.title('Pearson Correlation of Features', y=1.05, size=15)
sns.heatmap(train.astype(float).corr(),linewidths=0.1,vmax=1.0, square=True, cmap=colormap, linecolor='white', annot=True)
g = sns.pairplot(train[['Survived','Name_length','Sex_female','Title_Mr','Fare_3']], hue='Survived', palette = 'seismic',size=1.3,diag_kind = 'kde',diag_kws=dict(shade=True),plot_kws=dict(s=10) )
g.set(xticklabels=[])
# Some useful parameters which will come in handy later on
ntrain = train.shape[0]
ntest = test.shape[0]
SEED = 0 # for reproducibility
NFOLDS = 5 # set folds for out-of-fold prediction
kf = KFold(ntrain, n_folds= NFOLDS, random_state=SEED)
# Class to extend the Sklearn classifier
class SklearnHelper(object):
def __init__(self, clf, seed=0, params=None):
params['random_state'] = seed
self.clf = clf(**params)
def train(self, x_train, y_train):
self.clf.fit(x_train, y_train)
def predict(self, x):
return self.clf.predict(x)
def fit(self,x,y):
return self.clf.fit(x,y)
def feature_importances(self,x,y):
result = self.clf.fit(x,y).feature_importances_
print(result)
return result
# A similar helper class could wrap the XGBoost classifier, but XGBoost is used directly for the second-level model later on.
ntrain
def get_oof(clf, x_train, y_train, x_test):
    # Out-of-fold (OOF) predictions: train on K-1 folds and predict the held-out fold,
    # so every training example gets a prediction from a model that never saw it.
    oof_train = np.zeros((ntrain,))
    oof_test = np.zeros((ntest,))
    oof_test_skf = np.empty((NFOLDS, ntest))

    for i, (train_index, test_index) in enumerate(kf):
        x_tr = x_train[train_index]
        y_tr = y_train[train_index]
        x_te = x_train[test_index]

        clf.train(x_tr, y_tr)

        # Held-out-fold predictions become the first-level training features
        oof_train[test_index] = clf.predict(x_te)
        # Each fold's model also predicts the full test set
        oof_test_skf[i, :] = clf.predict(x_test)

    # Average the K test-set predictions and return both as (n, 1) column vectors
    oof_test[:] = oof_test_skf.mean(axis=0)
    return oof_train.reshape(-1, 1), oof_test.reshape(-1, 1)
# Put in our parameters for said classifiers
# Random Forest parameters
rf_params = {
'n_jobs': -1,
'n_estimators': 500,
'warm_start': True,
#'max_features': 0.2,
'max_depth': 6,
'min_samples_leaf': 2,
'max_features' : 'sqrt',
'verbose': 0
}
# Extra Trees Parameters
et_params = {
'n_jobs': -1,
'n_estimators':500,
#'max_features': 0.5,
'max_depth': 8,
'min_samples_leaf': 2,
'verbose': 0
}
# AdaBoost parameters
ada_params = {
'n_estimators': 500,
'learning_rate' : 0.75
}
# Gradient Boosting parameters
gb_params = {
'n_estimators': 500,
#'max_features': 0.2,
'max_depth': 5,
'min_samples_leaf': 2,
'verbose': 0
}
# Support Vector Classifier parameters
svc_params = {
'kernel' : 'linear',
'C' : 0.025
}
# Create 5 objects that represent our 4 models
rf = SklearnHelper(clf=RandomForestClassifier, seed=SEED, params=rf_params)
et = SklearnHelper(clf=ExtraTreesClassifier, seed=SEED, params=et_params)
ada = SklearnHelper(clf=AdaBoostClassifier, seed=SEED, params=ada_params)
gb = SklearnHelper(clf=GradientBoostingClassifier, seed=SEED, params=gb_params)
svc = SklearnHelper(clf=SVC, seed=SEED, params=svc_params)
# Create Numpy arrays of train, test and target ( Survived) dataframes to feed into our models
y_train = train['Survived'].ravel()
train = train.drop(['Survived'], axis=1)
x_train = train.values # Creates an array of the train data
x_test = test.values # Creats an array of the test data
#standardization
stdsc = StandardScaler()
x_train = stdsc.fit_transform(x_train)
x_test = stdsc.transform(x_test)
x_train.shape
# Create our OOF train and test predictions. These base results will be used as new features
et_oof_train, et_oof_test = get_oof(et, x_train, y_train, x_test) # Extra Trees
rf_oof_train, rf_oof_test = get_oof(rf,x_train, y_train, x_test) # Random Forest
ada_oof_train, ada_oof_test = get_oof(ada, x_train, y_train, x_test) # AdaBoost
gb_oof_train, gb_oof_test = get_oof(gb,x_train, y_train, x_test) # Gradient Boost
svc_oof_train, svc_oof_test = get_oof(svc,x_train, y_train, x_test) # Support Vector Classifier
print("Training is complete")
rf_features = rf.feature_importances(x_train,y_train)
et_features = et.feature_importances(x_train, y_train)
ada_features = ada.feature_importances(x_train, y_train)
gb_features = gb.feature_importances(x_train,y_train)
# rf_features = [0.10474135, 0.21837029, 0.04432652, 0.02249159, 0.05432591, 0.02854371
# ,0.07570305, 0.01088129 , 0.24247496, 0.13685733 , 0.06128402]
# et_features = [ 0.12165657, 0.37098307 ,0.03129623 , 0.01591611 , 0.05525811 , 0.028157
# ,0.04589793 , 0.02030357 , 0.17289562 , 0.04853517, 0.08910063]
# ada_features = [0.028 , 0.008 , 0.012 , 0.05866667, 0.032 , 0.008
# ,0.04666667 , 0. , 0.05733333, 0.73866667, 0.01066667]
# gb_features = [ 0.06796144 , 0.03889349 , 0.07237845 , 0.02628645 , 0.11194395, 0.04778854
# ,0.05965792 , 0.02774745, 0.07462718, 0.4593142 , 0.01340093]
cols = train.columns.values
# Create a dataframe with features
feature_dataframe = pd.DataFrame( {'features': cols,
'Random Forest feature importances': rf_features,
'Extra Trees feature importances': et_features,
'AdaBoost feature importances': ada_features,
'Gradient Boost feature importances': gb_features
})
# Scatter plot
trace = go.Scatter(
y = feature_dataframe['Random Forest feature importances'].values,
x = feature_dataframe['features'].values,
mode='markers',
marker=dict(
sizemode = 'diameter',
sizeref = 1,
size = 25,
# size= feature_dataframe['AdaBoost feature importances'].values,
#color = np.random.randn(500), #set color equal to a variable
color = feature_dataframe['Random Forest feature importances'].values,
colorscale='Portland',
showscale=True
),
text = feature_dataframe['features'].values
)
data = [trace]
layout= go.Layout(
autosize= True,
title= 'Random Forest Feature Importance',
hovermode= 'closest',
# xaxis= dict(
# title= 'Pop',
# ticklen= 5,
# zeroline= False,
# gridwidth= 2,
# ),
yaxis=dict(
title= 'Feature Importance',
ticklen= 5,
gridwidth= 2
),
showlegend= False
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig,filename='scatter2010')
# Scatter plot
trace = go.Scatter(
y = feature_dataframe['Extra Trees feature importances'].values,
x = feature_dataframe['features'].values,
mode='markers',
marker=dict(
sizemode = 'diameter',
sizeref = 1,
size = 25,
# size= feature_dataframe['AdaBoost feature importances'].values,
#color = np.random.randn(500), #set color equal to a variable
color = feature_dataframe['Extra Trees feature importances'].values,
colorscale='Portland',
showscale=True
),
text = feature_dataframe['features'].values
)
data = [trace]
layout= go.Layout(
autosize= True,
title= 'Extra Trees Feature Importance',
hovermode= 'closest',
# xaxis= dict(
# title= 'Pop',
# ticklen= 5,
# zeroline= False,
# gridwidth= 2,
# ),
yaxis=dict(
title= 'Feature Importance',
ticklen= 5,
gridwidth= 2
),
showlegend= False
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig,filename='scatter2010')
# Scatter plot
trace = go.Scatter(
y = feature_dataframe['AdaBoost feature importances'].values,
x = feature_dataframe['features'].values,
mode='markers',
marker=dict(
sizemode = 'diameter',
sizeref = 1,
size = 25,
# size= feature_dataframe['AdaBoost feature importances'].values,
#color = np.random.randn(500), #set color equal to a variable
color = feature_dataframe['AdaBoost feature importances'].values,
colorscale='Portland',
showscale=True
),
text = feature_dataframe['features'].values
)
data = [trace]
layout= go.Layout(
autosize= True,
title= 'AdaBoost Feature Importance',
hovermode= 'closest',
# xaxis= dict(
# title= 'Pop',
# ticklen= 5,
# zeroline= False,
# gridwidth= 2,
# ),
yaxis=dict(
title= 'Feature Importance',
ticklen= 5,
gridwidth= 2
),
showlegend= False
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig,filename='scatter2010')
# Scatter plot
trace = go.Scatter(
y = feature_dataframe['Gradient Boost feature importances'].values,
x = feature_dataframe['features'].values,
mode='markers',
marker=dict(
sizemode = 'diameter',
sizeref = 1,
size = 25,
# size= feature_dataframe['AdaBoost feature importances'].values,
#color = np.random.randn(500), #set color equal to a variable
color = feature_dataframe['Gradient Boost feature importances'].values,
colorscale='Portland',
showscale=True
),
text = feature_dataframe['features'].values
)
data = [trace]
layout= go.Layout(
autosize= True,
title= 'Gradient Boosting Feature Importance',
hovermode= 'closest',
# xaxis= dict(
# title= 'Pop',
# ticklen= 5,
# zeroline= False,
# gridwidth= 2,
# ),
yaxis=dict(
title= 'Feature Importance',
ticklen= 5,
gridwidth= 2
),
showlegend= False
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig,filename='scatter2010')
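# Hedged refactor sketch (my addition; the helper name is my own, not from the original
# kernel): the four scatter blocks above differ only in which importance column they plot,
# so the same figures could be produced with a small helper.
def plot_feature_importance(df, column):
    trace = go.Scatter(
        y=df[column].values,
        x=df['features'].values,
        mode='markers',
        marker=dict(sizemode='diameter', sizeref=1, size=25,
                    color=df[column].values,
                    colorscale='Portland', showscale=True),
        text=df['features'].values)
    layout = go.Layout(autosize=True, title=column, hovermode='closest',
                       yaxis=dict(title='Feature Importance', ticklen=5, gridwidth=2),
                       showlegend=False)
    py.iplot(go.Figure(data=[trace], layout=layout), filename='scatter2010')
# Example: plot_feature_importance(feature_dataframe, 'Gradient Boost feature importances')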
# Create the new column containing the average of values
feature_dataframe['mean'] = feature_dataframe.mean(axis= 1) # axis = 1 computes the mean row-wise
feature_dataframe.head(3)
y = feature_dataframe['mean'].values
x = feature_dataframe['features'].values
data = [go.Bar(
x= x,
y= y,
width = 0.5,
marker=dict(
color = feature_dataframe['mean'].values,
colorscale='Portland',
showscale=True,
reversescale = False
),
opacity=0.6
)]
layout= go.Layout(
autosize= True,
title= 'Barplots of Mean Feature Importance',
hovermode= 'closest',
# xaxis= dict(
# title= 'Pop',
# ticklen= 5,
# zeroline= False,
# gridwidth= 2,
# ),
yaxis=dict(
title= 'Feature Importance',
ticklen= 5,
gridwidth= 2
),
showlegend= False
)
fig = go.Figure(data=data, layout=layout)
py.iplot(fig, filename='bar-direct-labels')
base_predictions_train = pd.DataFrame( {'RandomForest': rf_oof_train.ravel(),
'ExtraTrees': et_oof_train.ravel(),
'AdaBoost': ada_oof_train.ravel(),
'GradientBoost': gb_oof_train.ravel()
})
base_predictions_train.head()
data = [
go.Heatmap(
z= base_predictions_train.astype(float).corr().values ,
x=base_predictions_train.columns.values,
y= base_predictions_train.columns.values,
colorscale='Portland',
showscale=True,
reversescale = True
)
]
py.iplot(data, filename='labelled-heatmap')
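# Hedged aside (my addition): the same pairwise correlations printed numerically make it
# easy to check which first-level models are least correlated with one another.
print(base_predictions_train.astype(float).corr())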
x_train
x_train = np.concatenate(( et_oof_train, rf_oof_train, ada_oof_train, gb_oof_train, svc_oof_train), axis=1)
x_test = np.concatenate(( et_oof_test, rf_oof_test, ada_oof_test, gb_oof_test, svc_oof_test), axis=1)
x_train.shape
gbm = xgb.XGBClassifier(
#learning_rate = 0.02,
n_estimators= 2000,
max_depth= 4,
min_child_weight= 2,
#gamma=1,
gamma=0.9,
subsample=0.8,
colsample_bytree=0.8,
objective= 'binary:logistic',
nthread= -1,
scale_pos_weight=1).fit(
x_train, y_train,
eval_set=[(x_train, y_train)],
eval_metric='logloss',
verbose=True
)
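# Hedged parameter notes (my own summary of standard XGBoost semantics, not the original
# author's wording):
#   n_estimators     - number of boosting rounds (trees)
#   max_depth        - maximum depth of each tree
#   min_child_weight - minimum sum of instance weights required in a child node
#   gamma            - minimum loss reduction required to make a further split
#   subsample        - fraction of training rows sampled for each tree
#   colsample_bytree - fraction of features sampled for each tree
#   scale_pos_weight - weighting factor to balance positive and negative classes
#   objective        - 'binary:logistic' outputs probabilities for a binary target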
predictions = gbm.predict(x_test)
import matplotlib.pyplot as plt
from sklearn.learning_curve import learning_curve
param_dist = {
"n_estimators": 2000,
"max_depth": 4,
"min_child_weight": 2,
#gamma=1,
"gamma":0.9,
"subsample":0.8,
"colsample_bytree":0.8,
"objective": 'binary:logistic',
"nthread": -1,
"scale_pos_weight":1
}
clf = xgb.XGBClassifier(**param_dist)
train_sizes, train_scores, test_scores =\
learning_curve(estimator=clf,
X=x_train,
y=y_train,
train_sizes=np.linspace(0.1, 1.0, 10),
cv=10,
n_jobs=1)
train_mean = np.mean(train_scores, axis=1)
train_std = np.std(train_scores, axis=1)
test_mean = np.mean(test_scores, axis=1)
test_std = np.std(test_scores, axis=1)
plt.plot(train_sizes, train_mean,
color='blue', marker='o',
markersize=8, label='training accuracy')
plt.fill_between(train_sizes,
train_mean + train_std,
train_mean - train_std,
alpha=0.15, color='blue')
plt.plot(train_sizes, test_mean,
color='green', linestyle='--',
marker='s', markersize=8,
label='validation accuracy')
plt.fill_between(train_sizes,
test_mean + test_std,
test_mean - test_std,
alpha=0.15, color='green')
plt.grid()
plt.xlabel('Number of training samples')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.ylim([0.8, 1.0])
plt.tight_layout()
# plt.savefig('./figures/learning_curve.png', dpi=300)
plt.show()
#XGBoost eval example
# clf.fit(x_train, y_train,
# eval_set=[(x_train, y_train)],
# eval_metric='logloss',
# verbose=False)
# # Load evals result by calling the evals_result() function
# evals_result = clf.evals_result()
# print('Access logloss metric directly from validation_0:')
# print(evals_result['validation_0']['logloss'])
# print('')
# print('Access metrics through a loop:')
# for e_name, e_mtrs in evals_result.items():
# print('- {}'.format(e_name))
# for e_mtr_name, e_mtr_vals in e_mtrs.items():
# print(' - {}'.format(e_mtr_name))
# print(' - {}'.format(e_mtr_vals))
# print('')
# print('Access complete dict:')
#print(evals_result['validation_0']['logloss'][-1])
xgb_model = xgb.XGBClassifier()
clf = GridSearchCV(xgb_model,
{'max_depth': [3,4,5],
'n_estimators': [2000],
'gamma':[0.8,0.9,1],
"min_child_weight": [2,3],
"subsample":[0.8,0.9],
'colsample_bytree':[0.8],
"scale_pos_weight":[1]}, verbose=1)
clf.fit(x_train,y_train)
print('*' * 30)
print(clf.best_score_)
print('*' * 30)
print(clf.best_params_)
# Generate Submission File
StackingSubmission = pd.DataFrame({ 'PassengerId': PassengerId,
'Survived': predictions })
StackingSubmission.to_csv("StackingSubmission.csv", index=False)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Download a raster file representing the flow accumulation for South America. This gives an idea of the river network.
Step2: We transform the data a bit so that rivers appear thicker.
Step3: The original data is in the WGS 84 projection, but Leaflet uses Web Mercator, so we need to reproject.
Step4: Let's convert our NumPy array to an image. For that we must specify a colormap (here plt.cm.jet).
Step5: The image is embedded in the URL as a PNG file, so that it can be sent to the browser.
Step6: Finally we can overlay our image and if everything went fine it should be exactly over South America.
Step7: You can play with the opacity slider and check that rivers from our data file match the rivers on OpenStreetMap.
| <ASSISTANT_TASK:>
Python Code:
import requests
import os
from tqdm import tqdm
import zipfile
import rasterio
from affine import Affine
import numpy as np
import scipy.ndimage
from rasterio.warp import reproject, Resampling
import PIL
import matplotlib.pyplot as plt
from base64 import b64encode
try:
from StringIO import StringIO
py3 = False
except ImportError:
from io import StringIO, BytesIO
py3 = True
from ipyleaflet import Map, ImageOverlay, basemap_to_tiles, basemaps
url = 'http://earlywarning.usgs.gov/hydrodata/sa_30s_zip_grid/sa_acc_30s_grid.zip'
filename = os.path.basename(url)
name = filename[:filename.find('_grid')]
adffile = name + '/' + name + '/w001001.adf'
if not os.path.exists(adffile):
r = requests.get(url, stream=True)
with open(filename, 'wb') as f:
total_length = int(r.headers.get('content-length'))
for chunk in tqdm(r.iter_content(chunk_size=1024), total=(total_length/1024) + 1):
if chunk:
f.write(chunk)
f.flush()
zip = zipfile.ZipFile(filename)
zip.extractall('.')
dataset = rasterio.open(adffile)
acc_orig = dataset.read()[0]
acc = np.where(acc_orig<0, 0, acc_orig)
shrink = 1 # if you are out of RAM try increasing this number (should be a power of 2)
radius = 5 # you can play with this number to change the width of the rivers
circle = np.zeros((2*radius+1, 2*radius+1)).astype('uint8')
y, x = np.ogrid[-radius:radius+1,-radius:radius+1]
index = x**2 + y**2 <= radius**2
circle[index] = 1
acc = np.sqrt(acc)
acc = scipy.ndimage.maximum_filter(acc, footprint=circle)
acc[acc_orig<0] = np.nan
acc = acc[::shrink, ::shrink]
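# Hedged illustration (my addition): the footprint used by the maximum filter above is a
# binary disk of the chosen radius; printing it shows the shape that widens the rivers.
print(circle)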
# At this point if GDAL complains about not being able to open EPSG support file gcs.csv, try in the terminal:
# export GDAL_DATA=`gdal-config --datadir`
with rasterio.Env():
rows, cols = acc.shape
src_transform = list(dataset.affine)
src_transform[0] *= shrink
src_transform[4] *= shrink
src_transform = Affine(*src_transform[:6])
src_crs = {'init': 'EPSG:4326'}
source = acc
dst_crs = {'init': 'EPSG:3857'}
dst_transform, width, height = rasterio.warp.calculate_default_transform(src_crs, dst_crs, cols, rows, *dataset.bounds)
dst_shape = height, width
destination = np.zeros(dst_shape)
reproject(
source,
destination,
src_transform=src_transform,
src_crs=src_crs,
dst_transform=dst_transform,
dst_crs=dst_crs,
resampling=Resampling.nearest)
acc_web = destination
acc_norm = acc_web - np.nanmin(acc_web)
acc_norm = acc_norm / np.nanmax(acc_norm)
acc_norm = np.where(np.isfinite(acc_web), acc_norm, 0)
acc_im = PIL.Image.fromarray(np.uint8(plt.cm.jet(acc_norm)*255))
acc_mask = np.where(np.isfinite(acc_web), 255, 0)
mask = PIL.Image.fromarray(np.uint8(acc_mask), mode='L')
im = PIL.Image.new('RGBA', acc_norm.shape[::-1], color=None)
im.paste(acc_im, mask=mask)
if py3:
f = BytesIO()
else:
f = StringIO()
im.save(f, 'png')
data = b64encode(f.getvalue())
if py3:
data = data.decode('ascii')
imgurl = 'data:image/png;base64,' + data
b = dataset.bounds
bounds = [(b.bottom, b.left), (b.top, b.right)]
io = ImageOverlay(url=imgurl, bounds=bounds)
center = [-10, -60]
zoom = 2
m = Map(center=center, zoom=zoom, interpolation='nearest')
m
tile = basemap_to_tiles(basemaps.Esri.WorldStreetMap)
m.add_layer(tile)
m.add_layer(io)
io.interact(opacity=(0.0,1.0,0.01))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Train and export the model
Step2: For the example, we only trained the model for a single epoch, so it only trains to ~96% accuracy.
Step3: Using the python TocoConverter, the saved model can be converted into a TFLite model.
Step4: Write it out to a tflite file
Step5: To quantize the model on export, set the post_training_quantize flag
Step6: Note how the resulting file, with post_training_quantize set, is approximately 1/4 the size.
Step7: Run the TFLite models
Step8: Load the model into an interpreter
Step9: Test the model on one image
Step10: Evaluate the models
Step11: We can repeat the evaluation on the weight quantized model to obtain
Step12: In this example, we have compressed the model with no difference in accuracy.
Step13: The info.txt file lists the input and output names. You can also find them using TensorBoard to visually inspect the graph.
| <ASSISTANT_TASK:>
Python Code:
! pip uninstall -y tensorflow
! pip install -U tf-nightly
import tensorflow as tf
tf.enable_eager_execution()
! git clone --depth 1 https://github.com/tensorflow/models
import sys
import os
if sys.version_info.major >= 3:
import pathlib
else:
import pathlib2 as pathlib
# Add `models` to the python path.
models_path = os.path.join(os.getcwd(), "models")
sys.path.append(models_path)
saved_models_root = "/tmp/mnist_saved_model"
# The above path addition is not visible to subprocesses, add the path for the subprocess as well.
# Note: channels_last is required here or the conversion may fail.
!PYTHONPATH={models_path} python models/official/mnist/mnist.py --train_epochs=1 --export_dir {saved_models_root} --data_format=channels_last
saved_model_dir = str(sorted(pathlib.Path(saved_models_root).glob("*"))[-1])
saved_model_dir
import tensorflow as tf
tf.enable_eager_execution()
converter = tf.lite.TocoConverter.from_saved_model(saved_model_dir)
tflite_model = converter.convert()
tflite_models_dir = pathlib.Path("/tmp/mnist_tflite_models/")
tflite_models_dir.mkdir(exist_ok=True, parents=True)
tflite_model_file = tflite_models_dir/"mnist_model.tflite"
tflite_model_file.write_bytes(tflite_model)
# Note: If you don't have a recent tf-nightly installed, the
# "post_training_quantize" line will have no effect.
tf.logging.set_verbosity(tf.logging.INFO)
converter.post_training_quantize = True
tflite_quant_model = converter.convert()
tflite_model_quant_file = tflite_models_dir/"mnist_model_quant.tflite"
tflite_model_quant_file.write_bytes(tflite_quant_model)
!ls -lh {tflite_models_dir}
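# Hedged illustration (my addition, not part of the original tutorial): the same size
# comparison computed directly in Python.
float_size = tflite_model_file.stat().st_size
quant_size = tflite_model_quant_file.stat().st_size
print("Float model: %d bytes, quantized model: %d bytes (%.2fx smaller)"
      % (float_size, quant_size, float(float_size) / quant_size))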
import numpy as np
mnist_train, mnist_test = tf.keras.datasets.mnist.load_data()
images, labels = tf.to_float(mnist_test[0])/255.0, mnist_test[1]
# Note: If you change the batch size, then use
# `tf.lite.Interpreter.resize_tensor_input` to also change it for
# the interpreter.
mnist_ds = tf.data.Dataset.from_tensor_slices((images, labels)).batch(1)
interpreter = tf.lite.Interpreter(model_path=str(tflite_model_file))
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
tf.logging.set_verbosity(tf.logging.DEBUG)
interpreter_quant = tf.lite.Interpreter(model_path=str(tflite_model_quant_file))
interpreter_quant.allocate_tensors()
input_index = interpreter_quant.get_input_details()[0]["index"]
output_index = interpreter_quant.get_output_details()[0]["index"]
for img, label in mnist_ds.take(1):
break
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
import matplotlib.pylab as plt
plt.imshow(img[0])
template = "True:{true}, predicted:{predict}"
_ = plt.title(template.format(true= str(label[0].numpy()),
predict=str(predictions[0,0])))
plt.grid(False)
def eval_model(interpreter, mnist_ds):
total_seen = 0
num_correct = 0
for img, label in mnist_ds:
total_seen += 1
interpreter.set_tensor(input_index, img)
interpreter.invoke()
predictions = interpreter.get_tensor(output_index)
if predictions == label.numpy():
num_correct += 1
if total_seen % 500 == 0:
print("Accuracy after %i images: %f" %
(total_seen, float(num_correct) / float(total_seen)))
return float(num_correct) / float(total_seen)
print(eval_model(interpreter, mnist_ds))
print(eval_model(interpreter_quant, mnist_ds))
archive_path = tf.keras.utils.get_file("resnet_v2_101.tgz", "https://storage.googleapis.com/download.tensorflow.org/models/tflite_11_05_08/resnet_v2_101.tgz", extract=True)
archive_path = pathlib.Path(archive_path)
archive_dir = str(archive_path.parent)
! cat {archive_dir}/resnet_v2_101_299_info.txt
graph_def_file = pathlib.Path(archive_path).parent/"resnet_v2_101_299_frozen.pb"
input_arrays = ["input"]
output_arrays = ["output"]
converter = tf.lite.TocoConverter.from_frozen_graph(
str(graph_def_file), input_arrays, output_arrays, input_shapes={"input":[1,299,299,3]})
converter.post_training_quantize = True
resnet_tflite_file = graph_def_file.parent/"resnet_v2_101_quantized.tflite"
resnet_tflite_file.write_bytes(converter.convert())
!ls -lh {archive_dir}/*.tflite
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data preparation
Step2: Now, let us see a preview of what the dataset looks like.
Step3: Build the word count vector for each review
Step4: Now, we will perform 2 simple data transformations
Step5: Now, let us explore what the sample example above looks like after these 2 transformations. Here, each entry in the word_count column is a dictionary where the key is the word and the value is a count of the number of times the word occurs.
Step6: Extract sentiments
Step7: Now, we will assign reviews with a rating of 4 or higher to be positive reviews, while the ones with rating of 2 or lower are negative. For the sentiment column, we use +1 for the positive class label and -1 for the negative class label.
Step8: Now, we can see that the dataset contains an extra column called sentiment which is either positive (+1) or negative (-1).
Step9: Train a sentiment classifier with logistic regression
Step10: Aside. You may get a warning to the effect of "Terminated due to numerical difficulties --- this model may not be ideal". It means that the quality metric (to be covered in Module 3) failed to improve in the last iteration of the run. The difficulty arises as the sentiment model puts too much weight on extremely rare words. A way to rectify this is to apply regularization, to be covered in Module 4. Regularization lessens the effect of extremely rare words. For the purpose of this assignment, however, please proceed with the model above.
Step11: There are a total of 121713 coefficients in the model. Recall from the lecture that positive weights $w_j$ correspond to weights that cause positive sentiment, while negative weights correspond to negative sentiment.
Step12: Quiz question
Step13: Let's dig deeper into the first row of the sample_test_data. Here's the full review
Step14: That review seems pretty positive.
Step15: We will now make a class prediction for the sample_test_data. The sentiment_model should predict +1 if the sentiment is positive and -1 if the sentiment is negative. Recall from the lecture that the score (sometimes called margin) for the logistic regression model is defined as $\mbox{score}_i = \mathbf{w}^T h(\mathbf{x}_i)$, the dot product of the learned coefficients with the features of example $i$.
Step16: Predicting sentiment
Step17: Run the following code to verify that the class predictions obtained by your calculations are the same as that obtained from GraphLab Create.
Step18: Checkpoint
Step19: Checkpoint
Step20: Quiz Question
Step21: Find the most positive (and negative) review
Step22: Quiz Question
Step23: Quiz Question
Step24: Now, let's compute the classification accuracy of the sentiment_model on the test_data.
Step25: Quiz Question
Step26: For each review, we will use the word_count column and trim out all words that are not in the significant_words list above. We will use the SArray dictionary trim by keys functionality. Note that we are performing this on both the training and test set.
Step27: Let's see what the first example of the dataset looks like
Step28: The word_count column we had been working with before looks like the following
Step29: Since we are only working with a subset of these words, the column word_count_subset is a subset of the above dictionary. In this example, only 2 significant words are present in this review.
Step30: Train a logistic regression model on a subset of data
Step31: We can compute the classification accuracy using the get_classification_accuracy function you implemented earlier.
Step32: Now, we will inspect the weights (coefficients) of the simple_model
Step33: Let's sort the coefficients (in descending order) by the value to obtain the coefficients with the most positive effect on the sentiment.
Step34: Quiz Question
Step35: Comparing models
Step36: Now, compute the classification accuracy of the simple_model on the train_data
Step37: Quiz Question
Step38: Next, we will compute the classification accuracy of the simple_model on the test_data
Step39: Quiz Question
Step40: Now compute the accuracy of the majority class classifier on test_data.
| <ASSISTANT_TASK:>
Python Code:
from __future__ import division
import graphlab
import math
import string
products = graphlab.SFrame('amazon_baby.gl/')
products
products[269]
def remove_punctuation(text):
import string
return text.translate(None, string.punctuation)
review_without_puctuation = products['review'].apply(remove_punctuation)
products['word_count'] = graphlab.text_analytics.count_words(review_without_puctuation)
products[269]['word_count']
products = products[products['rating'] != 3]
len(products)
products['sentiment'] = products['rating'].apply(lambda rating : +1 if rating > 3 else -1)
products
train_data, test_data = products.random_split(.8, seed=1)
print len(train_data)
print len(test_data)
sentiment_model = graphlab.logistic_classifier.create(train_data,
target = 'sentiment',
features=['word_count'],
validation_set=None)
sentiment_model
weights = sentiment_model.coefficients
weights.column_names()
num_positive_weights = len(weights[weights["value"] >= 0])
num_negative_weights = len(weights[weights["value"] < 0])
print "Number of positive weights: %s " % num_positive_weights
print "Number of negative weights: %s " % num_negative_weights
sample_test_data = test_data[10:13]
print sample_test_data['rating']
sample_test_data
sample_test_data[0]['review']
sample_test_data[1]['review']
scores = sentiment_model.predict(sample_test_data, output_type='margin')
print scores
y = scores.apply(lambda x: 1 if x >= 0 else -1)
print y
print "Class predictions according to GraphLab Create:"
print sentiment_model.predict(sample_test_data)
prob = scores.apply(lambda x: 1/(1 + math.exp(-1 * x)))
print prob
print "Class predictions according to GraphLab Create:"
print sentiment_model.predict(sample_test_data, output_type='probability')
min(prob)
test_data["probability"] = sentiment_model.predict(test_data, output_type='probability')
best_20 = test_data.topk("probability", k=20)
best_20.print_rows(20)
worst_20 = test_data.topk("probability", k=20, reverse = True)
worst_20.print_rows(20)
def get_classification_accuracy(model, data, true_labels):
# First get the predictions
## YOUR CODE HERE
scores = model.predict(data, output_type='margin')
y = scores.apply(lambda x: 1 if x > 0 else -1)
# Compute the number of correctly classified examples
## YOUR CODE HERE
correctly_classified = 0
for i in range(len(data)):
if y[i] == true_labels[i]:
correctly_classified += 1
# Then compute accuracy by dividing num_correct by total number of examples
## YOUR CODE HERE
accuracy = correctly_classified / len(true_labels)
return accuracy
get_classification_accuracy(sentiment_model, test_data, test_data['sentiment'])
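# Hedged aside (my addition, not the assignment's required solution): the same accuracy can
# be computed without an explicit loop using SArray operations.
test_predictions = sentiment_model.predict(test_data)
accuracy_no_loop = (test_predictions == test_data['sentiment']).sum() / len(test_data)
print accuracy_no_loop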
significant_words = ['love', 'great', 'easy', 'old', 'little', 'perfect', 'loves',
'well', 'able', 'car', 'broke', 'less', 'even', 'waste', 'disappointed',
'work', 'product', 'money', 'would', 'return']
len(significant_words)
train_data['word_count_subset'] = train_data['word_count'].dict_trim_by_keys(significant_words, exclude=False)
test_data['word_count_subset'] = test_data['word_count'].dict_trim_by_keys(significant_words, exclude=False)
train_data[0]['review']
print train_data[0]['word_count']
print train_data[0]['word_count_subset']
simple_model = graphlab.logistic_classifier.create(train_data,
target = 'sentiment',
features=['word_count_subset'],
validation_set=None)
simple_model
get_classification_accuracy(simple_model, test_data, test_data['sentiment'])
simple_model.coefficients
c = simple_model.coefficients.sort('value', ascending=False)["index", "value"].apply(lambda x: x["value"] > 0 and x["index"] in significant_words)
len([x for x in c if x != 0])
print c
sentiment_model_coefficients = sentiment_model.coefficients.sort('value', ascending=False)
sentiment_model_coefficients_positive = sentiment_model_coefficients[sentiment_model_coefficients["value"] > 0]
get_classification_accuracy(sentiment_model, train_data, train_data['sentiment'])
get_classification_accuracy(simple_model, train_data, train_data['sentiment'])
get_classification_accuracy(sentiment_model, test_data, test_data['sentiment'])
get_classification_accuracy(simple_model, train_data, train_data['sentiment'])
num_positive = (train_data['sentiment'] == +1).sum()
num_negative = (train_data['sentiment'] == -1).sum()
print num_positive
print num_negative
# The majority class is +1, so the majority class classifier's accuracy on the test set
# is the fraction of positive examples in test_data (the task asks for test_data, not train_data).
(test_data['sentiment'] == +1).sum()/len(test_data)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Moving average
Step4: Trend and Seasonality
Step5: Naive Forecast
Step6: Now let's compute the mean absolute error between the forecasts and the predictions in the validation period
Step9: That's our baseline, now let's try a moving average.
Step10: That's worse than naive forecast! The moving average does not anticipate trend or seasonality, so let's try to remove them by using differencing. Since the seasonality period is 365 days, we will subtract the value at time t – 365 from the value at time t.
Step11: Focusing on the validation period
Step12: Great, the trend and seasonality seem to be gone, so now we can use the moving average
Step13: Now let's bring back the trend and seasonality by adding the past values from t – 365
Step14: Better than naive forecast, good. However the forecasts look a bit too random, because we're just adding past values, which were noisy. Let's use a moving averaging on past values to remove some of the noise
| <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
keras = tf.keras
def plot_series(time, series, format="-", start=0, end=None, label=None):
plt.plot(time[start:end], series[start:end], format, label=label)
plt.xlabel("Time")
plt.ylabel("Value")
if label:
plt.legend(fontsize=14)
plt.grid(True)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
Just an arbitrary pattern, you can change it if you wish
return np.where(season_time < 0.4,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
Repeats the same pattern at each period
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def white_noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
time = np.arange(4 * 365 + 1)
slope = 0.05
baseline = 10
amplitude = 40
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
noise_level = 5
noise = white_noise(time, noise_level, seed=42)
series += noise
plt.figure(figsize=(10, 6))
plot_series(time, series)
plt.show()
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
naive_forecast = series[split_time - 1:-1]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, start=0, end=150, label="Series")
plot_series(time_valid, naive_forecast, start=1, end=151, label="Forecast")
keras.metrics.mean_absolute_error(x_valid, naive_forecast).numpy()
def moving_average_forecast(series, window_size):
Forecasts the mean of the last few values.
If window_size=1, then this is equivalent to naive forecast
forecast = []
for time in range(len(series) - window_size):
forecast.append(series[time:time + window_size].mean())
return np.array(forecast)
def moving_average_forecast(series, window_size):
Forecasts the mean of the last few values.
If window_size=1, then this is equivalent to naive forecast
This implementation is *much* faster than the previous one
mov = np.cumsum(series)
mov[window_size:] = mov[window_size:] - mov[:-window_size]
return mov[window_size - 1:-1] / window_size
moving_avg = moving_average_forecast(series, 30)[split_time - 30:]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, label="Series")
plot_series(time_valid, moving_avg, label="Moving average (30 days)")
keras.metrics.mean_absolute_error(x_valid, moving_avg).numpy()
diff_series = (series[365:] - series[:-365])
diff_time = time[365:]
plt.figure(figsize=(10, 6))
plot_series(diff_time, diff_series, label="Series(t) – Series(t–365)")
plt.show()
plt.figure(figsize=(10, 6))
plot_series(time_valid, diff_series[split_time - 365:], label="Series(t) – Series(t–365)")
plt.show()
diff_moving_avg = moving_average_forecast(diff_series, 50)[split_time - 365 - 50:]
plt.figure(figsize=(10, 6))
plot_series(time_valid, diff_series[split_time - 365:], label="Series(t) – Series(t–365)")
plot_series(time_valid, diff_moving_avg, label="Moving Average of Diff")
plt.show()
diff_moving_avg_plus_past = series[split_time - 365:-365] + diff_moving_avg
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, label="Series")
plot_series(time_valid, diff_moving_avg_plus_past, label="Forecasts")
plt.show()
keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_past).numpy()
diff_moving_avg_plus_smooth_past = moving_average_forecast(series[split_time - 370:-359], 11) + diff_moving_avg
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid, label="Series")
plot_series(time_valid, diff_moving_avg_plus_smooth_past, label="Forecasts")
plt.show()
keras.metrics.mean_absolute_error(x_valid, diff_moving_avg_plus_smooth_past).numpy()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1:
Step2:
Step3:
Step4:
Step5:
Step6:
Step7:
Step8:
Step9:
Step10:
Step11:
Step12:
Step13:
Step14:
Step16:
Step17:
Step18:
Step19:
Step20: Confusion Matrix
Step21:
| <ASSISTANT_TASK:>
Python Code:
import warnings
import numpy as np
import pandas as pd
from time import time
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import datasets
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import Ridge
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
import Utils
from Utils import cmap_light
from Utils import cmap_bold
boston_dataset = datasets.load_boston()
print(boston_dataset.DESCR)
X = boston_dataset.data
Y = boston_dataset.target
names = list(boston_dataset.feature_names) + ['Price']
labels = np.reshape(Y,
(Y.shape[0], 1))
df = pd.DataFrame(data=np.concatenate((X, labels), axis=1),
columns=names)
df.head(10)
df_tmp = df[['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX',
'RM', 'Price']]
df_tmp.head(10)
df_tmp.describe()
from Utils import plot_boston_dataset
plot_boston_dataset(boston_dataset.data,
boston_dataset.target)
model = LinearRegression()
model.fit(X, Y)
r2 = model.score(X, Y)
print("R^2 value: {:0.3f}".format(r2))
example_n = np.random.randint(0, Y.shape[0])
Utils.describe_example_boston_dataset(X[example_n])
print("\n\nPredicted price: {:2.2f} Real value: {:2.2f}".format(
model.predict(X[example_n].reshape(1, -1))[0], Y[example_n]))
iris_dataset = datasets.load_iris()
print("Features: " + str(iris_dataset.feature_names))
print("Classes: " + str(iris_dataset.target_names))
X = iris_dataset.data
y = iris_dataset.target
# Load it to a DF
idx = np.random.permutation(150)
y = y[idx]
X = X[idx]
labels = np.reshape(y,
(y.shape[0], 1))
df = pd.DataFrame(data=np.concatenate((X, labels), axis=1),
columns=iris_dataset.feature_names + ['Class'])
df.head(10)
df.describe()
# Let's take a peak at the data:
plt.figure(figsize=(8,8))
colors = "bry"
for i, color in zip([0, 1, 2], colors):
idx = np.where(y == i)
plt.scatter(X[idx, 0], X[idx, 1], c=color, cmap=plt.cm.Paired)
plt.text(5.25, 2.20, "Versicolor", fontsize=14)
plt.text(7, 3.5, "Virginica", fontsize=14)
plt.text(4.5, 3.75, "Setosa", fontsize=14)
plt.title("The 3 different Iris species", fontsize=18,
fontweight='bold')
plt.xlabel(iris_dataset.feature_names[0], fontsize=14)
plt.ylabel(iris_dataset.feature_names[1], fontsize=14)
plt.show()
# We will focus identifying only the Iris Setosa
plt.figure(figsize=(8,8))
colors = "br"
idx = np.where(y == 0) # Give me the indices of the Iris Setosa examples
plt.scatter(X[idx, 0], X[idx, 1], c='b', cmap=plt.cm.Paired)
plt.text(4.5, 3.75, "Setosa", fontsize=14)
idx = np.where(y != 0) # where it's not Iris Setosa
plt.scatter(X[idx, 0], X[idx, 1], c='r', cmap=plt.cm.Paired)
plt.text(7.0, 2.5, "Others", fontsize=14)
plt.title("Scatter plot of Iris Setosa and the others Iris",
fontsize=18, fontweight='bold')
plt.xlabel(iris_dataset.feature_names[0], fontsize=14)
plt.ylabel(iris_dataset.feature_names[1], fontsize=14)
plt.show()
# We only care about whether each flower is a
# Iris Setosa and we are looking only at two of their features
X = iris_dataset.data
y = iris_dataset.target
new_y = y == 0
model = LogisticRegression(random_state=42, verbose=0)
model.fit(X[:,0:2], new_y)
accuracy = model.score(X[:,0:2], new_y)
print("Accuracy: {:0.3f}%".format(accuracy*100))
from Utils import predict_mesh
# Let's take a look at what our model is doing
# First plot the examples
plt.figure(figsize=(8,8))
colors = "br"
idx = np.where(y == 0)
plt.scatter(X[idx, 0], X[idx, 1], c='b', cmap=plt.cm.Paired)
plt.text(4.5, 3.75, "Setosa", fontsize=14)
idx = np.where(y != 0)
plt.scatter(X[idx, 0], X[idx, 1], c='r', cmap=plt.cm.Paired)
plt.text(7.0, 2.5, "Others", fontsize=14)
(xx, yy, Z) = predict_mesh(X, model)
plt.contour(xx, yy, Z, cmap=plt.cm.Paired)
plt.title("Decision Boundary", fontsize=18, fontweight='bold')
plt.xlabel(iris_dataset.feature_names[0], fontsize=14)
plt.ylabel(iris_dataset.feature_names[1], fontsize=14)
plt.show()
def sigmoid(x):
return 1 / (1 + np.exp(-x))
x = np.arange(-20, 20, 0.001)
y = sigmoid(x)
plt.figure(figsize=(10,5))
plt.plot(x, y)
plt.title("Sigmoid Function", fontsize=14)
plt.show()
# Read the data file and drop the collumns we don't care about:
whisky_dataframe = pd.read_csv(
filepath_or_buffer="whiskies.csv", header=0, sep=',',
index_col=1)
whisky_dataframe.drop(['RowID', 'Postcode', ' Latitude',
' Longitude'], inplace=True, axis=1)
# We get 12 features
whisky_dataframe.head(10)
whisky_dataframe.describe()
Utils.plot_whisky_histograms(whisky_dataframe)
Utils.plot_whiky_body_correlation(whisky_dataframe)
Utils.plot_1d_random_data(0.5, 30)
Utils.plot_2d_random_data(0.5, 30)
n = 50
random_data_1 =np.random.multivariate_normal(
mean= [0, 0], cov=[[3, 3], [0, 0.5]], size=n)
random_data_2 =np.random.multivariate_normal(
mean= [6, 6], cov=[[3, 3], [0, 0.5]], size=n)
random_data = np.concatenate([random_data_1, random_data_2], axis=0)
random_labels = np.concatenate([np.ones((n,1)),np.zeros((n,1))], axis=0)
fig = plt.figure(figsize=(8, 8))
plt.scatter(random_data[:, 0], random_data[:, 1],
c=random_labels, cmap=cmap_light)
#plt.scatter(random_data_2[:, 0], random_data_2[:, 1], c='r')
plt.plot([-5, 10], [-5, 10], 'r--')
plt.plot([5, 0], [0, 5], 'g--')
plt.xlim((-7, 14))
plt.ylim((-7, 14))
plt.title('Random Data with Principal Components', fontsize=16)
plt.xlabel('Random Dimension 1', fontsize=14)
plt.ylabel('Random Dimension 2', fontsize=14)
plt.show()
pca = PCA(n_components=2)
pca.fit(random_data)
transformed_data = pca.fit_transform(random_data)
plt.figure(figsize=(8,6))
plt.scatter(transformed_data[:,0], transformed_data[:,1],
c=random_labels, cmap=cmap_light)
plt.plot([-10, 10], [0, 0], 'r--')
plt.xlim((-10, 10))
plt.ylim((-5, 5))
plt.title('Transformed Random Data', fontsize=16)
plt.xlabel('Random Dimension 1', fontsize=14)
plt.ylabel('Random Dimension 2', fontsize=14)
plt.show()
pca = PCA(n_components=1)
pca.fit(random_data)
transformed_data = pca.fit_transform(random_data)
plt.figure(figsize=(8,5))
plt.scatter(transformed_data[:,0], np.zeros((2 * n ,1)),
c=random_labels, cmap=cmap_light)
plt.plot([-10, 10], [0, 0], 'r--')
plt.xlim((-10, 10))
plt.ylim((-5, 5))
plt.title('Transformed Random Data', fontsize=16)
plt.xlabel('Random Dimension 1', fontsize=14)
plt.show()
print("% of variance explained by PCA: {:0.1f}% \
".format(
pca.explained_variance_ratio_[0]*100))
### Adapted from:
# http://scikit-learn.org/stable/auto_examples/
# linear_model/plot_polynomial_interpolation.html
# Author: Mathieu Blondel
# Jake Vanderplas
# License: BSD 3 clause
def f(x, noise=False):
"""Sinusoidal function to be approximated"""
if(noise):
np.random.RandomState(42)
return np.sin(x) + np.random.randn(x.shape[0])/4
return np.sin(x)
space_size = 2*np.pi
# generate points used to plot
x_plot = np.linspace(-space_size, space_size, 100)
# generate points and keep a subset of them
x = np.linspace(-space_size, space_size, 100)
rng = np.random.RandomState(42)
rng.shuffle(x)
x = np.sort(x[:10])
y = f(x, True)
# create matrix versions of these arrays
X = x[:, np.newaxis]
X_plot = x_plot[:, np.newaxis]
colors = ['teal', 'green', 'brown', 'blue']
lw = 2
fig = plt.figure(figsize=(12,12))
for count, degree in enumerate([1, 3, 5, 10]):
ax = fig.add_subplot(2, 2, count+1)
ax.plot(x_plot, f(x_plot), color='cornflowerblue', linewidth=lw,
label="ground truth")
ax.scatter(x, y, color='navy', s=30, marker='o',
label="training points")
model = make_pipeline(PolynomialFeatures(degree), Ridge(random_state=42))
model.fit(X, y)
y_plot = model.predict(X_plot)
ax.plot(x_plot, y_plot, color=colors[count], linewidth=lw,
label="degree {0} with r^2 = {1:.2f}".format(
degree, model.score(X,y)))
ax.legend(loc='lower left', fontsize=14)
ax.set_ylim((-5, 5))
plt.show()
whisky_data = whisky_dataframe.values
pca = PCA(n_components=2, whiten=True)
# whiten=True rescales each principal component to unit variance
# (PCA always centers the data around 0 before fitting)
transformed_data = pca.fit_transform(whisky_data)
print("% of variance explained by each component: \
\n 1st {:0.1f}% \
\n 2nd {:0.1f}% \
".format(
pca.explained_variance_ratio_[0]*100,
pca.explained_variance_ratio_[1]*100))
fig = plt.figure(figsize=(8,6))
plt.scatter(x = transformed_data[:,0], y=transformed_data[:,1])
plt.xlim((-3, 5))
plt.ylim((-3, 5))
plt.title('Transformed Whisky Data', fontsize=16)
plt.xlabel('Principal Component 1', fontsize=14)
plt.ylabel('Principal Component 2', fontsize=14)
plt.show()
labels = whisky_dataframe['Tobacco']
whisky_data = whisky_dataframe.drop('Tobacco', axis=1).values
print("Percentage of Positive Labels: {:.2f}%".format(
np.sum(labels)/len(labels)*100))
pca = PCA(n_components=2, whiten=True)
# whiten=True rescales each principal component to unit variance
# (PCA always centers the data around 0 before fitting)
transformed_data = pca.fit_transform(whisky_data)
train_data, test_data, train_labels, test_labels = train_test_split(
transformed_data, labels, test_size=0.30, random_state=0)
# Without Class weights
classf = LogisticRegression()
# With Class weights
class_weight={0:1, 1: 12}
classf = LogisticRegression(class_weight=class_weight)
classf.fit(train_data, train_labels)
accuracy = classf.score(train_data, train_labels)
print("\n\nTraining Accuracy:\t {:0.3f}%\n\n".format(accuracy*100))
accuracy = classf.score(test_data, test_labels)
print("Test Accuracy:\t\t {:0.3f}%\n\n".format(accuracy*100))
print("\tTraining \n")
predicted_labels = classf.predict(train_data)
cm = confusion_matrix(train_labels, predicted_labels)
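# sklearn's confusion_matrix convention: rows are true labels, columns are predicted labels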
Utils.print_cm(cm)
print("\n\tTesting \n")
predicted_labels = classf.predict(test_data)
cm = confusion_matrix(test_labels, predicted_labels)
Utils.print_cm(cm)
class_weight={0:1, 1: 12}
classf = LogisticRegression(random_state=42,
class_weight=class_weight)
#classf = LogisticRegression(random_state=42)
# Select parameters to use in Cross-Validation
classf_cv = classf
data_cv = transformed_data
N_CV = 10
# Cross Validation
t0 = time()
scores = cross_val_score(classf_cv, data_cv, labels, cv = N_CV)
print("Scores: ")
for i, score in enumerate(scores):
print( '\t' + str(i) + ':\t' + str(score))
print("Accuracy: %0.3f (+/- %0.3f)" % (scores.mean(), scores.std() * 2))
print("\nCross val done in %0.3fs." % (time() - t0))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Extract NN Features
Step2: Predicting Own Labels from Selected Images
Step3: Horizontal Striped Data
Step4: Neither the SVM nor the logistic regression is doing well
Step5: The accuracy achieved is above chance (as determined by permutation testing)
Step6: Classification performance is much better on this dataset
| <ASSISTANT_TASK:>
Python Code:
import sys
import os
sys.path.append(os.getcwd()+'/../')
# our lib
from lib.resnet50 import ResNet50
from lib.imagenet_utils import preprocess_input, decode_predictions
#keras
from keras.preprocessing import image
from keras.models import Model
# sklearn
import sklearn
import sklearn.svm
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import permutation_test_score
from sklearn.model_selection import StratifiedKFold
# other
import numpy as np
import glob
import pandas as pd
import ntpath
# plotting
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
def preprocess_img(img_path):
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
return(x,img)
def perf_measure(y_actual, y_hat):
TP = 0
FP = 0
TN = 0
FN = 0
for i in range(len(y_hat)):
if y_actual[i]==y_hat[i]==1:
TP += 1
for i in range(len(y_hat)):
if (y_hat[i]==1) and (y_actual[i]!=y_hat[i]):
FP += 1
for i in range(len(y_hat)):
if y_actual[i]==y_hat[i]==0:
TN += 1
for i in range(len(y_hat)):
if (y_hat[i]==0) and (y_actual[i]!=y_hat[i]):
FN += 1
return(TP, FP, TN, FN)
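# Illustrative check: this toy example contains exactly one true positive,
# one false positive, one true negative and one false negative.
print(perf_measure([1, 0, 1, 0], [1, 1, 0, 0]))   # -> (1, 1, 1, 1)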
# instantiate the model
base_model = ResNet50(include_top=False, weights='imagenet') #this will pull the weights from the folder
# cut the model to lower levels only
model = Model(input=base_model.input, output=base_model.get_layer('avg_pool').output)
#img_paths = glob.glob('../img/baiyi/*')
#
img_paths = glob.glob('../original_img/*')
img_paths[0:3]
# create dataframe with all image features
img_feature_df = pd.DataFrame()
for i,img_path in enumerate(img_paths):
x,img = preprocess_img(img_path) # preprocess
model_output = model.predict(x)[0,0,0,:]
img_feature_df.loc[i,'img_path']=img_path
img_feature_df.loc[i,'nn_features']=str(list(model_output))
img_feature_df['img_name'] = img_feature_df['img_path'].apply(lambda x: ntpath.basename(x))
img_feature_df.head()
img_feature_df.to_csv('../data_nn_features/img_features_all.csv')
#data_folder ='processed_data/classifer_exp_1/'
#os.mkdir(data_folder)
# get target and non-target lists
def create_image_class_dataframe(target_img_folder):
# all the image folders
non_target_img_folders = ['../original_img/']
target_img_paths=glob.glob(target_img_folder+'*')
target_img_paths_stemless = [ntpath.basename(t) for t in target_img_paths]
non_target_img_paths =[]
for non_target_folder in non_target_img_folders:
for img_path in glob.glob(non_target_folder+'*'):
if ntpath.basename(img_path) not in target_img_paths_stemless: # remove targets from non-target list
non_target_img_paths.append(img_path)
# create data frame with image name and label
img_paths = np.append(target_img_paths,non_target_img_paths)
labels = np.append(np.ones(len(target_img_paths)),np.zeros(len(non_target_img_paths)))
df = pd.DataFrame(data=np.vstack((img_paths,labels)).T,columns=['img_path','label'])
df['img_name'] = df['img_path'].apply(lambda x: ntpath.basename(x)) # add image name
df['label'] = df['label'].apply(lambda x: float(x)) # add label
# load up features per image
img_feature_df = pd.read_csv('../data_nn_features/img_features_all.csv',index_col=0)
img_feature_df.head()
# create feature matrix out of loaded up features.
for i,row in df.iterrows():
features = img_feature_df.loc[img_feature_df.img_name==row['img_name'],'nn_features'].as_matrix()[0].replace(']','').replace('[','').split(',')
features = [np.float(f) for f in features]
lab = row['img_name']
if i==0:
X = features
labs = lab
else:
X = np.vstack((X,features))
labs = np.append(labs,lab)
xcolumns = ['x'+str(i) for i in np.arange(X.shape[1])]
X_df = pd.DataFrame(np.hstack((labs[:,np.newaxis],X)),columns=['img_name']+xcolumns)
# merge together
df = df.merge(X_df,on='img_name')
# make sure there is only one instance per image in dataframe
lens = np.array([])
for img_name in df.img_name.unique():
lens = np.append(lens,len(df.loc[df.img_name==img_name]))
assert len(np.unique(lens)[:])==1
return(df)
# remove some non-targets to make dataset smaller #
# i_class0 = np.where(df.label==0.0)[0]
# i_class0_remove = np.random.choice(i_class0,int(np.round(len(i_class0)/1.1)))
# df_smaller = df.drop(i_class0_remove)
#df_smaller.to_csv('test.csv')
# image folder
target_img_folder ='../data_img_classes/class_horiztonal_striped/'
df = create_image_class_dataframe(target_img_folder)
df.head()
print('target class')
plt.figure(figsize=(12,3))
for i in range(5):
img_path= df['img_path'][i]
img = image.load_img(img_path, target_size=(224, 224))
plt.subplot(1,5,i+1)
plt.imshow(img)
plt.grid(b=False)
xcolumns=['x'+str(i) for i in np.arange(2048)]  # ResNet50 avg_pool output has 2048 features
X = df.loc[:,xcolumns].as_matrix().astype('float')
y= df.loc[:,'label'].as_matrix().astype('float')
X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X,y,stratify=y,test_size=.33)
print(' training shape {0} \n testing shape {1}').format(X_train.shape,X_test.shape)
print('\n target/non-target \n (train) {0}\{1} \n (test) {2}\{3}').format(y_train.sum(),(1-y_train).sum(),y_test.sum(),(1-y_test).sum())
# classifiers
C = 1.0
clf_LR = LogisticRegression(C=C, penalty='l1', tol=0.01)
clf_svm = sklearn.svm.SVC(C=C,kernel='linear')
clf_LR.fit(X_train, y_train)
clf_svm.fit(X_train, y_train)
coef = clf_LR.coef_[0,:]
plt.figure(figsize=(12,3))
sns.set_style('white')
plt.scatter(np.arange(len(coef)),coef)
plt.xlabel('nnet feature')
plt.ylabel('LogReg coefficient')
sns.despine()
y_pred = clf_LR.predict(X_test)
(TP,FP,TN,FN) =perf_measure(y_test,y_pred)
print('TruePos:{0}\nFalsePos:{1}\nTrueNeg:{2}\nFalseNeg:{3}').format(TP,FP,TN,FN)
y_pred = clf_svm.predict(X_test)
(TP,FP,TN,FN) =perf_measure(y_test,y_pred)
print('TruePos:{0}\nFalsePos:{1}\nTrueNeg:{2}\nFalseNeg:{3}').format(TP,FP,TN,FN)
# from sklearn.model_selection import StratifiedKFold
# skf = StratifiedKFold(n_splits=5,shuffle=True)
# for train, test in skf.split(X, y):
# #print("%s %s" % (train, test))
# C=1.0
# clf_LR = LogisticRegression(C=C, penalty='l1', tol=0.01)
# clf_LR.fit(X[train], y[train])
# y_pred = clf_LR.predict(X[test])
# (TP,FP,TN,FN) =perf_measure(y[test],y_pred)
# print('\nTruePos:{0}\nFalsePos:{1}\nTrueNeg:{2}\nFalseNeg:{3}').format(TP,FP,TN,FN)
clf_LR = LogisticRegression(C=C, penalty='l1', tol=0.01)
skf = StratifiedKFold(n_splits=5,shuffle=True)
score, permutation_scores, pvalue = permutation_test_score(
clf_LR, X, y, scoring="accuracy", cv=skf, n_permutations=100)
#
plt.hist(permutation_scores)
plt.axvline(score)
sns.despine()
plt.xlabel('accuracy')
print(pvalue)
# image folder
target_img_folder ='../data_img_classes/class_red_pink/'
df = create_image_class_dataframe(target_img_folder)
df.head()
print('target class')
plt.figure(figsize=(12,3))
for i in range(5):
img_path= df['img_path'][i+1]
img = image.load_img(img_path, target_size=(224, 224))
plt.subplot(1,5,i+1)
plt.imshow(img)
plt.grid(b=False)
# split data
xcolumns=['x'+str(i) for i in np.arange(2048)]  # ResNet50 avg_pool output has 2048 features
X = df.loc[:,xcolumns].as_matrix().astype('float')
y= df.loc[:,'label'].as_matrix().astype('float')
X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X,y,stratify=y,test_size=.33)
print(' training shape {0} \n testing shape {1}').format(X_train.shape,X_test.shape)
print('\n target/non-target \n (train) {0}\{1} \n (test) {2}\{3}').format(y_train.sum(),(1-y_train).sum(),y_test.sum(),(1-y_test).sum())
# Train
clf_svm.fit(X_train, y_train)
# test
y_pred = clf_svm.predict(X_test)
(TP,FP,TN,FN) =perf_measure(y_test,y_pred)
print('TruePos:{0}\nFalsePos:{1}\nTrueNeg:{2}\nFalseNeg:{3}').format(TP,FP,TN,FN)
clf_LR = LogisticRegression(C=C, penalty='l1', tol=0.01)
skf = StratifiedKFold(n_splits=5,shuffle=True)
score, permutation_scores, pvalue = permutation_test_score(
clf_LR, X, y, scoring="accuracy", cv=skf, n_permutations=100)
plt.hist(permutation_scores)
plt.axvline(score)
sns.despine()
plt.xlabel('accuracy')
plt.title('permutation test on test set classification')
print(pvalue)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Write a function that returns the sum of the first n terms of 1 - 1/3 + 1/5 - 1/7 .... In the main program, set n=1000 and n=100000 respectively and print 4 times the value returned by the function.
Step2: Rewrite exercises 1 and 4 from task3 as functions and call them.
Step3: Write a program that computes the cumulative sum from integer m to integer n with step k. The summation must be implemented as a function; the main program reads m, n and k from the user and calls the function to verify that it is correct.
| <ASSISTANT_TASK:>
Python Code:
def factorial_sum(end):
i = 0
factorial_n = 1
while i < end:
i = i + 1
factorial_n = factorial_n *i
return factorial_n
m= int(input('请输入第1个整数,以回车结束。'))
n= int(input('请输入第2个整数,以回车结束。'))
k = int(input('请输入第3个整数,以回车结束。'))
print('最终的和是:', factorial_sum(m) + factorial_sum(n) + factorial_sum(k))
def number_sum(end):
i=0
total=0
while i < end:
i=i+1
if i%2!=0:
total=total+(1/(2*i-1))
else:
total=total-(1/(2*i-1))
return total
n=1000
m=100000
print('当n=1000时,最终值为:',4*number_sum(n))
print('当n=100000时,最终值为:',4*number_sum(m))
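# For reference: this alternating series is the Leibniz formula for pi/4,
# so 4*number_sum(n) converges (slowly) to math.pi as n grows.
import math
print('math.pi =', math.pi)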
def constellation(name,month,day):
if month==1 and day<21:
print(name,'你是摩羯座!')
if month==12 and day>21:
print(name,'你是摩羯座!')
if month==2 and day<19:
print(name,'你是水瓶座!')
if month==1 and day>20:
print(name,'你是水瓶座!')
if month==3 and day<21:
print(name,'你是双鱼座!')
if month==2 and day>18:
print(name,'你是双鱼座!')
if month==4 and day<21:
print(name,'你是白羊座!')
if month==3 and day>20:
print(name,'你是白羊座!')
if month==5 and day<22:
print(name,'你是金牛座!')
if month==4 and day>20:
print(name,'你是金牛座!')
if month==6 and day<23:
print(name,'你是双子座!')
if month==5 and day>21:
print(name,'你是双子座!')
if month==7 and day<24:
print(name,'你是巨蟹座!')
if month==6 and day>22:
print(name,'你是巨蟹座!')
if month==8 and day<24:
print(name,'你是狮子座!')
if month==7 and day>23:
print(name,'你是狮子座!')
if month==9 and day<24:
print(name,'你是处女座!')
if month==8 and day>23:
print(name,'你是处女座!')
if month==10 and day<24:
print(name,'你是天秤座!')
if month==9 and day>23:
print(name,'你是天秤座!')
if month==11 and day<23:
print(name,'你是天蝎座!')
if month==10 and day>23:
print(name,'你是天蝎座!')
if month==12 and day<23:
print(name,'你是射手座!')
if month==11 and day>22:
print(name,'你是射手座!')
a=str(input('your name:'))
b=int(input('your birth month:'))
c=int(input('your birth day'))
constellation(a,b,c)
def change(word):
if word.endswith(('s','sh','ch','x','o')):
print(word+'es')
elif word.endswith(('by','cy','dy','fy','gy','hy','jy','ky','ly','my','ny','py','qy','ry','sy','ty','vy','wy','xy','zy')):
print('把y改i再加es')
else:
print(word+'s')
a=str(input())
change(a)
def number_sum(m, n, k):
    i = m
    total = 0
    while i <= n:
        total = total + i
        i = i + k
    return total
m=int(input('请输入m的值,以回车结束。'))
n=int(input('请输入n的值,以回车结束。'))
k=int(input('请输入k的值,以回车结束。'))
print('从整数m到整数n累加和为:', number_sum(m, n, k))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Record schedules for 2 weeks, then augment the counts with weekly flight numbers.
Step2: Good dates
Step3: Save
| <ASSISTANT_TASK:>
Python Code:
L=json.loads(file('../json/L.json','r').read())
M=json.loads(file('../json/M.json','r').read())
N=json.loads(file('../json/N.json','r').read())
import requests
AP={}
for c in M:
if c not in AP:AP[c]={}
for i in range(len(L[c])):
AP[c][N[c][i]]=L[c][i]
baseurl='https://www.airportia.com/'
import requests, urllib2
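# urlgetter: open a requests session (which records the site's cookies),
# copy those cookies onto a urllib2 opener, and return the raw HTML of the
# page fetched through the same session.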
def urlgetter(url):
s = requests.Session()
cookiesopen = s.get(url)
cookies=str(s.cookies)
fcookies=[[k[:k.find('=')],k[k.find('=')+1:k.find(' for ')]] for k in cookies[cookies.find('Cookie '):].split('Cookie ')[1:]]
#push token
opener = urllib2.build_opener()
for k in fcookies:
opener.addheaders.append(('Cookie', k[0]+'='+k[1]))
#read html
return s.get(url).content
SD={}
SC=json.loads(file('../json/SC2.json','r').read())
#pop out last - if applicable
try: SD.pop(c)
except: pass
for h in range(len(AP.keys())):
c=AP.keys()[h]
#country not parsed yet
if c in SC:
if c not in SD:
SD[c]=[]
print h,c
airportialinks=AP[c]
sch={}
#all airports of country, where there is traffic
for i in airportialinks:
if i in SC[c]:
print i,
if i not in sch:sch[i]={}
url=baseurl+airportialinks[i]
m=urlgetter(url)
for d in range (3,17):
#date not parsed yet
if d not in sch[i]:
url=baseurl+airportialinks[i]+'arrivals/201704'+str(d)
m=urlgetter(url)
soup = BeautifulSoup(m, "lxml")
#if there are flights at all
if len(soup.findAll('table'))>0:
sch[i][d]=pd.read_html(m)[0]
else: print '--W-',d,
SD[c]=sch
print
dbpath='E:/Dropbox/Public/datarepo/aviation/' #large file db path
file(dbpath+"json/SD_arrv.json",'w').write(repr(SD))
I3=json.loads(file('../json/I3.json','r').read())
MDF=pd.DataFrame()
for c in SD:
sch=SD[c]
mdf=pd.DataFrame()
for i in sch:
for d in sch[i]:
df=sch[i][d].drop(sch[i][d].columns[3:],axis=1).drop(sch[i][d].columns[0],axis=1)
df['To']=i
df['Date']=d
mdf=pd.concat([mdf,df])
if len(sch)>0:
mdf['City']=[i[:i.rfind(' ')] for i in mdf['From']]
mdf['Airport']=[i[i.rfind(' ')+1:] for i in mdf['From']]
cpath=I3[c].lower()
file('../countries/'+cpath+"/json/mdf_arrv.json",'w').write(json.dumps(mdf.reset_index().to_json()))
MDF=pd.concat([MDF,mdf])
print c,
dbpath='E:/Dropbox/Public/datarepo/aviation/' #large file db path
MDF.reset_index().to_json(dbpath+'json/MDF_arrv.json')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Regular Expressions
Step6: If I want, I can make the regular expression case insensitive with a COMPILATION FLAG.
Step12: The Real Power of Regular Expressions...
Step16: Character Class Size
Step18: Metacharacters in Character Classes
Step20: Metacharacters (\)
Step21: \d
Step22: \D
Step23: \s
Step24: \S
Step25: \w
Step26: \W
Step29: Putting Things Together
Step31: How about finding telephones?
Step35: How about finding names?
Step42: Find all strings that end in "ython"
| <ASSISTANT_TASK:>
Python Code:
import re
nameString = """Wendy, Brian, Karen, Charlene, Jeff.
wendy, brian, Karen, charlene, jeff"""
r1 = re.compile(r"Brian")
print (r1.findall(nameString))
r1 = re.compile(r"Brian")
print (r1.findall(nameString))
r2 = re.compile(r"Brian", re.IGNORECASE)
print (r2.findall(nameString))
r3 = re.compile(r"Brian",re.I) # alias for IGNORECASE
print (r3.findall(nameString))
test = """abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMN
OPQRSTUVWXYZ0123456789"""
r4 = re.compile(r"[abc]")
print (r4.findall(test))
r5 = re.compile(r"[a-c]")
print (r5.findall(test))
r6 = re.compile(r"[a-z]")
print (r6.findall(test))
r7 = re.compile(r"[a-zA-Z0-9]")
print (r7.findall(test))
r8 = re.compile(r"a[bcd]*b",re.I)
r9 = re.compile(r"a{1,3}b")
print (r8.findall(test))
print (r9.findall(test))
print(r9.findall("aabaaabab"))
r10 = re.compile(r"[^A-Z]")
print (r10.findall(test))
r11 = re.compile(r"\*")
print (r11.findall("""The range of the function f1* is [0,12]
and the domain of f1* is [0,144]"""))
r12 = re.compile(r"\d")
print (r12.findall("My work address is 729 Arapeen Drive, Salt Lake City, UT, 84108."))
r13 = re.compile(r"\D")
print (r13.findall("My work address is 729 Arapeen Drive, Salt Lake City, UT, 84108.")
)
r14 = re.compile(r"\s")
result14= r14.findall("My work address is 729 Arapeen Drive, Salt Lake City, UT, 84108.")
print (result14)
r15 = re.compile(r"[ \t\n\r\f\v]")
result15 = r15.findall("My work address is 729 Arapeen Drive, Salt Lake City, UT, 84108.")
print (result15)
print( result14 == result15)
r16 = re.compile(r"\S")
r17 = re.compile(r"[^ \t\n\r\f\v]")
result16 = r16.findall("My work address is 729 Arapeen Drive, Salt Lake City, UT, 84108.")
result17 = r17.findall("My work address is 729 Arapeen Drive, Salt Lake City, UT, 84108.")
print (result16)
print (result17)
print (result16 == result17)
r18 = re.compile(r"\w")
r19 = re.compile(r"[a-zA-Z0-9_]")
result18 = r18.findall("My work address is 729 Arapeen Drive, Salt Lake City, UT, 84108.")
result19 = r19.findall("My work address is 729 Arapeen Drive, Salt Lake City, UT, 84108.")
print (result18)
print (result19)
print (result18 == result19)
r18 = re.compile(r"\W")
r19 = re.compile(r"[^a-zA-Z0-9_]")
result18 = r18.findall("My work address is 729 Arapeen Drive, Salt Lake City, UT, 84108.")
result19 = r19.findall("My work address is 729 Arapeen Drive, Salt Lake City, UT, 84108.")
print (result18)
print (result19)
print (result18 == result19)
print (',' in result18)
address = "My work address is 729 Arapeen Drive, Salt Lake City, UT, 84108."
findZipcode = re.compile(r"\d{5,5}")
findZipcode2 = re.compile(r"[0-9]{5,5}")
print (findZipcode.findall(address))
print (findZipcode2.findall(address))
txt = open("../Resources/contact.html","r").read()
findPhone = re.compile(r"[0-9]{3,3}-\d{4,4}|[0-9]{3,3}-[0-9]{3,3}-\d{4,4}")
print (findPhone.findall(txt))
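# Quick illustration on a literal string (exercises both alternatives of the pattern):
print (findPhone.findall("Call 555-1234 or 801-581-8954 for details"))
# -> ['555-1234', '801-581-8954']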
findName1 = re.compile(r"[A-Z][a-z]+\s+[A-Z][a-z]+")
#print findName1.findall("Brian Chapman, Wendy Chapman, Jeremiah Chapman")
names1 = findName1.findall(txt)
print (names1[:80])
findName2 = re.compile(r"[A-Z][a-z]*,\s+[A-Z][a-z]*")
names2 = findName2.findall(txt)
print (names2[:20])
testString = """Brian has a nephew named Ben. Br. Chapman died yesterday. Brian Chapman Brian E. Chapman Brian Earl Chapman Wendy Webber Chapman Clare 1234 4321.1234
python python.org http://python.org www.python.org jython zython Brad Bob cpython brian http://www.python.org perl Perl PERL"""
rEx1 = re.compile(r"[a-zA-Z]*ython")
print rEx1.findall(testString)
rEx2 = re.compile(r"Brian",re.I)
print rEx2.findall(testString)
rEx3 = re.compile(r"jython|python|ziggy|zoom",re.I) # this illustrates an OR
print rEx3.findall(testString)
rEx4 = re.compile(r"python|perl") # this illustrates an OR
rEx4b = re.compile(r"python|perl",re.IGNORECASE) # this illustrates an OR
print rEx4.findall(testString)
print rEx4b.findall(testString)
rEx5 = re.compile(r"B[a-z]*") # this illustrates an AND
print rEx5.findall(testString)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Built-in data structures
Step2: Note that in Python, indexing starts at 0!
Step3: Dictionaries ("Hashes")
Step4: Functions, control structures
Step5: Control structures
Step6: While
Step7: For-loops
Step8: Note that either double or single quotes can be used to delimit strings.
Step9: Defining classes
Step10: However, note that we can get pretty far without needing classes!
Step11: help expects the object you need help about. Just instantiate one, if you do not have it!
Step12: IPython also has a number of so-called "magic" functions
Step13: Scripts, modules, packages
Step14: Modules
Step15: Remember, everything is an object. Access the contents of modules accordingly.
Step16: Packages for data crunching
Step17: There are many other useful commands built in, e.g. load(), save(), loadtxt(), etc.
Step18: One of the most useful built-in commands is ``shape''
Step19: Doing computations
Step20: Plotting
Step21: In a box and whisker plot, the box extends from the lower to upper quartile values of the data, with a line at the median. The whiskers extend from the box to show the range of the data. Outlier points are those past the end of the whiskers.
Step22: How about a $2 \times 5$ matrix plus a $1 \times 5 \times 3$ tensor?
Step23: We need to make the number of dimensions match. Solution
Step24: Style
Step25: Then I use np to access Numpy methods and plt to access plotting methods.
| <ASSISTANT_TASK:>
Python Code:
print "hello world"
print "hello world", 1, 2, 1 + 2
a = 1
b = 1
print "hello world", a + b
a = 1
b = "hello"
print "hello world", a + b
b = "world"
print "hello %s" % b
T = (1, 2, 3, "hello")
print T[0]
L = [1, 2, 3, "hello"]
L[0] = "Lists are mutable"
print L[0]
L.append("goodbye")
print L[-1]
D = {"a": 1, "b": 2}
print D["a"]
def timesfour(x):
return 4 * x
print timesfour(2)
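# Functions can also have default (keyword) arguments -- a small illustrative example:
def power(x, n=2):
    return x ** n
print power(3), power(3, 3)   # prints: 9 27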
s = "z"
if s == "y":
print "y"
elif s == "z":
print "z"
else:
print "b"
a = 1.0
s = "hello"
while a != 10.0 and s == "hello":
a = a + 1.0
print a
for i in range(3):
print i
for i in [1, 2, 'x', 3, 4, 'h', 5]:
print i
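# enumerate() gives the index and the value at the same time (extra illustration):
for i, value in enumerate(['a', 'b', 'c']):
    print i, value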
type(timesfour)
# At this point try tab completion on timesfour.
timesfour
# here "object" specifies the superclass
class MyClass(object):
common = 10
# this is a constructor
def __init__(self):
self.myvariable = 3
def myfunction(self, arg1, arg2):
return self.myvariable
classinstance = MyClass()
# note that arguments passed to myfunction are ignored
print 'classinstance.myfunction(1, 2): %s' % classinstance.myfunction(1, 2)
# This variable is shared by all classes.
classinstance2 = MyClass()
print 'classinstance.common: %s' % classinstance.common
print 'classinstance2.common: %s' % classinstance2.common
# Note how we use the class name instead of the instance.
MyClass.common = 30
print 'classinstance.common: %s' % classinstance.common
print 'classinstance2.common: %s' % classinstance2.common
# This will not update the variable on the class,
# instead it will bind a new object to the old
# variable name.
classinstance.common = 10
print 'classinstance.common: %s' % classinstance.common
print 'classinstance2.common: %s' % classinstance2.common
MyClass.common = 50
# This has not changed, because classinstance.common is
# now an instance variable.
print 'classinstance.common: %s' % classinstance.common
# but this has changed
print 'classinstance2.common: %s' % classinstance2.common
# This class inherits from MyClass. The example
# class above inherits from "object", which makes
# it what's called a "new-style class".
# You can read more about these here: http://stackoverflow.com/a/54873
# Multiple inheritance is declared as:
# class OtherClass(MyClass1, MyClass2, MyClassN)
class OtherClass(MyClass):
# The "self" argument is passed automatically
# and refers to the class instance, so you can set
# instance variables as above, but from inside the class.
def __init__(self, arg1):
self.myvariable = 3
print "arg1: %s" % arg1
classinstance = OtherClass("hello")
print 'classinstance.myfunction(1, 2): %s' % classinstance.myfunction(1, 2)
# This class doesn't have a .test member, but
# we can add one to the instance anyway. Note
# that this will only be a member of classinstance.
classinstance.test = 10
print 'classinstance.test: %s' % classinstance.test
a = [1,2,3]
help(a)
list?
%lsmagic
%%writefile mytest.py
for i in xrange(3):
print "hello %d" % i
%run mytest.py
import datetime
# imports a single object
from datetime import date
# imports everything (careful, pollutes namespace)
from datetime import *
from datetime import datetime
datetime.now()
myobject = datetime.month
print myobject
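# Note: the bare names used below (array, ones, zeros, randn, plot, ...) assume
# the notebook was started in pylab mode (e.g. %pylab inline), which pulls
# NumPy and matplotlib functions into the global namespace.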
array([1,2,3])
ones((2,3))
zeros((3,2))
eye(4)
# draw from normal distribution with mean 0, std 1
randn(2,2)
a = np.array([[1,2,3],[4,5,6]])
print a
print a.T
a.mean(0)
a.mean(1)
a.mean()
a.std(0)
a.max()
print randn(3,3).shape
print randn(2,5,7,3).shape
print array([1,2,3,4]).shape
a = array([1,2,3])
print a[0]
print a[1]
a = array([[1,2,3],[4,5,6]])
print a[0,0]
print a[1,2]
a = randn(2,3,4)
print a
print a[0,0,0]
print a[0,1,3]
a = array([[1,2,3,4],[5,6,7,8],[9,10,11,12]])
print a
print a[0, :] # a "slice"
print a[:, 0] # another "slice"
print a[1:3, 0] # another "slice"
print a[1:3, :] # another "slice" (this is a 2-d block)
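# Boolean masks are another handy way to index arrays (extra illustration):
a = array([1, 2, 3, 4, 5])
print a[a > 2]   # prints: [3 4 5]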
x = arange(0,pi,pi/180)
plot(x, cos(x))
x1 = 1 + 0.1*randn(20)
y1 = 1 + 0.1*randn(20)
x2 = -1 + 0.5*randn(20)
y2 = -1 + 0.5*randn(20)
scatter(x1,y1)
scatter(x2,y2,c='r',marker='x')
xlim(-2,2)
ylim(-2,2)
legend(['class 1', 'class2'], loc='lower right')
subplot(1,2,1)
hist(randn(200),bins=10)
ylabel('counts')
xlabel('bins')
subplot(1,2,2)
boxplot(randn(200))
title('box and whisker plot')
print (randn(2,5) + randn(1,5)).shape
print randn(2,5).shape
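# The next line raises a ValueError: shapes (2,5) and (1,5,3) cannot be
# broadcast together because the trailing dimensions (5 and 3) do not match.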
print randn(2,5) + randn(1,5,3)
print randn(2,5)[:,:,newaxis].shape
print (randn(2,5)[:,:,newaxis] + randn(1,5,3)).shape
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import YouTubeVideo
# a short video about using NumPy arrays, from Enthought
YouTubeVideo('vWkb7VahaXQ')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
| <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'mri', 'sandbox-1', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The data peaks around 0 and around 6. Replacing the missing values with ~3 will be inserting values into the data set that mostly don't exist in the observed values. The median is slightly better, but will still cause the imputed values to be in one of the two clusters. This has the effect of essentially increasing the variance of the appropriate distribution. Let's take a look at what the distribution looks like if we add 500 missing values and then impute them using the mean of the observed values.
Step2: It doesn't appear to be that great. We can see the issue with increased variance by trying to fit a Gaussian mixture model to the data with the imputed values, versus fitting it to the data and ignoring missing values.
Step3: When we impute the missing values, it seems that one component is fit properly and one has drastically increased variance. In contrast, when we ignore the missing values, we fit a model that represents the underlying data much more faithfully.
Step4: Even when the data is all drawn from a single, Gaussian, distribution, it is not a great idea to do mean imputation. We can see that the standard deviation of the learned distribution is significantly smaller than the true standard deviation (of 1), whereas if the missing data is ignored the value is closer.
Step5: Even in the simplest case of Gaussian-distributed data with a diagonal covariance matrix, it is more accurate to use the ignoring strategy than to impute the mean. When the data set is mostly unobserved, the mean-imputation strategy tends to do better in this case, but only because there is so little data for the ignoring strategy to actually train on. The deflation of the variance benefits the mean-imputation strategy because all of the off-diagonal elements should be 0, but they are likely to be artificially high when there are only a few examples of the pairs of variables co-existing in the dataset. This weakness also makes the ignoring strategy more likely to encounter linear algebra errors, such as a non-invertible covariance matrix.
Step6: This may seem like an obvious thing to do. However, it suggests a way of dealing with multivariate data modeled with an IndependentComponentsDistribution when some of the features are missing. Specifically, treat each column independently and update based on the observed values, regardless of whether there is an unobserved value in another column of the same sample. For example
Step7: Easy. As we saw above, we can do the same to learn a multivariate Gaussian distribution in the presence of missing data. Again, we don't need to change anything about how we interact with the data, and there are no flags to toggle.
Step8: In an IndependentComponentsDistribution, this just means that when multiplying together the probabilities of each feature to get the total probability, that some dimensions don't factor into the calculation.
Step9: 2. K-Means Clustering
Step10: Just like the other models, you don't need to change the method calls in order to handle missing data. You can fit a K-means model to data sets with missing values and make predictions on samples with missing values in the same way you would without missing values. The prediction step will assign samples to the nearest centroid in the dimensions that are observed, ignoring the missing values.
Step11: We can see that there are some blue points in the red cluster on the right plot because those samples are entirely NaN. Any sample that is entirely NaN is assigned to cluster 0. Otherwise, the model is still able to identify the two clusters even though there are many missing values.
Step12: However, if one was modeling each dimension independently, there should be no hit at all!
Step13: 4. Naive Bayes / Bayes Classifiers
Step14: Since pomegranate also has semi-supervised learning built-in, this means that one can now fit Bayes classifiers on data sets with missingness in both the labels and in the values! Since semi-supervised learning does rely on EM, it will be slower to train multivariate Gaussian models with missing values than not to.
Step15: 5. Hidden Markov Models
Step16: Now let's run the forward algorithm on a simple sequence.
Step17: Let's see what happens when we remove one of the characters.
Step18: We can see that initially the first character is aligned to s1 because there is a 100% chance of going from the start state to s1. The value is 0.25 because it is equal to the transition probability (1.0) multiplied by the emission probability (0.25). In the next step, you can see that the probability is equally diffused between two options, staying in the current state (transition probability of 0.5) and moving to s2 (also transition probability of 0.5). Since the character is missing, there is no emission probability to multiply by.
Step19: Fitting is pretty much the same story as the previous models. Like the Bayes classifiers, one can now train a hidden Markov model in a supervised manner, having some observations in the sequence missing, but also labels on each observation. Labeled missing data can still be used to train the transition parameters.
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import time
import pandas
import random
import numpy
import matplotlib.pyplot as plt
import seaborn; seaborn.set_style('whitegrid')
import itertools
from pomegranate import *
random.seed(0)
numpy.random.seed(0)
numpy.set_printoptions(suppress=True)
%load_ext watermark
%watermark -m -n -p numpy,scipy,pomegranate
%pylab inline
from pomegranate import *
import seaborn
seaborn.set_style('whitegrid')
numpy.random.seed(0)
X = numpy.concatenate([numpy.random.normal(0, 1, size=(1000)), numpy.random.normal(6, 1, size=(1250))])
plt.title("Bimodal Distribution", fontsize=14)
plt.hist(X, bins=numpy.arange(-3, 9, 0.1), alpha=0.6)
plt.ylabel("Count", fontsize=14)
plt.yticks(fontsize=12)
plt.xlabel("Value", fontsize=14)
plt.yticks(fontsize=12)
plt.vlines(numpy.mean(X), 0, 80, color='r', label="Mean")
plt.vlines(numpy.median(X), 0, 80, color='b', label="Median")
plt.legend(fontsize=14)
plt.show()
X = numpy.concatenate([X, [numpy.nan]*500])
X_imp = X.copy()
X_imp[numpy.isnan(X_imp)] = numpy.mean(X_imp[~numpy.isnan(X_imp)])
plt.title("Bimodal Distribution", fontsize=14)
plt.hist(X_imp, bins=numpy.arange(-3, 9, 0.1), alpha=0.6)
plt.ylabel("Count", fontsize=14)
plt.yticks(fontsize=12)
plt.xlabel("Value", fontsize=14)
plt.yticks(fontsize=12)
plt.vlines(numpy.mean(X), 0, 80, color='r', label="Mean")
plt.vlines(numpy.median(X), 0, 80, color='b', label="Median")
plt.legend(fontsize=14)
plt.show()
x = numpy.arange(-3, 9, 0.1)
model1 = GeneralMixtureModel.from_samples(NormalDistribution, 2, X_imp.reshape(X_imp.shape[0], 1))
model2 = GeneralMixtureModel.from_samples(NormalDistribution, 2, X.reshape(X.shape[0], 1))
p1 = model1.probability(x.reshape(x.shape[0], 1))
p2 = model2.probability(x.reshape(x.shape[0], 1))
plt.figure(figsize=(12, 3))
plt.subplot(121)
plt.title("Mean Impute Missing Values", fontsize=14)
plt.hist(X_imp, bins=x, alpha=0.6, density=True)
plt.plot(x, p1, color='b')
plt.ylabel("Count", fontsize=14); plt.yticks(fontsize=12)
plt.xlabel("Value", fontsize=14); plt.yticks(fontsize=12)
plt.subplot(122)
plt.title("Ignore Missing Values", fontsize=14)
plt.hist(X[~numpy.isnan(X)], bins=x, alpha=0.6, density=True)
plt.plot(x, p2, color='b')
plt.ylabel("Count", fontsize=14); plt.yticks(fontsize=12)
plt.xlabel("Value", fontsize=14); plt.yticks(fontsize=12)
plt.show()
X = numpy.concatenate([numpy.random.normal(0, 1, size=(750)), [numpy.nan]*250])
X_imp = X.copy()
X_imp[numpy.isnan(X_imp)] = numpy.mean(X_imp[~numpy.isnan(X_imp)])
x = numpy.arange(-3, 3, 0.1)
d1 = NormalDistribution.from_samples(X_imp)
d2 = NormalDistribution.from_samples(X)
p1 = d1.probability(x.reshape(x.shape[0], 1))
p2 = d2.probability(x.reshape(x.shape[0], 1))
plt.figure(figsize=(12, 3))
plt.subplot(121)
plt.title("Mean Impute Missing Values", fontsize=14)
plt.hist(X_imp, bins=x, alpha=0.6, density=True, label="$\sigma$ = {:4.4}".format(d1.parameters[1]))
plt.plot(x, p1, color='b')
plt.ylabel("Count", fontsize=14); plt.yticks(fontsize=12)
plt.xlabel("Value", fontsize=14); plt.yticks(fontsize=12)
plt.legend(fontsize=14)
plt.subplot(122)
plt.title("Ignore Missing Values", fontsize=14)
plt.hist(X[~numpy.isnan(X)], bins=x, alpha=0.6, density=True, label="$\sigma$ = {:4.4}".format(d2.parameters[1]))
plt.plot(x, p2, color='b')
plt.ylabel("Count", fontsize=14); plt.yticks(fontsize=12)
plt.xlabel("Value", fontsize=14); plt.yticks(fontsize=12)
plt.legend(fontsize=14)
plt.show()
n, d, steps = 1000, 10, 50
diffs1 = numpy.zeros(int(steps*0.86))
diffs2 = numpy.zeros(int(steps*0.86))
X = numpy.random.normal(6, 3, size=(n, d))
for k, size in enumerate(range(0, int(n*d*0.86), n*d // steps)):
idxs = numpy.random.choice(numpy.arange(n*d), replace=False, size=size)
i, j = idxs // d, idxs % d
cov_true = numpy.cov(X, rowvar=False, bias=True)
X_nan = X.copy()
X_nan[i, j] = numpy.nan
X_mean = X_nan.copy()
for col in range(d):
mask = numpy.isnan(X_mean[:,col])
X_mean[mask, col] = X_mean[~mask, col].mean()
diff = numpy.abs(numpy.cov(X_mean, rowvar=False, bias=True) - cov_true).sum()
diffs1[k] = diff
dist = MultivariateGaussianDistribution.from_samples(X_nan)
diff = numpy.abs(numpy.array(dist.parameters[1]) - cov_true).sum()
diffs2[k] = diff
plt.title("Error in Multivariate Gaussian Covariance Matrix", fontsize=16)
plt.plot(diffs1, label="Mean")
plt.plot(diffs2, label="Ignore")
plt.xlabel("Percentage Missing", fontsize=14)
plt.ylabel("L1 Errors", fontsize=14)
plt.xticks(range(0, 51, 10), numpy.arange(0, 5001, 1000) / 5000.)
plt.xlim(0, 50)
plt.legend(fontsize=14)
plt.show()
X = numpy.random.randn(100)
X_nan = numpy.concatenate([X, [numpy.nan]*100])
print("Fitting only to observed values:")
print(NormalDistribution.from_samples(X))
print()
print("Fitting to observed and missing values:")
print(NormalDistribution.from_samples(X_nan))
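# A quick numpy cross-check of the "ignore the NaNs" update (a sketch): the fit
# on the array with missing values should match a plain maximum-likelihood fit
# on only the observed entries.
observed = X_nan[~numpy.isnan(X_nan)]
print("numpy mean/std of observed values: {:.4f} / {:.4f}".format(numpy.mean(observed), numpy.std(observed)))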
X = numpy.random.normal(0, 1, size=(500, 3))
idxs = numpy.random.choice(1500, replace=False, size=500)
i, j = idxs // 3, idxs % 3
X[i, j] = numpy.nan
d = IndependentComponentsDistribution.from_samples(X, distributions=[NormalDistribution]*3)
d
NormalDistribution(1, 2).probability(numpy.nan)
d.probability((numpy.nan, 2, 3))
d.distributions[1].probability(2) * d.distributions[2].probability(3)
X = numpy.concatenate([numpy.random.normal(0, 1, size=(50, 2)), numpy.random.normal(3, 1, size=(75, 2))])
X_nan = X.copy()
idxs = numpy.random.choice(250, replace=False, size=50)
i, j = idxs // 2, idxs % 2
X_nan[i, j] = numpy.nan
model1 = Kmeans.from_samples(2, X)
model2 = Kmeans.from_samples(2, X_nan)
y1 = model1.predict(X)
y2 = model2.predict(X_nan)
plt.figure(figsize=(14, 6))
plt.subplot(121)
plt.title("Fit w/o Missing Values", fontsize=16)
plt.scatter(X[y1 == 0,0], X[y1 == 0,1], color='b')
plt.scatter(X[y1 == 1,0], X[y1 == 1,1], color='r')
plt.subplot(122)
plt.title("Fit w/ Missing Values", fontsize=16)
plt.scatter(X[y2 == 0,0], X[y2 == 0,1], color='b')
plt.scatter(X[y2 == 1,0], X[y2 == 1,1], color='r')
plt.show()
X = numpy.concatenate([numpy.random.normal(0, 1, size=(1000, 10)), numpy.random.normal(2, 1, size=(1250, 10))])
idxs = numpy.random.choice(22500, replace=False, size=5000)
i, j = idxs // 10, idxs % 10
X_nan = X.copy()
X_nan[i, j] = numpy.nan
%timeit GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X, max_iterations=10)
%timeit GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, 2, X_nan, max_iterations=10)
%timeit -n 100 GeneralMixtureModel.from_samples([NormalDistribution]*2, 2, X, max_iterations=10)
%timeit -n 100 GeneralMixtureModel.from_samples([NormalDistribution]*2, 2, X_nan, max_iterations=10)
y = numpy.concatenate([numpy.zeros(1000), numpy.ones(1250)])
%timeit -n 100 BayesClassifier.from_samples(MultivariateGaussianDistribution, X, y)
%timeit -n 100 BayesClassifier.from_samples(MultivariateGaussianDistribution, X_nan, y)
idx = numpy.random.choice(2250, replace=False, size=750)
y_nan = y.copy()
y_nan[idx] = -1
model = BayesClassifier.from_samples(MultivariateGaussianDistribution, X_nan, y_nan, verbose=True)
%timeit BayesClassifier.from_samples(MultivariateGaussianDistribution, X, y_nan)
%timeit BayesClassifier.from_samples(MultivariateGaussianDistribution, X_nan, y_nan)
d1 = DiscreteDistribution({'A': 0.25, 'B': 0.75})
d2 = DiscreteDistribution({'A': 0.67, 'B': 0.33})
s1 = State(d1, name="s1")
s2 = State(d2, name="s2")
model = HiddenMarkovModel()
model.add_states(s1, s2)
model.add_transition(model.start, s1, 1.0)
model.add_transition(s1, s1, 0.5)
model.add_transition(s1, s2, 0.5)
model.add_transition(s2, s2, 0.5)
model.add_transition(s2, s1, 0.5)
model.bake()
numpy.exp(model.forward(['A', 'B', 'A', 'A']))
numpy.exp(model.forward(['A', 'nan', 'A', 'A']))
model.predict(['A', 'A', 'B', 'B', 'A', 'A'])
model.predict(['A', 'nan', 'B', 'B', 'nan', 'A'])
X = numpy.random.randint(3, size=(500, 10)).astype('float64')
idxs = numpy.random.choice(5000, replace=False, size=2000)
i, j = idxs // 10, idxs % 10
X_nan = X.copy()
X_nan[i, j] = numpy.nan
%timeit -n 100 BayesianNetwork.from_samples(X, algorithm='exact')
%timeit -n 100 BayesianNetwork.from_samples(X_nan, algorithm='exact')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's create some training / testing pairs
Step2: Just to check, let's print the shapes of the different train/test pairs
Step3: Let's test training with SPY n_train times
| <ASSISTANT_TASK:>
Python Code:
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
from multiprocessing import Pool
import pickle
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import recommender.simulator as sim
from utils.analysis import value_eval
from recommender.agent import Agent
from functools import partial
def show_results(results_list, data_in_df, symbol, graph=False):
for values in results_list:
total_value = values.sum(axis=1)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value))))
print('-'*100)
initial_date = total_value.index[0]
compare_results = data_in_df.loc[initial_date:, 'Close'].copy()
compare_results.name = symbol
compare_results_df = pd.DataFrame(compare_results)
compare_results_df['portfolio'] = total_value
std_comp_df = compare_results_df / compare_results_df.iloc[0]
if graph:
plt.figure()
std_comp_df.plot()
n_train = 20
data_df = pd.read_pickle('../../data/data_df.pkl')
u_data_df = data_df.unstack()
n_samples = u_data_df.shape[0]
n_test = 504
first_test_start = n_samples - 4 * n_test
test_starts = np.linspace(first_test_start, n_samples - n_test, n_train).astype(np.int)
train_test_pairs = [(u_data_df.iloc[:test_start], u_data_df.iloc[test_start:test_start + n_test])
for test_start in test_starts]
for data_train_df, data_test_df in train_test_pairs:
print(data_train_df.shape)
print(data_test_df.shape)
print('-'*100)
n_train = 20
# Set the train/test pairs
n_test = 504
data_df = pd.read_pickle('../../data/data_df.pkl')
u_data_df = data_df.unstack()
n_samples = u_data_df.shape[0]
first_test_start = n_samples - 4 * n_test
test_starts = np.linspace(first_test_start, n_samples - n_test, n_train).astype(np.int)
train_test_pairs = [(u_data_df.iloc[:test_start], u_data_df.iloc[test_start:test_start + n_test])
for test_start in test_starts]
STARTING_DAYS_AHEAD = 252
POSSIBLE_FRACTIONS = [0.0, 1.0]
symbol = 'SPY'
print('Testing with:\n{}'.format(symbol))
# Results holder
results_df = pd.DataFrame(columns=['sharpe',
'sharpe_bench',
'cum_ret',
'cum_ret_bench',
'sharpe_quotient',
'cum_ret_quotient',])
for data_train_all_df, data_test_all_df in train_test_pairs:
# Set the datasets to use
total_data_train_df = data_train_all_df.stack()
total_data_test_df = data_test_all_df.stack()
data_train_df = data_train_all_df[symbol]
data_test_df = data_test_all_df[symbol]
data_in_df = data_train_df
# Set the environment
env, num_states, num_actions = sim.initialize_env(total_data_train_df,
symbol,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
n_levels=10)
# Create the Agent
agent = Agent(num_states=num_states,
num_actions=num_actions,
random_actions_rate=0.98,
random_actions_decrease=0.9999,
dyna_iterations=0,
name='Agent_0')
# Train
print('Training: {}'.format(symbol))
n_epochs = 7
for i in range(n_epochs):
tic = time()
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_train_df,
symbol,
agent,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agent.random_actions_rate))
show_results([results_list], data_in_df, symbol)
# Test
print('-'*100)
print('Testing: {}'.format(symbol))
TEST_DAYS_AHEAD = 20
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
symbol,
agent,
learn=False,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
# Show results
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agent.random_actions_rate))
show_results([results_list], data_test_df, symbol, graph=False)
print('-'*100)
# Save some results
total_value = results_list.sum(axis=1)
sharpe, cumret, _, _, _ = value_eval(pd.DataFrame(total_value))
sharpe_bench, cumret_bench, _, _, _ = value_eval(pd.DataFrame(data_test_df['Close'].iloc[TEST_DAYS_AHEAD:]))
sharpe_q = sharpe / sharpe_bench
cumret_q = cumret / cumret_bench
results_df = results_df.append(pd.DataFrame({symbol: {'sharpe': sharpe,
'sharpe_bench': sharpe_bench,
'cum_ret': cumret,
'cum_ret_bench': cumret_bench,
'sharpe_quotient': sharpe_q,
'cum_ret_quotient': cumret_q}}).T)
results_df
results_df['sharpe_delta'] = results_df['sharpe'] - results_df['sharpe_bench']
results_df
delta = results_df['sharpe_delta']
pd.DataFrame(delta.describe())
delta.hist(bins=20)
results_df['sharpe_delta'].plot()
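# A compact summary of the experiment above (uses results_df): in how many of
# the test windows did the learned policy beat buy-and-hold on Sharpe ratio?
wins = (results_df['sharpe_delta'] > 0).sum()
print('{} of {} test windows beat the benchmark'.format(wins, len(results_df)))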
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: Relevant Parameters
Step3: Relevant Constraints
Step4: Influence on Meshes (potentials, volumes)
Step5: Influence on Radial Velocities
Step6: Influence on Light Curves (fluxes)
| <ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.0,<2.1"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
print b.get(qualifier='ecc')
print b.get(qualifier='ecosw', context='component')
print b.get(qualifier='esinw', context='component')
b.filter(qualifier='pot', context='constraint')
print b.get(qualifier='pot', component='primary', context='constraint')
print b.get(qualifier='ecosw', context='constraint')
print b.get(qualifier='esinw', context='constraint')
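# A quick numerical check of the constraints above (a sketch; assumes
# b.get_value resolves these qualifiers uniquely): ecc should equal
# sqrt(ecosw**2 + esinw**2) for the orbit.
ecosw = b.get_value(qualifier='ecosw', context='component')
esinw = b.get_value(qualifier='esinw', context='component')
print np.sqrt(ecosw**2 + esinw**2)
print b.get_value(qualifier='ecc', context='component')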
b.add_dataset('mesh', times=np.linspace(0,1,11))
b.set_value('ecc', 0.2)
b.run_compute()
print b['pot@primary@model']
ax, artists = b['mesh01'].plot(x='times', y='pot')
print b['rpole@primary@model']
axs, artists = b['mesh01'].plot(x='times', y='rpole')
print b['volume@primary@model']
ax, artists = b['mesh01'].plot(x='times', y='volume')
b.remove_dataset('mesh01')
b.add_dataset('rv', times=np.linspace(0,1,51))
b.run_compute()
axs, artists = b.plot()
b.remove_dataset('rv01')
b.add_dataset('lc', times=np.linspace(0,1,51))
b.run_compute()
axs, artists = b.plot()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's play with conditional probability.
Step2: P(F) is just the probability of being 30 in this data set
Step3: And P(E) is the overall probability of buying something, regardless of your age
Step4: If E and F were independent, then we would expect P(E|F) to be about the same as P(E). But they're not; P(E) is 0.45, and P(E|F) is 0.3. So that tells us that E and F are dependent (which we know they are in this example).
Step5: P(E,F) is different from P(E|F). P(E,F) would be the probability of both being in your 30's and buying something, out of the total population - not just the population of people in their 30's
Step6: If E and F were independent, we would have P(E,F) = P(E)P(F), and the two values are pretty close in this example. But because E and F are actually dependent on each other, and because of the randomness in the data we're working with, they're not quite the same.
| <ASSISTANT_TASK:>
Python Code:
from numpy import random
random.seed(0)
totals = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0}
purchases = {20:0, 30:0, 40:0, 50:0, 60:0, 70:0}
totalPurchases = 0
for _ in range(100000):
ageDecade = random.choice([20, 30, 40, 50, 60, 70])
purchaseProbability = float(ageDecade) / 100.0
totals[ageDecade] += 1
if (random.random() < purchaseProbability):
totalPurchases += 1
purchases[ageDecade] += 1
totals
purchases
totalPurchases
PEF = float(purchases[30]) / float(totals[30])
print('P(purchase | 30s): ' + str(PEF))
PF = float(totals[30]) / 100000.0
print("P(30's): " + str(PF))
PE = float(totalPurchases) / 100000.0
print("P(Purchase):" + str(PE))
print("P(30's)P(Purchase)" + str(PE * PF))
print("P(30's, Purchase)" + str(float(purchases[30]) / 100000.0))
print((purchases[30] / 100000.0) / PF)
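# A short extension using the quantities above: Bayes' rule gives
# P(30's | purchase) = P(purchase | 30's) * P(30's) / P(purchase),
# which should agree with counting directly among the purchasers.
print("P(30's | Purchase) via Bayes: " + str(PEF * PF / PE))
print("P(30's | Purchase) by counting: " + str(float(purchases[30]) / totalPurchases))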
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Linear Modelling
Step2: Logit model
Step3: Results indicate there is some correlation between two of the independent variables
Step4: The data is clustered around the zeros. Let's try a log transformation.
Step5: Regression Model
Step6: The log transformations helped increase the fit from an R-squared of ~0.05 to ~0.20.
Step7: Without 'fs', we lost some information but not much
| <ASSISTANT_TASK:>
Python Code:
# Imports used below (the notebook's original import cell is not shown here).
import pickle
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf
from statsmodels.stats.outliers_influence import variance_inflation_factor as vif
# opens raw data
with open ('../data/clean_data/df_profile', 'rb') as fp:
df = pickle.load(fp)
# creates copy with non-missing observations
df_active = df.loc[df.status != 'inactive', ].copy()
# examines status of users
status = df_active['status'].value_counts()
# plots chart
status.plot.pie(autopct='%.f', figsize=(5,5))
plt.ylabel('')
plt.show()
# displays correlation matrix
df_active.corr()
# creates design_matrix
X = df_active
X['intercept'] = 1
# displays variance inflation factor
vif_results = pd.DataFrame()
vif_results['VIF Factor'] = [vif(X.values, i) for i in range(X.shape[1])]
vif_results['features'] = X.columns
vif_results
sns.pairplot(data=df_active, y_vars=['st'], x_vars=['fa', 'fs', 'age'])
plt.show()
# takes log transformation
df_active['st'] = np.log(df_active['st']+1)
df_active['fa'] = np.log(df_active['fa']+1)
df_active['fs'] = np.log(df_active['fs']+1)
sns.pairplot(data=df_active, y_vars=['st'], x_vars=['fa', 'fs', 'age'])
plt.show()
# runs OLS regression
formula = 'st ~ fa + fs + cc + age'
reg = smf.ols(data=df_active, formula=formula).fit()
print(reg.summary())
# runs OLS regression
formula = 'st ~ fa + cc + age'
reg = smf.ols(data=df_active, formula=formula).fit()
print(reg.summary())
def graph(formula, x_range):
y = np.array(x_range)
x = formula(y)
plt.plot(y,x)
graph(lambda x : (np.exp(reg.params[0]+reg.params[1]*(np.log(x-1)))-1),
range(2,100,1))
graph(lambda x : (np.exp(reg.params[0]+reg.params[1]*(np.log(x-1))+reg.params[2])-1),
range(2,100,1))
plt.show()
ages = [0, 1, 5, 10, 15]
for age in ages:
graph(lambda x : (np.exp(reg.params[0]+reg.params[1]*(np.log(x-1))+reg.params[3]*age)-1),
range(2,100,1))
plt.show()
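# A small usage sketch (assumes `reg` and `df_active` from above): the fitted
# formula-based statsmodels result can score new rows directly from a
# DataFrame, returning predictions on the log scale used in the model.
example = df_active[['fa', 'cc', 'age']].head()
print(reg.predict(example))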
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Fun with Turtles
Step2: Looking at the documentation for turtles
Step3: Using Range
Step4: Lots of turtles
Step5: Math
Step6: Random
Step7: Chapter 4 Exercises
Step8: Cool sample by
| <ASSISTANT_TASK:>
Python Code:
spins = input("How many times did you spin? (Enter a negative number for counter-clockwise spins) ")
# Need to ensure the direction of the spin is always correct.
# This works because of the definition of the modulo operator and what happens with
# negative numbers. Keep this in mind for the assignment for chapter 3.
degrees = (float(spins) * 360) % 360
print("You are facing", degrees, "degrees relative to north")
import turtle
wn = turtle.Screen() # creates a graphics window
alex = turtle.Turtle() # create a turtle named alex
alex.speed(1)
alex.shape('turtle')
for i in [0,1,2,3,4,5]:
alex.forward(150) # tell alex to move forward by 150 units
alex.left(85) # turn by 90 degrees
alex.forward(75)
## This won't run as expected with the notebook. Wait a moment for the window to be created.
wn.exitonclick()
# Copied from the documentation example. This is bad practice to do import * DO NOT DO IT!
from turtle import *
color('red', 'yellow')
begin_fill()
while True:
forward(200)
left(170)
if abs(pos()) < 1:
break
end_fill()
done()
for number in range(6, 0, -1):
print("I have", number, "cookies. Iím going to eat one.")
print('I ate all my cookies')
import turtle
wn = turtle.Screen()
wn.bgcolor("lightgreen")
tess = turtle.Turtle()
tess.color("blue")
tess.shape("turtle")
jim = turtle.Turtle()
jim.color("green")
jim.shape("turtle")
carl = turtle.Turtle()
carl.color("red")
carl.shape("turtle")
tess.up()
carl.up()
jim.up()
# Keep in mind for today's studio
for size in range(5, 60, 2): # start with size = 5 and grow by 2
tess.stamp() # leave an impression on the canvas
carl.stamp()
jim.stamp()
carl.forward(size + 10)
jim.forward(size)
tess.forward(size) # move tess along
carl.right(90)
jim.left(24)
tess.right(24) # and turn her
wn.exitonclick()
from math import sqrt
print(sqrt(24))
print(sqrt(25))
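# Note: math.sqrt is only defined for non-negative numbers, so the next call
# raises a ValueError ("math domain error").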
print(sqrt(-2))
import random
random.seed(5)
print(random.randint(0,10))
print(random.randint(0,10))
print(random.randint(0,10), '\n')
random.seed(5)
print(random.randint(0,10))
print(random.randint(0,10))
print(random.randint(0,10), '\n')
random.seed(5)
print(random.randint(0,10))
print(random.randint(0,10))
print(random.randint(0,10), '\n')
random.seed(5)
print(random.randint(0,10))
print(random.randint(0,10))
print(random.randint(0,10), '\n')
bottles_of_beer = 99
for bottle_number in range(bottles_of_beer, 0, -1):
print(bottle_number, "Bottles of Beer on the Wall")
print("Take one down pass it around")
import turtle
import random
wn = turtle.Screen()
anaise = turtle.Turtle()
hour = 1
lines = 1
angle = 1
anaise.speed(0)
# Set the color mode so that color() accepts the 0-255 integer RGB values used below.
wn.colormode(255)
#change starting point of line randomly
while lines < 200:
anaise.goto(random.randrange(50), random.randrange(50))
anaise.down()
angle = (random.randrange(360))
anaise.color(random.randrange(255),random.randrange(255),random.randrange(255))
anaise.pensize(random.randrange(11))
anaise.right(angle)
anaise.forward(random.randrange(100))
anaise.up()
#Count the number of times the loop occurs
lines = lines + 1
wn.exitonclick()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Contents
Step2: [Sketch
Step3: [Sketch
Step4: Basic measures
Step5: Example
Step6: Varying the classification threshold
Step7: Precision-recall curve
Step8: ROC and AUC
Step9: F-measure
Step10: Multiclass classification
| <ASSISTANT_TASK:>
Python Code:
# Učitaj osnovne biblioteke...
import scipy as sp
import sklearn
import pandas as pd
%pylab inline
y_test = sp.random.choice((0,1), size=10); y_test
y_pred = sp.random.choice((0,1), size=10); y_pred
def cm(y_true, y_pred):
tp = 0
fp = 0
fn = 0
tn = 0
for (t, p) in zip(y_true, y_pred):
if t == 0 and p == 1: fp += 1
elif t == 1 and p == 0: fn += 1
elif t == 1 and p == 1: tp += 1
else: tn += 1
return sp.array([[tp, fp], [fn, tn]])
cm(y_test, y_pred)
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_pred)
confusion_matrix(y_test, y_pred, labels=[1,0])
cm(y_test, y_pred)
from sklearn.metrics import accuracy_score, precision_score, recall_score
accuracy_score(y_test, y_pred)
precision_score(y_test, y_pred)
recall_score(y_test, y_pred)
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import Imputer
titanic_df = pd.read_csv("../data/titanic-train.csv")
titanic_df.drop(['PassengerId'], axis=1, inplace=True)
titanic_df1 = titanic_df[['Pclass', 'Sex', 'Age','Survived']]
titanic_X = titanic_df[['Pclass', 'Sex', 'Age']].as_matrix()
titanic_y = titanic_df['Survived'].as_matrix()
le = LabelEncoder()
titanic_X[:,1] = le.fit_transform(titanic_X[:,1])
imp = Imputer(missing_values='NaN', strategy='mean', axis=0)
titanic_X = imp.fit_transform(titanic_X)
titanic_X
titanic_y
shape(titanic_X), shape(titanic_y)
from sklearn import cross_validation
X_train, X_test, y_train, y_test = cross_validation.train_test_split(titanic_X, titanic_y, train_size=2.0/3, random_state=42)
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(C=1)
lr.fit(X_train, y_train)
lr.predict(X_train)
y_pred_lr = lr.predict(X_test); y_pred_lr
y_test
cm(y_test, y_pred_lr)
accuracy_score(y_test, y_pred_lr)
lr.score(X_test, y_test)
lr.score(X_train, y_train)
precision_score(y_test, y_pred_lr, pos_label=1)
recall_score(y_test, y_pred_lr, pos_label=1)
from sklearn.svm import SVC
svm = SVC(C=1)
svm.fit(X_train, y_train)
svm.score(X_test, y_test)
y_pred_svm = svm.predict(X_test); y_pred_svm
cm(y_test, y_pred_svm)
precision_score(y_test, y_pred_svm, pos_label=1)
recall_score(y_test, y_pred_svm, pos_label=1)
y_scores_lr = lr.predict_proba(X_test)[:,1]; y_scores_lr
print precision_score(y_test, y_pred_lr)
print recall_score(y_test, y_pred_lr)
threshold = 0.4
y_pred_lr_tweaked = map(lambda s : 1 if s > threshold else 0, y_scores_lr)
print y_pred_lr_tweaked
print precision_score(y_test, y_pred_lr_tweaked)
print recall_score(y_test, y_pred_lr_tweaked)
from sklearn.metrics import precision_recall_curve
pr, re, _ = precision_recall_curve(y_test, y_scores_lr, pos_label=1)
pr
re
plt.plot(re, pr)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.show()
from sklearn.metrics import average_precision_score
average_precision_score(y_test, y_scores_lr)
y_scores_svm = svm.decision_function(X_test)[:,0]
print y_scores_svm
pr_lr, re_lr, _ = precision_recall_curve(y_test, y_scores_lr, pos_label=1)
pr_svm, re_svm, _ = precision_recall_curve(y_test, y_scores_svm, pos_label=1)
plt.plot(re_lr, pr_lr, label='LR')
plt.plot(re_svm, pr_svm, label='SVM')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.legend()
plt.show()
print average_precision_score(y_test, y_scores_lr)
print average_precision_score(y_test, y_scores_svm)
from sklearn.metrics import roc_curve, auc
fpr_lr, tpr_lr, _ = roc_curve(y_test, y_scores_lr)
roc_auc_lr = auc(fpr_lr, tpr_lr)
fpr_svm, tpr_svm, _ = roc_curve(y_test, y_scores_svm)
roc_auc_svm = auc(fpr_svm, tpr_svm)
plt.plot(fpr_lr, tpr_lr, label='LR ROC curve (area = %0.2f)' % roc_auc_lr)
plt.plot(fpr_svm, tpr_svm, label='SVM ROC curve (area = %0.2f)' % roc_auc_svm)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlabel('FPR')
plt.ylabel('TPR')
plt.legend(loc='lower right')
plt.show()
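# The same AUC values can be computed directly from the scores with
# roc_auc_score (a sketch), without building the full curve first.
from sklearn.metrics import roc_auc_score
print roc_auc_score(y_test, y_scores_lr)
print roc_auc_score(y_test, y_scores_svm)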
def f_beta(p, r, beta):
return ((1 + beta**2) * p * r) / (beta**2 * p + r)
f_beta(0.5, 0.9, 1)
f_beta(0.5, 0.9, 0.5)
f_beta(0.5, 0.9, 2)
(0.5 + 0.9) / 2
sqrt(0.5 * 0.9)
2/(1/0.5 + 1/0.9)
r = 0.5
xs = sp.linspace(0, 1)
plt.plot(xs, (xs + r)/2, label='aritm')
plt.plot(xs, sp.sqrt(xs*r), label='geom')
plt.plot(xs, 2/(1/xs + 1/r), label='harm')
plt.legend(loc='lower right')
plt.show()
data = sp.loadtxt("path/do/glass.data", delimiter=",", skiprows=1)
print data
shape(data)
glass_X, glass_y = data[:,1:10], data[:,10]
from sklearn import cross_validation
X_train, X_test, y_train, y_test = cross_validation.train_test_split(glass_X, glass_y, train_size=2.0/3, random_state=42)
X_train.shape, X_test.shape
from sklearn.svm import SVC
m = SVC() # SVC(C=1, gamma='auto')
m.fit(X_train, y_train)
m.classes_
y_pred = m.predict(X_test); y_pred
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_pred)
from sklearn.metrics import f1_score
f1_score(y_test, y_pred, average=None)
sp.mean(_)
f1_score(y_test, y_pred, average='macro')
f1_score(y_test, y_pred, average='micro')
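# For single-label multiclass problems the micro-averaged F1 above equals plain
# accuracy, which makes a handy sanity check on the numbers.
from sklearn.metrics import accuracy_score
print accuracy_score(y_test, y_pred)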
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Perplexity on Each Dataset
Step2: Loss vs. Epoch
Step3: Perplexity vs. Epoch
Step4: Generations
Step5: BLEU Analysis
Step6: N-pairs BLEU Analysis
Step7: Alignment Analysis
| <ASSISTANT_TASK:>
Python Code:
report_files = ["/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_bow_200_512_04drb/encdec_noing10_bow_200_512_04drb.json"]
log_files = ["/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_200_512_04drb/encdec_noing10_200_512_04drb_logs.json", "/Users/bking/IdeaProjects/LanguageModelRNN/experiment_results/encdec_noing10_bow_200_512_04drb/encdec_noing10_bow_200_512_04drb_logs.json"]
reports = []
logs = []
import json
import matplotlib.pyplot as plt
import numpy as np
for report_file in report_files:
with open(report_file) as f:
reports.append((report_file.split('/')[-1].split('.json')[0], json.loads(f.read())))
for log_file in log_files:
with open(log_file) as f:
logs.append((log_file.split('/')[-1].split('.json')[0], json.loads(f.read())))
for report_name, report in reports:
print '\n', report_name, '\n'
print 'Encoder: \n', report['architecture']['encoder']
print 'Decoder: \n', report['architecture']['decoder']
%matplotlib inline
from IPython.display import HTML, display
def display_table(data):
display(HTML(
u'<table><tr>{}</tr></table>'.format(
u'</tr><tr>'.join(
u'<td>{}</td>'.format('</td><td>'.join(unicode(_) for _ in row)) for row in data)
)
))
def bar_chart(data):
n_groups = len(data)
train_perps = [d[1] for d in data]
valid_perps = [d[2] for d in data]
test_perps = [d[3] for d in data]
fig, ax = plt.subplots(figsize=(10,8))
index = np.arange(n_groups)
bar_width = 0.3
opacity = 0.4
error_config = {'ecolor': '0.3'}
train_bars = plt.bar(index, train_perps, bar_width,
alpha=opacity,
color='b',
error_kw=error_config,
label='Training Perplexity')
valid_bars = plt.bar(index + bar_width, valid_perps, bar_width,
alpha=opacity,
color='r',
error_kw=error_config,
label='Valid Perplexity')
test_bars = plt.bar(index + 2*bar_width, test_perps, bar_width,
alpha=opacity,
color='g',
error_kw=error_config,
label='Test Perplexity')
plt.xlabel('Model')
plt.ylabel('Scores')
plt.title('Perplexity by Model and Dataset')
plt.xticks(index + bar_width / 3, [d[0] for d in data])
plt.legend()
plt.tight_layout()
plt.show()
data = [['<b>Model</b>', '<b>Train Perplexity</b>', '<b>Valid Perplexity</b>', '<b>Test Perplexity</b>']]
for rname, report in reports:
data.append([rname, report['train_perplexity'], report['valid_perplexity'], report['test_perplexity']])
display_table(data)
bar_chart(data[1:])
%matplotlib inline
plt.figure(figsize=(10, 8))
for rname, l in logs:
for k in l.keys():
plt.plot(l[k][0], l[k][1], label=str(k) + ' ' + rname + ' (train)')
plt.plot(l[k][0], l[k][2], label=str(k) + ' ' + rname + ' (valid)')
plt.title('Loss v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()
plt.show()
%matplotlib inline
plt.figure(figsize=(10, 8))
for rname, l in logs:
for k in l.keys():
plt.plot(l[k][0], l[k][3], label=str(k) + ' ' + rname + ' (train)')
plt.plot(l[k][0], l[k][4], label=str(k) + ' ' + rname + ' (valid)')
plt.title('Perplexity v. Epoch')
plt.xlabel('Epoch')
plt.ylabel('Perplexity')
plt.legend()
plt.show()
def print_sample(sample, best_bleu=None):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
print('Input: '+ enc_input + '\n')
print('Gend: ' + sample['generated'] + '\n')
print('True: ' + gold + '\n')
if best_bleu is not None:
cbm = ' '.join([w for w in best_bleu['best_match'].split(' ') if w != '<mask>'])
print('Closest BLEU Match: ' + cbm + '\n')
print('Closest BLEU Score: ' + str(best_bleu['best_score']) + '\n')
print('\n')
def display_sample(samples, best_bleu=False):
for enc_input in samples:
data = []
for rname, sample in samples[enc_input]:
gold = ' '.join([w for w in sample['gold'].split(' ') if w != '<mask>'])
data.append([rname, '<b>Generated: </b>' + sample['generated']])
if best_bleu:
cbm = ' '.join([w for w in sample['best_match'].split(' ') if w != '<mask>'])
data.append([rname, '<b>Closest BLEU Match: </b>' + cbm + ' (Score: ' + str(sample['best_score']) + ')'])
data.insert(0, ['<u><b>' + enc_input + '</b></u>', '<b>True: ' + gold+ '</b>'])
display_table(data)
def process_samples(samples):
# consolidate samples with identical inputs
result = {}
for rname, t_samples, t_cbms in samples:
for i, sample in enumerate(t_samples):
enc_input = ' '.join([w for w in sample['encoder_input'].split(' ') if w != '<pad>'])
if t_cbms is not None:
sample.update(t_cbms[i])
if enc_input in result:
result[enc_input].append((rname, sample))
else:
result[enc_input] = [(rname, sample)]
return result
samples = process_samples([(rname, r['train_samples'], r['best_bleu_matches_train'] if 'best_bleu_matches_train' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_train' in reports[1][1])
samples = process_samples([(rname, r['valid_samples'], r['best_bleu_matches_valid'] if 'best_bleu_matches_valid' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_valid' in reports[1][1])
samples = process_samples([(rname, r['test_samples'], r['best_bleu_matches_test'] if 'best_bleu_matches_test' in r else None) for (rname, r) in reports])
display_sample(samples, best_bleu='best_bleu_matches_test' in reports[1][1])
def print_bleu(blue_structs):
data= [['<b>Model</b>', '<b>Overall Score</b>','<b>1-gram Score</b>','<b>2-gram Score</b>','<b>3-gram Score</b>','<b>4-gram Score</b>']]
for rname, blue_struct in blue_structs:
data.append([rname, blue_struct['score'], blue_struct['components']['1'], blue_struct['components']['2'], blue_struct['components']['3'], blue_struct['components']['4']])
display_table(data)
# Training Set BLEU Scores
print_bleu([(rname, report['train_bleu']) for (rname, report) in reports])
# Validation Set BLEU Scores
print_bleu([(rname, report['valid_bleu']) for (rname, report) in reports])
# Test Set BLEU Scores
print_bleu([(rname, report['test_bleu']) for (rname, report) in reports])
# All Data BLEU Scores
print_bleu([(rname, report['combined_bleu']) for (rname, report) in reports])
# Training Set BLEU n-pairs Scores
print_bleu([(rname, report['n_pairs_bleu_train']) for (rname, report) in reports])
# Validation Set n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_valid']) for (rname, report) in reports])
# Test Set n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_test']) for (rname, report) in reports])
# Combined n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_all']) for (rname, report) in reports])
# Ground Truth n-pairs BLEU Scores
print_bleu([(rname, report['n_pairs_bleu_gold']) for (rname, report) in reports])
def print_align(reports):
data= [['<b>Model</b>', '<b>Average (Train) Generated Score</b>','<b>Average (Valid) Generated Score</b>','<b>Average (Test) Generated Score</b>','<b>Average (All) Generated Score</b>', '<b>Average (Gold) Score</b>']]
for rname, report in reports:
data.append([rname, report['average_alignment_train'], report['average_alignment_valid'], report['average_alignment_test'], report['average_alignment_all'], report['average_alignment_gold']])
display_table(data)
print_align(reports)
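# A one-line summary across the runs (uses `reports` from above): the model
# with the lowest held-out perplexity.
best_name, best_report = min(reports, key=lambda r: r[1]['test_perplexity'])
print 'Lowest test perplexity: {} ({})'.format(best_name, best_report['test_perplexity'])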
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem 1) Examine the Training Data
Step2: Problem 1b
Step3: We will now ignore everything in the test set until we have fully optimized the machine learning model.
Step4: Problem 2b
Step5: A nice property of RF, relative to $k$NN, is that RF naturally provides an estimate of the most important features in a model.
Step6: write your answer here
Step7: Problem 3b
Step8: Problem 3c
Step9: Problem 3d
Step10: write your answer here
Step11: write your answer here
Step12: Problem 5b
Step13: Problem 5c
Step14: write your answer here
Step15: Problem 5e
Step16: Problem 6) Classify New Data
Step17: Problem 6a
Step18: Problem 6b
Step19: Problem 6c
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
sdss_df = pd.read_hdf("sdss_training_set.h5")
sns.pairplot(sdss_df, hue = 'class', diag_kind = 'hist')
from sklearn.model_selection import train_test_split
rs = 1851
# complete
X = # complete
y = # complete
train_X, test_X, train_y, test_y = # complete
from sklearn.neighbors import KNeighborsClassifier
knn_clf = # complete
# complete
from sklearn.ensemble import RandomForestClassifier
rf_clf = # complete
# complete
feat_str = ',\n'.join(['{}'.format(feat) for feat in np.array(feats)[np.argsort(rf_clf.feature_importances_)[::-1]]])
print('From most to least important: \n{}'.format(feat_str))
from sklearn.metrics import accuracy_score
phot_y = # complete
# complete
# complete
# complete
print("The baseline FoM = {:.4f}".format( # complete
from sklearn.model_selection import cross_val_score
knn_cv = cross_val_score( # complete
print('The kNN model FoM = {:.4f} +/- {:.4f}'.format( # complete
rf_cv = cross_val_score( # complete
print('The RF model FoM = {:.4f} +/- {:.4f}'.format( # complete
for k in [1,10,100]:
# complete
print('With k = {:d}, the kNN FoM = {:.4f} +/- {:.4f}'.format( # complete
for ntree in [1,10,30,100,300]:
# complete
print('With {:d} trees the FoM = {:.4f} +/- {:.4f}'.format( # complete
phot_y = # complete
# complete
# complete
# complete
print("The baseline FoM = {:.4f}".format( # complete
rf_clf = RandomForestClassifier( # complete
# complete
# complete
print("The RF model has FoM = {:.4f}".format( # complete
from sklearn.metrics import confusion_matrix
print(confusion_matrix( # complete
from sklearn.metrics import roc_curve
test_y_int = # complete
# complete
test_preds_proba = rf_clf.predict_proba( # complete
fpr, tpr, thresh = roc_curve( # complete
fig, ax = plt.subplots()
ax.plot( # complete
ax.set_xlabel('FPR')
ax.set_ylabel('TPR')
tpr_99_thresh = # complete
print('This model requires a classification threshold of {:.4f}'.format(tpr_99_thresh))
fpr_at_tpr_99 = # complete
print('This model misclassifies {:.2f}% of galaxies'.format(fpr_at_tpr_99*100))
new_data_df = pd.read_hdf("blind_test_set.h5")
new_X = # complete
new_y = # complete
new_preds = # complete
print("The model has an accuracy of {:.4f}".format( # complete
from sklearn.model_selection import GridSearchCV
# complete
print('The best model has {}'.format( # complete
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: DataFlowKernel
Step2: Hello World App
Step3: Futures
Step4: Data Dependencies
Step5: Parallelism
| <ASSISTANT_TASK:>
Python Code:
# Import Parsl
import parsl
from parsl import *
print(parsl.__version__)
# Let's create a pool of threads to execute our functions
workers = ThreadPoolExecutor(max_workers=4)
# We pass the workers to the DataFlowKernel which will execute our Apps over the workers.
dfk = DataFlowKernel(executors=[workers])
# Here we define our first App function, a simple python app that returns a string
@App('python', dfk)
def hello ():
return 'Hello World!'
app_future = hello()
# Check status
print("Status: ", app_future.done())
# Get result
print("Result: ", app_future.result())
@App('python', dfk)
def pi(total):
# App functions have to import modules they will use.
import random
# Set the size of the box (edge length) in which we drop random points
edge_length = 10000
center = edge_length / 2
c2 = center ** 2
count = 0
for i in range(total):
# Drop a random point in the box.
x,y = random.randint(1, edge_length),random.randint(1, edge_length)
# Count points within the circle
if (x-center)**2 + (y-center)**2 < c2:
count += 1
return (count*4/total)
@App('python', dfk)
def avg_three(a,b,c):
return (a+b+c)/3
a, b, c = pi(10**6), pi(10**6), pi(10**6)
avg_pi = avg_three(a, b, c)
# Print the results
print("A: {0:.5f} B: {1:.5f} C: {2:.5f}".format(a.result(), b.result(), c.result()))
print("Average: {0:.5f}".format(avg_pi.result()))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Explanation
Step2: What loss functions best recover the curve $f$ from our dataset?
Step3: Test recovery of $f$.
| <ASSISTANT_TASK:>
Python Code:
#@title Default title text
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import tensorflow.compat.v2 as tf
import matplotlib.pyplot as plt
tf.enable_v2_behavior()
# Mode as a function of observation
def f(s):
return np.sin(s*2*np.pi/100.)/2.
N = 100
s = np.random.uniform(-100, 100, size=N) # observations between -100 and 100
a = np.random.uniform(-1, 1, size=N) # Actions between -1 and 1
P = 0.2
y = -100*np.abs(a - f(s))**P
y /= np.max(np.abs(y))
print(np.max(y))
print(np.min(y))
plt.scatter(s, a, c=y)
plt.plot(np.sort(s), f(np.sort(s)))
plt.plot()
data = (s[:, np.newaxis], a[:, np.newaxis], y[:, np.newaxis])
s_features = tf.constant(np.linspace(-100, 100, 50)[np.newaxis, :], dtype=tf.float32)
hidden_widths = [1000, 500]
model = tf.keras.Sequential(
[tf.keras.layers.Lambda(lambda x: tf.exp(-(x - s_features)**2/2000))]
+ [tf.keras.layers.Dense(w, activation='relu') for w in hidden_widths]
+ [tf.keras.layers.Dense(1, activation=None)]
)
# loss A
# ||h(s) - a|^p - R|^q
# This is danabo's mode regression loss
p = 0.1
q = 1/P
# p = q = 2.0
def loss(model, s, a, y):
reg = tf.linalg.global_norm(model.trainable_variables)
return tf.reduce_mean(tf.abs(-tf.abs(model(s)-a)**p - y)**q) + 0.003*reg
# loss B
# |h(s) - a|^p * exp(R/tau)
# This is one of Dale's surrogate loss, specifically dot-product loss.
p = 1.0
tau = 1/10.
def loss(model, s, a, y):
reg = tf.linalg.global_norm(model.trainable_variables)
target = tf.cast(tf.exp(y/tau), tf.float32)
return tf.reduce_mean(tf.abs(model(s)-a)**p * target) + 0.0005*reg
np.var(s)
# Initialize model
device_string = '/device:GPU:0'
# device_string = '/device:TPU:0'
# device_string = '' # CPU
with tf.device(device_string):
model(data[0])
print(loss(model, *data).numpy()) # Initialize model
optimizer = tf.keras.optimizers.Adam(learning_rate=0.0001)
def sample_batch(batch_size, *args):
assert args
idx = np.random.choice(args[0].shape[0], batch_size)
return tuple([arg[idx] for arg in args])
for i in range(10000):
# batch = sample_batch(100, *data)
batch = data
optimizer.minimize(lambda: loss(model, *batch), model.trainable_variables)
if i % 100 == 0:
print(i, '\t', loss(model, *data).numpy())
X = np.linspace(-100, 100, 200)[:, np.newaxis]
Y = model(X).numpy()
plt.plot(X, Y)
plt.show()
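# A simple recovery metric (uses X, Y, and f from above): mean absolute error
# between the learned curve and the true mode function.
print(np.mean(np.abs(Y[:, 0] - f(X[:, 0]))))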
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Unsupervised Clustering
Step2: First use word count to try to gauge article content
Step3: The most common words are "the", "in", etc. which are not important. We'll use TF-IDF to fix this problem.
Step4: These words are much more relevant than the ones from before.
Step5: Build nearest neighbor model which returns closest articles by cosine distance
| <ASSISTANT_TASK:>
Python Code:
import os
from urllib import urlretrieve
import graphlab
URL = 'https://d396qusza40orc.cloudfront.net/phoenixassets/people_wiki.csv'
def get_data(filename='people_wiki.csv', url=URL, force_download=False):
Download and cache the Wikipedia people data
Parameters
----------
filename: string (optional)
location to save the data
url: string (optional)
force_download: bool (optional)
if True, force redownload of data
Returns
-------
data: graphlab SFrame. Similar to a pandas DataFrame,
but with capacity for faster analysis of larger data sets
if force_download or not os.path.exists(filename):
urlretrieve(url, filename)
sf = graphlab.SFrame(filename)
return sf
people = get_data()
people.head()
# Look at some individual article text
obama = people[people['name'] == 'Barack Obama']
obama['text']
clooney = people[people['name'] == 'George Clooney']
clooney['text']
obama['word_count'] = graphlab.text_analytics.count_words(obama['text'])
obama['word_count']
# stack the word_count field (dict) in order to create a table for sorting. This will show the most common words
obama_wordcount_table = obama[['word_count']].stack('word_count', new_column_name=['word', 'count'])
obama_wordcount_table.sort('count', ascending=False)
# add word count field
people['word_count'] = graphlab.text_analytics.count_words(people['text'])
#use graphlab's built in tfidf, which uses calculation above
people['tfidf'] = graphlab.text_analytics.tf_idf(people['word_count'])
people.head()
# look at tfidf for Obama
obama = people[people['name']=='Barack Obama']
obama[['tfidf']].stack('tfidf', new_column_name=['word', 'tfidf']).sort('tfidf', ascending=False)
# look at two more people
clinton = people[people['name'] == 'Bill Clinton']
beckham = people[people['name'] == 'David Beckham']
Tswift = people[people['name'] == 'Taylor Swift']
# calculate distance between two articles using cosine distance. Min is 0, max distance is 1.
graphlab.distances.cosine(obama['tfidf'][0], clinton['tfidf'][0])
graphlab.distances.cosine(obama['tfidf'][0], Tswift['tfidf'][0])
knn_model = graphlab.nearest_neighbors.create(people, features=['tfidf'], label='name')
# closest Wikipedia articles to Obama
knn_model.query(obama)
knn_model.query(Tswift)
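# The same model answers queries for any row; for example, the articles closest
# to George Clooney (using the `clooney` row selected earlier).
knn_model.query(clooney)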
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's check how many items are in the dictionary.
Step2: Yes, 7949 items is right. What about the question numbers? Is the qid numbering continuous or not? We might want to check the first and last 10 items.
Step3: The qids are not continuous, but that's OK. What about looking at just one question? How can we do it? Let's look at qid 1.
Step4: Yes, it's a dictionary, so you can use the usual dictionary functions. Check this out.
Step5: How can we figure out a question's length without tokenizing the question itself?
Step6: Make pickled data from the questions
Step7: Yes, now we can load the pickled data into a variable.
Step8: Yes, it took only a few seconds. I will save it, commit it, and push it to GitHub, so you can use the pickled data instead of converting questions.csv.
| <ASSISTANT_TASK:>
Python Code:
import csv
import gzip
import cPickle as pickle
from collections import defaultdict
import yaml
question_reader = csv.reader(open("../data/questions.csv"))
question_header = ["answer", "group", "category", "question", "pos_token"]
questions = defaultdict(dict)
for row in question_reader:
question = {}
row[-1] = yaml.load(row[-1].replace(": u'", ": '"))
qid = int(row.pop(0))
for index, item in enumerate(row):
question[question_header[index]] = item
questions[qid] = question
print len(questions)
print sorted(questions.keys())[:10]
print sorted(questions.keys())[-10:]
questions[1]
questions[1].keys()
questions[1]['answer']
questions[1]['pos_token']
questions[1]['pos_token'].keys()
questions[1]['pos_token'].values()
questions[1]['pos_token'].items()
max(questions[1]['pos_token'].keys())
with gzip.open("questions.pklz", "wb") as output:
pickle.dump(questions, output)
with gzip.open("questions.pklz", "rb") as fp:
questions_new = pickle.load(fp)
print len(questions_new)
print questions == questions
print questions == questions_new
questions_new[0] = 1
print questions == questions_new
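# Because pos_token maps token positions to tokens, a question's length is just
# its largest position plus one (assuming integer keys, as the max() call above
# suggests), so summary statistics need no re-tokenization.
lengths = [max(q['pos_token'].keys()) + 1 for q in questions.values() if q['pos_token']]
print sum(lengths) / float(len(lengths))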
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading an example geomodel
Step2: Basic plotting API
Step3: Geomodel plot
Step4: Interactive plot
Step5: Granular 3-D Visualization
Step6: Plotting individual surfaces
Step7: Plotting input data
Step8: Plot structured grids
Step9: Interactive Block with cross sections
Step10: Interactive Plotting
Step11: Now if you move the data the model updates!
Step12: Interactive Plotting
| <ASSISTANT_TASK:>
Python Code:
# Importing GemPy
import gempy as gp
# Embedding matplotlib figures in the notebooks
%matplotlib inline
# Importing auxiliary libraries
import numpy as np
import matplotlib.pyplot as plt
data_path = 'https://raw.githubusercontent.com/cgre-aachen/gempy_data/master/'
geo_model = gp.create_data('viz_3d',
[0, 2000, 0, 2000, 0, 1600],
[50, 50, 50],
path_o=data_path + "data/input_data/lisa_models/foliations" + str(7) + ".csv",
path_i=data_path + "data/input_data/lisa_models/interfaces" + str(7) + ".csv"
)
gp.map_stack_to_surfaces(
geo_model,
{"Fault_1": 'Fault_1', "Fault_2": 'Fault_2',
"Strat_Series": ('Sandstone', 'Siltstone', 'Shale', 'Sandstone_2', 'Schist', 'Gneiss')}
)
geo_model.set_is_fault(['Fault_1', 'Fault_2'])
geo_model.set_topography()
gp.set_interpolator(geo_model)
gp.compute_model(geo_model, compute_mesh=True)
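# Alongside the 3-D views below, the same solution can also be inspected with
# the 2-D plotting API (a sketch; gp.plot_2d is assumed to be available in this
# GemPy version).
gp.plot_2d(geo_model, show_data=True)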
gp.plot_3d(geo_model, show_surfaces=False, notebook=True)
gp.plot_3d(geo_model, notebook=True)
gp.plot_3d(geo_model, notebook=False)
geo_model.surfaces
gpv = gp.plot_3d(geo_model, show_data=False, show_results=False, plotter_type='background')
# Plotting all surfaces...
gpv.plot_surfaces()
# ... masked by topography
gpv.plot_topography()
# Just few surfaces
gpv.plot_surfaces(['Siltstone', 'Gneiss'])
gpv.plot_surfaces(["Fault_1"])
gpv.plot_surfaces(["Shale"], clear=False)
gpv.plot_surface_points()
gpv.plot_orientations()
mesh = gpv.surface_points_mesh
mesh
mesh.points[:, -1]
mesh.n_arrays
gpv.plot_structured_grid("scalar", series = 'Strat_Series')
gp.plot.plot_interactive_3d(geo_model, show_topography=True)
gpv = gp.plot_3d(geo_model, show_data=False, show_results=False, plotter_type='background')
gpv.plot_surface_points()
gpv.plot_orientations()
gpv.plot_surfaces()
gpv.toggle_live_updating()
gpv.toggle_live_updating()
gpv.live_updating = True
gpv.plot_surface_points()
gpv.plot_orientations()
geo_model.modify_surface_points(0, X=-100, plot_object=gpv)
geo_model.add_surface_points(-200, 1500, 600, 'Schist', plot_object=gpv)
geo_model.delete_surface_points(22, plot_object=gpv)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Batch Normalization using tf.layers.batch_normalization<a id="example_1"></a>
Step6: We'll use the following function to create convolutional layers in our network. They are very basic
Step8: Run the following cell, along with the earlier cells (to load the dataset and define the necessary functions).
Step10: With this many layers, it's going to take a lot of iterations for this network to learn. By the time you're done training these 800 batches, your final test and validation accuracies probably won't be much better than 10%. (It will be different each time, but will most likely be less than 15%.)
Step12: TODO
Step13: TODO
Step15: With batch normalization, you should now get an accuracy over 90%. Notice also the last line of the output
Step17: TODO
Step18: TODO
| <ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True, reshape=False)
DO NOT MODIFY THIS CELL
def fully_connected(prev_layer, num_units):
Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
# DO NOT MODIFY THIS CELL
def conv_layer(prev_layer, layer_depth):
"""Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', activation=tf.nn.relu)
return conv_layer
# DO NOT MODIFY THIS CELL
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
def fully_connected(prev_layer, num_units, is_training):
"""Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
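# Bias is disabled and the activation is applied only after batch normalization:
# the batch normalization layer adds its own learned offset (beta), so a
# separate bias term would be redundant.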
layer = tf.layers.dense(prev_layer, num_units, use_bias = False, activation=None)
layer = tf.layers.batch_normalization(layer, training = is_training)
layer = tf.nn.relu(layer)
return layer
def conv_layer(prev_layer, layer_depth, is_training):
"""Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
conv_layer = tf.layers.conv2d(prev_layer, layer_depth*4, 3, strides, 'same', use_bias = False, activation=None)
conv_layer = tf.layers.batch_normalization(conv_layer, training = is_training)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# training boolean
is_training = tf.placeholder(tf.bool)
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i, is_training)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100, is_training)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
# Tell TensorFlow to update the population statistics while training
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys, is_training: True})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys, is_training: False})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels,
is_training: False})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels,
is_training: False})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]],
is_training: False})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
def fully_connected(prev_layer, num_units):
"""Create a fully connected layer with the given layer as input and the given number of neurons.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param num_units: int
The size of the layer. That is, the number of units, nodes, or neurons.
:returns Tensor
A new fully connected layer
"""
layer = tf.layers.dense(prev_layer, num_units, activation=tf.nn.relu)
return layer
def conv_layer(prev_layer, layer_depth):
"""Create a convolutional layer with the given layer as input.
:param prev_layer: Tensor
The Tensor that acts as input into this layer
:param layer_depth: int
We'll set the strides and number of feature maps based on the layer's depth in the network.
This is *not* a good way to make a CNN, but it helps us create this example with very little code.
:returns Tensor
A new convolutional layer
"""
strides = 2 if layer_depth % 3 == 0 else 1
in_channels = prev_layer.get_shape().as_list()[3]
out_channels = layer_depth*4
weights = tf.Variable(
tf.truncated_normal([3, 3, in_channels, out_channels], stddev=0.05))
bias = tf.Variable(tf.zeros(out_channels))
conv_layer = tf.nn.conv2d(prev_layer, weights, strides=[1,strides, strides, 1], padding='SAME')
conv_layer = tf.nn.bias_add(conv_layer, bias)
conv_layer = tf.nn.relu(conv_layer)
return conv_layer
def train(num_batches, batch_size, learning_rate):
# Build placeholders for the input samples and labels
inputs = tf.placeholder(tf.float32, [None, 28, 28, 1])
labels = tf.placeholder(tf.float32, [None, 10])
# Feed the inputs into a series of 20 convolutional layers
layer = inputs
for layer_i in range(1, 20):
layer = conv_layer(layer, layer_i)
# Flatten the output from the convolutional layers
orig_shape = layer.get_shape().as_list()
layer = tf.reshape(layer, shape=[-1, orig_shape[1] * orig_shape[2] * orig_shape[3]])
# Add one fully connected layer
layer = fully_connected(layer, 100)
# Create the output layer with 1 node for each
logits = tf.layers.dense(layer, 10)
# Define loss and training operations
model_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=logits, labels=labels))
train_opt = tf.train.AdamOptimizer(learning_rate).minimize(model_loss)
# Create operations to test accuracy
correct_prediction = tf.equal(tf.argmax(logits,1), tf.argmax(labels,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# Train and test the network
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for batch_i in range(num_batches):
batch_xs, batch_ys = mnist.train.next_batch(batch_size)
# train this batch
sess.run(train_opt, {inputs: batch_xs, labels: batch_ys})
# Periodically check the validation or training loss and accuracy
if batch_i % 100 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Batch: {:>2}: Validation loss: {:>3.5f}, Validation accuracy: {:>3.5f}'.format(batch_i, loss, acc))
elif batch_i % 25 == 0:
loss, acc = sess.run([model_loss, accuracy], {inputs: batch_xs, labels: batch_ys})
print('Batch: {:>2}: Training loss: {:>3.5f}, Training accuracy: {:>3.5f}'.format(batch_i, loss, acc))
# At the end, score the final accuracy for both the validation and test sets
acc = sess.run(accuracy, {inputs: mnist.validation.images,
labels: mnist.validation.labels})
print('Final validation accuracy: {:>3.5f}'.format(acc))
acc = sess.run(accuracy, {inputs: mnist.test.images,
labels: mnist.test.labels})
print('Final test accuracy: {:>3.5f}'.format(acc))
# Score the first 100 test images individually. This won't work if batch normalization isn't implemented correctly.
correct = 0
for i in range(100):
correct += sess.run(accuracy,feed_dict={inputs: [mnist.test.images[i]],
labels: [mnist.test.labels[i]]})
print("Accuracy on 100 samples:", correct/100)
num_batches = 800
batch_size = 64
learning_rate = 0.002
tf.reset_default_graph()
with tf.Graph().as_default():
train(num_batches, batch_size, learning_rate)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define new workspace and project
Step2: Generate a pore network
Step3: Create a geometry
Step4: Add a phase
Step5: Add a physics
Step6: The diffusivity coefficient of the considered chemical species in water is also defined.
Step7: Defining a new model
Step8: Define a transient Fickian diffusion algorithm
Step9: Add boundary conditions
Step10: Define initial conditions
Step11: Note that both set_value_BC and set_IC also accept as input, in addition to a single scalar value, an ndarray.
Step12: Note that the output time stepping t_output may be a scalar, ND-array, or list. For a scalar, it is considered as an output interval. If t_output > t_final, no transient data is stored. If t_output is not a multiple of t_step, t_output will be approximated. When t_output is a list or ND-array, transient solutions corresponding to this list or array will be stored. Finally, initial, final and steady-state (if reached) solutions are always stored.
Step13: Note that the quantity corresponds to the quantity solved for.
Step14: Post process and export the results
Step15: Note that the solutions at every exported time step contain the @ character followed by the time value. Here the solution is exported after each $5s$ in addition to the final time step which is not a multiple of $5$ in this example.
Step16: The solution is here stored in the phase before export.
Step17: Export the results into an xdmf file to be able to play an animation of the time dependent concentration on Paraview.
Step18: Visialization using Matplotlib
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import openpnm as op
np.random.seed(10)
%matplotlib inline
np.set_printoptions(precision=5)
ws = op.Workspace()
ws.settings["loglevel"] = 40
proj = ws.new_project()
net = op.network.Cubic(shape=[29, 13, 1], spacing=1e-5, project=proj)
geo = op.geometry.StickAndBall(network=net, pores=net.Ps, throats=net.Ts)
phase = op.phases.Water(network=net)
phys = op.physics.GenericPhysics(network=net, phase=phase, geometry=geo)
phase['pore.diffusivity'] = 2e-09
mod = op.models.physics.diffusive_conductance.ordinary_diffusion
phys.add_model(propname='throat.diffusive_conductance', model=mod, regen_mode='normal')
fd = op.algorithms.TransientFickianDiffusion(network=net, phase=phase)
fd.set_value_BC(pores=net.pores('front'), values=0.5)
fd.set_value_BC(pores=net.pores('back'), values=0.2)
fd.set_IC(0.2)
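# As noted above, set_IC (and set_value_BC) also accept an ndarray of per-pore
# values instead of a scalar; a hypothetical sketch (not used in this run):
# fd.set_IC(0.2 * np.ones(net.Np))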
fd.setup(t_scheme='cranknicolson', t_final=100, t_output=5, t_step=1, t_tolerance=1e-12)
print(fd.settings)
fd.run()
print(fd)
fd['pore.concentration@10']
phase.update(fd.results())
proj.export_data(phases=[phase], filename='./results/out', filetype='xdmf')
#NBVAL_IGNORE_OUTPUT
import matplotlib.pyplot as plt
c = fd['pore.concentration'].reshape((net._shape))
fig, ax = plt.subplots(figsize=(6, 6))
plt.imshow(c[:,:,0])
plt.title('Concentration (mol/m$^3$)')
plt.colorbar();
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Update PATH_TO_TRAIN and PATH_TO_TEST to the path for rsc15_train_full.txt and rsc15_test.txt respectively
Step2: Let's take a look at the datasets
Step3: Sneak Peek at the dataset
Step4: Training GRU
Step5: Evaluating GRU
| <ASSISTANT_TASK:>
Python Code:
# -*- coding: utf-8 -*-
import theano
import pickle
import sys
import os
sys.path.append('../..')
import numpy as np
import pandas as pd
import gru4rec #If this shows an error probably the notebook is not in GRU4Rec/examples/rsc15/
import evaluation
# Validate that the following assert makes sense in your platform
# This works on Windows with a NVIDIA GPU
# In other platforms theano.config.device gives other things than 'cuda' when using the GPU
assert 'cuda' in theano.config.device,("Theano is not configured to use the GPU. Please check .theanorc. "
"Check http://deeplearning.net/software/theano/tutorial/using_gpu.html")
PATH_TO_TRAIN = 'C:/Users/frede/datasets/recsys2015/rsc15_train_full.txt'
PATH_TO_TEST = 'C:/Users/frede/datasets/recsys2015/rsc15_test.txt'
data = pd.read_csv(PATH_TO_TRAIN, sep='\t', dtype={'ItemId':np.int64})
valid = pd.read_csv(PATH_TO_TEST, sep='\t', dtype={'ItemId':np.int64})
%matplotlib inline
import numpy as np
import pandas as pd
from scipy import stats, integrate
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(color_codes=True)
data.head()
valid.head()
sessions_training = set(data.SessionId)
print("There are %i sessions in the training dataset" % len(sessions_training))
sessions_testing = set(valid.SessionId)
print("There are %i sessions in the testing dataset" % len(sessions_testing))
assert len(sessions_testing.intersection(sessions_training)) == 0, ("Huhu!"
"there are sessions from the testing set in"
"the training set")
print("Sessions in the testing set doesn't exist in the training set")
items_training = set(data.ItemId)
print("There are %i items in the training dataset" % len(items_training))
items_testing = set(valid.ItemId)
print("There are %i items in the testing dataset" % len(items_testing))
assert items_testing.issubset(items_training), ("Huhu!"
"there are items from the testing set "
"that are not in the training set")
print("Items in the testing set exist in the training set")
df_visualization = data.copy()
df_visualization["value"] = 1
df_item_count = df_visualization[["ItemId","value"]].groupby("ItemId").sum()
# Most of the items are infrequent
df_item_count.describe().transpose()
fig = plt.figure(figsize=[15,8])
ax = fig.add_subplot(111)
ax = sns.kdeplot(df_item_count["value"], ax=ax)
ax.set(xlabel='Item Frequency', ylabel='Kernel Density Estimation')
plt.show()
fig = plt.figure(figsize=[15,8])
ax = fig.add_subplot(111)
ax = sns.distplot(df_item_count["value"],
hist_kws=dict(cumulative=True),
kde_kws=dict(cumulative=True))
ax.set(xlabel='Item Frequency', ylabel='Cummulative Probability')
plt.show()
# Let's analyze the co-occurrence
df_cooccurrence = data.copy()
df_cooccurrence["next_SessionId"] = df_cooccurrence["SessionId"].shift(-1)
df_cooccurrence["next_ItemId"] = df_cooccurrence["ItemId"].shift(-1)
df_cooccurrence["next_Time"] = df_cooccurrence["Time"].shift(-1)
df_cooccurrence = df_cooccurrence.query("SessionId == next_SessionId").dropna()
df_cooccurrence["next_ItemId"] = df_cooccurrence["next_ItemId"].astype(int)
df_cooccurrence["next_SessionId"] = df_cooccurrence["next_SessionId"].astype(int)
df_cooccurrence.head()
df_cooccurrence["time_difference_minutes"] = np.round((df_cooccurrence["next_Time"] - df_cooccurrence["Time"]) / 60, 2)
df_cooccurrence[["time_difference_minutes"]].describe().transpose()
df_cooccurrence["value"] = 1
df_cooccurrence_sum = df_cooccurrence[["ItemId","next_ItemId","value"]].groupby(["ItemId","next_ItemId"]).sum().reset_index()
df_cooccurrence_sum[["value"]].describe().transpose()
n_layers = 100
save_to = os.path.join(os.path.dirname(PATH_TO_TEST), "gru_" + str(n_layers) +".pickle")
if not os.path.exists(save_to):
print('Training GRU4Rec with ' + str(n_layers) + ' hidden units')
gru = gru4rec.GRU4Rec(layers=[n_layers], loss='top1', batch_size=50,
dropout_p_hidden=0.5, learning_rate=0.01, momentum=0.0)
gru.fit(data)
pickle.dump(gru, open(save_to, "wb"))
else:
print('Loading existing GRU4Rec model with ' + str(n_layers) + ' hidden units')
gru = pickle.load(open(save_to, "rb"))
res = evaluation.evaluate_sessions_batch(gru, valid, None,cut_off=20)
print('The proportion of cases having the desired item within the top 20 (i.e Recall@20): {}'.format(res[0]))
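# In the reference GRU4Rec implementation evaluate_sessions_batch also returns
# MRR@20 as the second element of the tuple, so it could be printed with:
# print('MRR@20: {}'.format(res[1]))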
batch_size = 500
print("Now let's try to predict over the first %i items of our testint dataset" % batch_size)
df_valid = valid.head(batch_size)
df_valid["next_ItemId"] = df_valid["ItemId"].shift(-1)
df_valid["next_SessionId"] = df_valid["SessionId"].shift(-1)
session_ids = valid.head(batch_size)["SessionId"].values
input_item_ids = valid.head(batch_size)["ItemId"].values
predict_for_item_ids=None
%timeit gru.predict_next_batch(session_ids=session_ids, input_item_ids=input_item_ids, predict_for_item_ids=None, batch=batch_size)
df_preds = gru.predict_next_batch(session_ids=session_ids,
input_item_ids=input_item_ids,
predict_for_item_ids=None,
batch=batch_size)
df_valid.shape
df_preds.shape
df_preds.columns = df_valid.index.values
len(items_training)
df_preds
for c in df_preds:
df_preds[c] = df_preds[c].rank(ascending=False)
df_valid_preds = df_valid.join(df_preds.transpose())
df_valid_preds = df_valid_preds.query("SessionId == next_SessionId").dropna()
df_valid_preds["next_ItemId"] = df_valid_preds["next_ItemId"].astype(int)
df_valid_preds["next_SessionId"] = df_valid_preds["next_SessionId"].astype(int)
df_valid_preds["next_ItemId_at"] = df_valid_preds.apply(lambda x: x[int(x["next_ItemId"])], axis=1)
df_valid_preds_summary = df_valid_preds[["SessionId","ItemId","Time","next_ItemId","next_ItemId_at"]]
df_valid_preds_summary.head(20)
cutoff = 20
df_valid_preds_summary_ok = df_valid_preds_summary.query("next_ItemId_at <= @cutoff")
df_valid_preds_summary_ok.head(20)
recall_at_k = df_valid_preds_summary_ok.shape[0] / df_valid_preds_summary.shape[0]
print("The recall@%i for this batch is %f"%(cutoff,recall_at_k))
fig = plt.figure(figsize=[15,8])
ax = fig.add_subplot(111)
ax = sns.kdeplot(df_valid_preds_summary["next_ItemId_at"], ax=ax)
ax.set(xlabel='Next Desired Item @K', ylabel='Kernel Density Estimation')
plt.show()
fig = plt.figure(figsize=[15,8])
ax = fig.add_subplot(111)
ax = sns.distplot(df_valid_preds_summary["next_ItemId_at"],
hist_kws=dict(cumulative=True),
kde_kws=dict(cumulative=True))
ax.set(xlabel='Next Desired Item @K', ylabel='Cummulative Probability')
plt.show()
print("Statistics for the rank of the next desired item (Lower the best)")
df_valid_preds_summary[["next_ItemId_at"]].describe()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the model
Step2: Call hanlp.load to load it; the model is downloaded automatically to the local cache.
Step3: Named entity recognition
Step4: Each quadruple is [named entity, type label, start index, end index], where the indices refer to positions in the token array.
Step5: Whitelist dictionary
Step6: Forced dictionary
Step7: Blacklist dictionary
| <ASSISTANT_TASK:>
Python Code:
!pip install hanlp -U
import hanlp
hanlp.pretrained.ner.ALL # the language is indicated by the last field of the model name or by its corpus
ner = hanlp.load(hanlp.pretrained.ner.MSRA_NER_ELECTRA_SMALL_ZH)
print(ner([["2021年", "HanLPv2.1", "为", "生产", "环境", "带来", "次", "世代", "最", "先进", "的", "多", "语种", "NLP", "技术", "。"], ["阿婆主", "来到", "北京", "立方庭", "参观", "自然", "语义", "科技", "公司", "。"]], tasks='ner*'))
print(ner.dict_whitelist)
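# The three customization hooks below differ in strength: dict_whitelist adds
# entries to be tagged with the given label when they occur, dict_tags forces
# tags for specific token sequences, and dict_blacklist suppresses entities the
# model would otherwise output.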
ner.dict_whitelist = {'午饭后': 'TIME'}
ner(['2021年', '测试', '高血压', '是', '138', ',', '时间', '是', '午饭', '后', '2点45', ',', '低血压', '是', '44'])
ner.dict_tags = {('名字', '叫', '金华'): ('O', 'O', 'S-PERSON')}
ner(['他', '在', '浙江', '金华', '出生', ',', '他', '的', '名字', '叫', '金华', '。'])
ner.dict_blacklist = {'金华'}
ner(['他', '在', '浙江', '金华', '出生', ',', '他', '的', '名字', '叫', '金华', '。'])
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Dataset
Step2: Split the Data into Training and Test
Step3: Standardizing the Data
Step4: Ok, now we have the data scaled!
Step5: Building the Network with Keras
Step6: Fit (Train) the Model
Step7: Predicting New Unseen Data
Step8: Evaluating Model Performance
Step9: Saving and Loading Models
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
from sklearn.datasets import load_iris
iris = load_iris()
type(iris)
print(iris.DESCR)
X = iris.data
X
y = iris.target
y
from keras.utils import to_categorical
y = to_categorical(y)
y.shape
y
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
X_train
X_test
y_train
y_test
from sklearn.preprocessing import MinMaxScaler
scaler_object = MinMaxScaler()
scaler_object.fit(X_train)
scaled_X_train = scaler_object.transform(X_train)
scaled_X_test = scaler_object.transform(X_test)
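# Note that the scaler was fit on the training data only and then applied to
# both sets, which avoids leaking information from the test set into
# preprocessing.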
X_train.max()
scaled_X_train.max()
X_train
scaled_X_train
from keras.models import Sequential
from keras.layers import Dense
model = Sequential()
model.add(Dense(8, input_dim=4, activation='relu'))
model.add(Dense(8, input_dim=4, activation='relu'))
model.add(Dense(3, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
# Play around with number of epochs as well!
model.fit(scaled_X_train,y_train,epochs=150, verbose=2)
scaled_X_test
# Spits out probabilities by default.
# model.predict(scaled_X_test)
model.predict_classes(scaled_X_test)
model.metrics_names
model.evaluate(x=scaled_X_test,y=y_test)
from sklearn.metrics import confusion_matrix,classification_report
predictions = model.predict_classes(scaled_X_test)
predictions
y_test.argmax(axis=1)
confusion_matrix(y_test.argmax(axis=1),predictions)
print(classification_report(y_test.argmax(axis=1),predictions))
model.save('myfirstmodel.h5')
from keras.models import load_model
newmodel = load_model('myfirstmodel.h5')
newmodel.predict_classes(X_test)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data
Step2: 12.2.2 - Are different groups equal or not?
Step3: Note
Step4: Figure 12.5
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import pymc3 as pm
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
import theano.tensor as tt
from matplotlib import gridspec
%matplotlib inline
plt.style.use('seaborn-white')
color = '#87ceeb'
%load_ext watermark
%watermark -p pandas,numpy,pymc3,matplotlib,seaborn,theano
df = pd.read_csv('data/background_music.csv', dtype={'CondOfSubj':'category'})
# Mapping the condition descriptions to the condition codes. Just for illustrative purposes.
bgmusic = {0:'Das Kruschke', 1:'Mozart', 2:'Bach', 3:'Beethoven'}
df['CondText'] = df.CondOfSubj.cat.codes.map(bgmusic)
cond_idx = df.CondOfSubj.cat.codes.values
cond_codes = df.CondOfSubj.cat.categories
nCond = cond_codes.size
nSubj = df.index.size
df.info()
df.groupby('CondOfSubj').head(3)
# The means as mentioned in section 12.2.2
df.groupby('CondText', sort=False)['nCorrOfSubj'].mean()
with pm.Model() as model_1:
# constants
aP, bP = 1., 1.
# Pseudo- and true priors for model 1.
a0 = tt.as_tensor([.48*500, aP])
b0 = tt.as_tensor([(1-.48)*500, bP])
# True and pseudopriors for model 0
a = tt.as_tensor(np.c_[np.tile(aP, 4), [(.40*125), (.50*125), (.51*125), (.52*125)]])
b = tt.as_tensor(np.c_[np.tile(bP, 4), [(1-.40)*125, (1-.50)*125, (1-.51)*125, (1-.52)*125]])
# Prior on model index [0,1]
m_idx = pm.Categorical('m_idx', np.asarray([.5, .5]))
# Priors on concentration parameters
kappa_minus2 = pm.Gamma('kappa_minus2', 2.618, 0.0809, shape=nCond)
kappa = pm.Deterministic('kappa', kappa_minus2 +2)
# omega0
omega0 = pm.Beta('omega0', a0[m_idx], b0[m_idx])
# omega (condition specific)
omega = pm.Beta('omega', a[:,m_idx], b[:,m_idx], shape=nCond)
# Use condition specific omega when m_idx = 0, else omega0
aBeta = pm.math.switch(pm.math.eq(m_idx, 0), omega * (kappa-2)+1, omega0 * (kappa-2)+1)
bBeta = pm.math.switch(pm.math.eq(m_idx, 0), (1-omega) * (kappa-2)+1, (1-omega0) * (kappa-2)+1)
# Theta
theta = pm.Beta('theta', aBeta[cond_idx], bBeta[cond_idx], shape=nSubj)
# Likelihood
y = pm.Binomial('y', n=df.nTrlOfSubj.values, p=theta, observed=df.nCorrOfSubj)
pm.model_to_graphviz(model_1)
with model_1:
trace1 = pm.sample(5000, target_accept=.95)
pm.traceplot(trace1);
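# The posterior probability of each model can be estimated from the fraction of
# MCMC samples in which m_idx takes that value, e.g.:
# print((trace1['m_idx'] == 0).mean(), (trace1['m_idx'] == 1).mean())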
fig = plt.figure(figsize=(12,8))
# Define gridspec
gs = gridspec.GridSpec(3, 3)
ax1 = plt.subplot(gs[0,0])
ax2 = plt.subplot(gs[0,1])
ax3 = plt.subplot(gs[0,2])
ax4 = plt.subplot(gs[1,0])
ax5 = plt.subplot(gs[1,1])
ax6 = plt.subplot(gs[1,2])
ax7 = plt.subplot(gs[2,:])
# Group the first six axes in a list for easier access in loop below
axes = [ax1, ax2, ax3, ax4, ax5, ax6]
# Differences of posteriors to be displayed: omega x - omega y
x = [0,0,0,1,1,2]
y = [1,2,3,2,3,3]
# Plot histograms
for ax, a, b in zip(axes, x, y):
diff = trace1['omega'][:,a]-trace1['omega'][:,b]
pm.plot_posterior(diff, ref_val=0, point_estimate='mode', color=color, ax=ax)
ax.set_xlabel('$\omega_{}$ - $\omega_{}$'.format(a,b), fontdict={'size':18})
ax.xaxis.set_ticks([-.2, -.1, 0.0, 0.1, 0.2])
# Plot trace values of model index (0, 1)
ax7.plot(np.arange(1, len(trace1['m_idx'])+1),trace1['m_idx'], color=color, linewidth=4)
ax7.set_xlabel('Step in Markov chain', fontdict={'size':14})
ax7.set_ylabel('Model Index (0, 1)', fontdict={'size':14})
ax7.set_ylim(-0.05,1.05)
fig.tight_layout()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Preprocessing data with TensorFlow Transform
Step2: Install TensorFlow Transform
Step3: Python check, imports, and globals
Step4: Name our columns
Step5: Define our features and schema<br>We define a schema based on what types the columns are in our input. This will help with importing them correctly, and it will also benefit other operations.
Step6: Setting hyperparameters and basic housekeeping<br>Constants and hyperparameters used for training. The bucket size includes all listed categories in the dataset description as well as one extra for "?", which represents unknown.
Step8: Preprocessing with tf.Transform
Step10: Transform the data<br>Now we're ready to start transforming our data in an Apache Beam pipeline.
Step12: Using our preprocessed data to train a model with tf.keras
Step14: Create an input function for serving
Step18: Train, evaluate, and export the model
Step19: Put it all together<br>We've created everything we need to preprocess our census data, train a model, and prepare it for serving. So far we've just been getting things ready. It's time to start running!
Step22: (Optional) Using our preprocessed data to train a model with tf.estimator
Step25: Create an input function for serving
Step27: Wrap our input data in FeatureColumns<br>Our model expects our data in TensorFlow FeatureColumns.
Step29: Train, evaluate, and export the model
Step30: Put it all together<br>We've created everything we need to preprocess our census data, train a model, and prepare it for serving. So far we've just been getting things ready. It's time to start running!
| <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
try:
import colab
!pip install --upgrade pip
except:
pass
!pip install tensorflow-transform
import sys
# Confirm that we're using Python 3
assert sys.version_info.major is 3, 'Oops, not running Python 3. Use Runtime > Change runtime type'
import math
import os
import pprint
import tensorflow as tf
print('TF: {}'.format(tf.__version__))
import apache_beam as beam
print('Beam: {}'.format(beam.__version__))
import tensorflow_transform as tft
import tensorflow_transform.beam as tft_beam
print('Transform: {}'.format(tft.__version__))
from tfx_bsl.public import tfxio
from tfx_bsl.coders.example_coder import RecordBatchToExamples
!wget https://storage.googleapis.com/artifacts.tfx-oss-public.appspot.com/datasets/census/adult.data
!wget https://storage.googleapis.com/artifacts.tfx-oss-public.appspot.com/datasets/census/adult.test
train = './adult.data'
test = './adult.test'
CATEGORICAL_FEATURE_KEYS = [
'workclass',
'education',
'marital-status',
'occupation',
'relationship',
'race',
'sex',
'native-country',
]
NUMERIC_FEATURE_KEYS = [
'age',
'capital-gain',
'capital-loss',
'hours-per-week',
]
OPTIONAL_NUMERIC_FEATURE_KEYS = [
'education-num',
]
ORDERED_CSV_COLUMNS = [
'age', 'workclass', 'fnlwgt', 'education', 'education-num',
'marital-status', 'occupation', 'relationship', 'race', 'sex',
'capital-gain', 'capital-loss', 'hours-per-week', 'native-country', 'label'
]
LABEL_KEY = 'label'
RAW_DATA_FEATURE_SPEC = dict(
[(name, tf.io.FixedLenFeature([], tf.string))
for name in CATEGORICAL_FEATURE_KEYS] +
[(name, tf.io.FixedLenFeature([], tf.float32))
for name in NUMERIC_FEATURE_KEYS] +
[(name, tf.io.VarLenFeature(tf.float32))
for name in OPTIONAL_NUMERIC_FEATURE_KEYS] +
[(LABEL_KEY, tf.io.FixedLenFeature([], tf.string))]
)
SCHEMA = tft.tf_metadata.dataset_metadata.DatasetMetadata(
tft.tf_metadata.schema_utils.schema_from_feature_spec(RAW_DATA_FEATURE_SPEC)).schema
testing = os.getenv("WEB_TEST_BROWSER", False)
NUM_OOV_BUCKETS = 1
if testing:
TRAIN_NUM_EPOCHS = 1
NUM_TRAIN_INSTANCES = 1
TRAIN_BATCH_SIZE = 1
NUM_TEST_INSTANCES = 1
else:
TRAIN_NUM_EPOCHS = 16
NUM_TRAIN_INSTANCES = 32561
TRAIN_BATCH_SIZE = 128
NUM_TEST_INSTANCES = 16281
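# NUM_TRAIN_INSTANCES and NUM_TEST_INSTANCES correspond to the number of rows
# in adult.data and adult.test, respectively.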
# Names of temp files
TRANSFORMED_TRAIN_DATA_FILEBASE = 'train_transformed'
TRANSFORMED_TEST_DATA_FILEBASE = 'test_transformed'
EXPORTED_MODEL_DIR = 'exported_model_dir'
def preprocessing_fn(inputs):
"""Preprocess input columns into transformed columns."""
# Since we are modifying some features and leaving others unchanged, we
# start by setting `outputs` to a copy of `inputs`.
outputs = inputs.copy()
# Scale numeric columns to have range [0, 1].
for key in NUMERIC_FEATURE_KEYS:
outputs[key] = tft.scale_to_0_1(inputs[key])
for key in OPTIONAL_NUMERIC_FEATURE_KEYS:
# This is a SparseTensor because it is optional. Here we fill in a default
# value when it is missing.
sparse = tf.sparse.SparseTensor(inputs[key].indices, inputs[key].values,
[inputs[key].dense_shape[0], 1])
dense = tf.sparse.to_dense(sp_input=sparse, default_value=0.)
# Reshaping from a batch of vectors of size 1 to a batch to scalars.
dense = tf.squeeze(dense, axis=1)
outputs[key] = tft.scale_to_0_1(dense)
# For all categorical columns except the label column, we generate a
# vocabulary but do not modify the feature. This vocabulary is instead
# used in the trainer, by means of a feature column, to convert the feature
# from a string to an integer id.
for key in CATEGORICAL_FEATURE_KEYS:
outputs[key] = tft.compute_and_apply_vocabulary(
tf.strings.strip(inputs[key]),
num_oov_buckets=NUM_OOV_BUCKETS,
vocab_filename=key)
# For the label column we provide the mapping from string to index.
table_keys = ['>50K', '<=50K']
with tf.init_scope():
initializer = tf.lookup.KeyValueTensorInitializer(
keys=table_keys,
values=tf.cast(tf.range(len(table_keys)), tf.int64),
key_dtype=tf.string,
value_dtype=tf.int64)
table = tf.lookup.StaticHashTable(initializer, default_value=-1)
# Remove trailing periods for test data when the data is read with tf.data.
label_str = tf.strings.regex_replace(inputs[LABEL_KEY], r'\.', '')
label_str = tf.strings.strip(label_str)
data_labels = table.lookup(label_str)
transformed_label = tf.one_hot(
indices=data_labels, depth=len(table_keys), on_value=1.0, off_value=0.0)
outputs[LABEL_KEY] = tf.reshape(transformed_label, [-1, len(table_keys)])
return outputs
def transform_data(train_data_file, test_data_file, working_dir):
"""Transform the data and write out as a TFRecord of Example protos.
Read in the data using the CSV reader, and transform it using a
preprocessing pipeline that scales numeric data and converts categorical data
from strings to int64 values indices, by creating a vocabulary for each
category.
Args:
train_data_file: File containing training data
test_data_file: File containing test data
working_dir: Directory to write transformed data and metadata to
"""
# The "with" block will create a pipeline, and run that pipeline at the exit
# of the block.
with beam.Pipeline() as pipeline:
with tft_beam.Context(temp_dir=tempfile.mkdtemp()):
# Create a TFXIO to read the census data with the schema. To do this we
# need to list all columns in order since the schema doesn't specify the
# order of columns in the csv.
# We first read CSV files and use BeamRecordCsvTFXIO whose .BeamSource()
# accepts a PCollection[bytes] because we need to patch the records first
# (see "FixCommasTrainData" below). Otherwise, tfxio.CsvTFXIO can be used
# to both read the CSV files and parse them to TFT inputs:
# csv_tfxio = tfxio.CsvTFXIO(...)
# raw_data = (pipeline | 'ToRecordBatches' >> csv_tfxio.BeamSource())
csv_tfxio = tfxio.BeamRecordCsvTFXIO(
physical_format='text',
column_names=ORDERED_CSV_COLUMNS,
schema=SCHEMA)
# Read in raw data and convert using CSV TFXIO. Note that we apply
# some Beam transformations here, which will not be encoded in the TF
# graph since we don't do them from within tf.Transform's methods
# (AnalyzeDataset, TransformDataset etc.). These transformations are just
# to get data into a format that the CSV TFXIO can read, in particular
# removing spaces after commas.
raw_data = (
pipeline
| 'ReadTrainData' >> beam.io.ReadFromText(
train_data_file, coder=beam.coders.BytesCoder())
| 'FixCommasTrainData' >> beam.Map(
lambda line: line.replace(b', ', b','))
| 'DecodeTrainData' >> csv_tfxio.BeamSource())
# Combine data and schema into a dataset tuple. Note that we already used
# the schema to read the CSV data, but we also need it to interpret
# raw_data.
raw_dataset = (raw_data, csv_tfxio.TensorAdapterConfig())
# The TFXIO output format is chosen for improved performance.
transformed_dataset, transform_fn = (
raw_dataset | tft_beam.AnalyzeAndTransformDataset(
preprocessing_fn, output_record_batches=True))
# Transformed metadata is not necessary for encoding.
transformed_data, _ = transformed_dataset
# Extract transformed RecordBatches, encode and write them to the given
# directory.
_ = (
transformed_data
| 'EncodeTrainData' >>
beam.FlatMapTuple(lambda batch, _: RecordBatchToExamples(batch))
| 'WriteTrainData' >> beam.io.WriteToTFRecord(
os.path.join(working_dir, TRANSFORMED_TRAIN_DATA_FILEBASE)))
# Now apply transform function to test data. In this case we remove the
# trailing period at the end of each line, and also ignore the header line
# that is present in the test data file.
raw_test_data = (
pipeline
| 'ReadTestData' >> beam.io.ReadFromText(
test_data_file, skip_header_lines=1,
coder=beam.coders.BytesCoder())
| 'FixCommasTestData' >> beam.Map(
lambda line: line.replace(b', ', b','))
| 'RemoveTrailingPeriodsTestData' >> beam.Map(lambda line: line[:-1])
| 'DecodeTestData' >> csv_tfxio.BeamSource())
raw_test_dataset = (raw_test_data, csv_tfxio.TensorAdapterConfig())
# The TFXIO output format is chosen for improved performance.
transformed_test_dataset = (
(raw_test_dataset, transform_fn)
| tft_beam.TransformDataset(output_record_batches=True))
# Transformed metadata is not necessary for encoding.
transformed_test_data, _ = transformed_test_dataset
# Extract transformed RecordBatches, encode and write them to the given
# directory.
_ = (
transformed_test_data
| 'EncodeTestData' >>
beam.FlatMapTuple(lambda batch, _: RecordBatchToExamples(batch))
| 'WriteTestData' >> beam.io.WriteToTFRecord(
os.path.join(working_dir, TRANSFORMED_TEST_DATA_FILEBASE)))
# Will write a SavedModel and metadata to working_dir, which can then
# be read by the tft.TFTransformOutput class.
_ = (
transform_fn
| 'WriteTransformFn' >> tft_beam.WriteTransformFn(working_dir))
def _make_training_input_fn(tf_transform_output, transformed_examples,
batch_size):
"""An input function reading from transformed data, converting to model input.
Args:
tf_transform_output: Wrapper around output of tf.Transform.
transformed_examples: Base filename of examples.
batch_size: Batch size.
Returns:
The input data for training or eval, in the form of a dataset.
"""
def input_fn():
return tf.data.experimental.make_batched_features_dataset(
file_pattern=transformed_examples,
batch_size=batch_size,
features=tf_transform_output.transformed_feature_spec(),
reader=tf.data.TFRecordDataset,
label_key=LABEL_KEY,
shuffle=True).prefetch(tf.data.experimental.AUTOTUNE)
return input_fn
def _make_serving_input_fn(tf_transform_output, raw_examples, batch_size):
"""An input function reading from raw data, converting to model input.
Args:
tf_transform_output: Wrapper around output of tf.Transform.
raw_examples: Base filename of examples.
batch_size: Batch size.
Returns:
The input data for training or eval, in the form of a dataset.
"""
def get_ordered_raw_data_dtypes():
result = []
for col in ORDERED_CSV_COLUMNS:
if col not in RAW_DATA_FEATURE_SPEC:
result.append(0.0)
continue
spec = RAW_DATA_FEATURE_SPEC[col]
if isinstance(spec, tf.io.FixedLenFeature):
result.append(spec.dtype)
else:
result.append(0.0)
return result
def input_fn():
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=raw_examples,
batch_size=batch_size,
column_names=ORDERED_CSV_COLUMNS,
column_defaults=get_ordered_raw_data_dtypes(),
prefetch_buffer_size=0,
ignore_errors=True)
tft_layer = tf_transform_output.transform_features_layer()
def transform_dataset(data):
raw_features = {}
for key, val in data.items():
if key not in RAW_DATA_FEATURE_SPEC:
continue
if isinstance(RAW_DATA_FEATURE_SPEC[key], tf.io.VarLenFeature):
raw_features[key] = tf.RaggedTensor.from_tensor(
tf.expand_dims(val, -1)).to_sparse()
continue
raw_features[key] = val
transformed_features = tft_layer(raw_features)
data_labels = transformed_features.pop(LABEL_KEY)
return (transformed_features, data_labels)
return dataset.map(
transform_dataset,
num_parallel_calls=tf.data.experimental.AUTOTUNE).prefetch(
tf.data.experimental.AUTOTUNE)
return input_fn
def export_serving_model(tf_transform_output, model, output_dir):
"""Exports a keras model for serving.
Args:
tf_transform_output: Wrapper around output of tf.Transform.
model: A keras model to export for serving.
output_dir: A directory where the model will be exported to.
"""
# The layer has to be saved to the model for keras tracking purposes.
model.tft_layer = tf_transform_output.transform_features_layer()
@tf.function
def serve_tf_examples_fn(serialized_tf_examples):
"""Serving tf.function model wrapper."""
feature_spec = RAW_DATA_FEATURE_SPEC.copy()
feature_spec.pop(LABEL_KEY)
parsed_features = tf.io.parse_example(serialized_tf_examples, feature_spec)
transformed_features = model.tft_layer(parsed_features)
outputs = model(transformed_features)
classes_names = tf.constant([['0', '1']])
classes = tf.tile(classes_names, [tf.shape(outputs)[0], 1])
return {'classes': classes, 'scores': outputs}
concrete_serving_fn = serve_tf_examples_fn.get_concrete_function(
tf.TensorSpec(shape=[None], dtype=tf.string, name='inputs'))
signatures = {'serving_default': concrete_serving_fn}
# This is required in order to make this model servable with model_server.
versioned_output_dir = os.path.join(output_dir, '1')
model.save(versioned_output_dir, save_format='tf', signatures=signatures)
def train_and_evaluate(working_dir,
num_train_instances=NUM_TRAIN_INSTANCES,
num_test_instances=NUM_TEST_INSTANCES):
"""Train the model on training data and evaluate on test data.
Args:
working_dir: The location of the Transform output.
num_train_instances: Number of instances in train set
num_test_instances: Number of instances in test set
Returns:
A dictionary with the evaluation metrics of the trained model.
"""
train_data_path_pattern = os.path.join(working_dir,
TRANSFORMED_TRAIN_DATA_FILEBASE + '*')
eval_data_path_pattern = os.path.join(working_dir,
TRANSFORMED_TEST_DATA_FILEBASE + '*')
tf_transform_output = tft.TFTransformOutput(working_dir)
train_input_fn = _make_training_input_fn(
tf_transform_output, train_data_path_pattern, batch_size=TRAIN_BATCH_SIZE)
train_dataset = train_input_fn()
# Evaluate model on test dataset.
eval_input_fn = _make_training_input_fn(
tf_transform_output, eval_data_path_pattern, batch_size=TRAIN_BATCH_SIZE)
validation_dataset = eval_input_fn()
feature_spec = tf_transform_output.transformed_feature_spec().copy()
feature_spec.pop(LABEL_KEY)
inputs = {}
for key, spec in feature_spec.items():
if isinstance(spec, tf.io.VarLenFeature):
inputs[key] = tf.keras.layers.Input(
shape=[None], name=key, dtype=spec.dtype, sparse=True)
elif isinstance(spec, tf.io.FixedLenFeature):
inputs[key] = tf.keras.layers.Input(
shape=spec.shape, name=key, dtype=spec.dtype)
else:
raise ValueError('Spec type is not supported: ', key, spec)
encoded_inputs = {}
for key in inputs:
feature = tf.expand_dims(inputs[key], -1)
if key in CATEGORICAL_FEATURE_KEYS:
num_buckets = tf_transform_output.num_buckets_for_transformed_feature(key)
encoding_layer = (
tf.keras.layers.experimental.preprocessing.CategoryEncoding(
max_tokens=num_buckets, output_mode='binary', sparse=False))
encoded_inputs[key] = encoding_layer(feature)
else:
encoded_inputs[key] = feature
stacked_inputs = tf.concat(tf.nest.flatten(encoded_inputs), axis=1)
output = tf.keras.layers.Dense(100, activation='relu')(stacked_inputs)
output = tf.keras.layers.Dense(70, activation='relu')(output)
output = tf.keras.layers.Dense(50, activation='relu')(output)
output = tf.keras.layers.Dense(20, activation='relu')(output)
output = tf.keras.layers.Dense(2, activation='sigmoid')(output)
model = tf.keras.Model(inputs=inputs, outputs=output)
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
pprint.pprint(model.summary())
model.fit(train_dataset, validation_data=validation_dataset,
epochs=TRAIN_NUM_EPOCHS,
steps_per_epoch=math.ceil(num_train_instances / TRAIN_BATCH_SIZE),
validation_steps=math.ceil(num_test_instances / TRAIN_BATCH_SIZE))
# Export the model.
exported_model_dir = os.path.join(working_dir, EXPORTED_MODEL_DIR)
export_serving_model(tf_transform_output, model, exported_model_dir)
metrics_values = model.evaluate(validation_dataset, steps=num_test_instances)
metrics_labels = model.metrics_names
return {l: v for l, v in zip(metrics_labels, metrics_values)}
import tempfile
temp = os.path.join(tempfile.gettempdir(), 'keras')
transform_data(train, test, temp)
results = train_and_evaluate(temp)
pprint.pprint(results)
def _make_training_input_fn(tf_transform_output, transformed_examples,
batch_size):
"""Creates an input function reading from transformed data.
Args:
tf_transform_output: Wrapper around output of tf.Transform.
transformed_examples: Base filename of examples.
batch_size: Batch size.
Returns:
The input function for training or eval.
"""
def input_fn():
"""Input function for training and eval."""
dataset = tf.data.experimental.make_batched_features_dataset(
file_pattern=transformed_examples,
batch_size=batch_size,
features=tf_transform_output.transformed_feature_spec(),
reader=tf.data.TFRecordDataset,
shuffle=True)
transformed_features = tf.compat.v1.data.make_one_shot_iterator(
dataset).get_next()
# Extract features and label from the transformed tensors.
transformed_labels = tf.where(
tf.equal(transformed_features.pop(LABEL_KEY), 1))
return transformed_features, transformed_labels[:,1]
return input_fn
def _make_serving_input_fn(tf_transform_output):
"""Creates an input function reading from raw data.
Args:
tf_transform_output: Wrapper around output of tf.Transform.
Returns:
The serving input function.
"""
raw_feature_spec = RAW_DATA_FEATURE_SPEC.copy()
# Remove label since it is not available during serving.
raw_feature_spec.pop(LABEL_KEY)
def serving_input_fn():
"""Input function for serving."""
# Get raw features by generating the basic serving input_fn and calling it.
# Here we generate an input_fn that expects a parsed Example proto to be fed
# to the model at serving time. See also
# tf.estimator.export.build_raw_serving_input_receiver_fn.
raw_input_fn = tf.estimator.export.build_parsing_serving_input_receiver_fn(
raw_feature_spec, default_batch_size=None)
serving_input_receiver = raw_input_fn()
# Apply the transform function that was used to generate the materialized
# data.
raw_features = serving_input_receiver.features
transformed_features = tf_transform_output.transform_raw_features(
raw_features)
return tf.estimator.export.ServingInputReceiver(
transformed_features, serving_input_receiver.receiver_tensors)
return serving_input_fn
def get_feature_columns(tf_transform_output):
"""Returns the FeatureColumns for the model.
Args:
tf_transform_output: A `TFTransformOutput` object.
Returns:
A list of FeatureColumns.
"""
# Wrap scalars as real valued columns.
real_valued_columns = [tf.feature_column.numeric_column(key, shape=())
for key in NUMERIC_FEATURE_KEYS]
# Wrap categorical columns.
one_hot_columns = [
tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_identity(
key=key,
num_buckets=(NUM_OOV_BUCKETS +
tf_transform_output.vocabulary_size_by_name(
vocab_filename=key))))
for key in CATEGORICAL_FEATURE_KEYS]
return real_valued_columns + one_hot_columns
def train_and_evaluate(working_dir, num_train_instances=NUM_TRAIN_INSTANCES,
num_test_instances=NUM_TEST_INSTANCES):
"""Train the model on training data and evaluate on test data.
Args:
working_dir: Directory to read transformed data and metadata from and to
write exported model to.
num_train_instances: Number of instances in train set
num_test_instances: Number of instances in test set
Returns:
The results from the estimator's 'evaluate' method
"""
tf_transform_output = tft.TFTransformOutput(working_dir)
run_config = tf.estimator.RunConfig()
estimator = tf.estimator.LinearClassifier(
feature_columns=get_feature_columns(tf_transform_output),
config=run_config,
loss_reduction=tf.losses.Reduction.SUM)
# Fit the model using the default optimizer.
train_input_fn = _make_training_input_fn(
tf_transform_output,
os.path.join(working_dir, TRANSFORMED_TRAIN_DATA_FILEBASE + '*'),
batch_size=TRAIN_BATCH_SIZE)
estimator.train(
input_fn=train_input_fn,
max_steps=TRAIN_NUM_EPOCHS * num_train_instances / TRAIN_BATCH_SIZE)
# Evaluate model on test dataset.
eval_input_fn = _make_training_input_fn(
tf_transform_output,
os.path.join(working_dir, TRANSFORMED_TEST_DATA_FILEBASE + '*'),
batch_size=1)
# Export the model.
serving_input_fn = _make_serving_input_fn(tf_transform_output)
exported_model_dir = os.path.join(working_dir, EXPORTED_MODEL_DIR)
estimator.export_saved_model(exported_model_dir, serving_input_fn)
return estimator.evaluate(input_fn=eval_input_fn, steps=num_test_instances)
import tempfile
temp = os.path.join(tempfile.gettempdir(), 'estimator')
transform_data(train, test, temp)
results = train_and_evaluate(temp)
pprint.pprint(results)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Looking at the outbreak
Step2: Plotting the data
Step3: Modeling the data
Step4: Validating the model
Step5: Projecting future data
| <ASSISTANT_TASK:>
Python Code:
ESTIMATE_DAYS = 3
data_key = 'IT'
date_limit = '2020-03-17'
import pandas as pd
import seaborn as sns
sns.set()
df = pd.read_csv(f'https://storage.googleapis.com/covid19-open-data/v3/location/{data_key}.csv').set_index('date')
def get_outbreak_mask(data: pd.DataFrame, threshold: int = 10):
''' Returns a mask for > N confirmed cases '''
return data['total_confirmed'] > threshold
cols = ['total_confirmed', 'total_deceased']
# Get data only for the columns we care about
df = df[cols]
# Get data only for the selected dates
df = df[df.index <= date_limit]
# Get data only after the outbreak begun
df = df[get_outbreak_mask(df)]
df.plot(kind='bar', figsize=(16, 8));
from scipy import optimize
def exponential_function(x: float, a: float, b: float, c: float):
''' a * (b ^ x) + c '''
return a * (b ** x) + c
X, y = list(range(len(df))), df['total_confirmed'].tolist()
params, _ = optimize.curve_fit(exponential_function, X, y)
print('Estimated function: {0:.3f} * ({1:.3f} ^ X) + {2:.3f}'.format(*params))
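# optimize.curve_fit returns the fitted parameters and their covariance matrix;
# only the parameters are used here. If the default starting values fail to
# converge for an exponential fit, an initial guess can be supplied via p0.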
confirmed = df[['total_confirmed']].rename(columns={'total_confirmed': 'Ground Truth'})
ax = confirmed.plot(kind='bar', figsize=(16, 8))
estimate = [exponential_function(x, *params) for x in X]
ax.plot(df.index, estimate, color='red', label='Estimate')
ax.legend();
params_validate, _ = optimize.curve_fit(exponential_function, X[:-ESTIMATE_DAYS], y[:-ESTIMATE_DAYS])
# Project zero for all values except for the last ESTIMATE_DAYS
projected = [0] * len(X[:-ESTIMATE_DAYS]) + [exponential_function(x, *params_validate) for x in X[-ESTIMATE_DAYS:]]
projected = pd.Series(projected, index=df.index, name='Projected')
confirmed = pd.DataFrame({'Ground Truth': df['total_confirmed'], 'Projected': projected})
ax = confirmed.plot(kind='bar', figsize=(16, 8))
estimate = [exponential_function(x, *params_validate) for x in X]
ax.plot(df.index, estimate, color='red', label='Estimate')
ax.legend();
import datetime
# Append N new days to our indices
date_format = '%Y-%m-%d'
date_range = [datetime.datetime.strptime(date, date_format) for date in df.index]
for _ in range(ESTIMATE_DAYS): date_range.append(date_range[-1] + datetime.timedelta(days=1))
date_range = [datetime.datetime.strftime(date, date_format) for date in date_range]
# Perform projection with the previously estimated parameters
projected = [0] * len(X) + [exponential_function(x, *params) for x in range(len(X), len(X) + ESTIMATE_DAYS)]
projected = pd.Series(projected, index=date_range, name='Projected')
df_ = pd.DataFrame({'Confirmed': df['total_confirmed'], 'Projected': projected})
ax = df_.plot(kind='bar', figsize=(16, 8))
estimate = [exponential_function(x, *params) for x in range(len(date_range))]
ax.plot(date_range, estimate, color='red', label='Estimate')
ax.legend();
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Function1D is the base class for 1D spectral models and FunctionMeta is an ABC class that ensures all the needed parts of a model are in the class, as well as making the class function as it should.
Step3: The docstring
Step5: Now we will define a spectral model that will handle both the unit and non-unit call.
Step6: We can check the unit and non-unit call by making a point source and evaluating it
Step7: Template (Table) Models
Step8: Now we define a template model factory. This takes a name, a description, the energy grid and an array of parameter names as input.
Step9: Now, define our grid in parameter space. While we are using a function here, this grid could be from a text file, a database of simulations, etc. We then assign these grid points to the template model factory.
Step10: Finally, we loop over our grid and set the interpolation data to the template model factory. The units of the fluxes must be a differential photon flux!
Step11: We can now save our model to disk. The format is an HDF5 file which is saved to the astromodels data directory (~/.astromodels/data). The HDF5 file can easily be passed around to other users as all information defining the model is stored in the file. The other user would place the file in their astromodels data directory.
| <ASSISTANT_TASK:>
Python Code:
from astromodels.functions.function import Function1D, FunctionMeta, ModelAssertionViolation
class Powerlaw(Function1D):
r"""
description :
A simple power-law
latex : $ K~\frac{x}{piv}^{index} $
parameters :
K :
desc : Normalization (differential flux at the pivot value)
initial value : 1.0
is_normalization : True
transformation : log10
min : 1e-30
max : 1e3
delta : 0.1
piv :
desc : Pivot value
initial value : 1
fix : yes
index :
desc : Photon index
initial value : -2
min : -10
max : 10
"""
__metaclass__ = FunctionMeta
def _set_units(self, x_unit, y_unit):
# The index is always dimensionless
self.index.unit = astropy_units.dimensionless_unscaled
# The pivot energy has always the same dimension as the x variable
self.piv.unit = x_unit
# The normalization has the same units as the y
self.K.unit = y_unit
# noinspection PyPep8Naming
def evaluate(self, x, K, piv, index):
xx = np.divide(x, piv)
return K * np.power(xx, index)
%load_ext Cython
%%cython --annotate
cpdef cython_function(a):
# we could wrap a c++ function here
return a
cython_function(2.)
import astropy.units as astropy_units
class CythonModel(Function1D):
r"""
description :
A spectral model wrapping a cython function
latex : $$
parameters :
a :
desc : Normalization (differential flux)
initial value : 1.0
is_normalization : True
min : 1e-30
max : 1e3
delta : 0.1
"""
__metaclass__ = FunctionMeta
def _set_units(self, x_unit, y_unit):
# The normalization has the same units as the y
self.a.unit = y_unit
# noinspection PyPep8Naming
def evaluate(self, x, a):
# check if the function is being called with units
if isinstance(a, astropy_units.Quantity):
# get the values
a_ = a.value
# save the unit
unit_ = self.y_unit
else:
# we do not need to do anything here
a_ = a
# this will basically be ignored
unit_ = 1.
# call the cython function
flux = cython_function(a_)
# add back the unit if needed
return flux * unit_
cython_spectrum = CythonModel()
from astromodels import PointSource
point_source = PointSource('ps',0,0,spectral_shape=cython_spectrum)
print(point_source(10.))
point_source(10. * astropy_units.keV)
from astromodels import Band
import numpy as np
model = Band()
# we won't need to modify the normalization
model.K = 1.
# if no units are provided for the energy grid, keV will be assumed!
energies = np.logspace(1, 3, 50)
from astromodels import TemplateModelFactory
tmf = TemplateModelFactory('my_template', 'A test template', energies, ['alpha', 'xp', 'beta'])
alpha_grid = np.linspace(-1.5, 1, 15)
beta_grid = np.linspace(-3.5, -1.6, 15)
xp_grid = np.logspace(1, 3, 20)
tmf.define_parameter_grid('alpha', alpha_grid)
tmf.define_parameter_grid('beta', beta_grid)
tmf.define_parameter_grid('xp', xp_grid)
for a in alpha_grid:
for b in beta_grid:
for xp in xp_grid:
# change our model parameters
model.alpha = a
model.beta = b
model.xp = xp
tmf.add_interpolation_data(model(energies), alpha=a, xp=xp, beta=b)
tmf.save_data(overwrite=True)
from astromodels import TemplateModel
reloaded_table_model = TemplateModel('my_template')
reloaded_table_model(energies)
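# Added illustration (not part of the original tutorial): a minimal sketch showing that
# the reloaded template exposes the grid parameters defined above (alpha, xp, beta), so
# they can be changed and the template re-evaluated. The chosen values lie inside the grid.
reloaded_table_model.alpha = 0.5
reloaded_table_model.beta = -2.5
reloaded_table_model.xp = 200.0
reloaded_table_model(energies)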
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Iris Flower Dataset
Step2: Train A Decision Tree Model
Step3: View Feature Importance
Step4: Visualize Feature Importance
| <ASSISTANT_TASK:>
Python Code:
# Load libraries
from sklearn.ensemble import RandomForestClassifier
from sklearn import datasets
import numpy as np
import matplotlib.pyplot as plt
# Load data
iris = datasets.load_iris()
X = iris.data
y = iris.target
# Create decision tree classifer object
clf = RandomForestClassifier(random_state=0, n_jobs=-1)
# Train model
model = clf.fit(X, y)
# Calculate feature importances
importances = model.feature_importances_
# Sort feature importances in descending order
indices = np.argsort(importances)[::-1]
# Rearrange feature names so they match the sorted feature importances
names = [iris.feature_names[i] for i in indices]
# Create plot
plt.figure()
# Create plot title
plt.title("Feature Importance")
# Add bars
plt.bar(range(X.shape[1]), importances[indices])
# Add feature names as x-axis labels
plt.xticks(range(X.shape[1]), names, rotation=90)
# Show plot
plt.show()
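# Added illustration (not part of the original recipe): print the sorted importances next
# to their feature names as a quick numeric summary of the bar chart above.
for name, importance in zip(names, importances[indices]):
    print(f"{name}: {importance:.3f}")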
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Some calendar information so we can support any netCDF calendar.
Step4: A few calendar functions to determine the number of days in each month
Step5: Open the Dataset
Step6: Now for the heavy lifting
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import xarray as xr
from netCDF4 import num2date
import matplotlib.pyplot as plt
print("numpy version : ", np.__version__)
print("pandas version : ", pd.__version__)
print("xarray version : ", xr.__version__)
dpm = {'noleap': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
'365_day': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
'standard': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
'gregorian': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
'proleptic_gregorian': [0, 31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
'all_leap': [0, 31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
'366_day': [0, 31, 29, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31],
'360_day': [0, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30, 30]}
def leap_year(year, calendar='standard'):
    """Determine if year is a leap year."""
leap = False
if ((calendar in ['standard', 'gregorian',
'proleptic_gregorian', 'julian']) and
(year % 4 == 0)):
leap = True
if ((calendar == 'proleptic_gregorian') and
(year % 100 == 0) and
(year % 400 != 0)):
leap = False
elif ((calendar in ['standard', 'gregorian']) and
(year % 100 == 0) and (year % 400 != 0) and
(year < 1583)):
leap = False
return leap
def get_dpm(time, calendar='standard'):
    """Return an array of days per month corresponding to the months provided in `time`."""
    month_length = np.zeros(len(time), dtype=int)
cal_days = dpm[calendar]
for i, (month, year) in enumerate(zip(time.month, time.year)):
month_length[i] = cal_days[month]
if leap_year(year, calendar=calendar):
month_length[i] += 1
return month_length
ds = xr.tutorial.open_dataset('rasm').load()
print(ds)
# Make a DataArray with the number of days in each month, size = len(time)
month_length = xr.DataArray(get_dpm(ds.time.to_index(), calendar='noleap'),
coords=[ds.time], name='month_length')
# Calculate the weights by grouping by 'time.season'.
# Conversion to float type ('astype(float)') only necessary for Python 2.x
weights = month_length.groupby('time.season') / month_length.astype(float).groupby('time.season').sum()
# Test that the sum of the weights for each season is 1.0
np.testing.assert_allclose(weights.groupby('time.season').sum().values, np.ones(4))
# Calculate the weighted average
ds_weighted = (ds * weights).groupby('time.season').sum(dim='time')
print(ds_weighted)
# only used for comparisons
ds_unweighted = ds.groupby('time.season').mean('time')
ds_diff = ds_weighted - ds_unweighted
# Quick plot to show the results
notnull = pd.notnull(ds_unweighted['Tair'][0])
fig, axes = plt.subplots(nrows=4, ncols=3, figsize=(14,12))
for i, season in enumerate(('DJF', 'MAM', 'JJA', 'SON')):
ds_weighted['Tair'].sel(season=season).where(notnull).plot.pcolormesh(
ax=axes[i, 0], vmin=-30, vmax=30, cmap='Spectral_r',
add_colorbar=True, extend='both')
ds_unweighted['Tair'].sel(season=season).where(notnull).plot.pcolormesh(
ax=axes[i, 1], vmin=-30, vmax=30, cmap='Spectral_r',
add_colorbar=True, extend='both')
ds_diff['Tair'].sel(season=season).where(notnull).plot.pcolormesh(
ax=axes[i, 2], vmin=-0.1, vmax=.1, cmap='RdBu_r',
add_colorbar=True, extend='both')
axes[i, 0].set_ylabel(season)
axes[i, 1].set_ylabel('')
axes[i, 2].set_ylabel('')
for ax in axes.flat:
ax.axes.get_xaxis().set_ticklabels([])
ax.axes.get_yaxis().set_ticklabels([])
ax.axes.axis('tight')
ax.set_xlabel('')
axes[0, 0].set_title('Weighted by DPM')
axes[0, 1].set_title('Equal Weighting')
axes[0, 2].set_title('Difference')
plt.tight_layout()
fig.suptitle('Seasonal Surface Air Temperature', fontsize=16, y=1.02)
# Wrap it into a simple function
def season_mean(ds, calendar='standard'):
# Make a DataArray of season/year groups
year_season = xr.DataArray(ds.time.to_index().to_period(freq='Q-NOV').to_timestamp(how='E'),
coords=[ds.time], name='year_season')
# Make a DataArray with the number of days in each month, size = len(time)
month_length = xr.DataArray(get_dpm(ds.time.to_index(), calendar=calendar),
coords=[ds.time], name='month_length')
# Calculate the weights by grouping by 'time.season'
weights = month_length.groupby('time.season') / month_length.groupby('time.season').sum()
# Test that the sum of the weights for each season is 1.0
np.testing.assert_allclose(weights.groupby('time.season').sum().values, np.ones(4))
# Calculate the weighted average
return (ds * weights).groupby('time.season').sum(dim='time')
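# Added usage sketch (not in the original notebook): the wrapper defined above can be
# called directly; here we assume the same 'noleap' calendar used earlier in the example.
ds_weighted_check = season_mean(ds, calendar='noleap')
print(ds_weighted_check)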
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Transfer Learning Using Pretrained ConvNets
Step2: Data preprocessing
Step3: Prepare training and validation cats and dogs datasets
Step4: Create Image Data Generator with Image Augmentation
Step5: Create the base model from the pre-trained ConvNets
Step6: Feature extraction
Step7: Add a classification head
Step8: Compile the model
Step9: These 1.2K trainable parameters are divided among 2 TensorFlow Variable objects: the weights and biases of the dense classification layer
Step10: Train the model
Step11: Learning curves
Step12: Fine tuning
Step13: Compile the model
Step14: Continue Train the model
Step15: Learning curves
| <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import tensorflow.compat.v1 as tf
from tensorflow import keras
print("TensorFlow version is ", tf.__version__)
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
zip_file = tf.keras.utils.get_file(origin="https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip",
fname="cats_and_dogs_filtered.zip", extract=True)
base_dir, _ = os.path.splitext(zip_file)
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
# Directory with our training cat pictures
train_cats_dir = os.path.join(train_dir, 'cats')
print ('Total training cat images:', len(os.listdir(train_cats_dir)))
# Directory with our training dog pictures
train_dogs_dir = os.path.join(train_dir, 'dogs')
print ('Total training dog images:', len(os.listdir(train_dogs_dir)))
# Directory with our validation cat pictures
validation_cats_dir = os.path.join(validation_dir, 'cats')
print ('Total validation cat images:', len(os.listdir(validation_cats_dir)))
# Directory with our validation dog pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs')
print ('Total validation dog images:', len(os.listdir(validation_dogs_dir)))
image_size = 160 # All images will be resized to 160x160
batch_size = 32
# Rescale all images by 1./255 and apply image augmentation
train_datagen = keras.preprocessing.image.ImageDataGenerator(
rescale=1./255)
validation_datagen = keras.preprocessing.image.ImageDataGenerator(rescale=1./255)
# Flow training images in batches of batch_size (32) using the train_datagen generator
train_generator = train_datagen.flow_from_directory(
train_dir, # Source directory for the training images
target_size=(image_size, image_size),
batch_size=batch_size,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
# Flow validation images in batches of batch_size (32) using the validation_datagen generator
validation_generator = validation_datagen.flow_from_directory(
validation_dir, # Source directory for the validation images
target_size=(image_size, image_size),
batch_size=batch_size,
class_mode='binary')
IMG_SHAPE = (image_size, image_size, 3)
# Create the base model from the pre-trained model MobileNet V2
base_model = tf.keras.applications.MobileNetV2(input_shape=IMG_SHAPE,
include_top=False,
weights='imagenet')
base_model.trainable = False
# Let's take a look at the base model architecture
base_model.summary()
model = tf.keras.Sequential([
base_model,
keras.layers.GlobalAveragePooling2D(),
keras.layers.Dense(1, activation='sigmoid')
])
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.0001),
loss='binary_crossentropy',
metrics=['accuracy'])
model.summary()
len(model.trainable_variables)
epochs = 10
steps_per_epoch = train_generator.n // batch_size
validation_steps = validation_generator.n // batch_size
history = model.fit_generator(train_generator,
steps_per_epoch = steps_per_epoch,
epochs=epochs,
workers=4,
validation_data=validation_generator,
validation_steps=validation_steps)
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.ylabel('Accuracy')
plt.ylim([min(plt.ylim()),1])
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.ylabel('Cross Entropy')
plt.ylim([0,max(plt.ylim())])
plt.title('Training and Validation Loss')
plt.show()
base_model.trainable = True
# Let's take a look to see how many layers are in the base model
print("Number of layers in the base model: ", len(base_model.layers))
# Fine tune from this layer onwards
fine_tune_at = 100
# Freeze all the layers before the `fine_tune_at` layer
for layer in base_model.layers[:fine_tune_at]:
layer.trainable = False
model.compile(optimizer = tf.keras.optimizers.RMSprop(learning_rate=2e-5),
loss='binary_crossentropy',
metrics=['accuracy'])
model.summary()
len(model.trainable_variables)
history_fine = model.fit_generator(train_generator,
steps_per_epoch = steps_per_epoch,
epochs=epochs,
workers=4,
validation_data=validation_generator,
validation_steps=validation_steps)
acc += history_fine.history['accuracy']
val_acc += history_fine.history['val_accuracy']
loss += history_fine.history['loss']
val_loss += history_fine.history['val_loss']
plt.figure(figsize=(8, 8))
plt.subplot(2, 1, 1)
plt.plot(acc, label='Training Accuracy')
plt.plot(val_acc, label='Validation Accuracy')
plt.ylim([0.9, 1])
plt.plot([epochs-1,epochs-1], plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(2, 1, 2)
plt.plot(loss, label='Training Loss')
plt.plot(val_loss, label='Validation Loss')
plt.ylim([0, 0.2])
plt.plot([epochs-1,epochs-1], plt.ylim(), label='Start Fine Tuning')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
acc
loss
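# Added sketch (not from the original tutorial): evaluate the fine-tuned model on the
# validation generator and save it to disk. `evaluate_generator` matches the
# generator-based API used above; the filename is an arbitrary choice.
final_loss, final_acc = model.evaluate_generator(validation_generator, steps=validation_steps)
print('final validation accuracy:', final_acc)
model.save('fine_tuned_cats_and_dogs.h5')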
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Kernel SVMs
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.cross_validation import train_test_split
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data / 16., digits.target % 2, random_state=2)
from sklearn.svm import LinearSVC, SVC
linear_svc = LinearSVC(loss="hinge").fit(X_train, y_train)
svc = SVC(kernel="linear").fit(X_train, y_train)
np.mean(linear_svc.predict(X_test) == svc.predict(X_test))
from sklearn.metrics.pairwise import rbf_kernel
line = np.linspace(-3, 3, 100)[:, np.newaxis]
kernel_value = rbf_kernel([[0]], line, gamma=1)
plt.plot(line, kernel_value.T)
from figures import plot_svm_interactive
plot_svm_interactive()
svc = SVC().fit(X_train, y_train)
svc.score(X_test, y_test)
Cs = [0.001, 0.01, 0.1, 1, 10, 100]
gammas = [0.001, 0.01, 0.1, 1, 10, 100]
from sklearn.grid_search import GridSearchCV
param_grid = {'C': Cs, 'gamma' : gammas}
grid_search = GridSearchCV(SVC(), param_grid, cv=5)
grid_search.fit(X_train, y_train)
grid_search.score(X_test, y_test)
# We extract just the scores
scores = [x[1] for x in grid_search.grid_scores_]
scores = np.array(scores).reshape(6, 6)
plt.matshow(scores)
plt.xlabel('gamma')
plt.ylabel('C')
plt.colorbar()
plt.xticks(np.arange(6), param_grid['gamma'])
plt.yticks(np.arange(6), param_grid['C']);
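# Added sketch (not in the original notebook): report the best (C, gamma) combination
# found by the grid search together with its cross-validated score.
print(grid_search.best_params_, grid_search.best_score_)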
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get the image data
Step2: Train a Gaussian mixture model and save it to a file
Step3: Run the segmentation faster by loading the model from the file
Step4: The seeds do not have to be provided if the model is loaded from a file
| <ASSISTANT_TASK:>
Python Code:
from imcut import pycut
import numpy as np
import scipy.ndimage
import matplotlib.pyplot as plt
from datetime import datetime
def make_data(sz=32, offset=0, sigma=80):
seeds = np.zeros([sz, sz, sz], dtype=np.int8)
seeds[offset + 12, offset + 9 : offset + 14, offset + 10] = 1
seeds[offset + 20, offset + 18 : offset + 21, offset + 12] = 1
img = np.ones([sz, sz, sz])
img = img - seeds
seeds[
offset + 3 : offset + 15, offset + 2 : offset + 6, offset + 27 : offset + 29
] = 2
img = scipy.ndimage.morphology.distance_transform_edt(img)
segm = img < 7
img = (100 * segm + sigma * np.random.random(img.shape)).astype(np.uint8)
return img, segm, seeds
# make_data()
img, seg, seeds = make_data(64, 20)
i = 30
plt.imshow(img[i, :, :], cmap='gray')
segparams = {
# 'method':'graphcut',
"method": "graphcut",
"use_boundary_penalties": False,
"boundary_dilatation_distance": 2,
"boundary_penalties_weight": 1,
"modelparams": {
"type": "gmmsame",
"fv_type": "intensity",
# 'fv_extern': fv_function,
"adaptation": "original_data",
},
}
gc = pycut.ImageGraphCut(img, segparams=segparams)
gc.set_seeds(seeds)
t0 = datetime.now()
gc.run()
print(f"time cosumed={datetime.now()-t0}")
plt.imshow(img[i, :, :], cmap='gray')
plt.contour(gc.segmentation[i,:,:])
plt.show()
mdl_stored_file = "test_model.p"
gc.save(mdl_stored_file)
# forget
gc = None
img, seg, seeds = make_data(56, 18)
gc = pycut.ImageGraphCut(img)
gc.load(mdl_stored_file)
gc.set_seeds(seeds)
t0 = datetime.now()
gc.run(run_fit_model=False)
print(f"time cosumed={datetime.now()-t0}")
plt.imshow(img[i, :, :], cmap='gray')
plt.contour(gc.segmentation[i,:,:])
plt.show()
# forget
gc = None
img, seg, seeds = make_data(56, 18)
gc = pycut.ImageGraphCut(img)
gc.load(mdl_stored_file)
t0 = datetime.now()
gc.run(run_fit_model=False)
print(f"time cosumed={datetime.now()-t0}")
plt.imshow(img[i, :, :], cmap='gray')
plt.contour(gc.segmentation[i,:,:])
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Flux Correction
Step7: 3. Key Properties --> Genealogy
Step8: 3.2. CMIP3 Parent
Step9: 3.3. CMIP5 Parent
Step10: 3.4. Previous Name
Step11: 4. Key Properties --> Software Properties
Step12: 4.2. Code Version
Step13: 4.3. Code Languages
Step14: 4.4. Components Structure
Step15: 4.5. Coupler
Step16: 5. Key Properties --> Coupling
Step17: 5.2. Atmosphere Double Flux
Step18: 5.3. Atmosphere Fluxes Calculation Grid
Step19: 5.4. Atmosphere Relative Winds
Step20: 6. Key Properties --> Tuning Applied
Step21: 6.2. Global Mean Metrics Used
Step22: 6.3. Regional Metrics Used
Step23: 6.4. Trend Metrics Used
Step24: 6.5. Energy Balance
Step25: 6.6. Fresh Water Balance
Step26: 7. Key Properties --> Conservation --> Heat
Step27: 7.2. Atmos Ocean Interface
Step28: 7.3. Atmos Land Interface
Step29: 7.4. Atmos Sea-ice Interface
Step30: 7.5. Ocean Seaice Interface
Step31: 7.6. Land Ocean Interface
Step32: 8. Key Properties --> Conservation --> Fresh Water
Step33: 8.2. Atmos Ocean Interface
Step34: 8.3. Atmos Land Interface
Step35: 8.4. Atmos Sea-ice Interface
Step36: 8.5. Ocean Seaice Interface
Step37: 8.6. Runoff
Step38: 8.7. Iceberg Calving
Step39: 8.8. Endoreic Basins
Step40: 8.9. Snow Accumulation
Step41: 9. Key Properties --> Conservation --> Salt
Step42: 10. Key Properties --> Conservation --> Momentum
Step43: 11. Radiative Forcings
Step44: 12. Radiative Forcings --> Greenhouse Gases --> CO2
Step45: 12.2. Additional Information
Step46: 13. Radiative Forcings --> Greenhouse Gases --> CH4
Step47: 13.2. Additional Information
Step48: 14. Radiative Forcings --> Greenhouse Gases --> N2O
Step49: 14.2. Additional Information
Step50: 15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3
Step51: 15.2. Additional Information
Step52: 16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3
Step53: 16.2. Additional Information
Step54: 17. Radiative Forcings --> Greenhouse Gases --> CFC
Step55: 17.2. Equivalence Concentration
Step56: 17.3. Additional Information
Step57: 18. Radiative Forcings --> Aerosols --> SO4
Step58: 18.2. Additional Information
Step59: 19. Radiative Forcings --> Aerosols --> Black Carbon
Step60: 19.2. Additional Information
Step61: 20. Radiative Forcings --> Aerosols --> Organic Carbon
Step62: 20.2. Additional Information
Step63: 21. Radiative Forcings --> Aerosols --> Nitrate
Step64: 21.2. Additional Information
Step65: 22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect
Step66: 22.2. Aerosol Effect On Ice Clouds
Step67: 22.3. Additional Information
Step68: 23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect
Step69: 23.2. Aerosol Effect On Ice Clouds
Step70: 23.3. RFaci From Sulfate Only
Step71: 23.4. Additional Information
Step72: 24. Radiative Forcings --> Aerosols --> Dust
Step73: 24.2. Additional Information
Step74: 25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic
Step75: 25.2. Historical Explosive Volcanic Aerosol Implementation
Step76: 25.3. Future Explosive Volcanic Aerosol Implementation
Step77: 25.4. Additional Information
Step78: 26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic
Step79: 26.2. Historical Explosive Volcanic Aerosol Implementation
Step80: 26.3. Future Explosive Volcanic Aerosol Implementation
Step81: 26.4. Additional Information
Step82: 27. Radiative Forcings --> Aerosols --> Sea Salt
Step83: 27.2. Additional Information
Step84: 28. Radiative Forcings --> Other --> Land Use
Step85: 28.2. Crop Change Only
Step86: 28.3. Additional Information
Step87: 29. Radiative Forcings --> Other --> Solar
Step88: 29.2. Additional Information
| <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'miroc', 'nicam16-8s', 'toplevel')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.flux_correction.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OASIS"
# "OASIS3-MCT"
# "ESMF"
# "NUOPC"
# "Bespoke"
# "Unknown"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Atmosphere grid"
# "Ocean grid"
# "Specific coupler grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "Option 1"
# "Option 2"
# "Option 3"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Type A"
# "Type B"
# "Type C"
# "Type D"
# "Type E"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "M"
# "Y"
# "E"
# "ES"
# "C"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "N/A"
# "irradiance"
# "proton"
# "electron"
# "cosmic ray"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
| <ASSISTANT_TASK:>
Python Code:
def countOddSquares(n, m):
    return int(m ** 0.5) - int((n - 1) ** 0.5)

n = 5
m = 100
print("Count is", countOddSquares(n, m))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The long (though not that long), classic and NOT Pythonic way
Step2: The Pythonic, NumPy way
| <ASSISTANT_TASK:>
Python Code:
from numpy import matrix
from numpy import empty
a=matrix(((2,5),(4,6)))
b=matrix(((1,3),(6,4)))
a
b
# shape is a tuple giving the number of rows and the number of columns
suma = empty((a.shape))
# the first for loop iterates over the rows
for i in range(0, a.shape[0]):
    # the second for loop iterates over the columns
for j in range(0, a.shape[1]):
suma[i,j] = a[i,j] + b[i,j]
print (suma)
suma = a+b
print(suma)
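# Added check (not in the original notebook): recompute the explicit-loop result under a
# different name and confirm it matches the vectorized sum.
suma_loop = empty(a.shape)
for i in range(a.shape[0]):
    for j in range(a.shape[1]):
        suma_loop[i, j] = a[i, j] + b[i, j]
print((suma_loop == a + b).all())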
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create the server instance
Step2: Login
Step3: Access the data as usual
Step4: Logout
| <ASSISTANT_TASK:>
Python Code:
import fmrest
fms = fmrest.Server('https://10.211.55.15',
user='admin',
password='admin',
database='Contacts',
layout='Contacts',
verify_ssl=False,
data_sources=[{'database': 'SecondDataSource', 'username': 'admin2', 'password': 'admin2'}]
)
fms.login()
record = fms.get_record(1)
record['portal_secondDataSource'][0]['SecondDataSource::name']
fms.logout()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What I like most about violin plots is that they show you the entire distribution of your data. If data inputs violate your assumptions (e.g. multimodal, full of null values, skewed by bad imputation or extreme outliers) you see the problems at a quick glance and in incredible detail. This is better than a few representative percentiles as in a box and whisker plot, or a table of summary statistics. They avoid the problem of oversaturation prevalent in scatter plots with lots of points, and reveal outliers more clearly than you would in a histogram without a lot of fine-tuning.
Step2: Fuzzywuzzy Similarity Scores
Step5: It’s rarely obvious which function is best for a given problem. Let’s consider five different fuzzy matching methods and compute similarity scores for each pair of strings. Using these scores, we’ll create some violin plots to determine which method is best for distinguishing between matches and not matches. (You could also consider combinations of scores though this comes at a higher computational cost.)
Step6: A few lines of code is all it takes to generate split violin plots using the Seaborn library. The purple distribution depicts a smoothed (sideways) histogram of fuzzy matching scores when Match is True, while the light-green shows the distribution of similarity scores when Match is False. When two distributions have little or no overlap along the y-axis, the fuzzy matching function will do a better job distinguishing between our binary classes.
Step7: Generally, these fuzzy matching scores do a good job in distinguishing between observations where the two names refer to the same product. For any method, a pair of names with a similarity score of 50 or more will probably refer to the same product.
Step9: For this data, the correlation coefficients give a similar ranking as achieved using the violin plots. The token-set-ratio method gives the strongest correlation to the Match variable while the QRatio method gives the weakest correlation.
Step10: In these violin plots, the similarity scores on the left appear to be more helpful in separating between matches and not-matches. There is less overlap between the True and False observations and the observations are more tightly clustered into their respective groups.
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from fuzzywuzzy import fuzz
import numpy as np
# some settings to be used throughout the notebook
pd.set_option('max_colwidth', 70)
wf_colors = ["#C7DEB1","#9763A4"]
# make some fake data for a demo split-violin plot
data1 = pd.DataFrame({'Variable': np.random.randn(100)*.2 + 1, 'Label':'Binary Case 1'})
data2 = pd.DataFrame({'Variable': np.random.randn(100)*.3, 'Label':'Binary Case 2'})
df = data1.append(data2)
# violin plots in seaborn require 2 catagorical variables ('x' and 'hue'). We use 'Label' for hue.
df['Category'] = '' # placeholder for 'x' categorical variable
# make the plot
fig, ax = plt.subplots(1,1,figsize=(8, 6))
sns.violinplot(x='Category', y="Variable", hue="Label", data=df, split=True, ax=ax, palette=wf_colors)
ax.set_xlabel(' ')
ax.set_ylabel('Some Continuous Variable', fontsize=16)
ax.set_title('Example Split Violin Plot', fontsize=18)
plt.show()
# read in data
data = pd.read_csv('productnames.csv')
df = data[['Product1', 'Product2', 'Match']]
# what does the data look like?
df.head()
print('Qratio: ', fuzz.QRatio('brown leather sofa', '12ft leather dark brown sofa'))
print('Wratio: ', fuzz.WRatio('brown leather sofa', '12ft leather dark brown sofa'))
print('token_set_ratio: ', fuzz.token_set_ratio('brown leather sofa', '12ft leather dark brown sofa'))
def get_scores(df, func, score_name):
    """Function for getting fuzzy similarity scores using a specified function."""
def _fuzzyscore(row, func=func):
        """Fuzzy matching score on two columns of a pandas DataFrame. Called via df.apply().

        Args:
            row (pd.Series): row of a pandas DataFrame with columns 'Product1' and 'Product2'
            func (function): returns a numeric similarity score between 'Product1' and
                'Product2'; defaults to the `func` passed to `get_scores`
        """
return func(row['Product1'], row['Product2'])
#get the actual scores
df[score_name] = df.apply(_fuzzyscore, axis=1)
#get scores for different fuzzy functions
get_scores(df, fuzz.QRatio, 'QRatio')
get_scores(df, fuzz.WRatio, 'WRatio')
get_scores(df, fuzz.partial_ratio, 'partial_ratio')
get_scores(df, fuzz.token_set_ratio, 'token_set_ratio')
get_scores(df, fuzz.token_sort_ratio, 'token_sort_ratio')
df.head()
plot_df = pd.melt(df, id_vars=['Match'], value_vars=['QRatio','WRatio', 'partial_ratio','token_set_ratio', 'token_sort_ratio'])
plot_df.columns = ['Match', 'Function', 'Fuzzy Score']
fig, ax = plt.subplots(1,1, figsize=(14, 5))
sns.violinplot(x="Function", y="Fuzzy Score", hue="Match", data=plot_df, split=True, ax=ax, palette=wf_colors)
ax.set_ylabel('Similarity Score', fontsize=18)
ax.set_xlabel('')
ax.legend(loc='lower right', fontsize=13, ncol=2)
ax.tick_params(axis='both', which='major', labelsize=16)
ax.set_title('Fuzzywuzzy Methods: similarity scores for matches and not matches', fontsize=20)
plt.show()
# make sure you have a "plots" folder
fig.savefig('blog_pic1.png')
df[['QRatio','WRatio', 'partial_ratio','token_set_ratio', 'token_sort_ratio', 'Match']].corr()
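# Added illustration (not in the original post), assuming the Match column is boolean as
# used for the plot hue above: accuracy of a naive rule that predicts a match whenever the
# similarity score is at least 50, the threshold suggested in the text.
for col in ['QRatio', 'WRatio', 'partial_ratio', 'token_set_ratio', 'token_sort_ratio']:
    print(col, ((df[col] >= 50) == df['Match']).mean())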
def make_fake_data(low, high, n=300):
    """Stacks three draws from a uniform distribution w/ bounds given by 'low' and 'high'.

    Args:
        low (list of ints): lower bounds for the three random draws
        high (list of ints): upper bounds for the three random draws
    """
rand_array = np.hstack((np.random.uniform(low=low[0], high=high[0], size=n),
np.random.uniform(low=low[1], high=high[1], size=n),
np.random.uniform(low=low[2], high=high[2], size=n)
))
return rand_array
# make fake data
true1 = make_fake_data([3, 33, 63], [12, 44, 72])
false1 = make_fake_data([18, 48, 78], [27, 57, 84])
true2 = make_fake_data([0, 30, 60], [15, 45, 75])
false2 = make_fake_data([15, 45, 75], [30, 60, 90])
fake_match_df = pd.DataFrame({'score1': false1, 'score2': false2, 'Match': np.full_like(false1, 0, dtype=bool)})
true_match_df = pd.DataFrame({'score1': true1, 'score2':true2, 'Match': np.full_like(true1, 1, dtype=bool)})
df = true_match_df.append(fake_match_df)
plot_df = pd.melt(df, id_vars=['Match'], value_vars=['score1', 'score2'])
plot_df.columns = ['Match', 'Function', 'Fuzzy Score']
fig, ax = plt.subplots(1,1, figsize=(12, 5))
sns.violinplot(x='Function', y='Fuzzy Score', hue="Match", data=plot_df, split=True, ax=ax, bw=.1, palette=["#C7DEB1","#9763A4"])
ax.set_ylabel('Similarity Score', fontsize=18)
ax.set_xlabel('')
ax.legend(loc='upper right', fontsize=12, ncol=2)
ax.tick_params(axis='both', which='major', labelsize=16)
ax.set_title('Irregular Data: Why Violin Plots are Better than Correlation Coefficients', fontsize=20)
fig.savefig('blog_pic2.png')
df.corr()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We use a pre-existing database containing a channel library and pipeline we have established.
Step2: Calibrating Mixers
Step3: If the plot server and client are open, then the data will be plotted along with fits from the calibration procedure. The calibration procedure automatically knows which digitizer and AWG units are needed in the process. The relevant instrument parameters are updated but not committed to the database. Therefore they may be rolled back if undesirable results are found.
Step4: Of course this is somewhat repetitive and can be sped up
Step5: Automatic Tuneup
| <ASSISTANT_TASK:>
Python Code:
from QGL import *
from auspex.qubit import *
cl = ChannelLibrary("my_config")
pl = PipelineManager()
spec_an = cl.new_spectrum_analzyer("SpecAn", "ASRL/dev/ttyACM0::INSTR", cl["spec_an_LO"])
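# Assumption (not shown in the original snippet): the pre-existing channel library already
# defines a qubit named "q2"; fetch it so the calibration calls below have a target.
q2 = cl["q2"]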
cal = MixerCalibration(q2, spec_an, mixer="measure")
cal.calibrate()
cal = RabiAmpCalibration(q2)
cal.calibrate()
cal = RamseyCalibration(q2)
cal.calibrate()
cals = [RabiAmpCalibration, RamseyCalibration, Pi2Calibration, PiCalibration]
[cal(q2).calibrate() for cal in cals]
cal = QubitTuneup(q2, f_start=5.2e9, f_stop=5.8e9, coarse_step=50e6, fine_step=0.5e6, averages=250, amp=0.1)
cal.calibrate()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1:
Step2: As we saw in the tut-events-vs-annotations tutorial, we can extract an
Step3: Note: we could also have loaded the events from file, using
Step4: You'll see from the output that
Step5: Notice that the Event IDs are in quotes; since we didn't provide an event
Step6: This time let's pass preload=True and provide an event dictionary; our
Step7: Notice that the output now mentions "1 bad epoch dropped". In the tutorial
Step8: Note: if you forget to provide the event dictionary to the
Step9: Notice that the individual epochs are sequentially numbered along the bottom
Step10: We can also pool across conditions easily, thanks to how MNE-Python handles
Step11: You can also pool conditions by passing multiple tags as a list. Note that
Step12: However, if no match is found, an error is returned
Step13: Selecting epochs by index
Step14: Selecting, dropping, and reordering channels
Step15: Changing channel name and type
Step16: Selection in the time domain
Step17: Cropping removed part of the baseline. When printing the
Step18: However, if you wanted to expand the time domain of an
Step19: Note that although time shifting respects the sampling frequency (the spacing
Step20: Note that if your analysis requires repeatedly extracting single epochs from
Step21: See the tut-epochs-dataframe tutorial for many more examples of the
Step22: The MNE-Python naming convention for epochs files is that the file basename
Step23: In almost all cases this will not require changing anything about your code.
Step24: Iterating over Epochs
Step25: If you want to iterate over
| <ASSISTANT_TASK:>
Python Code:
import os
import mne
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False).crop(tmax=60)
events = mne.find_events(raw, stim_channel='STI 014')
epochs = mne.Epochs(raw, events, tmin=-0.3, tmax=0.7)
print(epochs)
print(epochs.event_id)
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4, 'face': 5, 'buttonpress': 32}
epochs = mne.Epochs(raw, events, tmin=-0.3, tmax=0.7, event_id=event_dict,
preload=True)
print(epochs.event_id)
del raw # we're done with raw, free up some memory
print(epochs.drop_log[-4:])
epochs.plot(n_epochs=10)
print(epochs['face'])
# pool across left + right
print(epochs['auditory'])
assert len(epochs['auditory']) == (len(epochs['auditory/left']) +
len(epochs['auditory/right']))
# pool across auditory + visual
print(epochs['left'])
assert len(epochs['left']) == (len(epochs['auditory/left']) +
len(epochs['visual/left']))
print(epochs[['right', 'bottom']])
try:
print(epochs[['top', 'bottom']])
except KeyError:
print('Tag-based selection with no matches raises a KeyError!')
print(epochs[:10]) # epochs 0-9
print(epochs[1:8:2]) # epochs 1, 3, 5, 7
print(epochs['buttonpress'][:4]) # first 4 "buttonpress" epochs
print(epochs['buttonpress'][[0, 1, 2, 3]]) # same as previous line
epochs_eeg = epochs.copy().pick_types(meg=False, eeg=True)
print(epochs_eeg.ch_names)
new_order = ['EEG 002', 'STI 014', 'EOG 061', 'MEG 2521']
epochs_subset = epochs.copy().reorder_channels(new_order)
print(epochs_subset.ch_names)
del epochs_eeg, epochs_subset
epochs.rename_channels({'EOG 061': 'BlinkChannel'})
epochs.set_channel_types({'EEG 060': 'ecg'})
print(list(zip(epochs.ch_names, epochs.get_channel_types()))[-4:])
# let's set them back to the correct values before moving on
epochs.rename_channels({'BlinkChannel': 'EOG 061'})
epochs.set_channel_types({'EEG 060': 'eeg'})
shorter_epochs = epochs.copy().crop(tmin=-0.1, tmax=0.1, include_tmax=True)
for name, obj in dict(Original=epochs, Cropped=shorter_epochs).items():
print('{} epochs has {} time samples'
.format(name, obj.get_data().shape[-1]))
print(shorter_epochs)
# shift times so that first sample of each epoch is at time zero
later_epochs = epochs.copy().shift_time(tshift=0., relative=False)
print(later_epochs.times[:3])
# shift times by a relative amount
later_epochs.shift_time(tshift=-7, relative=True)
print(later_epochs.times[:3])
del shorter_epochs, later_epochs
eog_data = epochs.get_data(picks='EOG 061')
meg_data = epochs.get_data(picks=['mag', 'grad'])
channel_4_6_8 = epochs.get_data(picks=slice(4, 9, 2))
for name, arr in dict(EOG=eog_data, MEG=meg_data, Slice=channel_4_6_8).items():
print('{} contains {} channels'.format(name, arr.shape[1]))
df = epochs.to_data_frame(index=['condition', 'epoch', 'time'])
df.sort_index(inplace=True)
print(df.loc[('auditory/left', slice(0, 10), slice(100, 107)),
'EEG 056':'EEG 058'])
del df
epochs.save('saved-audiovisual-epo.fif', overwrite=True)
epochs_from_file = mne.read_epochs('saved-audiovisual-epo.fif', preload=False)
print(type(epochs))
print(type(epochs_from_file))
print(all([isinstance(epochs, mne.BaseEpochs),
isinstance(epochs_from_file, mne.BaseEpochs)]))
for epoch in epochs[:3]:
print(type(epoch))
for index in range(3):
print(type(epochs[index]))
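# Added sketch (not in the original tutorial): a common next step is averaging the epochs
# of one condition into an Evoked object, e.g. for plotting the evoked response.
evoked_aud_left = epochs['auditory/left'].average()
print(evoked_aud_left)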
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Python Function uN represents the expansion (2), and the printed
Step2: Differentiation
Step3: where uN1 $=u_N^{(1)}$ and uN2 $=u_N^{(2)}$.
Step4: We can look at the sparsity patterns of $(d^{(1)}{kj})$ and $(d^{(2)}{kj})$
Step5: just to see that they are upper triangular. We now ask is there a better and faster
Step6: and since uN11 = uN1 we see that we have achieved the same result as in
Step7: So $(d^{(0)}_{kj})$ now contains two nonzero diagonals, whereas $(d^{(1)}_{kj})$ is
Step8: We get that uN22 = uN2, so the Chebyshev-Petrov-Galerkin projection works. The matrices involved are now
Step9: So there are now three nonzero diagonals in $(d^{(0)}_{kj})$, whereas the differentiation matrix
Step10: which evidently is exactly the same result. It turns out that this is not strange. If we multiply
| <ASSISTANT_TASK:>
Python Code:
from shenfun import *
import sympy as sp
x = sp.Symbol('x')
ue = sp.sin(sp.pi*x)
N = 16
SN = FunctionSpace(N, 'C')
uN = Function(SN, buffer=ue)
uN
SM = FunctionSpace(0, 'C')
uM = Function(SM, buffer=ue, abstol=1e-16, reltol=1e-16)
print(uM[:N] - uN[:N])
print(len(uM))
uN1 = project(Dx(uN, 0, 1), SN)
uN2 = project(Dx(uN, 0, 2), SN)
uN1
u = TrialFunction(SN)
v = TestFunction(SN)
D0 = inner(u, v)
D1 = inner(Dx(u, 0, 1), v)
D2 = inner(Dx(u, 0, 2), v)
w0 = Function(SN) # work array
uN1 = Function(SN)
uN2 = Function(SN)
uN1 = D0.solve(D1.matvec(uN, w0), uN1)
uN2 = D0.solve(D2.matvec(uN, w0), uN2)
uN1
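# Consistency check (a sketch): the mass-matrix route above should reproduce
# the project() result computed earlier, up to round-off.
import numpy as np
print(np.max(np.abs(uN1 - project(Dx(uN, 0, 1), SN))))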
%matplotlib inline
import matplotlib.pyplot as plt
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.spy(D1.diags(), markersize=2, color='r')
ax2.spy(D2.diags(), markersize=2, color='b')
DN = FunctionSpace(N+2, 'C', bc=(0, 0))
v = TestFunction(DN)
D0 = inner(u, v)
D1 = inner(Dx(u, 0, 1), v)
uN11 = Function(SN)
uN11 = D0.solve(D1.matvec(uN, w0), uN11)
print(uN11-uN1)
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.spy(D0.diags(), markersize=2, color='r')
ax2.spy(D1.diags(), markersize=2, color='b')
BN = FunctionSpace(N+4, 'C', bc=(0, 0, 0, 0))
v = TestFunction(BN)
D0 = inner(u, v)
D2 = inner(Dx(u, 0, 2), v)
uN22 = Function(SN)
uN22 = D0.solve(D2.matvec(uN, w0), uN22)
print(uN22-uN2)
fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.spy(D0.diags(), markersize=2, color='r')
ax2.spy(D2.diags(), markersize=2, color='b')
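# The next cell is a hand-coded backward recurrence for the Chebyshev
# derivative coefficients: c_k f^(1)_k = f^(1)_{k+2} + 2(k+1) u_{k+1},
# with c_0 = 2 and c_k = 1 otherwise, swept from k = N-2 down to 0.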
f1 = np.zeros(N+1)
ck = np.ones(N); ck[0] = 2
for k in range(N-2, -1, -1):
f1[k] = (f1[k+2]+2*(k+1)*uN[k+1])/ck[k]
print(f1[:-1]-uN1)
import pprint
DN = FunctionSpace(N+2, 'C', bc=(0, 0))
v = TestFunction(DN)
D0 = inner(u, v)
D1 = inner(Dx(u, 0, 1), v)
pprint.pprint(dict(D0))
pprint.pprint(dict(D1))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Medical tests
Step4: Now we can create a Test object with parameters chosen for demonstration purposes (most medical tests are better than this!)
Step5: If you are curious, here's the nested dictionary that computes the likelihoods
Step6: And here's how we update the Test object with a positive outcome
Step9: The positive test provides evidence that the patient is sick, increasing the probability from 0.1 to 0.25.
Step10: To update a MetaTest, we update each of the hypothetical Test objects. The return value from Update is the normalizing constant, which is the total probability of the data under the hypothesis.
Step11: At the top level, there are two tests, with different values of t. Initially, they are equally likely.
Step12: Here are the results.
Step14: Because a positive test is more likely if t=0.4, the positive test is evidence in favor of the hypothesis that t=0.4.
Step15: Here's the posterior predictive distribution
Step19: After seeing the test, the probability that the patient is sick is 0.25, which is the same result we got with t=0.3.
Step20: MakeMetaTest makes a MetaTest object starting with a given PMF of t.
Step21: Here are the results
Step22: Same as before. Now we can extract the posterior distribution of t.
Step23: Having seen one positive test, we are a little more inclined to believe that t=0.4; that is, that the false positive rate for this patient/test is high.
Step24: Finally, we can make the posterior marginal distribution of sick/notsick, which is a weighted mixture of the conditional distributions
Step25: At this point we have a MetaTest that contains our updated information about the test (the distribution of t) and about the patient that tested positive.
Step26: Then we can compute the posterior marginal distribution of sick/notsick for the two patients
Step27: So in Scenario A the probability that both patients are sick is 1/16.
Step28: We can confirm that this result is correct by simulation. Here's a generator that generates random pairs of patients
Step29: And here's a function that runs the simulation for a given number of iterations
| <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
from thinkbayes2 import Pmf, Suite
from fractions import Fraction
class Test(Suite):
    """Represents beliefs about a patient based on a medical test."""
def __init__(self, p, s, t, label='Test'):
# initialize the prior probabilities
d = dict(sick=p, notsick=1-p)
super(Test, self).__init__(d, label)
# store the parameters
self.p = p
self.s = s
self.t = t
# make a nested dictionary to compute likelihoods
self.likelihood = dict(pos=dict(sick=s, notsick=t),
neg=dict(sick=1-s, notsick=1-t))
def Likelihood(self, data, hypo):
        """data: 'pos' or 'neg'
        hypo: 'sick' or 'notsick'
        """
return self.likelihood[data][hypo]
p = Fraction(1, 10) # prevalence
s = Fraction(9, 10) # sensitivity
t = Fraction(3, 10) # false positive rate
test = Test(p, s, t)
test.Print()
test.likelihood
test.Update('pos')
test.Print()
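# Hand check (a sketch): Bayes' rule gives P(sick | pos) = p*s / (p*s + (1-p)*t),
# which should match the updated value printed above (1/4 = 0.25 here).
print(p*s / (p*s + (1 - p)*t))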
class MetaTest(Suite):
    """Represents a set of tests with different values of `t`."""
def Likelihood(self, data, hypo):
        """data: 'pos' or 'neg'
        hypo: Test object
        """
# the return value from `Update` is the total probability of the
# data for a hypothetical value of `t`
return hypo.Update(data)
q = Fraction(1, 2)
t1 = Fraction(2, 10)
t2 = Fraction(4, 10)
test1 = Test(p, s, t1, 'Test(t=0.2)')
test2 = Test(p, s, t2, 'Test(t=0.4)')
metatest = MetaTest({test1:q, test2:1-q})
metatest.Print()
metatest.Update('pos')
metatest.Print()
def MakeMixture(metapmf, label='mix'):
    """Make a mixture distribution.
    Args:
      metapmf: Pmf that maps from Pmfs to probs.
      label: string label for the new Pmf.
    Returns: Pmf object.
    """
mix = Pmf(label=label)
for pmf, p1 in metapmf.Items():
for x, p2 in pmf.Items():
mix.Incr(x, p1 * p2)
return mix
predictive = MakeMixture(metatest)
predictive.Print()
def MakeMetaTest(p, s, pmf_t):
    """Makes a MetaTest object with the given parameters.
    p: prevalence
    s: sensitivity
    pmf_t: Pmf of possible values for `t`
    """
tests = {}
for t, q in pmf_t.Items():
label = 'Test(t=%s)' % str(t)
tests[Test(p, s, t, label)] = q
return MetaTest(tests)
def Marginal(metatest):
    """Extracts the marginal distribution of t."""
marginal = Pmf()
for test, prob in metatest.Items():
marginal[test.t] = prob
return marginal
def Conditional(metatest, t):
    """Extracts the distribution of sick/notsick conditioned on t."""
for test, prob in metatest.Items():
if test.t == t:
return test
pmf_t = Pmf({t1:q, t2:1-q})
metatest = MakeMetaTest(p, s, pmf_t)
metatest.Print()
metatest = MakeMetaTest(p, s, pmf_t)
metatest.Update('pos')
metatest.Print()
Marginal(metatest).Print()
cond1 = Conditional(metatest, t1)
cond1.Print()
cond2 = Conditional(metatest, t2)
cond2.Print()
MakeMixture(metatest).Print()
convolution = metatest + metatest
convolution.Print()
marginal = MakeMixture(metatest+metatest)
marginal.Print()
marginal = MakeMixture(metatest) + MakeMixture(metatest)
marginal.Print()
from random import random
def flip(p):
return random() < p
def generate_pair_A(p, s, pmf_t):
while True:
sick1, sick2 = flip(p), flip(p)
t = pmf_t.Random()
test1 = flip(s) if sick1 else flip(t)
t = pmf_t.Random()
test2 = flip(s) if sick2 else flip(t)
yield test1, test2, sick1, sick2
def run_simulation(generator, iters=100000):
pmf_t = Pmf([0.2, 0.4])
pair_iterator = generator(0.1, 0.9, pmf_t)
outcomes = Pmf()
for i in range(iters):
test1, test2, sick1, sick2 = next(pair_iterator)
if test1 and test2:
outcomes[sick1, sick2] += 1
outcomes.Normalize()
return outcomes
outcomes = run_simulation(generate_pair_A)
outcomes.Print()
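# Cross-check (a sketch): the analytic result above gives P(both sick) = 1/16
# in Scenario A, so the simulated frequency of the (True, True) outcome should
# be close to it (keys are (sick1, sick2) tuples as yielded by the generator).
print('simulated P(both sick):', outcomes[True, True])
print('analytic  P(both sick):', 1 / 16)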
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Selecting parts of a tree
Step2: Fuzzy tip label matching
Step3: Get TreeNode object from node idx label
Step4: Get tip labels from a node idx label
Step5: The .get_tip_labels() function can be combined with the .get_mrca_idx_from_tip_labels() function to get a list of names that are all descended from a common ancestor. For example, in the rooted tree above, if I wanted a list of all tip labels in the ingroup clade, I could select just one sample from each of its two subclades with .get_mrca_idx_from_tip_labels() to get the node idx of their common ancestor, and then pass this to .get_tip_labels() to return the full list of descendants. This is an efficient way to build a list of tip label names for a large clade without having to write them all out by hand.
Step6: Modifying ToyTrees
Step7: There is also a function .unroot() to remove the root node from trees. This creates a polytomy at the root. Technically there still exists a point on the treenode structure that we refer to as the root, but it does not appear in drawings.
Step8: Drop tips
Step9: Ladderize
Step10: Rotate nodes
Step11: Resolve polytomy
Step12: Chaining functions and arguments
Step13: Attributes and functions
Step14: Saving/writing ToyTrees
| <ASSISTANT_TASK:>
Python Code:
import toytree
import toyplot
import numpy as np
# load a tree for this tutorial
tre = toytree.tree("https://eaton-lab.org/data/Cyathophora.tre")
# store a rooted copy of tre (more on this later...)
rtre = tre.root(['33588_przewalskii', '32082_przewalskii'])
rtre.draw();
# a multitree storing the unrooted and rooted toytrees
mtre = toytree.mtree([tre, rtre])
# plot shows that idx labels change with rerooting
mtre.draw(
node_labels='idx',
node_sizes=15,
);
# get an idx label of przewalskii clade using names, wildcard or regex
print('tre: ', tre.get_mrca_idx_from_tip_labels(names=['33588_przewalskii', '32082_przewalskii']))
print('tre: ', tre.get_mrca_idx_from_tip_labels(wildcard="prz"))
print('tre: ', tre.get_mrca_idx_from_tip_labels(regex="[0-9]*_przewalskii"))
# get an idx label of przewalskii clade using names, wildcard or regex
print('rtre:', rtre.get_mrca_idx_from_tip_labels(names=['33588_przewalskii', '32082_przewalskii']))
print('rtre:', rtre.get_mrca_idx_from_tip_labels(wildcard="prz"))
print('rtre:', rtre.get_mrca_idx_from_tip_labels(regex="[0-9]*_przewalskii"))
tre.idx_dict[19]
# get list of tips descended from a specific node in the tree
tre.get_tip_labels(idx=19)
# get list of all tips in the tree
tre.get_tip_labels()
# get node index (idx) of mrca
idx = rtre.get_mrca_idx_from_tip_labels(["29154_superba", "40578_rex"])
# get tip labels descended from node idx
rtre.get_tip_labels(idx=idx)
# three ways to do the same re-rooting
rtre = tre.root(names=["32082_przewalskii", "33588_przewalskii"])
rtre = tre.root(wildcard="prz")
rtre = tre.root(regex="[0-9]*_przewalskii")
# draw the rooted tree
rtre.draw(node_labels='idx', node_sizes=15);
# an unrooted tree
rtre.unroot().draw();
rtre.drop_tips(wildcard="cyatho").draw();
# dropping tips unladderized the tree, so we re-ladderized it before plotting
rtre.drop_tips(wildcard="cyatho").ladderize().draw();
rtre.rotate_node(wildcard="prz").draw();
toytree.tree("((a,b,c),d);").resolve_polytomy(dist=1.).draw();
# readable style for writing long draw functions
canvas, axes, mark = (
tre
.root(wildcard="prz")
.drop_tips(wildcard="superba")
.rotate_node(wildcard="30686")
.draw(
tip_labels_align=True,
edge_style={
"stroke": toytree.colors[3],
}
)
)
rtre.get_tip_labels() # list of labels in node-plot order
rtre.get_tip_coordinates() # array of tip plot coordinates in idx order
rtre.get_node_values() # list in node-plot order
rtre.get_node_dict() # dict mapping idx:name for each tip
rtre.get_node_coordinates() # array of node plot coordinates in idx order
rtre.get_edge_values() # list of edge values in edge plot order
rtre.get_edge_values_mapped(); # list of edge values with mapped dict in edge plot order
rtre.is_bifurcating() # boolean
rtre.is_rooted(); # boolean
rtre.nnodes # number of nodes in the tree
rtre.ntips # number of tips in the tree
rtre.newick # the newick representation of the tree
rtre.features # list of node features that can be accessed
rtre.style; # dict of plotting style of tree
# if no file handle is entered then the newick string is returned
rtre.write()
# the fmt (format) options write different newick formats.
rtre.write(tree_format=9)
# write to file
rtre.write("/tmp/mytree.tre", tree_format=0)
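# Round-trip sketch (assumes the file written above exists): reading the saved
# newick back in should give a tree with the same number of tips.
reloaded = toytree.tree("/tmp/mytree.tre")
print(reloaded.ntips == rtre.ntips)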
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Example
Step2: Shooting method
Step3: Example 2
Step4: Example 3
Step5: Example 4
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
# An example solution function with the boundary conditions marked and slope s
def y(x):
return np.exp(-x**2)*np.sin(x)
xx = np.linspace(0, 1, 100)
yy = y(xx)
plt.plot(xx,yy, lw=2)
plt.plot([0, 1] , [y(0), y(1)], 'ro', lw=2.0)
plt.arrow(0, 0, 0.1, 0.2, head_width=0.05, head_length=0.02, fc='k', ec='k')
plt.text(-0.1, 0.1, "$s$", fontsize=20)
plt.text(0, -0.05, "$y_a$", fontsize=20)
plt.text(1, 0.25, "$y_b$", fontsize=20)
plt.xlim(-0.5, 1.5)
plt.ylim(-0.1, 0.45)
plt.grid()
plt.xlabel("$t$", fontsize=20)
plt.ylabel("$y(t)$", fontsize=20)
plt.show()
# Analytical solution for the BVP presented above
def y(t, v0, y0):
return -0.5*9.8*t**2.0 + v0*t + y0
# Initial conditions
a = 0
b = 4
y0 = 30
# Conditions found via integration
v0 = 12.12
# Plot
tt = np.linspace(a, b, 100)
yy = y(tt, v0, y0)
plt.plot(tt, yy, lw=2.0)
plt.plot(1.236, 37.5, "ro", lw=2.0)
plt.xlabel("$t$", fontsize=20)
plt.ylabel("$y(t)$", fontsize=20)
plt.show()
from scipy.integrate import odeint
from scipy.optimize import bisect
# Limits of t, the boundary condition yb, and an array of many t values between a and b
a, b = (0, 1)
yb = 3.0
t = np.linspace(a, b, 101)
# Remember the change of variables!
# v' = 4y
# y' = v
# Function for the solver where we declare the IVP
def yprime(_y, t):
y, v = _y
return [v, 4*y]
# Function whose zero we seek, i.e., where we try to hit the 2nd boundary condition.
def F(s):
    # Initial condition of the IVP that we keep changing: y'(0) = s, y(a=0) = 1
y0 = [1, s]
# y, v
sol = odeint(yprime, y0, t)
return sol[:,0][-1] - yb
# Check F(-1)F(0) < 0 for the bisection method
# -1, 0
print(F(-1))
print(F(0))
sbest = bisect(F, -1, 0)
print(F(sbest))
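# Alternative (a sketch): scipy's brentq root finder on the same bracketing
# interval should return essentially the same shooting slope as bisect,
# usually in fewer function evaluations.
from scipy.optimize import brentq
print(brentq(F, -1, 0))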
## Plots
# 1) For comparison, generate solutions of the IVP with the slopes we suspect
y1 = odeint(yprime, [1, -1], t)
y2 = odeint(yprime, [1, 0], t)
# 2) The plot of our solution with the best slope
ybest = odeint(yprime, [1, sbest], t)
# 3) Plotting boilerplate (i.e., plot the functions)
plt.figure(figsize=(10,7))
plt.plot(1, 3, 'ro', lw=3.0)
plt.plot(t, y1[:,0], 'g--', lw=2.0, label="$s=-1$")
plt.plot(t, y2[:,0], 'b--', lw=2.0, label="$s=0$")
plt.plot(t, ybest[:,0],'r-', lw=2.0, label="$s=s_{best}$")
plt.legend(loc='best', fontsize=20)
plt.xlabel("$t$", fontsize=20)
plt.ylabel("$y(t)$", fontsize=20)
plt.show()
# Boundary conditions and time interval
a, b = (0, 1)
yb = 1./3.*np.exp(1)
t = np.linspace(a, b, 101)
def yprime(_y, t):
y, v = _y
return [v, y + 2./3.*np.exp(t)]
def F(s):
y0 = [0, s]
sol = odeint(yprime, y0, t)
return sol[:,0][-1] - yb
print(F(0))
print(F(1))
# Solution
sbest = bisect(F, 0, 1)
print(sbest)
print(F(sbest))
# Plot of the approximations
y1 = odeint(yprime, [0, 0], t)
y2 = odeint(yprime, [0, 1], t)
plt.figure(figsize=(10,7))
plt.plot(t, y1[:,0],'g--', lw=2.0, label="$s=0$")
plt.plot(t, y2[:,0],'c--', lw=2.0, label="$s=1$")
# Plot of the result
ybest = odeint(yprime, [0, sbest], t)
plt.plot(1, yb, 'ro')
plt.plot(t, ybest[:,0],'r-', lw=2.0, label="$s=s_{best}$")
# Analytical solution
plt.plot(t, 1./3.*t*np.exp(t), 'b--', lw=2.0, label="Solution")
plt.legend(loc='best', fontsize=20)
plt.xlabel("$t$", fontsize=20)
plt.ylabel("$y(t)$", fontsize=20)
plt.show()
# Boundary conditions and time interval
a, b = (0, 1)
yb = np.log(2.0)
t = np.linspace(a, b, 101)
def yprime(_y, t):
y, v = _y
return [v, 2.*np.exp(-2*y)*(1-t**2)]
def F(s):
y0 = [0, s]
sol = odeint(yprime, y0, t)
return sol[:,0][-1] - yb
print(F(-1))
print(F(1))
# Solution
sbest = bisect(F, -1, 1)
print(sbest)
# Plot of the approximations
y1 = odeint(yprime, [0, -1], t)
y2 = odeint(yprime, [0, 1], t)
ybest = odeint(yprime, [0, sbest], t)
plt.figure(figsize=(10,7))
plt.plot(t, y1[:,0],'g--', lw=2.0, label="$s=-1$")
plt.plot(t, y2[:,0],'c--', lw=2.0, label="$s=1$")
plt.plot(t, ybest[:,0],'r-', lw=2.0, label="$s=s_{best}$")
# Analytical solution
plt.plot(t, np.log(t**2.0 + 1), 'b--', lw=2.0, label="Solution")
plt.plot(1, yb, 'ro')
plt.legend(loc='best', fontsize=20)
plt.xlabel("$t$", fontsize=20)
plt.ylabel("$y(t)$", fontsize=20)
plt.show()
# Boundary conditions and time interval
a, b = (0, 1)
y2_1 = 2.0
t = np.linspace(a, b, 101)
def yprime(_y, t):
    # In this case "y" and "v" represent y1 and y2
y1, y2 = _y
return [1.0 / y2, t + np.tan(y1)]
def F(s):
# y1_0 = 0, y2_0 = s
y0 = [0, s]
sol = odeint(yprime, y0, t, mxstep=100)
return sol[:,1][-1] - y2_1
print("F(0.9) = ", F(0.9))
print("F(1.1) = ", F(1.1))
# Solution
sbest = bisect(F, 0.9, 1.1)
print("sbest =",sbest)
print("F(sbest) = ", F(sbest))
yy = odeint(yprime, [0, sbest], t)
# Plot of the results
plt.figure(figsize=(10, 7))
plt.plot(t, yy[:, 0], lw=2.0, label="$y_1$")
plt.plot(t, yy[:, 1], lw=2.0, label="$y_2$")
# Analytical plot
y1 = np.arctan(t)
y2 = t**2.0 + 1
plt.plot(t, y1, "r--", lw=2.0, label="$y_1$ Analytical")
plt.plot(t, y2, "m--", lw=2.0, label="$y_2$ Analytical")
plt.legend(loc='best', fontsize=20)
plt.xlabel("$t$", fontsize=20)
plt.ylabel("$y(t)$", fontsize=20)
plt.xlim([0, 1])
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: The following function selects the columns I need.
Step3: Read data from Cycle 1.
Step4: Read data from Cycle 2.
Step5: Read data from Cycle 3.
Step6: Read data from Cycle 4.
Step9: Read data from Cycle 5.
Step11: In each cycle, a few questions were not asked in some countries.
Step12: I'll remove the cycle/country groups that are missing netuse or rlgblg.
Step15: Income is reported on a different scale in different cycles, and differs substantially across countries. So I'm replacing it with rank on a per-country basis
Step18: To fill missing values, I am drawing random samples from the available values, on a per-country basis.
Step20: Concatenate the cycles.
Step21: TV watching time on average weekday
Step22: Radio listening, total time on average weekday.
Step23: Newspaper reading, total time on average weekday.
Step24: Personal use of Internet, email, www
Step25: Belong to a particular religion or denomination
Step26: How religious
Step27: Total household net income, all sources
Step28: Year born
Step29: Shifted to mean near 0
Step30: Number of years of education, converted to ranks.
Step31: Country codes
Step32: Run the model
Step33: Here's the model with all control variables and all media variables
Step34: Now using the filled variables
Step35: Now adding inwyr07
Step36: Group by country
Step37: Run a sample country
Step38: Run all countries
Step39: Run OLS model with rlgdgr
Step40: Let's see what happens if we add quadratic terms for edurank and hincrank
| <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
import numpy as np
import pandas as pd
import thinkstats2
import thinkplot
import statsmodels.formula.api as smf
from iso_country_codes import COUNTRY
%matplotlib inline
def read_cycle(filename):
    """Reads a file containing ESS data and selects columns.
    filename: string
    returns: DataFrame
    """
df = pd.read_stata(filename, convert_categoricals=False)
if 'hinctnta' not in df.columns:
df['hinctnta'] = df.hinctnt
if 'inwyr' not in df.columns:
df['inwyr'] = df.inwyye
cols = ['cntry', 'inwyr', 'tvtot', 'tvpol', 'rdtot', 'rdpol',
'nwsptot', 'nwsppol', 'netuse',
'rlgblg', 'rlgdgr', 'eduyrs', 'hinctnta', 'yrbrn',
'eisced', 'pspwght', 'pweight']
df = df[cols]
return df
df1 = read_cycle('ESS1e06_4.dta')
df1.head()
df2 = read_cycle('ESS2e03_4.dta')
df2.head()
df3 = read_cycle('ESS3e03_5.dta')
df3.head()
df4 = read_cycle('ESS4e04_3.dta')
df4.head()
df5 = read_cycle('ESS5e03_2.dta')
df5.head()
def clean_cycle(df):
    """Cleans data from one cycle.
    df: DataFrame
    """
df.tvtot.replace([77, 88, 99], np.nan, inplace=True)
df.rdtot.replace([77, 88, 99], np.nan, inplace=True)
df.nwsptot.replace([77, 88, 99], np.nan, inplace=True)
df.netuse.replace([77, 88, 99], np.nan, inplace=True)
df.tvpol.replace([66, 77, 88, 99], np.nan, inplace=True)
df.rdpol.replace([66, 77, 88, 99], np.nan, inplace=True)
df.nwsppol.replace([66, 77, 88, 99], np.nan, inplace=True)
df.eduyrs.replace([77, 88, 99], np.nan, inplace=True)
df.rlgblg.replace([7, 8, 9], np.nan, inplace=True)
df.rlgdgr.replace([77, 88, 99], np.nan, inplace=True)
df.hinctnta.replace([77, 88, 99], np.nan, inplace=True)
df.yrbrn.replace([7777, 8888, 9999], np.nan, inplace=True)
df.inwyr.replace([9999], np.nan, inplace=True)
df['hasrelig'] = (df.rlgblg==1).astype(int)
df.loc[df.rlgblg.isnull(), 'hasrelig'] = np.nan
df['yrbrn60'] = df.yrbrn - 1960
df['inwyr07'] = df.inwyr - 2007 + np.random.uniform(-0.5, 0.5, len(df))
cycles = [df1, df2, df3, df4, df5]
for cycle in cycles:
clean_cycle(cycle)
def resample(df):
    """Resample data by country.
    df: DataFrame
    returns: map from country code to DataFrame
    """
res = {}
grouped = df.groupby('cntry')
for name, group in grouped:
sample = group.sample(len(group), weights=group.pspwght, replace=True)
sample.index = range(len(group))
res[name] = sample
return res
# each cycle_map is a map from country code to DataFrame
cycle_maps = [resample(cycle) for cycle in cycles]
for cycle_map in cycle_maps:
print(len(cycle_map), 'countries')
def check_variables(name, group):
    """Print variables missing from a group.
    name: group name (country code)
    group: DataFrame
    """
varnames = ['cntry', 'tvtot', 'tvpol', 'rdtot', 'rdpol',
'nwsptot', 'nwsppol', 'netuse', 'inwyr07',
'rlgblg', 'rlgdgr', 'eduyrs', 'hinctnta',
'yrbrn', 'pspwght', 'pweight']
for var in varnames:
n = len(group[var].dropna())
if (n < 100):
print(name, var, len(group[var].dropna()))
for i, cycle_map in enumerate(cycle_maps):
print('Cycle', i+1)
for name, group in cycle_map.items():
check_variables(name, group)
del cycle_maps[0]['FR']
del cycle_maps[0]['DE']
del cycle_maps[1]['FR']
del cycle_maps[1]['FI']
ee = cycle_maps[4]['EE']
ee.inwyr07 = 3 + np.random.uniform(-0.5, 0.5, len(ee))
def replace_var_with_rank(name, df, old, new):
    """Replaces a scale variable with a rank from 0-1.
    Creates a new column.
    name: country code
    df: DataFrame
    old: old variable name
    new: new variable name
    """
# jitter the data
series = df[old] + np.random.uniform(-0.25, 0.25, len(df))
# if there's no data, just put in random values
if len(series.dropna()) < 10:
df[new] = np.random.random(len(df))
return
# map from values to ranks
cdf = thinkstats2.Cdf(series)
df[new] = cdf.Probs(series)
# make sure NaN maps to NaN
df.loc[df[old].isnull(), new] = np.nan
def replace_with_ranks(cycle_map):
    """Replace variables within countries.
    cycle_map: map from country code to DataFrame
    """
for name, group in cycle_map.items():
replace_var_with_rank(name, group, 'hinctnta', 'hincrank')
replace_var_with_rank(name, group, 'eduyrs', 'edurank')
for cycle_map in cycle_maps:
replace_with_ranks(cycle_map)
def fill_var(df, old, new):
    """Fills missing values.
    Creates a new column.
    df: DataFrame
    old: old variable name
    new: new variable name
    """
# find the NaN rows
null = df[df[old].isnull()]
# sample from the non-NaN rows
fill = df[old].dropna().sample(len(null), replace=True)
fill.index = null.index
# replace NaNs with the random sample
df[new] = df[old].fillna(fill)
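# Sanity check (a sketch): after filling, the new column should have no missing
# values and roughly the same distribution as the observed values, e.g.
#   fill_var(df1, 'hasrelig', 'hasrelig_f')
#   assert df1['hasrelig_f'].isnull().sum() == 0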
def fill_all_vars(df):
    """Fills missing values in the variables we need.
    df: DataFrame
    """
for old in ['hasrelig', 'rlgdgr', 'yrbrn60', 'edurank', 'hincrank',
'tvtot', 'rdtot', 'nwsptot', 'netuse', 'inwyr07']:
new = old + '_f'
fill_var(df, old, new)
def fill_vars_by_country(cycle_map):
for name, group in cycle_map.items():
fill_all_vars(group)
for cycle_map in cycle_maps:
fill_vars_by_country(cycle_map)
def concat_groups(cycle_map):
    """Concat all countries in a cycle.
    cycle_map: map from country code to DataFrame
    returns: DataFrame
    """
return pd.concat(cycle_map.values(), ignore_index=True)
dfs = [concat_groups(cycle_map) for cycle_map in cycle_maps]
for df in dfs:
print(len(df))
df = pd.concat(dfs, ignore_index=True)
print(df.shape)
df.head()
df.tvtot.value_counts().sort_index()
df.rdtot.value_counts().sort_index()
df.nwsptot.value_counts().sort_index()
df.netuse.value_counts().sort_index()
df.rlgblg.value_counts().sort_index()
df.rlgdgr.value_counts().sort_index()
df.hincrank.describe()
df.yrbrn.describe()
cdf = thinkstats2.Cdf(df.yrbrn)
thinkplot.PrePlot(1)
thinkplot.Cdf(cdf)
thinkplot.Config(xlabel='Year born', ylabel='CDF',
                 title='Distribution of year born', legend=False)
df.yrbrn60.describe()
df.edurank.describe()
df.cntry.value_counts().sort_index()
df.rlgdgr.value_counts().sort_index()
df.inwyr07.describe()
def run_model(df, formula):
model = smf.logit(formula, data=df)
results = model.fit(disp=False)
return results
formula = ('hasrelig ~ yrbrn60 + edurank + hincrank +'
'tvtot + rdtot + nwsptot + netuse')
res = run_model(df, formula)
res.summary()
formula = ('hasrelig_f ~ yrbrn60_f + edurank_f + hincrank_f +'
'tvtot_f + rdtot_f + nwsptot_f + netuse_f')
res = run_model(df, formula)
res.summary()
formula = ('hasrelig_f ~ inwyr07_f + yrbrn60_f + edurank_f + hincrank_f +'
'tvtot_f + rdtot_f + nwsptot_f + netuse_f')
res = run_model(df, formula)
res.summary()
def extract_res(res, var='netuse_f'):
param = res.params[var]
pvalue = res.pvalues[var]
stars = '**' if pvalue < 0.01 else '*' if pvalue < 0.05 else ''
return res.nobs, param, stars
extract_res(res)
grouped = df.groupby('cntry')
group = grouped.get_group('IS')
run_model(group, formula).summary()
def run_logits(grouped, formula, var):
for name, group in grouped:
country = '%14.14s' % COUNTRY[name]
model = smf.logit(formula, data=group)
results = model.fit(disp=False)
nobs, param, stars = extract_res(results, var=var)
arrow = '<--' if stars and param > 0 else ''
print(country, nobs, '%0.3g'%param, stars, arrow, sep='\t')
formula = ('hasrelig_f ~ inwyr07_f + yrbrn60_f + edurank_f + hincrank_f +'
'tvtot_f + rdtot_f + nwsptot_f + netuse_f')
run_logits(grouped, formula, 'netuse_f')
run_logits(grouped, formula, 'hincrank_f')
run_logits(grouped, formula, 'edurank_f')
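# Variation (a sketch, not part of the original analysis): collect the
# per-country logit coefficients into a DataFrame instead of printing them,
# so they can be sorted or plotted later.
def collect_logits(grouped, formula, var):
    rows = []
    for name, group in grouped:
        results = smf.logit(formula, data=group).fit(disp=False)
        nobs, param, stars = extract_res(results, var=var)
        rows.append((COUNTRY[name], nobs, param, stars))
    return pd.DataFrame(rows, columns=['country', 'nobs', 'param', 'stars'])
# example: collect_logits(grouped, formula, 'netuse_f').sort_values('param')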
formula = ('rlgdgr_f ~ inwyr07_f + yrbrn60_f + edurank_f + hincrank_f +'
'tvtot_f + rdtot_f + nwsptot_f + netuse_f')
model = smf.ols(formula, data=df)
results = model.fit(disp=False)
results.summary()
def run_ols(grouped, formula, var):
for name, group in grouped:
model = smf.ols(formula, data=group)
results = model.fit(disp=False)
nobs, param, stars = extract_res(results, var=var)
arrow = '<--' if stars and param > 0 else ''
print(name, len(group), '%0.3g '%param, stars, arrow, sep='\t')
run_ols(grouped, formula, 'netuse_f')
run_ols(grouped, formula, 'edurank_f')
run_ols(grouped, formula, 'hincrank_f')
df['edurank_f2'] = df.edurank_f**2
df['hincrank_f2'] = df.hincrank_f**2
formula = ('rlgdgr_f ~ inwyr07_f + yrbrn60_f + edurank_f + edurank_f2 + hincrank_f +'
'tvtot_f + rdtot_f + nwsptot_f + netuse_f')
run_ols(grouped, formula, 'edurank_f')
run_ols(grouped, formula, 'edurank_f2')
formula = ('rlgdgr_f ~ inwyr07_f + yrbrn60_f + edurank_f + edurank_f2 + '
'hincrank_f + hincrank_f2 + '
'tvtot_f + rdtot_f + nwsptot_f + netuse_f')
run_ols(grouped, formula, 'hincrank_f')
run_ols(grouped, formula, 'edurank_f')
run_ols(grouped, formula, 'netuse_f')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read in data.
Step2: Calculate Robinson-Foulds distance, and tree length.
Step3: Calculate RF score, pairwise over all trees in sample.
| <ASSISTANT_TASK:>
Python Code:
import dendropy
from dendropy.utility.fileutils import find_files
import numpy as np
taxa = dendropy.TaxonSet()
ours = dendropy.Tree.get_from_path('../best.phy', 'newick', taxon_set=taxa)
non_ml = dendropy.Tree.get_from_path('../Trees/MLE/ExaML_result.SquamataPyron.MLE.2b', 'newick', taxon_set = taxa)
print ours.length()
print non_ml.length()
print ours.symmetric_difference(non_ml)
taxa = dendropy.TaxonSet()
pb = dendropy.Tree.get_from_path('../Trees/TotalOptimization/Ranked/2598364', 'nexus', taxon_set=taxa)
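# NOTE (assumption): `uotrees` (the sample of unoptimized trees) and `pb_o`
# (the optimized tree) are defined in earlier notebook cells that are not shown
# here. A hypothetical loading step would look something like:
# uolist = find_files(top='garli_unopt/', filename_filter='*.tre')
# uotrees = [dendropy.Tree.get_from_path(f, "nexus") for f in uolist]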
rfs = [tree.symmetric_difference(pb_o) for tree in uotrees]
rfs
olist = find_files(top='garli_opt/', filename_filter='*.tre')
print olist
otrees = [dendropy.Tree.get_from_path(filename, "nexus") for filename in olist]
n = len(uotrees)
udiffarray = np.zeros((n,n))
for i, ele1 in enumerate(uotrees):
for j, ele2 in enumerate(uotrees):
if j >= i:
break # Since the matrix is symmetrical we don't need to
# calculate everything
difference = ele1.symmetric_difference(ele2)
udiffarray[i, j] = difference
udiffarray[j, i] = difference
diffarray
diffarray
o_tl = [tree.length() for tree in otrees]
print o_tl
uo_tl = [mle.length() for mle in uotrees]
print uo_tl
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
| <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cams', 'sandbox-1', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
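# Example (a sketch with placeholder values only): a filled-in cell would read
# DOC.set_author("Jane Doe", "jane.doe@example.org")
# and the value-setting cells below would read, e.g.,
# DOC.set_value("Primitive equations")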
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reading the rock property catalog
Step2: Plotting!
| <ASSISTANT_TASK:>
Python Code:
import requests
import pandas as pd
class RPC(object):
def __init__(self):
pass
def _query_ssw(self, filters, properties, options):
base_url = "http://www.subsurfwiki.org/api.php"
q = "action=ask&query=[[RPC:%2B]]"
q += ''.join(filters) if filters else ''
q += '|%3F' + '|%3F'.join(properties) if properties else ''
q += '|' + '|'.join(options) if options else ''
q += '&format=json'
return requests.get(base_url, params=q)
def _get_formats(self, response):
formats = {}
for item in response.json()['query']['printrequests']:
if item[u'mode'] == 1:
formats[item[u'label']] = item[u'typeid'].lstrip('_')
return formats
def _build_dataframe(self, response):
Takes the response of a query and returns a pandas
dataframe containing the results.
try:
s = list(response.json()['query']['results'].keys())
except Exception as e:
raise e
samples = [i[4:] for i in s]
df = pd.DataFrame(samples)
# We'll need to know the formats of the columns.
formats = self._get_formats(response)
properties = formats.keys()
# Now traverse the JSON and build the DataFrame.
for prop in properties:
temp = []
for row in list(s):
p = response.json()['query']['results'][row]['printouts']
if p[prop]:
if formats[prop] == 'qty': # Quantity, number + unit
temp.append(p[prop][0]['value'])
elif formats[prop] == 'wpg': # Wiki page
temp.append(p[prop][0]['fulltext'])
else: # Anything else: num, txt, tem, etc.
temp.append(p[prop][0])
else:
temp.append(None)
df[prop] = temp
df = df.set_index(0)
df.index.name = None
return df
def query(self, filters=None, properties=None, options=None):
r = self._query_ssw(filters, properties, options)
if r.status_code == 200:
return self._build_dataframe(r)
else:
print("Something went wrong.")
rpc = RPC()
filters = ["[[lithology::Shale||Sandstone||Limestone]][[Delta::%2B]]"]
properties = ['Citation', 'Description', 'Lithology', 'Vp', 'Vs', 'Rho', 'Delta', 'Epsilon']
options = ["limit=100"]
df = rpc.query(filters, properties, options)
df.head()
len(df)
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
plt.scatter(df.Vs, df.Vp)
liths = df.groupby('Lithology')
fig = plt.figure(figsize=(6,6))
ax = plt.subplot(111)
for name, group in liths:
ax.plot(group.Vp, group.Vs, marker='o', linestyle='', ms=6, label=name)
ax.legend(loc=2)
title = '{0}\ rocks\ from\ the\ RPC'.format(len(df))
plt.title('$\mathrm{' + title + '}$', size=18)
plt.xlabel('$V_\mathrm{P}\ [m/s]$', size=16)
plt.ylabel('$V_\mathrm{S}\ [m/s]$', size=16)
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k labelled examples and the test set about 19,000. Given these sizes, it should be possible to train models quickly on any machine.
Step3: Extract the dataset from the compressed .tar.gz file.
Step4: Problem 1
Step6: Now let's load the data in a more manageable format. Since, depending on your computer setup, you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk, and curate them independently. Later we'll merge them into a single dataset of manageable size.
Step7: Problem 2
Step8: Question
Step9: Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9.
Step10: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
Step11: Problem 4
Step12: Finally, let's save the data for later reuse
Step14: Problem 5
Step15: Problem 6
Step16: Question
| <ASSISTANT_TASK:>
Python Code:
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
%matplotlib inline
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import tarfile
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
url = 'http://yaroslavvb.com/upload/notMNIST/'
def maybe_download(filename, expected_bytes, force=False):
Download a file if not present, and make sure it's the right size.
if force or not os.path.exists(filename):
filename, _ = urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', filename)
else:
raise Exception(
'Failed to verify' + filename + '. Can you get to it with a browser?')
return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
print(type(train_filename))
num_classes = 10
np.random.seed(133)
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s.' % (root, filename))
else:
print('Extracting data for %s. This may take a while. Please wait.' % root)
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall()
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print(data_folders)
return data_folders
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
#Problem 1, Tong's solution -> done
def display_image(folder_index = 0, image_index = 0):
try:
sample_folder = train_folders[folder_index]
image_files = os.listdir(sample_folder)
sample_image = os.path.join(sample_folder, image_files[image_index])
print('Displaying image: ', sample_image)
display(Image(filename = sample_image ))
except:
print('Indices out of bound.')
display_image(1, 5)
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load_letter(folder, min_num_images):
Load the data for a single letter label.
image_files = os.listdir(folder)
dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
dtype=np.float32)
image_index = 0
print(folder)
for image in os.listdir(folder):
image_file = os.path.join(folder, image)
try:
image_data = (ndimage.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[image_index, :, :] = image_data
image_index += 1
except IOError as e:
print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
num_images = image_index
dataset = dataset[0:num_images, :, :]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' %
(num_images, min_num_images))
print('Full dataset tensor:', dataset.shape)
print('Mean:', np.mean(dataset))
print('Standard deviation:', np.std(dataset))
return dataset
def maybe_pickle(data_folders, min_num_images_per_class, force=False):
dataset_names = []
for folder in data_folders:
set_filename = folder + '.pickle'
dataset_names.append(set_filename)
if os.path.exists(set_filename) and not force:
# You may override by setting force=True.
print('%s already present - Skipping pickling.' % set_filename)
else:
print('Pickling %s.' % set_filename)
dataset = load_letter(folder, min_num_images_per_class)
try:
with open(set_filename, 'wb') as f:
pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', set_filename, ':', e)
return dataset_names
train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1800)
#Problem 2, Tong's solution -> done
def display_nd_image(folder_index = 0, image_index = 0):
try:
folder = train_datasets[folder_index]
print("Display image in folder: ", folder)
with open(folder, 'rb') as f:
sample_dataset = pickle.load(f)
img = sample_dataset[image_index, :, :]
plt.imshow(img, cmap = "Greys")
plt.show()
except:
print('Something is wrong.')
display_nd_image(1, 5)
#Problem 3, Tong's solution -> done
print(train_datasets)
sizes = []
for dataset in train_datasets:
with open(dataset, 'rb') as f:
data = pickle.load(f)
sizes.append(data.shape[0])
print("The samples sizes for each class are: ")
print(sizes)
print("Average: ", np.average(sizes))
print("Stdev: ", np.std(sizes))
print("Sum: ", np.sum(sizes))
#Very balanced
def make_arrays(nb_rows, img_size):
if nb_rows:
dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
labels = np.ndarray(nb_rows, dtype=np.int32)
else:
dataset, labels = None, None
return dataset, labels
def merge_datasets(pickle_files, train_size, valid_size=0):
num_classes = len(pickle_files)
valid_dataset, valid_labels = make_arrays(valid_size, image_size)
train_dataset, train_labels = make_arrays(train_size, image_size)
vsize_per_class = valid_size // num_classes
tsize_per_class = train_size // num_classes
start_v, start_t = 0, 0
end_v, end_t = vsize_per_class, tsize_per_class
end_l = vsize_per_class+tsize_per_class
for label, pickle_file in enumerate(pickle_files):
try:
with open(pickle_file, 'rb') as f:
letter_set = pickle.load(f)
# let's shuffle the letters to have random validation and training set
np.random.shuffle(letter_set)
if valid_dataset is not None:
valid_letter = letter_set[:vsize_per_class, :, :]
valid_dataset[start_v:end_v, :, :] = valid_letter
valid_labels[start_v:end_v] = label
start_v += vsize_per_class
end_v += vsize_per_class
train_letter = letter_set[vsize_per_class:end_l, :, :]
train_dataset[start_t:end_t, :, :] = train_letter
train_labels[start_t:end_t] = label
start_t += tsize_per_class
end_t += tsize_per_class
except Exception as e:
print('Unable to process data from', pickle_file, ':', e)
raise
return valid_dataset, valid_labels, train_dataset, train_labels
train_size = 200000
valid_size = 10000
test_size = 10000
valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
#Problem 4, Tong's solution -> done
#Print some random images and lables from each set, see if they match
def check_data(dataset, lables, index=0):
labelset = ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I','J']
img = dataset[index, :, :]
label = labelset[lables[index]]
print("Image:")
plt.imshow(img, cmap = "Greys")
plt.show()
print('Lable: ', label)
check_data(train_dataset, train_labels, index = 1001)
check_data(valid_dataset, valid_labels, index = 11)
check_data(test_dataset, test_labels, index = 9)
#LGTM
print(train_labels[1:100])
pickle_file = 'notMNIST.pickle'
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
#Problem 5, Tong's solution -> done
#Why is there overlap??!
train_indices = train_dataset[0]
print(train_dataset.shape[0])
print(train_dataset.item(100, 27, 6))
#Brute-force checking how many rows are identical between train and valid
def overlap_rate(a_dataset, b_dataset, sample_size = 1000):
identical_count = 0
test_size = min(a_dataset.shape[0], sample_size)
for i in range(test_size):
a_record = a_dataset[i, :, :]
for j in range(b_dataset.shape[0]):
b_record = b_dataset[j, :, :]
if np.array_equal(a_record, b_record):
identical_count += 1
print('Sample size:', str(test_size))
print('Fraction of a dataset that is overlapped in b dataset:', str(identical_count*1.0/test_size))
overlap_rate(train_dataset, valid_dataset) #39%, surprisingly high!
overlap_rate(train_dataset, test_dataset) #58%, even higher
Optional questions:
- consider using np.allclose for near duplicates
- sanitized validation and test sets: left for later
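# Added sketch (not part of the original solution): exact duplicates between two
# datasets can be found much faster with a hash lookup than with the O(n*m)
# brute-force comparison above; near-duplicates (the np.allclose idea) would still
# need a different approach. The helper name overlap_rate_hashed is ours.
import hashlib
def overlap_rate_hashed(a_dataset, b_dataset):
    # hash the raw bytes of every image in b once, then probe with images from a
    b_hashes = {hashlib.sha1(img.tobytes()).hexdigest() for img in b_dataset}
    hits = sum(1 for img in a_dataset
               if hashlib.sha1(img.tobytes()).hexdigest() in b_hashes)
    return hits / a_dataset.shape[0]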
%%capture
#Learn reshape
#l = np.ndarray(range(27), shape=(3, 3, 3))
a = np.arange(27).reshape((3, 3, 3))
b = a.reshape(3, 9)
print(a);
print(b);
#Problem 6, Tong's solution Version 1: no tuning of hyperparameters
#Take subset of training data, reshape for regression
train_size = 1000
train = train_dataset[:train_size, :, :]
test = test_dataset.reshape(test_dataset.shape[0], image_size * image_size)
X = train.reshape(train_size, image_size * image_size)
Y = train_labels[:train_size]
#Build regression graph
logreg = LogisticRegression(C=1.0)
#Fit the model
logreg.fit(X, Y)
#Test predictions on test set
Z = logreg.predict(test)
#Evaluate
np.mean(Z == test_labels) #Accuracy 85%
#V2: tune hyperparameters with the validation set. First do this 'by hand'
valid = valid_dataset.reshape(valid_dataset.shape[0], image_size * image_size)
Cs = np.logspace(0.001, 10, num=50)
Accuracys = []
for C in Cs:
logregC = LogisticRegression(C=C)
logregC.fit(X, Y)
pred = logregC.predict(valid)
acc = np.mean(pred == valid_labels)
Accuracys.append(acc)
Accuracys = np.array(Accuracys)
plt.plot(Cs, Accuracys)
#Looks like changing C doesn't matter all that much. Why?
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Measure response times and plot results
| <ASSISTANT_TASK:>
Python Code:
# Imports needed by the cells below (numpy, matplotlib, textwrap for the tick labels, and the qancode package).
import numpy as np
import textwrap as tw
import matplotlib.pyplot as plt
import qancode
rc_url= 'https://v100rc2.demo.encodedcc.org/'
%%capture --no-stderr --no-display output
prod_url="https://test.encodedcc.org"
qa = qancode.QANCODE(rc_url=rc_url, prod_url=prod_url)
num_trials = 50
item_types = [
"/search/?type=Experiment&biosample_ontology.term_name=whole+organism",
"/search/?type=Experiment&target.label=H3K4me3",
"/search/?type=Experiment&assay_slims=DNA+methylation",
"/search/?type=Experiment&status=released",
"/matrix/?type=Experiment&status=released",
"/search/?searchTerm=hippocampus",
"/experiments/ENCSR079YAP/",
"/experiments/ENCSR296ASC/",
"/publication-data/ENCSR089EOA/",
"/reference-epigenome-matrix/?type=Experiment&related_series.@type=ReferenceEpigenome&replicates.library.biosample.donor.organism.scientific_name=Homo+sapiens"
]
info = qa.check_response_time(item_types=item_types, n=num_trials, alt_format=True)
prod_url_short = prod_url.split('/')[2]
test_url_short = rc_url.split('/')[2]
xticklabels= []
response_time_types = ['es_time','queue_time','render_time','wsgi_time','total_time']
fig, axes = plt.subplots(5, 2, sharex=True, sharey='row', figsize=(max(12,3*len(item_types)), 20))
for i in range(0,5):
axes[i,0].grid(True,axis='y',color='#B5B5B5', linestyle='--', linewidth=1)
axes[i,1].grid(True,axis='y',color='#B5B5B5', linestyle='--', linewidth=1)
fig.add_subplot(111, frameon=False,xticks=[0],yticks=[0]) # Large plot for the Title
plt.tick_params(labelcolor='none', top='off', bottom='off', left='off', right='off')
plt.grid(False)
plt.xlabel('Item type', fontsize=20, labelpad=60)
plt.ylabel('Response time, ms', fontsize=20, labelpad=30)
plt.title('Profiling Results', fontsize=20, pad=30)
x = np.arange(0, float(len(item_types)))
# Check response time returns a nested dict organized in this structure:
# {item_type: {url: {response_type: (average, stdev)}}}
# This iterates over each item_type and then response_type
i = 0
for item in info:
xticklabels.append(tw.fill(item,13))
for j in range(0,5):
response_time_type = response_time_types[j]
ax1 = axes[j, 0]
ax2 = axes[j, 1]
try:
a = ax1.errorbar(
x[i],
info[item][prod_url_short][response_time_type][0], yerr=info[item][prod_url_short][response_time_type][1],
fmt='_b',
linestyle='')
b = ax2.errorbar(
x[i],
info[item][test_url_short][response_time_type][0], yerr=info[item][test_url_short][response_time_type][1],
fmt='_r',
linestyle='')
#a[-1][0].set_linestyle('--')
#b[-1][0].set_linestyle('--')
except KeyError: # format=json queries don't have render time - this will fill in a 0 value for those.
ax1.errorbar(x[i], 0, yerr=0, fmt='b_', linestyle='')
ax2.errorbar(x[i], 0, yerr=0, fmt='b_', linestyle='')
ax1.autoscale(axis='y')
i = i+1
axes[0,0].set_title(prod_url)
axes[0,1].set_title(rc_url)
axes[0,0].set_ylabel('ES time')
axes[1,0].set_ylabel('Queue time')
axes[2,0].set_ylabel('Render time')
axes[3,0].set_ylabel('WSGI time')
axes[4,0].set_ylabel('Total time')
axes[4,0].set_xticks(x)
axes[4,0].set_xticklabels(xticklabels)
axes[4,1].set_xticklabels(xticklabels)
plt.savefig('/Users/jennifer/Desktop/fromnotebook.png',bbox_inches='tight')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 2. The expressions from assignments 3 and 4 (in the slides) have a name of their own,
Step2: Exercise 4. A tautology is an expression that always evaluates to true, regardless of the values assigned to its variables. (A generic checker is sketched below.)
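A minimal, generic tautology checker (added sketch, not part of the original exercise; the function name is ours): enumerate every truth assignment with itertools.product and test whether the expression is True for all of them.
from itertools import product

def check_tautology(expr, n_vars):
    # expr is a function of n_vars booleans; a tautology is True for every assignment
    return all(expr(*values) for values in product([False, True], repeat=n_vars))

# example: (A or B) -> (A or B), written as not(A or B) or (A or B)
print(check_tautology(lambda A, B: not (A or B) or (A or B), 2))  # True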
| <ASSISTANT_TASK:>
Python Code:
## Exercise 1 - worked solution
for A in [False, True]:
for B in [False, True]:
print(A, B, not(A or B))
## Exercise 3 - worked solution
# check that -(-A | -B) == A & B
for A in [False, True]:
for B in [False, True]:
print(A, B, not(not A or not B), A and B)
# check by computer
for A in [False, True]:
for B in [False, True]:
if not(not A or not B) != (A and B):
print("The expression -(-A | -B) is not equal to (A & B) for A", A, "and B", B)
## Exercise 4 - worked solution
# check by computer, (A or B) -> (A or B)
# expression translated as -(A or B) or (A or B)
is_tautology = True ## until proven otherwise
for A in [False, True]:
for B in [False, True]:
expr = not(A or B) or (A or B)
if not expr:
is_tautology = False
if is_tautology:
print("De expressie (A or B) -> (A or B) is een tautologie")
else:
print("De expressie (A or B) -> (A or B) is geen tautologie")
# controle door computer, (A or B) -> (A and B)
# uitdrukking vertaald met -(A or B) or (A and B)
is_tautology = True ## tot het tegendeel bewezen is
for A in [False, True]:
for B in [False, True]:
expr = not(A or B) or (A and B)
if not expr:
is_tautology = False
if is_tautology:
print("De expressie (A or B) -> (A and B) is een tautologie")
else:
print("De expressie (A or B) -> (A and B) is geen tautologie")
# controle door computer, (-A -> B) and (-A -> -B)
# uitdrukking vertaald met (-(-A) or B) and (-(-A) or -B)
is_tautology = True ## tot het tegendeel bewezen is
for A in [False, True]:
for B in [False, True]:
expr = (not(not(A)) or B) and (not(not(A)) or not(B))
if not expr:
is_tautology = False
if is_tautology:
print("De expressie (-A -> B) and (-A -> -B) is een tautologie")
else:
print("De expressie (-A -> B) and (-A -> -B) is geen tautologie")
# controle door computer, ((A -> B) and (B -> C)) -> (A -> C)
# uitdrukking vertaald met -((-A or B) and (-B or C)) or (-A or C)
is_tautology = True ## tot het tegendeel bewezen is
for A in [False, True]:
for B in [False, True]:
for C in [False, True]:
expr = not((not A or B) and (not B or C)) or (not A or C)
if not expr:
is_tautology = False
if is_tautology:
print("De expressie ((A -> B) and (B -> C)) -> (A -> C) is een tautologie")
else:
print("De expressie ((A -> B) and (B -> C)) -> (A -> C) is geen tautologie")
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Function definitions. Here we consider a hard-coded two-layer perceptron with one hidden layer, using the hyperbolic tangent as activation function, and a linear output layer. The output of the perceptron can hence be written as $\hat{f}(x,\theta) = \sum_{i=1}^{3} v_i\,\sigma(w_i x + b_i)$ with $\sigma(x)=\tanh(x)$, where the weights $v_i, w_i$ and biases $b_i$ are collected in the parameter vector $\theta$.
Step2: The cost function is the mean-squared error, i.e., $J(\theta) = \frac{1}{N}\sum_{k=1}^{N}\bigl(\hat{f}(x_k,\theta) - y_k\bigr)^2$, evaluated over the $N$ training samples.
Step3: Here, we compare two different optimizers. The first is classical steepest gradient descent with a fixed step size. We evaluate the cost function in each step of the loop.
Step4: The second optimizer is the Adam algorithm [1]. The Adam algorithm uses adaptive estimates of lower-order moments of the gradient to improve the convergence of gradient descent. Due to its good performance, the Adam algorithm is very widely used in machine learning and in particular deep learning nowadays. (The update equations are summarized at the end of this description.)
Step5: Carry out the optimization using 50000 iterations with gradient descent and Adam.
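For reference (added summary, matching the implementations below): steepest gradient descent updates $\theta \leftarrow \theta - \epsilon\,\nabla_\theta J(\theta)$ with fixed step size $\epsilon$, while Adam, with gradient $g_t$ at step $t$, uses
$$m_t = \beta_1 m_{t-1} + (1-\beta_1)\,g_t, \qquad v_t = \beta_2 v_{t-1} + (1-\beta_2)\,g_t^2,$$
$$\hat{m}_t = \frac{m_t}{1-\beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1-\beta_2^t}, \qquad \theta_t = \theta_{t-1} - \alpha\,\frac{\hat{m}_t}{\sqrt{\hat{v}_t}+\epsilon}.$$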
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from sklearn.preprocessing import MinMaxScaler
function_select = 5
def myfun(x):
functions = {
1: np.power(x,2), # quadratic function
2: np.sin(x), # sinus
3: np.sign(x), # signum
4: np.exp(x), # exponential function
5: np.abs(x)
}
return functions.get(function_select)
# Generate training data.
N = 32
x_train = np.linspace(-2, 2, num=N).reshape(-1,1)
# Generate the evaluation data.
# (can exceed the range of the training data to evaluate the prediction capabilities)
x_eval = np.linspace(-2, 2, num=N).reshape(-1,1)
def sigma(x):
return np.tanh(x)
# First order derivative of sigma (here tanh)
def sigma_prime(x):
return 1-(np.tanh(x))**2
def MLP(x,theta):
# hard-coded MLP, all parameters are inside theta
y = theta[0]*sigma(theta[1]*x + theta[2]) + theta[3]*sigma(theta[4]*x+theta[5]) + theta[6]*sigma(theta[7]*x+theta[8])
return y
def cost_function(x, y, theta):
# cost function is the mean-squared error between the MLP output on the training set x and the targets y
difference = np.array([MLP(e, theta) for e in x]) - y
return np.dot(difference.T, difference)/len(x)
# gradient of the cost function
def cost_function_gradient(x, y, theta):
gradient = np.zeros(9)
for k in range(len(x)):
ig = np.zeros(9)
ig[0] = sigma(theta[1]*x[k]+theta[2])
ig[1] = theta[0]*sigma_prime(theta[1]*x[k]+theta[2])*x[k]
ig[2] = theta[0]*sigma_prime(theta[1]*x[k]+theta[2])
ig[3] = sigma(theta[4]*x[k]+theta[5])
ig[4] = theta[3]*sigma_prime(theta[4]*x[k]+theta[5])*x[k]
ig[5] = theta[3]*sigma_prime(theta[4]*x[k]+theta[5])
ig[6] = sigma(theta[7]*x[k]+theta[8])
ig[7] = theta[6]*sigma_prime(theta[7]*x[k]+theta[8])*x[k]
ig[8] = theta[6]*sigma_prime(theta[7]*x[k]+theta[8])
gradient += 2*(MLP(x[k],theta) - y[k])*ig
return gradient / len(x)
def approx_1d_function_gd(x_train, theta_initial, epochs):
# Generate labels for training data (here: the image of the selected function applied to X_train)
y_train = myfun(x_train)
# initialize weights to random data
theta = theta_initial
epsilon = 0.001
cost_evolution = np.zeros(epochs)
# carry out gradient descent
for k in range(epochs):
gradient = cost_function_gradient(x_train, y_train, theta)
theta = theta - epsilon * gradient
cost_evolution[k] = cost_function(x_train, y_train, theta)
return theta, cost_evolution
def approx_1d_function_adam(x_train, theta_initial, epochs):
y_train = myfun(x_train)
theta = theta_initial
beta1 = 0.9
beta2 = 0.999
alpha = 0.001
epsilon = 1e-8
cost_evolution = np.zeros(epochs)
m = np.zeros(theta.shape)
t = 0
v = np.zeros(theta.shape)
for k in range(epochs):
t += 1
g = cost_function_gradient(x_train, y_train, theta)
m = beta1*m + (1-beta1)*g
v = beta2*v + (1-beta2)*(g**2)
mhat = m/(1-beta1**t)
vhat = v/(1-beta2**t)
theta = theta - alpha*mhat/(np.sqrt(vhat)+epsilon)
cost_evolution[k] = cost_function(x_train, y_train, theta)
return theta, cost_evolution
epochs = 50000
np.random.seed(1)
theta_initial = np.random.randn(9)
theta_adam, cost_evolution_adam = approx_1d_function_adam(x_train, theta_initial, epochs)
theta_gd, cost_evolution_gd = approx_1d_function_gd(x_train, theta_initial, epochs)
# compute evaluation
predictions = MLP(x_eval, theta_adam)
fig = plt.figure(1, figsize=(18,6))
font = {'size' : 14}
plt.rc('font', **font)
plt.rc('text', usetex=matplotlib.checkdep_usetex(True))
plt.rc('text.latex', preamble=r'\usepackage{amsmath}\usepackage{amssymb}\usepackage{bm}')
ax = fig.add_subplot(1, 2, 1)
plt.plot(x_eval, myfun(x_eval), '-', color='royalblue', linewidth=1.0)
plt.plot(x_eval, predictions, '-', label='output', color='darkorange', linewidth=2.0)
plt.plot(x_train, myfun(x_train), '.', color='royalblue',markersize=14)
plt.xlim((min(x_train),max(x_train)))
plt.grid(which='both');
plt.rcParams.update({'font.size': 14})
plt.xlabel('$x$');
plt.ylabel('$y$')
plt.title('3 neurons in hidden layer with %d iterations of gradient descent' % epochs)
plt.legend(['Function $f(x)$', r'MLP output $\hat{f}(x,\bm{\theta})$', 'Training set'])
ax = fig.add_subplot(1, 2, 2)
plt.plot(x_eval, [theta_adam[0]*sigma(theta_adam[1]*x + theta_adam[2]) for x in x_eval], '--', label='output', color='xkcd:azure', linewidth=2.0)
plt.plot(x_eval, [theta_adam[3]*sigma(theta_adam[4]*x + theta_adam[5]) for x in x_eval], '--', label='output', color='xkcd:coral', linewidth=2.0)
plt.plot(x_eval, [theta_adam[6]*sigma(theta_adam[7]*x + theta_adam[8]) for x in x_eval], '--', label='output', color='xkcd:green', linewidth=2.0)
plt.grid(which='both');
plt.xlim((min(x_train),max(x_train)))
plt.xlabel('$x$')
plt.ylabel('$y$')
plt.title('Weighted output of the 3 neurons')
plt.legend(['$v_1\sigma(w_1x+b_1)$', '$v_2\sigma(w_2x+b_2)$', '$v_3\sigma(w_3x+b_3)$'])
#plt.savefig('3neuron_MLP_abs.pdf',bbox_inches='tight')
plt.show()
fig = plt.figure(2, figsize=(8,6))
plt.plot(range(epochs), cost_evolution_gd)
plt.plot(range(epochs), cost_evolution_adam)
plt.xlim((0,20000))
plt.ylim((0,1))
plt.grid(which='both')
plt.legend(['Steepest Gradient Descent', 'Adam'])
plt.xlabel('Iterations')
plt.ylabel(r'Cost function $J(\bm{\theta},\mathbb{X}^{[\textrm{train}]},\mathbb{Y}^{[\textrm{train}]})$')
#plt.savefig('3neuron_MLP_abs_cost.pdf',bbox_inches='tight')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Descriptive analysis
Step2: Create the categorical variable
Step3: Looking at the distribution of countries by the new categorical variable income_level in the graph below, the majority of them fall into the low income per capita group, followed by high income and medium income.
Step4: Let's save the income_level variable in our data frame. We must explicitly convert it to object because of an incompatibility between the pandas and statsmodels packages.
Step5: Let's take a look at the population means by the income_level categories. The table and graph below show that the life expectancy means are different among the countries' income levels, as our alternative hypothesis $H_a$ states.
Step6: In the next section, we'll see whether $H_o$ can be rejected or not (the test statistic is summarized right after this outline).
Step7: Post hoc test
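For reference (added note, not in the original write-up): the ANOVA carried out below with smf.ols tests $H_0$ with the F statistic
$$F=\frac{\sum_{i=1}^{k} n_i(\bar{y}_i-\bar{y})^2/(k-1)}{\sum_{i=1}^{k}\sum_{j=1}^{n_i}(y_{ij}-\bar{y}_i)^2/(N-k)},$$
where $k$ is the number of income_level groups, $n_i$ the number of countries in group $i$, and $N$ the total number of countries. The Tukey HSD post hoc test then identifies which pairs of group means actually differ.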
| <ASSISTANT_TASK:>
Python Code:
# Import all plotting and scientific libraries,
# and embed figures in this file.
%pylab inline
# Package to manipulate dataframes.
import pandas as pd
# Nice looking plot functions.
import seaborn as sn
# Read the dataset.
df = pd.read_csv('data/gapminder.csv')
# Set the country name as the index of the dataframe.
df.index = df.country
# This column is no longer needed.
#del df['country']
# Select only the variables we're interested.
df = df[['incomeperperson','lifeexpectancy']]
# Convert the columns to numeric types (convert_objects was removed from newer pandas).
df = df.apply(pd.to_numeric, errors='coerce')
# Remove missing values.
df = df.dropna()
df.describe()
# http://www.irp.wisc.edu/faqs/faq1.htm
income_threshold= 11720
income_level = pd.cut(df.incomeperperson,
[0, income_threshold*0.4, income_threshold, 110000 ],
labels=['Low income', 'Medium income', 'High income'])
il = income_level.value_counts()
f, a = subplots()
f.set_size_inches(6,3)
sn.barplot(il.values, il.index.tolist(), ax=a);
a.set_title('Number of countries by income_level', fontsize=14);
yticks(fontsize=12),xticks(fontsize=12);
import numpy as np
df['income_level'] = income_level.astype(object)
g = df.groupby('income_level')
income_mean = g.mean()
income_mean
sn.boxplot(df.income_level, df.lifeexpectancy);
title('Life expectancy by income level groups', fontsize=14, fontweight='bold');
xticks(fontsize=12)
import statsmodels.formula.api as smf
import statsmodels.stats.multicomp as multi
model = smf.ols('lifeexpectancy ~ C(income_level)', df)
result = model.fit()
result.summary()
res = multi.pairwise_tukeyhsd(df.lifeexpectancy, df.income_level)
print(res.summary())
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Image Classification
Step2: Explore the Data
Step5: Implement Preprocess Functions (an illustrative sketch of the normalization and one-hot-encoding ideas follows this outline)
Step8: One-hot encode
Step10: Randomize Data
Step12: Check Point
Step17: Build the network
Step20: Convolution and Max Pooling Layer
Step23: Flatten Layer
Step26: Fully-Connected Layer
Step29: Output Layer
Step32: Create Convolutional Model
Step35: Train the Neural Network
Step37: Show Stats
Step38: Hyperparameters
Step40: Train on a Single CIFAR-10 Batch
Step42: Fully Train the Model
Step45: Checkpoint
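Illustrative sketch only (added; the notebook below intentionally leaves the preprocessing functions as TODOs for the reader, and the helper names here are ours): min-max scaling of 8-bit pixel data to [0, 1] and one-hot encoding of the 10 class labels could look like this.
import numpy as np

def normalize_example(x):
    # CIFAR-10 pixel values are in 0..255
    return x / 255.0

def one_hot_example(labels, n_classes=10):
    return np.eye(n_classes)[labels]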
| <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
from urllib.request import urlretrieve
from os.path import isfile, isdir
from tqdm import tqdm
import problem_unittests as tests
import tarfile
cifar10_dataset_folder_path = 'cifar-10-batches-py'
# Use Floyd's cifar-10 dataset if present
floyd_cifar10_location = '/cifar/cifar-10-python.tar.gz'
if isfile(floyd_cifar10_location):
tar_gz_path = floyd_cifar10_location
else:
tar_gz_path = 'cifar-10-python.tar.gz'
class DLProgress(tqdm):
last_block = 0
def hook(self, block_num=1, block_size=1, total_size=None):
self.total = total_size
self.update((block_num - self.last_block) * block_size)
self.last_block = block_num
if not isfile(tar_gz_path):
with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:
urlretrieve(
'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',
tar_gz_path,
pbar.hook)
if not isdir(cifar10_dataset_folder_path):
with tarfile.open(tar_gz_path) as tar:
tar.extractall()
tar.close()
tests.test_folder_path(cifar10_dataset_folder_path)
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import numpy as np
# Explore the dataset
batch_id = 1
sample_id = 5
helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
def normalize(x):
Normalize a list of sample image data in the range of 0 to 1
: x: List of image data. The image shape is (32, 32, 3)
: return: Numpy array of normalize data
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_normalize(normalize)
def one_hot_encode(x):
One hot encode a list of sample labels. Return a one-hot encoded vector for each label.
: x: List of sample Labels
: return: Numpy array of one-hot encoded labels
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_one_hot_encode(one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
DON'T MODIFY ANYTHING IN THIS CELL
import pickle
import problem_unittests as tests
import helper
# Load the Preprocessed Validation data
valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
import tensorflow as tf
def neural_net_image_input(image_shape):
Return a Tensor for a batch of image input
: image_shape: Shape of the images
: return: Tensor for image input.
# TODO: Implement Function
return None
def neural_net_label_input(n_classes):
Return a Tensor for a batch of label input
: n_classes: Number of classes
: return: Tensor for label input.
# TODO: Implement Function
return None
def neural_net_keep_prob_input():
Return a Tensor for keep probability
: return: Tensor for keep probability.
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
Apply convolution then max pooling to x_tensor
:param x_tensor: TensorFlow Tensor
:param conv_num_outputs: Number of outputs for the convolutional layer
:param conv_ksize: kernel size 2-D Tuple for the convolutional layer
:param conv_strides: Stride 2-D Tuple for convolution
:param pool_ksize: kernel size 2-D Tuple for pool
:param pool_strides: Stride 2-D Tuple for pool
: return: A tensor that represents convolution and max pooling of x_tensor
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_con_pool(conv2d_maxpool)
def flatten(x_tensor):
Flatten x_tensor to (Batch Size, Flattened Image Size)
: x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
: return: A tensor of size (Batch Size, Flattened Image Size).
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_flatten(flatten)
def fully_conn(x_tensor, num_outputs):
Apply a fully connected layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_fully_conn(fully_conn)
def output(x_tensor, num_outputs):
Apply an output layer to x_tensor using weight and bias
: x_tensor: A 2-D tensor where the first dimension is batch size.
: num_outputs: The number of output that the new tensor should be.
: return: A 2-D tensor where the second dimension is num_outputs.
# TODO: Implement Function
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_output(output)
def conv_net(x, keep_prob):
Create a convolutional neural network model
: x: Placeholder tensor that holds image data.
: keep_prob: Placeholder tensor that holds dropout keep probability.
: return: Tensor that represents logits
# TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
# Play around with different number of outputs, kernel size and stride
# Function Definition from Above:
# conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
# TODO: Apply a Flatten Layer
# Function Definition from Above:
# flatten(x_tensor)
# TODO: Apply 1, 2, or 3 Fully Connected Layers
# Play around with different number of outputs
# Function Definition from Above:
# fully_conn(x_tensor, num_outputs)
# TODO: Apply an Output Layer
# Set this to the number of classes
# Function Definition from Above:
# output(x_tensor, num_outputs)
# TODO: return output
return None
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
##############################
## Build the Neural Network ##
##############################
# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()
# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()
# Model
logits = conv_net(x, keep_prob)
# Name logits Tensor, so that is can be loaded from disk after training
logits = tf.identity(logits, name='logits')
# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)
# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')
tests.test_conv_net(conv_net)
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
Optimize the session on a batch of images and labels
: session: Current TensorFlow session
: optimizer: TensorFlow optimizer function
: keep_probability: keep probability
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
# TODO: Implement Function
pass
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_train_nn(train_neural_network)
def print_stats(session, feature_batch, label_batch, cost, accuracy):
Print information about loss and validation accuracy
: session: Current TensorFlow session
: feature_batch: Batch of Numpy image data
: label_batch: Batch of Numpy label data
: cost: TensorFlow cost function
: accuracy: TensorFlow accuracy function
# TODO: Implement Function
pass
# TODO: Tune Parameters
epochs = None
batch_size = None
keep_probability = None
DON'T MODIFY ANYTHING IN THIS CELL
print('Checking the Training on a Single Batch...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
batch_i = 1
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
DON'T MODIFY ANYTHING IN THIS CELL
save_model_path = './image_classification'
print('Training...')
with tf.Session() as sess:
# Initializing the variables
sess.run(tf.global_variables_initializer())
# Training cycle
for epoch in range(epochs):
# Loop over all batches
n_batches = 5
for batch_i in range(1, n_batches + 1):
for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):
train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)
print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')
print_stats(sess, batch_features, batch_labels, cost, accuracy)
# Save Model
saver = tf.train.Saver()
save_path = saver.save(sess, save_model_path)
DON'T MODIFY ANYTHING IN THIS CELL
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import tensorflow as tf
import pickle
import helper
import random
# Set batch size if not already set
try:
if batch_size:
pass
except NameError:
batch_size = 64
save_model_path = './image_classification'
n_samples = 4
top_n_predictions = 3
def test_model():
Test the saved model against the test dataset
test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load model
loader = tf.train.import_meta_graph(save_model_path + '.meta')
loader.restore(sess, save_model_path)
# Get Tensors from loaded model
loaded_x = loaded_graph.get_tensor_by_name('x:0')
loaded_y = loaded_graph.get_tensor_by_name('y:0')
loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
loaded_logits = loaded_graph.get_tensor_by_name('logits:0')
loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')
# Get accuracy in batches for memory limitations
test_batch_acc_total = 0
test_batch_count = 0
for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):
test_batch_acc_total += sess.run(
loaded_acc,
feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})
test_batch_count += 1
print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count))
# Print Random Samples
random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))
random_test_predictions = sess.run(
tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),
feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})
helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)
test_model()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The observed magnitude of the RV variation depends crucially on the angle between the line-of-sight (LOS) and the orbit, particularly the inclination and the longitude of periastron. The way I handle that here is that for each orbit I calculate the observed RV values for a large number of sightlines. I then check for which fraction of those sightlines the RV change is large enough to be detected. (A minimal sketch of this projection-and-detection step is given at the end of this description.)
Step2: Set up the grid of parameters. I chose the secondary mass $m_2$ and the total semi-major axis $a$ as parameters. From this I calculate the semi-major axis of the primary and the period, assuming a constant primary mass of $m_1 = 5 M_{\odot}$ (the value set in the code below).
Step3: This first plot shows the orbital period in years. This shall serve as a comparison for the following plots.
Step4: Next, I show the probability of detection for circular orbits. The plots on the left and on the right show the same data, but one has linear and one has logarithmic axes. Both of them are a little coarse, because the grid is not very dense. That can easily be changed if we want to make nicer looking plots.
Step5: Now, we repeat the exercise for $e=0.5$. As you can see the probability contours move inwards a little, but overall that is not a dramatic change.
Step6: As a last point, I use circular orbits again to predict how much better this will be with a 20 year baseline. You can use that to predict the number of additional binary systems that will be identified when new data becomes available.
Step7: Run simulations with the actual observing dates
Step8: We just do this for eccentricity 0 and 0.5. Since there is little prior information and (see above) it also does not matter much in that range, we will just show that the contours are very similar and no more details are required.
Step9: How dense are the grids in period and mass ratio space?
Step10: Get the figures for publication together
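Minimal sketch of the projection-and-detection step from Step1 (added for clarity; the helper name is ours, the 2 km/s threshold mirrors the sensitivity used below, and the full implementation follows in the code):
import astropy.units as u

def rv_detected(vxyz, los, dv_sensitivity=2 * u.km / u.s):
    # vxyz: numpy array (n_times, 3) of orbital velocities in AU/yr; los: length-3 LOS unit vector
    v_rad = (vxyz @ los) * u.AU / u.year
    return (v_rad.max() - v_rad.min()) > dv_sensitivity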
| <ASSISTANT_TASK:>
Python Code:
# Import python modules.
from __future__ import division
from itertools import izip
import time
import datetime
import numpy as np
import matplotlib.pyplot as plt
import astropy.units as u
import astropy.table as table
import astropy.io.ascii as ascii
from PyAstronomy.pyasl import KeplerEllipse
%matplotlib inline
def generate_LOSs(n):
'''Generate a random distribution of Line of sights
Generate a random distribution of sightlines in cartesian coordinates,
with an isotropic distribution (no clustering around the poles).
Analytically, a truely isotropic distribution could be generated using
geodesic grids, but this is much faster to program and (for large n) just as good.
Parameters
----------
n : integer
number of sightlines to be generated
Returns
-------
los : np.array of dim [3,m] with m < n
x,y,z cartesian coordinates for each sightline. Only a subset of all generated
sightlines is returend, selected in such a way that they are (within the limits
of their randomness) isotropically distributed over the sphere.
'''
# Make points in box with x,y,z in [-1,1].
los = np.random.random((3, n)) * 2 - 1.
# Throw out all points beyond a unit sphere.
# Yes, this is a wasteful implementation, but it's fast enough that I don't care.
r = np.sqrt((los * los).sum(axis=0))
ind = ((r <= 1) & (r >= 1e-6)) # Throw out inner values where round-off errors matter.
return los[:, ind] / r[ind]
def get_LOSs(n = 1000):
'''Generate a random distribution of Line of sights
Generate a random distribution of sightlines in cartesian coordinates,
with an isotropic distribution (no clustering around the poles).
Analytically, a truely isotropic distribution could be generated using
geodesic grids, but this is much faster to program and (for large n) just as good.
Parameters
----------
n : integer
number of sightlines to be generated
Returns
-------
los : np.array of dim [3,m] with m < n
x,y,z cartesian coordinates for each sightline.
'''
while True:
los = generate_LOSs(4 * n) # 4 should be big enough that this usually succeeds.
if los.shape[1] >= n:
break
return los[:, :n]
def calc_many_vobs(times, a, p, e, n_tau=50, n_los=1000, los=None):
'''Calculate radial velocities for one orbit and many LOS.
For one orbit with given semi-major axis, period, and eccentricity calculate
the radial velocity (RV) signal for a given set of observaton times.
This calculation is done for larger number of sightlines to the system and for
different starting points in the orbit.
Parameters
----------
a : float
semi-major axis (in AU)
p : float
period (in years)
e : float
eccentricity of orbit
n_tau : integer
The calculation will be done for ``n_tau`` different starting points
regularly distributed over the orbit, because it does matter if a
star is observed close to periastron or apastron.
n_los : integer
number of lines-of-sight to be evaluated
los : None or np.array of dim [3, n]
If ``None`` then ``n_los`` sightlines will be randomly generated.
Otherwise a defined set of sightlines can be passed in as an array.
The elements of the array have to be the cartesian coordinates of
points on the unit sphere.
Returns
-------
v_obs : array of astropy.quantity with dim [n_tau, len(times), n_los]
This holds the calculated RV values that would be observed.
'''
if los is None:
los = get_LOSs(n_los)
else:
n_los = los.shape[1]
taus = np.linspace(0,p, n_tau, endpoint=False)
v_obs = np.zeros((n_tau, len(times), n_los))
for j, tau in enumerate(taus):
ke = KeplerEllipse(a, p, e, tau)
vxyz = ke.xyzVel(times)
for i in range(len(times)):
v_obs[j, i,:] = np.dot(vxyz[i,:], los)
return v_obs *u.AU / u.year
def calc_maxdv(*args, **kwargs):
'''Run ``calc_many_vobs`` and simplify its output.
See ``calc_many_vobs`` for accepted parameters.
Returns
-------
maxdv : astropy.quantity
Maximum differences between two RV values that would be
observed for the given orbital parameters for each LOS and tau.
'''
v_obs = calc_many_vobs(*args, **kwargs)
maxdv = np.max(v_obs, axis=1) - np.min(v_obs, axis=1)
return maxdv.flatten()
def prob_to_detect(dv_sensitivity, *args, **kwargs):
'''Calculate the probability to detect binarity.
Parameters
----------
dv_sensitivity : astropy.quantity
Minimum delta(RV) required for detection.
See ``calc_many_vobs`` for the remaining parameters.
Returns
-------
prob : float
Probability of detection
Example
-------
>>> times = np.arange(0,1,.1)
>>> prob_to_detect((20.*u.km/u.s), times, 1 , 1, 0)
0.94
(approximate result, since based on random numbers)
'''
maxdv = calc_maxdv(*args, **kwargs)
return (maxdv > dv_sensitivity).sum() / float(maxdv.size)
# Observation cadence. Here: 10 years, once per year, regular intervals.
times = np.arange(0.,10.,1.)
# Grid for semi-major axis between the two stars (in AU)
a = np.logspace(0.1,2)
# Masses (in M_sun)
m1 = 5 # primary
m2 = np.array([0.02, 0.05, 0.08, 0.1,0.5, 0.8,1,2,3,4,5]) # secondary
M = m1 + m2 # total mass of system
agrid, m2grid = np.meshgrid(a, m2)
# semi-major axis of primary
a1grid = agrid * m2grid / (m1 + m2grid)
# Period
Pgrid = (m1 + m2grid) * np.sqrt(a1grid**3 / m2grid**3)
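# Added comment: this is Kepler's third law in solar units (P in years, a in AU,
# masses in M_sun), P^2 * M_total = a_total^3, rewritten in terms of the primary's
# semi-major axis a1 = a_total * m2 / M_total.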
fig = plt.figure(figsize=(10,5))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
cs = ax1.contour(Pgrid, [0.1,0.2,1,5,25,50,100,300])
ax1.clabel(cs)
ax1.set_title('Period grid [yr]')
cs = ax2.contour(m2grid/m1)
ax2.clabel(cs)
ax2.set_title('mass ratio grid')
cs = plt.contour(agrid, m2grid, Pgrid, [10,20,30,60,120,240.])
plt.clabel(cs)
plt.xlabel('Semi-major axis [AU]')
plt.ylabel('mass of secondary [$M_{\odot}$]')
plt.title('Orbital period [years]')
prop_e0 = np.zeros_like(Pgrid)
for x, y in np.ndindex(Pgrid.shape):
prop_e0[x,y] = prob_to_detect(2.*u.km/u.s, times, a1grid[x,y], Pgrid[x,y], 0)
def plot_m_x(array):
fig = plt.figure(figsize=(10,5))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
for ax in [ax1, ax2]:
cs = ax.contour(agrid, m2grid, array)
ax.clabel(cs)
ax.set_xlabel('Semi-major axis [AU]')
ax.set_ylabel('mass of secondary [$M_{\odot}$]')
ax.set_title('Probability of detection for $e$ = 0')
ax2.set_title('Same data, logarithmic axes')
ax2.set_xscale('log')
ax2.set_yscale('log')
return fig
def plot_q_p(array):
fig = plt.figure(figsize=(10,5))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
for ax in [ax1, ax2]:
cs = ax.contour(Pgrid, m2grid/m1, array)
ax.clabel(cs)
ax.set_xlabel('Period [yr]')
ax.set_ylabel('mass ratio [$M_{\odot}$]')
ax.set_title('Probability of detection for $e$ = 0')
ax2.set_title('Same data, logarithmic axes')
ax2.set_xscale('log')
ax2.set_yscale('log')
return fig
fig = plot_m_x(prop_e0)
# Same plot as above, but with different axis
fig = plot_q_p(prop_e0)
times = np.arange(0.,10.,1.)
prop_e05 = np.zeros_like(Pgrid)
for x, y in np.ndindex(Pgrid.shape):
prop_e05[x,y] = prob_to_detect(2.*u.km/u.s, times, a1grid[x,y], Pgrid[x,y], 0.5)
cs = plt.contour(agrid, m2grid, prop_e05)
plt.clabel(cs)
plt.xlabel('Semi-major axis [AU]')
plt.ylabel('mass of secondary [$M_{\odot}$]')
plt.title('Probability of detection for $e$ = 0.5')
times = np.arange(0.,20.,1.)
prop_e05 = np.zeros_like(Pgrid)
for x, y in np.ndindex(Pgrid.shape):
prop_e05[x,y] = prob_to_detect(2.*u.km/u.s, times, a1grid[x,y], Pgrid[x,y], 0)
cs = plt.contour(agrid, m2grid, prop_e05)
plt.clabel(cs)
plt.xlabel('Semi-major axis [AU]')
plt.ylabel('mass of secondary [$M_{\odot}$]')
plt.title('Probability of detection for $e$ = 0 with 20 year baseline' )
actual_obs_years = table.Table.read('datafile4.txt', format='ascii')
actual_obs_years = actual_obs_years.filled()
actual_obs_years = actual_obs_years.group_by('ID')
allsims00 = {}
for star, group in izip(actual_obs_years.groups.keys, actual_obs_years.groups):
print datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S'), 'Working on: ', star[0]
prop = np.zeros_like(Pgrid)
for x, y in np.ndindex(Pgrid.shape):
prop[x,y] = prob_to_detect(2.*u.km/u.s, np.array(group['Year']), a1grid[x,y], Pgrid[x,y], 0.0)
allsims00[star[0]] = prop
allsims05 = {}
for star, group in izip(actual_obs_years.groups.keys, actual_obs_years.groups):
print datetime.datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S'), 'Working on: ', star[0]
prop = np.zeros_like(Pgrid)
for x, y in np.ndindex(Pgrid.shape):
prop[x, y] = prob_to_detect(2. * u.km/u.s, np.array(group['Year']), a1grid[x, y], Pgrid[x, y], 0.5)
allsims05[star[0]] = prop
allnames = set(allsims00.keys()) - set(['CK Cam'])
all00 = np.dstack([allsims00[n] for n in allnames]).mean(axis=2)
all05 = np.dstack([allsims05[n] for n in allnames]).mean(axis=2)
fig = plot_q_p(allsims00[list(allnames)[1]])
fig = plot_q_p(all00)
fig = plt.figure(figsize=(10,5))
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
cs = ax1.contour(Pgrid, [0.1,0.2,1,5,25,50,100,300])
ax1.clabel(cs)
ax1.set_title('Period grid')
cs = ax2.contour(m2grid/m1)
ax2.clabel(cs)
ax2.set_title('mass ratio grid')
print np.min(Pgrid), np.max(Pgrid)
fig = plt.figure(figsize=(4,3))
ax = fig.add_subplot(111)
cs = ax.contour(Pgrid, m2grid/m1, all00, [0.95, 0.90, 0.75], linewidths=4, linestyles=['solid', 'dashed','dotted'])
cs2 = ax.contour(Pgrid, m2grid/m1, all05, [0.95, 0.90, 0.75], linestyles=['solid', 'dashed','dotted'])
ax.set_xlim([0,60])
ax.clabel(cs, fmt='%1.2f', use_clabeltext=True, manual=[(40,.4), (30,.6), (20,.5)])
ax.set_xlabel('Period [yr]')
ax.set_ylabel('mass ratio')
#ax.set_xscale("log")
fig.subplots_adjust(left=0.16, bottom=0.16, top=0.97, right=0.97)
fig.savefig('detectionprobability.png')
fig.savefig('detectionprobability.pdf')
fig.savefig('detectionprobability.eps')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Download and plot the data on US CPI
Step2: 3. Fit the model with maximum likelihood
Step3: It's a good fit. We can also get the series of one-step ahead predictions and plot it next to the actual data, along with a confidence band.
Step4: 4. Helper functions to provide tensors to the library doing Bayesian estimation
Step5: 5. Bayesian estimation with NUTS
Step6: Now for the fun part! There are three parameters to estimate
Step7: Note that the NUTS sampler is auto-assigned because we provided gradients. PyMC3 will use Metropolis or Slicing samplers if it does not find that gradients are available. There are an impressive number of draws per second for a "black box" style computation! However, note that if the model can be represented directly by PyMC3 (like the AR(p) models mentioned above), then computation can be substantially faster.
Step8: The estimated posteriors clearly peak close to the parameters found by MLE. We can also see a summary of the estimated values
Step9: Here $\hat{R}$ is the Gelman-Rubin statistic. It tests for lack of convergence by comparing the variance between multiple chains to the variance within each chain. If convergence has been achieved, the between-chain and within-chain variances should be identical. If $\hat{R}<1.2$ for all model parameters, we can have some confidence that convergence has been reached.
Step10: Appendix A. Application to UnobservedComponents models
Step11: As noted earlier, the Theano wrappers (Loglike and Score) that we created above are generic, so we can re-use essentially the same code to explore the model with Bayesian methods.
Step12: And as before we can plot the marginal posteriors. In contrast to the SARIMAX example, here the posterior modes are somewhat different from the MLE estimates.
Step13: One benefit of this model is that it gives us an estimate of the underlying "level" of inflation, using the smoothed estimate of $\mu_t$, which we can access as the "level" column in the results object's states.smoothed attribute. In this case, because the Bayesian posterior mean of the level's variance is larger than the MLE estimate, its estimated level is a little more volatile.
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import theano
import theano.tensor as tt
import pymc3 as pm
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
import pandas as pd
from pandas_datareader.data import DataReader
from pandas.plotting import register_matplotlib_converters
plt.style.use('seaborn')
register_matplotlib_converters()
cpi = DataReader('CPIAUCNS', 'fred', start='1971-01', end='2018-12')
cpi.index = pd.DatetimeIndex(cpi.index, freq='MS')
# Define the inflation series that we'll use in analysis
inf = np.log(cpi).resample('QS').mean().diff()[1:] * 400
print(inf.head())
# Plot the series
fig, ax = plt.subplots(figsize=(9, 4), dpi=300)
ax.plot(inf.index, inf, label=r'$\Delta \log CPI$', lw=2)
ax.legend(loc='lower left')
plt.show()
# Create an SARIMAX model instance - here we use it to estimate
# the parameters via MLE using the `fit` method, but we can
# also re-use it below for the Bayesian estimation
mod = sm.tsa.statespace.SARIMAX(inf, order=(1, 0, 1))
res_mle = mod.fit(disp=False)
print(res_mle.summary())
predict_mle = res_mle.get_prediction()
predict_mle_ci = predict_mle.conf_int()
lower = predict_mle_ci['lower CPIAUCNS']
upper = predict_mle_ci['upper CPIAUCNS']
# Graph
fig, ax = plt.subplots(figsize=(9,4), dpi=300)
# Plot data points
inf.plot(ax=ax, style='-', label='Observed')
# Plot predictions
predict_mle.predicted_mean.plot(ax=ax, style='r.', label='One-step-ahead forecast')
ax.fill_between(predict_mle_ci.index, lower, upper, color='r', alpha=0.1)
ax.legend(loc='lower left')
plt.show()
class Loglike(tt.Op):
itypes = [tt.dvector] # expects a vector of parameter values when called
otypes = [tt.dscalar] # outputs a single scalar value (the log likelihood)
def __init__(self, model):
self.model = model
self.score = Score(self.model)
def perform(self, node, inputs, outputs):
theta, = inputs # contains the vector of parameters
llf = self.model.loglike(theta)
outputs[0][0] = np.array(llf) # output the log-likelihood
def grad(self, inputs, g):
# the method that calculates the gradients - it actually returns the
# vector-Jacobian product - g[0] is a vector of parameter values
theta, = inputs # our parameters
out = [g[0] * self.score(theta)]
return out
class Score(tt.Op):
itypes = [tt.dvector]
otypes = [tt.dvector]
def __init__(self, model):
self.model = model
def perform(self, node, inputs, outputs):
theta, = inputs
outputs[0][0] = self.model.score(theta)
# Set sampling params
ndraws = 3000 # number of draws from the distribution
nburn = 600 # number of "burn-in points" (which will be discarded)
# Construct an instance of the Theano wrapper defined above, which
# will allow PyMC3 to compute the likelihood and Jacobian in a way
# that it can make use of. Here we are using the same model instance
# created earlier for MLE analysis (we could also create a new model
# instance if we preferred)
loglike = Loglike(mod)
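# Optional sanity check (illustrative, not in the original notebook): the wrapped
# Op should reproduce the statsmodels log-likelihood at the MLE parameter vector.
# print(loglike(np.asarray(res_mle.params)).eval())  # expected to be close to res_mle.llf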
with pm.Model():
# Priors
arL1 = pm.Uniform('ar.L1', -0.99, 0.99)
maL1 = pm.Uniform('ma.L1', -0.99, 0.99)
sigma2 = pm.InverseGamma('sigma2', 2, 4)
# convert variables to tensor vectors
theta = tt.as_tensor_variable([arL1, maL1, sigma2])
# use a DensityDist (use a lambda function to "call" the Op)
pm.DensityDist('likelihood', lambda v: loglike(v), observed={'v': theta})
# Draw samples
trace = pm.sample(ndraws, tune=nburn, discard_tuned_samples=True, cores=4)
plt.tight_layout()
# Note: the syntax here for the lines argument is required for
# PyMC3 versions >= 3.7
# For version <= 3.6 you can use lines=dict(res_mle.params) instead
_ = pm.traceplot(trace,
lines=[(k, {}, [v]) for k, v in dict(res_mle.params).items()],
combined=True,
figsize=(12, 12))
pm.summary(trace)
# Retrieve the posterior means
params = pm.summary(trace)['mean'].values
# Construct results using these posterior means as parameter values
res_bayes = mod.smooth(params)
predict_bayes = res_bayes.get_prediction()
predict_bayes_ci = predict_bayes.conf_int()
lower = predict_bayes_ci['lower CPIAUCNS']
upper = predict_bayes_ci['upper CPIAUCNS']
# Graph
fig, ax = plt.subplots(figsize=(9,4), dpi=300)
# Plot data points
inf.plot(ax=ax, style='-', label='Observed')
# Plot predictions
predict_bayes.predicted_mean.plot(ax=ax, style='r.', label='One-step-ahead forecast')
ax.fill_between(predict_bayes_ci.index, lower, upper, color='r', alpha=0.1)
ax.legend(loc='lower left')
plt.show()
# Construct the model instance
mod_uc = sm.tsa.UnobservedComponents(inf, 'rwalk', autoregressive=1)
# Fit the model via maximum likelihood
res_uc_mle = mod_uc.fit()
print(res_uc_mle.summary())
# Set sampling params
ndraws = 3000 # number of draws from the distribution
nburn = 600 # number of "burn-in points" (which will be discarded)
# Here we follow the same procedure as above, but now we instantiate the
# Theano wrapper `Loglike` with the UC model instance instead of the
# SARIMAX model instance
loglike_uc = Loglike(mod_uc)
with pm.Model():
# Priors
sigma2level = pm.InverseGamma('sigma2.level', 1, 1)
sigma2ar = pm.InverseGamma('sigma2.ar', 1, 1)
arL1 = pm.Uniform('ar.L1', -0.99, 0.99)
# convert variables to tensor vectors
theta_uc = tt.as_tensor_variable([sigma2level, sigma2ar, arL1])
# use a DensityDist (use a lambda function to "call" the Op)
pm.DensityDist('likelihood', lambda v: loglike_uc(v), observed={'v': theta_uc})
# Draw samples
trace_uc = pm.sample(ndraws, tune=nburn, discard_tuned_samples=True, cores=4)
plt.tight_layout()
# Note: the syntax here for the lines argument is required for
# PyMC3 versions >= 3.7
# For version <= 3.6 you can use lines=dict(res_mle.params) instead
_ = pm.traceplot(trace_uc,
lines=[(k, {}, [v]) for k, v in dict(res_uc_mle.params).items()],
combined=True,
figsize=(12, 12))
pm.summary(trace_uc)
# Retrieve the posterior means
params = pm.summary(trace_uc)['mean'].values
# Construct results using these posterior means as parameter values
res_uc_bayes = mod_uc.smooth(params)
# Graph
fig, ax = plt.subplots(figsize=(9,4), dpi=300)
# Plot data points
inf['CPIAUCNS'].plot(ax=ax, style='-', label='Observed data')
# Plot estimate of the level term
res_uc_mle.states.smoothed['level'].plot(ax=ax, label='Smoothed level (MLE)')
res_uc_bayes.states.smoothed['level'].plot(ax=ax, label='Smoothed level (Bayesian)')
ax.legend(loc='lower left');
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Init
Step2: Selecting genomes based on GC
Step3: Copying genomes
Step4: Indexing genomes
| <ASSISTANT_TASK:>
Python Code:
workDir = '/home/nick/notebook/SIPSim/dev/bac_genome3/'
SIPSimExe = '/home/nick/notebook/SIPSim/SIPSim'
genomeDir = '/home/nick/notebook/SIPSim/dev/bac_genome1210/genomes/'
#genomeTable = '/var/seq_data/ncbi_db/genome/prok-bac_filt_rand.txt'
import os,sys
import numpy as np
import pandas as pd
from ggplot import *
import matplotlib.pyplot as plt
%load_ext rpy2.ipython
%matplotlib inline
%%R
library(ggplot2)
library(dplyr)
library(tidyr)
library(gridExtra)
if not os.path.isdir(workDir):
os.mkdir(workDir)
workDirGenome = os.path.join(workDir, 'genomes')
if not os.path.isdir(workDirGenome):
os.mkdir(workDirGenome)
%%bash -s "$genomeDir"
cd $1
find . -name "*fasta" | \
perl -pe 's/\.fasta//' | \
xargs -P 24 -I % bash -c \
"seq_tools GC_content < %.fasta > %_GC.txt"
%%bash -s "$genomeDir"
cd $1
cat ./Clostridium_ljungdahlii_DSM_13528_GC.txt
cat ./Escherichia_coli_1303_GC.txt
cat ./Streptomyces_pratensis_ATCC_33331_GC.txt
%%bash -s "$genomeDir" "$workDirGenome"
cp $1/Clostridium_ljungdahlii_DSM_13528.fasta $2
cp $1/Escherichia_coli_1303.fasta $2
cp $1/Streptomyces_pratensis_ATCC_33331.fasta $2
!cd $workDirGenome; \
printf "Clostridium_ljungdahlii_DSM_13528 Clostridium_ljungdahlii_DSM_13528.fasta\n\
Escherichia_coli_1303 Escherichia_coli_1303.fasta\n\
Streptomyces_pratensis_ATCC_33331 Streptomyces_pratensis_ATCC_33331.fasta" > genome_index.txt
!cd $workDirGenome; \
perl -pi -e 's/ /\t/' genome_index.txt
!cd $workDir; \
$SIPSimExe genome_index genomes/genome_index.txt \
--fp ./genomes/ --np 30 > genomes/index_log.txt
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, let's gather some data. We use the PubMed database of medical papers.
Step2: Download a toy corpus (224 structured abstracts, 200 KB compressed)
Step3: Download a lemmatized corpus (preprocessed, 350 MB compressed)
Step4: Download training and testing data for the LSTM (preprocessed, vectorized and split, 100 MB compressed)
Step5: Some imports
Step6: <a id='extract'></a>
Step7: Our data currently look like this
Step8: Cleaning, dumping the abstracts with incorrect number of labels
Step9: <a id='pre-process'></a>
Step10: For correct sentence splitting, we train a tokenizer using the NLTK Punkt Sentence Tokenizer. This tokenizer uses an unsupervised algorithm to learn how to split sentences from a corpus.
Step11: Our data now look like this
Step12: Lemmatization
Step13: Let's save that
Step14: To directly load a lemmatized corpus
Step15: <a id='label analysis'></a>
Step16: <a id='choosing labels'></a>
Step17: ... Or we can keep a more noisy dataset and reduce it to a set of labels
Step18: <a id='create train'></a>
Step19: Vectorize the sentences.
Step20: Now let's save all this
| <ASSISTANT_TASK:>
Python Code:
#%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
%load_ext watermark
# for reproducibility
%watermark -a 'Paul Willot' -mvp numpy,scipy,spacy
!wget https://www.dropbox.com/s/lhqe3bls0mkbq57/pubmed_result_548899.txt.zip -P ./data/
!unzip -o ./data/pubmed_result_548899.txt.zip -d ./data/
#!wget https://www.dropbox.com/s/ujo1l8duu31js34/toy_corpus.txt.zip -P ./data/
#!unzip -o ./TMP/toy_corpus.txt.zip -d ./data/
!wget https://www.dropbox.com/s/lmv88n1vpmp6c19/corpus_lemmatized.pickle.zip -P ./data/
!unzip -o ./data/corpus_lemmatized.pickle.zip -d ./data/
!wget https://www.dropbox.com/s/0o7i0ejv4aqf6gs/training_4_BacObjMetCon.pickle.zip -P ./data/
!unzip -o ./data/training_4_BacObjMetCon.pickle.zip -d ./data/
from __future__ import absolute_import
from __future__ import print_function
# import local libraries
import tools
import prepare
import lemmatize
import analyze
import preprocess
data = prepare.extract_txt('data/toy_corpus.txt')
print("%s\n[...]"%data[0][:800])
abstracts = prepare.get_abstracts(data)
def remove_err(datas,errs):
err=sorted([item for subitem in errs for item in subitem],reverse=True)
for e in err:
for d in datas:
del d[e]
remove_err([abstracts],prepare.get_errors(abstracts))
print("Working on %d documents."%len(abstracts))
abstracts = prepare.filter_numbers(abstracts)
tokenizer = prepare.create_sentence_tokenizer(abstracts)
# For a more general parser, use the one provided in NLTK:
#import nltk.data
#tokenizer = nltk.data.load('tokenizers/punkt/english.pickle')
abstracts_labeled = prepare.ex_all_labels(abstracts,tokenizer)
abstracts_labeled[0][0]
lemmatized = lemmatize.lemm(abstracts_labeled)
lemmatized[0]
tools.dump_pickle(lemmatized,"data/fast_lemmatized.pickle")
lemmatized = tools.load_pickle("data/corpus_lemmatized.pickle")
dic = analyze.create_dic_simple(lemmatized)
print("Number of labels :",len(dic.keys()))
analyze.show_keys(dic,threshold=10)
primary_keyword=['AIM','BACKGROUND','INTRODUCTION','METHOD','RESULT','CONCLUSION','OBJECTIVE','DESIGN','FINDING','OUTCOME','PURPOSE']
analyze.regroup_keys(dic,primary_keyword)
analyze.show_keys(dic,threshold=10)
keys_to_replace = [['INTRODUCTION','CONTEXT','PURPOSE'],
['AIM','SETTING'],
['FINDING','OUTCOME','DISCUSSION']]
replace_with = ['BACKGROUND',
'METHOD',
'CONCLUSION']
analyze.replace_keys(dic,keys_to_replace,replace_with)
analyze.show_keys(dic,threshold=10)
pattern = [
['BACKGROUND','BACKGROUNDS'],
['METHOD','METHODS'],
['RESULT','RESULTS'],
['CONCLUSION','CONCLUSIONS'],
]
sub_perfect = analyze.get_exactly(lemmatized,pattern=pattern,no_truncate=True)
sub_perfect = analyze.get_exactly(lemmatized,pattern=pattern,no_truncate=False)
print("%d abstracts labeled and ready for the next part"%len(sub_perfect))
dic = preprocess.create_dic(lemmatized,100)
# We can re-use the variables defined in the analysis section
#primary_keyword=['AIM','BACKGROUND','METHOD','RESULT','CONCLUSION','OBJECTIVE','DESIGN','FINDINGS','OUTCOME','PURPOSE']
analyze.regroup_keys(dic,primary_keyword)
#keys_to_replace = [['INTRODUCTION','BACKGROUND','AIM','PURPOSE','CONTEXT'],
# ['CONCLUSION']]
#replace_with = ['OBJECTIVE',
# 'RESULT']
analyze.replace_keys(dic,keys_to_replace,replace_with)
# We can restrict our analysis to the main labels
dic = {key:dic[key] for key in ['BACKGROUND','RESULT','METHOD','CONCLUSION']}
analyze.show_keys(dic,threshold=10)
print("Sentences per label :",["%s %d"%(s,len(dic[s][1])) for s in dic.keys()])
classes_names = ['BACKGROUND', 'METHOD', 'RESULT','CONCLUSION']
dic.keys()
# train/test split
split = 0.8
# truncate the number of abstracts to consider for each label,
# -1 to set to the maximum while keeping the number of sentences per labels equal
raw_x_train, raw_y_train, raw_x_test, raw_y_test = preprocess.split_data(dic,classes_names,
split_train_test=split,
truncate=-1)
X_train, y_train, X_test, y_test, feature_names, max_features, vectorizer = preprocess.vectorize_data(raw_x_train,
raw_y_train,
raw_x_test,
raw_y_test)
print("Number of features : %d"%(max_features))
tools.dump_pickle([X_train, y_train, X_test, y_test, feature_names, max_features, classes_names, vectorizer],
"data/unpadded_4_BacObjMetCon.pickle")
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Type
Step7: 1.4. Elemental Stoichiometry
Step8: 1.5. Elemental Stoichiometry Details
Step9: 1.6. Prognostic Variables
Step10: 1.7. Diagnostic Variables
Step11: 1.8. Damping
Step12: 2. Key Properties --> Time Stepping Framework --> Passive Tracers Transport
Step13: 2.2. Timestep If Not From Ocean
Step14: 3. Key Properties --> Time Stepping Framework --> Biology Sources Sinks
Step15: 3.2. Timestep If Not From Ocean
Step16: 4. Key Properties --> Transport Scheme
Step17: 4.2. Scheme
Step18: 4.3. Use Different Scheme
Step19: 5. Key Properties --> Boundary Forcing
Step20: 5.2. River Input
Step21: 5.3. Sediments From Boundary Conditions
Step22: 5.4. Sediments From Explicit Model
Step23: 6. Key Properties --> Gas Exchange
Step24: 6.2. CO2 Exchange Type
Step25: 6.3. O2 Exchange Present
Step26: 6.4. O2 Exchange Type
Step27: 6.5. DMS Exchange Present
Step28: 6.6. DMS Exchange Type
Step29: 6.7. N2 Exchange Present
Step30: 6.8. N2 Exchange Type
Step31: 6.9. N2O Exchange Present
Step32: 6.10. N2O Exchange Type
Step33: 6.11. CFC11 Exchange Present
Step34: 6.12. CFC11 Exchange Type
Step35: 6.13. CFC12 Exchange Present
Step36: 6.14. CFC12 Exchange Type
Step37: 6.15. SF6 Exchange Present
Step38: 6.16. SF6 Exchange Type
Step39: 6.17. 13CO2 Exchange Present
Step40: 6.18. 13CO2 Exchange Type
Step41: 6.19. 14CO2 Exchange Present
Step42: 6.20. 14CO2 Exchange Type
Step43: 6.21. Other Gases
Step44: 7. Key Properties --> Carbon Chemistry
Step45: 7.2. PH Scale
Step46: 7.3. Constants If Not OMIP
Step47: 8. Tracers
Step48: 8.2. Sulfur Cycle Present
Step49: 8.3. Nutrients Present
Step50: 8.4. Nitrous Species If N
Step51: 8.5. Nitrous Processes If N
Step52: 9. Tracers --> Ecosystem
Step53: 9.2. Upper Trophic Levels Treatment
Step54: 10. Tracers --> Ecosystem --> Phytoplankton
Step55: 10.2. Pft
Step56: 10.3. Size Classes
Step57: 11. Tracers --> Ecosystem --> Zooplankton
Step58: 11.2. Size Classes
Step59: 12. Tracers --> Disolved Organic Matter
Step60: 12.2. Lability
Step61: 13. Tracers --> Particules
Step62: 13.2. Types If Prognostic
Step63: 13.3. Size If Prognostic
Step64: 13.4. Size If Discrete
Step65: 13.5. Sinking Speed If Prognostic
Step66: 14. Tracers --> Dic Alkalinity
Step67: 14.2. Abiotic Carbon
Step68: 14.3. Alkalinity
| <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'cnrm-cerfacs', 'sandbox-1', 'ocnbgchem')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.model_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Geochemical"
# "NPZD"
# "PFT"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Fixed"
# "Variable"
# "Mix of both"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.elemental_stoichiometry_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.damping')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.passive_tracers_transport.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "use ocean model transport time step"
# "use specific time step"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.time_stepping_framework.biology_sources_sinks.timestep_if_not_from_ocean')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline"
# "Online"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Use that of ocean model"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.transport_scheme.use_different_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.atmospheric_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Atmospheric Chemistry model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.river_input')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "from file (climatology)"
# "from file (interannual variations)"
# "from Land Surface model"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_boundary_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.boundary_forcing.sediments_from_explicit_model')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.O2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.DMS_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.N2O_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC11_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.CFC12_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.SF6_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.13CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.14CO2_exchange_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.gas_exchange.other_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OMIP protocol"
# "Other protocol"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.pH_scale')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea water"
# "Free"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.key_properties.carbon_chemistry.constants_if_not_OMIP')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.sulfur_cycle_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nutrients_present')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrogen (N)"
# "Phosphorous (P)"
# "Silicium (S)"
# "Iron (Fe)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_species_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Nitrates (NO3)"
# "Amonium (NH4)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.nitrous_processes_if_N')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Dentrification"
# "N fixation"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_definition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.upper_trophic_levels_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "PFT including size based (specify both below)"
# "Size based only (specify below)"
# "PFT only (specify below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.pft')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diatoms"
# "Nfixers"
# "Calcifiers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.phytoplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microphytoplankton"
# "Nanophytoplankton"
# "Picophytoplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Generic"
# "Size based (specify below)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.ecosystem.zooplankton.size_classes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Microzooplankton"
# "Mesozooplankton"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.bacteria_present')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.disolved_organic_matter.lability')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Labile"
# "Semi-labile"
# "Refractory"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diagnostic"
# "Diagnostic (Martin profile)"
# "Diagnostic (Balast)"
# "Prognostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.types_if_prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "POC"
# "PIC (calcite)"
# "PIC (aragonite"
# "BSi"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No size spectrum used"
# "Full size spectrum"
# "Discrete size classes (specify which below)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.size_if_discrete')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.particules.sinking_speed_if_prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Function of particule size"
# "Function of particule type (balast)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.carbon_isotopes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "C13"
# "C14)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.abiotic_carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocnbgchem.tracers.dic_alkalinity.alkalinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Prognostic"
# "Diagnostic)"
# TODO - please enter value(s)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problem statement
Step2: Next, we need to define the bounds of the dimensions of the search space we want to explore
Step4: Optimize all the things!
Step5: Convergence plot
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score
boston = load_boston()
X, y = boston.data, boston.target
n_features = X.shape[1]
reg = GradientBoostingRegressor(n_estimators=50, random_state=0)
def objective(params):
max_depth, learning_rate, max_features, min_samples_split, min_samples_leaf = params
reg.set_params(max_depth=max_depth,
learning_rate=learning_rate,
max_features=max_features,
min_samples_split=min_samples_split,
min_samples_leaf=min_samples_leaf)
return -np.mean(cross_val_score(reg, X, y, cv=5, n_jobs=-1,
scoring="neg_mean_absolute_error"))
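# Illustrative check (added, not in the original notebook): the objective can be
# evaluated directly at a hand-picked point before running the optimizer.
# print(objective([3, 0.1, n_features, 2, 1]))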
space = [(1, 5), # max_depth
(10**-5, 10**0, "log-uniform"), # learning_rate
(1, n_features), # max_features
(2, 100), # min_samples_split
(1, 100)] # min_samples_leaf
from skopt import gp_minimize
res_gp = gp_minimize(objective, space, n_calls=100, random_state=0)
"Best score=%.4f" % res_gp.fun
print("""Best parameters:
- max_depth=%d
- learning_rate=%.6f
- max_features=%d
- min_samples_split=%d
- min_samples_leaf=%d""" % (res_gp.x[0], res_gp.x[1],
res_gp.x[2], res_gp.x[3],
res_gp.x[4]))
from skopt.plots import plot_convergence
plot_convergence(res_gp);
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read the survey data
Step2: Create target group 'Latin America'
Step4: Generic function to plot barchart for any categorical feature on any target/nontarget split
Step6: Generic function to plot barchart for any multi-categorical feature on any target/nontarget split
Step7: Apply this plot to the target 'Africa'
Step8: They also started earlier with MOOCs
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
sns.set(style="white")
sns.set_context("talk")
df = pd.read_csv('raw/2016-17-ClassCentral-Survey-data-noUserText.csv', decimal=',', encoding = "ISO-8859-1")
df['Which region of the world are you in?'].value_counts()
target_name = 'Latin America'
mask_africa = (df['Which region of the world are you in?'] == 'Africa')
def binary_compare_categorical_barh(mask, feature, df=df,
target_name='target', nontarget_name='Other',
split_name='visitor', answer='answer'):
'''Split the dataframe into two based on mask.
Draw horizontal barcharts for each category item for both the masked and unmasked objects.
'''
target = df[mask]
nontarget = df[~mask]
target_size, nontarget_size = len(target), len(nontarget)
res_target = target[feature].value_counts()/target_size*100
res_nontarget = nontarget[feature].value_counts()/nontarget_size*100
result = pd.DataFrame({target_name: res_target, nontarget_name: res_nontarget})
result[answer] = result.index
res_df = pd.melt(result, id_vars=answer, var_name=split_name, value_name='percentage')
print(res_df)
sns.factorplot(x='percentage', y=answer, hue=split_name, data=res_df, kind='bar', orient='h', size=6, aspect=2)
plt.title(feature)
sns.despine(left=True, bottom=True)
plt.show()
return
def binary_compare_multi_select_categorical_barh(df, target, target_name, question, selectors, nontarget_name = 'Others'):
'''Draw a barchart for survey results on a question that allows selecting multiple categories.
df: dataframe to use
target: selection of rows based on column values
question: the question you want to analyse
selectors: list of df columns containing the selectors (values 0/1)
'''
size = {}
target_df = df[target]
nontarget_df = df[~target]
size[target_name], size[nontarget_name] = len(target_df), len(nontarget_df)
print(size)
graph_targetdata = target_df.loc[:, selectors]
graph_targetdata['target'] = target_name
graph_nontargetdata = nontarget_df.loc[:, selectors]
graph_nontargetdata['target'] = nontarget_name
graph_data = pd.concat([graph_targetdata, graph_nontargetdata])
melted = pd.melt(graph_data, id_vars='target', var_name='select', value_name='percentage')
grouped = melted.groupby(['target', 'select'], as_index=False).sum()
#print(size[grouped['target']])
grouped.percentage = grouped.percentage/grouped['target'].map(size)*100 # make it percentage of total
grouped['select'] = grouped['select'].apply(lambda x: x.split(": ")[1]) # remove prefix from string
print(grouped)
sns.factorplot(x='percentage', y='select', hue='target', data=grouped, kind='bar', orient='h', size=6, aspect=2)
sns.plt.title(question)
sns.despine(left=True, bottom=True)
sns.plt.show()
binary_compare_categorical_barh(mask=mask_africa,
target_name='Africa', feature='How familiar are you with MOOCs?')
binary_compare_categorical_barh(mask=mask_africa,
target_name='Africa', feature='How important is the ability to earn a certificate when you complete a MOOC?')
reasons = ['Reasons: Learning skills for current career', 'Reasons: Learning skills for new career',
'Reasons: School credit', 'Reasons: Personal interest', 'Reasons: Access to reference materials']
binary_compare_multi_select_categorical_barh(df, target=mask_africa, target_name='Africa',
question='Which of the following are important reasons for you to take MOOCs?',
selectors=reasons)
decisions = ['Decide: Topic/Subject', 'Decide: Instructor', 'Decide: Institution/university',
'Decide: Platform', 'Decide: Ratings', 'Decide: Others recommendations']
binary_compare_multi_select_categorical_barh(df, target=mask_africa, target_name='Africa',
question='Which are the most important factors in deciding which MOOC to take?',
selectors=decisions)
aspects = ['Aspects: Browsing discussion forums',
'Aspects: Actively contributing to discussion forums',
'Aspects: Connecting with other learners in the course environment',
'Aspects: Connecting with learners outside the course environment',
'Aspects: Taking the course with other people you know (friends, colleagues, etc.)']
binary_compare_multi_select_categorical_barh(df, target=mask_africa, target_name='Africa',
question='Which of the following are important aspects of the MOOC experience to you?',
selectors=aspects)
benefits = ['Benefit: Have not taken MOOCs',
'Benefit: Not Really',
'Benefit: School credit towards a degree',
'Benefit: Promotion at current organization',
'Benefit: Higher performance evaluation at current job',
'Benefit: Helped me get a new job in the same field',
'Benefit: Helped me get a new job in a different field']
binary_compare_multi_select_categorical_barh(df, target=mask_africa, target_name='Africa',
question='Have you received any tangible benefits from taking MOOCs?',
selectors=benefits)
pays = ['Pay: The topic/subject',
'Pay: The institution/university offering the MOOC',
'Pay: The instructor/professor',
'Pay: The MOOC platform being used',
'Pay: A multi-course certification that the MOOC is a part of']
binary_compare_multi_select_categorical_barh(df, target=mask_africa, target_name='Africa',
question='Which of the following have a strong impact on your willingness to pay for a MOOC certificate?',
selectors=pays)
binary_compare_categorical_barh(mask=mask_africa,
target_name='Africa', feature='# MOOCs Started')
binary_compare_categorical_barh(mask=mask_africa,
target_name='Africa', feature='# MOOCs Finished')
binary_compare_categorical_barh(mask=mask_africa,
target_name='Africa', feature='When did you first start taking MOOCs?')
binary_compare_categorical_barh(mask=mask_africa,
target_name='Africa', feature='How willing are you to pay for a certificate for a MOOC?')
binary_compare_categorical_barh(mask=mask_africa,
target_name='Africa',
feature='How much do you think employers value MOOC certificates?')
binary_compare_categorical_barh(mask=mask_africa,
target_name='Africa',
feature='What is your level of formal education?')
binary_compare_categorical_barh(mask=mask_africa,
target_name='Africa',
feature='What is your age range?')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lab Task 1
Step2: A "vector" or "rank-1" tensor is like a list of values. A vector has one axis
Step3: A "matrix" or "rank-2" tensor has two axes
Step4: <table>
Step5: There are many ways you might visualize a tensor with more than two axes.
Step6: Tensors often contain floats and ints, but have many other types, including
Step7: Tensors are used in all kinds of operations (ops).
Step8: About shapes
Step9: <table>
Step10: While axes are often referred to by their indices, you should always keep track of the meaning of each. Often axes are ordered from global to local
Step11: Indexing with a scalar removes the dimension
Step12: Indexing with a : slice keeps the dimension
Step13: Multi-axis indexing
Step14: Passing an integer for each index, the result is a scalar.
Step15: You can index using any combination of integers and slices
Step16: Here is an example with a 3-axis tensor
Step17: <table>
Step18: You can reshape a tensor into a new shape. Reshaping is fast and cheap as the underlying data does not need to be duplicated.
Step19: The data maintains its layout in memory and a new tensor is created, with the requested shape, pointing to the same data. TensorFlow uses C-style "row-major" memory ordering, where incrementing the right-most index corresponds to a single step in memory.
Step20: If you flatten a tensor you can see what order it is laid out in memory.
Step21: Typically the only reasonable uses of tf.reshape are to combine or split adjacent axes (or add/remove 1s).
Step22: <table>
Step23: <table>
Step24: Broadcasting
Step25: Likewise, 1-sized dimensions can be stretched out to match the other arguments. Both arguments can be stretched in the same computation.
Step26: <table>
Step27: Most of the time, broadcasting is both time and space efficient, as the broadcast operation never materializes the expanded tensors in memory.
Step28: Unlike a mathematical op, for example, broadcast_to does nothing special to save memory. Here, you are materializing the tensor.
Step29: Instead create a tf.RaggedTensor using tf.ragged.constant
Step30: The shape of a tf.RaggedTensor contains unknown dimensions
Step31: String tensors
Step32: And a vector of strings
Step33: In the above printout the b prefix indicates that tf.string dtype is not a unicode string, but a byte-string. See the Unicode Tutorial for more about working with unicode text in TensorFlow.
Step34: Some basic functions with strings can be found in tf.strings, including tf.strings.split.
Step35: <table>
Step36: Although you can't use tf.cast to turn a string tensor into numbers, you can convert it into bytes, and then into numbers.
Step37: The tf.string dtype is used for all raw bytes data in TensorFlow. The tf.io module contains functions for converting data to and from bytes, including decoding images and parsing csv.
Step38: Lab Task 3
Step39: Create a variable
Step40: A variable looks and acts like a tensor, and, in fact, is a data structure backed by a tf.Tensor. Like tensors, they have a dtype and a shape, and can be exported to NumPy.
Step41: Most tensor operations work on variables as expected, although variables cannot be reshaped.
Step42: As noted above, variables are backed by tensors. You can reassign the tensor using tf.Variable.assign. Calling assign does not (usually) allocate a new tensor; instead, the existing tensor's memory is reused.
Step43: If you use a variable like a tensor in operations, you will usually operate on the backing tensor.
Step44: Lifecycles, naming, and watching
Step45: Variable names are preserved when saving and loading models. By default, variables in models will acquire unique variable names automatically, so you don't need to assign them yourself unless you want to.
Step46: Placing variables and tensors
Step47: It's possible to set the location of a variable or tensor on one device and do the computation on another device. This will introduce delay, as data needs to be copied between the devices.
| <ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import numpy as np
print("TensorFlow version: ",tf.version.VERSION)
# This will be an int32 tensor by default; see "dtypes" below.
rank_0_tensor = tf.constant(4)
print(rank_0_tensor)
# Let's make this a float tensor.
rank_1_tensor = tf.constant([2.0, 3.0, 4.0])
print(rank_1_tensor)
# If we want to be specific, we can set the dtype (see below) at creation time
rank_2_tensor = tf.constant([[1, 2],
[3, 4],
[5, 6]], dtype= # TODO 1a
# TODO: Your code goes here.
print(rank_2_tensor)
# There can be an arbitrary number of
# axes (sometimes called "dimensions")
rank_3_tensor = tf.constant([
[[0, 1, 2, 3, 4],
[5, 6, 7, 8, 9]],
[[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19]],
[[20, 21, 22, 23, 24],
[25, 26, 27, 28, 29]],])
print(rank_3_tensor)
# Convert a tensor to a NumPy array using `np.array` method
# TODO 1b
# TODO -- Your code here.
# Convert a tensor to a NumPy array using `tensor.numpy` method
# TODO 1c
# TODO -- Your code here.
a = tf.constant([[1, 2],
[3, 4]])
b = tf.constant([[1, 1],
[1, 1]]) # Could have also said `tf.ones([2,2])`
print(tf.add(a, b), "\n")
print(tf.multiply(a, b), "\n")
print(tf.matmul(a, b), "\n")
print(a + b, "\n") # element-wise addition
print(a * b, "\n") # element-wise multiplication
print(a @ b, "\n") # matrix multiplication
c = tf.constant([[4.0, 5.0], [10.0, 1.0]])
# Find the largest value
print(tf.reduce_max(c))
# TODO 1d
# Find the index of the largest value
# TODO -- Your code here.
# Compute the softmax
# TODO -- Your code here.
rank_4_tensor = tf.zeros([3, 2, 4, 5])
print("Type of every element:", rank_4_tensor.dtype)
print("Number of dimensions:", rank_4_tensor.ndim)
print("Shape of tensor:", rank_4_tensor.shape)
print("Elements along axis 0 of tensor:", rank_4_tensor.shape[0])
print("Elements along the last axis of tensor:", rank_4_tensor.shape[-1])
print("Total number of elements (3*2*4*5): ", tf.size(rank_4_tensor).numpy())
rank_1_tensor = tf.constant([0, 1, 1, 2, 3, 5, 8, 13, 21, 34])
print(rank_1_tensor.numpy())
print("First:", rank_1_tensor[0].numpy())
print("Second:", rank_1_tensor[1].numpy())
print("Last:", rank_1_tensor[-1].numpy())
print("Everything:", rank_1_tensor[:].numpy())
print("Before 4:", rank_1_tensor[:4].numpy())
print("From 4 to the end:", rank_1_tensor[4:].numpy())
print("From 2, before 7:", rank_1_tensor[2:7].numpy())
print("Every other item:", rank_1_tensor[::2].numpy())
print("Reversed:", rank_1_tensor[::-1].numpy())
print(rank_2_tensor.numpy())
# Pull out a single value from a 2-rank tensor
print(rank_2_tensor[1, 1].numpy())
# Get row and column tensors
print("Second row:", rank_2_tensor[1, :].numpy())
print("Second column:", rank_2_tensor[:, 1].numpy())
print("Last row:", rank_2_tensor[-1, :].numpy())
print("First item in last column:", rank_2_tensor[0, -1].numpy())
print("Skip the first row:")
print(rank_2_tensor[1:, :].numpy(), "\n")
print(rank_3_tensor[:, :, 4])
# Shape returns a `TensorShape` object that shows the size on each dimension
var_x = tf.Variable(tf.constant([[1], [2], [3]]))
print(var_x.shape)
# You can convert this object into a Python list, too
print(var_x.shape.as_list())
# We can reshape a tensor to a new shape.
# Note that we're passing in a list
# TODO 2a
# TODO -- Your code here.
print(var_x.shape)
print(reshaped.shape)
print(rank_3_tensor)
# A `-1` passed in the `shape` argument says "Whatever fits".
print(tf.reshape(rank_3_tensor, [-1]))
print(tf.reshape(rank_3_tensor, [3*2, 5]), "\n")
print(tf.reshape(rank_3_tensor, [3, -1]))
# Bad examples: don't do this
# You can't reorder axes with reshape.
print(tf.reshape(rank_3_tensor, [2, 3, 5]), "\n")
# This is a mess
print(tf.reshape(rank_3_tensor, [5, 6]), "\n")
# This doesn't work at all
try:
tf.reshape(rank_3_tensor, [7, -1])
except Exception as e: print(e)
# Use the `Tensor.dtype` property
# You can cast from type to type
# TODO 2b
the_f64_tensor = # TODO -- Your code here.
the_f16_tensor = # TODO -- Your code here.
# Now, let's cast to an uint8 and lose the decimal precision
the_u8_tensor = # TODO -- Your code here.
print(the_u8_tensor)
x = tf.constant([1, 2, 3])
y = tf.constant(2)
z = tf.constant([2, 2, 2])
# All of these are the same computation
print(tf.multiply(x, 2))
print(x * y)
print(x * z)
# These are the same computations
x = tf.reshape(x,[3,1])
y = tf.range(1, 5)
print(x, "\n")
print(y, "\n")
print(tf.multiply(x, y))
x_stretch = tf.constant([[1, 1, 1, 1],
[2, 2, 2, 2],
[3, 3, 3, 3]])
y_stretch = tf.constant([[1, 2, 3, 4],
[1, 2, 3, 4],
[1, 2, 3, 4]])
print(x_stretch * y_stretch) # Again, operator overloading
print(tf.broadcast_to(tf.constant([1, 2, 3]), [3, 3]))
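# Extra illustrative example (added, not part of the original lab): broadcasting
# follows NumPy-style rules - trailing axes must either match or have size 1.
x_bc = tf.ones([3, 1])
y_bc = tf.ones([1, 4])
print((x_bc + y_bc).shape)  # broadcasts to TensorShape([3, 4])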
ragged_list = [
[0, 1, 2, 3],
[4, 5],
[6, 7, 8],
[9]]
try:
tensor = tf.constant(ragged_list)
except Exception as e: print(e)
# TODO 2c
ragged_tensor = # TODO -- Your code here.
print(ragged_tensor)
print(ragged_tensor.shape)
# Tensors can be strings, too here is a scalar string.
scalar_string_tensor = tf.constant("Gray wolf")
print(scalar_string_tensor)
# If we have three string tensors of different lengths, this is OK.
tensor_of_strings = tf.constant(["Gray wolf",
"Quick brown fox",
"Lazy dog"])
# Note that the shape is (3,), indicating that it is 3 x unknown.
print(tensor_of_strings)
tf.constant("🥳👍")
# We can use split to split a string into a set of tensors
print(tf.strings.split(scalar_string_tensor, sep=" "))
# ...but it turns into a `RaggedTensor` if we split up a tensor of strings,
# as each string might be split into a different number of parts.
print(tf.strings.split(tensor_of_strings))
text = tf.constant("1 10 100")
print(tf.strings.to_number(tf.strings.split(text, " ")))
byte_strings = tf.strings.bytes_split(tf.constant("Duck"))
byte_ints = tf.io.decode_raw(tf.constant("Duck"), tf.uint8)
print("Byte strings:", byte_strings)
print("Bytes:", byte_ints)
# Or split it up as unicode and then decode it
unicode_bytes = tf.constant("アヒル 🦆")
unicode_char_bytes = tf.strings.unicode_split(unicode_bytes, "UTF-8")
unicode_values = tf.strings.unicode_decode(unicode_bytes, "UTF-8")
print("\nUnicode bytes:", unicode_bytes)
print("\nUnicode chars:", unicode_char_bytes)
print("\nUnicode values:", unicode_values)
# Sparse tensors store values by index in a memory-efficient manner
# TODO 2d
sparse_tensor = # TODO -- Your code here.
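# One possible completion of TODO 2d (a sketch -- the indices and values are made up):
sparse_tensor = tf.sparse.SparseTensor(indices=[[0, 0], [1, 2]],
                                       values=[1, 2],
                                       dense_shape=[3, 4])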
print(sparse_tensor, "\n")
# We can convert sparse tensors to dense
print(tf.sparse.to_dense(sparse_tensor))
import tensorflow as tf
# Uncomment to see where your variables get placed (see below)
# tf.debugging.set_log_device_placement(True)
# TODO 3a
my_tensor = # TODO -- Your code here.
my_variable = # TODO -- Your code here.
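# One possible completion of TODO 3a (a sketch -- the values are made up):
my_tensor = tf.constant([[1.0, 2.0], [3.0, 4.0]])
my_variable = tf.Variable(my_tensor)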
# Variables can be all kinds of types, just like tensors
bool_variable = tf.Variable([False, False, False, True])
complex_variable = tf.Variable([5 + 4j, 6 + 1j])
print("Shape: ",my_variable.shape)
print("DType: ",my_variable.dtype)
print("As NumPy: ", my_variable.numpy)
print("A variable:",my_variable)
print("\nViewed as a tensor:", tf.convert_to_tensor(my_variable))
print("\nIndex of highest value:", tf.argmax(my_variable))
# This creates a new tensor; it does not reshape the variable.
print("\nCopying and reshaping: ", tf.reshape(my_variable, ([1,4])))
a = tf.Variable([2.0, 3.0])
# This will keep the same dtype, float32
a.assign([1, 2])
# Not allowed as it resizes the variable:
try:
a.assign([1.0, 2.0, 3.0])
except Exception as e: print(e)
a = tf.Variable([2.0, 3.0])
# Create b based on the value of a
b = tf.Variable(a)
a.assign([5, 6])
# a and b are different
print(a.numpy())
print(b.numpy())
# There are other versions of assign
print(a.assign_add([2,3]).numpy()) # [7. 9.]
print(a.assign_sub([7,9]).numpy()) # [0. 0.]
# Create a and b; they have the same value but are backed by different tensors.
a = tf.Variable(my_tensor, name="Mark")
# A new variable with the same name, but different value
# Note that the scalar add is broadcast
b = tf.Variable(my_tensor + 1, name="Mark")
# These are elementwise-unequal, despite having the same name
print(a == b)
step_counter = tf.Variable(1, trainable=False)
with tf.device('CPU:0'):
# Create some tensors
a = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)
print(c)
with tf.device('CPU:0'):
a = tf.Variable([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.Variable([[1.0, 2.0, 3.0]])
with tf.device('GPU:0'):
# Element-wise multiply
k = a * b
print(k)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.
Step2: We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.
Step3: Training
Step4: Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss.
Step5: Checking out the results
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', validation_size=0)
img = mnist.train.images[0]
plt.imshow(img.reshape((28, 28)), cmap='Greys_r')
img
tf.reset_default_graph()
# Size of the encoding layer (the hidden layer)
encoding_dim = 32 # feel free to change this value
# Input and target placeholders
inputs_ = tf.placeholder(tf.float32, [None, 28*28], 'input')
targets_ = tf.placeholder(tf.float32, [None, 28*28], 'target')
# Output of hidden layer, single fully connected layer here with ReLU activation
encoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu, name='encoded')
# Output layer logits, fully connected layer with no activation
logits = tf.layers.dense(encoded, 28*28, activation=None, name='logits')
# Sigmoid output from logits
decoded = tf.sigmoid(logits, name='decoded')
# Sigmoid cross-entropy loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits, name='loss')
# Mean of the loss
cost = tf.reduce_mean(loss, name='cost')
# Adam optimizer
opt = tf.train.AdamOptimizer(0.0005).minimize(cost)
# Create the session
sess = tf.Session()
epochs = 20
batch_size = 200
sess.run(tf.global_variables_initializer())
for e in range(epochs):
for ii in range(mnist.train.num_examples//batch_size):
batch = mnist.train.next_batch(batch_size)
feed = {inputs_: batch[0], targets_: batch[0]}
batch_cost, _ = sess.run([cost, opt], feed_dict=feed)
print("Epoch: {}/{}...".format(e+1, epochs),
"Training loss: {:.4f}".format(batch_cost))
fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(10,2))
in_imgs = mnist.test.images[:10]
reconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})
for images, row in zip([in_imgs, reconstructed], axes):
for img, ax in zip(images, row):
ax.imshow(img.reshape((28, 28)), cmap='Greys_r')
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
fig.tight_layout(pad=0.1)
sess.close()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: OK, now let's write a traitlets class that does the same thing
Step4: Roadmap
Step5: Trying it out...
Step6: Testing the result
| <ASSISTANT_TASK:>
Python Code:
import json
import jsonschema
simple_schema = {
"type": "object",
"properties": {
"foo": {"type": "string"},
"bar": {"type": "number"}
}
}
good_instance = {
"foo": "hello world",
"bar": 3.141592653,
}
bad_instance = {
"foo" : 42,
"bar" : "string"
}
# Should succeed
jsonschema.validate(good_instance, simple_schema)
# Should fail
try:
jsonschema.validate(bad_instance, simple_schema)
except jsonschema.ValidationError as err:
print(err)
import traitlets as T
class SimpleInstance(T.HasTraits):
foo = T.Unicode()
bar = T.Float()
# Should succeed
SimpleInstance(**good_instance)
# Should fail
try:
SimpleInstance(**bad_instance)
except T.TraitError as err:
print(err)
import jinja2
OBJECT_TEMPLATE = """
{%- for import in cls.imports %}
{{ import }}
{%- endfor %}
class {{ cls.classname }}({{ cls.baseclass }}):
{%- for (name, prop) in cls.properties.items() %}
{{ name }} = {{ prop.trait_code }}
{%- endfor %}
"""
class JSONSchema(object):
    """A class to wrap JSON Schema objects and reason about their contents"""
object_template = OBJECT_TEMPLATE
def __init__(self, schema, root=None):
self.schema = schema
self.root = root or schema
@property
def type(self):
# TODO: should the default type be considered object?
return self.schema.get('type', 'object')
@property
def trait_code(self):
type_dict = {'string': 'T.Unicode()',
'number': 'T.Float()',
'integer': 'T.Integer()',
'boolean': 'T.Bool()'}
if self.type not in type_dict:
raise NotImplementedError()
return type_dict[self.type]
@property
def classname(self):
# TODO: deal with non-root schemas somehow...
if self.schema is self.root:
return "RootInstance"
else:
raise NotImplementedError("Non-root object schema")
@property
def baseclass(self):
return "T.HasTraits"
@property
def imports(self):
return ["import traitlets as T"]
@property
def properties(self):
return {key: JSONSchema(val) for key, val in self.schema.get('properties', {}).items()}
def object_code(self):
return jinja2.Template(self.object_template).render(cls=self)
code = JSONSchema(simple_schema).object_code()
print(code)
exec(code) # defines RootInstance
# Good instance should validate correctly
RootInstance(**good_instance)
# Bad instance should raise a TraitError
try:
RootInstance(**bad_instance)
except T.TraitError as err:
print(err)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We have seen how to input data from csv and xls files -- either online or from our computer -- and through APIs. Sometimes the data is only available as a specific part of a website.
Step2: After running our request, we get a Response object. This object has a status_code property, which indicates if the page was downloaded successfully.
Step3: You might want to query for different things and download information for all of them.
Step4: Check tab completion
Step5: To print out the content of the html file, use the content or text properties
Step6: Extracting information from a web page
Step7: Print this out in a prettier way.
Step8: We can find all tags of a certain type with the find_all method. This returns a list.
Step9: To get the first paragraph in the html write
Step10: This is a lot of information and we want to extract some part of it. Use the text or get_text() method to get the text content.
Step11: This is still messy. We will need a smarter search.
Step12: Look for tags based on their class. This is extremely useful for efficiently locating information.
Step13: We are getting more cells than we want -- there were only 120 listings on the page. Check the ads with "Inspect Element". There are duplicates. We need a different tag level (<li>)
Step14: What's going wrong? Some ads don't have a price listed, so we can't retrieve it.
Step15: If it does not find the price, it returns a NoneType. We might exploit this fact to select only the valid links.
Step16: We only have 118 listings because 2 listings did not have a price.
Step17: We could do text analysis and see which words are common in ads that have a relatively higher price.
Step18: If we start clicking on the 2nd and 3rd pages of the results, we can see that there is a structure in how their URLs are defined.
Step19: We have the total number of listings with the given search specification. Breaking down the steps
Step20: We have scraped all the listings from CL in section "Rooms and Shares" for the East Village.
Step21: We can check whether it's working fine at http
Step22: Define a dictionary for the parameters we want to pass to the GET request for NWS server.
Step23: Recall the format of the url associated with a particular location
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd # data package
import matplotlib.pyplot as plt # graphics
import datetime as dt # date tools, used to note current date
%matplotlib inline
import requests # you might have to install this
url = 'https://newyork.craigslist.org/search/roo?query=east+village&availabilityMode=0'
cl = requests.get(url)
cl
cl.status_code
url = 'https://newyork.craigslist.org/search/roo'
keys = {'query' : 'east village', 'availabilityMode' : '0'}
cl_extra = requests.get(url, params=keys)
# see if the URL was specified successfully
cl_extra.url
cl.url
cl.text[:300]
cl.content[:500] # this works also for information which is not purely text
from bs4 import BeautifulSoup
BeautifulSoup?
cl_soup = BeautifulSoup(cl.content, 'html.parser')
#print(cl_soup.prettify())
print('Type:', type(cl_soup))
# we can access a tag
print('Title: ', cl_soup.title)
# or only the text content
print('Title: ', cl_soup.title.text) # or
print('Title: ', cl_soup.title.get_text())
cl_soup.find_all?
cl_soup.find_all('p')[0]
cl_soup.find_all('p')[0].get_text()
list(cl_soup.find_all('p')[0].children)
cl_soup.find_all('span', class_='result-price')[0].get_text()
cl_soup.find_all('span', class_='result-price')[:10]
prices = cl_soup.find_all('span', class_='result-price')
price_data = [price.get_text() for price in prices]
price_data[:10]
len(price_data)
cl_soup.find_all('li', class_='result-row')[0]
ads = cl_soup.find_all('li', class_='result-row')
# we can access values of the keys by using a dictionary like syntax
ads[5].find('a', class_='result-title hdrlnk')
ads[5].find('a', class_='result-title hdrlnk')['href']
data = [[ad.find('a', class_='result-title hdrlnk').get_text(),
ad.find('a', class_='result-title hdrlnk')['data-id'],
ad.find('span', class_='result-price').get_text()] for ad in ads ]
# if it exists then the type is
type(ads[0].find('span', class_='result-price'))
import bs4
data = [[ad.find('a', class_='result-title hdrlnk').get_text(),
ad.find('a', class_='result-title hdrlnk')['data-id'],
ad.find('span', class_='result-price').get_text()] for ad in ads
if type(ad.find('span', class_='result-price'))==bs4.element.Tag]
data[:10]
df = pd.DataFrame(data)
df.head(10)
df.shape
df.columns = ['Title', 'ID', 'Price']
df.head()
cl_soup.find('span', class_='totalcount')
# First we get the total number of listings in real time
url = 'https://newyork.craigslist.org/search/roo?query=east+village&availabilityMode=0'
cl = requests.get(url)
cl_soup = BeautifulSoup(cl.content, 'html.parser')
total_count = int(cl_soup.find('span', class_='totalcount').get_text())
print(total_count)
# 1) Specify the url
for page in range(0, total_count, 120):
print('https://newyork.craigslist.org/search/roo?s={}&availabilityMode=0&query=east%20village'.format(page))
# Next we write a loop to scrape all pages
df = pd.DataFrame({'Title' : [], 'ID' : [], 'Price' : []})
for page in range(0, total_count, 120):
url = 'https://newyork.craigslist.org/search/roo?s={}&availabilityMode=0&query=east%20village'.format(page)
cl = requests.get(url)
cl_soup = BeautifulSoup(cl.content, 'html.parser')
ads = cl_soup.find_all('li', class_='result-row')
data = pd.DataFrame([[ad.find('a', class_='result-title hdrlnk').get_text(),
ad.find('a', class_='result-title hdrlnk')['data-id'],
ad.find('span', class_='result-price').get_text()] for ad in ads
if type(ad.find('span', class_='result-price'))==bs4.element.Tag],
columns=['Title', 'ID', 'Price'])
df = df.append(data, ignore_index=True)
df.head()
# Do the same using the `extend` method
data = []
for page in range(0, total_count, 120):
url = 'https://newyork.craigslist.org/search/roo?s={}&availabilityMode=0&query=east%20village'.format(page)
cl = requests.get(url)
cl_soup = BeautifulSoup(cl.content, 'html.parser')
ads = cl_soup.find_all('li', class_='result-row')
data_page = [[ad.find('a', class_='result-title hdrlnk').get_text(),
ad.find('a', class_='result-title hdrlnk')['data-id'],
ad.find('span', class_='result-price').get_text()] for ad in ads
if type(ad.find('span', class_='result-price'))==bs4.element.Tag]
data.extend(data_page)
df = pd.DataFrame(data, columns=['Title', 'ID', 'Price'])
df.head()
df.shape
df.tail()
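# A possible follow-up (sketch): strip the '$' sign and thousands separators so the
# prices can be analysed numerically, e.g. for the text analysis mentioned above.
df['Price_num'] = df['Price'].str.lstrip('$').str.replace(',', '').astype(float)
df['Price_num'].describe()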
from pygeocoder import Geocoder
# check for one of the locations how it's working
# some addresses might not be valid -- it goes through Google's API
loc = Geocoder.geocode('Bozeman, Montana')
loc.coordinates
Geocoder.geocode('Stanford, California').coordinates
locations = ['Bozeman, Montana', 'White Sands National Monument', 'Stanford University, California']
coordinates = [Geocoder.geocode(location).coordinates for location in locations]
coordinates
for location, coordinate in zip(locations, coordinates):
print('The coordinates of {} are:'.format(location), coordinate)
keys = {}
for location, coordinate in zip(locations, coordinates):
keys[location] = {'lat' : coordinate[0], 'lon' : coordinate[1]}
keys
keys[locations[0]]
url = 'http://forecast.weather.gov/MapClick.php'
nws = requests.get(url, params=keys[locations[0]])
nws.status_code
nws.url
nws.content[:300]
nws_soup = BeautifulSoup(nws.content, 'html.parser')
seven = nws_soup.find('div', id='seven-day-forecast-container')
seven.find(text='Friday')
seven.find(text='Friday').parent
seven.find(text='Friday').parent.parent
seven.find(text='Friday').parent.parent.find('p', class_='temp temp-high').get_text()
data = []
for location in locations:
nws = requests.get(url, params=keys[location])
nws_soup = BeautifulSoup(nws.content, 'html.parser')
seven = nws_soup.find('div', id='seven-day-forecast-container')
temp = seven.find(text='Friday').parent.parent.find('p', class_='temp temp-high').get_text()
data.append([location, temp])
df_weather = pd.DataFrame(data, columns=['Location', 'Friday weather'])
df_weather
df_weather['high_temp'] = df_weather['Friday weather'].str.rsplit().str.get(1).astype(float)
df_weather['high_temp'].std()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create Keras model
Step2: Next, define the feature columns. mother_age and gestation_weeks should be numeric.
Step3: We can visualize the DNN using the Keras plot_model utility.
Step4: Train and evaluate
Step5: Visualize loss curve
Step6: Save the model
| <ASSISTANT_TASK:>
Python Code:
# Ensure the right version of Tensorflow is installed.
!pip freeze | grep tensorflow==2.1
# change these to try this notebook out
BUCKET = 'cloud-training-demos-ml'
PROJECT = 'cloud-training-demos'
REGION = 'us-central1'
import os
os.environ['BUCKET'] = BUCKET
os.environ['PROJECT'] = PROJECT
os.environ['REGION'] = REGION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
%%bash
ls *.csv
import shutil
import numpy as np
import tensorflow as tf
print(tf.__version__)
# Determine CSV, label, and key columns
CSV_COLUMNS = 'weight_pounds,is_male,mother_age,plurality,gestation_weeks,key'.split(',')
LABEL_COLUMN = 'weight_pounds'
KEY_COLUMN = 'key'
# Set default values for each CSV column. Treat is_male and plurality as strings.
DEFAULTS = [[0.0], ['null'], [0.0], ['null'], [0.0], ['nokey']]
def features_and_labels(row_data):
for unwanted_col in ['key']:
row_data.pop(unwanted_col)
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
# load the training data
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = (tf.data.experimental.make_csv_dataset(pattern, batch_size, CSV_COLUMNS, DEFAULTS)
.map(features_and_labels) # features, label
)
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(1000).repeat()
dataset = dataset.prefetch(1) # take advantage of multi-threading; 1=AUTOTUNE
return dataset
## Build a Keras wide-and-deep model using its Functional API
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
# Helper function to handle categorical columns
def categorical_fc(name, values):
orig = tf.feature_column.categorical_column_with_vocabulary_list(name, values)
wrapped = tf.feature_column.indicator_column(orig)
return orig, wrapped
def build_wd_model(dnn_hidden_units = [64, 32], nembeds = 3):
# input layer
deep_inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='float32')
for colname in ['mother_age', 'gestation_weeks']
}
wide_inputs = {
colname : tf.keras.layers.Input(name=colname, shape=(), dtype='string')
for colname in ['is_male', 'plurality']
}
inputs = {**wide_inputs, **deep_inputs}
# feature columns from inputs
deep_fc = {
colname : tf.feature_column.numeric_column(colname)
for colname in ['mother_age', 'gestation_weeks']
}
wide_fc = {}
is_male, wide_fc['is_male'] = categorical_fc('is_male', ['True', 'False', 'Unknown'])
plurality, wide_fc['plurality'] = categorical_fc('plurality',
['Single(1)', 'Twins(2)', 'Triplets(3)',
'Quadruplets(4)', 'Quintuplets(5)','Multiple(2+)'])
# bucketize the float fields. This makes them wide
age_buckets = tf.feature_column.bucketized_column(deep_fc['mother_age'],
boundaries=np.arange(15,45,1).tolist())
wide_fc['age_buckets'] = tf.feature_column.indicator_column(age_buckets)
gestation_buckets = tf.feature_column.bucketized_column(deep_fc['gestation_weeks'],
boundaries=np.arange(17,47,1).tolist())
wide_fc['gestation_buckets'] = tf.feature_column.indicator_column(gestation_buckets)
# cross all the wide columns. We have to do the crossing before we one-hot encode
crossed = tf.feature_column.crossed_column(
[is_male, plurality, age_buckets, gestation_buckets], hash_bucket_size=20000)
deep_fc['crossed_embeds'] = tf.feature_column.embedding_column(crossed, nembeds)
# the constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires that you specify: LayerConstructor()(inputs)
wide_inputs = tf.keras.layers.DenseFeatures(wide_fc.values(), name='wide_inputs')(inputs)
deep_inputs = tf.keras.layers.DenseFeatures(deep_fc.values(), name='deep_inputs')(inputs)
# hidden layers for the deep side
layers = [int(x) for x in dnn_hidden_units]
deep = deep_inputs
for layerno, numnodes in enumerate(layers):
deep = tf.keras.layers.Dense(numnodes, activation='relu', name='dnn_{}'.format(layerno+1))(deep)
deep_out = deep
# linear model for the wide side
wide_out = tf.keras.layers.Dense(10, activation='relu', name='linear')(wide_inputs)
# concatenate the two sides
both = tf.keras.layers.concatenate([deep_out, wide_out], name='both')
# final output is a linear activation because this is regression
output = tf.keras.layers.Dense(1, activation='linear', name='weight')(both)
model = tf.keras.models.Model(inputs, output)
model.compile(optimizer='adam', loss='mse', metrics=[rmse, 'mse'])
return model
print("Here is our Wide-and-Deep architecture so far:\n")
model = build_wd_model()
print(model.summary())
tf.keras.utils.plot_model(model, 'wd_model.png', show_shapes=False, rankdir='LR')
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, so it will wrap around
NUM_EVALS = 5 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample, but not so much that it slows down
trainds = load_dataset('train*', TRAIN_BATCH_SIZE, tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset('eval*', 1000, tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch)
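# Optional smoke test of the trained model (a sketch -- the feature values below are made up):
sample = {
    'is_male': np.array(['True']),
    'mother_age': np.array([26.0]),
    'plurality': np.array(['Single(1)']),
    'gestation_weeks': np.array([39.0]),
}
print(model.predict(sample))  # predicted weight in pounds for this single example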
# plot
import matplotlib.pyplot as plt
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(['loss', 'rmse']):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history['val_{}'.format(key)])
plt.title('model {}'.format(key))
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left');
import shutil, os, datetime
OUTPUT_DIR = 'babyweight_trained'
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR, datetime.datetime.now().strftime('%Y%m%d%H%M%S'))
tf.saved_model.save(model, EXPORT_PATH) # with default serving function
print("Exported trained model to {}".format(EXPORT_PATH))
!ls $EXPORT_PATH
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We plot the function to find $x_0$
Step2: We see a solution near $4$, so we choose $x_0=4$.
Step3: We can compute the iterations one by one
Step4: When do we stop? We can check for x3
Step5: Small enough? Then we can stop!
| <ASSISTANT_TASK:>
Python Code:
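# Assumed imports for this snippet (a sketch -- the original notebook most likely ran
# `%pylab inline`, which pulls these names into the global namespace):
from numpy import linspace, log, cos, sin
from matplotlib.pyplot import plot, gca
import mpld3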
def f(x):
return log(x) + cos(x) - 1
x = linspace(1,20,200)
y = f(x)
plot(x,y, lw=2)
plot([1,20],[0,0], lw=2, color='k')
ax = gca()
mpld3.display()
def fder(x):
return 1/x - sin(x)
x0=4
x1=x0- f(x0)/fder(x0)
print(x1)
x2=x1-f(x1)/fder(x1)
print(x2)
x3=x2-f(x2)/fder(x2)
print(x3)
f(x3)
x=4
for i in range(100):
x = x-f(x)/fder(x)
if abs(f(x)) < 0.001:
break
print(x)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Word Cloud from RSS feed titles
Step2: 2) HTML Parser
Step4: Modules for fetching and parsing HTML content
Step5: The get_title function should should process the HTML page stored in the global variable html_doc, and return the title of the page in a unicode string. get_title() should return u'Three Little Pigs'
Step6: The process_json function should load the dictionary stored as a JSON string in global variable json_str, and return the sum of the values in this dictionary. process_json() should return 8 because 3+2+1+1+1 = 8
Step7: The get_pigs function should process the HTML page stored in the global variable html_doc, and return the three pigs listed below 'there were three little pigs' in a JSON string. Note that it should return a string, not a list. get_pigs() should return '["Pig A", "Pig B", "Pig C"]'
Step8: The get_story_headings function should process the HTML page stored in the global variable html_doc, and return the three story headings in a JSON string. Note that it should return a string, not a list. get_story_headings() should return '["Story 1", "Story 2", "Story 3"]'
Step9: The get_houses function should process the HTML page stored in the global variable html_doc, and return information in the house table in a JSON string. Note that it should return a string, not a list. get_houses() should return '[["Pig A", "Straw"], ["Pig B", "Stick"], ["Pig C", "Brick"]]'
Step10: The get_links function should process the HTML page stored in the global variable html_doc, and return all url links in the page in a JSON string. Note that it should return a string, not a list. get_links() should return '["http
Step11: The treasure_hunting function should first visit http
| <ASSISTANT_TASK:>
Python Code:
# -*-coding: utf-8 -*-
import feedparser
import re
import string
# Create the regular expressions
reg1 = re.compile(r'<br />') #Regex to replace <br /> with \n (see reg1.sub)
reg2 = re.compile(r'(<!--.*?-->|<[^>]*>)') #Regex to clean all html tags (anything with <something>)
#alternative reg2
#reg2 = re.compile(r'<[^<]+?>')
#reg2 = re.compile(r'<[^>]+>')
reg3 = re.compile(r'&nbsp;') #Regex to clean all &nbsp; entities
reg4 = re.compile(r'\'') #Regex to clean all ' chars
#alternative reg4
#reg4 = re.compile(r"'")
# Parses the RSS feed from RSS
def parseFeeds(str):
d = feedparser.parse(str)
print "There are", len(d['items']), "items in", str
FILE_INPUT = open("NewsFeed.txt","w")
for item in d['items']:
first_filter = reg1.sub('\n', item.description)
second_filter = reg2.sub('', first_filter)
third_filter = reg3.sub(' ', second_filter)
item_description = reg4.sub('', third_filter)
try:
FILE_INPUT.write(item_description)
except IOError:
print "Error: can\'t find file or read data"
FILE_INPUT.close
#Main
if __name__ == '__main__':
# Provide a link to an RSS Feed
parseFeeds("http://rss.cnn.com/rss/cnn_topstories.rss")
# Alternative links
# parseFeeds("http://sports.espn.go.com/espn/rss/news")
# parseFeeds("http://www.reddit.com/r/python/.rss")
from IPython.display import Image
Image(filename='word cloud.png')
# -*-coding: utf-8 -*-
from bs4 import BeautifulSoup
import json, urllib2
# this is the html document used in this code
html_doc = """
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
"http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=iso-8859-1">
<title>Three Little Pigs</title>
<meta name="generator" content="Amaya, see http://www.w3.org/Amaya/">
</head>
<body>
<p>Once upon a time, there were <a
href="http://en.wikipedia.org/wiki/Three_Little_Pigs">three little pigs</a>:</p>
<ol>
<li><h2>Pig A</h2>
</li>
<li><h2>Pig B</h2>
</li>
<li><h2>Pig C</h2>
</li>
</ol>
<p>And unfortunately, there was a <a
href="http://en.wikipedia.org/wiki/Big_bad_wolf">big bad wolf</a> too.</p>
<p>There are many stories about them.</p>
<h2>Story 1</h2>
<p>This is story 1.</p>
<h2>Story 2</h2>
<p>This is story 2.</p>
<h2>Story 3</h2>
<p>This is story 3.</p>
<h1>Type of Houses Constructed</h1>
<table border="1" style="width: 100%">
<caption></caption>
<col>
<col>
<tbody>
<tr>
<td>Pig</td>
<td>House Type</td>
</tr>
<tr>
<td>Pig A</td>
<td>Straw</td>
</tr>
<tr>
<td>Pig B</td>
<td>Stick</td>
</tr>
<tr>
<td>Pig C</td>
<td>Brick</td>
</tr>
</tbody>
</table>
</body>
</html>
"""
# this is the json string used in this lab
json_str = '{"Belle": 3, "Aurora": 2, "Jasmine": 1, "Irene": 1, "Adella": 1}'
def get_title():
soup = BeautifulSoup(html_doc)
return soup.title.string
def process_json():
string = json.loads(json_str)
return sum(string.itervalues())
def get_pigs():
soup = BeautifulSoup(html_doc)
piglist = []
for pigs in soup.find_all('h2'):
pig = pigs.string
piglist.append(pig)
piglist.remove(piglist[5])
piglist.remove(piglist[4])
piglist.remove(piglist[3])
pigdata = json.dumps(piglist)
return pigdata
def get_story_headings():
soup = BeautifulSoup(html_doc)
storylist = []
for stories in soup.find_all('h2'):
story = stories.string
storylist.append(story)
storylist.remove(storylist[2])
storylist.remove(storylist[1])
storylist.remove(storylist[0])
storydata = json.dumps(storylist)
return storydata
def get_houses():
soup = BeautifulSoup(html_doc)
pighouselist = []
for houses in soup.find_all('td'):
house = houses.string
pighouselist.append(house)
pighouselist1 = pighouselist[2], pighouselist[3]
pighouselist2 = pighouselist[4], pighouselist[5]
pighouselist3 = pighouselist[6], pighouselist[7]
pighouselistfinal = pighouselist1, pighouselist2, pighouselist3
data = json.dumps(pighouselistfinal)
return data
def get_links():
soup = BeautifulSoup(html_doc)
list = []
for links in soup.find_all('a', href=True):
link = links.get('href')
list.append(link)
data = json.dumps(list)
return data
def treasure_hunting():
# Your code here: make sure to modify the return statement to pass back
# the correct value.
response = urllib2.urlopen('http://www.example.com').read()
soup = BeautifulSoup(response)
for treasurelinks in soup.find_all('a', href=True):
href = treasurelinks.get('href')
response2 = urllib2.urlopen(href).read().decode('utf-8')
utfresponse2 = response2.encode('utf-8')
treasuresoup = BeautifulSoup(utfresponse2)
for table in treasuresoup.find_all('table', {'class': 'iana-table'}):
tablelist = []
for row in table.find_all('tr'):
columns = row.find_all('td')
for column in columns:
column_string = column.string
tablelist.append(column_string)
return tablelist[8]
# Provided simple test() function used in main() to print
# what each function returns vs. what it's supposed to return.
def test(got, expected):
if got == expected:
prefix = ' OK '
else:
prefix = ' X '
print '%s got: %s expected: %s' % (prefix, repr(got), repr(expected))
def test2(got, expected):
if got == expected:
prefix = ' OK '
else:
prefix = ' X '
print '%s got: %s expected: %s' % (prefix, got, expected)
# Provided main() calls the above functions with interesting inputs,
# using test() to check if each result is correct or not.
def main():
print 'get_title'
test(get_title(), u'Three Little Pigs')
print 'process_json'
test(process_json(), 8)
print 'get_pigs'
test(get_pigs(), '["Pig A", "Pig B", "Pig C"]' )
print 'get_story_headings'
test(get_story_headings(), '["Story 1", "Story 2", "Story 3"]' )
print 'get_houses'
test(get_houses(), '[["Pig A", "Straw"], ["Pig B", "Stick"], ["Pig C", "Brick"]]')
print 'get_links'
test(get_links(), '["http://en.wikipedia.org/wiki/Three_Little_Pigs", "http://en.wikipedia.org/wiki/Big_bad_wolf"]')
print 'treasure_hunting'
test2(treasure_hunting(), u'\u6d4b\u8bd5')
# Standard boilerplate to call the main() function.
if __name__ == '__main__':
main()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Target Configuration
Step2: Experiments Configuration
Step3: Tests execution
| <ASSISTANT_TASK:>
Python Code:
import logging
from conf import LisaLogging
LisaLogging.setup()
import os
import json
from env import TestEnv
from executor import Executor
# Setup a target configuration
my_target_conf = {
# Target platform and board
"platform" : 'linux',
"board" : 'juno',
# Target board IP/MAC address
"host" : '192.168.0.1',
# Login credentials
"username" : 'root',
"password" : 'juno',
}
my_experiments_conf = {
# Folder where all the results will be collected
"results_dir" : "ExecutorExample",
# Platform configurations to test: you can specify any number of configurations
"confs" : [
{
"tag" : "base", # Relevant string to identify configuration
"flags" : ["ftrace", "freeze_userspace"], # Enable FTrace events, freeze userspace while running
"sched_features" : "NO_ENERGY_AWARE", # Disable EAS
"cpufreq" : { # Use PERFORMANCE CpuFreq
"governor" : "performance",
},
},
{
"tag" : "eas", # Relevant string to identify configuration
"flags" : ["ftrace", "freeze_userspace"], # Enable FTrace events, freeze userspace while running
"sched_features" : "ENERGY_AWARE", # Enable EAS
"cpufreq" : { # Use PERFORMANCE CpuFreq
"governor" : "performance",
},
},
],
# Workloads to run (on each platform configuration)
"wloads" : {
# Run hackbench with 1 group using pipes
"perf" : {
"type" : "perf_bench",
"conf" : {
"class" : "messaging",
"params" : {
"group" : 1,
"loop" : 10,
"pipe" : True,
"thread": True,
}
}
},
# Run a 20% duty-cycle periodic task
"rta" : {
"type" : "rt-app",
"loadref" : "big",
"conf" : {
"class" : "profile",
"params" : {
"p20" : {
"kind" : "Periodic",
"params" : {
"duty_cycle_pct" : 20,
},
},
},
},
},
},
# Number of iterations for each workload
"iterations" : 1,
}
my_test_conf = {
# FTrace events to collect for all the tests configuration which have
# the "ftrace" flag enabled
"ftrace" : {
"events" : [
"sched_switch",
"sched_wakeup",
"sched_wakeup_new",
"cpu_frequency",
],
"buffsize" : 80 * 1024,
},
# Tools required by the experiments
"tools" : [ 'trace-cmd', 'perf' ],
# Modules required by these experiments
"modules" : [ 'bl', 'cpufreq', 'cgroups' ],
}
executor = Executor(TestEnv(target_conf=my_target_conf, test_conf=my_test_conf), my_experiments_conf)
executor.run()
!tree {executor.te.res_dir}
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, two predicates
Step2: With these, we can calculate the probability that a new individual is male.
Step3: What does $0.8$ (or $80\%$) mean? We call this an independent probability, because we are assuming that the "sex" variable does not depend on any other variable in the dataset. Variable? Yes, that is the term used, but it is simply a column in the dataset.
Step4: Some predicates
Step5: Finally, we can answer
Step6: The reasoning is the same; if we use the counts, the probability is
Step7: From there, we answer the question by calculating
Step8: The answer to the previous question is $80\%$.
Step9: Notice that there is a pattern in this kind of question
| <ASSISTANT_TASK:>
Python Code:
PDSexo = ProbDist(
Sexo_M=4,
Sexo_F=1
)
PDSexo
def sexo_m(r): return 'Sexo_M' in r
def sexo_f(r): return 'Sexo_F' in r
P(sexo_m, PDSexo)
PDIdades = ProbDist(
Idade_A=4,
Idade_B=1,
Idade_C=0,
Idade_D=0
)
PDIdades
def idade_A(r) : return 'Idade_A' in r
def idade_B(r) : return 'Idade_B' in r
def idade_C(r) : return 'Idade_C' in r
def idade_D(r) : return 'Idade_D' in r
P(idade_B, PDIdades)
PDSexoIdade = joint(PDSexo, PDIdades, ' ')
PDSexoIdade
P(sexo_m, tal_que(idade_B, PDSexoIdade))
PA = P(sexo_m, PDSexoIdade)
PE = P(idade_B, PDSexoIdade)
Prob = PA * PE / PE
Prob
# Evidence
PE = P(idade_B, PDSexoIdade)
print('P(E) = P(Idade=B) = %.1f%%' % (PE * 100))
# Hypothesis A (Sexo = F)
PA = P(sexo_f, PDSexoIdade)
print('P(A) = P(Sexo=F) = %.1f%%' % (PA * 100))
# Hypothesis B (Sexo = M)
PB = P(sexo_m, PDSexoIdade)
print('P(B) = P(Sexo=M) = %.1f%%' % (PB * 100))
# Evidence, given Hypothesis A
PEA = P(idade_B, tal_que(sexo_f, PDSexoIdade))
print('P(E|A) = P(Idade=B|Sexo=F) = %.1f%%' % (PEA * 100))
# Evidence, given Hypothesis B
PEB = P(idade_B, tal_que(sexo_m, PDSexoIdade))
print('P(E|B) = P(Idade=B|Sexo=M) = %.1f%%' % (PEB * 100))
# Another way to find P(E)
PE2 = PEA * PA + PEB * PB
print('P(E) = P(Idade=B) = %.1f%%' % (PE2 * 100))
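# The general pattern (sketch): total probability P(E) = P(E|A)P(A) + P(E|B)P(B),
# and Bayes' rule P(B|E) = P(E|B) * P(B) / P(E), which is what PBE2 computes below.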
# desired probability (Sexo = M), given the Evidence -> P(B|E)
PBE = P(sexo_m, tal_que(idade_B, PDSexoIdade))
print('P(B|E) = P(Sexo=M|Idade=B) = %.1f%%' % (PBE * 100))
# another way to find P(B|E)
PBE2 = PEB * PB / PE
print('P(B|E) = P(Sexo=M|Idade=B) = %.1f%%' % (PBE2 * 100))
# another way to find P(B|E)
PBE3 = PB * PE / PE
print('P(B|E) = P(Sexo=M|Idade=B) = %.1f%%' % (PBE3 * 100))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Activation Functions
Step2: Auxiliary Functions
Step3: Cost Functions
Step4: Weight Initialization
Step5: Example 2
Step6: Gradient Checking
Step7: Multivariate Linear Regression - Perceptron Regression Exercise
Step8: Quadratic Regression
Step9: Cubic Regression
Step10: Logarithmic Regression
Step11: Exponential Regression
Step12: Binary Classification
Step13: XOR Gate
Step14: 2 Clusters
Step15: 4 Clusters
Step16: Circles
Step17: Moons
Step18: Spiral
Step19: Multiclass Classification
Step20: 4 Clusters - Multiclass
Step21: Spiral - 5 Classes
Step22: Make Classification - 4 Classes
Step23: Iris Dataset
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import _pickle as pkl
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.datasets.samples_generator import make_blobs, make_circles, make_moons, make_classification
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder
from utils import plot
from utils.samples_generator import make_spiral, make_square, make_cubic, make_exp, make_log10
%matplotlib inline
def linear(x, derivative=False):
return np.ones_like(x) if derivative else x
def sigmoid(x, derivative=False):
if derivative:
y = sigmoid(x)
return y*(1 - y)
return 1.0/(1.0 + np.exp(-x))
def tanh(x, derivative=False):
if derivative:
y = tanh(x)
return 1 - y**2
return (np.exp(x) - np.exp(-x))/(np.exp(x) + np.exp(-x))
def relu(x, derivative=False):
if derivative:
return np.where(x <= 0, 0, 1)
return np.maximum(0, x)
def leaky_relu(x, derivative=False):
alpha = 0.1
if derivative:
return np.where(x <= 0, alpha, 1)
return np.where(x <= 0, alpha*x, x)
def elu(x, derivative=False):
alpha = 1.0
if derivative:
y = elu(x)
return np.where(x <= 0, y + alpha, 1)
return np.where(x <= 0, alpha*(np.exp(x) - 1), x)
def softmax(x, y_oh=None, derivative=False):
if derivative:
y_pred = softmax(x)
k = np.nonzero(y_pred * y_oh)
pk = y_pred[k]
y_pred[k] = pk * (1.0 - pk)
return y_pred
exp = np.exp(x)
return exp / np.sum(exp, axis=1, keepdims=True)
def neg_log_likelihood(y_oh, y_pred, derivative=False):
k = np.nonzero(y_pred * y_oh)
pk = y_pred[k]
if derivative:
y_pred[k] = (-1.0 / pk)
return y_pred
return np.mean(-np.log(pk))
def softmax_neg_log_likelihood(y_oh, y_pred, derivative=False):
y_softmax = softmax(y_pred)
if derivative:
return -(y_oh - y_softmax) / y_oh.shape[0]
return neg_log_likelihood(y_oh, y_softmax)
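# Quick sanity check of the cost functions above (a sketch -- the toy values are made up):
_y_oh = np.array([[1.0, 0.0, 0.0]])
_scores = np.array([[2.0, 1.0, 0.1]])
print(softmax(_scores))                            # each row sums to 1
print(softmax_neg_log_likelihood(_y_oh, _scores))  # scalar loss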
x = np.array([[0.05, 0.10]])
y = np.array([[0.01, 0.99]])
w1 = np.array([[0.15, 0.20], [0.25, 0.30]])
b1 = np.array([[0.35]])
w2 = np.array([[0.40, 0.45], [0.50, 0.55]])
b2 = np.array([[0.60]])
# insert your network here!
x = np.array([[0.1, 0.2, 0.7]])
y = np.array([[1, 0, 0]])
D_in, D_out = x.shape[1], y.shape[1]
w1 = np.array([[0.1, 0.2, 0.3], [0.3, 0.2, 0.7], [0.4, 0.3, 0.9]])
b1 = np.ones((1,3))
w2 = np.array([[0.2, 0.3, 0.5], [0.3, 0.5, 0.7], [0.6, 0.4, 0.8]])
b2 = np.ones((1,3))
w3 = np.array([[0.1, 0.4, 0.8], [0.3, 0.7, 0.2], [0.5, 0.2, 0.9]])
b3 = np.ones((1,3))
# insert your network here!
data = np.loadtxt('data/medidas.csv', delimiter=',', skiprows=1)
print(data.shape)
x, y = data[:,0].reshape(-1,1), data[:,1].reshape(-1,1)
print(x.shape, y.shape)
plt.scatter(x, y)
minmax = MinMaxScaler(feature_range=(-1, 1))
x = minmax.fit_transform(x.astype(np.float64))
print(x.min(), x.max())
plt.scatter(x, y)
input_dim, output_dim = x.shape[1], y.shape[1]
# insert your network here!
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.scatter(x, y)
plt.plot(x, nn.predict(x), c='green')
data = np.loadtxt('data/notas.csv', delimiter=',', skiprows=1)
print(data.shape)
x, y = data[:,:-1], data[:,-1].reshape(-1,1)
print(x.shape, y.shape)
minmax = MinMaxScaler(feature_range=(-1, 1))
x = minmax.fit_transform(x.astype(np.float64))
print(x.min(axis=0), x.max(axis=0))
plt.scatter(x, y)
input_dim, output_dim = x.shape[1], y.shape[1]
# insert your network here!
x, y = make_square(n_samples=100, x_min=-10, x_max=10, a=1, b=1, c=1, noise=10)
print(x.shape, y.shape)
plt.scatter(x, y)
minmax = MinMaxScaler(feature_range=(-1, 1))
x = minmax.fit_transform(x.astype(np.float64))
print(x.min(axis=0), x.max(axis=0))
input_dim, output_dim = x.shape[1], y.shape[1]
# insert your network here!
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.scatter(x, y)
plt.plot(x, nn.predict(x), c='green')
x, y = make_cubic(n_samples=100, x_min=-4, x_max=4, a=1, b=0, c=-10, d=0, noise=3)
print(x.shape, y.shape)
plt.scatter(x, y)
minmax = MinMaxScaler(feature_range=(-1, 1))
x = minmax.fit_transform(x.astype(np.float64))
print(x.min(axis=0), x.max(axis=0))
input_dim, output_dim = x.shape[1], y.shape[1]
# insert your network here!
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.scatter(x, y)
plt.plot(x, nn.predict(x), c='green')
x, y = make_log10(n_samples=100, x_min=1, x_max=100, noise=0.3)
print(x.shape, y.shape)
plt.scatter(x, y)
minmax = MinMaxScaler(feature_range=(-1, 1))
x = minmax.fit_transform(x.astype(np.float64))
print(x.min(axis=0), x.max(axis=0))
input_dim, output_dim = x.shape[1], y.shape[1]
# insert your network here!
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.scatter(x, y)
plt.plot(x, nn.predict(x), c='green')
x, y = make_exp(n_samples=100, x_min=0, x_max=5, noise=10)
print(x.shape, y.shape)
plt.scatter(x, y)
minmax = MinMaxScaler(feature_range=(-1, 1))
x = minmax.fit_transform(x.astype(np.float64))
print(x.min(axis=0), x.max(axis=0))
input_dim, output_dim = x.shape[1], y.shape[1]
# insert your network here!
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.scatter(x, y)
plt.plot(x, nn.predict(x), c='green')
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1]).reshape(-1, 1)
print(x.shape, y.shape)
plt.scatter(x[:,0], x[:,1], c=list(np.array(y).ravel()), s=15, cmap='bwr')
input_dim, output_dim = x.shape[1], y.shape[1]
# insert your network here!
y_pred = nn.predict(x)
print('Predições:', y_pred, sep='\n')
print('Acurácia: {:.2f}%'.format(100*accuracy_score(y, y_pred > 0.5)))
plot.classification_predictions(x, y, is_binary=True, nn=nn, cmap='bwr')
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0]).reshape(-1, 1)
print(x.shape, y.shape)
plt.scatter(x[:,0], x[:,1], c=list(np.array(y).ravel()), s=15, cmap='bwr')
input_dim, output_dim = x.shape[1], y.shape[1]
# insert your network here!
y_pred = nn.predict(x)
print('Predições:', y_pred, sep='\n')
print('Acurácia: {:.2f}%'.format(100*accuracy_score(y, y_pred > 0.5)))
plot.classification_predictions(x, y, is_binary=True, nn=nn, cmap='bwr')
x, y = make_blobs(n_samples=100, n_features=2, centers=2, random_state=1234)
y = y.reshape(-1, 1)
print(x.shape, y.shape)
plt.scatter(x[:,0], x[:,1], c=list(np.array(y).ravel()), s=15, cmap='bwr')
input_dim, output_dim = x.shape[1], y.shape[1]
# insert your network here!
y_pred = nn.predict(x)
threshold = 0 if nn.layers[-1].activation == linear else 0.5
print('Acurácia: {:.2f}%'.format(100*accuracy_score(y, y_pred >= threshold)))
plot.classification_predictions(x, y, is_binary=True, nn=nn, threshold=threshold, cmap='bwr')
x, y = make_blobs(n_samples=500, n_features=2, cluster_std=0.9, centers=[(-3, -3), (3, 3), (-3, 3), (3, -3)], random_state=1234)
y = y.reshape(-1, 1)
y = np.where(y >= 2, 1, 0)
print(x.shape, y.shape)
plt.scatter(x[:,0], x[:,1], c=list(np.array(y).ravel()), s=15, cmap='bwr')
input_dim, output_dim = x.shape[1], y.shape[1]
# insert your network here!
y_pred = nn.predict(x)
print('Acurácia: {:.2f}%'.format(100*accuracy_score(y, y_pred > 0.5)))
plot.classification_predictions(x, y, is_binary=True, nn=nn, threshold=0.5, cmap='bwr')
x, y = make_circles(n_samples=500, noise=0.1, factor=0.4, random_state=1234)
y = y.reshape(-1, 1)
print(x.shape, y.shape)
plt.scatter(x[:,0], x[:,1], c=list(np.array(y).ravel()), s=15, cmap='bwr')
input_dim, output_dim = x.shape[1], y.shape[1]
# insert your network here!
y_pred = nn.predict(x)
print('Acurácia: {:.2f}%'.format(100*accuracy_score(y, y_pred > 0.5)))
plot.classification_predictions(x, y, is_binary=True, nn=nn, threshold=0.5, cmap='bwr')
x, y = make_moons(200, noise=0.20)
y = y.reshape(-1, 1)
print(x.shape, y.shape)
plt.scatter(x[:,0], x[:,1], c=list(np.array(y).ravel()), s=15, cmap='bwr')
input_dim, output_dim = x.shape[1], y.shape[1]
# insert your network here!
y_pred = nn.predict(x)
print('Acurácia: {:.2f}%'.format(100*accuracy_score(y, y_pred > 0.5)))
plot.classification_predictions(x, y, is_binary=True, nn=nn, threshold=0.5, cmap='bwr')
x, y = make_spiral(n_samples=100, n_class=2, radius=5, laps=1.75)
y = y.reshape(-1, 1)
print(x.shape, y.shape)
plt.scatter(x[:,0], x[:,1], c=list(np.array(y).ravel()), s=15, cmap='bwr')
input_dim, output_dim = x.shape[1], y.shape[1]
# insert your network here!
y_pred = nn.predict(x)
print('Acurácia: {:.2f}%'.format(100*accuracy_score(y, y_pred > 0.5)))
plot.classification_predictions(x, y, is_binary=True, nn=nn, threshold=0.5, cmap='bwr')
x, y = make_blobs(n_samples=300, n_features=2, centers=[(0, -3), (-3, 3), (3, 3)], random_state=1234)
y = y.reshape(-1, 1)
print(x.shape, y.shape)
plt.scatter(x[:,0], x[:,1], c=list(np.array(y).ravel()), s=15, cmap=plt.cm.viridis)
onehot = OneHotEncoder(sparse=False)
y_onehot = onehot.fit_transform(y)
print(y_onehot[::60])
input_dim, output_dim = x.shape[1], y.shape[1]
# insert your network here!
y_pred = np.argmax(nn.predict(x), axis=1)
print('Acurácia: {:.2f}%'.format(100*accuracy_score(y, y_pred)))
plot.classification_predictions(x, y, is_binary=False, nn=nn)
x, y = make_blobs(n_samples=400, n_features=2, centers=[(-3, 0), (3, 0), (0, 3), (0, -3)], random_state=1234)
y = y.reshape(-1, 1)
print(x.shape, y.shape)
plt.scatter(x[:,0], x[:,1], c=list(np.array(y).ravel()), s=15, cmap=plt.cm.viridis)
onehot = OneHotEncoder(sparse=False)
y_onehot = onehot.fit_transform(y)
print(y_onehot[::70])
input_dim, output_dim = x.shape[1], y.shape[1]
# insert your network here!
y_pred = np.argmax(nn.predict(x), axis=1)
print('Acurácia: {:.2f}%'.format(100*accuracy_score(y, y_pred)))
plot.classification_predictions(x, y, is_binary=False, nn=nn)
x, y = make_spiral(n_samples=100, n_class=5, radius=1, laps=0.5)
y = y.reshape(-1, 1)
print(x.shape, y.shape)
plt.scatter(x[:,0], x[:,1], c=list(np.array(y).ravel()), s=15, cmap=plt.cm.viridis)
onehot = OneHotEncoder(sparse=False)
y_onehot = onehot.fit_transform(y)
print(y_onehot[::100])
input_dim, output_dim = x.shape[1], y.shape[1]
# insert your network here!
y_pred = np.argmax(nn.predict(x), axis=1)
print('Acurácia: {:.2f}%'.format(100*accuracy_score(y, y_pred)))
plot.classification_predictions(x, y, is_binary=False, nn=nn)
x, y = make_classification(n_samples=100, n_classes=4, n_features=2, n_clusters_per_class=1, n_redundant=0, n_repeated=0, random_state=1234)
y = y.reshape(-1, 1)
print(x.shape, y.shape)
plt.scatter(x[:,0], x[:,1], c=list(np.array(y).ravel()), s=15, cmap=plt.cm.viridis)
onehot = OneHotEncoder(sparse=False)
y_onehot = onehot.fit_transform(y)
print(y_onehot[::10])
input_dim, output_dim = x.shape[1], y.shape[1]
# insert your network here!
y_pred = np.argmax(nn.predict(x), axis=1)
print('Acurácia: {:.2f}%'.format(100*accuracy_score(y, y_pred)))
plot.classification_predictions(x, y, is_binary=False, nn=nn)
data = load_iris()
x, y = data.data[:, 2:], data.target.reshape(-1,1)
print(data.feature_names)
print(data.target_names)
print(x.shape, y.shape)
plt.scatter(x[:,0], x[:,1], c=list(np.array(y).ravel()), s=15, cmap=plt.cm.viridis)
onehot = OneHotEncoder(sparse=False)
y_onehot = onehot.fit_transform(y)
print(y_onehot[::20])
input_dim, output_dim = x.shape[1], y.shape[1]
# insert your network here!
y_pred = np.argmax(nn.predict(x), axis=1)
print('Acurácia: {:.2f}%'.format(100*accuracy_score(y, y_pred)))
plot.classification_predictions(x, y, is_binary=False, nn=nn)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's have a look at the csv files we created in the previous notebooks that we will use for training/eval.
Step2: Create TensorFlow model using TensorFlow's Estimator API
Step3: Exercise 1
Step4: Create the input function
Step5: Create the feature columns
Step6: Create the Serving Input function
Step7: Create the model and run training and evaluation
Step8: Finally, we train the model!
| <ASSISTANT_TASK:>
Python Code:
PROJECT = "cloud-training-demos" # Replace with your PROJECT
BUCKET = "cloud-training-bucket" # Replace with your BUCKET
REGION = "us-central1" # Choose an available region for Cloud MLE
TFVERSION = "1.14" # TF version for CMLE to use
import os
os.environ["BUCKET"] = BUCKET
os.environ["PROJECT"] = PROJECT
os.environ["REGION"] = REGION
os.environ["TFVERSION"] = TFVERSION
%%bash
if ! gsutil ls | grep -q gs://${BUCKET}/; then
gsutil mb -l ${REGION} gs://${BUCKET}
fi
%%bash
ls *.csv
import shutil
import numpy as np
import tensorflow as tf
print(tf.__version__)
# Determine CSV, label, and key columns
CSV_COLUMNS = # TODO: Your code goes here
LABEL_COLUMN = # TODO: Your code goes here
# Set default values for each CSV column
DEFAULTS = # TODO: Your code goes here
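# One possible completion (a sketch): this assumes the CSVs match the schema used by the
# Keras babyweight notebook earlier in this series -- adjust if your exported files differ.
CSV_COLUMNS = "weight_pounds,is_male,mother_age,plurality,gestation_weeks,key".split(',')
LABEL_COLUMN = "weight_pounds"
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0], ["nokey"]]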
TRAIN_STEPS = 1000
# Create an input function reading a file using the Dataset API
# Then provide the results to the Estimator API
def read_dataset(filename_pattern, mode, batch_size = 512):
def _input_fn():
def decode_csv(line_of_text):
columns = # TODO: Your code goes here
features = # TODO: Your code goes here
label = # TODO: Your code goes here
return features, label
# Create list of files that match pattern
file_list = # TODO: Your code goes here
# Create dataset from file list
dataset = # TODO: Your code goes here
# In training mode, shuffle the dataset and repeat indefinitely
# TODO: Your code goes here
# This will now return batches of features, label
dataset = # TODO: Your code goes here
return dataset
return _input_fn
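# One possible completion of the skeleton above (a sketch using the TF 1.x APIs that match
# TFVERSION = 1.14; the 'key' column is assumed present and dropped since it is not a feature):
def read_dataset(filename_pattern, mode, batch_size = 512):
    def _input_fn():
        def decode_csv(line_of_text):
            columns = tf.decode_csv(line_of_text, record_defaults = DEFAULTS)
            features = dict(zip(CSV_COLUMNS, columns))
            features.pop("key", None)  # the key is not a training feature
            label = features.pop(LABEL_COLUMN)
            return features, label

        # Create list of files that match pattern
        file_list = tf.gfile.Glob(filename = filename_pattern)

        # Create dataset from file list
        dataset = tf.data.TextLineDataset(filenames = file_list).map(map_func = decode_csv)

        # In training mode, shuffle the dataset and repeat indefinitely
        if mode == tf.estimator.ModeKeys.TRAIN:
            num_epochs = None
            dataset = dataset.shuffle(buffer_size = 10 * batch_size)
        else:
            num_epochs = 1

        # This will now return batches of features, label
        dataset = dataset.repeat(count = num_epochs).batch(batch_size = batch_size)
        return dataset
    return _input_fn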
def get_wide_deep():
# Define column types
fc_is_male,fc_plurality,fc_mother_age,fc_gestation_weeks = [# TODO: Your code goes here]
# Bucketized columns
fc_age_buckets = # TODO: Your code goes here
fc_gestation_buckets = # TODO: Your code goes here
# Sparse columns are wide, have a linear relationship with the output
wide = [# TODO: Your code goes here]
# Feature cross all the wide columns and embed into a lower dimension
fc_crossed = # TODO: Your code goes here
fc_embed = # TODO: Your code goes here
# Continuous columns are deep, have a complex relationship with the output
deep = [# TODO: Your code goes here]
return wide, deep
def serving_input_fn():
feature_placeholders = # TODO: Your code goes here
features = # TODO: Your code goes here
return tf.estimator.export.ServingInputReceiver(features = features, receiver_tensors = feature_placeholders)
def train_and_evaluate(output_dir):
EVAL_INTERVAL = 300
run_config = tf.estimator.RunConfig(
save_checkpoints_secs = EVAL_INTERVAL,
keep_checkpoint_max = 3)
estimator = # TODO: Your code goes here
train_spec = # TODO: Your code goes here
exporter = # TODO: Your code goes here
eval_spec = # TODO: Your code goes here
tf.estimator.train_and_evaluate(# TODO: Your code goes here)
# Run the model
shutil.rmtree(path = "babyweight_trained_wd", ignore_errors = True) # start fresh each time
train_and_evaluate("babyweight_trained_wd")
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Update
Step2: Union of Dec2016 and Jan2017 version overlapping UGC's list
Step3: Total Journals in Beall's list
Step4: Jan 2017 version
Step5: Union of Publisher List from Dec 2016 and Jan 2017
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd
from fuzzywuzzy import fuzz
import re
from fuzzywuzzy import process
pd.set_option('display.max_colwidth', -1)
pd.set_option('display.max_columns', None)
pd.set_option('display.max_rows', None)
def clean_name(x):
## Remove things in braces such as :
## Journal of Food, Agriculture and Environment (JFAE) => Journal of Food, Agriculture and Environment
cleaned = re.sub(r"\(([A-Za-z0-9_]+)\)", '', x).strip()
cleaned = re.sub( '\s+', ' ', cleaned ).strip()
cleaned = unicode(cleaned.lower().replace('&', 'and'), 'utf-8')
return cleaned
with open('Beall_list_dec2016.txt') as f:
beall_list_dec = [clean_name(x.strip()) for x in f.readlines()]
df = pd.read_csv('UGC_Journal_list_2017.csv')
df.Title = df.Title.apply(clean_name)
df.Publisher = df.Publisher.astype(str).apply(clean_name)
exact_matches_dec2016 = pd.DataFrame({'Journals': list(set(df.Title).intersection(beall_list_dec))})
#fuzzy_matches = [process.extractOne(x, beall_list_dec) for x in set(df.Title)]
exact_matches_dec2016.sort_values(by='Journals').reset_index(drop=True)
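# The commented-out fuzzy matching above can be sanity-checked on a single title first
# (a sketch; process.extractOne returns a (best_match, score) tuple):
print(process.extractOne(df.Title.iloc[0], beall_list_dec))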
with open('Beall_list_Jan2017.txt') as f:
beall_list_jan = [clean_name(x.strip()) for x in f.readlines()]
exact_matches_jan2017 = pd.DataFrame({'Journals': list(set(df.Title).intersection(beall_list_jan))})
exact_matches_jan2017.sort_values(by='Journals').reset_index(drop=True)
overlap_jan_dec = pd.concat([exact_matches_dec2016, exact_matches_jan2017]).drop_duplicates().sort_values(by='Journals').reset_index(drop=True)
overlap_jan_dec
with open('Beall_publisher_list_Dec2016.txt') as f:
beall_pubisher_list_dec = [clean_name(x.strip()) for x in f.readlines()]
publisher_matches_dec2016 = pd.DataFrame({'Publisher': list(set(df.Publisher).intersection(beall_pubisher_list_dec))})
publisher_matches_dec2016.sort_values(by='Publisher').reset_index(drop=True)
with open('Beall_publisher_list_Jan2017.txt') as f:
beall_pubisher_list_jan = [clean_name(x.strip()) for x in f.readlines()]
publisher_matches_jan2017 = pd.DataFrame({'Publisher': list(set(df.Publisher).intersection(beall_pubisher_list_jan))})
publisher_matches_jan2017.sort_values(by='Publisher').reset_index(drop=True)
overlap_pub_jan_dec = pd.concat([publisher_matches_dec2016, publisher_matches_jan2017]).drop_duplicates().sort_values(by='Publisher').reset_index(drop=True)
overlap_pub_jan_dec
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Deep Learning activation functions examined below
Step2: 1. ReLU
Step3: 2. Leaky ReLU
Step4: 3. sigmoid
Step5: 4. tanh
| <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
#Create array of possible z values
z = np.linspace(-5,5,num=1000)
def draw_activation_plot(a,quadrants=2,y_ticks=[0],two_quad_y_lim=[0,5], four_quad_y_lim=[-1,1]):
    """Draws plot of activation function

    Parameters
    ----------
    a : Output of activation function over domain z.
    quadrants: The number of quadrants in the plot (options: 2 or 4)
    y_ticks: Ticks to show on the y-axis.
    two_quad_y_lim: The limit of the y axis for 2 quadrant plots.
    four_quad_y_lim: The limit of the y axis for 4 quadrant plots.
    """
#Create figure and axis
fig = plt.figure()
ax = fig.add_subplot(1, 1, 1)
#Move left axis
ax.spines['left'].set_position('center')
#Remove top and right axes
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
#Set x and y labels
plt.xlabel('z')
plt.ylabel('a')
#Set ticks
plt.xticks([])
plt.yticks(y_ticks)
#Set ylim
plt.ylim(two_quad_y_lim)
#4 Quadrant conditions
if quadrants==4:
#Move up bottom axis
ax.spines['bottom'].set_position('center')
#Move x and y labels for readability
ax.yaxis.set_label_coords(.48,.75)
ax.xaxis.set_label_coords(.75,.48)
##Set y_lim for 4 quadrant graphs
plt.ylim(four_quad_y_lim)
#Plot z vs. activation function
plt.plot(z,a);
relu = np.maximum(z,0)
draw_activation_plot(relu)
leaky_ReLU = np.maximum(0.01*z,z)
draw_activation_plot(leaky_ReLU)
sigmoid = 1/(1+np.exp(-z))
draw_activation_plot(sigmoid,y_ticks=[0,1], two_quad_y_lim=[0,1])
tanh = (np.exp(z)-np.exp(-z))/(np.exp(z)+np.exp(-z))
draw_activation_plot(tanh,y_ticks=[-1,0,1],quadrants=4)
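# Small numerical checks (added illustration, not in the original notebook):
# the exponential form above should agree with np.tanh, and tanh relates to
# the sigmoid through the identity tanh(z) = 2*sigmoid(2z) - 1.
print(np.allclose(tanh, np.tanh(z)))                    # expected: True
print(np.allclose(tanh, 2 / (1 + np.exp(-2 * z)) - 1))  # expected: True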
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Processing sample
Step2: Plotting
Step3: In the above figures, the first image shows the observed degree distribution of the random graph $G(n, p)$ together with the binomial pmf with parameters $n$ and $p$. The second image shows the observed degree distribution of the random graph $G(n, m)$ with $m = \frac{1}{2}pn(n-1)$, again together with the binomial pmf with parameters $n$ and $p$.
Step4: In the above graph, the $x$-axis represents the number of nodes in the graph and the $y$-axis is the probability $p$. We plot different lines as a function of $n$ and $p$. More precisely, we plot 7 lines with $n\cdot p \in \{.7, .8, .9, 1.0, 1.3, 1.7, 2.0\}$. We can see that when $n\cdot p < 1$, the size of the largest component is below 20 (with a few exceptions). This is consistent with the result that if $np < 1$ then the size of the largest component belongs to $O(\log n)$.
Step5: The above plot represents the number of isolated vertices as a function of $n$ and $p$. We also see 7 lines representing equations $n\cdot p = (1+\epsilon)\log(n)$ for $\epsilon \in [-0.2, 0.15]$.
| <ASSISTANT_TASK:>
Python Code:
import os, sys, math
import collections as collcs
# append the path so that modules can be properly imported
sys.path.append('../src/')
import graph
import algorithms.erdos_renyi as er
import algorithms.newman_model as nm
reload(graph)
reload(er)
reload(nm)
# generate a random graph having 1000 vertices
n, p, seed = 1000, 0.4, 12345
G = er.er_np(n, p, seed=seed)
m = (n*(n-1)/2.0)*p
m = int(math.floor(m))
G1 = er.er_nm(n, m)
g_degree_sequence = G.degree_sequence()
g1_degree_sequence = G1.degree_sequence()
num_counter, num_counterG1 = {}, {}
for i in range(n):
num_counter[i] = num_counterG1[i] = 0
for d1, d2 in zip(g_degree_sequence, g1_degree_sequence):
num_counter[d1] += 1
num_counterG1[d2] += 1
import scipy.stats as stat
import numpy as np
import matplotlib.pyplot as plt
def plot_test(n, p, degree_sequence, counter, title="Plot"):
# plot setup
plt.figure(num=None, figsize=(15, 6), dpi=80, facecolor='white', edgecolor='green')
# Binomial pmf with parameters n-1, p
x_axis = np.linspace(0, n-1, n)
y_axis = np.array([stat.binom.pmf(i, n-1, p) for i in range(n)]) #stat.binom.pmf(x_axis, n-1, p)
# display degree distribution and binomial pmf
#print("deg seq")
#print(degree_sequence)
my_counter, bins, patches = plt.hist(degree_sequence, x_axis, normed=1.0, edgecolor='k', linewidth=0.1)
plt.title(title)
plt.plot(x_axis, y_axis, 'r--', linewidth=1)
plt.grid(True)
plt.axis([0, n-1, 0, 0.05])
plt.show()
# G(n, p) model
plot_test(n, p, g_degree_sequence, num_counter,
title="Observed degree distribution and binomial distribution: n=%d, p=%f" % (n, p))
print("******************Second test*******************")
num_counterG1[0]=0
plot_test(n, p, g1_degree_sequence, num_counterG1, title="Plot1")
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import numpy as np
import matplotlib.pyplot as plt
def component_experiment((n_l, n_r), (p_l, p_r), seeds = None, offset = 3):
# n_l, n_r bounds for n
# p_l, p_r bounds for probability
# s sample size
# generate s graphs from G(n,p)
s = n_r - n_l # sample size
if seeds is None:
seeds = np.random.choice(2000000, s*s).reshape((s, s))
# compute
step_p = float(p_r - p_l) / float(s)
N = np.arange(n_l, n_r, step=1)
P = np.arange(p_l, p_r, step=step_p)
N, P = np.meshgrid(N, P)
CC = np.zeros((s, s))
# r_graphs = []
for i in range(s):
# r_graphs.append([])
for j in range(s):
G = er.er_np(int(N[i, j]), P[i, j], seed=seeds[i, j])
# r_graphs[-1].append(G)
cc_sizes = G.connected_component()[1]
# get the largest
largest_comp_size = 0 if len(cc_sizes) == 0 else max(cc_sizes)
# print("Largest comp of graph (%d, %f) is %d" % (int(N[i, j]), P[i, j], largest_comp_size))
CC[i, j] = largest_comp_size
plot3D(N, P, CC)
def plot3D(X, Y, Z):
# plt.figure(num=None, figsize=(15, 6), dpi=80, facecolor='w', edgecolor='k')
fig = plt.figure(figsize=(15, 8))
mapa = plt.pcolor(X, Y, Z, cmap=cm.coolwarm, antialiased=True)
fig.colorbar(mapa, shrink=0.5, aspect=5)
# plot a few lines
tX = X[1]
n_times_p = [0.7, 0.8, 0.9, 1.0, 1.3, 1.7, 2.0]
lines = []
for prod in n_times_p:
line, = plt.plot(tX, prod / tX, label="n*p = %f"%prod)
lines.append(line)
plt.legend(lines)
plt.show()
component_experiment((100, 200), (0.003, 0.04))
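# --- Added spot-check (illustrative; reuses the local `er` module exactly as above) ---
# For n*p < 1 the largest connected component should stay on the order of
# log(n); printing both side by side for a few n makes the claim concrete.
for n_nodes in (200, 400, 800):
    p_check = 0.5 / n_nodes                     # keeps n*p = 0.5 < 1
    G_check = er.er_np(n_nodes, p_check)
    sizes = G_check.connected_component()[1]
    largest = max(sizes) if len(sizes) > 0 else 0
    print("n=%4d, np=%.2f: largest component=%3d, log(n)=%.1f"
          % (n_nodes, n_nodes * p_check, largest, np.log(n_nodes)))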
# number of isolated vertices when np in [(1-e)logn, (1+e)logn] for n in [100, 200] and p in [0.02, 0.5]
def isolated_vertices_experiment((n_l, n_r), (p_l, p_r), epsilon):
p_size = n_size = int(n_r - n_l)
N, P = np.linspace(n_l, n_r, n_size), np.linspace(p_l, p_r, p_size)
N, P = np.meshgrid(N, P)
CC = np.zeros((n_size, p_size))
for i in range(n_size):
for j in range(p_size):
G = er.er_np(int(N[i, j]), P[i, j])
isolated_vertices = G.find_isolated_vertices()
CC[i, j] = len(isolated_vertices)
fig = plt.figure(figsize=(15, 8))
mapa = plt.pcolor(N, P, CC, cmap=cm.coolwarm, antialiased=True)
fig.colorbar(mapa, shrink=0.5, aspect=5)
X = N[0]
lines = []
for prod in epsilon:
line, = plt.plot(X, ((1+prod)*np.log(X))/X, label="n*p = %flog(n)"%(1+prod))
lines.append(line)
plt.legend(lines)
plt.xlabel("n")
plt.ylabel("p")
plt.show()
n_l, n_r = 1000, 1010
p_l, p_r = 0.0001, 0.03
epsilon = np.concatenate((np.linspace(-0.2, 0.0, 4), np.linspace(0.01, 0.15, 3)))
# run the experiment
isolated_vertices_experiment((n_l, n_r), (p_l, p_r), epsilon)
from matplotlib import cm
import numpy as np
import matplotlib.pyplot as plt
import algorithms.newman_model as nm
reload(nm)
def ratio_experiment((n_l, n_r), (k_l, k_r)):
size = n_r - n_l
k_size = k_r - k_l
N = np.linspace(n_l, n_r, size, dtype=int)
K = np.linspace(k_l, k_r, k_size, dtype=int)
NN, KK = np.meshgrid(N, K)
ratio = [] #np.zeros((size, k_size))
for n in range(size):
current_sum = 0.0
for k in range(k_size):
degree_sequence = nm.degree_sequence_regular(int(NN[n, k]), int(KK[n,k]))
edges = nm.configure_sequence(degree_sequence)
tmp_ratio = nm.irregular_edge_count(edges)
current_sum += tmp_ratio
ratio.append(current_sum / float(k_size))
fig = plt.figure(figsize=(15,8))
plt.xlabel("n")
plt.ylabel("ratio")
line, = plt.plot(N, np.array(ratio), label="ratio(n)")
plt.legend([line])
#mapa = plt.pcolor(NN, KK, ratio, cmap=cm.coolwarm, antialiased=False)
#fig.colorbar(mapa, shrink=0.5, aspect=5)
plt.show()
ratio_experiment((60, 90), (50, 80))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Initialization of setup
Step2: 2. Initial condition
Step3: 3. Solution for the homogeneous problem
Step4: 4. Finite Volumes solution
| <ASSISTANT_TASK:>
Python Code:
# Import all necessary libraries, this is a configuration step for the exercise.
# Please run it before the simulation code!
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
# Show the plots in the Notebook.
plt.switch_backend("nbagg")
# Initialization of setup
# --------------------------------------------------------------------------
nx = 800 # number of grid points
c = 2500 # acoustic velocity in m/s
ro = 2500 # density in kg/m^3
Z = ro*c # impedance
mu = ro*c**2 # shear modulus
xmax = 10000 # Length in m
eps = 0.5 # CFL
tmax = 2.0 # simulation time in s
isnap = 10 # plotting rate
sig = 200 # argument in the inital condition
x0 = 5000 # position of the initial condition
imethod = 'upwind' # 'Lax-Wendroff', 'upwind'
# Initialize Space
x, dx = np.linspace(0,xmax,nx,retstep=True)
# use wave based CFL criterion
dt = eps*dx/c # calculate time step from stability criterion
# Simulation time
nt = int(np.floor(tmax/dt))
# Initialize wave fields
Q = np.zeros((2,nx))
Qnew = np.zeros((2,nx))
Qa = np.zeros((2,nx))
# Initial condition
# --------------------------------------------------------------------------
sx = np.exp(-1./sig**2 * (x-x0)**2)
Q[0,:] = sx
# ---------------------------------------------------------------
# Plot initial condition
# ---------------------------------------------------------------
plt.plot(x, sx, color='r', lw=2, label='Initial condition')
plt.ylabel('Amplitude', size=16)
plt.xlabel('x', size=16)
plt.legend()
plt.grid(True)
plt.show()
# Initialize all matrices
# --------------------------------------------------------------------------
R = np.array([[Z, -Z],[1, 1]]) # Eq. 8.43
Rinv = np.linalg.inv(R) # Eq. 8.43
Lp = np.array([[0, 0], [0, c]]) # Eq. 8.49
Lm = np.array([[-c, 0], [0, 0]]) # Eq. 8.49
Ap = R @ Lp @ Rinv # Eq. 8.50
Am = R @ Lm @ Rinv # Eq. 8.50
A = np.array([[0, -mu], [-1/ro, 0]]) # Eq. 8.35
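# Added consistency check (illustrative, not part of the original exercise):
# the characteristic splitting should recover the full Jacobian, A = A+ + A-,
# and the eigenvalues of A are the acoustic speeds -c and +c (Eq. 8.49/8.50).
print(np.allclose(Ap + Am, A))        # expected: True
print(np.sort(np.linalg.eigvals(A)))  # expected: approximately [-2500., 2500.]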
# Initialize animated plot
# ---------------------------------------------------------------
fig = plt.figure(figsize=(10,6))
ax1 = fig.add_subplot(2,1,1)
ax2 = fig.add_subplot(2,1,2)
line1 = ax1.plot(x, Q[0,:], 'k', x, Qa[0,:], 'r--')
line2 = ax2.plot(x, Q[1,:], 'k', x, Qa[1,:], 'r--')
ax1.set_ylabel('Stress')
ax2.set_ylabel('Velocity')
ax2.set_xlabel(' x ')
plt.suptitle('Homogeneous F. volume - %s method'%imethod, size=16)
plt.ion()    # set interactive mode
plt.show()
# ---------------------------------------------------------------
# Time extrapolation
# ---------------------------------------------------------------
for i in range(nt):
if imethod =='Lax-Wendroff':
for j in range(1,nx-1):
dQ1 = Q[:,j+1] - Q[:,j-1]
dQ2 = Q[:,j-1] - 2*Q[:,j] + Q[:,j+1]
Qnew[:,j] = Q[:,j] - 0.5*dt/dx*(A @ dQ1)\
+ 1./2.*(dt/dx)**2 * (A @ A) @ dQ2 # Eq. 8.56
# Absorbing boundary conditions
Qnew[:,0] = Qnew[:,1]
Qnew[:,nx-1] = Qnew[:,nx-2]
elif imethod == 'upwind':
for j in range(1,nx-1):
dQl = Q[:,j] - Q[:,j-1]
dQr = Q[:,j+1] - Q[:,j]
Qnew[:,j] = Q[:,j] - dt/dx * (Ap @ dQl + Am @ dQr) # Eq. 8.54
# Absorbing boundary conditions
Qnew[:,0] = Qnew[:,1]
Qnew[:,nx-1] = Qnew[:,nx-2]
else:
raise NotImplementedError
Q, Qnew = Qnew, Q
# --------------------------------------
# Animation plot. Display solution
if not i % isnap:
for l in line1:
l.remove()
del l
for l in line2:
l.remove()
del l
# --------------------------------------
# Analytical solution (stress i.c.)
Qa[0,:] = 1./2.*(np.exp(-1./sig**2 * (x-x0 + c*i*dt)**2)\
+ np.exp(-1./sig**2 * (x-x0-c*i*dt)**2))
Qa[1,:] = 1/(2*Z)*(np.exp(-1./sig**2 * (x-x0+c*i*dt)**2)\
- np.exp(-1./sig**2 * (x-x0-c*i*dt)**2))
# --------------------------------------
# Display lines
line1 = ax1.plot(x, Q[0,:], 'k', x, Qa[0,:], 'r--', lw=1.5)
line2 = ax2.plot(x, Q[1,:], 'k', x, Qa[1,:], 'r--', lw=1.5)
plt.legend(iter(line2), ('F. Volume', 'Analytic'))
plt.gcf().canvas.draw()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Verify CSV files exist
Step2: Create Keras model
Step5: Make dataset of features and label from CSV files.
Step7: Create input layers for raw features.
Step10: Create feature columns for inputs.
Step12: Create wide and deep model and output layer.
Step14: Create custom evaluation metric.
Step16: Build wide and deep model tying all of the pieces together.
Step17: We can visualize the wide and deep network using the Keras plot_model utility.
Step18: Run and evaluate model
Step19: Visualize loss curve
Step20: Save the model
| <ASSISTANT_TASK:>
Python Code:
import datetime
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
print(tf.__version__)
%%bash
ls *.csv
%%bash
head -5 *.csv
# Determine CSV, label, and key columns
CSV_COLUMNS = ["weight_pounds",
"is_male",
"mother_age",
"plurality",
"gestation_weeks"]
LABEL_COLUMN = "weight_pounds"
# Set default values for each CSV column.
# Treat is_male and plurality as strings.
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0]]
def features_and_labels(row_data):
    """Splits features and labels from feature dictionary.

    Args:
        row_data: Dictionary of CSV column names and tensor values.
    Returns:
        Dictionary of feature tensors and label tensor.
    """
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
    """Loads dataset using the tf.data API from CSV files.

    Args:
        pattern: str, file pattern to glob into list of files.
        batch_size: int, the number of examples per batch.
        mode: tf.estimator.ModeKeys to determine if training or evaluating.
    Returns:
        `Dataset` object.
    """
# Make a CSV dataset
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS)
# Map dataset to features and label
dataset = dataset.map(map_func=features_and_labels) # features, label
# Shuffle and repeat for training
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(buffer_size=1000).repeat()
# Take advantage of multi-threading; 1=AUTOTUNE
dataset = dataset.prefetch(buffer_size=1)
return dataset
def create_input_layers():
    """Creates dictionary of input layers for each feature.

    Returns:
        Dictionary of `tf.Keras.layers.Input` layers for each feature.
    """
deep_inputs = {
colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="float32")
for colname in ["mother_age", "gestation_weeks"]
}
wide_inputs = {
colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="string")
for colname in ["is_male", "plurality"]
}
inputs = {**wide_inputs, **deep_inputs}
return inputs
def categorical_fc(name, values):
    """Helper function to wrap categorical feature by indicator column.

    Args:
        name: str, name of feature.
        values: list, list of strings of categorical values.
    Returns:
        Categorical and indicator column of categorical feature.
    """
cat_column = tf.feature_column.categorical_column_with_vocabulary_list(
key=name, vocabulary_list=values)
ind_column = tf.feature_column.indicator_column(
categorical_column=cat_column)
return cat_column, ind_column
def create_feature_columns(nembeds):
    """Creates wide and deep dictionaries of feature columns from inputs.

    Args:
        nembeds: int, number of dimensions to embed categorical column down to.
    Returns:
        Wide and deep dictionaries of feature columns.
    """
deep_fc = {
colname: tf.feature_column.numeric_column(key=colname)
for colname in ["mother_age", "gestation_weeks"]
}
wide_fc = {}
is_male, wide_fc["is_male"] = categorical_fc(
"is_male", ["True", "False", "Unknown"])
plurality, wide_fc["plurality"] = categorical_fc(
"plurality", ["Single(1)", "Twins(2)", "Triplets(3)",
"Quadruplets(4)", "Quintuplets(5)", "Multiple(2+)"])
# Bucketize the float fields. This makes them wide
age_buckets = tf.feature_column.bucketized_column(
source_column=deep_fc["mother_age"],
boundaries=np.arange(15, 45, 1).tolist())
wide_fc["age_buckets"] = tf.feature_column.indicator_column(
categorical_column=age_buckets)
gestation_buckets = tf.feature_column.bucketized_column(
source_column=deep_fc["gestation_weeks"],
boundaries=np.arange(17, 47, 1).tolist())
wide_fc["gestation_buckets"] = tf.feature_column.indicator_column(
categorical_column=gestation_buckets)
# Cross all the wide columns, have to do the crossing before we one-hot
crossed = tf.feature_column.crossed_column(
keys=[age_buckets, gestation_buckets],
hash_bucket_size=1000)
deep_fc["crossed_embeds"] = tf.feature_column.embedding_column(
categorical_column=crossed, dimension=nembeds)
return wide_fc, deep_fc
def get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units):
    """Creates model architecture and returns outputs.

    Args:
        wide_inputs: Dense tensor used as inputs to wide side of model.
        deep_inputs: Dense tensor used as inputs to deep side of model.
        dnn_hidden_units: List of integers where length is number of hidden
            layers and ith element is the number of neurons at ith layer.
    Returns:
        Dense tensor output from the model.
    """
# Hidden layers for the deep side
layers = [int(x) for x in dnn_hidden_units]
deep = deep_inputs
for layerno, numnodes in enumerate(layers):
deep = tf.keras.layers.Dense(
units=numnodes,
activation="relu",
name="dnn_{}".format(layerno+1))(deep)
deep_out = deep
# Linear model for the wide side
wide_out = tf.keras.layers.Dense(
units=10, activation="relu", name="linear")(wide_inputs)
# Concatenate the two sides
both = tf.keras.layers.concatenate(
inputs=[deep_out, wide_out], name="both")
# Final output is a linear activation because this is regression
output = tf.keras.layers.Dense(
units=1, activation="linear", name="weight")(both)
return output
def rmse(y_true, y_pred):
    """Calculates RMSE evaluation metric.

    Args:
        y_true: tensor, true labels.
        y_pred: tensor, predicted labels.
    Returns:
        Tensor with value of RMSE between true and predicted labels.
    """
return tf.sqrt(tf.reduce_mean((y_pred - y_true) ** 2))
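# Quick illustrative check of the metric (not in the original lab):
# RMSE between [1., 2.] and [3., 4.] is sqrt(((3-1)**2 + (4-2)**2) / 2) = 2.0.
print(rmse(tf.constant([1.0, 2.0]), tf.constant([3.0, 4.0])).numpy())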
def build_wide_deep_model(dnn_hidden_units=[64, 32], nembeds=3):
    """Builds wide and deep model using Keras Functional API.

    Returns:
        `tf.keras.models.Model` object.
    """
# Create input layers
inputs = create_input_layers()
# Create feature columns for both wide and deep
wide_fc, deep_fc = create_feature_columns(nembeds)
# The constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires: LayerConstructor()(inputs)
wide_inputs = tf.keras.layers.DenseFeatures(
feature_columns=wide_fc.values(), name="wide_inputs")(inputs)
deep_inputs = tf.keras.layers.DenseFeatures(
feature_columns=deep_fc.values(), name="deep_inputs")(inputs)
# Get output of model given inputs
output = get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units)
# Build model and compile it all together
model = tf.keras.models.Model(inputs=inputs, outputs=output)
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
return model
print("Here is our wide and deep architecture so far:\n")
model = build_wide_deep_model()
print(model.summary())
tf.keras.utils.plot_model(
model=model, to_file="wd_model.png", show_shapes=False, rankdir="LR")
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, it'll wrap around
NUM_EVALS = 5 # how many times to evaluate
# Enough to get a reasonable sample, but not so much that it slows down
NUM_EVAL_EXAMPLES = 10000
trainds = load_dataset(
pattern="train*",
batch_size=TRAIN_BATCH_SIZE,
mode=tf.estimator.ModeKeys.TRAIN)
evalds = load_dataset(
pattern="eval*",
batch_size=1000,
mode=tf.estimator.ModeKeys.EVAL).take(count=NUM_EVAL_EXAMPLES // 1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
logdir = os.path.join(
"logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=logdir, histogram_freq=1)
history = model.fit(
trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch,
callbacks=[tensorboard_callback])
# Plot
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(["loss", "rmse"]):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history["val_{}".format(key)])
plt.title("model {}".format(key))
plt.ylabel(key)
plt.xlabel("epoch")
plt.legend(["train", "validation"], loc="upper left");
OUTPUT_DIR = "babyweight_trained_wd"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(
OUTPUT_DIR, datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save(
obj=model, export_dir=EXPORT_PATH) # with default serving function
print("Exported trained model to {}".format(EXPORT_PATH))
!ls $EXPORT_PATH
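# --- Hedged usage sketch (added; not part of the original lab) ---
# One quick way to exercise the trained in-memory model is predict() with a
# dict keyed by the input names from create_input_layers(); the feature values
# below are made-up illustrative numbers, not real records. The exported
# SavedModel in EXPORT_PATH can similarly be reloaded with tf.saved_model.load,
# although the exact serving signature may vary by TensorFlow version.
example = {
    "is_male": np.array(["True"]),
    "mother_age": np.array([26.0], dtype=np.float32),
    "plurality": np.array(["Single(1)"]),
    "gestation_weeks": np.array([39.0], dtype=np.float32),
}
print(model.predict(example))  # predicted weight in pounds for the fake example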
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read the data and get a row count. Data source
Step2: SymPy
Step3: This example was gleaned from
Step4: What is the probability that the temperature is actually greater than 33 degrees?
Step5: Assume we now have a thermometer and can measure the temperature. However, there is still uncertainty involved.
Step6: We now have two measurements -- 30 +- 3 degrees and 26 +- 1.5 degrees. How do we combine them? 30 +- 3 was our prior measurement. We want to calculate a better estimate of the temperature (posterior) given an observation of 26 degrees.
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import sys
print('Python version ' + sys.version)
print('Pandas version ' + pd.__version__)
print('Numpy version ' + np.__version__)
file_path = r'data\T100_2015.csv.gz'
df = pd.read_csv(file_path, header=0)
df.count()
df.head(n=10)
df = pd.read_csv(file_path, header=0, usecols=["PASSENGERS", "ORIGIN", "DEST"])
df.head(n=10)
print('Min: ', df['PASSENGERS'].min())
print('Max: ', df['PASSENGERS'].max())
print('Mean: ', df['PASSENGERS'].mean())
df = df.query('PASSENGERS > 10000')
print('Min: ', df['PASSENGERS'].min())
print('Max: ', df['PASSENGERS'].max())
print('Mean: ', df['PASSENGERS'].mean())
OriginToDestination = df.groupby(['ORIGIN', 'DEST'], as_index=False).agg({'PASSENGERS':sum,})
OriginToDestination.head(n=10)
OriginToDestination = pd.pivot_table(OriginToDestination, values='PASSENGERS', index=['ORIGIN'], columns=['DEST'], aggfunc=np.sum)
OriginToDestination.head()
OriginToDestination.fillna(0)
import sympy
from sympy import *
from sympy.stats import *
from sympy import symbols
from sympy.plotting import plot
from sympy.interactive import printing
printing.init_printing(use_latex=True)
print('Sympy version ' + sympy.__version__)
T = Normal('T', 30, 3)
P(T > 33)
N(P(T > 33))
noise = Normal('noise', 0, 1.5)
observation = T + noise
T_posterior = given(T, Eq(observation, 26))
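# --- Added worked example (illustrative, not in the original notebook) ---
# For a Gaussian prior N(30, 3) and Gaussian measurement noise N(0, 1.5), the
# posterior after observing 26 is again Gaussian: precisions add, and the mean
# is the precision-weighted average of the prior mean and the observation.
prior_mu, prior_sd, obs_value, obs_sd = 30.0, 3.0, 26.0, 1.5
post_var = 1.0 / (1.0 / prior_sd**2 + 1.0 / obs_sd**2)
post_mu = post_var * (prior_mu / prior_sd**2 + obs_value / obs_sd**2)
print(post_mu, post_var**0.5)   # approximately 26.8 and 1.34
# If the symbolic conditioning above is supported by your SymPy version, this
# should agree with e.g. N(P(T_posterior > 33)):
# print(N(P(T_posterior > 33)))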
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This is an extra layer of convenience compared to PyMC. Any variables created within a given Model's context will be automatically assigned to that model. If you try to define a variable outside of the context of a model, you will get an error.
Step2: We can examine the same variables outside of the model context once they have been defined, but to define more variables that the model will recognize they have to be within the context.
Step3: Each variable assigned to a model will be defined with its own name, the first string parameter (we will cover this further in the variables section). To create a different model object with the same name as one we have used previously, we need only run the first block of code again.
Step4: We can also define an entirely separate model. Note that we are free to name our models whatever we like, so if we do not want to overwrite an old model we need only make another.
Step5: You probably noticed that PyMC3 will often give you notifications about transformations when you add variables to your model. These transformations are done internally by PyMC3 to modify the space that the variable is sampled in (when we get to actually sampling the model). This is an internal feature which helps with the convergence of our samples to the posterior distribution and serves to improve the results.
Step6: The test_value is used only for the model, as the starting point for sampling if no other start is specified. It will not change as a result of sampling. This initial state can be changed at variable creation by specifying a value for the testval parameter.
Step7: This can be helpful if you are using a more unstable prior that may require a better starting point.
Step8: If we want a deterministic variable to actually be tracked by our sampling, however, we need to define it explicitly as a named deterministic variable with the constructor.
Step9: Clearly, if $\tau, \lambda_1$ and $\lambda_2$ are known, then $\lambda$ is known completely, hence it is a deterministic variable. We use the switch function here to change from $\lambda_1$ to $\lambda_2$ at the appropriate time. This function is directly from the theano package, which we will discuss in the next section.
Step10: Here we use theano's stack() function in the same way we would use one of NumPy's stacking functions
Step11: To frame this in the notation of the first chapter, though this is a slight abuse of notation, we have specified $P(A)$. Our next goal is to include data/evidence/observations $X$ into our model.
Step12: This is how we include data into our models
Step13: Modeling approaches
Step14: 2. Draw $\lambda_1$ and $\lambda_2$ from an $\text{Exp}(\alpha)$ distribution
Step15: 3. For days before $\tau$, represent the user's received SMS count by sampling from $\text{Poi}(\lambda_1)$, and sample from $\text{Poi}(\lambda_2)$ for days after $\tau$. For example
Step16: 4. Plot the artificial dataset
Step17: It is okay that our fictional dataset does not look like our observed dataset
Step18: Later we will see how we use this to make predictions and test the appropriateness of our models.
Step19: Had we had stronger beliefs, we could have expressed them in the prior above.
Step20: The observed frequency is
Step21: We combine the observations into the PyMC3 observed variable, and run our inference algorithm
Step22: We plot the posterior distribution of the unknown $p_A$ below
Step23: Our posterior distribution puts most weight near the true value of $p_A$, but also some weight in the tails. This is a measure of how uncertain we should be, given our observations. Try changing the number of observations, N, and observe how the posterior distribution changes.
Step24: Below we plot the posterior distributions for the three unknowns
Step25: Notice that as a result of N_B < N_A, i.e. we have less data from site B, our posterior distribution of $p_B$ is fatter, implying we are less certain about the true value of $p_B$ than we are of $p_A$.
Step26: If this probability is too high for comfortable decision-making, we can perform more trials on site B (as site B has less samples to begin with, each additional data point for site B contributes more inferential "power" than each additional data point for site A).
Step27: The special case when $N = 1$ corresponds to the Bernoulli distribution. There is another connection between Bernoulli and Binomial random variables. If we have $X_1, X_2, ... , X_N$ Bernoulli random variables with the same $p$, then $Z = X_1 + X_2 + ... + X_N \sim \text{Binomial}(N, p )$.
Step28: Again, thinking of our data-generation model, we assign Bernoulli random variables to the 100 students
Step29: If we carry out the algorithm, the next step that occurs is the first coin-flip each student makes. This can be modeled again by sampling 100 Bernoulli random variables with $p=1/2$
Step30: Although not everyone flips a second time, we can still model the possible realization of second coin-flips
Step31: Using these variables, we can return a possible realization of the observed proportion of "Yes" responses. We do this using a PyMC3 deterministic variable
Step32: The line fc*t_a + (1-fc)*sc contains the heart of the Privacy algorithm. Elements in this array are 1 if and only if i) the first toss is heads and the student cheated or ii) the first toss is tails, and the second is heads, and are 0 else. Finally, the last line sums this vector and divides by float(N), producing a proportion.
Step33: Next we need a dataset. After performing our coin-flipped interviews the researchers received 35 "Yes" responses. To put this into a relative perspective, if there truly were no cheaters, we should expect to see on average 1/4 of all responses being a "Yes" (half chance of having first coin land Tails, and another half chance of having second coin land Heads), so about 25 responses in a cheat-free world. On the other hand, if all students cheated, we should expect to see approximately 3/4 of all responses be "Yes".
Step34: Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.
Step35: With regards to the above plot, we are still pretty uncertain about what the true frequency of cheaters might be, but we have narrowed it down to a range between 0.05 to 0.35 (marked by the solid lines). This is pretty good, as a priori we had no idea how many students might have cheated (hence the uniform distribution for our prior). On the other hand, it is also pretty bad since there is a .3 length window the true value most likely lives in. Have we even gained anything, or are we still too uncertain about the true frequency?
Step36: I could have typed p_skewed = 0.5*p + 0.25 instead for a one-liner, as the elementary operations of addition and scalar multiplication will implicitly create a deterministic variable, but I wanted to make the deterministic boilerplate explicit for clarity's sake.
Step37: Below we add all the variables of interest to a Model container and run our black-box algorithm over the model.
Step38: More PyMC3 Tricks
Step39: The remainder of this chapter examines some practical examples of PyMC3 and PyMC3 modeling
Step40: It looks clear that the probability of damage incidents occurring increases as the outside temperature decreases. We are interested in modeling the probability here because it does not look like there is a strict cutoff point between temperature and a damage incident occurring. The best we can do is ask "At temperature $t$, what is the probability of a damage incident?". The goal of this example is to answer that question.
Step41: But something is missing. In the plot of the logistic function, the probability changes only near zero, but in our data above the probability changes around 65 to 70. We need to add a bias term to our logistic function
Step42: Adding a constant term $\alpha$ amounts to shifting the curve left or right (hence why it is called a bias).
Step43: A Normal random variable can take on any real number, but the variable is very likely to be relatively close to $\mu$. In fact, the expected value of a Normal is equal to its $\mu$ parameter
Step44: We have our probabilities, but how do we connect them to our observed data? A Bernoulli random variable with parameter $p$, denoted $\text{Ber}(p)$, is a random variable that takes value 1 with probability $p$, and 0 else. Thus, our model can look like
Step45: We have trained our model on the observed data, now we can sample values from the posterior. Let's look at the posterior distributions for $\alpha$ and $\beta$
Step46: All samples of $\beta$ are greater than 0. If instead the posterior was centered around 0, we may suspect that $\beta = 0$, implying that temperature has no effect on the probability of defect.
Step47: Above we also plotted two possible realizations of what the actual underlying system might be. Both are as likely as any other draw. The blue line is what occurs when we average all the 20000 possible dotted lines together.
Step48: The 95% credible interval, or 95% CI, painted in purple, represents the interval, for each temperature, that contains 95% of the distribution. For example, at 65 degrees, we can be 95% sure that the probability of defect lies between 0.25 and 0.75.
Step49: Is our model appropriate?
Step50: Note that the above plots are different (if you can think of a cleaner way to present this, please send a pull request and answer here!).
Step51: Next we sort each column by the posterior probabilities
Step52: We can present the above data better in a figure
Step53: The snaking-line is the sorted probabilities, blue bars denote defects, and empty space (or grey bars for the optimistic readers) denote non-defects. As the probability rises, we see more and more defects occur. On the right hand side, the plot suggests that as the posterior probability is large (line close to 1), then more defects are realized. This is good behaviour. Ideally, all the blue bars should be close to the right-hand side, and deviations from this reflect missed predictions.
Step54: In the random model, we can see that as the probability increases there is no clustering of defects to the right-hand side. Similarly for the constant model.
Step55: References
| <ASSISTANT_TASK:>
Python Code:
import pymc3 as pm
with pm.Model() as model:
parameter = pm.Exponential("poisson_param", 1.0)
data_generator = pm.Poisson("data_generator", parameter)
with model:
data_plus_one = data_generator + 1
parameter.tag.test_value
with pm.Model() as model:
theta = pm.Exponential("theta", 2.0)
data_generator = pm.Poisson("data_generator", theta)
with pm.Model() as ab_testing:
p_A = pm.Uniform("P(A)", 0, 1)
p_B = pm.Uniform("P(B)", 0, 1)
print("parameter.tag.test_value =", parameter.tag.test_value)
print("data_generator.tag.test_value =", data_generator.tag.test_value)
print("data_plus_one.tag.test_value =", data_plus_one.tag.test_value)
with pm.Model() as model:
parameter = pm.Exponential("poisson_param", 1.0, testval=0.5)
print("\nparameter.tag.test_value =", parameter.tag.test_value)
with pm.Model() as model:
lambda_1 = pm.Exponential("lambda_1", 1.0)
lambda_2 = pm.Exponential("lambda_2", 1.0)
tau = pm.DiscreteUniform("tau", lower=0, upper=10)
new_deterministic_variable = lambda_1 + lambda_2
import numpy as np
n_data_points = 5 # in CH1 we had ~70 data points
idx = np.arange(n_data_points)
with model:
lambda_ = pm.math.switch(tau >= idx, lambda_1, lambda_2)
import theano.tensor as tt
with pm.Model() as theano_test:
p1 = pm.Uniform("p", 0, 1)
p2 = 1 - p1
p = tt.stack([p1, p2])
assignment = pm.Categorical("assignment", p)
%matplotlib inline
from IPython.core.pylabtools import figsize
import matplotlib.pyplot as plt
import scipy.stats as stats
figsize(12.5, 4)
samples = lambda_1.random(size=20000)
plt.hist(samples, bins=70, normed=True, histtype="stepfilled")
plt.title("Prior distribution for $\lambda_1$")
plt.xlim(0, 8);
data = np.array([10, 5])
with model:
fixed_variable = pm.Poisson("fxd", 1, observed=data)
print("value: ", fixed_variable.tag.test_value)
# We're using some fake data here
data = np.array([10, 25, 15, 20, 35])
with model:
obs = pm.Poisson("obs", lambda_, observed=data)
print(obs.tag.test_value)
tau = np.random.randint(0, 80)
print(tau)
alpha = 1./20.
lambda_1, lambda_2 = np.random.exponential(scale=1/alpha, size=2)
print(lambda_1, lambda_2)
data = np.r_[stats.poisson.rvs(mu=lambda_1, size=tau), stats.poisson.rvs(mu=lambda_2, size = 80 - tau)]
plt.bar(np.arange(80), data, color="#348ABD")
plt.bar(tau-1, data[tau - 1], color="r", label="user behaviour changed")
plt.xlabel("Time (days)")
plt.ylabel("count of text-msgs received")
plt.title("Artificial dataset")
plt.xlim(0, 80)
plt.legend();
def plot_artificial_sms_dataset():
tau = stats.randint.rvs(0, 80)
alpha = 1./20.
lambda_1, lambda_2 = stats.expon.rvs(scale=1/alpha, size=2)
data = np.r_[stats.poisson.rvs(mu=lambda_1, size=tau), stats.poisson.rvs(mu=lambda_2, size=80 - tau)]
plt.bar(np.arange(80), data, color="#348ABD")
plt.bar(tau - 1, data[tau-1], color="r", label="user behaviour changed")
plt.xlim(0, 80);
figsize(12.5, 5)
plt.title("More example of artificial datasets")
for i in range(4):
plt.subplot(4, 1, i+1)
plot_artificial_sms_dataset()
import pymc3 as pm
# The parameters are the bounds of the Uniform.
with pm.Model() as model:
p = pm.Uniform('p', lower=0, upper=1)
#set constants
p_true = 0.05 # remember, this is unknown.
N = 1500
# sample N Bernoulli random variables from Ber(0.05).
# each random variable has a 0.05 chance of being a 1.
# this is the data-generation step
occurrences = stats.bernoulli.rvs(p_true, size=N)
print(occurrences) # Remember: Python treats True == 1, and False == 0
print(np.sum(occurrences))
# Occurrences.mean is equal to n/N.
print("What is the observed frequency in Group A? %.4f" % np.mean(occurrences))
print("Does this equal the true frequency? %s" % (np.mean(occurrences) == p_true))
#include the observations, which are Bernoulli
with model:
obs = pm.Bernoulli("obs", p, observed=occurrences)
# To be explained in chapter 3
step = pm.Metropolis()
trace = pm.sample(18000, step=step)
burned_trace = trace[1000:]
figsize(12.5, 4)
plt.title("Posterior distribution of $p_A$, the true effectiveness of site A")
plt.vlines(p_true, 0, 90, linestyle="--", label="true $p_A$ (unknown)")
plt.hist(burned_trace["p"], bins=25, histtype="stepfilled", normed=True)
plt.legend();
import pymc3 as pm
figsize(12, 4)
#these two quantities are unknown to us.
true_p_A = 0.05
true_p_B = 0.04
#notice the unequal sample sizes -- no problem in Bayesian analysis.
N_A = 1500
N_B = 750
#generate some observations
observations_A = stats.bernoulli.rvs(true_p_A, size=N_A)
observations_B = stats.bernoulli.rvs(true_p_B, size=N_B)
print("Obs from Site A: ", observations_A[:30], "...")
print("Obs from Site B: ", observations_B[:30], "...")
print(np.mean(observations_A))
print(np.mean(observations_B))
# Set up the pymc3 model. Again assume Uniform priors for p_A and p_B.
with pm.Model() as model:
p_A = pm.Uniform("p_A", 0, 1)
p_B = pm.Uniform("p_B", 0, 1)
# Define the deterministic delta function. This is our unknown of interest.
delta = pm.Deterministic("delta", p_A - p_B)
# Set of observations, in this case we have two observation datasets.
obs_A = pm.Bernoulli("obs_A", p_A, observed=observations_A)
obs_B = pm.Bernoulli("obs_B", p_B, observed=observations_B)
# To be explained in chapter 3.
step = pm.Metropolis()
trace = pm.sample(20000, step=step)
burned_trace=trace[1000:]
p_A_samples = burned_trace["p_A"]
p_B_samples = burned_trace["p_B"]
delta_samples = burned_trace["delta"]
figsize(12.5, 10)
#histogram of posteriors
ax = plt.subplot(311)
plt.xlim(0, .1)
plt.hist(p_A_samples, histtype='stepfilled', bins=25, alpha=0.85,
label="posterior of $p_A$", color="#A60628", normed=True)
plt.vlines(true_p_A, 0, 80, linestyle="--", label="true $p_A$ (unknown)")
plt.legend(loc="upper right")
plt.title("Posterior distributions of $p_A$, $p_B$, and delta unknowns")
ax = plt.subplot(312)
plt.xlim(0, .1)
plt.hist(p_B_samples, histtype='stepfilled', bins=25, alpha=0.85,
label="posterior of $p_B$", color="#467821", normed=True)
plt.vlines(true_p_B, 0, 80, linestyle="--", label="true $p_B$ (unknown)")
plt.legend(loc="upper right")
ax = plt.subplot(313)
plt.hist(delta_samples, histtype='stepfilled', bins=30, alpha=0.85,
label="posterior of delta", color="#7A68A6", normed=True)
plt.vlines(true_p_A - true_p_B, 0, 60, linestyle="--",
label="true delta (unknown)")
plt.vlines(0, 0, 60, color="black", alpha=0.2)
plt.legend(loc="upper right");
# Count the number of samples less than 0, i.e. the area under the curve
# before 0, represent the probability that site A is worse than site B.
print("Probability site A is WORSE than site B: %.3f" % \
np.mean(delta_samples < 0))
print("Probability site A is BETTER than site B: %.3f" % \
np.mean(delta_samples > 0))
figsize(12.5, 4)
import scipy.stats as stats
binomial = stats.binom
parameters = [(10, .4), (10, .9)]
colors = ["#348ABD", "#A60628"]
for i in range(2):
N, p = parameters[i]
_x = np.arange(N + 1)
plt.bar(_x - 0.5, binomial.pmf(_x, N, p), color=colors[i],
edgecolor=colors[i],
alpha=0.6,
label="$N$: %d, $p$: %.1f" % (N, p),
linewidth=3)
plt.legend(loc="upper left")
plt.xlim(0, 10.5)
plt.xlabel("$k$")
plt.ylabel("$P(X = k)$")
plt.title("Probability mass distributions of binomial random variables");
import pymc3 as pm
N = 100
with pm.Model() as model:
p = pm.Uniform("freq_cheating", 0, 1)
with model:
true_answers = pm.Bernoulli("truths", p, shape=N, testval=np.random.binomial(1, 0.5, N))
with model:
first_coin_flips = pm.Bernoulli("first_flips", 0.5, shape=N, testval=np.random.binomial(1, 0.5, N))
print(first_coin_flips.tag.test_value)
with model:
second_coin_flips = pm.Bernoulli("second_flips", 0.5, shape=N, testval=np.random.binomial(1, 0.5, N))
import theano.tensor as tt
with model:
val = first_coin_flips*true_answers + (1 - first_coin_flips)*second_coin_flips
observed_proportion = pm.Deterministic("observed_proportion", tt.sum(val)/float(N))
observed_proportion.tag.test_value
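# Added illustration: a plain truth table for the expression fc*t_a + (1-fc)*sc
# used above (1 = heads / cheater, 0 = tails / honest). A "Yes" (1) is recorded
# exactly when the first flip is heads and the student cheated, or when the
# first flip is tails and the second flip is heads.
for fc_ in (0, 1):
    for t_ in (0, 1):
        for sc_ in (0, 1):
            print("first=%d cheat=%d second=%d -> answer=%d"
                  % (fc_, t_, sc_, fc_ * t_ + (1 - fc_) * sc_))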
X = 35
with model:
observations = pm.Binomial("obs", N, observed_proportion, observed=X)
# To be explained in Chapter 3!
with model:
step = pm.Metropolis(vars=[p])
trace = pm.sample(40000, step=step)
burned_trace = trace[15000:]
figsize(12.5, 3)
p_trace = burned_trace["freq_cheating"][15000:]
plt.hist(p_trace, histtype="stepfilled", normed=True, alpha=0.85, bins=30,
label="posterior distribution", color="#348ABD")
plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.3)
plt.xlim(0, 1)
plt.legend();
with pm.Model() as model:
p = pm.Uniform("freq_cheating", 0, 1)
p_skewed = pm.Deterministic("p_skewed", 0.5*p + 0.25)
with model:
yes_responses = pm.Binomial("number_cheaters", 100, p_skewed, observed=35)
with model:
# To Be Explained in Chapter 3!
step = pm.Metropolis()
trace = pm.sample(25000, step=step)
burned_trace = trace[2500:]
figsize(12.5, 3)
p_trace = burned_trace["freq_cheating"]
plt.hist(p_trace, histtype="stepfilled", normed=True, alpha=0.85, bins=30,
label="posterior distribution", color="#348ABD")
plt.vlines([.05, .35], [0, 0], [5, 5], alpha=0.2)
plt.xlim(0, 1)
plt.legend();
N = 10
x = np.ones(N, dtype=object)
with pm.Model() as model:
for i in range(0, N):
x[i] = pm.Exponential('x_%i' % i, (i+1.0)**2)
figsize(12.5, 3.5)
np.set_printoptions(precision=3, suppress=True)
challenger_data = np.genfromtxt("data/challenger_data.csv", skip_header=1,
usecols=[1, 2], missing_values="NA",
delimiter=",")
#drop the NA values
challenger_data = challenger_data[~np.isnan(challenger_data[:, 1])]
#plot it, as a function of tempature (the first column)
print("Temp (F), O-Ring failure?")
print(challenger_data)
plt.scatter(challenger_data[:, 0], challenger_data[:, 1], s=75, color="k",
alpha=0.5)
plt.yticks([0, 1])
plt.ylabel("Damage Incident?")
plt.xlabel("Outside temperature (Fahrenheit)")
plt.title("Defects of the Space Shuttle O-Rings vs temperature");
figsize(12, 3)
def logistic(x, beta):
return 1.0 / (1.0 + np.exp(beta * x))
x = np.linspace(-4, 4, 100)
plt.plot(x, logistic(x, 1), label=r"$\beta = 1$")
plt.plot(x, logistic(x, 3), label=r"$\beta = 3$")
plt.plot(x, logistic(x, -5), label=r"$\beta = -5$")
plt.legend();
def logistic(x, beta, alpha=0):
return 1.0 / (1.0 + np.exp(np.dot(beta, x) + alpha))
x = np.linspace(-4, 4, 100)
plt.plot(x, logistic(x, 1), label=r"$\beta = 1$", ls="--", lw=1)
plt.plot(x, logistic(x, 3), label=r"$\beta = 3$", ls="--", lw=1)
plt.plot(x, logistic(x, -5), label=r"$\beta = -5$", ls="--", lw=1)
plt.plot(x, logistic(x, 1, 1), label=r"$\beta = 1, \alpha = 1$",
color="#348ABD")
plt.plot(x, logistic(x, 3, -2), label=r"$\beta = 3, \alpha = -2$",
color="#A60628")
plt.plot(x, logistic(x, -5, 7), label=r"$\beta = -5, \alpha = 7$",
color="#7A68A6")
plt.legend(loc="lower left");
import scipy.stats as stats
nor = stats.norm
x = np.linspace(-8, 7, 150)
mu = (-2, 0, 3)
tau = (.7, 1, 2.8)
colors = ["#348ABD", "#A60628", "#7A68A6"]
parameters = zip(mu, tau, colors)
for _mu, _tau, _color in parameters:
plt.plot(x, nor.pdf(x, _mu, scale=1./_tau),
label="$\mu = %d,\;\\tau = %.1f$" % (_mu, _tau), color=_color)
plt.fill_between(x, nor.pdf(x, _mu, scale=1./_tau), color=_color,
alpha=.33)
plt.legend(loc="upper right")
plt.xlabel("$x$")
plt.ylabel("density function at $x$")
plt.title("Probability distribution of three different Normal random \
variables");
import pymc3 as pm
temperature = challenger_data[:, 0]
D = challenger_data[:, 1] # defect or not?
#notice the `testval` here. We explain why below.
with pm.Model() as model:
beta = pm.Normal("beta", mu=0, tau=0.001, testval=0)
alpha = pm.Normal("alpha", mu=0, tau=0.001, testval=0)
p = pm.Deterministic("p", 1.0/(1. + tt.exp(beta*temperature + alpha)))
# connect the probabilities in `p` with our observations through a
# Bernoulli random variable.
with model:
observed = pm.Bernoulli("bernoulli_obs", p, observed=D)
# Mysterious code to be explained in Chapter 3
start = pm.find_MAP()
step = pm.Metropolis()
trace = pm.sample(120000, step=step, start=start)
burned_trace = trace[100000::2]
alpha_samples = burned_trace["alpha"][:, None] # best to make them 1d
beta_samples = burned_trace["beta"][:, None]
figsize(12.5, 6)
#histogram of the samples:
plt.subplot(211)
plt.title(r"Posterior distributions of the variables $\alpha, \beta$")
plt.hist(beta_samples, histtype='stepfilled', bins=35, alpha=0.85,
label=r"posterior of $\beta$", color="#7A68A6", normed=True)
plt.legend()
plt.subplot(212)
plt.hist(alpha_samples, histtype='stepfilled', bins=35, alpha=0.85,
label=r"posterior of $\alpha$", color="#A60628", normed=True)
plt.legend();
t = np.linspace(temperature.min() - 5, temperature.max()+5, 50)[:, None]
p_t = logistic(t.T, beta_samples, alpha_samples)
mean_prob_t = p_t.mean(axis=0)
figsize(12.5, 4)
plt.plot(t, mean_prob_t, lw=3, label="average posterior \nprobability \
of defect")
plt.plot(t, p_t[0, :], ls="--", label="realization from posterior")
plt.plot(t, p_t[-2, :], ls="--", label="realization from posterior")
plt.scatter(temperature, D, color="k", s=50, alpha=0.5)
plt.title("Posterior expected value of probability of defect; \
plus realizations")
plt.legend(loc="lower left")
plt.ylim(-0.1, 1.1)
plt.xlim(t.min(), t.max())
plt.ylabel("probability")
plt.xlabel("temperature");
from scipy.stats.mstats import mquantiles
# vectorized bottom and top 2.5% quantiles for "confidence interval"
qs = mquantiles(p_t, [0.025, 0.975], axis=0)
plt.fill_between(t[:, 0], *qs, alpha=0.7,
color="#7A68A6")
plt.plot(t[:, 0], qs[0], label="95% CI", color="#7A68A6", alpha=0.7)
plt.plot(t, mean_prob_t, lw=1, ls="--", color="k",
label="average posterior \nprobability of defect")
plt.xlim(t.min(), t.max())
plt.ylim(-0.02, 1.02)
plt.legend(loc="lower left")
plt.scatter(temperature, D, color="k", s=50, alpha=0.5)
plt.xlabel("temp, $t$")
plt.ylabel("probability estimate")
plt.title("Posterior probability estimates given temp. $t$");
figsize(12.5, 2.5)
prob_31 = logistic(31, beta_samples, alpha_samples)
plt.xlim(0.995, 1)
plt.hist(prob_31, bins=1000, normed=True, histtype='stepfilled')
plt.title("Posterior distribution of probability of defect, given $t = 31$")
plt.xlabel("probability of defect occurring in O-ring");
N = 10000
with pm.Model() as model:
beta = pm.Normal("beta", mu=0, tau=0.001, testval=0)
alpha = pm.Normal("alpha", mu=0, tau=0.001, testval=0)
p = pm.Deterministic("p", 1.0/(1. + tt.exp(beta*temperature + alpha)))
observed = pm.Bernoulli("bernoulli_obs", p, observed=D)
simulated = pm.Bernoulli("bernoulli_sim", p, shape=p.tag.test_value.shape)
step = pm.Metropolis(vars=[p])
trace = pm.sample(N, step=step)
figsize(12.5, 5)
simulations = trace["bernoulli_sim"]
print(simulations.shape)
plt.title("Simulated dataset using posterior parameters")
figsize(12.5, 6)
for i in range(4):
ax = plt.subplot(4, 1, i+1)
plt.scatter(temperature, simulations[1000*i, :], color="k",
s=50, alpha=0.6)
posterior_probability = simulations.mean(axis=0)
print("posterior prob of defect | realized defect ")
for i in range(len(D)):
print("%.2f | %d" % (posterior_probability[i], D[i]))
ix = np.argsort(posterior_probability)
print("probb | defect ")
for i in range(len(D)):
print("%.2f | %d" % (posterior_probability[ix[i]], D[ix[i]]))
from separation_plot import separation_plot
figsize(11., 1.5)
separation_plot(posterior_probability, D)
figsize(11., 1.25)
# Our temperature-dependent model
separation_plot(posterior_probability, D)
plt.title("Temperature-dependent model")
# Perfect model
# i.e. the probability of defect is equal to if a defect occurred or not.
p = D
separation_plot(p, D)
plt.title("Perfect model")
# random predictions
p = np.random.rand(23)
separation_plot(p, D)
plt.title("Random model")
# constant model
constant_prob = 7./23*np.ones(23)
separation_plot(constant_prob, D)
plt.title("Constant-prediction model");
#type your code here.
figsize(12.5, 4)
plt.scatter(alpha_samples, beta_samples, alpha=0.1)
plt.title("Why does the plot look like this?")
plt.xlabel(r"$\alpha$")
plt.ylabel(r"$\beta$");
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sparse 2d interpolation
Step2: The following plot should show the points on the boundary and the single point in the interior
Step3: Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain
Step4: Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful.
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
sns.set_style('white')
from scipy.interpolate import griddata
x1=np.arange(-5,6)
y1=5*np.ones(11)
f1=np.zeros(11)
x2=np.arange(-5,6)
y2=-5*np.ones(11)
f2=np.zeros(11)
y3=np.arange(-4,5)
x3=5*np.ones(9)
f3=np.zeros(9)
y4=np.arange(-4,5)
x4=-5*np.ones(9)
f4=np.zeros(9)
x5=np.array([0])
y5=np.array([0])
f5=np.array([1])
x=np.hstack((x1,x2,x3,x4,x5))
y=np.hstack((y1,y2,y3,y4,y5))
f=np.hstack((f1,f2,f3,f4,f5))
print (x)
print (y)
print (f)
plt.scatter(x, y);
assert x.shape==(41,)
assert y.shape==(41,)
assert f.shape==(41,)
assert np.count_nonzero(f)==1
xnew=np.linspace(-5.0,6.0,100)
ynew=np.linspace(-5,6,100)
Xnew,Ynew=np.meshgrid(xnew,ynew)
Fnew=griddata((x,y),f,(Xnew,Ynew),method='cubic',fill_value=0.0)
assert xnew.shape==(100,)
assert ynew.shape==(100,)
assert Xnew.shape==(100,100)
assert Ynew.shape==(100,100)
assert Fnew.shape==(100,100)
plt.figure(figsize=(10,8))
plt.contourf(Fnew,cmap='cubehelix_r')
plt.title('2D Interpolation');
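# Added comparison (illustrative, not part of the graded solution): griddata
# also supports 'nearest' and 'linear'; printing the reconstructed peak for
# each method shows how the choice affects the interpolation of the interior
# point.
for method_ in ('nearest', 'linear', 'cubic'):
    F_method = griddata((x, y), f, (Xnew, Ynew), method=method_, fill_value=0.0)
    print(method_, "peak value:", F_method.max())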
assert True # leave this to grade the plot
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hierarchical Agglomerative Clustering - Complete Linkage Clustering
Step2: <br>
Step3: <br>
Step4: b) Condensed distance matrix (correct)
Step5: c) Input sample matrix (correct)
Step6: <br>
Step7: <br>
Step8: Thus, in order to sort the DataFrame according to the clustering, we can simply use the 'leaves' as indices like so
Step9: <br>
Step10: <br>
Step11: <br>
| <ASSISTANT_TASK:>
Python Code:
%load_ext watermark
%watermark -a "Sebastian Raschka" -d -v
import pandas as pd
import numpy as np
np.random.seed(123)
variables = ['X', 'Y', 'Z']
labels = ['ID_0','ID_1','ID_2','ID_3','ID_4']
X = np.random.random_sample([5,3])*10
df = pd.DataFrame(X, columns=variables, index=labels)
df
from scipy.spatial.distance import pdist,squareform
row_dist = pd.DataFrame(squareform(pdist(df, metric='euclidean')), columns=labels, index=labels)
row_dist
from scipy.cluster.hierarchy import linkage
row_clusters = linkage(row_dist, method='complete', metric='euclidean')
pd.DataFrame(row_clusters,
columns=['row label 1', 'row label 2', 'distance', 'no. of items in clust.'],
index=['cluster %d' %(i+1) for i in range(row_clusters.shape[0])])
row_clusters = linkage(pdist(df, metric='euclidean'), method='complete')
pd.DataFrame(row_clusters,
columns=['row label 1', 'row label 2', 'distance', 'no. of items in clust.'],
index=['cluster %d' %(i+1) for i in range(row_clusters.shape[0])])
row_clusters = linkage(df.values, method='complete', metric='euclidean')
pd.DataFrame(row_clusters,
columns=['row label 1', 'row label 2', 'distance', 'no. of items in clust.'],
index=['cluster %d' %(i+1) for i in range(row_clusters.shape[0])])
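# Added sanity check (illustrative): the first variant above hands the
# squareform distance matrix to linkage, which then treats its rows as new
# observations, so its merge distances differ from the variants based on the
# condensed distances / raw samples.
clusters_squareform = linkage(row_dist, method='complete', metric='euclidean')
clusters_condensed = linkage(pdist(df, metric='euclidean'), method='complete')
print(np.allclose(clusters_squareform, clusters_condensed))  # expected: False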
import matplotlib.pyplot as plt
%matplotlib inline
from scipy.cluster.hierarchy import dendrogram
row_dendr = dendrogram(row_clusters, labels=labels)
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(df, interpolation='nearest', cmap='hot_r')
fig.colorbar(cax)
ax.set_xticklabels([''] + list(df.columns))
ax.set_yticklabels([''] + list(df.index))
plt.show()
row_dendr['leaves']
# reorder rows with respect to the clustering
row_dendr = dendrogram(row_clusters, labels=labels, no_plot=True)
df_rowclust = df.ix[row_dendr['leaves']]
# plot
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(df_rowclust, interpolation='nearest', cmap='hot_r')
fig.colorbar(cax)
ax.set_xticklabels([''] + list(df_rowclust.columns))
ax.set_yticklabels([''] + list(df_rowclust.index))
plt.show()
from scipy.cluster import hierarchy
# makes dendrogram black (1)
hierarchy.set_link_color_palette(['black'])
# plot row dendrogram
fig = plt.figure(figsize=(8,8))
axd = fig.add_axes([0.09,0.1,0.2,0.6])
row_dendr = dendrogram(row_clusters, orientation='right',
color_threshold=np.inf, ) # makes dendrogram black (2))
# reorder data with respect to clustering
df_rowclust = df.ix[row_dendr['leaves'][::-1]]
axd.set_xticks([])
axd.set_yticks([])
# remove axes spines from dendrogram
for i in axd.spines.values():
i.set_visible(False)
# reorder rows with respect to the clustering
df_rowclust = df.ix[row_dendr['leaves'][::-1]]
# plot heatmap
axm = fig.add_axes([0.26,0.1,0.6,0.6]) # x-pos, y-pos, width, height
cax = axm.matshow(df_rowclust, interpolation='nearest', cmap='hot_r')
fig.colorbar(cax)
axm.set_xticklabels([''] + list(df_rowclust.columns))
axm.set_yticklabels([''] + list(df_rowclust.index))
plt.show()
# Compute pairwise distances for columns
col_dists = pdist(df.T, metric='euclidean')
col_clusters = linkage(col_dists, method='complete')
# plot column dendrogram
fig = plt.figure(figsize=(8,8))
axd2 = fig.add_axes([0.38,0.74,0.36,0.10])
col_dendr = dendrogram(col_clusters, orientation='top',
color_threshold=np.inf) # makes dendrogram black)
axd2.set_xticks([])
axd2.set_yticks([])
# plot row dendrogram
axd1 = fig.add_axes([0.09,0.1,0.2,0.6])
row_dendr = dendrogram(row_clusters, orientation='right',
count_sort='ascending',
color_threshold=np.inf) # makes dendrogram black
axd1.set_xticks([])
axd1.set_yticks([])
# remove axes spines from dendrogram
for i,j in zip(axd1.spines.values(), axd2.spines.values()):
i.set_visible(False)
j.set_visible(False)
# reorder columns and rows with respect to the clustering
df_rowclust = df.ix[row_dendr['leaves'][::-1]]
df_rowclust.columns = [df_rowclust.columns[col_dendr['leaves']]]
# plot heatmap
axm = fig.add_axes([0.26,0.1,0.6,0.6])
cax = axm.matshow(df_rowclust, interpolation='nearest', cmap='hot_r')
fig.colorbar(cax)
axm.set_xticklabels([''] + list(df_rowclust.columns))
axm.set_yticklabels([''] + list(df_rowclust.index))
plt.show()
from scipy.cluster import hierarchy
# makes dendrogram black (1)
hierarchy.set_link_color_palette(['black'])
# plot row dendrogram
fig = plt.figure(figsize=(8,8))
axd = fig.add_axes([0.09,0.1,0.2,0.6])
row_dendr = dendrogram(row_clusters, orientation='right',
labels=labels,
color_threshold=np.inf, ) # makes dendrogram black (2))
axd.set_xticks([])
# uncomment to hide dendrogram labels
#axd.set_yticks([])
# remove axes spines from dendrogram
for i in axd.spines.values():
i.set_visible(False)
# reorder columns and rows with respect to the clustering
df_rowclust = df.ix[row_dendr['leaves'][::-1]]
# plot heatmap
axm = fig.add_axes([0.26,0.1,0.6,0.6]) # x-pos, y-pos, width, height
cax = axm.matshow(df_rowclust, interpolation='nearest', cmap='hot_r')
fig.colorbar(cax)
axm.set_xticklabels([''] + list(df_rowclust.columns))
axm.set_yticklabels([''] + list(df_rowclust.index))
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's create some lightcurves
Step2: In terms of LYRA, the server only allows you to download an entire day of data at a time. We can match this to the rest of the data by using the truncate function.
Step3: Part of the advantage of using these inbuilt functions is that we can get a quick look at our data using short commands
Step4: Accessing and using the data
Step5: Notice that the meta data information is stored in something called OrderedDict
Step6: Analysing Lightcurve data
Step7: Pandas
Step8: So let's plot them on the graph
Step9: Reading in Tabulated data
Step10: Now, the above line imports information on some solar features over a sample time period. Specifically we have maximum length, lifetime and the time at which they occurred. Now if we type data[0] what will happen?
Step11: This is the first row of the array, containing the first element of each of our three properties. This particular example is a structured array, so the columns and rows can have properties and we can assign properties to the header. We can ask what the titles of these columns are by using a dtype command
Step12: Unhelpful, so let's give them something more recognisable. We can use the docs to look up the syntax and change the names of the column labels.
Step13: DataFrame
Step14: Datetime Objects
Step15: Looking back at when we discussed the first element of data, the format of the time index was awkward to use, so let's do something about that.
Step16: This is a string and Python will just treat it as such. We need to use datetime to pick this string apart and turn it into an object we can use.
Step17: The next logical step would be to write a for loop that iterates over the index and reassigns it. HOWEVER, there is almost always a better way, and pandas has a to_datetime() method that we can feed the whole index
Step18: Much easier. Note that the format of the table has now changed: the index entries are now pandas-specific datetime objects, and it looks like this
Step19: This means we can bin data according to time
Step20: Here we have used the groupby command to take the 'max_len' column, called as a dictionary key, and create bins for our data to sit in according to year and then month.
Step21: Let's not forget that l_bins holds one bin per (year, month) group, so when we print out l_mean we get
Step22: Now that we have all this data we can build a lovely bar graph, complete with error bars.
Step23: Note that the dates on the x-axis are a little messed up; we can fix that with fig.autofmt_xdate()
| <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division
import numpy as np
import sunpy
from sunpy import lightcurve as lc
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
goes_lightcurve = lc.GOESLightCurve.create('2011-06-07 06:00','2011-06-07 08:00')
hsi_lightcurve = lc.RHESSISummaryLightCurve.create('2011-06-07 06:00','2011-06-07 08:00')
lyra_lightcurve_fullday = lc.LYRALightCurve.create('2011-06-07')
lyra_lightcurve = lyra_lightcurve_fullday.truncate('2011-06-07 06:00','2011-06-07 08:00')
fig = goes_lightcurve.peek()
fig = lyra_lightcurve.peek()
print(lyra_lightcurve.data.keys())
print(lyra_lightcurve.meta)
plt.figure(1, figsize=(10,5))
plt.plot(lyra_lightcurve.data.index, lyra_lightcurve.data['CHANNEL3'], color='blue', label='Al filter')
plt.plot(lyra_lightcurve.data.index, lyra_lightcurve.data['CHANNEL4'], color='red', label='Zr filter')
plt.ylabel('Flux (W m^-2)')
plt.show()
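# Cross-correlate the two LYRA channels; with np.correlate's default 'valid' mode and equal-length inputs this returns a single overlap value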
cross_correlation = np.correlate(lyra_lightcurve.data['CHANNEL3'],
lyra_lightcurve.data['CHANNEL4'])
print(cross_correlation)
# max time argument taken from long and short GOES channels
max_t_goes_long = goes_lightcurve.data['xrsb'].idxmax()
max_t_goes_short = goes_lightcurve.data['xrsa'].idxmax()
# max time argument taken from LYRA channels 3 & 4
max_t_lyra_al = lyra_lightcurve.data['CHANNEL3'].idxmax()
max_t_lyra_zr = lyra_lightcurve.data['CHANNEL4'].idxmax()
print('GOES long: ', max_t_goes_long)
print('GOES short: ', max_t_goes_short)
print('LYRA Al: ', max_t_lyra_al)
print('LYRA Zr: ', max_t_lyra_zr)
# create figure with raw curves
plt.figure(1, figsize=(10,5))
plt.plot(lyra_lightcurve.data.index, lyra_lightcurve.data['CHANNEL3'], color='blue', label='Al filter')
plt.plot(lyra_lightcurve.data.index, lyra_lightcurve.data['CHANNEL4'], color='red', label='Zr filter')
plt.ylabel('Flux (W m^-2)')
# max lines
plt.axvline(max_t_lyra_al,color='blue',linestyle='dashed', linewidth=2)
plt.axvline(max_t_lyra_zr,color='red',linestyle='dashed')
plt.axvline(max_t_goes_long,color='green',linestyle='dashed',linewidth=2)
data = np.genfromtxt('macrospicules.csv', skip_header=1, dtype=None, delimiter=',')
data[0]
data.dtype.names
data.dtype.names = ('max_len', 'ltime', 'sample_time')
data['max_len']
import pandas as pd
d = {'max_len': data['max_len'], 'ltime': data['ltime']}
df = pd.DataFrame(data=d, index=data['sample_time'])
print(df)
import datetime
print(datetime.datetime.now())
print(datetime.datetime.utcnow())
lunchtime = datetime.time(12,30)
the_date = datetime.date(2005, 7, 14)
dinner = datetime.datetime.combine(the_date, lunchtime)
print("When is dinner? {}".format(dinner))
print(df.index[0])
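# Parse one ISO-formatted timestamp string into a datetime object with an explicit format string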
dt_obj = datetime.datetime.strptime(df.index[0], '%Y-%m-%dT%H:%M:%S.%f')
print(dt_obj)
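# pandas can convert the entire index in one vectorised call instead of looping over it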
df.index = pd.to_datetime(df.index)
df
df.index[0]
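# Group the maximum lengths into (year, month) bins based on the datetime index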
l_bins = df['max_len'].groupby(by=[df.index.year, df.index.month])
print(len(l_bins))
l_mean = l_bins.mean()
l_std = l_bins.std()
print(l_mean)
fig, ax = plt.subplots()
fig.autofmt_xdate()
l_mean.plot(kind='bar', ax=ax, yerr=l_std, grid=False, legend=False)
plt.show()
import numpy as np
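# Transit photometry file: column 0 is time in Julian days, column 1 the flux, column 2 the flux error (based on how they are used below)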
ep_data = np.loadtxt('data/XO1_wl_transit_FLUX.txt')
ep_dict = {'flux':ep_data[:, 1],
'err_flux':ep_data[:, 2]}
ep_df = pd.DataFrame(data=ep_dict, index=ep_data[:,0])
ep_df.index = pd.to_datetime(ep_df.index)
from astropy.time import Time
t = Time(ep_data[:, 0], format='jd')
UTC = t.datetime
ep_df.index = UTC
ep_df
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Enable the Dataflow API for your GKE cluster
Step2: Configure the TFX pipeline example
Step3: Submit pipeline for execution on the Kubeflow cluster
| <ASSISTANT_TASK:>
Python Code:
!pip3 install 'tfx==0.15.0' --upgrade
!python3 -m pip install 'kfp>=0.1.35' --quiet
# Directory and data locations (uses Google Cloud Storage).
import os
_input_bucket = '<your gcs bucket>'
_output_bucket = '<your gcs bucket>'
_pipeline_root = os.path.join(_output_bucket, 'tfx')
# Google Cloud Platform project id to use when deploying this pipeline.
_project_id = '<your project id>'
# copy the trainer code to a storage bucket as the TFX pipeline will need that code file in GCS
from tensorflow.compat.v1 import gfile
gfile.Copy('utils/taxi_utils.py', _input_bucket + '/taxi_utils.py')
%load https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/chicago_taxi_pipeline/taxi_pipeline_kubeflow_gcp.py
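# The %load above pulls in the TFX Chicago taxi pipeline example; running that cell is expected to compile it into chicago_taxi_pipeline_kubeflow.tar.gz, which is submitted to Kubeflow Pipelines below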
import kfp
run_result = kfp.Client(
host=None # replace with Kubeflow Pipelines endpoint if this notebook is run outside of the Kubeflow cluster.
).create_run_from_pipeline_package('chicago_taxi_pipeline_kubeflow.tar.gz', arguments={})
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: One-hot encoding
Step2: Data augmentation
Step3: Range of image
Step4: Remove obstacle feature
Step5: Check the converted image
Step6: Data shuffle
Step7: Set parameter
Step8: Build model (DNN)
Step9: Train operation
| <ASSISTANT_TASK:>
Python Code:
# imports needed by the cells below (imread/imresize come from older SciPy releases)
import os
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from scipy.misc import imread, imresize

current_dir = os.getcwd()
data_dir = os.listdir("./train/")
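# Walk each class folder under ./train/, load and resize every image to 256x256x3, and record an integer label per folder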
def image_road(data_dir):
img_matrix = []
label = []
index = 0
for data_label in data_dir:
category_list = os.listdir(os.getcwd()+"/train/"+data_label)
for data in category_list:
img = imread(current_dir+"/train/"+data_label+"/"+data)
resize_img = imresize(img, [256, 256, 3])
img_matrix.append(resize_img)
label.append(index)
index += 1
return img_matrix, label
img_matrix, label = image_road(data_dir)
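# One-hot encode the integer labels: np.eye gives one row per class, indexed by each sample's label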
y_data = np.eye(len(data_dir))[label]
plt.imshow(img_matrix[0])
plt.title("Original Image")
print("Image size :", np.shape(img_matrix))
img = np.array(img_matrix)/255
print("change the range of image : \n",img[0,0:3,0:3,0])
print(np.shape(img))
# RGB channel
for i in range(len(img)):
row_img = img[i]
red = row_img[:,:,0]
green = row_img[:,:,1]
blue = row_img[:,:,2]
red_reshape = np.reshape(red, -1)
green_reshape = np.reshape(green, -1)
blue_reshape = np.reshape(blue, -1)
for index, value in enumerate(zip(red_reshape,green_reshape,blue_reshape)):
if value[1] < 0.20:
img[i,index//256, index%256, :] = 0
elif value[0] > 0.30:
img[i,index//256, index%256, :] = 0
elif value[2] > 0.20:
img[i,index//256, index%256, :] = 0
plt.imshow(img[2])
plt.title("Convert remove obstacle feature")
randidx = np.random.randint(len(img),size=len(img))
shuffle_x = img[randidx,:]
shuffle_y = y_data[randidx,:]
split_value = int(len(img)*0.8)
train_x = shuffle_x[:split_value]
train_y = shuffle_y[:split_value]
test_x = shuffle_x[split_value:]
test_y = shuffle_y[split_value:]
img_width = np.shape(img)[2]
img_height = np.shape(img)[1]
channel = 3
batch_size = 64
learning_rate = 0.01
epoch = 7
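# Simple fully connected network: flatten the 256x256x3 input, two hidden ReLU layers (256 and 512 units), and a 2-way output layer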
X = tf.placeholder(tf.float32, [None, img_width, img_width, channel])
y = tf.placeholder(tf.float32, [None, 2])
X_img = tf.reshape(X,[-1, img_width*img_height*channel])
w_1 = tf.get_variable("weight1",shape=[img_width*img_height*channel, 256], initializer=tf.random_normal_initializer())
b_1 = tf.get_variable("bias1", shape=[256,] ,initializer=tf.zeros_initializer())
layer_1 = tf.nn.bias_add(tf.matmul(X_img,w_1),b_1)
layer_1 = tf.nn.relu(layer_1)
w_2 = tf.get_variable("weight2",shape=[256, 512], initializer=tf.random_normal_initializer())
b_2 = tf.get_variable("bias2", shape=[512,] ,initializer=tf.zeros_initializer())
layer_2 = tf.nn.bias_add(tf.matmul(layer_1,w_2),b_2)
layer_2 = tf.nn.relu(layer_2)
w_3 = tf.get_variable("weight3",shape=[512, 2], initializer=tf.random_normal_initializer())
b_3 = tf.get_variable("bias3", shape=[2,] ,initializer=tf.zeros_initializer())
layer_3 = tf.nn.bias_add(tf.matmul(layer_2,w_3),b_3)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=layer_3, labels=y))
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(loss)
correct_prediction = tf.equal(tf.argmax(layer_3, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
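# Training loop: sample random mini-batches, accumulate the average cross-entropy per epoch, then report accuracy on the held-out test split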
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for i in range(epoch):
avg_cost = 0
total_batch = int(len(train_x)/batch_size)
for step in range(total_batch):
randidx = np.random.randint(len(train_x),size=batch_size)
batch_xs = train_x[randidx,:]
batch_ys = train_y[randidx,:]
feed_dict = {X: batch_xs, y: batch_ys}
c, _ = sess.run([loss, optimizer], feed_dict=feed_dict)
avg_cost += c / total_batch
        if i % 1 == 0:
print("Cost :",avg_cost)
print("Test Accuracy :", sess.run(accuracy, feed_dict={X:test_x, y:test_y}))
print('Finished')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Checkerboard
Step2: Use vizarray to visualize a checkerboard of size=20 with a block size of 10px.
Step3: Use vizarray to visualize a checkerboard of size=27 with a block size of 5px.
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
import antipackage
import github.ellisonbg.misc.vizarray as va
def checkerboard(size):
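    # Odd sizes walk every element with nditer and toggle 0/1; even sizes tile alternating [1,0] / [0,1] row patterns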
    a = np.zeros((size, size), dtype=float)
b = 2
if size % 2 != 0:
for element in np.nditer(a, op_flags=['readwrite']):
            if b % 2 == 0:
                element[...] = element + 1.0
            b += 1
return a
else:
c = [1,0]
d = [0,1]
e = []
        f = size // 2  # integer division so the list repetition below also works under Python 3
g = list(range(1, size + 1))
for item in g:
if item % 2 != 0:
e.append(c * f)
else:
e.append(d * f)
        h = np.array(e, dtype=float)
return h
print(checkerboard(4))
va.set_block_size(10)
va.vizarray(checkerboard(20))
assert True
va.set_block_size(5)
va.vizarray(checkerboard(27))
assert True
a = checkerboard(4)
assert a[0,0]==1.0
assert a.sum()==8.0
assert a.dtype==np.dtype(float)
assert np.all(a[0,0:5:2]==1.0)
assert np.all(a[1,0:5:2]==0.0)
b = checkerboard(5)
assert b[0,0]==1.0
assert b.sum()==13.0
assert np.all(b.ravel()[0:26:2]==1.0)
assert np.all(b.ravel()[1:25:2]==0.0)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: PCA
Step2: Solving b vector
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas
import matplotlib.pyplot as plt
df = pandas.read_csv('muct76-opencv.csv', header=0, usecols=np.arange(2,154), dtype=float)
df.head()
X = df.iloc[:, ::2].values
Y = df.iloc[:, 1::2].values
d = np.hstack((X,Y))
d.shape
import sys
threshold = 1.0e-8
def center(vec):
pivot = int(vec.shape[0]/2)
meanx = np.mean(vec[:pivot])
meany = np.mean(vec[pivot:])
return(meanx, meany)
def calnorm(vec):
vsqsum = np.sum(np.square(vec))
return(vsqsum)
def scale(vec):
vcopy = vec.copy()
vmax = np.max(vec)
if vmax > 2.0:
vcopy = vcopy / vmax
vnorm = calnorm(vcopy)
return (vcopy / np.sqrt(vnorm))
def caldiff(pref, pcmp):
return np.mean(np.sum(np.square(pref - pcmp), axis=1))
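# Similarity transform: solve for the scale/rotation parameters (a, b) and the translation that best map pcmp onto pref in a least-squares sense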
def simTransform(pref, pcmp, showerror = False):
err_before = np.mean(np.sum(np.square(pref - pcmp), axis=1))
ref_mean = np.mean(pref, axis=0)
prefcentered = np.asmatrix(pref) - np.asmatrix(ref_mean)
cmp_mean = np.mean(pcmp, axis=0)
pcmpcentered = np.asmatrix(pcmp) - np.asmatrix(cmp_mean)
Sxx = np.sum(np.square(pcmpcentered[:,0]))
Syy = np.sum(np.square(pcmpcentered[:,1]))
Sxxr = prefcentered[:,0].T * pcmpcentered[:,0] #(ref_x, x)
Syyr = prefcentered[:,1].T * pcmpcentered[:,1] #(ref_y, y)
Sxyr = prefcentered[:,1].T * pcmpcentered[:,0] #(ref_y, x)
Syxr = prefcentered[:,0].T * pcmpcentered[:,1] #(ref_x, y)
a = (Sxxr + Syyr)/(Sxx + Syy) #(Sxxr + Syyr) / (Sxx + Syy)
b = (Sxyr - Syxr) / (Sxx + Syy)
a = np.asscalar(a)
b = np.asscalar(b)
Rot = np.matrix([[a, -b],[b, a]])
translation = -Rot * np.asmatrix(cmp_mean).T + np.asmatrix(ref_mean).T
outx, outy = [], []
res = Rot * np.asmatrix(pcmp).T + translation
err_after = np.mean(np.sum(np.square(pref - res.T), axis=1))
if showerror:
print("Error before: %.4f after: %.4f\n"%(err_before, err_after))
return (res.T, err_after)
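# Generalised Procrustes-style alignment: centre and scale every shape, then repeatedly align all shapes to the current mean shape until the mean stops changing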
def align2mean(data):
d = data.copy()
pivot = int(d.shape[1]/2)
for i in range(d.shape[0]):
cx, cy = center(d[i,:])
d[i,:pivot] = d[i,:pivot] - cx
d[i,pivot:] = d[i,pivot:] - cy
#print(cx, cy, center(d[i,:]))
d[i,:] = scale(d[i,:])
norm = calnorm(d[i,:])
d_aligned = d.copy()
pref = np.vstack((d[0,:pivot], d[0,pivot:])).T
print(pref.shape)
mean = pref.copy()
mean_diff = 1
while mean_diff > threshold:
err_sum = 0.0
for i in range(1, d.shape[0]):
p = np.vstack((d[i,:pivot], d[i,pivot:])).T
p_aligned, err = simTransform(mean, p)
d_aligned[i,:] = scale(p_aligned.flatten(order='F'))
err_sum += err
oldmean = mean.copy()
mean = np.mean(d_aligned, axis=0)
mean = scale(mean)
mean = np.reshape(mean, newshape=pref.shape, order='F')
d = d_aligned.copy()
mean_diff = caldiff(oldmean, mean)
sys.stdout.write("SumError: %.4f MeanDiff: %.6f\n"%(err_sum, mean_diff))
return (d_aligned, mean)
d_aligned, mean = align2mean(d)
plt.figure(figsize=(7,7))
plt.gca().set_aspect('equal')
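# plotFaceShape is assumed to be a plotting helper defined elsewhere in the notebook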
plotFaceShape(mean)
d_aligned.shape
from sklearn.decomposition import PCA
pca = PCA(n_components=8)
pca.fit(d_aligned)
print(pca.explained_variance_ratio_)
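# Cross-check the sklearn result by building the covariance matrix and eigendecomposing it by hand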
cov_mat = np.cov(d_aligned.T)
print(cov_mat.shape)
eig_values, eig_vectors = np.linalg.eig(cov_mat)
print(eig_values.shape, eig_vectors.shape)
num_eigs = 8
Phi_matrix = eig_vectors[:,:num_eigs]
Phi_matrix.shape
# * ()
mean_matrix = np.reshape(mean, (152,1), 'F')
d_aligned_matrix = np.matrix(d_aligned)
delta = d_aligned_matrix.T - mean_matrix
b = (np.matrix(Phi_matrix).T * delta).T
b.shape
mean.dump('models/meanshape-ocvfmt.pkl')
eig_vectors.dump('models/eigenvectors-ocvfmt.pkl')
eig_values.dump('models/eigenvalues-ocvfmt.pkl')
Phi_matrix.dump('models/phimatrix.pkl')
b.dump('models/bvector.pkl')
d_aligned.dump('models/alignedfaces.pkl')
mean_matrix.dump('models/meanvector.pkl')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sum up an array of numbers using python
Step2: Use cython, without changing the code
Step3: Does making s a local variable help?
| <ASSISTANT_TASK:>
Python Code:
%load_ext cython
import array
a = array.array('l',range(100))
s = 0
def python_sum(a):
global s
s = 0
for i in range(len(a)):
for j in range(10000):
s = s + a[i]
return s
%timeit python_sum(a)
%%cython --annotate
def cython_sum1(a):
global s
s = 0
for i in range(len(a)):
for j in range(10000):
s = s + a[i]
return s
print('python sum: ',python_sum(a))
print('cython sum1: ',cython_sum1(a))
print('python sum')
%timeit python_sum(a)
print('cython sum1')
%timeit cython_sum1(a)
%%cython --annotate
def cython_sum2(a):
s = 0
for i in range(len(a)):
for j in range(10000):
s = s + a[i]
return s
print('python sum: ',python_sum(a))
print('cython sum1: ',cython_sum1(a))
print('cython sum2: ',cython_sum2(a))
print('python sum')
%timeit python_sum(a)
print('cython sum1')
%timeit cython_sum1(a)
print('cython sum2')
%timeit cython_sum2(a)
%%cython --annotate
from cpython cimport array
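# Typed version: s becomes a C long and the array is read through its raw data pointer, so the inner loop avoids Python object overhead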
def cython_sum3(a):
cdef long s = 0
cdef array.array ta = a
cdef long * ap = ta.data.as_longs
for i in range(len(ta)):
for j in range(10000):
s = s + ap[i]
return s
print('python sum: ',python_sum(a))
print('cython sum1: ',cython_sum1(a))
print('cython sum2: ',cython_sum2(a))
print('cython sum3: ',cython_sum3(a))
print('python sum')
%timeit python_sum(a)
print('cython sum1')
%timeit cython_sum1(a)
print('cython sum2')
%timeit cython_sum2(a)
print('cython sum3')
%timeit cython_sum3(a)
from numba import jit
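# numba JIT-compiles the same pure-Python loop to machine code on first call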
@jit
def numba_sum(a):
s = 0
for i in range(len(a)):
for j in range(10000):
s = s + a[i]
return s
print('python sum: ',python_sum(a))
print('cython sum1: ',cython_sum1(a))
print('cython sum2: ',cython_sum2(a))
print('cython sum3: ',cython_sum3(a))
print('numba sum: ', numba_sum(a))
print('python sum')
%timeit python_sum(a)
print('cython sum1')
%timeit cython_sum1(a)
print('cython sum2')
%timeit cython_sum2(a)
print('cython sum3')
%timeit cython_sum3(a)
print('numba sum')
%timeit numba_sum(a)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This map is an example of an unprojected plot
Step2: But there is a better way
Step3: For a list of projections implemented in geoplot, refer to the projections reference in the cartopy documentation (cartopy is the library geoplot relies on for its projections).
Step4: By default, geoplot will set the extent (the area covered by the plot) to the total_bounds of the last plot stacked onto the map.
Step5: The section of the tutorial on Customizing Plots explains the extent parameter in more detail.
Step6: matplotlib supports subplotting projected maps using the projection argument to subplot_kw.
| <ASSISTANT_TASK:>
Python Code:
import geopandas as gpd
import geoplot as gplt
%matplotlib inline
# load the example data
contiguous_usa = gpd.read_file(gplt.datasets.get_path('contiguous_usa'))
gplt.polyplot(contiguous_usa)
boroughs = gpd.read_file(gplt.datasets.get_path('nyc_boroughs'))
gplt.polyplot(boroughs)
import geoplot.crs as gcrs
gplt.polyplot(contiguous_usa, projection=gcrs.AlbersEqualArea())
cities = gpd.read_file(gplt.datasets.get_path('usa_cities'))
ax = gplt.polyplot(
contiguous_usa,
projection=gcrs.AlbersEqualArea()
)
gplt.pointplot(cities, ax=ax)
ax = gplt.polyplot(
contiguous_usa,
projection=gcrs.AlbersEqualArea()
)
gplt.pointplot(cities, ax=ax, extent=contiguous_usa.total_bounds)
import matplotlib.pyplot as plt
import geoplot as gplt
f, axarr = plt.subplots(1, 2, figsize=(12, 4))
gplt.polyplot(contiguous_usa, ax=axarr[0])
gplt.polyplot(contiguous_usa, ax=axarr[1])
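# Reuse a single projection instance both for the subplot axes (via subplot_kw) and for each geoplot call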
proj = gcrs.AlbersEqualArea(central_longitude=-98, central_latitude=39.5)
f, axarr = plt.subplots(1, 2, figsize=(12, 4), subplot_kw={
'projection': proj
})
gplt.polyplot(contiguous_usa, projection=proj, ax=axarr[0])
gplt.polyplot(contiguous_usa, projection=proj, ax=axarr[1])
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we have the weatherData object with its data variable containing four extra columns, i.e. Month, _LastYearsAvg, _LastYearsStd, and _ThisYear. The columns _LastYearsAvg and _LastYearsStd contain the average and standard deviation over all previous years for the month given in Month. The column _ThisYear contains the average of that month in the current year.
Step2: First let's remove the early years we are not interested in. Note also that there might be months containing -9999; this can happen when they are the first months in which the measurement was available (so no previous years exist). In addition, create a new column containing the difference of the measured averages, which makes them easier to compare.
Step3: In order to get a feel for a good classification of extreme climate events we can look at the distribution of the differences between the average values and at the distribution of the standard deviations. I will take the maximum temperature reading as an example.
Step4: This already gives us an indication that there are events that are extreme in the sense that they deviate by more than 1-2 standard deviations from the average of past years. However, this still does not give us a robust indicator of when to call a climate event extreme.
Step5: For plotting we will remove the few events that deviate so extremely that they would render the plot unreadable
Step6: Here I will use prior knowledge (I already looked at the plot and went back one step) and assume that the distribution will look like a normal distribution. To emphasize this point visually, we can fit the distribution and add the fit to the plot.
Step7: From the plot we can see that there is no obvious cutoff point that we could choose, so we will have to use common sense. I would argue that a good measure is to declare the highest values, roughly the top 15% (everything above the 85th percentile), as extreme. This gives us a cutoff point of
Step8: What the plot above does not tell us is how the individual bins of the histogram are populated in time. By that I mean that each event in the histogram is linked to the year in which the measurement was taken. We can now ask whether events that deviate far from the all-time averages are more likely to have occurred in the recent past, or whether they are equally distributed.
| <ASSISTANT_TASK:>
Python Code:
%pylab inline
import sys
sys.path.insert(0,"../lib/")
from scipy.stats import norm
from climateData import WeatherData
weatherData = WeatherData(years=[1900,2014], optimiseFactor=True)
weatherData.data.head()
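# Keep only years from 1980 onwards and drop months flagged with -9999 (no history available); then store the difference of the averages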
data = weatherData.data[ weatherData.data["Year"] >= 1980 ]
data = data[ np.isclose(data["_LastYearsStd"], -9999.0) == False ]
data["avg_diff"] = data["_LastYearsAvg"] - data["_ThisYear"]
data_tmax = data[ data["Element"] == "TMAX" ]
hist_avg, bin_edges_avg = np.histogram( np.abs(np.abs(np.asarray( data_tmax["avg_diff"] ))), bins=100 )
hist_std, bin_edges_std = np.histogram( np.abs(np.asarray( data_tmax["_LastYearsStd"] )), bins=100 )
fig = plt.figure(figsize=(10, 8), dpi=200)
ax = fig.add_subplot(111)
ax.tick_params(axis='both', which='major', labelsize=12)
label_avg = "Distribution of differences of monthly average\ntemperatures compared to past years"
label_std = "Distribution of standard deviations of monthly\naverage temperatures"
ax.bar(bin_edges_avg[:-1], hist_avg, width = 1.1, facecolor="red", alpha=0.9, label=label_avg);
ax.bar(bin_edges_std[:-1], hist_std, width = 1.1, facecolor="blue", alpha=0.6, label=label_std, zorder=5);
plt.legend();
data["avg_diff_fold"] = np.abs(data["avg_diff"]) / data["_LastYearsStd"]
data_tmax = data[ data["Element"] == "TMAX" ]
tmpData = np.abs(np.asarray( data_tmax["avg_diff_fold"] ))
tmpData = tmpData[ tmpData < np.percentile(tmpData, 99.9) ]
hist_avg_fold, bin_edges_avg_fold = np.histogram(tmpData, bins=100, density=True)
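# Mirror the folded differences around zero and fit a normal distribution to the symmetric sample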
mu, std = norm.fit(np.concatenate((-tmpData, tmpData)))
x = np.linspace(0, 5, 100)
p = norm.pdf(x, mu, std)
print("Fitted a normal distribution at %.1f with standard deviation %.2f" %(mu, std))
fig = plt.figure(figsize=(10, 8), dpi=200)
ax = fig.add_subplot(111)
ax.tick_params(axis='both', which='major', labelsize=12)
label_avg_fold = "Distribution of fold differences of monthly average\ntemperatures compared to past years"
ax.bar(bin_edges_avg_fold[:-1], hist_avg_fold, width = 0.04, facecolor="green", edgecolor="green", alpha=0.9, label=label_avg_fold);
ax.plot(x, 2*p, 'k', linewidth=2)
plt.legend();
cutoff = np.percentile(tmpData, 85)
print("The cutoff point is set to %.2f" %cutoff)
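# For each histogram bin, collect the years of the events that fall into it, then compare each bin's average year with the overall average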
bin_years = list()
for i in range(1,len(bin_edges_avg_fold)):
start, end = bin_edges_avg_fold[i-1], bin_edges_avg_fold[i]
tmp = data_tmax[ data_tmax["avg_diff_fold"] > start ]
tmp = tmp[ tmp["avg_diff_fold"] < end ]
bin_years.append(tmp["Year"])
avg_time = [ np.average(item) for item in bin_years ]
avg_time_X = [ i*0.05 for i in range(1,len(avg_time)+1) ] # make the plot go from 0 to 5 and not from 0 to 100
fig = plt.figure(figsize=(8, 6), dpi=200)
ax = fig.add_subplot(111)
ax.tick_params(axis='both', which='major', labelsize=12)
ax.plot(avg_time_X, avg_time, label="Average year of the histogram bin");
ax.axhline(np.average(data_tmax["Year"]), 0, 100, color="red", label="Total average of years");
plt.legend(loc=2, fontsize=16);
<END_TASK> |