text_prompt | code_prompt
---|---|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: iris is a dict-type object; we can use the following commands to inspect its contents.
Step2: | Item | Description |
Step3: Next, to generate a test matrix covering most of the possible input values, we use the following commands.
Step4: (3) Testing the classifiers and plotting their predicted probability distributions
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn import datasets
iris = datasets.load_iris()
X = iris.data[:, 0:2]  # use only the first two features, to make visualization easier
y = iris.target
n_features = X.shape[1]
for key, value in iris.items():
    try:
        print(key, value.shape)
    except:
        print(key)
C = 1.0
# Create different classifiers. The logistic regression cannot do
# multiclass out of the box.
classifiers = {'L1 logistic': LogisticRegression(C=C, penalty='l1'),
'L2 logistic (OvR)': LogisticRegression(C=C, penalty='l2'),
'Linear SVC': SVC(kernel='linear', C=C, probability=True,
random_state=0),
'L2 logistic (Multinomial)': LogisticRegression(
C=C, solver='lbfgs', multi_class='multinomial'
)}
n_classifiers = len(classifiers)
plt.figure(figsize=(3 * 2, n_classifiers * 2))
plt.subplots_adjust(bottom=.2, top=.95)
xx = np.linspace(3, 9, 100)
yy = np.linspace(1, 5, 100).T
xx, yy = np.meshgrid(xx, yy)
Xfull = np.c_[xx.ravel(), yy.ravel()]
# If running in an IPython notebook (Jupyter), remove the leading '#' from the following line
%matplotlib inline
# The original example does not include the following line; it was added to make the figure look nicer
fig = plt.figure(figsize=(12,12), dpi=300)
for index, (name, classifier) in enumerate(classifiers.items()):
    # Train the classifier and compute its classification accuracy.
    # Note that this example trains and tests on the same data set, which does not reflect real use;
    # cross-validation is recommended for a fairer evaluation (see the short sketch at the end of this code block).
classifier.fit(X, y)
y_pred = classifier.predict(X)
classif_rate = np.mean(y_pred.ravel() == y.ravel()) * 100
print("classif_rate for %s : %f " % (name, classif_rate))
    # View probabilities
probas = classifier.predict_proba(Xfull)
n_classes = np.unique(y_pred).size
for k in range(n_classes):
plt.subplot(n_classifiers, n_classes, index * n_classes + k + 1)
plt.title("Class %d" % k)
if k == 0:
plt.ylabel(name)
imshow_handle = plt.imshow(probas[:, k].reshape((100, 100)),
extent=(3, 9, 1, 5), origin='lower')
plt.xticks(())
plt.yticks(())
idx = (y_pred == k)
if idx.any():
plt.scatter(X[idx, 0], X[idx, 1], marker='o', c='k')
ax = plt.axes([0.15, 0.04, 0.7, 0.05])
plt.title("Probability")
plt.colorbar(imshow_handle, cax=ax, orientation='horizontal')
plt.show()
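# --- Added sketch (not part of the original example) ---
# The comments above note that training and testing on the same data overestimates
# accuracy; a minimal cross-validated estimate for each classifier could look like
# this (cv=5 is an arbitrary choice):
from sklearn.model_selection import cross_val_score
for name, classifier in classifiers.items():
    scores = cross_val_score(classifier, X, y, cv=5)
    print("CV accuracy for %s : %.2f +/- %.2f" % (name, scores.mean(), scores.std()))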
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Household electricity consumption in Europe is led by Germany, followed by France and the United Kingdom. Spain is in 5th place, with household consumption over the period of less than half that of Germany. The tail of the ranking is led by Poland, followed by Belgium and the Netherlands. There appears to be a correlation between the size of a country and its electricity consumption.
Step2: Household electricity consumption between 1990 and 2014 in Central & South America is led by Brazil, followed by Argentina and Venezuela. Although Chile was expected to be among the top three because of its economic development, it is in 5th place, after Colombia. Compared with Brazil (first place), household consumption in Argentina (second place) is about four times lower.
Step3: The comparison between North America, Europe and Central & South America shows that average electricity consumption in North America is 8.5 times higher than in Europe (comparing the leader in each region). Europe, in turn, has an average consumption 1.8 times higher than Central & South America. Within each region the variation is high, with most of the region's consumption concentrated in fewer than 10 countries.
Step4: The distribution of electricity consumption values in the world is asymmetric. While most values lie in the range 0-10 000 GWh, countries like the US consume about 120 times more. Additionally, the cumulative frequency reaches 0.95 at a consumption of about 80 000 GWh, which is similar to the consumption of Brazil.
Step5: There is sustained growth in electricity consumption in Spain from 1990 to 2014. This is a good indicator of the country's economic growth, although between 2005 and 2015 the year-on-year growth slows, owing to aggressive energy-efficiency measures.
Step6: Electricity consumption in the UK grows moderately from 1990 to 2015, with higher growth between 1990 and 2005 than afterwards. In the last 10 years of the period under analysis, the UK's household electricity consumption has decreased, and by the end of the period it has fallen back to the level of the year 2000.
|
<ASSISTANT_TASK:>
Python Code:
#Europe
df5 = df4.loc[df4.index.isin(['Austria', 'Belgium', 'Bulgaria','Croatia', 'Cyprus', 'Czechia','Denmark', 'Estonia','Finland','France','Germany','Greece','Hungary','Ireland','Italy','Latvia','Lithuania','Luxembourg','Malta','Netherlands','Poland','Portugal','Romania','Slovakia', 'Slovenia','Spain', 'Sweden', 'United Kingdom'])]
df6= df5.sort_values(ascending=[False])
plt.figure(figsize=(10, 5))
plt.ylabel('GWh')
plt.title('Average Electricity Consumption in Europe: Household Market 1990-2014')
df6.plot.bar()
#Central & South America
df7 = df4.loc[df4.index.isin(['Antigua and Barbuda', 'Argentina', 'Bahamas','Barbados', 'Belize', 'Bolivia (Plur. State of)','Brazil','Chile','Colombia','Costa Rica','Cuba','Dominica','Dominican Republic','Ecuador','El Salvador','Grenada','Guatemala','Guyana','Haiti','Honduras','Jamaica','Nicaragua','Panama', 'Paraguay','Peru', 'St. Kitts-Nevis', 'St. Lucia','St. Vincent-Grenadines','Suriname','Trinidad and Tobago','Uruguay','Venezuela (Bolivar. Rep.)'])]
df8= df7.sort_values(ascending=[False])
plt.figure(figsize=(10, 5))
plt.ylabel('GWh')
plt.title('Average Electricity Consumption in Central & South America: Household Market 1990-2014')
df8.plot.bar()
#Plotting all the figures together for comparison.
#North America is on a different scale than Europe and Central & South America
plt.figure(figsize=(20, 7))
plt.subplot(1, 3, 1)
df10.plot.bar()
plt.ylabel('GWh')
plt.ylim(0,1200000)
plt.title('Av. Elect. Cons. in N. America: Households 1990-2014')
plt.subplot(1, 3, 2)
df6.plot.bar()
plt.ylabel('GWh')
plt.ylim(0,140000)
plt.title('Av. Elect. Cons. in Europe: Households 1990-2014')
plt.subplot(1, 3, 3)
df8.plot.bar()
plt.ylabel('GWh')
plt.ylim(0,140000)
plt.title('Av. Elect. Cons. in Central & South America: Households 1990-2014')
#plt.legend(loc='lower right',prop={'size':14})
plt.tight_layout()
plt.show()
#Correct the skewness problem when the 3 graphs are plotted together by log-transforming the data.
plt.figure(figsize=(20, 7))
plt.subplot(1, 3, 1)
np.log(df10).plot.bar()
plt.ylabel('Log(GWh)')
plt.ylim(0,14)
plt.title('Av. Elect. Cons. in N. America: Households 1990-2014')
plt.subplot(1, 3, 2)
np.log(df6).plot.bar()
plt.ylabel('Log(GWh)')
plt.ylim(0,14)
plt.title('Av. Elect. Cons. in Europe: Households 1990-2014')
plt.subplot(1, 3, 3)
np.log(df8).plot.bar()
plt.ylabel('Log(GWh)')
plt.ylim(0,14)
plt.title('Av. Elect. Cons. in Central & South America: Households 1990-2014')
#plt.legend(loc='lower right',prop={'size':14})
plt.tight_layout()
plt.show()
#Histograms showing consumption in the World 1990-2014
plt.figure(figsize=(20, 5))
plt.subplot(1, 2, 1)
plt.xlabel("Electricity Consumption")
plt.ylabel("Frequency")
plt.hist(df3['Quantity (GWh)'], bins=5000 ,facecolor='green', alpha=0.5)
plt.axis([0, 20000, 0, 2000])
plt.ylabel('Frequency')
plt.xlabel('Electricity Consumption (GWh)')
plt.title('Distribution of Electricity Consumption in the World 1990-2014')
plt.subplot(1, 2, 2)
plt.xlabel("Electricity Consumption")
plt.ylabel("Frequency")
plt.hist(df3['Quantity (GWh)'], bins=5000 ,facecolor='red', normed=1, cumulative=1, alpha=0.5)
plt.axis([0, 80000, 0, 1])
plt.ylabel('Frequency')
plt.xlabel('Electricity Consumption (GWh)')
plt.title('Cumulative distribution of Electricity Consumption in the World')
plt.tight_layout()
plt.show()
#Dynamic analysis of the electricity consumption in Spain (delving into the details of Europe)
#To see this cell properly, it needs to be run individually while stepping through the notebook.
#When 'Cell - Run All' is used, the graph and an 'error message' appear.
df1 = df.ix[lambda df: w['Country or Area'] == "Spain", :]
plt.scatter(x = df1["Year"], y = df1['Quantity'], color = 'purple', marker = 'o', s = 30)
plt.ylabel('GWh')
plt.xlabel('Year')
plt.ylim([20000, 160000])
plt.title('Electricity consumption in Spain by household 1990-2014')
plt.show()
#Dynamic analysis of electricity consumption in The UK
df2 = df.ix[lambda df: w['Country or Area'] == "United Kingdom", :]
plt.scatter(x = df2["Year"], y = df2['Quantity'], color = 'green', marker = 'x', s = 30)
plt.ylabel('GWh')
plt.xlabel('Year')
plt.ylim([20000, 160000])
plt.title('Electricity consumption in UK by household 1990-2014')
plt.show()
#Dynamic Comparison of the Electricity consumption between The UK & Spain
plt.figure(figsize=(20, 5))
plt.subplot(1, 3, 1)
plt.scatter(x = df1["Year"], y = df1['Quantity'], color = 'purple', marker = 'o', s = 30)
plt.ylabel('GWh')
plt.xlabel('Year')
plt.ylim([20000, 160000])
plt.title('Electricity consumption in Spain by household 1990-2014')
plt.subplot(1, 3, 2)
plt.scatter(x = df2["Year"], y = df2['Quantity'], color = 'green', marker = 'x', s = 30)
plt.ylabel('GWh')
plt.xlabel('Year')
plt.ylim([20000, 160000])
plt.title('Electricity consumption in UK by household 1990-2014')
plt.subplot(1, 3, 3)
plt.scatter(x = df1["Year"], y = df1['Quantity'], color = 'purple', marker= "o", s= 30, label="Spain")
plt.scatter(x = df2["Year"], y = df2['Quantity'], color = 'green', marker ="x", s= 30, label="UK")
plt.ylabel('GWh')
plt.xlabel('Year')
plt.ylim([20000, 160000])
plt.title('Electricity consumption in Spain, UK by household 1990-2014')
plt.legend(loc='lower right',prop={'size':14})
plt.tight_layout()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <div id='intro' />
Step2: The other algorithm we will use is implemented in the function np.linalg.solve of NumPy.
Step3: The following code compares the solution obtained by each approach for the small problem.
Step4: We should see that the outputs x1 and x2 are the same. More on this will be studied in the course; what we will study now (and also later) is the time it takes to find $\mathbf{x}$ with both algorithms.
Step5: Do you notice a difference in the computation time?
Step6: The following code solves 10 linear systems of equations, from size $10\times 10$ up to $2000\times 2000$, and for each one we compute the average time over several executions.
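To see why the two approaches differ in speed, a rough operation count helps (a standard result, not from the original notebook): solving $A\mathbf{x}=\mathbf{b}$ through an LU factorization, which is what np.linalg.solve does, costs about $\tfrac{2}{3}n^{3}$ floating-point operations, while forming $A^{-1}$ explicitly costs roughly $2n^{3}$ and the extra product $A^{-1}\mathbf{b}$ adds about $2n^{2}$ more. For large $n$ the inverse-based route therefore does roughly three times the work, in addition to being less numerically stable.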
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
# This function solves a linear system of equation
# in a NOT recomended way, we just include it here
# for comparison purposes. Please DON'T do this!
def solve_inverse(A,b):
B = np.linalg.inv(A) # FORBIDDEN!!
x = np.dot(B,b)
return x
# We use this to ensure the output is reproducible,
# by setting an initial seed for the
# pseudo-random number generator.
np.random.seed(0)
# Here we define the dimension of the problem we will study
n = 10
# Here we generate 'A' and 'b'.
A = np.random.random((n,n)) # Warning: We hope the matrix is not singular!
b = np.random.random(n)
x1 = np.linalg.solve(A,b)
print('x1: ',x1)
x2 = solve_inverse(A,b)
print('x2: ',x2)
%timeit np.linalg.solve(A,b)
%timeit solve_inverse(A,b)
# This line of code will help us to manipulate the output.
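# (Added note) The %timeit flags used below: -o returns the TimeitResult object so it can be
# stored in a variable, -q suppresses the printed summary, -n sets the number of loops per run,
# and -r sets the number of runs.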
out = %timeit -o -q -n 10 -r 5 np.linalg.solve(A,b)
out.all_runs
def generate_data_Axb():
Ns = np.linspace(10,2000,10, dtype=int)
time_solve = np.zeros(len(Ns))
time_inv = np.zeros(len(Ns))
np.random.seed(0)
for i, n in np.ndenumerate(Ns):
print('working on n =',n)
A = np.random.random((n,n))
b = np.random.random(n)
out = %timeit -o -q -n 5 -r 5 np.linalg.solve(A,b)
time_solve[i] = out.average
out = %timeit -o -q -n 5 -r 5 solve_inverse(A,b)
time_inv[i] = out.average
print('time it took =',time_solve[i]+time_inv[i],'[sec]')
return Ns, time_solve, time_inv
Ns, time_solve, time_inv = generate_data_Axb()
plt.figure(figsize=(5,5))
plt.semilogy(Ns, time_solve, '.', label="np.linalg.solve")
plt.semilogy(Ns, time_inv, '.', label="solve_inverse")
plt.grid(True)
plt.ylabel('Time in [sec]')
plt.xlabel('Dimension of linear system')
plt.legend(loc='best')
plt.show()
plt.figure(figsize=(5,5))
plt.plot(Ns, time_inv/time_solve, 'k.', label="time_inv/time_solve")
plt.grid(True)
plt.ylabel('Time in [sec]')
plt.xlabel('Dimension of linear system')
plt.legend(loc='best')
plt.show()
plt.figure(figsize=(5,5))
plt.loglog(Ns, time_solve, '.', label="np.linalg.solve")
plt.loglog(Ns, time_inv, '.', label="solve_inverse")
plt.grid(True)
plt.ylabel('Time in [sec]')
plt.xlabel('Dimension of linear system')
plt.legend(loc='best')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the data
Step2: Calculate chromosome properties and classify
Step3: 2. Calculate chromosomal properties
Step4: 3. Summarize chromosome classifications by sample
Step5: 4. Calculate chromosome amplicon distribution curves
Step6: Calculate Sample properties and distributions
Step7: Figure 5
Step8: 2. Function for heatmap plotting
Step9: 3. Extract MN2b data
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors
# import matplotlib.gridspec as gridspec
import seaborn as sns
import pathlib
import PaSDqc
%matplotlib inline
sns.set_context('poster')
sns.set_style("ticks", {'ytick.minor.size': 0.0, 'xtick.minor.size': 0.0})
p = pathlib.Path("../01_example_report/psd/")
f_psd_list = sorted(p.glob("*.spec"))
psd_list = [PaSDqc.PSDTools.SamplePSD.load_from_file(str(f), name=f.stem) for f in f_psd_list]
med_list = [PaSDqc.PSDTools.normalize_psd(psd.avg_PSD()) for psd in psd_list]
freq = psd_list[0].freq
period = 1 / freq
chroms = PaSDqc.extra_tools.chroms_from_build('grch37')
[psd.fit_chrom_curves(chroms) for psd in psd_list]
[psd.fit_sample_curves() for psd in psd_list];
[psd.calc_chrom_props(chroms) for psd in psd_list];
psd_list[0].chrom_props
idx = [psd.name.split('.')[0] for psd in psd_list]
df_stat = PaSDqc.extra_tools.summarize_chrom_classif_by_sample(psd_list, idx)
df_stat.iloc[0:5, 0:12]
[psd.infer_chrom_amplicon_dist(chroms) for psd in psd_list];
psd_list[0].chrom_dist.keys()
[psd.calc_sample_props() for psd in psd_list];
[psd.infer_sample_amplicon_dist() for psd in psd_list];
PaSDqc.extra_tools.summarize_sample_props(psd_list, idx)
df_cnvs = pd.read_table("CNVs_Zhang_2015_MN.txt", index_col=0)
columns = [c for c in df_cnvs.columns if c.startswith('MN')]
cols2 = [c.split('.')[0] for c in columns]
df_cnv2 = df_cnvs.loc[:, columns].iloc[:-2, :] - df_cnvs[columns].iloc[:-2, :].median() # removing sex chromosomes
df_cnv2.index = [c.replace('chr', '') for c in df_cnv2.index]
df_cnv2.columns = cols2
# Mask MN9_d since CNV caller failed
mask = (df_cnv2 == 0)
mask.loc[:, 'MN9_d'] = True
def plot_cnv_heatmap(df_cnv, ax=None, add_cbar=True, cbar_ax=None):
if not ax:
f = plt.figure()
ax = f.add_subplot(111)
nd = df_cnv.T.as_matrix()
np_mask = np.ma.array(nd, mask=mask.T)
dcp = sns.diverging_palette(255, 133, l=60, n=7)
dcp_list = [dcp[0], dcp[1], dcp[2], dcp[3], dcp[3], dcp[3], dcp[3], dcp[4], dcp[5], dcp[6]]
cmap = matplotlib.colors.LinearSegmentedColormap.from_list('mycmap', dcp_list)
cax = ax.imshow(np_mask, aspect='equal', cmap=cmap, interpolation=None, vmax=1, vmin=-1)
ax.set_yticks(np.arange(0, df_cnv2.shape[1]))
ax.set_xticks(np.arange(0, df_cnv2.shape[0]))
ax.set_xticks(np.arange(0.5, df_cnv2.shape[0]+0.5), minor=True)
ax.set_xlabel('chromosome')
for y in np.arange(0.5, df_cnv2.shape[1], 1):
ax.axhline(y, linestyle='--', color='black', linewidth=1)
ax.set_yticklabels([])
ax.set_yticklabels([s.replace('_', '') for s in df_cnv2.columns])
ax.set_xticklabels(df_cnv2.index);
ax.grid(which='minor', color='black', linewidth=1)
#colorbar
if add_cbar:
cbar = ax.figure.colorbar(cax, ticks=[-1, 0, 1], cax=cbar_ax, orientation='horizontal')
cbar.ax.set_xticklabels(['Loss', 'Neutral', 'Gain'])
cbar.ax.xaxis.tick_top()
cbar.ax.tick_params(axis='x', which='major', pad=0)
MN2b = psd_list[3]
MN2b_kl = MN2b.KL_div_by_chrom()
freq = MN2b.ampl.freq['erf']
# Sample curves
MN2b_fit = MN2b.sample_curves['avg']
MN2b_fit_lower = MN2b.sample_curves['lower']
MN2b_fit_upper = MN2b.sample_curves['upper']
MN2b_dist = MN2b.sample_dist
# Chrom curves
MN2b_chr2_fit = MN2b.chrom_curves['2']
MN2b_chr2_dist = MN2b.chrom_dist['2']
MN2b_chr12_fit = MN2b.chrom_curves['12']
MN2b_chr12_dist = MN2b.chrom_dist['12']
MN2b_chr21_fit = MN2b.chrom_curves['21']
MN2b_chr21_dist = MN2b.chrom_dist['21']
# MAKE A GIANT FIGURE
f = plt.figure(figsize=(12, 14))
ax0 = plt.subplot2grid((7, 6), (0, 0), colspan=6, rowspan=2)
ax1 = plt.subplot2grid((7, 6), (2, 0), colspan=3, rowspan=2)
ax2 = plt.subplot2grid((7, 6), (2, 3), colspan=3, rowspan=2)
ax3 = plt.subplot2grid((7, 6), (4, 0), colspan=3, rowspan=3)
ax3_cbar = f.add_axes([0.11, 0.39, 0.375, 0.01])
ax4 = plt.subplot2grid((7, 6), (4, 3), colspan=3, rowspan=3)
ax4_cbar = f.add_axes([0.11+0.48, 0.39, 0.375, 0.01])
cp = sns.color_palette()
## KL divergence plot
PaSDqc.extra_tools.plot_KL_div_by_chrom(MN2b_kl, ax=ax0)
## Smooth spectrograms
ax1.plot(freq, MN2b_fit, label='Sample\n Avg')
ax1.fill_between(freq, MN2b_fit_lower, MN2b_fit_upper, color=cp[0], alpha=0.25)
ax1.plot(freq, MN2b_chr2_fit, label='MN2b Chr2 \n (Loss)', color=cp[5])
ax1.plot(freq, MN2b_chr12_fit, label='MN2b Chr12 \n (Gain)', color=cp[1])
ax1.plot(freq, MN2b_chr21_fit, label='MN2b Chr21\n (Aberrant)', color=cp[2])
ax1.set_xscale('log')
ax1.set_xticklabels(["0", "100 bp", "1 kb", "10 kb", "100 kb", "1 mb"])
ax1.set_xlabel('Genomic scale')
ax1.set_ylabel('PSD (dB)')
ax1.legend(bbox_to_anchor=(0, 1, 2.25, .102), loc=(0, 0), ncol=5, mode="expand", borderaxespad=0.)
## Distribution Plots
ax2.plot(MN2b_dist['freq'], MN2b_dist['dist'], label='MN3 Sample Avg', color=cp[0])
ax2.plot(MN2b_chr2_dist['freq'], MN2b_chr2_dist['dist'], label='MN3 chr2 (loss)', color=cp[5])
ax2.plot(MN2b_chr12_dist['freq'], MN2b_chr12_dist['dist'], label='MN3 chr12 (gain)', color=cp[1])
ax2.plot(MN2b_chr21_dist['freq'], MN2b_chr21_dist['dist'], label='MN3 chr21 (aberrant)', color=cp[2])
ax2.set_xscale('log')
ax2.set_xticklabels(["0", "100 bp", "1 kb", "10 kb", "100 kb", "1 mb"])
ax2.set_xlabel('Genomic scale')
ax2.set_ylabel('Density')
PaSDqc.extra_tools.plot_chrom_classification(df_stat, ax=ax3, cbar_ax=ax3_cbar)
ax3.tick_params(axis='x', which='major', labelsize=14, pad=5)
plot_cnv_heatmap(df_cnv2, ax=ax4, cbar_ax=ax4_cbar)
ax4.tick_params(axis='x', which='major', labelsize=14, pad=5)
plt.tight_layout(h_pad=2.5)
sns.despine(ax=ax0)
sns.despine(ax=ax1)
sns.despine(ax=ax2)
f.text(0.02, 0.97, "A", weight="bold", horizontalalignment='left', verticalalignment='center', fontsize=20)
f.text(0.02, 0.70, "B", weight="bold", horizontalalignment='left', verticalalignment='center', fontsize=20)
#f.text(0.49, 0.66, "C", weight="bold", horizontalalignment='left', verticalalignment='center', fontsize=20)
f.text(0.02, 0.43, "C", weight="bold", horizontalalignment='left', verticalalignment='center', fontsize=20)
f.text(0.51, 0.43, "D", weight="bold", horizontalalignment='left', verticalalignment='center', fontsize=20)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Change the command to "echo" and run it again.
Step2: Jupyter-LC_wrapper
Step3: Jupyter-multi_outputs
Step4: In the example above, you can click the tabs to view previous outputs.
Step5: When a previous output is selected for display, the <span class='fa fa-fw fa-thumb-tack'></span> icon changes to <span class='fa fa-fw fa-exchange'></span>. Clicking this <span class='fa fa-fw fa-exchange'></span> lets you compare the selected output with the current output.
|
<ASSISTANT_TASK:>
Python Code:
! echo "This is 1st step" > foo; cat foo
! echo ".. 2nd step..." >> foo && cat foo
!echooooo ".. 3rd step... will fail" >> foo && cat foo
! cat foo
%env lc_wrapper 8:8:10:10
# lc_wrapper s:h:e:f
#
# s : Summary starts when # of output lines exceed 's' (default s=1)
# h : Summary displays the first h lines and max 2 x h error lines.
# e : Max # of output lines in progress.
# f : Summary displays the last f lines (default f=1)
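# Example (added for illustration): '%env lc_wrapper 2:2:50:5' would start summarizing once the
# output exceeds 2 lines, show the first 2 lines (and up to 4 error lines), stream at most
# 50 lines while running, and end the summary with the last 5 lines.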
!!from time import sleep
with open("./resources/bootstrap.log", "r") as f: # "/var/log/bootstrap.log"
count = 0
limit = 100
for line in f:
count = count+1
if count > limit: break
print(line, end=''),
sleep(0.05)
print ("after", limit, "lines are ignored")
#
# Emulate large log output..
import pandas
import matplotlib
import matplotlib.pyplot as plt
import random
%matplotlib inline
plot_df = pandas.DataFrame({
'col1': [12, 3, random.randint(1,10), 4],
'col2': [3, 12, 5, 2],
'col3': [random.randint(4,7), 10, 3, random.randint(0,2)],
'col4': [random.randint(0,11), 5, random.randint(6,12), random.randint(0,5)],
})
plot_df
plot_df.plot()
plot_df
plot_df.transpose()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters and read data
Step2: Loop through frequencies, apply classifier and save scores
Step3: Plot frequency results
Step4: Loop through frequencies and time, apply classifier and save scores
Step5: Plot time-frequency results
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Laura Gwilliams <laura.gwilliams@nyu.edu>
# Jean-Remi King <jeanremi.king@gmail.com>
# Alex Barachant <alexandre.barachant@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
from mne import Epochs, create_info, events_from_annotations
from mne.io import concatenate_raws, read_raw_edf
from mne.datasets import eegbci
from mne.decoding import CSP
from mne.time_frequency import AverageTFR
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import LabelEncoder
event_id = dict(hands=2, feet=3) # motor imagery: hands vs feet
subject = 1
runs = [6, 10, 14]
raw_fnames = eegbci.load_data(subject, runs)
raw = concatenate_raws([read_raw_edf(f, preload=True) for f in raw_fnames])
# Extract information from the raw file
sfreq = raw.info['sfreq']
events, _ = events_from_annotations(raw, event_id=dict(T1=2, T2=3))
raw.pick_types(meg=False, eeg=True, stim=False, eog=False, exclude='bads')
# Assemble the classifier using scikit-learn pipeline
clf = make_pipeline(CSP(n_components=4, reg=None, log=True, norm_trace=False),
LinearDiscriminantAnalysis())
n_splits = 5 # how many folds to use for cross-validation
cv = StratifiedKFold(n_splits=n_splits, shuffle=True)
# Classification & Time-frequency parameters
tmin, tmax = -.200, 2.000
n_cycles = 10. # how many complete cycles: used to define window size
min_freq = 5.
max_freq = 25.
n_freqs = 8 # how many frequency bins to use
# Assemble list of frequency range tuples
freqs = np.linspace(min_freq, max_freq, n_freqs) # assemble frequencies
freq_ranges = list(zip(freqs[:-1], freqs[1:])) # make freqs list of tuples
# Infer window spacing from the max freq and number of cycles to avoid gaps
window_spacing = (n_cycles / np.max(freqs) / 2.)
centered_w_times = np.arange(tmin, tmax, window_spacing)[1:]
n_windows = len(centered_w_times)
# Instantiate label encoder
le = LabelEncoder()
# init scores
freq_scores = np.zeros((n_freqs - 1,))
# Loop through each frequency range of interest
for freq, (fmin, fmax) in enumerate(freq_ranges):
# Infer window size based on the frequency being used
w_size = n_cycles / ((fmax + fmin) / 2.) # in seconds
# Apply band-pass filter to isolate the specified frequencies
raw_filter = raw.copy().filter(fmin, fmax, n_jobs=1, fir_design='firwin',
skip_by_annotation='edge')
# Extract epochs from filtered data, padded by window size
epochs = Epochs(raw_filter, events, event_id, tmin - w_size, tmax + w_size,
proj=False, baseline=None, preload=True)
epochs.drop_bad()
y = le.fit_transform(epochs.events[:, 2])
X = epochs.get_data()
# Save mean scores over folds for each frequency and time window
freq_scores[freq] = np.mean(cross_val_score(estimator=clf, X=X, y=y,
scoring='roc_auc', cv=cv,
n_jobs=1), axis=0)
plt.bar(freqs[:-1], freq_scores, width=np.diff(freqs)[0],
align='edge', edgecolor='black')
plt.xticks(freqs)
plt.ylim([0, 1])
plt.axhline(len(epochs['feet']) / len(epochs), color='k', linestyle='--',
label='chance level')
plt.legend()
plt.xlabel('Frequency (Hz)')
plt.ylabel('Decoding Scores')
plt.title('Frequency Decoding Scores')
# init scores
tf_scores = np.zeros((n_freqs - 1, n_windows))
# Loop through each frequency range of interest
for freq, (fmin, fmax) in enumerate(freq_ranges):
# Infer window size based on the frequency being used
w_size = n_cycles / ((fmax + fmin) / 2.) # in seconds
# Apply band-pass filter to isolate the specified frequencies
raw_filter = raw.copy().filter(fmin, fmax, n_jobs=1, fir_design='firwin',
skip_by_annotation='edge')
# Extract epochs from filtered data, padded by window size
epochs = Epochs(raw_filter, events, event_id, tmin - w_size, tmax + w_size,
proj=False, baseline=None, preload=True)
epochs.drop_bad()
y = le.fit_transform(epochs.events[:, 2])
# Roll covariance, csp and lda over time
for t, w_time in enumerate(centered_w_times):
# Center the min and max of the window
w_tmin = w_time - w_size / 2.
w_tmax = w_time + w_size / 2.
# Crop data into time-window of interest
X = epochs.copy().crop(w_tmin, w_tmax).get_data()
# Save mean scores over folds for each frequency and time window
tf_scores[freq, t] = np.mean(cross_val_score(estimator=clf, X=X, y=y,
scoring='roc_auc', cv=cv,
n_jobs=1), axis=0)
# Set up time frequency object
av_tfr = AverageTFR(create_info(['freq'], sfreq), tf_scores[np.newaxis, :],
centered_w_times, freqs[1:], 1)
chance = np.mean(y) # set chance level to white in the plot
av_tfr.plot([0], vmin=chance, title="Time-Frequency Decoding Scores",
cmap=plt.cm.Reds)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 2
Step2: Step 3
Step3: Step 4
Step4: Step 5
Step5: Step 6
Step6: Step 7
Step7: Step 8
Step8: Step 9
Step9: Step 10
Step10: Step 11
Step11: Step 13
Step12: Step 14
Step13: Step 15
|
<ASSISTANT_TASK:>
Python Code:
import os, re
# Provide the directory for your index and read files (you can do multiple independently in one go)
bioblitz = '/home/roli/BioBlitz.2017/SV_based/'
# Prepare an object with the name of the library, the name of the directory object (created above), and the metadatafile name
#datasets = [['name',directory1,'metadata1','domain of life'],['name',directory2,'metadata2','domain of life']]
datasets = [['bioblitz',bioblitz,'metadata.tsv','bacteria']]
# Ensure your reads files are named accordingly (or modify to suit your needs)
readFile1 = 'read1.fq.gz'
readFile2 = 'read2.fq.gz'
indexFile1 = 'index_read1.fq.gz'
indexFile2 = 'index_read2.fq.gz'
## Enter Minimum Support for Keeping QIIME Classification
# Note: Classifications that do not meet this criterion will simply be retained, but labeled 'putative'
min_support = 0.8
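# A minimal illustration (added; not part of the original pipeline) of the rule described above,
# as applied later in format_taxonomy(): low-confidence assignments keep their taxonomy string,
# but every rank is prefixed with 'putative'.
example_confidence = 0.65
example_tax = "k__Bacteria; p__Firmicutes"
if example_confidence < min_support:
    example_tax = re.sub("__", "__putative ", example_tax)
print(example_tax)   # -> k__putative Bacteria; p__putative Firmicutes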
## Note: QIIME takes a single barcode file. The command 'extract_barcodes.py' concatenates the forward and reverse read barcode and attributes it to a single read.
# See http://qiime.org/tutorials/processing_illumina_data.html
for dataset in datasets:
directory = dataset[1]
index1 = directory+indexFile1
index2 = directory+indexFile2
# Run extract_barcodes to merge the two index files
!python2 /opt/anaconda2/bin/extract_barcodes.py --input_type barcode_paired_end -f $index1 -r $index2 --bc1_len 8 --bc2_len 8 -o $directory/output
# QIIME2 import requires a directory containing files names: forward.fastq.gz, reverse.fastq.gz and barcodes.fastq.gz
!ln -s $directory$readFile1 $directory/output/forward.fastq.gz
!ln -s $directory$readFile2 $directory/output/reverse.fastq.gz
# Gzip the barcodes files (apparently necessary)
!pigz -p 5 $directory/output/barcodes.fastq
# Removed orphaned reads files (not needed)
!rm $directory/output/reads?.fastq
for dataset in datasets:
name = dataset[0]
directory = dataset[1]
os.system(' '.join([
"qiime tools import",
"--type EMPPairedEndSequences",
"--input-path "+directory+"output/",
"--output-path "+directory+"output/"+name+".qza"
]))
# This more direct command is broken by the fact QIIME uses multiple dashes in their arguments (is my theory)
#!qiime tools import --type EMPPairedEndSequences --input-path $directory/output --output-path $directory/output/$name.qza
########
## Note: The barcode you supply to QIIME is now a concatenation of your forward and reverse barcode.
# Your 'forward' barcode is actually the reverse complement of your reverse barcode and the 'reverse' is your forward barcode. The file 'primers.complete.csv' provides this information corresponding to the Buckley Lab 'primer number'
# This quirk could be corrected in how different sequencing facilities pre-process the output from the sequencer
##
## SLOW STEP (~ 2 - 4 hrs)
##
for dataset in datasets:
name = dataset[0]
directory = dataset[1]
metadata = dataset[2]
os.system(' '.join([
"qiime demux emp-paired",
"--m-barcodes-file "+directory+metadata,
"--m-barcodes-category BarcodeSequence",
"--i-seqs "+directory+"output/"+name+".qza",
"--o-per-sample-sequences "+directory+"output/"+name+".demux"
]))
# This more direct command is broken by the fact QIIME uses multiple dashes in their arguments (is my theory)
#!qiime demux emp-paired --m-barcodes-file $directory/$metadata --m-barcodes-category BarcodeSequence --i-seqs $directory/output/$name.qza --o-per-sample-sequences $directory/output/$name.demux
## Based on the Graph Produced using the Following Command enter the trim and truncate values. Trim refers to the start of a sequence and truncate the total length (i.e. number of bases to remove from end)
# The example in the Atacam Desert Tutorial trims 13 bp from the start of each read and does not remove any bases from the end of the 150 bp reads:
# --p-trim-left-f 13 \
# --p-trim-left-r 13 \
# --p-trunc-len-f 150 \
# --p-trunc-len-r 150
for dataset in datasets:
name = dataset[0]
directory = dataset[1]
os.system(' '.join([
"qiime demux summarize",
"--i-data "+directory+"/output/"+name+".demux.qza",
"--o-visualization "+directory+"/output/"+name+".demux.QC.summary.qzv"
]))
## Take the output from this command and drop it into:
#https://view.qiime2.org
wait_for_user = input("The script will now wait for you to input trimming parameters in the next cell. You will need to take the .qzv files for each library and visualize them at <https://view.qiime2.org>. This is hopefully temporary, while QIIME2 developers improve on q2view.\n\n[ENTER ANYTHING. THIS IS ONLY MEANT TO PAUSE THE PIPELINE]")
print("\nThe script is now proceeding. Stay tuned to make sure trimming works.")
## User Input Required
trim_dict = {}
## Input your trimming parameters into a python dictionary for all libraries
#trim_dict["LibraryName1"] = [trim_forward, truncate_forward, trim_reverse, truncate_reverse]
#trim_dict["LibraryName2"] = [trim_forward, truncate_forward, trim_reverse, truncate_reverse]
## Example
trim_dict["bioblitz"] = [1, 240, 1, 190]
## Hack for Multithreading
# I hardcoded 'nthreads' in both versions of 'run_dada_paired.R' (find your versions by running 'locate run_dada_paired.R' from your home directory)
# I used ~ 20 threads and the processing finished in ~ 7 - 8hrs
##
## SLOW STEP (~ 6 - 8 hrs, IF multithreading is used)
##
for dataset in datasets:
name = dataset[0]
directory = dataset[1]
os.system(' '.join([
"qiime dada2 denoise-paired",
"--i-demultiplexed-seqs "+directory+"/output/"+name+".demux.qza",
"--o-table "+directory+"/output/"+name+".table",
"--o-representative-sequences "+directory+"/output/"+name+".rep.seqs.final",
"--p-trim-left-f "+str(trim_dict[name][0]),
"--p-trim-left-r "+str(trim_dict[name][2]),
"--p-trunc-len-f "+str(trim_dict[name][1]),
"--p-trunc-len-r "+str(trim_dict[name][3])
]))
for dataset in datasets:
name = dataset[0]
directory = dataset[1]
metadata = dataset[2]
os.system(' '.join([
"qiime feature-table summarize",
"--i-table "+directory+"/output/"+name+".table.qza",
"--o-visualization "+directory+"/output/"+name+".table.qzv",
"--m-sample-metadata-file "+directory+metadata
]))
os.system(' '.join([
"qiime feature-table tabulate-seqs",
"--i-data "+directory+"/output/"+name+".rep.seqs.final.qza",
"--o-visualization "+directory+"/output/"+name+".rep.seqs.final.qzv"
]))
## Hack for Multithreading
# I hardcoded 'n_threads' in '_mafft.py' in the directory ~/anaconda3/envs/qiime2-2017.9/lib/python3.5/site-packages/q2_alignment
# I used ~ 20 threads and the processing finished in ~ 15 min
for dataset in datasets:
name = dataset[0]
directory = dataset[1]
metadata = dataset[2]
domain = dataset[3]
if domain != "fungi":
# Generate Alignment with MAFFT
os.system(' '.join([
"qiime alignment mafft",
"--i-sequences "+directory+"/output/"+name+".rep.seqs.final.qza",
"--o-alignment "+directory+"/output/"+name+".rep.seqs.aligned.qza"
]))
# Mask Hypervariable parts of Alignment
os.system(' '.join([
"qiime alignment mask",
"--i-alignment "+directory+"/output/"+name+".rep.seqs.aligned.qza",
"--o-masked-alignment "+directory+"/output/"+name+".rep.seqs.aligned.masked.qza"
]))
# Generate Tree with FastTree
os.system(' '.join([
"qiime phylogeny fasttree",
"--i-alignment "+directory+"/output/"+name+".rep.seqs.aligned.masked.qza",
"--o-tree "+directory+"/output/"+name+".rep.seqs.tree.unrooted.qza"
]))
# Root Tree
os.system(' '.join([
"qiime phylogeny midpoint-root",
"--i-tree "+directory+"/output/"+name+".rep.seqs.tree.unrooted.qza",
"--o-rooted-tree "+directory+"/output/"+name+".rep.seqs.tree.final.qza"
]))
for dataset in datasets:
name = dataset[0]
directory = dataset[1]
metadata = dataset[2]
domain = dataset[3]
# Classify
if domain == 'bacteria':
os.system(' '.join([
"qiime feature-classifier classify-sklearn",
"--i-classifier /home/db/GreenGenes/qiime2_13.8.99_515.806_nb.classifier.qza",
"--i-reads "+directory+"/output/"+name+".rep.seqs.final.qza",
"--o-classification "+directory+"/output/"+name+".taxonomy.final.qza"
]))
if domain == 'fungi':
os.system(' '.join([
"qiime feature-classifier classify-sklearn",
"--i-classifier /home/db/UNITE/qiime2_unite_ver7.99_20.11.2016_classifier.qza",
"--i-reads "+directory+"/output/"+name+".rep.seqs.final.qza",
"--o-classification "+directory+"/output/"+name+".taxonomy.final.qza"
]))
# Output Summary
os.system(' '.join([
"qiime metadata tabulate",
"--m-input-file "+directory+"/output/"+name+".taxonomy.final.qza",
"--o-visualization "+directory+"/output/"+name+".taxonomy.final.summary.qzv"
]))
## Make Function to Re-Format Taxonomy File to Contain Full Column Information
# and factor in the certain of the taxonomic assignment
def format_taxonomy(tax_file, min_support):
output = open(re.sub(".tsv",".fixed.tsv",tax_file), "w")
output.write("\t".join(["OTU","Domain","Phylum","Class","Order","Family","Genus","Species"])+"\n")
with open(tax_file, "r") as f:
next(f) #skip header
for line in f:
line = line.strip()
line = line.split("\t")
read_id = line[0]
tax_string = line[1]
# Annotate those strings which do not meet minimum support
if float(line[2]) < float(min_support):
tax_string = re.sub("__","__putative ",tax_string)
# Remove All Underscore Garbage (gimmie aesthetics)
tax_string = re.sub("k__|p__|c__|o__|f__|g__|s__","",tax_string)
# Add in columns containing unclassified taxonomic information
# Predicated on maximum 7 ranks (Domain -> Species)
full_rank = tax_string.split(";")
last_classified = full_rank[len(full_rank)-1]
count = 1
while last_classified == " ":
last_classified = full_rank[len(full_rank)-count]
count = count + 1
for n in range(full_rank.index(last_classified)+1, 7, 1):
try:
full_rank[n] = "unclassifed "+last_classified
except:
full_rank.append("unclassifed "+last_classified)
output.write(read_id+"\t"+'\t'.join(full_rank)+"\n")
return()
#####################
## Export from QIIME2
for dataset in datasets:
name = dataset[0]
directory = dataset[1]
metadata = dataset[2]
domain = dataset[3]
## Final Output Names
fasta_file = directory+"/output/"+name+".rep.seqs.final.fasta"
tree_file = directory+"/output/"+name+".tree.final.nwk"
tax_file = directory+"/output/"+name+".taxonomy.final.tsv"
count_table = directory+"/output/"+name+".counts.final.biom"
# Export Classifications
os.system(' '.join([
"qiime tools export",
directory+"/output/"+name+".taxonomy.final.qza",
"--output-dir "+directory+"/output/"
]))
# Reformat Classifications to meet phyloseq format
format_taxonomy(directory+"/output/taxonomy.tsv", min_support)
# Export SV Table
os.system(' '.join([
"qiime tools export",
directory+"/output/"+name+".table.qza",
"--output-dir "+directory+"/output/"
]))
# Export SV Sequences
os.system(' '.join([
"qiime tools export",
directory+"/output/"+name+".rep.seqs.final.qza",
"--output-dir "+directory+"/output/"
]))
# Export Tree
os.system(' '.join([
"qiime tools export",
directory+"/output/"+name+".rep.seqs.tree.final.qza",
"--output-dir "+directory+"/output/"
]))
# Rename Exported Files
%mv $directory/output/dna-sequences.fasta $fasta_file
%mv $directory/output/feature-table.biom $count_table
%mv $directory/output/taxonomy.fixed.tsv $tax_file
if domain == "bacteria":
%mv $directory/output/tree.nwk $tree_file
## This step is based on the database constructed for the software 'copyrighter'
## The software itself lacked information about its data structure (and the import of a biom from QIIME2 failed, likely because there are multiple versions of the biom format)
downloaded = "N"
for dataset in datasets:
name = dataset[0]
directory = dataset[1]
domain = dataset[3]
if domain == 'bacteria':
if downloaded == "N":
## Download copyrighter database
!git clone https://github.com/fangly/AmpliCopyrighter $directory/temp/
## There are multiple GreenGenes ID numbers for a given taxonomic string.
## However, the copyrighter database uses the same average rrn copy number.
## We will therefore just use the taxonomic strings, since QIIME2 does not output the ID numbers
!sed -e '1,1075178d; 1078115d' $directory/temp/data/201210/ssu_img40_gg201210.txt > $directory/output/copyrighter.tax.strings.tsv
## Create Dictionary of rrnDB
rrnDB = {}
with open(directory+"/output/copyrighter.tax.strings.tsv", "r") as f:
for line in f:
line = line.strip()
line = line.split("\t")
try:
rrnDB[line[0]] = line[1]
except:
pass
downloaded = "Y"
## Attribute rrn to readID from taxonomy.tsv
output = open(directory+"/output/"+name+".seqID.to.rrn.final.tsv","w")
output.write("Feature ID\trrn\n")
with open(directory+"/output/taxonomy.tsv", "r") as f:
missing = 0
total = 0
next(f) # Skip Header
for line in f:
line = line.strip()
line = line.split("\t")
seqID = line[0]
try:
rrn = rrnDB[line[1]]
except:
rrn = "NA"
missing = missing + 1
total = total + 1
output.write(seqID+"\t"+rrn+"\n")
print("\nPercent of OTUs Missing {:.1%}".format(float(missing)/total))
print("Don't Panic! The majority of missing OTUs could be low abundance.")
## Setup R-Magic for Jupyter Notebooks
import rpy2
%load_ext rpy2.ipython
def fix_biom_conversion(file):
with open(file, 'r') as fin:
data = fin.read().splitlines(True)
with open(file, 'w') as fout:
fout.writelines(data[1:])
import pandas as pd
%R library(phyloseq)
%R library(ape)
for dataset in datasets:
name = dataset[0]
directory = dataset[1]
metadata = dataset[2]
domain = dataset[3]
#### IMPORT DATA to R
## For '.tsv' files, use Pandas to create a dataframe and then pipe that to R
## For '.biom' files, first convert using 'biom convert' on the command-line
## Had problems importing the count table with pandas, opted for using read.table in R
# Import Taxonomy File
tax_file = pd.read_csv(directory+"/output/"+name+".taxonomy.final.tsv", sep="\t")
%R -i tax_file
%R rownames(tax_file) = tax_file$OTU
%R tax_file$OTU <- NULL
%R tax_file <- tax_file[sort(row.names(tax_file)),] #read names must match the count_table
# Import Sample Data
#sample_file = pd.read_csv(directory+"/"+metadata, sep="\t")
sample_file = pd.read_table(directory+metadata, keep_default_na=False)
%R -i sample_file
%R rownames(sample_file) = sample_file$X.SampleID
%R sample_file$X.SampleID <- NULL
%R sample_file$LinkerPrimerSequence <- NULL ## Clean-up some other stuff
# Import Count Data
os.system(' '.join([
"biom convert",
"-i",
directory+"/output/"+name+".counts.final.biom",
"-o",
directory+"/output/"+name+".counts.final.tsv",
"--to-tsv"
]))
# The biom converter adds a stupid line that messes with the table formatting
fix_biom_conversion(directory+"/output/"+name+".counts.final.tsv")
# Finally import
count_table = pd.read_csv(directory+"/output/"+name+".counts.final.tsv", sep="\t")
%R -i count_table
%R rownames(count_table) = count_table$X.OTU.ID
%R count_table$X.OTU.ID <- NULL
%R count_table <- count_table[sort(row.names(count_table)),] #read names must match the tax_table
# Convert to Phyloseq Objects
%R p_counts = otu_table(count_table, taxa_are_rows = TRUE)
%R p_samples = sample_data(sample_file)
%R p_tax = tax_table(tax_file)
%R taxa_names(p_tax) <- rownames(tax_file) # phyloseq throws out rownames
%R colnames(p_tax) <- colnames(tax_file) # phyloseq throws out colnames
# Merge Phyloseq Objects
%R p = phyloseq(p_counts, p_tax)
# Import Phylogenetic Tree
if domain == "bacteria":
tree_file = directory+"/output/"+name+".tree.final.nwk"
%R -i tree_file
%R p_tree <- read.tree(tree_file)
# Combine All Objects into One Phyloseq
%R p_final <- merge_phyloseq(p, p_samples, p_tree)
else:
# Combine All Objects into One Phyloseq
%R p_final <- merge_phyloseq(p, p_samples)
# Save Phyloseq Object as '.rds'
output = directory+"/output/p_"+name+".final.rds"
%R -i output
%R saveRDS(p_final, file = output)
# Confirm Output
%R print(p_final)
for dataset in datasets:
directory = dataset[1]
metadata = dataset[2]
# Remove Files
if domain == "bacteria":
%rm -r $directory/output/*tree.unrooted.qza
%rm -r $directory/output/*aligned.masked.qza
%rm $directory/output/*.biom
%rm -r $directory/temp/
%rm $directory/output/*barcodes.fastq.gz
%rm $directory/output/taxonomy.tsv
%rm $directory/output/forward.fastq.gz # Just the symlink
%rm $directory/output/reverse.fastq.gz # Just the symlink
%rm $directory/output/copyrighter.tax.strings.tsv
# Separate Final Files
%mkdir $directory/final/
%mv $directory/output/*.final.rds $directory/final/
%mv $directory/output/*.taxonomy.final.tsv $directory/final/
%mv $directory/output/*.counts.final.tsv $directory/final/
%mv $directory/output/*.final.fasta $directory/final/
%cp $directory$metadata $directory/final/
%mv $directory/output/*.seqID.to.rrn.final.tsv $directory/final/
%mv $directory/output/*.nwk $directory/final/
# Gzip and Move Intermediate Files
!pigz -p 10 $directory/output/*.qza
!pigz -p 10 $directory/output/*.qzv
%mv $directory/output/ $directory/intermediate_files
print("Your sequences have been successfully saved to 'final' and 'intermediate_files'")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Create a linear stream of 10 million points between -50 and 50.
Step2: Create random noise of same dimension
Step3: Define the function
Step4: Train test split
Step5: Plotting algorithms cannot work with millions of points, so you downsample just for plotting
Step7: Curve fitting
Step8: Run the model. Change the max_order to higher or lower if you wish
Step9: Plot results
Step10: Test results
Step11: Bias vs Variance
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
x = np.arange(-50,50,0.00001)
x.shape
bias = np.random.standard_normal(x.shape)
y2 = np.cos(x)**3 * (x**2/max(x)) + bias*5
x_train, x_test, y_train, y_test = train_test_split(x,y2, test_size=0.3)
x_train.shape
stepper = int(x_train.shape[0]/1000)
stepper
fig, ax = plt.subplots(1,1, figsize=(13,8))
ax.scatter(x[::stepper],y2[::stepper], marker='d')
ax.set_title('Distribution of training points')
def greedy_fitter(x_train, y_train, x_test, y_test, max_order=25):
    """Fitter will try to find the best order of
    polynomial curve fit for the given synthetic data.
    """
import time
train_predictions=[]
train_rmse=[]
test_predictions=[]
test_rmse=[]
for order in range(1,max_order+1):
t1 = time.time()
coeff = np.polyfit(x_train, y_train, deg=order)
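        # (Added note) np.polyval(coeff, x_train) would evaluate the same polynomial in one
        # call; the explicit loops below are kept to make the construction visible.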
n_order = order
count = 0
y_predict = np.zeros(x_train.shape)
while n_order >=0:
y_predict += coeff[count]*x_train**n_order
count+=1
n_order = n_order-1
# append to predictions
train_predictions.append(y_predict)
# find training errors
current_train_rmse =np.sqrt(mean_squared_error(y_train, y_predict))
train_rmse.append(current_train_rmse)
# predict and find test errors
n_order = order
count = 0
y_predict_test = np.zeros(x_test.shape)
while n_order >=0:
y_predict_test += coeff[count]*x_test**n_order
count+=1
n_order = n_order-1
# append test predictions
test_predictions.append(y_predict_test)
# find test errors
current_test_rmse =np.sqrt(mean_squared_error(y_test, y_predict_test))
test_rmse.append(current_test_rmse)
t2 = time.time()
elapsed = round(t2-t1, 3)
print("Elapsed: " + str(elapsed) + \
"s Order: " + str(order) + \
" Train RMSE: " + str(round(current_train_rmse, 4)) + \
" Test RMSE: " + str(round(current_test_rmse, 4)))
return (train_predictions, train_rmse, test_predictions, test_rmse)
%%time
complexity=50
train_predictions, train_rmse, test_predictions, test_rmse = greedy_fitter(
x_train, y_train, x_test, y_test, max_order=complexity)
%%time
fig, axes = plt.subplots(1,1, figsize=(15,15))
axes.scatter(x_train[::stepper], y_train[::stepper],
label='Original data', color='gray', marker='x')
order=1
for p, r in zip(train_predictions, train_rmse):
axes.scatter(x_train[:stepper], p[:stepper],
label='O: ' + str(order) + " RMSE: " + str(round(r,2)),
marker='.')
order+=1
axes.legend(loc=0)
axes.set_title('Performance against training data')
%%time
fig, axes = plt.subplots(1,1, figsize=(15,15))
axes.scatter(x_test[::stepper], y_test[::stepper],
label='Test data', color='gray', marker='x')
order=1
for p, r in zip(test_predictions, test_rmse):
axes.scatter(x_test[:stepper], p[:stepper],
label='O: ' + str(order) + " RMSE: " + str(round(r,2)),
marker='.')
order+=1
axes.legend(loc=0)
axes.set_title('Performance against test data')
ax = plt.plot(np.arange(1,complexity+1),test_rmse)
plt.title('Bias vs Complexity'); plt.xlabel('Order of polynomial'); plt.ylabel('Test RMSE')
ax[0].axes.get_yaxis().get_major_formatter().set_useOffset(False)
plt.savefig('Model efficiency.png')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Plotting functions
Step14: Restricted Boltzmann Machines
Step15: Load MNIST
Step17: Training with optax
Step18: Evaluating Training
Step20: Classification
Step21: The increase in accuracy here is modest because of the small number of hidden units. When 1000 hidden units are used the Epoch-5 accuracy approaches 97.5%.
Step22: We can explore the quality of the learned hidden transformation by inspecting reconstructions of these test images.
|
<ASSISTANT_TASK:>
Python Code:
!pip install optax
import numpy as np
import jax
from jax import numpy as jnp
from jax import grad, jit, vmap, random
import optax
import tensorflow_datasets as tfds
from sklearn.linear_model import LogisticRegression
from matplotlib import pyplot as plt
import matplotlib.gridspec as gridspec
def plot_digit(img, label=None, ax=None):
    """Plot MNIST Digit."""
if ax is None:
fig, ax = plt.subplots()
if img.ndim == 1:
img = img.reshape(28, 28)
ax.imshow(img.squeeze(), cmap="Greys_r")
ax.axis("off")
if label is not None:
ax.set_title(f"Label:{label}", fontsize=10, pad=1.3)
return ax
def grid_plot_imgs(imgs, dim=None, axs=None, labels=None, figsize=(5, 5)):
    """Plot a series of digits in a grid."""
if dim is None:
if axs is None:
n_imgs = len(imgs)
dim = np.sqrt(n_imgs)
if not dim.is_integer():
raise ValueError("If dim not specified `len(imgs)` must be a square number.")
else:
dim = int(dim)
else:
dim = len(axs)
if axs is None:
gridspec_kw = {"hspace": 0.05, "wspace": 0.05}
if labels is not None:
gridspec_kw["hspace"] = 0.25
fig, axs = plt.subplots(dim, dim, figsize=figsize, gridspec_kw=gridspec_kw)
for n in range(dim**2):
img = imgs[n]
row_idx = n // dim
col_idx = n % dim
axi = axs[row_idx, col_idx]
if labels is not None:
ax_label = labels[n]
else:
ax_label = None
plot_digit(img, ax=axi, label=ax_label)
return axs
def gridspec_plot_imgs(imgs, gs_base, title=None, dim=5):
    """Plot digits into a gridspec subgrid.

    Args:
      imgs - images to plot.
      gs_base - from `gridspec.GridSpec`
      title - subgrid title.

    Note that, in general, this type of plotting is considerably simpler
    with `fig.subfigures()`; however, that requires matplotlib >=3.4,
    which has some conflicts with the default colab setup as of the time of
    writing.
    """
gs0 = gs_base.subgridspec(dim, dim)
for i in range(dim):
for j in range(dim):
ax = fig.add_subplot(gs0[i, j])
plot_digit(imgs[i * dim + j], ax=ax)
if (i == 0) and (j == 2):
if title is not None:
ax.set_title(title)
def initialise_params(N_vis, N_hid, key):
    """Initialise the parameters.
Args:
N_vis - number of visible units.
N_hid - number of hidden units.
key - PRNG key.
Returns:
      params - (W, a, b), Weights and biases for network.
    """
W_key, a_key, b_key = random.split(key, 3)
W = random.normal(W_key, (N_vis, N_hid)) * 0.01
a = random.normal(a_key, (N_hid,)) * 0.01
b = random.normal(b_key, (N_vis,)) * 0.01
return (W, a, b)
@jit
def sample_hidden(vis, params, key):
    """Performs the hidden layer sampling, P(h|v;θ).
Args:
vis - state of the visible units.
params - (W, a, b), Weights and biases for network.
key - PRNG key.
Returns:
      The probabilities and states of the hidden layer sampling.
    """
W, a, _ = params
activation = jnp.dot(vis, W) + a
hid_probs = jax.nn.sigmoid(activation)
hid_states = random.bernoulli(key, hid_probs).astype("int8")
return hid_probs, hid_states
@jit
def sample_visible(hid, params, key):
    """Performs the visible layer sampling, P(v|h;θ).
Args:
hid - state of the hidden units
params - (W, a, b), Weights and biases for network.
key - PRNG key.
Returns:
      The probabilities and states of the visible layer sampling.
    """
W, _, b = params
activation = jnp.dot(hid, W.T) + b
vis_probs = jax.nn.sigmoid(activation)
vis_states = random.bernoulli(key, vis_probs).astype("int8")
return vis_probs, vis_states
@jit
def CD1(vis_sample, params, key):
    """The one-step contrastive divergence algorithm.
Can handle batches of training data.
Args:
vis_sample - sample of visible states from data.
params - (W, a, b), Weights and biases for network.
key - PRNG key.
Returns:
An estimate of the gradient of the log likelihood with respect
      to the parameters.
    """
key, subkey = random.split(key)
hid_prob0, hid_state0 = sample_hidden(vis_sample, params, subkey)
key, subkey = random.split(key)
vis_prob1, vis_state1 = sample_visible(hid_state0, params, subkey)
key, subkey = random.split(key)
    # It would be more efficient here to not actually sample the unused states.
hid_prob1, _ = sample_hidden(vis_state1, params, subkey)
delta_W = jnp.einsum("...j,...k->...jk", vis_sample, hid_prob0) - jnp.einsum(
"...j,...k->...jk", vis_state1, hid_prob1
)
delta_a = hid_prob0 - hid_prob1
delta_b = vis_sample - vis_state1
return (delta_W, delta_a, delta_b)
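# For reference (added; standard CD-1 result, not from the original notebook): the returned
# quantities approximate the log-likelihood gradient
#   dlogp/dW_ij ≈ <v_i h_j>_data - <v_i h_j>_recon,
#   dlogp/da_j  ≈ <h_j>_data     - <h_j>_recon,
#   dlogp/db_i  ≈ <v_i>_data     - <v_i>_recon,
# where the 'recon' averages come from the single Gibbs step above.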
@jit
def reconstruct_vis(vis_sample, params, key):
    """Reconstruct the visible state from a conditional sample of the hidden
units.
Returns
      Reconstruction probabilities.
    """
subkey1, subkey2 = random.split(key, 2)
_, hid_state = sample_hidden(vis_sample, params, subkey1)
vis_recon_prob, _ = sample_visible(hid_state, params, subkey2)
return vis_recon_prob
@jit
def reconstruction_loss(vis_samples, params, key):
    """Calculate the L2 loss between a batch of visible samples and their
reconstructions.
Note this is a heuristic for evaluating training progress, not an objective
    function.
    """
reconstructed_samples = reconstruct_vis(vis_samples, params, key)
loss = optax.l2_loss(vis_samples.astype("float32"), reconstructed_samples).mean()
return loss
@jit
def vis_free_energy(vis_state, params):
    """Calculate the free energy of a visible state.

    The free energy of a visible state is obtained by summing the Boltzmann
    weights of all configurations of the total state (hidden + visible) that
    contain that visible state: F(v) = -log sum_h exp(-E(v, h)).
Args:
vis_state - state of the visible units.
params - (W, a, b), Weights and biases for network.
key - PRNG key.
Returns:
      The free energy of the visible state.
    """
W, a, b = params
activation = jnp.dot(vis_state, W) + a
return -jnp.dot(vis_state, b) - jnp.sum(jax.nn.softplus(activation))
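# For reference (added): with E(v, h) = -b·v - a·h - v^T W h and binary hidden units,
# marginalising over h gives F(v) = -b^T v - sum_j log(1 + exp((W^T v + a)_j)),
# which is exactly the -dot(...) - sum(softplus(activation)) expression returned above.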
@jit
def free_energy_gap(vis_train_samples, vis_test_samples, params):
    """Calculate the average difference in free energies between test and train
data.
The free energy gap can be used to evaluate overfitting. If the model
starts to overfit the training data the free energy gap will start to
become increasingly negative.
Args:
vis_train_samples - samples of visible states from training data.
vis_test_samples - samples of visible states from validation data.
params - (W, a, b), Weights and biases for network.
Returns:
      The difference between the test and validation free energies.
    """
train_FE = vmap(vis_free_energy, (0, None))(vis_train_samples, params)
test_FE = vmap(vis_free_energy, (0, None))(vis_test_samples, params)
return train_FE.mean() - test_FE.mean()
@jit
def evaluate_params(train_samples, test_samples, params, key):
    """Calculate performance measures of parameters."""
train_key, test_key = random.split(key)
train_recon_loss = reconstruction_loss(train_samples, params, train_key)
test_recon_loss = reconstruction_loss(test_samples, params, test_key)
FE_gap = free_energy_gap(train_samples, test_samples, params)
return train_recon_loss, test_recon_loss, FE_gap
def preprocess_images(images):
images = images.reshape((len(images), -1))
return jnp.array(images > (255 / 2), dtype="float32")
def load_mnist(split):
images, labels = tfds.as_numpy(tfds.load("mnist", split=split, batch_size=-1, as_supervised=True))
procced_images = preprocess_images(images)
return procced_images, labels
mnist_train_imgs, mnist_train_labels = load_mnist("train")
mnist_test_imgs, mnist_test_labels = load_mnist("test")
def train_RBM(params, train_data, optimizer, key, eval_samples, n_epochs=5, batch_size=20):
    """Optimize parameters of RBM using the CD1 algorithm."""
@jit
def batch_step(params, opt_state, batch, key):
grads = jax.tree_map(lambda x: x.mean(0), CD1(batch, params, key))
updates, opt_state = optimizer.update(grads, opt_state, params)
params = jax.tree_map(lambda p, u: p - u, params, updates)
return params, opt_state
opt_state = optimizer.init(params)
metric_list = []
param_list = [params]
n_batches = len(train_data) // batch_size
for _ in range(n_epochs):
key, subkey = random.split(key)
perms = random.permutation(subkey, len(mnist_train_imgs))
perms = perms[: batch_size * n_batches] # Skip incomplete batch
perms = perms.reshape((n_batches, -1))
for n, perm in enumerate(perms):
batch = mnist_train_imgs[perm, ...]
key, subkey = random.split(key)
params, opt_state = batch_step(params, opt_state, batch, subkey)
if n % 200 == 0:
key, eval_key = random.split(key)
batch_metrics = evaluate_params(*eval_samples, params, eval_key)
metric_list.append(batch_metrics)
param_list.append(params)
return params, metric_list, param_list
# In practice you can use many more than 100 hidden units, up to 1000-2000.
# A small number is chosen here so that training is fast.
N_vis, N_hid = mnist_train_imgs.shape[-1], 100
key = random.PRNGKey(111)
key, subkey = random.split(key)
init_params = initialise_params(N_vis, N_hid, subkey)
optimizer = optax.sgd(learning_rate=0.05, momentum=0.9)
eval_samples = (mnist_train_imgs[:1000], mnist_test_imgs[:1000])
params, metric_list, param_list = train_RBM(init_params, mnist_train_imgs, optimizer, key, eval_samples)
train_recon_loss, test_recon_loss, FE_gap = list(zip(*metric_list))
epoch_progress = np.linspace(0, 5, len(train_recon_loss))
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))
ax1.plot(epoch_progress, train_recon_loss, label="Train Reconstruction Loss")
ax1.plot(epoch_progress, test_recon_loss, label="Test Reconstruction Loss")
ax1.legend()
ax1.set_xlabel("Epoch")
ax1.set_ylabel("Loss")
ax2.plot(epoch_progress, FE_gap)
ax2.set_xlabel("Epoch")
ax2.set_ylabel("Free Energy Gap");
vis_data_samples = mnist_test_imgs[:25]
fig = plt.figure(figsize=(15, 5))
gs_bases = gridspec.GridSpec(1, 3, figure=fig)
recon_params = (param_list[0], param_list[1], param_list[-1])
subfig_titles = ("Initial", "Epoch 1", "Epoch 5")
key, subkey = random.split(key)
for gs_base, epoch_param, sf_title in zip(gs_bases, recon_params, subfig_titles):
# Use the same subkey for all parameter sets.
vis_recon_probs = reconstruct_vis(vis_data_samples, epoch_param, subkey)
title = f"{sf_title} Parameters"
gridspec_plot_imgs(vis_recon_probs, gs_base, title)
fig.suptitle("Reconstruction Samples", fontsize=20);
class RBM_LogReg:
"""Perform logistic regression on samples transformed to the RBM hidden
representation with `params`."""
def __init__(self, params):
self.params = params
self.LR = LogisticRegression(solver="saga", tol=0.1)
def _transform(self, samples):
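# Map visible samples to hidden-unit activation probabilities; these are the features fed to the logistic regression.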
W, a, _ = self.params
activation = jnp.dot(samples, W) + a
hidden_probs = jax.nn.sigmoid(activation)
return hidden_probs
def fit(self, train_samples, train_labels):
transformed_samples = self._transform(train_samples)
self.LR.fit(transformed_samples, train_labels)
def score(self, test_samples, test_labels):
transformed_samples = self._transform(test_samples)
return self.LR.score(transformed_samples, test_labels)
def predict(self, test_samples):
transformed_samples = self._transform(test_samples)
return self.LR.predict(transformed_samples)
def reconstruct_samples(self, samples, key):
return reconstruct_vis(samples, self.params, key)
train_data = (mnist_train_imgs, mnist_train_labels)
test_data = (mnist_test_imgs, mnist_test_labels)
# Train LR classifier on the raw pixel data for comparison.
LR_raw = LogisticRegression(solver="saga", tol=0.1)
LR_raw.fit(*train_data)
# LR classifier trained on hidden representations after 1 Epoch of training.
rbm_lr1 = RBM_LogReg(param_list[1])
rbm_lr1.fit(*train_data)
# LR classifier trained on hidden representations after 5 Epochs of training.
rbm_lr5 = RBM_LogReg(param_list[-1])
rbm_lr5.fit(*train_data)
print("Logistic Regression Accuracy:")
print(f"\tRaw Data: {LR_raw.score(*test_data)}")
print(f"\tHidden Units Epoch-1: {rbm_lr1.score(*test_data)}")
print(f"\tHidden Units Epoch-5: {rbm_lr5.score(*test_data)}")
class1_correct = rbm_lr1.predict(mnist_test_imgs) == mnist_test_labels
class5_correct = rbm_lr5.predict(mnist_test_imgs) == mnist_test_labels
diff_class_img_idxs = np.where(class5_correct & ~class1_correct)[0]
print(f"There are {len(diff_class_img_idxs)} images which were correctly labelled after >1 Epochs of training.")
key = random.PRNGKey(100)
# Try out different subsets of img indices.
idx_list = diff_class_img_idxs[100:]
n_rows = 5
fig, axs = plt.subplots(n_rows, 3, figsize=(9, 20))
for img_idx, ax_row in zip(idx_list, axs):
ax1, ax2, ax3 = ax_row
img = mnist_test_imgs[img_idx]
plot_digit(img, ax=ax1)
true_label = mnist_test_labels[img_idx]
ax1.set_title(f"Raw Image\nTrue Label: {true_label}")
epoch1_recon = rbm_lr1.reconstruct_samples(img, key)
plot_digit(epoch1_recon, ax=ax2)
hid1_label = rbm_lr1.predict(img[None, :])[0]
ax2.set_title(f"Epoch 1 Reconstruction\nPredicted Label: {hid1_label} (incorrect)")
epoch5_recon = rbm_lr5.reconstruct_samples(img, key)
hid5_label = rbm_lr5.predict(img[None, :])[0]
plot_digit(epoch5_recon, ax=ax3)
ax3.set_title(f"Epoch 5 Reconstruction\nPredicted Label: {hid5_label} (correct)");
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Neon and mercury
Step2: Hydrogen
Step3: The measurement error of the drum is 12 degrees. From this we find the measurement error of the wavelengths
Step4: Determining the Rydberg constant - from the formula above, $R = \frac{4n^2}{\lambda (n^2 - 4)}$
Step5: The measurement error of the Rydberg constant depends only on the drum measurement error, from which we obtain
Step6: The energy of the vibrational quantum of the excited state of the iodine molecule
Step7: a) The energy of the electronic transition
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np; import scipy as sps; import matplotlib.pyplot as plt; import pandas as pd
%matplotlib inline
table_1 = pd.read_excel('lab-4-1.xlsx', '1'); table_1.iloc[:, :4]
table_2 = pd.read_excel('lab-4-1.xlsx', '2'); table_2.iloc[:, :]
degrees = table_1.values[:, 0].tolist()[::-1]; len_waves = table_1.values[:, 3][::-1]; red, blue, violet = table_2.values[:, 0]
plt.figure(figsize=(16, 8)); plt.title('Wavelength vs. drum reading', fontsize=18); plt.grid(ls='-')
plt.plot(degrees, len_waves, lw=2, label='Wavelength', color='black')
plt.xlabel('Drum reading, arb. units', fontsize=15); plt.ylabel('Wavelength, Å', fontsize=15)
plt.xlim((750, 2600)); plt.ylim((4000, 7100))
plt.vlines(red, 4000, 7100, color='red'); plt.vlines(blue, 4000, 7100, color='blue'); plt.vlines(violet, 4000, 7100, color='violet')
plt.errorbar(degrees, len_waves, xerr=[15] * 30, fmt='o', color='black')
plt.show()
def max_lt(seq, val):
res = 0
while seq[res] < val:
res += 1
if res > 0:
res -= 1
return res
def count_len(x):
l = max_lt(degrees, x)
r = l + 1
coef = (len_waves[r] - len_waves[l]) / (degrees[r] - degrees[l])
return len_waves[l] + coef * (x - degrees[l])
lred, lblue, lviolet = count_len(red), count_len(blue), count_len(violet)
print('Red light wavelength, Å -', round(lred))
print('Blue light wavelength, Å -', round(lblue))
print('Violet light wavelength, Å -', round(lviolet))
R_real = 109737
lred_real = (R_real * (0.25 - 1 / 9)) ** (-1) * 10 ** 8
lblue_real = (R_real * (0.25 - 1 / 25)) ** (-1) * 10 ** 8
lviolet_real = (R_real * (0.25 - 1 / 36)) ** (-1) * 10 ** 8
print("Red light wavelength, Å -", round(lred_real))
print("Blue light wavelength, Å -", round(lblue_real))
print("Violet light wavelength, Å -", round(lviolet_real))
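# Balmer series: 1/lambda = R*(1/2**2 - 1/n**2) with n = 3 (red), 5 (blue), 6 (violet), hence R = 4*n**2 / (lambda*(n**2 - 4)).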
Rred = 4 * 9 / (lred * 5) * 10 ** 8
Rblue = 4 * 25 / (lblue * 21) * 10 ** 8
Rviolet = 4 * 36 / (lviolet * 32) * 10 ** 8
R = (Rred + Rblue + Rviolet) / 3
print('Rydberg constant from the red line, 1/cm -', round(Rred))
print('Rydberg constant from the blue line, 1/cm -', round(Rblue))
print('Rydberg constant from the violet line, 1/cm -', round(Rviolet))
print('Mean value, 1/cm -', round(R))
print('Reference value, 1/cm -', round(R_real))
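# A possible extra check (not in the original notebook): relative deviation of the measured mean from the reference value, in percent.
# print('Relative deviation, % -', round(abs(R - R_real) / R_real * 100, 2))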
l10 = 5401 + (5852 - 5401) * (2320 - 2248) / (2505 - 2248)
l15 = 5401 + (5852 - 5401) * (2245 - 2248) / (2505 - 2248)
lgr = 5401 + (5852 - 5401) * (2176 - 2248) / (2505 - 2248)
print('Wavelength for n_10, Å -', round(l10))
print('Wavelength for n_15, Å -', round(l15))
print('Wavelength for n_gr, Å -', round(lgr))
h = 4.13 * 10 ** -15; c = 3 * 10 ** 8
nu10 = c / (l10 * 10 ** -10); nu15 = c / (l15 * 10 ** -10); nugr = c / (lgr * 10 ** -10); hnu2 = h * (nu15 - nu10) / 5
print('Energy, eV -', round(hnu2, 3))
hnu1 = 0.027; E_A = 0.94
hnuel = h * nu10 + hnu1; D2 = h * nugr- hnu1; D1 = D2 - E_A
print('Electronic transition energy, eV -', round(hnuel, 2))
print('Dissociation energy of the molecule in the ground state, eV -', round(D2, 2))
print('Dissociation energy of the molecule in the excited state, eV -', round(D1, 2))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Numeric widgets
Step2: FloatSlider
Step3: An example of sliders displayed vertically.
Step4: FloatLogSlider
Step5: IntRangeSlider
Step6: FloatRangeSlider
Step7: IntProgress
Step8: FloatProgress
Step9: The numerical text boxes that impose some limit on the data (range, integer-only) impose that restriction when the user presses enter.
Step10: BoundedFloatText
Step11: IntText
Step12: FloatText
Step13: Boolean widgets
Step14: Checkbox
Step15: Valid
Step16: Selection widgets
Step17: The following is also valid, displaying the words 'One', 'Two', 'Three' as the dropdown choices but returning the values 1, 2, 3.
Step18: RadioButtons
Step19: With dynamic layout and very long labels
Step20: Select
Step21: SelectionSlider
Step22: SelectionRangeSlider
Step23: ToggleButtons
Step24: SelectMultiple
Step25: String widgets
Step26: Textarea
Step27: Combobox
Step28: Password
Step29: Label
Step30: HTML
Step31: HTML Math
Step32: Image
Step33: Button
Step34: The icon attribute can be used to define an icon; see the fontawesome page for available icons.
Step35: Tags input widget
Step36: Date picker
Step37: Time picker
Step38: Datetime picker
Step39: Color picker
Step40: File Upload
Step41: The upload widget exposes a value attribute that contains the files uploaded. The value attribute is a tuple with a dictionary for each uploaded file. For instance
Step42: Container/Layout widgets
Step43: HBox
Step44: VBox
Step45: GridBox
Step46: Accordion
Step47: Tabs
Step48: Stacked
Step49: This can be used in combination with another selection-based widget to show different widgets depending on the selection
Step50: Accordion, Tab, and Stacked use selected_index, not value
Step51: Nesting tabs and accordions
|
<ASSISTANT_TASK:>
Python Code:
import ipywidgets as widgets
widgets.IntSlider(
value=7,
min=0,
max=10,
step=1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='d'
)
widgets.FloatSlider(
value=7.5,
min=0,
max=10.0,
step=0.1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
)
widgets.FloatSlider(
value=7.5,
min=0,
max=10.0,
step=0.1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='vertical',
readout=True,
readout_format='.1f',
)
widgets.FloatLogSlider(
value=10,
base=10,
min=-10, # max exponent of base
max=10, # min exponent of base
step=0.2, # exponent step
description='Log Slider'
)
widgets.IntRangeSlider(
value=[5, 7],
min=0,
max=10,
step=1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='d',
)
widgets.FloatRangeSlider(
value=[5, 7.5],
min=0,
max=10.0,
step=0.1,
description='Test:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
)
widgets.IntProgress(
value=7,
min=0,
max=10,
description='Loading:',
bar_style='', # 'success', 'info', 'warning', 'danger' or ''
style={'bar_color': 'maroon'},
orientation='horizontal'
)
widgets.FloatProgress(
value=7.5,
min=0,
max=10.0,
description='Loading:',
bar_style='info',
style={'bar_color': '#ffff00'},
orientation='horizontal'
)
widgets.BoundedIntText(
value=7,
min=0,
max=10,
step=1,
description='Text:',
disabled=False
)
widgets.BoundedFloatText(
value=7.5,
min=0,
max=10.0,
step=0.1,
description='Text:',
disabled=False
)
widgets.IntText(
value=7,
description='Any:',
disabled=False
)
widgets.FloatText(
value=7.5,
description='Any:',
disabled=False
)
widgets.ToggleButton(
value=False,
description='Click me',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Description',
icon='check' # (FontAwesome names without the `fa-` prefix)
)
widgets.Checkbox(
value=False,
description='Check me',
disabled=False,
indent=False
)
widgets.Valid(
value=False,
description='Valid!',
)
widgets.Dropdown(
options=['1', '2', '3'],
value='2',
description='Number:',
disabled=False,
)
widgets.Dropdown(
options=[('One', 1), ('Two', 2), ('Three', 3)],
value=2,
description='Number:',
)
widgets.RadioButtons(
options=['pepperoni', 'pineapple', 'anchovies'],
# value='pineapple', # Defaults to 'pineapple'
# layout={'width': 'max-content'}, # If the items' names are long
description='Pizza topping:',
disabled=False
)
widgets.Box(
[
widgets.Label(value='Pizza topping with a very long label:'),
widgets.RadioButtons(
options=[
'pepperoni',
'pineapple',
'anchovies',
'and the long name that will fit fine and the long name that will fit fine and the long name that will fit fine '
],
layout={'width': 'max-content'}
)
]
)
widgets.Select(
options=['Linux', 'Windows', 'macOS'],
value='macOS',
# rows=10,
description='OS:',
disabled=False
)
widgets.SelectionSlider(
options=['scrambled', 'sunny side up', 'poached', 'over easy'],
value='sunny side up',
description='I like my eggs ...',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True
)
import datetime
dates = [datetime.date(2015, i, 1) for i in range(1, 13)]
options = [(i.strftime('%b'), i) for i in dates]
widgets.SelectionRangeSlider(
options=options,
index=(0, 11),
description='Months (2015)',
disabled=False
)
widgets.ToggleButtons(
options=['Slow', 'Regular', 'Fast'],
description='Speed:',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltips=['Description of slow', 'Description of regular', 'Description of fast'],
# icons=['check'] * 3
)
widgets.SelectMultiple(
options=['Apples', 'Oranges', 'Pears'],
value=['Oranges'],
#rows=10,
description='Fruits',
disabled=False
)
widgets.Text(
value='Hello World',
placeholder='Type something',
description='String:',
disabled=False
)
widgets.Textarea(
value='Hello World',
placeholder='Type something',
description='String:',
disabled=False
)
widgets.Combobox(
# value='John',
placeholder='Choose Someone',
options=['Paul', 'John', 'George', 'Ringo'],
description='Combobox:',
ensure_option=True,
disabled=False
)
widgets.Password(
value='password',
placeholder='Enter password',
description='Password:',
disabled=False
)
widgets.HBox([widgets.Label(value="The $m$ in $E=mc^2$:"), widgets.FloatSlider()])
widgets.HTML(
value="Hello <b>World</b>",
placeholder='Some HTML',
description='Some HTML',
)
widgets.HTMLMath(
value=r"Some math and <i>HTML</i>: \(x^2\) and $$\frac{x+1}{x-1}$$",
placeholder='Some HTML',
description='Some HTML',
)
file = open("images/WidgetArch.png", "rb")
image = file.read()
widgets.Image(
value=image,
format='png',
width=300,
height=400,
)
button = widgets.Button(
description='Click me',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Click me',
icon='check' # (FontAwesome names without the `fa-` prefix)
)
button
play = widgets.Play(
value=50,
min=0,
max=100,
step=1,
interval=500,
description="Press play",
disabled=False
)
slider = widgets.IntSlider()
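# jslink keeps the two values synchronized in the browser, with no round trip to the Python kernel.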
widgets.jslink((play, 'value'), (slider, 'value'))
widgets.HBox([play, slider])
tags = widgets.TagsInput(
value=['pizza', 'fries'],
allowed_tags=['pizza', 'fries', 'tomatoes', 'steak'],
allow_duplicates=False
)
tags
color_tags = widgets.ColorsInput(
value=['red', '#2f6d30'],
# allowed_tags=['red', 'blue', 'green'],
# allow_duplicates=False
)
color_tags
widgets.DatePicker(
description='Pick a Date',
disabled=False
)
widgets.TimePicker(
description='Pick a Time',
disabled=False
)
widgets.DatetimePicker(
description='Pick a Time',
disabled=False
)
widgets.ColorPicker(
concise=False,
description='Pick a color',
value='blue',
disabled=False
)
widgets.FileUpload(
accept='', # Accepted file extension e.g. '.txt', '.pdf', 'image/*', 'image/*,.pdf'
multiple=False # True to accept multiple files upload else False
)
widgets.Controller(
index=0,
)
items = [widgets.Label(str(i)) for i in range(4)]
widgets.Box(items)
items = [widgets.Label(str(i)) for i in range(4)]
widgets.HBox(items)
items = [widgets.Label(str(i)) for i in range(4)]
left_box = widgets.VBox([items[0], items[1]])
right_box = widgets.VBox([items[2], items[3]])
widgets.HBox([left_box, right_box])
items = [widgets.Label(str(i)) for i in range(8)]
widgets.GridBox(items, layout=widgets.Layout(grid_template_columns="repeat(3, 100px)"))
accordion = widgets.Accordion(children=[widgets.IntSlider(), widgets.Text()], titles=('Slider', 'Text'))
accordion
tab_contents = ['P0', 'P1', 'P2', 'P3', 'P4']
children = [widgets.Text(description=name) for name in tab_contents]
tab = widgets.Tab()
tab.children = children
tab.titles = [str(i) for i in range(len(children))]
tab
button = widgets.Button(description='Click here')
slider = widgets.IntSlider()
stacked = widgets.Stacked([button, slider])
stacked # will show only the button
dropdown = widgets.Dropdown(options=['button', 'slider'])
widgets.jslink((dropdown, 'index'), (stacked, 'selected_index'))
widgets.VBox([dropdown, stacked])
tab.selected_index = 3
accordion.selected_index = None
tab_nest = widgets.Tab()
tab_nest.children = [accordion, accordion]
tab_nest.titles = ('An accordion', 'Copy of the accordion')
tab_nest
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading Hyperbolic Orbits into REBOUND
Step2: We want to add these comets to a REBOUND simulation. The first thing to do is set the units, which have to be consistent throughout. Here we have a table in AU and days, so we'll use the Gaussian gravitational constant (AU, days, solar masses).
Step3: We also set the simulation time to the epoch at which the elements are valid
Step4: We then add the giant planets in our Solar System to the simulation. You could for example query JPL HORIZONS for the states of the planets at each comet's corresponding epoch of observation (see Horizons.ipynb). Here we set up toy masses and orbits for Jupiter & Saturn
Step5: Let's write a function that takes a comet from the table and adds it to our simulation
Step6: By default, REBOUND adds and outputs particles in Jacobi orbital elements. Typically, orbital elements for comets are heliocentric. Mixing the two will give you relative errors in elements, positions, etc. of order the mass ratio of Jupiter to the Sun ($\sim 0.001$), which is why we pass the additional primary=sim.particles[0] argument to the add() function. If this level of accuracy doesn't matter to you, you can ignore the primary argument.
Step7: Now we just integrate until whatever final time we’re interested in. Here it's the epoch at which we observe the comet, which is the last column in our table
Step8: REBOUND automatically figures out whether you want to integrate forward or backward in time.
|
<ASSISTANT_TASK:>
Python Code:
from io import StringIO
import numpy as np
import rebound
epoch_of_elements = 53371.0 # [MJD, days]
c = StringIO(u"""
# id e q[AU] i[deg] Omega[deg] argperi[deg] t_peri[MJD, days] epoch_of_observation[MJD, days]
168026 12.181214 15.346358 136.782470 37.581438 268.412314 54776.806093 55516.41727
21170 2.662235 2.013923 140.646538 23.029490 46.292039 54336.126288 53673.44043
189298 15.503013 11.550314 20.042232 203.240743 150.855761 55761.641176 55718.447145
72278 34.638392 24.742323 157.984412 126.431540 178.612758 54382.158401 54347.240445
109766 8.832472 9.900228 144.857801 243.102255 271.345342 55627.501618 54748.37722
""")
comets = np.loadtxt(c) # load the table into a numpy array
sim = rebound.Simulation()
k = 0.01720209895 # Gaussian constant
sim.G = k**2
sim.t = epoch_of_elements
sim.add(m=1.) # Sun
sim.add(m=1.e-3, a=5.) # Jupiter
sim.add(m=3.e-4, a=10.) # Saturn
def addOrbit(sim, comet_elem):
tracklet_id, e, q, inc, Omega, argperi, t_peri, epoch_of_observation = comet_elem
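# primary=sim.particles[0] makes these heliocentric elements; omitting it would treat them as Jacobi elements (see the note above).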
sim.add(primary=sim.particles[0],
a = q/(1.-e),
e = e,
inc = inc*np.pi/180., # have to convert to radians
Omega = Omega*np.pi/180.,
omega = argperi*np.pi/180.,
T = t_peri # time of pericenter passage
)
addOrbit(sim, comets[0])
%matplotlib inline
fig = rebound.OrbitPlot(sim, trails=True)
tfinal = comets[0][-1]
sim.integrate(tfinal)
fig = rebound.OrbitPlot(sim, trails=True)
sim = rebound.Simulation()
sim.G = k**2
sim.t = epoch_of_elements
sim.add(m=1.) # Sun
sim.add(m=1.e-3, a=5.) # Jupiter
sim.add(m=3.e-4, a=10.) # Saturn
for comet in comets:
addOrbit(sim, comet)
fig = rebound.OrbitPlot(sim, trails=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Display mode
Step2: Display mode
Step3: Recognized Formats
Step4: Programmable Table Actions
Step5: Set index to DataFrame
Step6: Update cell
Step7: HTML format
Step8: Auto linking of URLs
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
from beakerx import *
from beakerx.object import beakerx
pd.read_csv('../resources/data/interest-rates.csv')
table = TableDisplay(pd.read_csv('../resources/data/interest-rates.csv'))
table.setAlignmentProviderForColumn('m3', TableDisplayAlignmentProvider.CENTER_ALIGNMENT)
table.setRendererForColumn("y10", TableDisplayCellRenderer.getDataBarsRenderer(False))
table.setRendererForType(ColumnType.Double, TableDisplayCellRenderer.getDataBarsRenderer(True))
table
df = pd.read_csv('../resources/data/interest-rates.csv')
df['time'] = df['time'].str.slice(0,19).astype('datetime64[ns]')
table = TableDisplay(df)
table.setStringFormatForTimes(TimeUnit.DAYS)
table.setStringFormatForType(ColumnType.Double, TableDisplayStringFormat.getDecimalFormat(4,6))
table.setStringFormatForColumn("m3", TableDisplayStringFormat.getDecimalFormat(0, 0))
table
table = TableDisplay(pd.read_csv('../resources/data/interest-rates.csv'))
table
#freeze a column
table.setColumnFrozen("y1", True)
#hide a column
table.setColumnVisible("y30", False)
table.setColumnOrder(["m3", "y1", "y5", "time", "y2"])
def config_tooltip(row, column, table):
return "The value is: " + str(table.values[row][column])
table.setToolTip(config_tooltip)
table.setDataFontSize(16)
table.setHeaderFontSize(18)
table
mapListColorProvider = [
{"a": 1, "b": 2, "c": 3},
{"a": 4, "b": 5, "c": 6},
{"a": 7, "b": 8, "c": 5}
]
tabledisplay = TableDisplay(mapListColorProvider)
colors = [
[Color.LIGHT_GRAY, Color.GRAY, Color.RED],
[Color.DARK_GREEN, Color.ORANGE, Color.RED],
[Color.MAGENTA, Color.BLUE, Color.BLACK]
]
def color_provider(row, column, table):
return colors[row][column]
tabledisplay.setFontColorProvider(color_provider)
tabledisplay
mapListFilter = [
{"a":1, "b":2, "c":3},
{"a":4, "b":5, "c":6},
{"a":7, "b":8, "c":5}
]
display = TableDisplay(mapListFilter)
def filter_row(row, model):
return model[row][1] == 8
display.setRowFilter(filter_row)
display
table = TableDisplay(pd.read_csv('../resources/data/interest-rates.csv'))
table.addCellHighlighter(TableDisplayCellHighlighter.getHeatmapHighlighter("m3", TableDisplayCellHighlighter.FULL_ROW))
table
beakerx.pandas_display_default()
pd.read_csv('../resources/data/interest-rates.csv')
beakerx.pandas_display_table()
pd.read_csv('../resources/data/interest-rates.csv')
TableDisplay([{'y1':4, 'm3':2, 'z2':1}, {'m3':4, 'z2':2}])
TableDisplay({"x" : 1, "y" : 2})
mapList4 = [
{"a":1, "b":2, "c":3},
{"a":4, "b":5, "c":6},
{"a":7, "b":8, "c":5}
]
display = TableDisplay(mapList4)
def dclick(row, column, tabledisplay):
tabledisplay.values[row][column] = sum(map(int,tabledisplay.values[row]))
display.setDoubleClickAction(dclick)
def negate(row, column, tabledisplay):
tabledisplay.values[row][column] = -1 * int(tabledisplay.values[row][column])
def incr(row, column, tabledisplay):
tabledisplay.values[row][column] = int(tabledisplay.values[row][column]) + 1
display.addContextMenuItem("negate", negate)
display.addContextMenuItem("increment", incr)
display
mapList4 = [
{"a":1, "b":2, "c":3},
{"a":4, "b":5, "c":6},
{"a":7, "b":8, "c":5}
]
display = TableDisplay(mapList4)
#set what happens on a double click
display.setDoubleClickAction("runDoubleClick")
display
print("runDoubleClick fired")
print(display.details)
df = pd.read_csv('../resources/data/interest-rates.csv')
df.set_index(['m3'])
df = pd.read_csv('../resources/data/interest-rates.csv')
df.index = df['time']
df
dataToUpdate = [
{'a':1, 'b':2, 'c':3},
{'a':4, 'b':5, 'c':6},
{'a':7, 'b':8, 'c':9}
]
tableToUpdate = TableDisplay(dataToUpdate)
tableToUpdate
tableToUpdate.values[0][0] = 99
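# sendModel() pushes the modified model back to the front end so the rendered table reflects the change.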
tableToUpdate.sendModel()
tableToUpdate.updateCell(2,"c",121)
tableToUpdate.sendModel()
table = TableDisplay({
'w': '$2 \\sigma$',
'x': '<em style="color:red">italic red</em>',
'y': '<b style="color:blue">bold blue</b>',
'z': 'strings without markup work fine too',
})
table.setStringFormatForColumn("Value", TableDisplayStringFormat.getHTMLFormat())
table
TableDisplay({'Two Sigma': 'http://twosigma.com', 'BeakerX': 'http://BeakerX.com'})
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then import the codon table for the standard genetic code, with one slight modification - add the STOP codon * as a fully-fledged member of the table
Step2: The Kyte-Doolittle hydrophobicity scale is needed as well. We add the STOP codon to it and assign it an artificially large KD value, simply to make all mutations to/from the STOP codon distinguishable.
Step3: Reference flu as object and pH1N1 alignment
Step4: Reference flu-genome sequence is a consensus of the pandemic H1N1
Step5: Our segment-wide alignments are stored in files with particular names
Step6: Let's load all of the alignments using these file names and store them as a dictionary
Step7: Let's output some characteristics of the alignment
Step9: in ORF or not in ORF? where exactly?
Step10: Then we perform the analysis, exploring observed mutations and their basic features
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import sys
from Bio import SeqRecord
from Bio import AlignIO
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from Bio.Data import CodonTable
genetic_code = CodonTable.standard_dna_table.forward_table
stop_codons = dict([ (codon,'*') for codon in CodonTable.standard_dna_table.stop_codons ])
genetic_code.update(stop_codons)
from Bio.SeqUtils import ProtParamData
KD = ProtParamData.kd
KD['*'] = 25.0
sys.path.append("../RNA_SNP_association")
import flu_module as flu
ref_fname = "./pH1N1_coding_dat/pH1N1.fa"
orf_fname = "./pH1N1_coding_dat/pH1N1_noPB1F.orf"
ph1n1 = flu.influenza(ref_fname, orf_fname)
aligned_seg_fname = lambda number: "seg%d.afa"%number
segment_aln = {}
for seg_idx in range(1, ph1n1.segnum+1):
fname = aligned_seg_fname(seg_idx)
segment_aln['seg%d'%seg_idx] = AlignIO.read(fname,'fasta')
for seg in sorted(segment_aln):
print "%s of len %d has %d sequences aligned"%(seg,segment_aln[seg].get_alignment_length(),len(segment_aln[seg]))
def get_loci_orf_assoc(orf,loci):
"""Checks if a given position is in any of the ORFs. Returns
a list of the hosting ORFs with corresponding coords. of codons."""
which_orf = []
for orf_id in orf:
orf_bounds = orf[orf_id]
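# Flatten all (start, stop) exon ranges of this ORF into a single list of genomic positions, in coding order.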
orf_positions = sum([range(start,stop+1) for start,stop in orf_bounds], [])
# pos residence in ORF or not in ORF?
if loci in orf_positions:
# relative loci, simply an index in the list of orf_positions ...
relative_pos = orf_positions.index(loci)
# the index of the codon it belongs to ...
codon_idx = relative_pos//3
# relative codon coordinates (indexes within CDS):
relative_codon_coord = [codon_idx*3+i for i in range(3)]
# loci is 1st,2nd or 3rd position in the codon?
codon_shift = relative_pos - relative_codon_coord[0]
# absolute codon coordinates (indexes in the genome):
codon_coord = [orf_positions[_] for _ in relative_codon_coord]
which_orf.append( (orf_id, (codon_idx,codon_shift,codon_coord) ) )
return dict(which_orf)
# lists to store some info ...
codon_position_mutated = []
codon_position_mutated_weight = []
dKD = []
dKD_weight = []
#
segment_var_info = {}
#
# analysis for each segment ...
for seg in sorted(segment_aln):
# turn aln to an array and describe aln ...
aln = np.array([list(rec) for rec in segment_aln[seg]], np.character)
aln = pd.DataFrame(aln)
aln = aln.where(aln!='-',None) # gaps to Nones
descr = aln.describe().transpose() #built-in
descr['freq_ratio'] = descr['freq']/descr['count'] # freq_ratio ignores gaps ...
descr['gaps'] = aln.isnull().sum()
descr['variation'] = descr['freq']<descr['count'] # variable positions ...
# ORF of this segment ...
seg_orf = ph1n1.orf[seg+'_pH1N1']
seg_seq = ph1n1.genome[seg+'_pH1N1']
# we would like to store some averaged statisticsfor each var position:
codon_position_avg = []
codon_position_mutated_weight = []
dKD = []
dKD_weight = []
# go over all variable positions ...
for pos in descr[descr['variation']].index:
# unpack codon information ...
prod_codon = get_loci_orf_assoc(seg_orf,pos)
for product in prod_codon:
codon_idx, codon_shift, codon_coords = prod_codon[product]
# the consensus codon, AA and KD ...
codon_itself = ''.join(seg_seq[i] for i in codon_coords)
aa_itself = genetic_code[codon_itself]
KD_itself = KD[aa_itself]
# SNPs at this position ...
nts_present = aln.loc[:,pos].value_counts().to_dict()
# CODON SHIFT STATISTICS GATHERING ...
codon_position_mutated.append(codon_shift)
# variation frequency, non-consensus % aka mutation 'weight'
weight = 1.0 - descr['freq_ratio'][pos]
codon_position_mutated_weight.append(weight)
# hence, possible codons are:
possible_codons = []
for snp in nts_present:
mut_codon = list(codon_itself)
mut_codon[codon_shift] = snp
possible_codons.append(mut_codon)
# STATISTICS GATHERING ...
the_alt_codon = ''.join(mut_codon)
if the_alt_codon != codon_itself:
the_alt_aa = genetic_code[the_alt_codon]
the_alt_KD = KD[the_alt_aa]
weight = nts_present[snp]*1.0/sum(nts_present.values())
#################################
dKD.append(the_alt_KD - KD_itself)
dKD_weight.append(weight)
# # amino acids ...
# print "Product %s, position %d in protein (codon %s, aa %s)"%(product, codon_shift+1, codon_itself, genetic_code[codon_itself])
# other_possible_codons = [''.join(codon) for codon in possible_codons if ''.join(codon)!=codon_itself]
# print "outcome AAs are: %s"%str([genetic_code[ccc] for ccc in other_possible_codons]).strip('[]').replace('\'','')
# # print str([genetic_code[ccc] for ccc in other_possible_codons]).strip('[]').replace('\'','')
# print "dKD for AA subs: %s"%str([ KD[genetic_code[ccc]]-KD[genetic_code[codon_itself]] for ccc in other_possible_codons]).strip('[]')
print descr.head(15)
print seg
f,ax = plt.subplots(1,1,figsize=(20,4))
ax.plot(descr[descr['variation']].index,descr[descr['variation']]['freq_ratio'],'ro')
# ax = plt.gca()
# ax.set_yscale('log')
# ax.set_ylim(0.99,1)
plt.show()
plt.hist(codon_position_mutated,bins=3)
ax = plt.gca()
ax.set_xlabel("codon position: 1,2,3")
plt.show()
plt.hist(codon_position_mutated,bins=3,weights=codon_position_mutated_weight)
ax = plt.gca()
ax.set_xlabel("codon position: 1,2,3")
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load the lending club dataset
Step2: Like the previous assignment, we reassign the labels to have +1 for a safe loan, and -1 for a risky (bad) loan.
Step3: Unlike the previous assignment where we used several features, in this assignment we will just be using 4 categorical features.
Step4: Let's explore what the dataset looks like.
Step5: Subsample dataset to make sure classes are balanced
Step6: Note
Step7: Let's see what the feature columns look like now
Step8: Let's explore what one of these columns looks like
Step9: This column is set to 1 if the loan grade is A and 0 otherwise.
Step10: Train-test split
Step11: Decision tree implementation
Step12: Because there are several steps in this assignment, we have introduced some stopping points where you can check your code and make sure it is correct before proceeding. To test your intermediate_node_num_mistakes function, run the following code until you get a Test passed!, then you should proceed. Otherwise, you should spend some time figuring out where things went wrong.
Step13: Function to pick best feature to split on
Step14: To test your best_splitting_feature function, run the following code
Step15: Building the tree
Step16: We have provided a function that learns the decision tree recursively and implements 3 stopping conditions
Step17: Here is a recursive function to count the nodes in your tree
Step18: Run the following test code to check your implementation. Make sure you get 'Test passed' before proceeding.
Step19: Build the tree!
Step20: Making predictions with a decision tree
Step21: Now, let's consider the first example of the test set and see what my_decision_tree model predicts for this data point.
Step22: Let's add some annotations to our prediction to see what the prediction path was that led to this predicted class
Step23: Quiz question
Step24: Now, let's use this function to evaluate the classification error on the test set.
Step25: Quiz Question
Step26: Quiz Question
Step27: Exploring the left subtree of the left subtree
Step28: Quiz question
|
<ASSISTANT_TASK:>
Python Code:
import graphlab
loans = graphlab.SFrame('lending-club-data.gl/')
loans['safe_loans'] = loans['bad_loans'].apply(lambda x : +1 if x==0 else -1)
loans = loans.remove_column('bad_loans')
features = ['grade', # grade of the loan
'term', # the term of the loan
'home_ownership', # home_ownership status: own, mortgage or rent
'emp_length', # number of years of employment
]
target = 'safe_loans'
loans = loans[features + [target]]
loans
safe_loans_raw = loans[loans[target] == 1]
risky_loans_raw = loans[loans[target] == -1]
# Since there are less risky loans than safe loans, find the ratio of the sizes
# and use that percentage to undersample the safe loans.
percentage = len(risky_loans_raw)/float(len(safe_loans_raw))
safe_loans = safe_loans_raw.sample(percentage, seed = 1)
risky_loans = risky_loans_raw
loans_data = risky_loans.append(safe_loans)
print "Percentage of safe loans :", len(safe_loans) / float(len(loans_data))
print "Percentage of risky loans :", len(risky_loans) / float(len(loans_data))
print "Total number of loans in our new dataset :", len(loans_data)
loans_data = risky_loans.append(safe_loans)
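# One-hot encode each categorical feature: wrap each value in a {value: 1} dict, unpack it into indicator columns, and fill missing indicators with 0.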
for feature in features:
loans_data_one_hot_encoded = loans_data[feature].apply(lambda x: {x: 1})
loans_data_unpacked = loans_data_one_hot_encoded.unpack(column_name_prefix=feature)
# Change None's to 0's
for column in loans_data_unpacked.column_names():
loans_data_unpacked[column] = loans_data_unpacked[column].fillna(0)
loans_data.remove_column(feature)
loans_data.add_columns(loans_data_unpacked)
features = loans_data.column_names()
features.remove('safe_loans') # Remove the response variable
features
print "Number of features (after binarizing categorical variables) = %s" % len(features)
loans_data.head(n=1)
loans_data['grade.A']
print "Total number of grade.A loans : %s" % loans_data['grade.A'].sum()
print "Expected answer : 6422"
train_data, test_data = loans_data.random_split(.8, seed=1)
def intermediate_node_num_mistakes(labels_in_node):
# Corner case: If labels_in_node is empty, return 0
if len(labels_in_node) == 0:
return 0
# Count the number of 1's (safe loans)
num_of_positive = (labels_in_node == +1).sum()
# Count the number of -1's (risky loans)
num_of_negative = (labels_in_node == -1).sum()
# Return the number of mistakes that the majority classifier makes.
return num_of_negative if num_of_positive > num_of_negative else num_of_positive
# Test case 1
example_labels = graphlab.SArray([-1, -1, 1, 1, 1])
if intermediate_node_num_mistakes(example_labels) == 2:
print 'Test passed!'
else:
print 'Test 1 failed... try again!'
# Test case 2
example_labels = graphlab.SArray([-1, -1, 1, 1, 1, 1, 1])
if intermediate_node_num_mistakes(example_labels) == 2:
print 'Test passed!'
else:
print 'Test 2 failed... try again!'
# Test case 3
example_labels = graphlab.SArray([-1, -1, -1, -1, -1, 1, 1])
if intermediate_node_num_mistakes(example_labels) == 2:
print 'Test passed!'
else:
print 'Test 3 failed... try again!'
def best_splitting_feature(data, features, target):
best_feature = None # Keep track of the best feature
best_error = 10 # Keep track of the best error so far
# Note: Since error is always <= 1, we should intialize it with something larger than 1.
# Convert to float to make sure error gets computed correctly.
num_data_points = float(len(data))
# Loop through each feature to consider splitting on that feature
for feature in features:
# The left split will have all data points where the feature value is 0
left_split = data[data[feature] == 0]
# The right split will have all data points where the feature value is 1
## YOUR CODE HERE
right_split = data[data[feature] == 1]
# Calculate the number of misclassified examples in the left split.
# Remember that we implemented a function for this! (It was called intermediate_node_num_mistakes)
# YOUR CODE HERE
left_mistakes = intermediate_node_num_mistakes(left_split[target])
# Calculate the number of misclassified examples in the right split.
## YOUR CODE HERE
right_mistakes = intermediate_node_num_mistakes(right_split[target])
# Compute the classification error of this split.
# Error = (# of mistakes (left) + # of mistakes (right)) / (# of data points)
## YOUR CODE HERE
error = (left_mistakes + right_mistakes) / num_data_points
# If this is the best error we have found so far, store the feature as best_feature and the error as best_error
## YOUR CODE HERE
if error < best_error:
best_feature = feature
best_error = error
return best_feature # Return the best feature we found
if best_splitting_feature(train_data, features, 'safe_loans') == 'term. 36 months':
print 'Test passed!'
else:
print 'Test failed... try again!'
def create_leaf(target_values):
# Create a leaf node
leaf = {'splitting_feature' : None,
'left' : None,
'right' : None,
'is_leaf': True } ## YOUR CODE HERE
# Count the number of data points that are +1 and -1 in this node.
num_ones = len(target_values[target_values == +1])
num_minus_ones = len(target_values[target_values == -1])
# For the leaf node, set the prediction to be the majority class.
# Store the predicted class (1 or -1) in leaf['prediction']
if num_ones > num_minus_ones:
leaf['prediction'] = +1
else:
leaf['prediction'] = -1
# Return the leaf node
return leaf
def decision_tree_create(data, features, target, current_depth = 0, max_depth = 10):
remaining_features = features[:] # Make a copy of the features.
target_values = data[target]
print "--------------------------------------------------------------------"
print "Subtree, depth = %s (%s data points)." % (current_depth, len(target_values))
# Stopping condition 1
# (Check if there are mistakes at current node.
# Recall you wrote a function intermediate_node_num_mistakes to compute this.)
if intermediate_node_num_mistakes(target_values) == 0: ## YOUR CODE HERE
print "Stopping condition 1 reached."
# If not mistakes at current node, make current node a leaf node
return create_leaf(target_values)
# Stopping condition 2 (check if there are remaining features to consider splitting on)
if remaining_features == []: ## YOUR CODE HERE
print "Stopping condition 2 reached."
# If there are no remaining features to consider, make current node a leaf node
return create_leaf(target_values)
# Additional stopping condition (limit tree depth)
if current_depth >= max_depth: ## YOUR CODE HERE
print "Reached maximum depth. Stopping for now."
# If the max tree depth has been reached, make current node a leaf node
return create_leaf(target_values)
# Find the best splitting feature (recall the function best_splitting_feature implemented above)
## YOUR CODE HERE
splitting_feature = best_splitting_feature(data, features, target)
# Split on the best feature that we found.
left_split = data[data[splitting_feature] == 0]
right_split = data[data[splitting_feature] == 1]
remaining_features.remove(splitting_feature)
print "Split on feature %s. (%s, %s)" % (\
splitting_feature, len(left_split), len(right_split))
# Create a leaf node if the split is "perfect"
if len(left_split) == len(data):
print "Creating leaf node."
return create_leaf(left_split[target])
if len(right_split) == len(data):
print "Creating leaf node."
return create_leaf(right_split[target])
# Repeat (recurse) on left and right subtrees
left_tree = decision_tree_create(left_split, remaining_features, target, current_depth + 1, max_depth)
## YOUR CODE HERE
right_tree = decision_tree_create(right_split, remaining_features, target, current_depth + 1, max_depth)
return {'is_leaf' : False,
'prediction' : None,
'splitting_feature': splitting_feature,
'left' : left_tree,
'right' : right_tree}
def count_nodes(tree):
if tree['is_leaf']:
return 1
return 1 + count_nodes(tree['left']) + count_nodes(tree['right'])
small_data_decision_tree = decision_tree_create(train_data, features, 'safe_loans', max_depth = 3)
if count_nodes(small_data_decision_tree) == 13:
print 'Test passed!'
else:
print 'Test failed... try again!'
print 'Number of nodes found :', count_nodes(small_data_decision_tree)
print 'Number of nodes that should be there : 13'
small_data_decision_tree
# Make sure to cap the depth at 6 by using max_depth = 6
my_decision_tree = decision_tree_create(train_data, features, 'safe_loans', max_depth = 6)
my_decision_tree
def classify(tree, x, annotate = False):
# if the node is a leaf node.
if tree['is_leaf']:
if annotate:
print "At leaf, predicting %s" % tree['prediction']
return tree['prediction']
else:
# split on feature.
split_feature_value = x[tree['splitting_feature']]
if annotate:
print "Split on %s = %s" % (tree['splitting_feature'], split_feature_value)
if split_feature_value == 0:
return classify(tree['left'], x, annotate)
else:
return classify(tree['right'], x, annotate)
test_data[0]
print 'Predicted class: %s ' % classify(my_decision_tree, test_data[0])
classify(my_decision_tree, test_data[0], annotate=True)
classify(small_data_decision_tree, test_data[0], annotate=True)
def evaluate_classification_error(tree, data):
# Apply the classify(tree, x) to each row in your data
prediction = data.apply(lambda x: classify(tree, x))
# Once you've made the predictions, calculate the classification error and return it
## YOUR CODE HERE
num_of_mistakes = (prediction != data[target]).sum()/float(len(data))
return num_of_mistakes
evaluate_classification_error(my_decision_tree, test_data)
def print_stump(tree, name = 'root'):
split_name = tree['splitting_feature'] # split_name is something like 'term. 36 months'
if split_name is None:
print "(leaf, label: %s)" % tree['prediction']
return None
split_feature, split_value = split_name.split('.')
print ' %s' % name
print ' |---------------|----------------|'
print ' | |'
print ' | |'
print ' | |'
print ' [{0} == 0] [{0} == 1] '.format(split_name)
print ' | |'
print ' | |'
print ' | |'
print ' (%s) (%s)' \
% (('leaf, label: ' + str(tree['left']['prediction']) if tree['left']['is_leaf'] else 'subtree'),
('leaf, label: ' + str(tree['right']['prediction']) if tree['right']['is_leaf'] else 'subtree'))
print_stump(my_decision_tree)
print_stump(my_decision_tree['left'], my_decision_tree['splitting_feature'])
print_stump(my_decision_tree['left']['left'], my_decision_tree['left']['splitting_feature'])
print_stump(my_decision_tree['right'], my_decision_tree['splitting_feature'])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Stochastic estimation of the ELBO
Step2: Minibatches speed up computation
Step3: Running stochastic optimization
|
<ASSISTANT_TASK:>
Python Code:
import sys, os
import numpy
import time
sys.path.append(os.path.join(os.getcwd(),'..'))
import candlegp
from matplotlib import pyplot
import torch
from torch.autograd import Variable
%matplotlib inline
pyplot.style.use('ggplot')
import IPython
M = 50
def func(x):
return torch.sin(x * 3*3.14) + 0.3*torch.cos(x * 9*3.14) + 0.5 * torch.sin(x * 7*3.14)
X = torch.rand(10000, 1).double() * 2 - 1
Y = func(X) + torch.randn(10000, 1).double() * 0.2
pyplot.plot(X.numpy(), Y.numpy(), 'x')
D = X.size(1)
Xt = torch.linspace(-1.1, 1.1, 100).double().unsqueeze(1)
Yt = func(Xt)
k = candlegp.kernels.RBF(D,variance=torch.DoubleTensor([1.0])).double()
Z = X[:M].clone()
m = candlegp.models.SVGP(Variable(X), Variable(Y.unsqueeze(1)),
likelihood=candlegp.likelihoods.Gaussian(ttype=torch.DoubleTensor),
kern=k, Z=Z)
m
# ground_truth = m.compute_log_likelihood() # seems to take too long
evals = []
for i in range(100):
if i % 10 == 9:
print ('.', end='')
idxes = torch.randperm(X.size(0))[:100]
evals.append(m.compute_log_likelihood(Variable(X[idxes]), Variable(Y[idxes])).data[0])
pyplot.hist(evals)
#pyplot.axvline(ground_truth)
mbps = numpy.logspace(-2, -0.8, 7)
times = []
objs = []
for mbp in mbps:
minibatch_size = int(len(X) * mbp)
print (minibatch_size)
start_time = time.time()
evals = []
for i in range(20):
idxes = torch.randperm(X.size(0))[:minibatch_size]
evals.append(m.compute_log_likelihood(Variable(X[idxes]), Variable(Y[idxes])).data[0])
objs.append(evals)
# plt.hist(objs, bins = 100)
# plt.axvline(ground_truth, color='r')
times.append(time.time() - start_time)
f, (ax1, ax2) = pyplot.subplots(1, 2, figsize=(16, 6))
ax1.plot(mbps, times, 'x-')
ax1.set_xlabel("Minibatch proportion")
ax1.set_ylabel("Time taken")
ax2.plot(mbps, numpy.array(objs), 'kx')
ax2.set_xlabel("Minibatch proportion")
ax2.set_ylabel("ELBO estimates")
pX = Variable(torch.linspace(-1, 1, 100).unsqueeze(1).double())
pY, pYv = m.predict_y(pX)
pyplot.plot(X.numpy(), Y.numpy(), 'x')
line, = pyplot.plot(pX.data.numpy(), pY.data.numpy(), lw=1.5)
col = line.get_color()
pyplot.plot(pX.data.numpy(), (pY+2*pYv**0.5).data.numpy(), col, lw=1.5)
pyplot.plot(pX.data.numpy(), (pY-2*pYv**0.5).data.numpy(), col, lw=1.5)
pyplot.plot(m.Z.get().data.numpy(), numpy.zeros(m.Z.shape), 'k|', mew=2)
pyplot.title("Predictions before training")
logt = []
logf = []
st = time.time()
minibatch_size = 100
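# requires_grad on the inducing inputs Z is switched on only while the optimizer is built, then switched off again so Z stays fixed during training.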
m.Z.requires_grad = True
opt = torch.optim.Adam(m.parameters(), lr=0.01)
m.Z.requires_grad = False
for i in range(2000):
if i % 50 == 49:
print (i)
idxes = torch.randperm(X.size(0))[:minibatch_size]
opt.zero_grad()
obj = m(Variable(X[idxes]), Variable(Y[idxes]))
logf.append(obj.data[0])
obj.backward()
opt.step()
logt.append(time.time() - st)
if i%50 == 49:
IPython.display.clear_output(True)
pyplot.plot(-numpy.array(logf))
pyplot.xlabel('iteration')
pyplot.ylabel('ELBO')
pyplot.show()
pX = Variable(torch.linspace(-1, 1, 100).unsqueeze(1).double())
pY, pYv = m.predict_y(pX)
pyplot.plot(X.numpy(), Y.numpy(), 'x')
line, = pyplot.plot(pX.data.numpy(), pY.data.numpy(), lw=1.5)
col = line.get_color()
pyplot.plot(pX.data.numpy(), (pY+2*pYv**0.5).data.numpy(), col, lw=1.5)
pyplot.plot(pX.data.numpy(), (pY-2*pYv**0.5).data.numpy(), col, lw=1.5)
pyplot.plot(m.Z.get().data.numpy(), numpy.zeros(m.Z.shape), 'k|', mew=2)
pyplot.title("Predictions after training")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2 About the data
Step2: For convenience, we will rename all columns to upper case, so we don't have to remember what is upper or lower case in the future.
Step3: We will also change the index of the dataframe to be the Pokemon's name, and we will use some magic to make names look nicer.
Step4: We are ready to take a look at the data for the first time!
Step5: In doubt, [this](https
Step6: 3 Pre-processing data
Step7: Now, briefly describing the raw dataset.
Step8: Time to separate our features from the labels, and we're ready to train a simple model.
Step9: 3 Training and testing a model in a single dataset
Step10: Using the same dataset for training and testing our model, we get a remarkable accuracy score!
Step11: 4 Using training and test sets
Step12: Now, we will use one partition for training and the other for testing or evaluating model performance on previously unseen data.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
% matplotlib inline
from matplotlib import pyplot as plt
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
data = pd.read_csv('../data/pokemon.csv')
data.columns = data.columns.str.upper()
data = data.set_index('NAME')
data.index = data.index.str.replace(".*(?=Mega|Primal|Origin|Therian|Land|Incarnate)", "")
data = data.drop(['#'], axis=1)
most_powerful = data.sort_values('TOTAL', ascending=False)
most_powerful.head(n=3)
most_powerful_by_type = most_powerful.drop_duplicates(subset=['TYPE 1'], keep='first')
most_powerful_by_type
columns = ['ATTACK', 'DEFENSE', 'SPEED', 'TYPE 1']
data_clf = data[columns]
print("The dataset contains %s rows and %s columns." % data.shape)
print("The dataset columns are: %s."% data.columns.values)
data_clf.describe()
X = data_clf.drop(['TYPE 1'], axis=1)
y = data_clf['TYPE 1']
clf = DecisionTreeClassifier(random_state=0)
model_using_all_data = clf.fit(X, y)
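# Evaluating on the same data used for fitting: a deep decision tree can largely memorize the training set, so this accuracy is optimistically biased.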
y_pred = model_using_all_data.predict(X)
accuracy_using_all_data = accuracy_score(y, y_pred)
print("The accuracy score of the model is: %s." % accuracy_using_all_data)
results_using_all_data = data
results_using_all_data['PREDICTED'] = y_pred
failures = results_using_all_data['TYPE 1'] != results_using_all_data['PREDICTED']
results_using_all_data[failures].head(n=10)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
print("The training dataset contains %s rows and %s columns." % X_train.shape)
print("The test dataset contains %s rows and %s columns." % X_test.shape)
model_using_test_data = clf.fit(X_train, y_train)
y_pred = model_using_test_data.predict(X_test)
accuracy_using_test_data = accuracy_score(y_test, y_pred)
print("The accuracy score of the model for previously unseen data is: %s." % accuracy_using_test_data)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: So we had to just put the parentheses
Step2: Time taken to execute a cell
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
# importing the dataset we prepared and saved using Baseline 1 Notebook
ricep = pd.read_csv("/Users/macbook/Documents/BTP/Notebook/BTP/ricep.csv")
ricep.head()
ricep = ricep.drop(["Unnamed: 0"],axis=1)
ricep["phosphorus"] = ricep["phosphorus"]*10
ricep["value"] = ricep["Production"]/ricep["Area"]
ricep.head()
ricep.index[ ricep['ind_district'] == 'anantapur'].tolist()
ricep.index[ ricep['ind_district'] == 'anantapur' & ricep['Crop_Year'] == '2002' ]
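# The line above fails: & binds more tightly than ==, so each comparison needs its own parentheses (fixed below).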
ricep.index[ (ricep['ind_district'] == 'anantapur') & (ricep['Crop_Year'] == 2002) ].tolist()
a = np.empty((ricep.shape[0],1))*np.NAN
ricex = ricep.assign(test = a)
ricex.head()
%time
v = ricex.iloc[0,13]
v
d = v + 5
d
if pd.isnull(v):
v = 3
v+5
v
df = pd.DataFrame(np.arange(1,7).reshape(2,3),
columns = list('abc'),
index=pd.Series([2,5], name='b'))
df
x = 3
x += 2
x
s = "Akshansh"
s[-4]
# bx = False
# if bx: continue
# else: bx = True
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Code from the previous chapter
Step2: System objects
Step4: And we can encapsulate the code that runs the model in a function.
Step6: We can also encapsulate the code that plots the results.
Step7: Here's how we run it.
Step9: Proportional growth
Step10: I picked a death rate that seemed reasonable and then adjusted the birth rate to fit the data.
Step11: Here's what it looks like.
Step13: The model fits the data pretty well for the first 20 years, but not so well after that.
Step14: The name update_func refers to a function object.
Step15: Which we can confirm by checking its type.
Step17: run_simulation takes the update function as a parameter and calls it just like any other function.
Step18: Here's how we use it.
Step20: Remember not to put parentheses after update_func1. What happens if you try?
Step21: Here's how it works
Step22: Exercises
|
<ASSISTANT_TASK:>
Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
from pandas import read_html
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
table2 = tables[2]
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
un = table2.un / 1e9
un.head()
census = table2.census / 1e9
census.head()
t_0 = get_first_label(census)
t_end = get_last_label(census)
elapsed_time = t_end - t_0
p_0 = get_first_value(census)
p_end = get_last_value(census)
total_growth = p_end - p_0
annual_growth = total_growth / elapsed_time
system = System(t_0=t_0,
t_end=t_end,
p_0=p_0,
annual_growth=annual_growth)
def run_simulation1(system):
"""Runs the constant growth model.
system: System object
returns: TimeSeries
"""
results = TimeSeries()
results[system.t_0] = system.p_0
for t in linrange(system.t_0, system.t_end):
results[t+1] = results[t] + system.annual_growth
return results
def plot_results(census, un, timeseries, title):
"""Plot the estimates and the model.
census: TimeSeries of population estimates
un: TimeSeries of population estimates
timeseries: TimeSeries of simulation results
title: string
"""
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(timeseries, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title=title)
results = run_simulation1(system)
plot_results(census, un, results, 'Constant growth model')
def run_simulation2(system):
"""Run a model with proportional birth and death.
system: System object
returns: TimeSeries
"""
results = TimeSeries()
results[system.t_0] = system.p_0
for t in linrange(system.t_0, system.t_end):
births = system.birth_rate * results[t]
deaths = system.death_rate * results[t]
results[t+1] = results[t] + births - deaths
return results
system.death_rate = 0.01
system.birth_rate = 0.027
results = run_simulation2(system)
plot_results(census, un, results, 'Proportional model')
savefig('figs/chap06-fig01.pdf')
def update_func1(pop, t, system):
"""Compute the population next year.
pop: current population
t: current year
system: system object containing parameters of the model
returns: population next year
"""
births = system.birth_rate * pop
deaths = system.death_rate * pop
return pop + births - deaths
update_func1
type(update_func1)
def run_simulation(system, update_func):
"""Simulate the system using any update function.
system: System object
update_func: function that computes the population next year
returns: TimeSeries
"""
results = TimeSeries()
results[system.t_0] = system.p_0
for t in linrange(system.t_0, system.t_end):
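# Any function with the (pop, t, system) signature can be plugged in as update_func.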
results[t+1] = update_func(results[t], t, system)
return results
t_0 = get_first_label(census)
t_end = get_last_label(census)
p_0 = census[t_0]
system = System(t_0=t_0,
t_end=t_end,
p_0=p_0,
birth_rate=0.027,
death_rate=0.01)
results = run_simulation(system, update_func1)
plot_results(census, un, results, 'Proportional model, factored')
def update_func2(pop, t, system):
"""Compute the population next year.
pop: current population
t: current year
system: system object containing parameters of the model
returns: population next year
"""
net_growth = system.alpha * pop
return pop + net_growth
system.alpha = system.birth_rate - system.death_rate
results = run_simulation(system, update_func2)
plot_results(census, un, results, 'Proportional model, combined birth and death')
# Solution goes here
# Solution goes here
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Let me work through CSS Tutorial, while consulting Cascading Style Sheets - Wikipedia, the free encyclopedia.
Step6: Box model
Step7: Basic exercise
Step12: box-sizing
|
<ASSISTANT_TASK:>
Python Code:
from nbfiddle import Fiddle
# http://www.w3schools.com/css/tryit.asp?filename=trycss_default
Fiddle(
div_css = """
background-color: #d0e4fe;
h1 {
color: orange;
text-align: center;
}
p {
font-family: "Times New Roman";
font-size: 20px;
}
""",
html = """
<h1>My First CSS Example</h1>
<p>This is a paragraph.</p>
""")
# http://www.w3schools.com/css/tryit.asp?filename=trycss_inline-block_old
Fiddle(
div_css = """
.floating-box {
float: left;
width: 150px;
height: 75px;
margin: 10px;
border: 3px solid #73AD21;
}
.after-box {
clear: left;
border: 3px solid red;
}
""",
html = """
<h2>The Old Way - using float</h2>
<div class="floating-box">Floating box</div>
<div class="floating-box">Floating box</div>
<div class="floating-box">Floating box</div>
<div class="floating-box">Floating box</div>
<div class="floating-box">Floating box</div>
<div class="floating-box">Floating box</div>
<div class="floating-box">Floating box</div>
<div class="floating-box">Floating box</div>
<div class="after-box">Another box, after the floating boxes...</div>
,
)
# using normalize.css
normalize_css_url = "http://yui.yahooapis.com/3.18.0/build/cssnormalize-context/cssnormalize-context-min.css"
Fiddle(
    html = """
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/6/64/W3C_and_Internet_Explorer_box_models.svg/600px-W3C_and_Internet_Explorer_box_models.svg.png" style="width:300px">
""",
    csslibs = (normalize_css_url,)
)
%%html
<style>
</style>
<img src="https://upload.wikimedia.org/wikipedia/commons/6/6a/Johann_Sebastian_Bach.jpg" style="width: 200px; padding:5px; float:left;">
<p>Johann Sebastian Bach (31 March [O.S. 21 March] 1685 – 28 July 1750) was a German composer and musician
of the Baroque period. He enriched established German styles through his skill in counterpoint,
harmonic and motivic organisation, and the adaptation of rhythms, forms, and textures from abroad,
particularly from Italy and France. Bach's compositions include the Brandenburg Concertos, the Goldberg
Variations, the Mass in B minor, two Passions, and over three hundred cantatas of which around two hundred survive.
His music is revered for its technical command, artistic beauty, and intellectual depth.<p>
<p style="clear: right;">Bach's abilities as an organist were highly respected during his lifetime,
although he was not widely recognised as a great composer until a revival of interest and performances
of his music in the first half of the 19th century. He is now generally regarded as one of the greatest
composers of all time.</p>
# box-sizing
Fiddle(
    html = """
<div class="box-sizing-content-box">HELLO</div>
<div class="box-sizing-border-box">HELLO</div>
""",
    div_css = """
.box-sizing-content-box {
    box-sizing: content-box;
    width: 200px;
    padding: 30px;
    border: 5px solid red;
}
.box-sizing-border-box {
    box-sizing: border-box;
    width: 200px;
    padding: 30px;
    border: 5px solid blue;
}
""",
)
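# Quick arithmetic for the example above (my own note, not from the tutorial):
# with box-sizing: content-box the rendered width is width + 2*padding + 2*border
# = 200 + 2*30 + 2*5 = 270px, while with box-sizing: border-box the declared 200px
# already includes padding and border.
print("content-box total width:", 200 + 2 * 30 + 2 * 5, "px")
print("border-box total width:", 200, "px")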
# display: block, inline, none
Fiddle(
    html = """
<p class="inline">p.inline</p>
<p class="inline">p.inline</p>
<span class="block">span.block</span>
<span class="block">span.block</span>
<div class="none">none</div>
""",
    div_css = """
p.inline {
    display: inline;
}
span.block {
    display: block;
}
div.none {
    display: none;
}
""",
)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 1.3. Model Family
Step7: 1.4. Basic Approximations
Step8: 1.5. Prognostic Variables
Step9: 2. Key Properties --> Seawater Properties
Step10: 2.2. Eos Functional Temp
Step11: 2.3. Eos Functional Salt
Step12: 2.4. Eos Functional Depth
Step13: 2.5. Ocean Freezing Point
Step14: 2.6. Ocean Specific Heat
Step15: 2.7. Ocean Reference Density
Step16: 3. Key Properties --> Bathymetry
Step17: 3.2. Type
Step18: 3.3. Ocean Smoothing
Step19: 3.4. Source
Step20: 4. Key Properties --> Nonoceanic Waters
Step21: 4.2. River Mouth
Step22: 5. Key Properties --> Software Properties
Step23: 5.2. Code Version
Step24: 5.3. Code Languages
Step25: 6. Key Properties --> Resolution
Step26: 6.2. Canonical Horizontal Resolution
Step27: 6.3. Range Horizontal Resolution
Step28: 6.4. Number Of Horizontal Gridpoints
Step29: 6.5. Number Of Vertical Levels
Step30: 6.6. Is Adaptive Grid
Step31: 6.7. Thickness Level 1
Step32: 7. Key Properties --> Tuning Applied
Step33: 7.2. Global Mean Metrics Used
Step34: 7.3. Regional Metrics Used
Step35: 7.4. Trend Metrics Used
Step36: 8. Key Properties --> Conservation
Step37: 8.2. Scheme
Step38: 8.3. Consistency Properties
Step39: 8.4. Corrected Conserved Prognostic Variables
Step40: 8.5. Was Flux Correction Used
Step41: 9. Grid
Step42: 10. Grid --> Discretisation --> Vertical
Step43: 10.2. Partial Steps
Step44: 11. Grid --> Discretisation --> Horizontal
Step45: 11.2. Staggering
Step46: 11.3. Scheme
Step47: 12. Timestepping Framework
Step48: 12.2. Diurnal Cycle
Step49: 13. Timestepping Framework --> Tracers
Step50: 13.2. Time Step
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Step52: 14.2. Scheme
Step53: 14.3. Time Step
Step54: 15. Timestepping Framework --> Barotropic
Step55: 15.2. Time Step
Step56: 16. Timestepping Framework --> Vertical Physics
Step57: 17. Advection
Step58: 18. Advection --> Momentum
Step59: 18.2. Scheme Name
Step60: 18.3. ALE
Step61: 19. Advection --> Lateral Tracers
Step62: 19.2. Flux Limiter
Step63: 19.3. Effective Order
Step64: 19.4. Name
Step65: 19.5. Passive Tracers
Step66: 19.6. Passive Tracers Advection
Step67: 20. Advection --> Vertical Tracers
Step68: 20.2. Flux Limiter
Step69: 21. Lateral Physics
Step70: 21.2. Scheme
Step71: 22. Lateral Physics --> Momentum --> Operator
Step72: 22.2. Order
Step73: 22.3. Discretisation
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Step75: 23.2. Constant Coefficient
Step76: 23.3. Variable Coefficient
Step77: 23.4. Coeff Background
Step78: 23.5. Coeff Backscatter
Step79: 24. Lateral Physics --> Tracers
Step80: 24.2. Submesoscale Mixing
Step81: 25. Lateral Physics --> Tracers --> Operator
Step82: 25.2. Order
Step83: 25.3. Discretisation
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Step85: 26.2. Constant Coefficient
Step86: 26.3. Variable Coefficient
Step87: 26.4. Coeff Background
Step88: 26.5. Coeff Backscatter
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Step90: 27.2. Constant Val
Step91: 27.3. Flux Type
Step92: 27.4. Added Diffusivity
Step93: 28. Vertical Physics
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Step96: 30.2. Closure Order
Step97: 30.3. Constant
Step98: 30.4. Background
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Step100: 31.2. Closure Order
Step101: 31.3. Constant
Step102: 31.4. Background
Step103: 32. Vertical Physics --> Interior Mixing --> Details
Step104: 32.2. Tide Induced Mixing
Step105: 32.3. Double Diffusion
Step106: 32.4. Shear Mixing
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
Step108: 33.2. Constant
Step109: 33.3. Profile
Step110: 33.4. Background
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
Step112: 34.2. Constant
Step113: 34.3. Profile
Step114: 34.4. Background
Step115: 35. Uplow Boundaries --> Free Surface
Step116: 35.2. Scheme
Step117: 35.3. Embeded Seaice
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Step119: 36.2. Type Of Bbl
Step120: 36.3. Lateral Mixing Coef
Step121: 36.4. Sill Overflow
Step122: 37. Boundary Forcing
Step123: 37.2. Surface Pressure
Step124: 37.3. Momentum Flux Correction
Step125: 37.4. Tracers Flux Correction
Step126: 37.5. Wave Effects
Step127: 37.6. River Runoff Budget
Step128: 37.7. Geothermal Heating
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Step132: 40.2. Ocean Colour
Step133: 40.3. Extinction Depth
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Step135: 41.2. From Sea Ice
Step136: 41.3. Forced Mode Restoring
|
<ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'sandbox-1', 'ocean')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
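# Hypothetical illustration only - the real NASA-GISS values are left as TODOs
# throughout this notebook. A free-text property like the overview above would be
# filled with a single string, e.g.
#     DOC.set_value("Ocean component of the GISS coupled model (description goes here)")
# while ENUM properties accept one of the listed valid choices, e.g. DOC.set_value("OGCM")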
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First tell PyPSA that links can have multiple outputs by overriding the component_attrs. This can be done for as many buses as you need, using the attribute names bus2, bus3, bus4, ... (i.e. bus{i} for i = 2, 3, 4, 5, ...); a sketch of the extension pattern follows the overrides in the code below.
Step2: Add a constant electrical load
Step3: Add a constant demand for transport
Step4: Add a bus for Hydrogen storage.
Step5: Allow production of diesel from H2 and CO2 using Fischer-Tropsch
Step6: Direct air capture consumes electricity to take CO2 from the air to the underground store
Step7: Meet transport with diesel
Step8: Add a cheap and an expensive biomass generator.
Step9: Add a global CO$_2$ constraint.
Step10: How do the different stores in the system behave?
Step11: Let's have a look at the production
Step12: At all times, the amount of carbon is constant!
|
<ASSISTANT_TASK:>
Python Code:
import pypsa
import numpy as np
import matplotlib.pyplot as plt
override_component_attrs = pypsa.descriptors.Dict(
{k: v.copy() for k, v in pypsa.components.component_attrs.items()}
)
override_component_attrs["Link"].loc["bus2"] = [
"string",
np.nan,
np.nan,
"2nd bus",
"Input (optional)",
]
override_component_attrs["Link"].loc["bus3"] = [
"string",
np.nan,
np.nan,
"3rd bus",
"Input (optional)",
]
override_component_attrs["Link"].loc["efficiency2"] = [
"static or series",
"per unit",
1.0,
"2nd bus efficiency",
"Input (optional)",
]
override_component_attrs["Link"].loc["efficiency3"] = [
"static or series",
"per unit",
1.0,
"3rd bus efficiency",
"Input (optional)",
]
override_component_attrs["Link"].loc["p2"] = [
"series",
"MW",
0.0,
"2nd bus output",
"Output",
]
override_component_attrs["Link"].loc["p3"] = [
"series",
"MW",
0.0,
"3rd bus output",
"Output",
]
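# Sketch only (not used below): the same pattern extends to further outputs by
# adding bus{i} / efficiency{i} / p{i} rows for i = 4, 5, ... For example:
# for i in (4, 5):
#     override_component_attrs["Link"].loc[f"bus{i}"] = ["string", np.nan, np.nan, f"{i}th bus", "Input (optional)"]
#     override_component_attrs["Link"].loc[f"efficiency{i}"] = ["static or series", "per unit", 1.0, f"{i}th bus efficiency", "Input (optional)"]
#     override_component_attrs["Link"].loc[f"p{i}"] = ["series", "MW", 0.0, f"{i}th bus output", "Output"]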
n = pypsa.Network(override_component_attrs=override_component_attrs)
n.set_snapshots(range(10))
n.add("Bus", "bus")
n.add("Load", "load", bus="bus", p_set=1.0)
n.add("Bus", "transport")
n.add("Load", "transport", bus="transport", p_set=1.0)
n.add("Bus", "diesel")
n.add("Store", "diesel", bus="diesel", e_cyclic=True, e_nom=1000.0)
n.add("Bus", "hydrogen")
n.add("Store", "hydrogen", bus="hydrogen", e_cyclic=True, e_nom=1000.0)
# n.add("Load","hydrogen",
# bus="hydrogen",
# p_set=1.)
n.add("Link", "electrolysis", p_nom=2.0, efficiency=0.8, bus0="bus", bus1="hydrogen")
n.add(
"Link",
"FT",
p_nom=4,
bus0="hydrogen",
bus1="diesel",
bus2="co2 stored",
efficiency=1.0,
efficiency2=-1,
)
# minus sign because the sign convention is opposite to how fossil fuels are used:
# burning CH4 draws the CH4 store down and pushes the atmosphere store up
n.add("Carrier", "co2", co2_emissions=-1.0)
# this tracks CO2 in the atmosphere
n.add("Bus", "co2 atmosphere", carrier="co2")
# NB: can also be negative
n.add("Store", "co2 atmosphere", e_nom=1000, e_min_pu=-1, bus="co2 atmosphere")
# this tracks CO2 stored, e.g. underground
n.add("Bus", "co2 stored")
# NB: can also be negative
n.add("Store", "co2 stored", e_nom=1000, e_min_pu=-1, bus="co2 stored")
n.add(
"Link",
"DAC",
bus0="bus",
bus1="co2 stored",
bus2="co2 atmosphere",
efficiency=1,
efficiency2=-1,
p_nom=5.0,
)
n.add(
"Link",
"diesel car",
bus0="diesel",
bus1="transport",
bus2="co2 atmosphere",
efficiency=1.0,
efficiency2=1.0,
p_nom=2.0,
)
n.add("Bus", "gas")
n.add("Store", "gas", e_initial=50, e_nom=50, marginal_cost=20, bus="gas")
n.add(
"Link",
"OCGT",
bus0="gas",
bus1="bus",
bus2="co2 atmosphere",
p_nom_extendable=True,
efficiency=0.5,
efficiency2=1,
)
n.add(
"Link",
"OCGT+CCS",
bus0="gas",
bus1="bus",
bus2="co2 stored",
bus3="co2 atmosphere",
p_nom_extendable=True,
efficiency=0.4,
efficiency2=0.9,
efficiency3=0.1,
)
biomass_marginal_cost = [20.0, 50.0]
biomass_stored = [40.0, 15.0]
for i in range(2):
    n.add("Bus", "biomass" + str(i))
    n.add(
        "Store",
        "biomass" + str(i),
        bus="biomass" + str(i),
        e_nom_extendable=True,
        marginal_cost=biomass_marginal_cost[i],
        e_nom=biomass_stored[i],
        e_initial=biomass_stored[i],
    )
    # simultaneously empties and refills co2 atmosphere
    n.add(
        "Link",
        "biomass" + str(i),
        bus0="biomass" + str(i),
        bus1="bus",
        p_nom_extendable=True,
        efficiency=0.5,
    )
    n.add(
        "Link",
        "biomass+CCS" + str(i),
        bus0="biomass" + str(i),
        bus1="bus",
        bus2="co2 stored",
        bus3="co2 atmosphere",
        p_nom_extendable=True,
        efficiency=0.4,
        efficiency2=1.0,
        efficiency3=-1,
    )
# can go to -50, but at some point can't generate enough electricity for DAC and demand
target = -50
n.add(
"GlobalConstraint",
"co2_limit",
sense="<=",
carrier_attribute="co2_emissions",
constant=target,
)
n.lopf();
n.stores_t.e.plot(figsize=(9, 7), lw=3)
plt.tight_layout()
n.links_t.p0[["biomass+CCS0", "biomass+CCS1", "OCGT+CCS", "DAC"]].plot(
subplots=True, figsize=(9, 7)
)
plt.tight_layout()
n.stores_t.e[["co2 stored", "co2 atmosphere", "gas", "diesel"]].sum(axis=1)
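# Illustrative check (not in the original example): the total carbon held across
# the carbon-carrying stores should be identical at every snapshot, which is what
# the sum above displays.
carbon = n.stores_t.e[["co2 stored", "co2 atmosphere", "gas", "diesel"]].sum(axis=1)
print("carbon conserved across snapshots:", bool(np.allclose(carbon, carbon.iloc[0])))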
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Draw random points and display them
Step2: Distance of a path
Step3: Visualization
Step4: Going through all permutations
Step5: The tqdm module
Step6: Reversal heuristic
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
%matplotlib inline
import numpy
points = numpy.random.random((6, 2))
points
def distance_chemin(points, chemin):
    dist = 0
    for i in range(1, len(points)):
        dx, dy = points[chemin[i], :] - points[chemin[i-1], :]
        dist += (dx ** 2 + dy ** 2) ** 0.5
    # close the tour: edge from the last point back to the first one
    dx, dy = points[chemin[0], :] - points[chemin[-1], :]
    dist += (dx ** 2 + dy ** 2) ** 0.5
    return dist
distance_chemin(points, list(range(points.shape[0])))
import matplotlib.pyplot as plt
def plot_points(points, chemin):
    fig, ax = plt.subplots(1, 2, figsize=(8, 4))
    loop = list(chemin) + [chemin[0]]
    p = points[loop]
    ax[0].plot(points[:, 0], points[:, 1], 'o')
    ax[1].plot(p[:, 0], p[:, 1], 'o-')
    ax[1].set_title("dist=%1.2f" % distance_chemin(points, chemin))
    return ax
plot_points(points, list(range(points.shape[0])));
from itertools import permutations
def optimisation(points, chemin):
    dist = distance_chemin(points, chemin)
    best = chemin
    for perm in permutations(chemin):
        d = distance_chemin(points, perm)
        if d < dist:
            dist = d
            best = perm
    return best
res = optimisation(points, list(range(points.shape[0])))
plot_points(points, res);
from tqdm import tqdm
def optimisation(points, chemin):
    dist = distance_chemin(points, chemin)
    best = chemin
    loop = tqdm(permutations(chemin))
    for perm in loop:
        loop.set_description(str(perm))
        d = distance_chemin(points, perm)
        if d < dist:
            dist = d
            best = perm
    return best
res = optimisation(points, list(range(points.shape[0])))
plot_points(points, res);
def optimisation_retournement(points, chemin):
    dist = distance_chemin(points, chemin)
    best = chemin
    for i in range(1, len(chemin)):
        for j in range(i+1, len(chemin)):
            # reverse the sub-path, keep it if the tour gets shorter, undo otherwise
            chemin[i: j] = chemin[j-1: i-1: -1]
            d = distance_chemin(points, chemin)
            if d < dist:
                dist = d
            else:
                chemin[i: j] = chemin[j-1: i-1: -1]
    return chemin
res = optimisation_retournement(points, list(range(points.shape[0])))
plot_points(points, res);
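# Small comparison (my addition, not in the original notebook): check the exhaustive
# search against the reversal heuristic on this 6-point instance. Brute force scales
# as n!, while one pass of reversals costs O(n^2) distance evaluations.
brute = optimisation(points, list(range(points.shape[0])))
local = optimisation_retournement(points, list(range(points.shape[0])))
print("brute force:", distance_chemin(points, brute),
      "- reversal heuristic:", distance_chemin(points, local))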
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Advanced example
Step2: Progress monitoring and control using callback argument of fit method
Step3: Counting total iterations that will be used to explore all subspaces
|
<ASSISTANT_TASK:>
Python Code:
from skopt import BayesSearchCV
from sklearn.datasets import load_digits
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
X, y = load_digits(10, True)
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.75, test_size=.25, random_state=0)
# log-uniform: understand as search over p = exp(x) by varying x
opt = BayesSearchCV(
SVC(),
{
'C': (1e-6, 1e+6, 'log-uniform'),
'gamma': (1e-6, 1e+1, 'log-uniform'),
'degree': (1, 8), # integer valued parameter
'kernel': ['linear', 'poly', 'rbf'], # categorical parameter
},
n_iter=32,
cv=3
)
opt.fit(X_train, y_train)
print("val. score: %s" % opt.best_score_)
print("test score: %s" % opt.score(X_test, y_test))
from skopt import BayesSearchCV
from skopt.space import Real, Categorical, Integer
from sklearn.datasets import load_digits
from sklearn.svm import LinearSVC, SVC
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
X, y = load_digits(10, True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
# pipeline class is used as estimator to enable
# search over different model types
pipe = Pipeline([
('model', SVC())
])
# single categorical value of 'model' parameter is
# sets the model class
# We will get ConvergenceWarnings because the problem is not well-conditioned.
# But that's fine, this is just an example.
linsvc_search = {
'model': [LinearSVC(max_iter=1000)],
'model__C': (1e-6, 1e+6, 'log-uniform'),
}
# explicit dimension classes can be specified like this
svc_search = {
'model': Categorical([SVC()]),
'model__C': Real(1e-6, 1e+6, prior='log-uniform'),
'model__gamma': Real(1e-6, 1e+1, prior='log-uniform'),
'model__degree': Integer(1,8),
'model__kernel': Categorical(['linear', 'poly', 'rbf']),
}
opt = BayesSearchCV(
pipe,
[(svc_search, 20), (linsvc_search, 16)], # (parameter space, # of evaluations)
cv=3
)
opt.fit(X_train, y_train)
print("val. score: %s" % opt.best_score_)
print("test score: %s" % opt.score(X_test, y_test))
from skopt import BayesSearchCV
from sklearn.datasets import load_iris
from sklearn.svm import SVC
X, y = load_iris(True)
searchcv = BayesSearchCV(
SVC(gamma='scale'),
search_spaces={'C': (0.01, 100.0, 'log-uniform')},
n_iter=10,
cv=3
)
# callback handler
def on_step(optim_result):
    score = searchcv.best_score_
    print("best score: %s" % score)
    if score >= 0.98:
        print('Interrupting!')
        return True
searchcv.fit(X, y, callback=on_step)
from skopt import BayesSearchCV
from sklearn.datasets import load_iris
from sklearn.svm import SVC
X, y = load_iris(True)
searchcv = BayesSearchCV(
SVC(),
search_spaces=[
({'C': (0.1, 1.0)}, 19), # 19 iterations for this subspace
{'gamma':(0.1, 1.0)}
],
n_iter=23
)
print(searchcv.total_iterations)
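# My reading of the API (treat as an assumption): a (space, n) tuple pins n
# evaluations for that subspace, while a bare dict falls back to n_iter, so the
# print above should show 19 + 23 = 42.
print("expected:", 19 + 23)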
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 2 - Outline of the Assignment
Step4: Expected Output
Step6: Expected Output
Step8: Expected Output
Step10: Expected Output
Step12: Expected Output
Step14: Expected Output
Step16: Expected Output
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import h5py
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
# GRADED FUNCTION: zero_pad
def zero_pad(X, pad):
    """
    Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image,
    as illustrated in Figure 1.

    Argument:
    X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
    pad -- integer, amount of padding around each image on vertical and horizontal dimensions

    Returns:
    X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)
    """

    ### START CODE HERE ### (≈ 1 line)
    X_pad = np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)), mode='constant', constant_values=0)
    ### END CODE HERE ###

    return X_pad
np.random.seed(1)
x = np.random.randn(4, 3, 3, 2)
x_pad = zero_pad(x, 2)
print ("x.shape =", x.shape)
print ("x_pad.shape =", x_pad.shape)
print ("x[1,1] =", x[1,1])
print ("x_pad[1,1] =", x_pad[1,1])
fig, axarr = plt.subplots(1, 2)
axarr[0].set_title('x')
axarr[0].imshow(x[0,:,:,0])
axarr[1].set_title('x_pad')
axarr[1].imshow(x_pad[0,:,:,0])
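# Shape arithmetic for the cell above (illustration): padding a (4, 3, 3, 2) batch
# with pad = 2 on height and width gives (4, 3 + 2*2, 3 + 2*2, 2) = (4, 7, 7, 2),
# matching the printed x_pad.shape.
print("expected padded shape:", (4, 3 + 2 * 2, 3 + 2 * 2, 2))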
# GRADED FUNCTION: conv_single_step
def conv_single_step(a_slice_prev, W, b):
    """
    Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation
    of the previous layer.

    Arguments:
    a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
    W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
    b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)

    Returns:
    Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data
    """

    ### START CODE HERE ### (≈ 2 lines of code)
    # Element-wise product between a_slice_prev and W. Do not add the bias yet.
    s = a_slice_prev * W
    # Sum over all entries of the volume s.
    Z = np.sum(s)
    # Add bias b to Z. Cast b to a float() so that Z results in a scalar value.
    Z = Z + float(b)
    ### END CODE HERE ###

    return Z
np.random.seed(1)
a_slice_prev = np.random.randn(4, 4, 3)
W = np.random.randn(4, 4, 3)
b = np.random.randn(1, 1, 1)
Z = conv_single_step(a_slice_prev, W, b)
print("Z =", Z)
# GRADED FUNCTION: conv_forward
def conv_forward(A_prev, W, b, hparameters):
    """
    Implements the forward propagation for a convolution function

    Arguments:
    A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
    W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)
    b -- Biases, numpy array of shape (1, 1, 1, n_C)
    hparameters -- python dictionary containing "stride" and "pad"

    Returns:
    Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)
    cache -- cache of values needed for the conv_backward() function
    """

    ### START CODE HERE ###
    # Retrieve dimensions from A_prev's shape (≈1 line)
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape

    # Retrieve dimensions from W's shape (≈1 line)
    (f, f, n_C_prev, n_C) = W.shape

    # Retrieve information from "hparameters" (≈2 lines)
    stride = hparameters['stride']
    pad = hparameters['pad']

    # Compute the dimensions of the CONV output volume using the formula given above. Hint: use int() to floor. (≈2 lines)
    n_H = int((n_H_prev - f + 2 * pad) / stride + 1)
    n_W = int((n_W_prev - f + 2 * pad) / stride + 1)

    # Initialize the output volume Z with zeros. (≈1 line)
    Z = np.zeros(shape=(m, n_H, n_W, n_C))

    # Create A_prev_pad by padding A_prev
    A_prev_pad = zero_pad(A_prev, pad)

    for i in range(m):                          # loop over the batch of training examples
        a_prev_pad = A_prev_pad[i]              # Select ith training example's padded activation
        for h in range(n_H):                    # loop over vertical axis of the output volume
            for w in range(n_W):                # loop over horizontal axis of the output volume
                for c in range(n_C):            # loop over channels (= #filters) of the output volume
                    # Find the corners of the current "slice" (≈4 lines)
                    vert_start = stride * h
                    vert_end = vert_start + f
                    horiz_start = stride * w
                    horiz_end = horiz_start + f
                    # Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line)
                    a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
                    # Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈1 line)
                    Z[i, h, w, c] = np.sum(a_slice_prev[:, :, :] * W[:, :, :, c]) + b[0, 0, 0, c]
    ### END CODE HERE ###

    # Making sure your output shape is correct
    assert(Z.shape == (m, n_H, n_W, n_C))

    # Save information in "cache" for the backprop
    cache = (A_prev, W, b, hparameters)

    return Z, cache
np.random.seed(1)
A_prev = np.random.randn(10,4,4,3)
W = np.random.randn(2,2,3,8)
b = np.random.randn(1,1,1,8)
hparameters = {"pad" : 2,
"stride": 2}
Z, cache_conv = conv_forward(A_prev, W, b, hparameters)
print("Z's mean =", np.mean(Z))
print("Z[3,2,1] =", Z[3,2,1])
print("cache_conv[0][1][2][3] =", cache_conv[0][1][2][3])
# GRADED FUNCTION: pool_forward
def pool_forward(A_prev, hparameters, mode = "max"):
    """
    Implements the forward pass of the pooling layer

    Arguments:
    A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
    hparameters -- python dictionary containing "f" and "stride"
    mode -- the pooling mode you would like to use, defined as a string ("max" or "average")

    Returns:
    A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C)
    cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters
    """

    # Retrieve dimensions from the input shape
    (m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape

    # Retrieve hyperparameters from "hparameters"
    f = hparameters["f"]
    stride = hparameters["stride"]

    # Define the dimensions of the output
    n_H = int(1 + (n_H_prev - f) / stride)
    n_W = int(1 + (n_W_prev - f) / stride)
    n_C = n_C_prev

    # Initialize output matrix A
    A = np.zeros((m, n_H, n_W, n_C))

    ### START CODE HERE ###
    for i in range(m):                          # loop over the training examples
        for h in range(n_H):                    # loop on the vertical axis of the output volume
            for w in range(n_W):                # loop on the horizontal axis of the output volume
                for c in range(n_C):            # loop over the channels of the output volume
                    # Find the corners of the current "slice" (≈4 lines)
                    vert_start = h * stride
                    vert_end = vert_start + f
                    horiz_start = w * stride
                    horiz_end = horiz_start + f
                    # Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line)
                    a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c]
                    # Compute the pooling operation on the slice. Use an if statement to differentiate the modes. Use np.max/np.mean.
                    if mode == "max":
                        A[i, h, w, c] = np.max(a_prev_slice)
                    elif mode == "average":
                        A[i, h, w, c] = np.mean(a_prev_slice)
    ### END CODE HERE ###

    # Store the input and hparameters in "cache" for pool_backward()
    cache = (A_prev, hparameters)

    # Making sure your output shape is correct
    assert(A.shape == (m, n_H, n_W, n_C))

    return A, cache
np.random.seed(1)
A_prev = np.random.randn(2, 4, 4, 3)
hparameters = {"stride" : 2, "f": 3}
A, cache = pool_forward(A_prev, hparameters)
print("mode = max")
print("A =", A)
print()
A, cache = pool_forward(A_prev, hparameters, mode = "average")
print("mode = average")
print("A =", A)
def conv_backward(dZ, cache):
Implement the backward propagation for a convolution function
Arguments:
dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward(), output of conv_forward()
Returns:
dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev),
numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
dW -- gradient of the cost with respect to the weights of the conv layer (W)
numpy array of shape (f, f, n_C_prev, n_C)
db -- gradient of the cost with respect to the biases of the conv layer (b)
numpy array of shape (1, 1, 1, n_C)
### START CODE HERE ###
# Retrieve information from "cache"
(A_prev, W, b, hparameters) = cache
# Retrieve dimensions from A_prev's shape
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve dimensions from W's shape
(f, f, n_C_prev, n_C) = W.shape
# Retrieve information from "hparameters"
stride = hparameters['stride']
pad = hparameters['pad']
# Retrieve dimensions from dZ's shape
(m, n_H, n_W, n_C) = dZ.shape
# Initialize dA_prev, dW, db with the correct shapes
dA_prev = np.zeros((m, n_H_prev, n_W_prev, n_C_prev))
dW = np.zeros((f, f, n_C_prev, n_C))
db = np.zeros((1, 1, 1, n_C))
# Pad A_prev and dA_prev
A_prev_pad = zero_pad(A_prev, pad)
dA_prev_pad = zero_pad(dA_prev, pad)
for i in range(m): # loop over the training examples
# select ith training example from A_prev_pad and dA_prev_pad
a_prev_pad = A_prev_pad[i]
da_prev_pad = dA_prev_pad[i]
for h in range(n_H): # loop over vertical axis of the output volume
for w in range(n_W): # loop over horizontal axis of the output volume
for c in range(n_C): # loop over the channels of the output volume
# Find the corners of the current "slice"
vert_start = h * stride
vert_end = vert_start + f
horiz_start = w * stride
horiz_end = horiz_start + f
# Use the corners to define the slice from a_prev_pad
a_slice = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
# Update gradients for the window and the filter's parameters using the code formulas given above
da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]
dW[:,:,:,c] += a_slice * dZ[i, h, w, c]
db[:,:,:,c] += dZ[i, h, w, c]
# Set the ith training example's dA_prev to the unpaded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :])
dA_prev[i, :, :, :] = da_prev_pad[pad:-pad, pad:-pad, :]
### END CODE HERE ###
# Making sure your output shape is correct
assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev))
return dA_prev, dW, db
np.random.seed(1)
dA, dW, db = conv_backward(Z, cache_conv)
print("dA_mean =", np.mean(dA))
print("dW_mean =", np.mean(dW))
print("db_mean =", np.mean(db))
def create_mask_from_window(x):
Creates a mask from an input matrix x, to identify the max entry of x.
Arguments:
x -- Array of shape (f, f)
Returns:
mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x.
### START CODE HERE ### (≈1 line)
mask = x == np.max(x)
### END CODE HERE ###
return mask
np.random.seed(1)
x = np.random.randn(2,3)
mask = create_mask_from_window(x)
print('x = ', x)
print("mask = ", mask)
def distribute_value(dz, shape):
Distributes the input value in the matrix of dimension shape
Arguments:
dz -- input scalar
shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz
Returns:
a -- Array of size (n_H, n_W) for which we distributed the value of dz
### START CODE HERE ###
# Retrieve dimensions from shape (≈1 line)
(n_H, n_W) = shape
# Compute the value to distribute on the matrix (≈1 line)
average = dz / (n_H * n_W)
# Create a matrix where every entry is the "average" value (≈1 line)
a = np.ones((n_H, n_W)) * average
### END CODE HERE ###
return a
a = distribute_value(2, (2,2))
print('distributed value =', a)
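# With distribute_value completed as above, dz = 2 spread evenly over a 2x2
# window gives 0.5 in every cell: [[0.5, 0.5], [0.5, 0.5]]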
def pool_backward(dA, cache, mode = "max"):
Implements the backward pass of the pooling layer
Arguments:
dA -- gradient of cost with respect to the output of the pooling layer, same shape as A
cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev
### START CODE HERE ###
# Retrieve information from cache (≈1 line)
(A_prev, hparameters) = cache
# Retrieve hyperparameters from "hparameters" (≈2 lines)
stride = hparameters['stride']
f = hparameters['f']
# Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines)
m, n_H_prev, n_W_prev, n_C_prev = A_prev.shape
m, n_H, n_W, n_C = dA.shape
# Initialize dA_prev with zeros (≈1 line)
dA_prev = np.zeros(A_prev.shape)
for i in range(m): # loop over the training examples
# select training example from A_prev (≈1 line)
a_prev = A_prev[i]
for h in range(n_H): # loop on the vertical axis
for w in range(n_W): # loop on the horizontal axis
for c in range(n_C): # loop over the channels (depth)
# Find the corners of the current "slice" (≈4 lines)
vert_start = h * stride
vert_end = vert_start + f
horiz_start = w * stride
horiz_end = horiz_start + f
# Compute the backward propagation in both modes.
if mode == "max":
# Use the corners and "c" to define the current slice from a_prev (≈1 line)
a_prev_slice = a_prev[vert_start:vert_end, horiz_start:horiz_end, c]
# Create the mask from a_prev_slice (≈1 line)
mask = create_mask_from_window(a_prev_slice)
# Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += mask * dA[i, h, w, c]
elif mode == "average":
# Get the value a from dA (≈1 line)
da = dA[i, h, w, c]
# Define the shape of the filter as fxf (≈1 line)
shape = (f, f)
# Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += distribute_value(da, shape)
### END CODE ###
# Making sure your output shape is correct
assert(dA_prev.shape == A_prev.shape)
return dA_prev
np.random.seed(1)
A_prev = np.random.randn(5, 5, 3, 2)
hparameters = {"stride" : 1, "f": 2}
A, cache = pool_forward(A_prev, hparameters)
dA = np.random.randn(5, 4, 2, 2)
dA_prev = pool_backward(dA, cache, mode = "max")
print("mode = max")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
print()
dA_prev = pool_backward(dA, cache, mode = "average")
print("mode = average")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data preprocessing
Step2: Encoding the words
Step3: Encoding the labels
Step4: If you built labels correctly, you should see the next output.
Step5: Okay, a couple issues here. We seem to have one review with zero length, and the maximum review length is way too many steps for our RNN. Let's truncate to 200 steps. For reviews shorter than 200, we'll pad with 0s. For reviews longer than 200, we'll truncate them to the first 200 words.
Step6: Exercise
Step7: If you build features correctly, it should look like that cell output below.
Step8: Training, Validation, Test
Step9: With train, validation, and test fractions of 0.8, 0.1, 0.1, the final shapes should look like
Step10: For the network itself, we'll be passing in our 200 element long review vectors. Each batch will be batch_size vectors. We'll also be using dropout on the LSTM layer, so we'll make a placeholder for the keep probability.
Step11: Embedding
Step12: LSTM cell
Step13: RNN forward pass
Step14: Output
Step15: Validation accuracy
Step16: Batching
Step17: Training
Step18: Testing
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
with open('../sentiment-network/reviews.txt', 'r') as f:
reviews = f.read()
with open('../sentiment-network/labels.txt', 'r') as f:
labels = f.read()
reviews[:2000]
from string import punctuation
all_text = ''.join([c for c in reviews if c not in punctuation])
reviews = all_text.split('\n')
all_text = ' '.join(reviews)
words = all_text.split()
all_text[:2000]
words[:100]
reviews[:2]
# Create your dictionary that maps vocab words to integers here
# Start the indices at 1 so that 0 stays reserved for the padding token used later
vocab_to_int = dict((word, idx) for idx, word in enumerate(set(words), 1))
# Convert the reviews to integers, same shape as reviews list, but with integers
reviews_ints = []
for review in reviews:
r_int = []
for r_word in review.split():
r_int.append(vocab_to_int[r_word])
reviews_ints.append(r_int)
# Convert labels to 1s and 0s for 'positive' and 'negative'
labels = labels.split('\n')
int_labels = []
for label in labels:
if label == 'positive':
int_labels.append(1)
else:
int_labels.append(0)
labels = int_labels
from collections import Counter
review_lens = Counter([len(x) for x in reviews_ints])
print("Zero-length reviews: {}".format(review_lens[0]))
print("Maximum review length: {}".format(max(review_lens)))
# Filter out the review with 0 length, keeping the labels aligned
non_zero_idx = [ii for ii, review in enumerate(reviews_ints) if len(review) > 0]
reviews_ints = [reviews_ints[ii] for ii in non_zero_idx]
labels = [labels[ii] for ii in non_zero_idx]
seq_len = 200
def parse_single_review(review, max_allowed_len = seq_len):
if len(review) >= max_allowed_len:
return review[:max_allowed_len]
else:
required_zeros = max_allowed_len - len(review)
left_part_review = [int(i) for i in np.zeros(required_zeros)]
return left_part_review + review
features = np.array([parse_single_review(review) for review in reviews_ints])
features[:10,:100]
from sklearn.model_selection import train_test_split
split_frac = 0.8
train_x, val_x, train_y, val_y = train_test_split(features, labels, test_size= 1 - split_frac, random_state=42)
#train_x, val_x =
#train_y, val_y =
val_x, test_x, val_y, test_y = train_test_split(val_x, val_y, test_size = 0.5, random_state=42)
#val_x, test_x =
#val_y, test_y =
train_y = np.array(train_y)
val_y = np.array(val_y)
test_y = np.array(test_y)
print("\t\t\tFeature Shapes:")
print("Train set: \t\t{}".format(train_x.shape),
"\nValidation set: \t{}".format(val_x.shape),
"\nTest set: \t\t{}".format(test_x.shape))
lstm_size = 256
lstm_layers = 2
batch_size = 500
learning_rate = 0.001
n_words = len(vocab_to_int) + 1  # +1 to account for the padding token 0
# Create the graph object
graph = tf.Graph()
# Add nodes to the graph
with graph.as_default():
inputs_ = tf.placeholder(tf.int32, (None, seq_len), name='inputs')
labels_ = tf.placeholder(tf.int32, (None, None), name='labels')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Size of the embedding vectors (number of units in the embedding layer)
embed_size = 300
n_words = len(vocab_to_int) + 1  # +1 to account for the padding token 0
with graph.as_default():
#Create embedding lookup table
#Num vocabs = len(vocab_to_int)
embedding = tf.Variable(tf.truncated_normal((n_words, embed_size), stddev=0.1))
#Perform embedding lookup
embed = tf.nn.embedding_lookup(params=embedding, ids=inputs_)
with graph.as_default():
def build_cell():
# Your basic LSTM cell
lstm = tf.contrib.rnn.BasicLSTMCell(num_units=lstm_size)
# Add dropout to the cell
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
return drop
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([build_cell() for i in range(lstm_layers)])
# Getting an initial state of all zeros
initial_state = cell.zero_state(batch_size, tf.float32)
with graph.as_default():
# Build the actual LSTM structure for the reviews (now 200 words each).
# The inputs must be the embeddings!
outputs, final_state = tf.nn.dynamic_rnn(cell=cell, inputs=embed, initial_state=initial_state)
with graph.as_default():
# Take only the last column (the output of the last cell) for each batch and apply a sigmoid (probability that the review is positive)
predictions = tf.contrib.layers.fully_connected(outputs[:, -1], 1, activation_fn=tf.sigmoid)
cost = tf.losses.mean_squared_error(labels_, predictions)
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
with graph.as_default():
correct_pred = tf.equal(tf.cast(tf.round(predictions), tf.int32), labels_)
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
def get_batches(x, y, batch_size=100):
n_batches = len(x)//batch_size
x, y = x[:n_batches*batch_size], y[:n_batches*batch_size]
for ii in range(0, len(x), batch_size):
yield x[ii:ii+batch_size], y[ii:ii+batch_size]
epochs = 10
with graph.as_default():
saver = tf.train.Saver()
with tf.Session(graph=graph) as sess:
sess.run(tf.global_variables_initializer())
iteration = 1
for e in range(epochs):
state = sess.run(initial_state)
for ii, (x, y) in enumerate(get_batches(train_x, train_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 0.5,
initial_state: state}
loss, state, _ = sess.run([cost, final_state, optimizer], feed_dict=feed)
if iteration%5==0:
print("Epoch: {}/{}".format(e, epochs),
"Iteration: {}".format(iteration),
"Train loss: {:.3f}".format(loss))
if iteration%25==0:
val_acc = []
val_state = sess.run(cell.zero_state(batch_size, tf.float32))
for x, y in get_batches(val_x, val_y, batch_size):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: val_state}
batch_acc, val_state = sess.run([accuracy, final_state], feed_dict=feed)
val_acc.append(batch_acc)
print("Val acc: {:.3f}".format(np.mean(val_acc)))
iteration +=1
saver.save(sess, "checkpoints/sentiment.ckpt")
test_acc = []
with tf.Session(graph=graph) as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
test_state = sess.run(cell.zero_state(batch_size, tf.float32))
for ii, (x, y) in enumerate(get_batches(test_x, test_y, batch_size), 1):
feed = {inputs_: x,
labels_: y[:, None],
keep_prob: 1,
initial_state: test_state}
batch_acc, test_state = sess.run([accuracy, final_state], feed_dict=feed)
test_acc.append(batch_acc)
print("Test accuracy: {:.3f}".format(np.mean(test_acc)))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Are we underfitting?
Step2: py
Step3: ```py
Step5: ...and load our fine-tuned weights.
Step6: We're going to be training a number of iterations without dropout, so it would be best for us to pre-calculate the input to the fully connected layers - i.e. the Flatten() layer.
Step7: Now we can use the exact same approach to creating features as we used when we created the linear model from the imagenet predictions in the last lesson - it's only the model that has changed. As you're seeing, there's a fairly small number of "recipes" that can get us a long way!
Step8: For our new fully connected model, we'll create it using the exact same architecture as the last layers of VGG 16, so that we can conveniently copy pre-trained weights over from that model. However, we'll set the dropout layer's p values to zero, so as to effectively remove dropout.
Step9: And fit the model in the usual way
Step10: Reducing overfitting
Step11: Let's take a look at how this generator changes a single image (the details of this code don't matter much, but feel free to read the comments and keras docs to understand the details if you're interested).
Step12: As you can see below, there's no magic to data augmentation - it's a very intuitive approach to generating richer input data. Generally speaking, your intuition should be a good guide to appropriate data augmentation, although it's a good idea to test your intuition by checking the results of different augmentation approaches.
Step13: Adding data augmentation
Step14: When using data augmentation, we can't pre-compute our convolutional layer features, since randomized changes are being made to every input image. That is, even if the training process sees the same image multiple times, each time it will have undergone different data augmentation, so the results of the convolutional layers will be different.
Step15: Now we can compile, train, and save our model as usual - note that we use fit_generator() since we want to pull random images from the directories on every batch.
Step16: Batch normalization
|
<ASSISTANT_TASK:>
Python Code:
from theano.sandbox import cuda
%matplotlib inline
from importlib import reload
import utils; reload(utils)
from utils import *
from __future__ import division, print_function
#path = "data/dogscats/sample/"
path = "data/dogscats/"
model_path = path + 'models/'
if not os.path.exists(model_path): os.mkdir(model_path)
batch_size=32
??vgg_ft
??Vgg16.ft
model = vgg_ft(2)
model.load_weights(model_path+'finetune3.h5')
layers = model.layers
# find the last convolution layer
last_conv_idx = [index for index,layer in enumerate(layers)
if type(layer) is Convolution2D][-1]
last_conv_idx
layers[last_conv_idx]
conv_layers = layers[:last_conv_idx+1]
conv_model = Sequential(conv_layers)
# Dense layers - also known as fully connected or 'FC' layers
fc_layers = layers[last_conv_idx+1:]
batches = get_batches(path+'train', shuffle=False, batch_size=batch_size)
val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)
val_classes = val_batches.classes
trn_classes = batches.classes
val_labels = onehot(val_classes)
trn_labels = onehot(trn_classes)
val_features = conv_model.predict_generator(val_batches, val_batches.nb_sample)
trn_features = conv_model.predict_generator(batches, batches.nb_sample)
save_array(model_path + 'train_convlayer_features.bc', trn_features)
save_array(model_path + 'valid_convlayer_features.bc', val_features)
trn_features = load_array(model_path+'train_convlayer_features.bc')
val_features = load_array(model_path+'valid_convlayer_features.bc')
trn_features.shape
# Copy the weights from the pre-trained model.
# NB: Since we're removing dropout, we want to half the weights
def proc_wgts(layer): return [o/2 for o in layer.get_weights()]
# Such a finely tuned model needs to be updated very slowly!
opt = RMSprop(lr=0.000001, rho=0.7)
def get_fc_model():
model = Sequential([
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dense(4096, activation='relu'),
Dropout(0.),
Dense(4096, activation='relu'),
Dropout(0.),
Dense(2, activation='softmax')
])
for l1,l2 in zip(model.layers, fc_layers): l1.set_weights(proc_wgts(l2))
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
return model
fc_model = get_fc_model()
fc_model.fit(trn_features, trn_labels, nb_epoch=8,
batch_size=batch_size, validation_data=(val_features, val_labels))
fc_model.save_weights(model_path+'no_dropout.h5')
fc_model.load_weights(model_path+'no_dropout.h5')
# dim_ordering='tf' uses tensorflow dimension ordering,
# which is the same order as matplotlib uses for display.
# Therefore when just using for display purposes, this is more convenient
gen = image.ImageDataGenerator(rotation_range=10, width_shift_range=0.1,
height_shift_range=0.1, shear_range=0.15, zoom_range=0.1,
channel_shift_range=10., horizontal_flip=True, dim_ordering='tf')
# Create a 'batch' of a single image
img = np.expand_dims(ndimage.imread('data/dogscats/test/7.jpg'),0)
# Request the generator to create batches from this image
aug_iter = gen.flow(img)
# Get eight examples of these augmented images
aug_imgs = [next(aug_iter)[0].astype(np.uint8) for i in range(8)]
# The original
plt.imshow(img[0])
# Augmented data
plots(aug_imgs, (20,7), 2)
# Ensure that we return to theano dimension ordering
K.set_image_dim_ordering('th')
gen = image.ImageDataGenerator(rotation_range=15, width_shift_range=0.1,
height_shift_range=0.1, zoom_range=0.1, horizontal_flip=True)
batches = get_batches(path+'train', gen, batch_size=batch_size)
# NB: We don't want to augment or shuffle the validation set
val_batches = get_batches(path+'valid', shuffle=False, batch_size=batch_size)
fc_model = get_fc_model()
for layer in conv_model.layers: layer.trainable = False
# Look how easy it is to connect two models together!
conv_model.add(fc_model)
conv_model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=8,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
conv_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=3,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
conv_model.save_weights(model_path + 'aug1.h5')
conv_model.load_weights(model_path + 'aug1.h5')
conv_layers[-1].output_shape[1:]
def get_bn_layers(p):
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dense(4096, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(4096, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(1000, activation='softmax')
]
def load_fc_weights_from_vgg16bn(model):
"Load weights for model from the dense layers of the Vgg16BN model."
# See imagenet_batchnorm.ipynb for info on how the weights for
# Vgg16BN can be generated from the standard Vgg16 weights.
from vgg16bn import Vgg16BN
vgg16_bn = Vgg16BN()
_, fc_layers = split_at(vgg16_bn.model, Convolution2D)
copy_weights(fc_layers, model.layers)
p=0.6
bn_model = Sequential(get_bn_layers(0.6))
load_fc_weights_from_vgg16bn(bn_model)
def proc_wgts(layer, prev_p, new_p):
scal = (1-prev_p)/(1-new_p)
return [o*scal for o in layer.get_weights()]
for l in bn_model.layers:
if type(l)==Dense: l.set_weights(proc_wgts(l, 0.5, 0.6))
bn_model.pop()
for layer in bn_model.layers: layer.trainable=False
bn_model.add(Dense(2,activation='softmax'))
bn_model.compile(Adam(), 'categorical_crossentropy', metrics=['accuracy'])
bn_model.fit(trn_features, trn_labels, nb_epoch=8, validation_data=(val_features, val_labels))
bn_model.save_weights(model_path+'bn.h5')
bn_model.load_weights(model_path+'bn.h5')
bn_layers = get_bn_layers(0.6)
bn_layers.pop()
bn_layers.append(Dense(2,activation='softmax'))
final_model = Sequential(conv_layers)
for layer in final_model.layers: layer.trainable = False
for layer in bn_layers: final_model.add(layer)
for l1,l2 in zip(bn_model.layers, bn_layers):
l2.set_weights(l1.get_weights())
final_model.compile(optimizer=Adam(),
loss='categorical_crossentropy', metrics=['accuracy'])
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=1,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final1.h5')
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
final_model.save_weights(model_path + 'final2.h5')
final_model.optimizer.lr=0.001
final_model.fit_generator(batches, samples_per_epoch=batches.nb_sample, nb_epoch=4,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
bn_model.save_weights(model_path + 'final3.h5')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 1
Step2: Exercise 2
Step3: Exercise 3
|
<ASSISTANT_TASK:>
Python Code:
# This is a configuration step for the exercise. Please run it before calculating the derivative!
import numpy as np
import matplotlib.pyplot as plt
# Show the plots in the Notebook.
plt.switch_backend("nbagg")
#################################################################
# IMPLEMENT THE CHEBYSHEV DERIVATIVE MATRIX METHOD HERE!
#################################################################
#################################################################
# IMPLEMENT YOUR SOLUTION HERE!
#################################################################
#################################################################
# PLOT YOUR SOLUTION HERE!
#################################################################
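# --- Illustrative sketch (not the course's reference solution) ---
# One standard way to approach the exercise: build the Chebyshev differentiation
# matrix on the Gauss-Lobatto points x_j = cos(j*pi/N) (Trefethen's "cheb"
# construction), then test it on a function with a known derivative.
# The function name and the test function below are assumptions for illustration.
def get_cheby_matrix(nx):
    # Chebyshev-Gauss-Lobatto collocation points
    x = np.cos(np.pi * np.arange(nx + 1) / nx)
    c = np.ones(nx + 1)
    c[0] = 2.0
    c[-1] = 2.0
    c *= (-1.0) ** np.arange(nx + 1)
    X = np.tile(x, (nx + 1, 1)).T                      # X[i, j] = x_i
    dX = X - X.T                                       # dX[i, j] = x_i - x_j
    D = np.outer(c, 1.0 / c) / (dX + np.eye(nx + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                        # diagonal: rows sum to zero
    return D, x

# Differentiate f(x) = sin(pi x) and compare with the analytical derivative
nx = 64
D, x = get_cheby_matrix(nx)
f = np.sin(np.pi * x)
plt.plot(x, D @ f, 'b-', label='Chebyshev derivative')
plt.plot(x, np.pi * np.cos(np.pi * x), 'r--', label='analytical')
plt.legend()
plt.show()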
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Start from a simple linear regression
Step2: As mentioned before, the sklearn LinearRegression commands can be used, except that the input data are no longer arrays. Instead, the data live in a CSV file, so you can give a file name instead of X and y arrays.
Step3: All other operations now use the same commands as the original LinearRegression method, so you can predict for new input data. For now the additional input data is not a file; this will be updated to use a file later on. After that, you can specify the training data in fit() and the test data in predict().
Step4: Investigating Formats of Data Sheets
Step5: Grid search
|
<ASSISTANT_TASK:>
Python Code:
from importlib import reload
import sklearn.linear_model
import pandas as pd
import numpy as np
from poodle import linear_model
reload( linear_model)
ml = linear_model.LinearRegression()
ml.fit('sheet/xy_pdl.csv')
ml.predict( 'sheet/x_pdl.csv', 'sheet/yp_pdl.csv')
linear_model.read_csv( 'sheet/xy_pdl.csv')
linear_model.read_csv( 'sheet/yp_pdl.csv')
reload( linear_model)
gs = linear_model.GridSearchCV()
gs.fit( 'sheet/xy_pdl.csv', 'sheet/gs_pdl.csv')
linear_model.read_csv( 'sheet/gs_pdl.csv')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step3: Download block_events and blocked_users
Step5: Download NPA warnings
Step7: Download Long term Users
Step9: Download Gender
Step10: Onionize all_blocked_user
|
<ASSISTANT_TASK:>
Python Code:
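# Note: query_hive_ssh, query_analytics_store and clean_and_filter are
# project-specific helpers assumed to be defined earlier in the notebook;
# they are not part of any standard library.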
import os
import time
import multiprocessing as mp
import numpy as np
import pandas as pd

def cf_helper(path, cols, k = 5):
df = pd.read_csv(path, sep = '\t', quoting = 3, encoding = 'utf-8', header = None, usecols=range(len(cols)))
if df.shape[0] ==0:
return pd.DataFrame(columns = cols)
if df.shape[1] != len(cols):
print(path)
print(df.shape)
return pd.DataFrame(columns = cols)
df.columns = cols
df = df.assign(key = lambda x: np.random.randint(0, high=5*k, size=x.shape[0]))
dfs = [e[1] for e in df.groupby('key')]
p = mp.Pool(k)
dfs = p.map(clean_and_filter, dfs)
p.close()
p.join()
return pd.concat(dfs)
def clean_and_filter_parallel(d, k = 7):
indir = d['raw_local_path']
outdir = d['clean_local_path']
os.system("rm -rf %s" % outdir)
os.system("mkdir %s" % outdir)
cols = d['cols']
files = []
for root, dirnames, filenames in os.walk(indir):
for filename in filenames:
if '_0' in filename:
files.append(os.path.join(root, filename))
for i, file in enumerate(files):
df = cf_helper(file, cols, k = k)
del df['key']
df.to_csv(os.path.join(outdir, "chunk_%d.tsv" % i), sep = '\t', index = False)
for d in datasets:
print(d['raw_local_path'])
clean_and_filter_parallel(d)
query =
SELECT
*
FROM
enwiki.block_events
block_events_df = query_hive_ssh(query, '../../data/block_events.tsv', priority = True, quoting=3, delete=False)
block_events_df.columns = [c.split('.')[1] for c in block_events_df.columns]
query =
SELECT
*
FROM
enwiki.blocked_user
blocked_user_df = query_hive_ssh(query, '../../data/blocked_user.tsv', priority = True, quoting=3, delete=False)
blocked_user_df.columns = [c.split('.')[1] for c in blocked_user_df.columns]
query =
SELECT
*
FROM
enwiki.npa_warnings
npa_warnings_df = query_hive_ssh(query, '../../data/npa_warnings.tsv', priority = True, quoting=3, delete=False)
npa_warnings_df.columns = [c.split('.')[1] for c in npa_warnings_df.columns]
query =
SELECT
user_text,
COUNT(*) AS num_days
FROM
(SELECT
user_text,
day
FROM
(SELECT
rev_user_text AS user_text,
SUBSTR(rev_timestamp,0,8) AS day
FROM
enwiki.revision
WHERE
rev_user != 0
AND rev_timestamp <= '2015-01-01'
) a
GROUP BY
user_text,
day ) b
GROUP BY
user_text
HAVING
COUNT(*) > 7
long_term_users_df = query_hive_ssh(query, '../../data/long_term_users.tsv', priority = True, quoting=3, delete=False)
## Annotate users by gender
query =
SELECT
user_id,
user_name as user_text,
up_value as gender
FROM
enwiki.user_properties p,
enwiki.user u
WHERE
p.up_user = u.user_id
AND up_property = 'gender'
d_gender = query_analytics_store(query, {})
d_gender.to_csv('../../data/genders.tsv', sep = '\t', index = False)
block_events_df = pd.read_csv('../../data/block_events.tsv', sep = "\t")
block_events_df.columns = [c.split('.')[1] for c in block_events_df.columns]
nss = ['user', 'article']
rel_path = '../../data/samples'
for ns in nss:
infile = os.path.join(rel_path, ns, 'clean', 'all_blocked_user.tsv')
out_dir = os.path.join(rel_path, ns, 'clean', 'blocked_user_onion')
df = pd.read_csv(infile, sep = '\t')
users = list(set(df['user_text']))
print(len(users))
k_prev = 0
ks = [5, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 150, 200, 250, 300, 500, 1000]
dfs = {k:[] for k in ks}
t1 = time.time()
for i, user in enumerate(users):
if i % 1000 ==0:
print (i)
print(time.time()-t1)
t1 = time.time()
df_user = df[df['user_text'] == user].sort_values(by='rev_timestamp')
if df_user.shape[0] == 0:
continue
block_events_df_user = block_events_df[block_events_df['user_text']==user]
seen_ids = set()
for i,r in block_events_df_user.iterrows():
ts = r['timestamp']
for k in ks:
df_user_pre = df_user[df_user['rev_timestamp'] <= ts][-k:]
if df_user_pre.shape[0] > 0:
df_user_pre = df_user_pre[df_user_pre['rev_id'].apply(lambda x: x not in seen_ids )]
if df_user_pre.shape[0] > 0:
seen_ids.update(tuple(df_user_pre['rev_id']))
dfs[k].append(df_user_pre)
df_user_post = df_user[df_user['rev_timestamp'] > ts][:k]
if df_user_post.shape[0] > 0:
df_user_post = df_user_post[df_user_post['rev_id'].apply(lambda x: x not in seen_ids ) ]
if df_user_post.shape[0] > 0:
seen_ids.update(tuple(df_user_post['rev_id']))
dfs[k].append(df_user_post)
dfs = {k: pd.concat(v) for k,v in dfs.items()}
sizes = [(k, len(v)) for k,v in dfs.items()]
sizes.sort(key=lambda x: x[0])
print(sizes)
os.system('rm -rf %s' % out_dir)
os.system('mkdir %s' % out_dir)
for k, v in dfs.items():
v.iloc[np.random.permutation(len(v))].to_csv(out_dir +'/%d.tsv' % k, sep = '\t', index=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: TSNE
|
<ASSISTANT_TASK:>
Python Code:
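# Note: sessions_to_dataframe, preprocess_data, data_to_machine_learning_examples,
# create_model_paper and the numpy/matplotlib imports (np, plt) are assumed to be
# defined in earlier cells of the notebook.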
df_train = sessions_to_dataframe(training_sessions)
df_val = sessions_to_dataframe(validation_sessions)
df_train.head()
df_train = preprocess_data(df_train)
df_val = preprocess_data(df_val)
#### SPECIAL CASE #####
# There isnt any XButton data in the validation set so we better drop this column for the training set
# if we want to have the same number of features in both sets
df_train = df_train.drop(['XButton'], axis = 1)
#### SPECIAL CASE #####
df_train.head()
seq_size = 300
train_x, train_y = data_to_machine_learning_examples(df_train, seq_size)
print('[*] Generated traning examples {} and labels {}'.format(train_x.shape, train_y.shape))
val_x, val_y = data_to_machine_learning_examples(df_val, seq_size)
print('[*] Generated validation examples {} and labels {}'.format(val_x.shape, val_y.shape))
def print_model(model):
print("[*] Sequential model created with the following layers:")
for layer in model.layers:
print("{:30}{} -> {}".format(layer.name, layer.input_shape, layer.output_shape))
from keras.models import load_model
from keras.callbacks import ModelCheckpoint
from keras.callbacks import ReduceLROnPlateau
from keras.callbacks import TensorBoard
from keras.optimizers import Adam
epochs = 200
batch_size = 30
learning_rate = 0.0001
batch_norm_momentum = 0.2
n_classes = 10
data_point_dimensionality = 13
# model = load_model('model/model_18.h5')
model = create_model_paper(input_shape = (seq_size, data_point_dimensionality),
classes = n_classes,
batch_norm_momentum = batch_norm_momentum,
l2_regularization = 0.01)
optimizer = Adam(lr=learning_rate)
model.compile(loss='categorical_crossentropy', optimizer=optimizer, metrics=['accuracy'])
cb_check = ModelCheckpoint('model/checkpoint', monitor='val_loss', verbose=1, period=30)
cb_reducelr = ReduceLROnPlateau(verbose=1)
cb_tensorboard = TensorBoard(log_dir='./logs', histogram_freq=30, write_graph=True)
hist = model.fit(train_x, train_y,
batch_size, epochs, 2,
validation_data=(val_x, val_y),
callbacks = [cb_reducelr])
# callbacks =[cb_check, cb_reducelr, cb_tensorboard])
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.plot(hist.history['acc'])
plt.plot(hist.history['val_acc'])
from keras.models import load_model
model = load_model('model/model_18.h5')
def print_model(model):
print("[*] Sequential model created with the following layers:")
for layer in model.layers:
print("{:30}{} -> {}".format(layer.name, layer.input_shape, layer.output_shape))
print_model(model)
from keras.models import Model
layer_name = 'global_average_pooling1d_1'
intermediate_layer_model = Model(inputs=model.input,
outputs=model.get_layer(layer_name).output)
intermediate_output = intermediate_layer_model.predict(train_x)
y_data = model.predict(train_x)
intermediate_output.shape
y_data_nums = [np.argmax(row) for row in y_data]
from sklearn.manifold import TSNE
tsne_model = TSNE(n_components=2, random_state=0)
np.set_printoptions(suppress=True)
result = tsne_model.fit_transform(intermediate_output)
print(result)
import seaborn as sns
sns.set(style="white", color_codes=True)
g = sns.jointplot(x=result[:,0], y=result[:,1])
plt.figure(1, figsize=(12, 10))
plt.scatter(result[:,0], result[:,1], c=y_data_nums, cmap=plt.cm.get_cmap("jet"))
# plt.scatter(result[:,0], result[:,1])
plt.colorbar(ticks=range(10))
plt.clim(-0.5, 9.5)
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The shape of the 4D tensor corresponding to the weight matrix W is
Step2: Note that we use the same weight initialization formula as with the MLP. Weights are sampled randomly from a uniform distribution in the range [-1/fan-in, 1/fan-in], where fan-in is the number of inputs to a hidden unit. For MLPs, this was the number of units in the layer below. For CNNs however, we have to take into account the number of input feature maps and the size of the receptive fields.
Step3: RandomState.uniform(low=0.0, high=1.0, size=None)
Step4: We chose to have only 2 filters, so 2 bias terms need to be initialized.
Step5: nnet.conv2d
Step6: tensor.nnet.sigmoid(x)
Step7: 2. Testing ConvOp on an image
Step8: <img src="images/figure_3.png">
Step11: theano.tensor.signal.downsample.max_pool_2d(input, ds, ignore_border=None, st=None, padding=(0, 0), mode='max')
Step13: Notice that when initializing the weight values, the fan-in is determined by the size of the receptive fields and the number of input feature maps.
Step18: The class uses tanh as its activation function by default. This choice is supported by the results presented in the paper Performance Analysis of Various Activation Functions in Generalized MLP Architectures of Neural Networks by Ahmet V Olgac and Bekir Karlik.
Step20: .negative_log_likelihood(y)
Step25: C. Implementation of Learning Rate Decay
Step27: D. Implementation of dropout
Step28: 2. Creating dropout classes
Step32: Note
Step34: After running the code for 50 epochs (237 minutes of computation) we get
Step35: 2. Testing the function on a single untrained LeNetConvPoolLayer
Step37: <img src="images/filters2.png" width = 400 >
|
<ASSISTANT_TASK:>
Python Code:
import cPickle
import gzip
import os
import sys
import timeit
import numpy
import theano
import theano.tensor as T
from theano.tensor.signal import downsample
from theano.tensor.nnet import conv
rng = numpy.random.RandomState(23455)
# instantiate 4D tensor for input
input = T.tensor4(name='input')
w_shp = (2, 3, 9, 9)
w_bound = numpy.sqrt(3 * 9 * 9)
W = theano.shared( numpy.asarray(
rng.uniform(
low=-1.0 / w_bound,
high=1.0 / w_bound,
size=w_shp),
dtype=input.dtype), name ='W')
# initialize shared variable for bias (1D tensor) with random values
# IMPORTANT: biases are usually initialized to zero. However in this
# particular application, we simply apply the convolutional layer to
# an image without learning the parameters. We therefore initialize
# them to random values to "simulate" learning.
b_shp = (2,)
b = theano.shared(numpy.asarray(
rng.uniform(low=-.5, high=.5, size=b_shp),
dtype=input.dtype), name ='b')
# build symbolic expression that computes the convolution of input with filters in w
conv_out = conv.conv2d(input, W)
# build symbolic expression to add bias and apply activation function, i.e. produce neural net layer output
output = T.nnet.sigmoid(conv_out + b.dimshuffle('x', 0, 'x', 'x'))
# create theano function to compute filtered images
f = theano.function([input], output)
import pylab
from PIL import Image
# open random image of dimensions 1936×2592
img = Image.open(open('images/profilepic4.jpg'))
img = numpy.asarray(img, dtype='float64') / 256. # divide by 256 to have RGB 0-1 scale and not 0 - 256
#put image in 4D tensor of shape (1, 3, height, width)
img_ = img.transpose(2, 0, 1).reshape(1, 3, 2592, 1936)
filtered_img = f(img_)
# plot original image and first and second components of output
pylab.subplot(1, 3, 1); pylab.axis('off'); pylab.imshow(img)
pylab.gray();
# recall that the convOp output (filtered image) is actually a "minibatch",
# of size 1 here, so we take index 0 in the first dimension:
pylab.subplot(1, 3, 2); pylab.axis('off'); pylab.imshow(filtered_img[0, 0, :, :])
pylab.subplot(1, 3, 3); pylab.axis('off'); pylab.imshow(filtered_img[0, 1, :, :])
pylab.show()
from theano.tensor.signal import downsample
input = T.dtensor4('input')
maxpool_shape = (2, 2)
pool_out = downsample.max_pool_2d(input, maxpool_shape, ignore_border=True)
g = theano.function([input],pool_out)
invals = numpy.random.RandomState(1).rand(3, 2, 5, 5)
print 'With ignore_border set to True:'
print 'invals[0, 0, :, :] =\n', invals[0, 0, :, :]
print 'output[0, 0, :, :] =\n', g(invals)[0, 0, :, :]
pool_out = downsample.max_pool_2d(input, maxpool_shape, ignore_border=False)
g = theano.function([input],pool_out)
print 'With ignore_border set to False:'
print 'invals[0, 0, :, :] =\n', invals[0, 0, :, :]
print 'output[0, 0, :, :] =\n', g(invals)[0, 0, :, :]
class LeNetConvPoolLayer(object):
Pool Layer of a convolutional network
def __init__(self, rng, input, filter_shape, image_shape, poolsize=(2, 2)):
Allocate a LeNetConvPoolLayer with shared variable internal parameters.
:type rng: numpy.random.RandomState
:param rng: a random number generator used to initialize weights
:type input: theano.tensor.dtensor4
:param input: symbolic image tensor, of shape image_shape
:type filter_shape: tuple or list of length 4
:param filter_shape: (number of filters, num input feature maps,
filter height, filter width)
:type image_shape: tuple or list of length 4
:param image_shape: (batch size, num input feature maps,
image height, image width)
:type poolsize: tuple or list of length 2
:param poolsize: the downsampling (pooling) factor (#rows, #cols)
assert image_shape[1] == filter_shape[1]
# assert just checks if the number of feature maps is consistent between filter shape and image_shape
self.input = input
# there are "num input feature maps * filter height * filter width"
# inputs to each hidden unit
# reminder: Weights are sampled randomly from a uniform distribution
# in the range [-1/fan-in, 1/fan-in], where fan-in is the number of inputs to a hidden unit
fan_in = numpy.prod(filter_shape[1:])
# each unit in the lower layer receives a gradient from:
# "num output feature maps * filter height * filter width" /
# pooling size
fan_out = (filter_shape[0] * numpy.prod(filter_shape[2:]) /
numpy.prod(poolsize))
# initialize weights with random weights
W_bound = numpy.sqrt(6. / (fan_in + fan_out))
self.W = theano.shared(
numpy.asarray(
rng.uniform(low=-W_bound, high=W_bound, size=filter_shape),
dtype=theano.config.floatX
),
borrow=True # see above the def of theano.shared for explanation of borrow
)
# the bias is a 1D tensor -- one bias per output feature map
b_values = numpy.zeros((filter_shape[0],), dtype=theano.config.floatX)
self.b = theano.shared(value=b_values, borrow=True)
# convolve input feature maps with filters
conv_out = conv.conv2d(
input=input,
filters=self.W,
filter_shape=filter_shape,
image_shape=image_shape
)
# downsample each feature map individually, using maxpooling
pooled_out = downsample.max_pool_2d(
input=conv_out,
ds=poolsize,
ignore_border=True
)
# add the bias term. Since the bias is a vector (1D array), we first
# reshape it to a tensor of shape (1, n_filters, 1, 1). Each bias will
# thus be broadcasted across mini-batches and feature map
# width & height
self.output = T.tanh(pooled_out + self.b.dimshuffle('x', 0, 'x', 'x'))
# store parameters of this layer
self.params = [self.W, self.b]
# keep track of model input
self.input = input
class HiddenLayer(object):
def __init__(self, rng, input, n_in, n_out, W=None, b=None,
activation=T.tanh):
Typical hidden layer of a MLP: units are fully-connected and have
sigmoidal activation function. Weight matrix W is of shape (n_in,n_out)
and the bias vector b is of shape (n_out,).
NOTE : The nonlinearity used here is tanh
Hidden unit activation is given by: tanh(dot(input,W) + b)
:type rng: numpy.random.RandomState
:param rng: a random number generator used to initialize weights
:type input: theano.tensor.dmatrix
:param input: a symbolic tensor of shape (n_examples, n_in)
:type n_in: int
:param n_in: dimensionality of input
:type n_out: int
:param n_out: number of hidden units
:type activation: theano.Op or function
:param activation: Non linearity to be applied in the hidden
layer
self.input = input
# `W` is initialized with `W_values` which is uniformely sampled
# from sqrt(-6./(n_in+n_hidden)) and sqrt(6./(n_in+n_hidden))
# for tanh activation function
# the output of uniform if converted using asarray to dtype
# theano.config.floatX so that the code is runable on GPU
# Note : optimal initialization of weights is dependent on the
# activation function used (among other things).
# For example, results presented in [Xavier10] suggest that you
# should use 4 times larger initial weights for sigmoid
# compared to tanh
# We have no info for other function, so we use the same as
# tanh.
if W is None:
W_values = numpy.asarray(
rng.uniform(
low=-numpy.sqrt(6. / (n_in + n_out)),
high=numpy.sqrt(6. / (n_in + n_out)),
size=(n_in, n_out)
),
dtype=theano.config.floatX
)
if activation == theano.tensor.nnet.sigmoid:
W_values *= 4
W = theano.shared(value=W_values, name='W', borrow=True)
if b is None:
b_values = numpy.zeros((n_out,), dtype=theano.config.floatX)
b = theano.shared(value=b_values, name='b', borrow=True)
self.W = W
self.b = b
lin_output = T.dot(input, self.W) + self.b
self.output = (
lin_output if activation is None
else activation(lin_output)
)
# parameters of the model
self.params = [self.W, self.b]
class LogisticRegression(object):
Multi-class Logistic Regression Class
The logistic regression is fully described by a weight matrix :math:`W`
and bias vector :math:`b`. Classification is done by projecting data
points onto a set of hyperplanes, the distance to which is used to
determine a class membership probability.
def __init__(self, input, n_in, n_out):
Initialize the parameters of the logistic regression
:type input: theano.tensor.TensorType
:param input: symbolic variable that describes the input of the
architecture (one minibatch)
:type n_in: int
:param n_in: number of input units, the dimension of the space in
which the datapoints lie
:type n_out: int
:param n_out: number of output units, the dimension of the space in
which the labels lie
# initialize with 0 the weights W as a matrix of shape (n_in, n_out)
self.W = theano.shared(
value=numpy.zeros(
(n_in, n_out),
dtype=theano.config.floatX
),
name='W',
borrow=True
)
# initialize the biases b as a vector of n_out 0s
self.b = theano.shared(
value=numpy.zeros(
(n_out,),
dtype=theano.config.floatX
),
name='b',
borrow=True
)
# symbolic expression for computing the matrix of class-membership
# probabilities
# Where:
# W is a matrix where column-k represent the separation hyperplane for
# class-k
# x is a matrix where row-j represents input training sample-j
# b is a vector where element-k represent the free parameter of
# hyperplane-k
self.p_y_given_x = T.nnet.softmax(T.dot(input, self.W) + self.b)
# symbolic description of how to compute prediction as class whose
# probability is maximal
self.y_pred = T.argmax(self.p_y_given_x, axis=1)
# parameters of the model
self.params = [self.W, self.b]
# keep track of model input
self.input = input
def negative_log_likelihood(self, y):
Return the mean of the negative log-likelihood of the prediction
of this model under a given target distribution.
.. math::
\frac{1}{|\mathcal{D}|} \mathcal{L} (\theta=\{W,b\}, \mathcal{D}) =
\frac{1}{|\mathcal{D}|} \sum_{i=0}^{|\mathcal{D}|}
\log(P(Y=y^{(i)}|x^{(i)}, W,b)) \\
\ell (\theta=\{W,b\}, \mathcal{D})
:type y: theano.tensor.TensorType
:param y: corresponds to a vector that gives for each example the
correct label
Note: we use the mean instead of the sum so that
the learning rate is less dependent on the batch size
# y.shape[0] is (symbolically) the number of rows in y, i.e.,
# number of examples (call it n) in the minibatch
# T.arange(y.shape[0]) is a symbolic vector which will contain
# [0,1,2,... n-1] T.log(self.p_y_given_x) is a matrix of
# Log-Probabilities (call it LP) with one row per example and
# one column per class LP[T.arange(y.shape[0]),y] is a vector
# v containing [LP[0,y[0]], LP[1,y[1]], LP[2,y[2]], ...,
# LP[n-1,y[n-1]]] and T.mean(LP[T.arange(y.shape[0]),y]) is
# the mean (across minibatch examples) of the elements in v,
# i.e., the mean log-likelihood across the minibatch.
return -T.mean(T.log(self.p_y_given_x)[T.arange(y.shape[0]), y])
def errors(self, y):
Return a float representing the number of errors in the minibatch
over the total number of examples of the minibatch ; zero one
loss over the size of the minibatch
:type y: theano.tensor.TensorType
:param y: corresponds to a vector that gives for each example the
correct label
# check if y has same dimension of y_pred
if y.ndim != self.y_pred.ndim:
raise TypeError(
'y should have the same shape as self.y_pred',
('y', y.type, 'y_pred', self.y_pred.type)
)
# check if y is of the correct datatype
if y.dtype.startswith('int'):
# the T.neq operator returns a vector of 0s and 1s, where 1
# represents a mistake in prediction
return T.mean(T.neq(self.y_pred, y))
else:
raise NotImplementedError()
def evaluate_lenet5(learning_rate=0.1, n_epochs=200,
dataset='mnist.pkl.gz',
nkerns=[20, 50], batch_size=500):
Demonstrates lenet on MNIST dataset
:type learning_rate: float
:param learning_rate: learning rate used (factor for the stochastic
gradient)
:type n_epochs: int
:param n_epochs: maximal number of epochs to run the optimizer
:type dataset: string
:param dataset: path to the dataset used for training /testing (MNIST here)
:type nkerns: list of ints
:param nkerns: number of kernels on each layer (so 20 convolutional filters, and then 50 activation units)
rng = numpy.random.RandomState(23455)
datasets = load_data(dataset)
train_set_x, train_set_y = datasets[0]
valid_set_x, valid_set_y = datasets[1]
test_set_x, test_set_y = datasets[2]
# compute number of minibatches for training, validation and testing
n_train_batches = train_set_x.get_value(borrow=True).shape[0]
n_valid_batches = valid_set_x.get_value(borrow=True).shape[0]
n_test_batches = test_set_x.get_value(borrow=True).shape[0]
n_train_batches /= batch_size
n_valid_batches /= batch_size
n_test_batches /= batch_size
# allocate symbolic variables for the data
index = T.lscalar() # index to a [mini]batch
# start-snippet-1
x = T.matrix('x') # the data is presented as rasterized images
y = T.ivector('y') # the labels are presented as 1D vector of
# [int] labels
######################
# BUILD ACTUAL MODEL #
######################
print '... building the model'
# Reshape matrix of rasterized images of shape (batch_size, 28 * 28)
# to a 4D tensor, compatible with our LeNetConvPoolLayer
# (28, 28) is the size of MNIST images.
layer0_input = x.reshape((batch_size, 1, 28, 28))
# Construct the first convolutional pooling layer:
# filtering reduces the image size to (28-5+1 , 28-5+1) = (24, 24)
# maxpooling reduces this further to (24/2, 24/2) = (12, 12)
# 4D output tensor is thus of shape (batch_size, nkerns[0], 12, 12)
layer0 = LeNetConvPoolLayer(
rng,
input=layer0_input,
image_shape=(batch_size, 1, 28, 28),
filter_shape=(nkerns[0], 1, 5, 5),
poolsize=(2, 2)
)
'''
Reminder of LeNetConvPoolLayer input parameters and types
:type rng: numpy.random.RandomState
:param rng: a random number generator used to initialize weights
:type input: theano.tensor.dtensor4
:param input: symbolic image tensor, of shape image_shape
:type filter_shape: tuple or list of length 4
:param filter_shape: (number of filters, num input feature maps,
filter height, filter width)
:type image_shape: tuple or list of length 4
:param image_shape: (batch size, num input feature maps,
image height, image width)
:type poolsize: tuple or list of length 2
:param poolsize: the downsampling (pooling) factor (#rows, #cols)
'''
# Construct the second convolutional pooling layer
# filtering reduces the image size to (12-5+1, 12-5+1) = (8, 8)
# maxpooling reduces this further to (8/2, 8/2) = (4, 4)
# 4D output tensor is thus of shape (batch_size, nkerns[1], 4, 4)
layer1 = LeNetConvPoolLayer(
rng,
input=layer0.output,
image_shape=(batch_size, nkerns[0], 12, 12),
filter_shape=(nkerns[1], nkerns[0], 5, 5),
poolsize=(2, 2)
)
# the HiddenLayer being fully-connected, it operates on 2D matrices of
# shape (batch_size, num_pixels) (i.e matrix of rasterized images).
# This will generate a matrix of shape (batch_size, nkerns[1] * 4 * 4),
# or (500, 50 * 4 * 4) = (500, 800) with the default values.
layer2_input = layer1.output.flatten(2)
# construct a fully-connected sigmoidal layer
layer2 = HiddenLayer(
rng,
input=layer2_input,
n_in=nkerns[1] * 4 * 4,
n_out=500,
activation=T.tanh
)
# classify the values of the fully-connected sigmoidal layer
layer3 = LogisticRegression(input=layer2.output, n_in=500, n_out=10)
# the cost we minimize during training is the NLL of the model
cost = layer3.negative_log_likelihood(y)
# create a function to compute the mistakes that are made by the model
test_model = theano.function(
[index],
layer3.errors(y),
givens={
x: test_set_x[index * batch_size: (index + 1) * batch_size],
y: test_set_y[index * batch_size: (index + 1) * batch_size]
}
)
validate_model = theano.function(
[index],
layer3.errors(y),
givens={
x: valid_set_x[index * batch_size: (index + 1) * batch_size],
y: valid_set_y[index * batch_size: (index + 1) * batch_size]
}
)
# create a list of all model parameters to be fit by gradient descent
params = layer3.params + layer2.params + layer1.params + layer0.params
# create a list of gradients for all model parameters
grads = T.grad(cost, params)
# train_model is a function that updates the model parameters by
# SGD Since this model has many parameters, it would be tedious to
# manually create an update rule for each model parameter. We thus
# create the updates list by automatically looping over all
# (params[i], grads[i]) pairs.
updates = [
(param_i, param_i - learning_rate * grad_i)
for param_i, grad_i in zip(params, grads)
]
train_model = theano.function(
[index],
cost,
updates=updates,
givens={
x: train_set_x[index * batch_size: (index + 1) * batch_size],
y: train_set_y[index * batch_size: (index + 1) * batch_size]
}
)
# end-snippet-1
###############
# TRAIN MODEL #
###############
print '... training'
# early-stopping parameters
patience = 10000 # look as this many examples regardless
patience_increase = 2 # wait this much longer when a new best is
# found
improvement_threshold = 0.995 # a relative improvement of this much is
# considered significant
validation_frequency = min(n_train_batches, patience / 2)
# go through this many
# minibatche before checking the network
# on the validation set; in this case we
# check every epoch
best_validation_loss = numpy.inf
best_iter = 0
test_score = 0.
start_time = timeit.default_timer()
epoch = 0
done_looping = False
while (epoch < n_epochs) and (not done_looping):
epoch = epoch + 1
for minibatch_index in xrange(n_train_batches):
# This function is very similar to range(), but returns an xrange object instead of a list.
# This is an opaque sequence type which yields the same values as the corresponding list,
# without actually storing them all simultaneously. The advantage of xrange() over range()
# is minimal (since xrange() still has to create the values when asked for them) except when a
# very large range is used on a memory-starved machine or when all of the range’s elements
# are never used (such as when the loop is usually terminated with break).
# For more information on xrange objects, see XRange Type and Sequence Types — str,
# unicode, list, tuple, bytearray, buffer, xrange
iter = (epoch - 1) * n_train_batches + minibatch_index
# for epoch = 1 (first value while entering the "while" loop; iter = 0 * n_train_batches + minibtach_index
# so iter = 0. This will call train_model over the index of train_set_x[0:500] and train_set_y[0:500].
# the (epoch -1) * n_train_batches keep track of the iteration number while looping over and over on
# the train set.
if iter % 100 == 0:
print 'training @ iter = ', iter
cost_ij = train_model(minibatch_index)
# Only at this moment all the symbolic expression that were called during "Building the model" are
# called with real values replacing the symbolic tensors. That is how theano works.
if (iter + 1) % validation_frequency == 0:
# compute zero-one loss on validation set
validation_losses = [validate_model(i) for i
in xrange(n_valid_batches)]
this_validation_loss = numpy.mean(validation_losses)
print('epoch %i, minibatch %i/%i, validation error %f %%' %
(epoch, minibatch_index + 1, n_train_batches,
this_validation_loss * 100.))
# if we got the best validation score until now
if this_validation_loss < best_validation_loss:
#improve patience if loss improvement is good enough
if this_validation_loss < best_validation_loss * \
improvement_threshold:
patience = max(patience, iter * patience_increase)
# save best validation score and iteration number
best_validation_loss = this_validation_loss
best_iter = iter
# test it on the test set
test_losses = [
test_model(i)
for i in xrange(n_test_batches)
]
test_score = numpy.mean(test_losses)
print((' epoch %i, minibatch %i/%i, test error of '
'best model %f %%') %
(epoch, minibatch_index + 1, n_train_batches,
test_score * 100.))
if patience <= iter:
done_looping = True
break
end_time = timeit.default_timer()
print('Optimization complete.')
print('Best validation score of %f %% obtained at iteration %i, '
'with test performance %f %%' %
(best_validation_loss * 100., best_iter + 1, test_score * 100.))
print >> sys.stderr, ('The code for file ' +
os.path.split(__file__)[1] +
' ran for %.2fm' % ((end_time - start_time) / 60.))
def evaluate_lenet5_ldr(learning_rate=0.1, learning_rate_decay = 0.98, n_epochs=200,
dataset='mnist.pkl.gz',
nkerns=[20, 50], batch_size=500):
:type learning_rate_decay: float
:param learning_rate_decay: learning rate decay used
rng = numpy.random.RandomState(23455)
...
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#
# Theano function to decay the learning rate; it is kept separate from the
# training function because we only want to apply the decay once per epoch
# instead of after each minibatch. Note that the update below requires
# learning_rate to be a theano shared variable (e.g. created with
# theano.shared(numpy.asarray(learning_rate, dtype=theano.config.floatX))
# in the elided setup code above), not a plain Python float.
decay_learning_rate = theano.function(inputs=[], outputs=learning_rate,
updates={learning_rate: learning_rate * learning_rate_decay})
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!#
###############
# TRAIN MODEL #
###############
...
while (epoch < n_epochs) and (not done_looping):
epoch = epoch + 1
for minibatch_index in xrange(n_train_batches):
iter = (epoch - 1) * n_train_batches + minibatch_index
if iter % 100 == 0:
print 'training @ iter = ', iter
cost_ij = train_model(minibatch_index)
if (iter + 1) % validation_frequency == 0:
# compute zero-one loss on validation set
validation_losses = [validate_model(i) for i
in xrange(n_valid_batches)]
this_validation_loss = numpy.mean(validation_losses)
print('epoch %i, minibatch %i/%i, validation error %f %%' %
(epoch, minibatch_index + 1, n_train_batches,
this_validation_loss * 100.))
# if we got the best validation score until now
if this_validation_loss < best_validation_loss:
#improve patience if loss improvement is good enough
if this_validation_loss < best_validation_loss * \
improvement_threshold:
patience = max(patience, iter * patience_increase)
# save best validation score and iteration number
best_validation_loss = this_validation_loss
best_iter = iter
# test it on the test set
test_losses = [
test_model(i)
for i in xrange(n_test_batches)
]
test_score = numpy.mean(test_losses)
print((' epoch %i, minibatch %i/%i, test error of '
'best model %f %%') %
(epoch, minibatch_index + 1, n_train_batches,
test_score * 100.))
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
new_learning_rate = decay_learning_rate()
#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
if patience <= iter:
done_looping = True
break
...
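# Added illustration (not part of the original tutorial code): with the decay above
# applied once per epoch, the effective learning rate follows lr_n = lr_0 * decay**n.
# A quick look at that schedule, assuming the defaults of this function
# (lr_0 = 0.1, decay = 0.98):
for n_epoch in (0, 10, 50, 100, 200):
    print 'epoch %3d: learning rate = %.5f' % (n_epoch, 0.1 * 0.98 ** n_epoch)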
def _dropout_from_layer(rng, layer, p):
    p is the probability of dropping a unit
srng = theano.tensor.shared_randomstreams.RandomStreams(
rng.randint(999999))
# p=1-p because 1's indicate keep and p is probability of dropping
mask = srng.binomial(n=1, p=1-p, size=layer.shape)
# The cast is important because
# int * float32 = float64 which pulls things off the gpu
output = layer * T.cast(mask, theano.config.floatX)
return output
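# Added illustration (not part of the original code): the same masking idea in plain
# NumPy, to make the helper above concrete. With p = 0.5, roughly half of the
# activations are zeroed and the rest pass through unchanged.
example_activations = numpy.ones((2, 4))
example_mask = numpy.random.binomial(n=1, p=1 - 0.5, size=example_activations.shape)
print example_activations * example_mask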
class DropoutHiddenLayer(HiddenLayer):
def __init__(self, rng, input, n_in, n_out,
activation, dropout_rate, W=None, b=None):
super(DropoutHiddenLayer, self).__init__(
rng=rng, input=input, n_in=n_in, n_out=n_out, W=W, b=b,
activation=activation)
self.output = _dropout_from_layer(rng, self.output, p=dropout_rate)
class DropoutLeNetConvPoolLayer(LeNetConvPoolLayer):
def __init__(self, rng, input, filter_shape, image_shape, poolsize,
dropout_rate, W=None, b=None):
super(DropoutLeNetConvPoolLayer, self).__init__(
rng=rng, input=input, filter_shape=filter_shape, image_shape=image_shape,
poolsize=poolsize, W=W, b=b)
self.output = _dropout_from_layer(rng, self.output, p=dropout_rate)
def evaluate_lenet5(initial_learning_rate=0.1, learning_rate_decay = 1,
dropout_rates = [0.2, 0.2, 0.2, 0.5], n_epochs=200,
dataset='mnist.pkl.gz',
nkerns=[20, 50], batch_size=500):
:type dropout_rates: list of float
:param dropout_rates: dropout rate used for each layer (input layer,
1st filtered layer, 2nd filtered layer, fully connected layer)
...
######################
# BUILD ACTUAL MODEL #
######################
print '... building the model'
# Reshape matrix of rasterized images of shape (batch_size, 28 * 28)
# to a 4D tensor, compatible with our LeNetConvPoolLayer
# (28, 28) is the size of MNIST images.
layer0_input = x.reshape((batch_size, 1, 28, 28))
# Dropping out pixels from original image randomly, with a probability of dropping
    # low enough not to drop too much information (20% was found to be ideal)
layer0_input_dropout = _dropout_from_layer(rng, layer0_input, dropout_rates[0])
# Construct the first convolutional pooling layer:
# filtering reduces the image size to (28-5+1 , 28-5+1) = (24, 24)
# maxpooling reduces this further to (24/2, 24/2) = (12, 12)
# 4D output tensor is thus of shape (batch_size, nkerns[0], 12, 12)
layer0_dropout = DropoutLeNetConvPoolLayer(
rng,
input=layer0_input_dropout,
image_shape=(batch_size, 1, 28, 28),
filter_shape=(nkerns[0], 1, 5, 5),
poolsize=(2, 2),
dropout_rate= dropout_rates[1]
)
# Creating in parallel a normal LeNetConvPoolLayer that share the same
# W and b as the dropout layer, with W scaled with p.
layer0 = LeNetConvPoolLayer(
rng,
input=layer0_input,
image_shape=(batch_size, 1, 28, 28),
filter_shape=(nkerns[0], 1, 5, 5),
poolsize=(2, 2),
W=layer0_dropout.W * (1 - dropout_rates[0]),
b=layer0_dropout.b
)
# Construct the second convolutional pooling layer
# filtering reduces the image size to (12-5+1, 12-5+1) = (8, 8)
# maxpooling reduces this further to (8/2, 8/2) = (4, 4)
# 4D output tensor is thus of shape (batch_size, nkerns[1], 4, 4)
layer1_dropout = DropoutLeNetConvPoolLayer(
rng,
input=layer0_dropout.output,
image_shape=(batch_size, nkerns[0], 12, 12),
filter_shape=(nkerns[1], nkerns[0], 5, 5),
poolsize=(2, 2),
dropout_rate = dropout_rates[2]
)
layer1 = LeNetConvPoolLayer(
rng,
input=layer0.output,
image_shape=(batch_size, nkerns[0], 12, 12),
filter_shape=(nkerns[1], nkerns[0], 5, 5),
poolsize=(2, 2),
W=layer1_dropout.W * (1 - dropout_rates[1]),
b=layer1_dropout.b
)
# the HiddenLayer being fully-connected, it operates on 2D matrices of
# shape (batch_size, num_pixels) (i.e matrix of rasterized images).
# This will generate a matrix of shape (batch_size, nkerns[1] * 4 * 4),
# or (500, 50 * 4 * 4) = (500, 800) with the default values.
layer2_dropout_input = layer1_dropout.output.flatten(2)
layer2_input = layer1.output.flatten(2)
# construct a fully-connected sigmoidal layer
layer2_dropout = DropoutHiddenLayer(
rng,
input=layer2_dropout_input,
n_in=nkerns[1] * 4 * 4,
n_out=500,
activation=T.tanh,
dropout_rate = dropout_rates[3]
)
layer2 = HiddenLayer(
rng,
input=layer2_input,
n_in=nkerns[1] * 4 * 4,
n_out=500,
activation=T.tanh,
W=layer2_dropout.W * (1 - dropout_rates[2]),
b=layer2_dropout.b
)
# classify the values of the fully-connected sigmoidal layer
layer3_dropout = LogisticRegression(
input = layer2_dropout.output,
n_in = 500, n_out = 10)
layer3 = LogisticRegression(
input=layer2.output,
n_in=500, n_out=10,
W=layer3_dropout.W * (1 - dropout_rates[-1]),
b=layer3_dropout.b
)
# the cost we minimize during training is the NLL of the model
cost = layer3.negative_log_likelihood(y)
dropout_cost = layer3_dropout.negative_log_likelihood(y)
# create a function to compute the mistakes that are made by the model
test_model = theano.function(
[index],
layer3.errors(y),
givens={
x: test_set_x[index * batch_size: (index + 1) * batch_size],
y: test_set_y[index * batch_size: (index + 1) * batch_size]
}
)
validate_model = theano.function(
[index],
layer3.errors(y),
givens={
x: valid_set_x[index * batch_size: (index + 1) * batch_size],
y: valid_set_y[index * batch_size: (index + 1) * batch_size]
}
)
# create a list of all model parameters to be fit by gradient descent
params = layer3_dropout.params + layer2_dropout.params + layer1_dropout.params + layer0_dropout.params
# create a list of gradients for all model parameters
grads = T.grad(dropout_cost, params)
# train_model is a function that updates the model parameters by SGD
updates = [
(param_i, param_i - learning_rate * grad_i)
for param_i, grad_i in zip(params, grads)
]
train_model = theano.function(
[index],
dropout_cost,
updates=updates,
givens={
x: train_set_x[index * batch_size: (index + 1) * batch_size],
y: train_set_y[index * batch_size: (index + 1) * batch_size]
}
)
...
import pylab
from PIL import Image
def display_filter(W, n_cols = 5):
:type W: numpy_nd_array
:param W: parameter W of a convolutional + max pooling layer
    :type n_cols: int
    :param n_cols: number of columns in the grid used to display the filters
W_shape = W.shape
n_filters = W_shape[0]
#param filter_shape: (number of filters, num input feature maps, filter height, filter width)
filter_height = W_shape[2]
filter_width = W_shape[3]
    n_lines = int(numpy.ceil(n_filters / float(n_cols)))  # avoid Python 2 integer division
for n in range(n_filters):
Wn = W[n,0,:,:]
Wn = Wn / Wn.max() # Scaling W to get 0-1 gray scale
        pylab.subplot(n_lines, n_cols, n + 1); pylab.axis('off'); pylab.imshow(Wn, cmap=pylab.gray())
pylab.show()
rng = numpy.random.RandomState(1234)
img = Image.open(open('images/profilepic4.jpg'))
img = numpy.asarray(img, dtype='float64') / 256. # divide by 256 to have RGB 0-1 scale and not 0 - 256
img_ = img.transpose(2, 0, 1).reshape(1, 3, 2592, 1936)
input = img_
filter_shape = [20,3,12,12]
image_shape = [1,3,2592,1936]
poolsize = (2, 2)
layer_test = LeNetConvPoolLayer(rng, input, filter_shape, image_shape, poolsize)
f = theano.function([], layer_test.params)
W = f[0]
display_filter(W)
def evaluate_lenet5(initial_learning_rate=0.1, learning_rate_decay = 1,
dropout_rates = [0.2, 0.2, 0.2, 0.5], n_epochs=200,
dataset='mnist.pkl.gz', display_filters = True,
nkerns=[20, 50], batch_size=500):
:type display_filters: Bool
:param display_filters: True if we want to display the learned filters after training
we skip to the very end of the code, after training is done
if display_filters:
# Retrieving the filters from first and second layer
first_convlayer_params = theano.function([], layer0_dropout.params)
second_convlayer_params = theano.function([], layer1_dropout.params)
W0 = first_convlayer_params[0]
W1 = second_convlayer_params[0]
# Display filters from first layer (20 filters)
display_filter(W0)
# Display filters from second layer (50 filters)
display_filter(W1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data preprocessing
Step2: Compute covariance matrices, fit and apply spatial filter.
Step3: Plot source space activity
Step4: Now let's plot this on a glass brain, which will automatically transform the
Step5: Finally let's get another view, this time plotting again a 'stat_map'
|
<ASSISTANT_TASK:>
Python Code:
# Author: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD (3-clause)
import mne
from mne.datasets import sample
from mne.beamformer import make_lcmv, apply_lcmv
print(__doc__)
data_path = sample.data_path()
subjects_dir = data_path + '/subjects'
raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-vol-7-fwd.fif'
# Get epochs
event_id, tmin, tmax = [1, 2], -0.2, 0.5
# Read forward model
forward = mne.read_forward_solution(fname_fwd)
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.info['bads'] = ['MEG 2443', 'EEG 053'] # 2 bads channels
events = mne.find_events(raw)
# Pick the channels of interest
raw.pick(['meg', 'eog'])
# Read epochs
proj = False # already applied
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
baseline=(None, 0), preload=True, proj=proj,
reject=dict(grad=4000e-13, mag=4e-12, eog=150e-6))
evoked = epochs.average()
# Visualize sensor space data
evoked.plot_joint()
# Read regularized noise covariance and compute regularized data covariance
noise_cov = mne.compute_covariance(epochs, tmin=tmin, tmax=0, method='shrunk',
rank=None)
data_cov = mne.compute_covariance(epochs, tmin=0.04, tmax=0.15,
method='shrunk', rank=None)
# Compute weights of free orientation (vector) beamformer with weight
# normalization (neural activity index, NAI). Providing a noise covariance
# matrix enables whitening of the data and forward solution. Source orientation
# is optimized by setting pick_ori to 'max-power'.
# weight_norm can also be set to 'unit-noise-gain'. Source orientation can also
# be 'normal' (but only when using a surface-based source space) or None,
# which computes a vector beamfomer. Note, however, that not all combinations
# of orientation selection and weight normalization are implemented yet.
filters = make_lcmv(evoked.info, forward, data_cov, reg=0.05,
noise_cov=noise_cov, pick_ori='max-power',
weight_norm='nai', rank=None)
print(filters)
# You can save these with:
# filters.save('filters-lcmv.h5')
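# Added illustration (not from the original example): other orientation/normalization
# combinations mentioned above can be requested the same way. This builds a
# unit-noise-gain variant purely for comparison; the parameter choice is an example.
filters_ung = make_lcmv(evoked.info, forward, data_cov, reg=0.05,
                        noise_cov=noise_cov, pick_ori='max-power',
                        weight_norm='unit-noise-gain', rank=None)
print(filters_ung)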
# Apply this spatial filter to the evoked data.
stc = apply_lcmv(evoked, filters, max_ori_out='signed')
# You can save result in stc files with:
# stc.save('lcmv-vol')
lims = [0.3, 0.6, 0.9]
stc.plot(
src=forward['src'], subject='sample', subjects_dir=subjects_dir,
clim=dict(kind='value', pos_lims=lims), mode='stat_map',
initial_time=0.1, verbose=True)
stc.plot(
src=forward['src'], subject='sample', subjects_dir=subjects_dir,
mode='glass_brain', clim=dict(kind='value', lims=lims),
initial_time=0.1, verbose=True)
morph = mne.compute_source_morph(
forward['src'], 'sample', 'fsaverage', subjects_dir=subjects_dir,
zooms=7, verbose=True)
stc.copy().crop(0.05, 0.18).plot(
src=morph, subject='fsaverage', subjects_dir=subjects_dir,
mode='stat_map', clim=dict(kind='value', pos_lims=lims),
initial_time=0.1, verbose=True)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's restrict the number of conditions to speed up computation
Step2: Define stimulus - trigger mapping
Step3: Let's make the event_id dictionary
Step4: Read MEG data
Step5: Epoch data
Step6: Let's plot some conditions
Step7: Representational Similarity Analysis (RSA) is a neuroimaging-specific
Step8: Compute confusion matrix using ROC-AUC
Step9: Plot
Step10: Confusion matrix related to mental representations have been historically
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Jean-Remi King <jeanremi.king@gmail.com>
# Jaakko Leppakangas <jaeilepp@student.jyu.fi>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import os.path as op
import numpy as np
from pandas import read_csv
import matplotlib.pyplot as plt
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.manifold import MDS
import mne
from mne.io import read_raw_fif, concatenate_raws
from mne.datasets import visual_92_categories
print(__doc__)
data_path = visual_92_categories.data_path()
# Define stimulus - trigger mapping
fname = op.join(data_path, 'visual_stimuli.csv')
conds = read_csv(fname)
print(conds.head(5))
max_trigger = 24
conds = conds[:max_trigger] # take only the first 24 rows
conditions = []
for c in conds.values:
cond_tags = list(c[:2])
cond_tags += [('not-' if i == 0 else '') + conds.columns[k]
for k, i in enumerate(c[2:], 2)]
conditions.append('/'.join(map(str, cond_tags)))
print(conditions[:10])
event_id = dict(zip(conditions, conds.trigger + 1))
event_id['0/human bodypart/human/not-face/animal/natural']
n_runs = 4 # 4 for full data (use less to speed up computations)
fname = op.join(data_path, 'sample_subject_%i_tsss_mc.fif')
raws = [read_raw_fif(fname % block) for block in range(n_runs)]
raw = concatenate_raws(raws)
events = mne.find_events(raw, min_duration=.002)
events = events[events[:, 2] <= max_trigger]
mne.viz.plot_events(events, sfreq=raw.info['sfreq'])
picks = mne.pick_types(raw.info, meg=True)
epochs = mne.Epochs(raw, events=events, event_id=event_id, baseline=None,
picks=picks, tmin=-.1, tmax=.500, preload=True)
epochs['face'].average().plot()
epochs['not-face'].average().plot()
# Classify using the average signal in the window 50ms to 300ms
# to focus the classifier on the time interval with best SNR.
clf = make_pipeline(StandardScaler(),
LogisticRegression(C=1, solver='lbfgs'))
X = epochs.copy().crop(0.05, 0.3).get_data().mean(axis=2)
y = epochs.events[:, 2]
classes = set(y)
cv = StratifiedKFold(n_splits=5, random_state=0, shuffle=True)
# Compute confusion matrix for each cross-validation fold
y_pred = np.zeros((len(y), len(classes)))
for train, test in cv.split(X, y):
# Fit
clf.fit(X[train], y[train])
# Probabilistic prediction (necessary for ROC-AUC scoring metric)
y_pred[test] = clf.predict_proba(X[test])
confusion = np.zeros((len(classes), len(classes)))
for ii, train_class in enumerate(classes):
for jj in range(ii, len(classes)):
confusion[ii, jj] = roc_auc_score(y == train_class, y_pred[:, jj])
confusion[jj, ii] = confusion[ii, jj]
labels = [''] * 5 + ['face'] + [''] * 11 + ['bodypart'] + [''] * 6
fig, ax = plt.subplots(1)
im = ax.matshow(confusion, cmap='RdBu_r', clim=[0.3, 0.7])
ax.set_yticks(range(len(classes)))
ax.set_yticklabels(labels)
ax.set_xticks(range(len(classes)))
ax.set_xticklabels(labels, rotation=40, ha='left')
ax.axhline(11.5, color='k')
ax.axvline(11.5, color='k')
plt.colorbar(im)
plt.tight_layout()
plt.show()
fig, ax = plt.subplots(1)
mds = MDS(2, random_state=0, dissimilarity='precomputed')
chance = 0.5
summary = mds.fit_transform(chance - confusion)
cmap = plt.get_cmap('rainbow')
colors = ['r', 'b']
names = list(conds['condition'].values)
for color, name in zip(colors, set(names)):
sel = np.where([this_name == name for this_name in names])[0]
size = 500 if name == 'human face' else 100
ax.scatter(summary[sel, 0], summary[sel, 1], s=size,
facecolors=color, label=name, edgecolors='k')
ax.axis('off')
ax.legend(loc='lower right', scatterpoints=1, ncol=2)
plt.tight_layout()
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: For a test case, we generate 10 random points and observations, where the
Step2: Using the circumcenter and circumcircle radius information from
Step3: What?....the circle from triangle 8 looks pretty darn close. Why isn't
Step4: Lets do a manual check of the above interpolation value for grid 0 (southernmost grid)
Step6: Draw the natural neighbor triangles and their circumcenters. Also plot a Voronoi diagram
Step7: Put all of the generated polygon areas and their affiliated values in arrays.
Step8: For each polygon area, calculate its percent of total area.
Step9: Multiply the percent of total area by the respective values.
Step10: The sum of this array is the interpolation value!
Step11: The values are slightly different due to truncating the area values in
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
from scipy.spatial import ConvexHull, Delaunay, delaunay_plot_2d, Voronoi, voronoi_plot_2d
from scipy.spatial.distance import euclidean
from metpy.gridding import polygons, triangles
from metpy.gridding.interpolation import nn_point
np.random.seed(100)
pts = np.random.randint(0, 100, (10, 2))
xp = pts[:, 0]
yp = pts[:, 1]
zp = (pts[:, 0] * pts[:, 0]) / 1000
tri = Delaunay(pts)
fig, ax = plt.subplots(1, 1, figsize=(15, 10))
delaunay_plot_2d(tri, ax=ax)
for i, zval in enumerate(zp):
ax.annotate('{} F'.format(zval), xy=(pts[i, 0] + 2, pts[i, 1]))
sim_gridx = [30., 60.]
sim_gridy = [30., 60.]
ax.plot(sim_gridx, sim_gridy, '+', markersize=10)
ax.set_aspect('equal', 'datalim')
ax.set_title('Triangulation of observations and test grid cell '
'natural neighbor interpolation values')
members, tri_info = triangles.find_natural_neighbors(tri, list(zip(sim_gridx, sim_gridy)))
val = nn_point(xp, yp, zp, (sim_gridx[0], sim_gridy[0]), tri, members[0], tri_info)
ax.annotate('grid 0: {:.3f}'.format(val), xy=(sim_gridx[0] + 2, sim_gridy[0]))
val = nn_point(xp, yp, zp, (sim_gridx[1], sim_gridy[1]), tri, members[1], tri_info)
ax.annotate('grid 1: {:.3f}'.format(val), xy=(sim_gridx[1] + 2, sim_gridy[1]))
def draw_circle(ax, x, y, r, m, label):
th = np.linspace(0, 2 * np.pi, 100)
nx = x + r * np.cos(th)
ny = y + r * np.sin(th)
ax.plot(nx, ny, m, label=label)
members, tri_info = triangles.find_natural_neighbors(tri, list(zip(sim_gridx, sim_gridy)))
fig, ax = plt.subplots(1, 1, figsize=(15, 10))
delaunay_plot_2d(tri, ax=ax)
ax.plot(sim_gridx, sim_gridy, 'ks', markersize=10)
for i, info in tri_info.items():
x_t = info['cc'][0]
y_t = info['cc'][1]
if i in members[1] and i in members[0]:
draw_circle(ax, x_t, y_t, info['r'], 'm-', str(i) + ': grid 1 & 2')
ax.annotate(str(i), xy=(x_t, y_t), fontsize=15)
elif i in members[0]:
draw_circle(ax, x_t, y_t, info['r'], 'r-', str(i) + ': grid 0')
ax.annotate(str(i), xy=(x_t, y_t), fontsize=15)
elif i in members[1]:
draw_circle(ax, x_t, y_t, info['r'], 'b-', str(i) + ': grid 1')
ax.annotate(str(i), xy=(x_t, y_t), fontsize=15)
else:
draw_circle(ax, x_t, y_t, info['r'], 'k:', str(i) + ': no match')
ax.annotate(str(i), xy=(x_t, y_t), fontsize=9)
ax.set_aspect('equal', 'datalim')
ax.legend()
x_t, y_t = tri_info[8]['cc']
r = tri_info[8]['r']
print('Distance between grid0 and Triangle 8 circumcenter:',
euclidean([x_t, y_t], [sim_gridx[0], sim_gridy[0]]))
print('Triangle 8 circumradius:', r)
cc = np.array([tri_info[m]['cc'] for m in members[0]])
r = np.array([tri_info[m]['r'] for m in members[0]])
print('circumcenters:\n', cc)
print('radii\n', r)
vor = Voronoi(list(zip(xp, yp)))
fig, ax = plt.subplots(1, 1, figsize=(15, 10))
voronoi_plot_2d(vor, ax=ax)
nn_ind = np.array([0, 5, 7, 8])
z_0 = zp[nn_ind]
x_0 = xp[nn_ind]
y_0 = yp[nn_ind]
for x, y, z in zip(x_0, y_0, z_0):
ax.annotate('{}, {}: {:.3f} F'.format(x, y, z), xy=(x, y))
ax.plot(sim_gridx[0], sim_gridy[0], 'k+', markersize=10)
ax.annotate('{}, {}'.format(sim_gridx[0], sim_gridy[0]), xy=(sim_gridx[0] + 2, sim_gridy[0]))
ax.plot(cc[:, 0], cc[:, 1], 'ks', markersize=15, fillstyle='none',
label='natural neighbor\ncircumcenters')
for center in cc:
ax.annotate('{:.3f}, {:.3f}'.format(center[0], center[1]),
xy=(center[0] + 1, center[1] + 1))
tris = tri.points[tri.simplices[members[0]]]
for triangle in tris:
x = [triangle[0, 0], triangle[1, 0], triangle[2, 0], triangle[0, 0]]
y = [triangle[0, 1], triangle[1, 1], triangle[2, 1], triangle[0, 1]]
ax.plot(x, y, ':', linewidth=2)
ax.legend()
ax.set_aspect('equal', 'datalim')
def draw_polygon_with_info(ax, polygon, off_x=0, off_y=0):
Draw one of the natural neighbor polygons with some information.
pts = np.array(polygon)[ConvexHull(polygon).vertices]
for i, pt in enumerate(pts):
ax.plot([pt[0], pts[(i + 1) % len(pts)][0]],
[pt[1], pts[(i + 1) % len(pts)][1]], 'k-')
avex, avey = np.mean(pts, axis=0)
ax.annotate('area: {:.3f}'.format(polygons.area(pts)), xy=(avex + off_x, avey + off_y),
fontsize=12)
cc1 = triangles.circumcenter((53, 66), (15, 60), (30, 30))
cc2 = triangles.circumcenter((34, 24), (53, 66), (30, 30))
draw_polygon_with_info(ax, [cc[0], cc1, cc2])
cc1 = triangles.circumcenter((53, 66), (15, 60), (30, 30))
cc2 = triangles.circumcenter((15, 60), (8, 24), (30, 30))
draw_polygon_with_info(ax, [cc[0], cc[1], cc1, cc2], off_x=-9, off_y=3)
cc1 = triangles.circumcenter((8, 24), (34, 24), (30, 30))
cc2 = triangles.circumcenter((15, 60), (8, 24), (30, 30))
draw_polygon_with_info(ax, [cc[1], cc1, cc2], off_x=-15)
cc1 = triangles.circumcenter((8, 24), (34, 24), (30, 30))
cc2 = triangles.circumcenter((34, 24), (53, 66), (30, 30))
draw_polygon_with_info(ax, [cc[0], cc[1], cc1, cc2])
areas = np.array([60.434, 448.296, 25.916, 70.647])
values = np.array([0.064, 1.156, 2.809, 0.225])
total_area = np.sum(areas)
print(total_area)
proportions = areas / total_area
print(proportions)
contributions = proportions * values
print(contributions)
interpolation_value = np.sum(contributions)
function_output = nn_point(xp, yp, zp, (sim_gridx[0], sim_gridy[0]), tri, members[0], tri_info)
print(interpolation_value, function_output)
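# Added check (illustrative): the hand computation and nn_point agree up to the
# rounding of the polygon areas listed above, hence the loose tolerance.
print(np.isclose(interpolation_value, function_output, atol=1e-2))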
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next, we instantiate a main compiler engine using the IBM Q back-end and the predefined compiler engines which take care of the qubit placement, translation of operations, etc.
Step2: If use_hardware is set to False, it will use the IBM Q simulator instead. num_runs specifies the number of samples to collect for statistics, verbose=True would output additional information which may be helpful for debugging, and the device parameter lets users choose between the two devices ("ibmqx4" and "ibmqx5").
Step3: Retrieving a timed-out execution
Step4: Entangling more qubits
Step5: and then re-run the example from before via run_entangle(eng, num_qubits). If an execution times out, it can also be retrieved at a later point by providing the additional retrieve_execution="execution_id" parameter to the IBMBackend (but this time with device='ibmqx5').
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import projectq.setups.ibm
from projectq.backends import IBMBackend
from projectq.ops import Measure, Entangle, All
from projectq import MainEngine
eng = MainEngine(IBMBackend(use_hardware=True, num_runs=1024,
verbose=False, device='ibmqx4'),
engine_list=projectq.setups.ibm.get_engine_list())
def run_entangle(eng, num_qubits):
# allocate a quantum register of 5 qubits
qureg = eng.allocate_qureg(num_qubits)
# entangle the qureg
Entangle | qureg
# measure; should be all-0 or all-1
All(Measure) | qureg
# run the circuit
eng.flush()
# access the probabilities via the back-end:
# results = eng.backend.get_probabilities(qureg)
# for state in results:
# print("Measured {} with p = {}.".format(state, results[state]))
# or plot them directly:
histogram(eng.backend, qureg)
plt.show()
# return one (random) measurement outcome.
return [int(q) for q in qureg]
run_entangle(eng, num_qubits=5) # run it
eng = MainEngine(IBMBackend(use_hardware=True, num_runs=1024,
verbose=False, device='ibmqx4',
retrieve_execution="5b557df2306393003b746da2"), # provide job ID
engine_list=projectq.setups.ibm.get_engine_list())
run_entangle(eng, num_qubits=5)
import projectq.setups.ibm16 # import setup which contains the grid mapper
eng = MainEngine(IBMBackend(use_hardware=True, num_runs=1024,
verbose=False, device='ibmqx5'), # use ibmqx5 now
engine_list=projectq.setups.ibm16.get_engine_list()) # note: ibm16 setup
run_entangle(eng, num_qubits=8)
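# Added illustration (assumes only what the text above states): setting
# use_hardware=False runs the same circuit on the IBM Q simulator instead,
# with everything else unchanged.
eng_sim = MainEngine(IBMBackend(use_hardware=False, num_runs=1024,
                                verbose=False, device='ibmqx4'),
                     engine_list=projectq.setups.ibm.get_engine_list())
run_entangle(eng_sim, num_qubits=5)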
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Tuples, lists, sets, dicts, strings and numpy arrays are the core objects for handling data in Python. In this notebook we learn to handle tuples, lists and sets as basic tools for everyday Python. Next we'll do strings as immutable sequences of characters and then dicts as storage devices in which we can reach values by keys. We'll show various practical ways to use them and their methods.
Step2: Tuples are handy for multiple assignments
Step3: Famous Python way of reversing values of parameters
Step4: Tuples, like lists, are sequences of arbitrary items
Step5: You can iterate over tuples, lists and sets (and dicts, strings and possibly other sequences)
Step6: But you cannot change a tuple because it's immutable.
Step7: The methods that are associated with tuple can be seen by typing tuple then a dot and then tapping the tab key. There are only two, count(item) and index(item).
Step8: Lists
Step9: Lists, like tuples, are sequences of arbitrary items
Step10: Up to here there's no difference between tuples and lists. But contrary to tuples, lists are mutable. You can see this from the methods that are associated with lists
Step11: Instead of only two (count() and index()) there are now 11. The other 9 methods can change the contents of the list.
Step12: Let's pop out the 'copy' using pop()
Step13: Item 'copy' was popped out and assigned to c; it is now missing from the list publicListMethods.
Step14: We will often see more complex structures like lists of lists, lists of tuples, tuples of lists or even lists of lists of lists. Hence we can build arbitrarily complex structures, even in combination with sets and dicts.
Step15: We can access any item with indexing and slicing
Step16: Sets
Step17: You see that only the unique items are kept. Their order is of no importance.
Step18: Which items are in yourSet and not in mySet?
Step19: Which items are in either set?
Step20: Which items are in both sets?
Step21: Adding and multiplying tuples and lists
|
<ASSISTANT_TASK:>
Python Code:
from pprint import pprint
import numpy as np
myTuple = ('This', 'is', 'our', 'tuple', 'number', 1)
print("This tuple contains {} itmes.".format(len(myTuple)))
print("Here you see that the object is a tuple: {}".format(type(myTuple)))
print("If you ask if this is a tuple, this is the answer:", isinstance(myTuple, tuple))
'duck', 'goat', 'dog' # this is also a tuple, parenthesis are not needed
print( type( ('duck')) )
('duck') # this is not a tuple, it's a single item between paranthesis
print( type( ('duck',) ) )
('duck',) # to make python recognize a single item as a tuple, put a comma behind it
'duck', # parenthesis are not needed, but the comma is to make it a tuple
12, # this hold true for numbers as well as for any objects
a, b, c, d = 12, ['a', 'duck'], np.sin, ['quack', 2]
print("a = ", a)
print("b = ", b)
print("c = ", c)
print("d = ", d)
c, a, d, b = a, b, c, d
print("a = ", a)
print("b = ", b)
print("c = ", c)
print("d = ", d)
myTuple = (12, 'a duck', np.sin, {'cat': 'Spooky', 'dog' : 'Barky', 'horse' : 'Duky'}, {'a', 'b', 'a', 'c', 'b', 'b'})
myTuple
for k in myTuple:
print(type(k))
12 and np.sin in myTuple
# And get values by indexing:
myTuple[3]
myTuple[1:3]
myTuple[0] = 14
myTuple.count(np.sin) # returns the number of occurences of item
myTuple.index(np.sin) # returns the index (location) of itme in tuple
dir(tuple) # gives all methods, both "private" and public
# Lists are, like tuples, sequences, but they are mutable.
# Lists are recognized by a series of comma-separated items within square brackets [ ]
myList = ['This', 'is', 'our', 'list', 'number', 1]
print("This list contains {} itmes.".format(len(myList)))
print("Here you see that the object is a list: {}".format(type(myList)))
print("If you ask if this is a list, this is the answer:", isinstance(myList, list))
myList = [12, 'a duck', np.sin, {'cat': 'Spooky', 'dog' : 'Barky', 'horse' : 'Duky'}, {'a', 'b', 'a', 'c', 'b', 'b'}]
myList
[p for p in dir(myList) if not p.startswith('_')]
publicListMethods = [] # empty list, you could also use = list()
for p in dir(myList):
if not p.startswith('_'):
publicListMethods.append(p)
publicListMethods
c = publicListMethods.pop(2)
print(c)
publicListMethods
publicListMethods.insert(4, 'copy')
print(publicListMethods)
publicListMethods.sort(reverse=True) # inplace sort
print(publicListMethods)
publicListMethods.reverse() # inplace reverse
print(publicListMethods)
sorted(publicListMethods) # sorts, not in place, it makes a copy, returns a new list
help(list.sort)
help(sorted)
listList = [['quack', 'duck', 'age', 2],
['bark', 'dog', 'age', 2],
['splash', 'fish', 'age', 5],
['tweet', 'Trump', 'age', 70]]
listList[1]
listList[3:0:-1]
listList[2][2]
listList[2][3:1:-1]
p = listList.pop(2)
print(p)
print(listList)
listList.append(p)
print(listList)
mySet = {'dog', 'cat', 'cat', 'horse', 'fish', 'dog', 'camel'}
yourSet = {'bird', 'cat', 'elephant', 'chimp', 'dog', 'elephant', 'zebra'}
print(mySet)
print(yourSet)
print(mySet - yourSet)
print(yourSet - mySet)
print(yourSet | mySet)
print(yourSet & mySet)
myList = ['john', 'goes', 'home']
yourList=['with', 'mary']
print(myList + yourList)
print(yourList * 6)
print( (myList + yourList) * 2)
print('John walks home' + ' with Mary') # addition of two strings
print('-' * 80) # holds also for strings, multiplication of a string
'Today, ' + str(5) + ' boats arrived at about ' + str(12) + " o'clock." # use str(number) to convert to string
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1
Step1: To get the monthly traffic data on English Wikipedia from January 2008 through September 2017, we need to use 2 API endpoints, the Pagecounts API and the Pageviews API. The Pagecounts API provides monthly desktop and mobile traffic data from January 2008 through July 2016, and the Pageviews API provides monthly desktop, mobile-web, and mobile-app traffic data from July 2015 through September 2017. Once the user finishes the parameter settings for the API request, the traffic data will be returned in JSON format. The code below will get you all pagecounts for English Wikipedia accessed through desktop from January 2008 through July 2016.
Step2: The code below will get you all pagecounts for English Wikipedia accessed through mobile from January 2008 through July 2016.
Step3: The code below will get you all pageviews for English Wikipedia accessed through desktop from July 2015 through September 2017. Note that the data doesn't count traffic by web crawlers or spiders.
Step4: The code below will get you all pageviews for English Wikipedia accessed through the mobile website from July 2015 through September 2017. Again, note that the data doesn't count traffic by web crawlers or spiders.
Step5: The code below will get you all pageviews for English Wikipedia accessed through the mobile app from July 2015 through September 2017. Again, note that the data doesn't count traffic by web crawlers or spiders.
Step6: Step 2
Step7: After we get the dictionary, we could convert it into a Pandas dataframe and save the dataframe to a csv file
Step8: Step 3
|
<ASSISTANT_TASK:>
Python Code:
# Import packages that will be used in this assignment
import requests
import json
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
# Collect desktop traffic data from January 2008 through July 2016 using the Pagecounts API
endpoint_pagecounts = 'https://wikimedia.org/api/rest_v1/metrics/legacy/pagecounts/aggregate/{project}/{access}/{granularity}/{start}/{end}'
params_pc_desktop = {
'project' : 'en.wikipedia.org',
'access' : 'desktop-site',
'granularity' : 'monthly',
'start' : '2008010100',
'end' : '2016080100'#use the first day of the following month to ensure a full month of data is collected
}
api_call = requests.get(endpoint_pagecounts.format(**params_pc_desktop))
response_pc_desktop = api_call.json()
with open('pagecounts_desktop-site_200801-201607.json', 'w') as outfile:
json.dump(response_pc_desktop, outfile)
# Collect mobile traffic data from January 2008 through July 2016 using the Pagecounts API
endpoint_pagecounts = 'https://wikimedia.org/api/rest_v1/metrics/legacy/pagecounts/aggregate/{project}/{access}/{granularity}/{start}/{end}'
params_pc_mobile = {
'project' : 'en.wikipedia.org',
'access' : 'mobile-site',
'granularity' : 'monthly',
'start' : '2008010100',
'end' : '2016080100'
}
api_call = requests.get(endpoint_pagecounts.format(**params_pc_mobile))
response_pc_mobile = api_call.json()
with open('pagecounts_mobile-site_200801-201607.json', 'w') as outfile:
json.dump(response_pc_mobile, outfile)
# Collect desktop traffic data from July 2015 through September 2017 using the Pageviews API
endPoint_pageviews = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate/{project}/{access}/{agent}/{granularity}/{start}/{end}'
headers = {'User-Agent' : 'https://github.com/HWNi', 'From' : 'haowen2@uw.edu'}
params_pv_desktop = {
'project' : 'en.wikipedia.org',
'access' : 'desktop',
'agent' : 'user',
'granularity' : 'monthly',
'start' : '2015070100',
'end' : '2017100100'
}
api_call = requests.get(endPoint_pageviews.format(**params_pv_desktop))
response_pv_desktop = api_call.json()
with open('pageviews_desktop_201507-201709.json', 'w') as outfile:
json.dump(response_pv_desktop, outfile)
# Collect mobile web traffic data from July 2015 through September 2017 using the Pageviews API
endPoint_pageviews = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate/{project}/{access}/{agent}/{granularity}/{start}/{end}'
headers = {'User-Agent' : 'https://github.com/HWNi', 'From' : 'haowen2@uw.edu'}
params_pv_mobile_web = {
'project' : 'en.wikipedia.org',
'access' : 'mobile-web',
'agent' : 'user',
'granularity' : 'monthly',
'start' : '2015070100',
'end' : '2017100100'
}
api_call = requests.get(endPoint_pageviews.format(**params_pv_mobile_web))
response_pv_mobile_web = api_call.json()
with open('pageviews_mobile-web_201507-201709.json', 'w') as outfile:
json.dump(response_pv_mobile_web, outfile)
# Collect mobile app traffic data from July 2015 through September 2017 using the Pageviews API
endPoint_pageviews = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/aggregate/{project}/{access}/{agent}/{granularity}/{start}/{end}'
headers = {'User-Agent' : 'https://github.com/HWNi', 'From' : 'haowen2@uw.edu'}
params_pv_mobile_app = {
'project' : 'en.wikipedia.org',
'access' : 'mobile-app',
'agent' : 'user',
'granularity' : 'monthly',
'start' : '2015070100',
'end' : '2017100100'
}
api_call = requests.get(endPoint_pageviews.format(**params_pv_mobile_app))
response_pv_mobile_app = api_call.json()
with open('pageviews_mobile-app_201507-201709.json', 'w') as outfile:
json.dump(response_pv_mobile_app, outfile)
data_cleaned = {}
for item in response_pc_desktop['items']:
timeStamp = item['timestamp']
data_cleaned[timeStamp] = [item['count'], 0, 0, 0, 0]
for item in response_pc_mobile['items']:
timeStamp = item['timestamp']
if timeStamp in data_cleaned:
data_cleaned[timeStamp][1] = item['count']
else:
data_cleaned[timeStamp] = [0, item['count'], 0, 0, 0]
for item in response_pv_desktop['items']:
timeStamp = item['timestamp']
if timeStamp in data_cleaned:
data_cleaned[timeStamp][2] = item['views']
else:
data_cleaned[timeStamp] = [0, 0, item['views'], 0, 0]
for item in response_pv_mobile_web['items']:
timeStamp = item['timestamp']
if timeStamp in data_cleaned:
data_cleaned[timeStamp][3] = item['views']
else:
data_cleaned[timeStamp] = [0, 0, 0, item['views'], 0]
for item in response_pv_mobile_app['items']:
timeStamp = item['timestamp']
if timeStamp in data_cleaned:
data_cleaned[timeStamp][4] = item['views']
else:
data_cleaned[timeStamp] = [0, 0, 0, 0, item['views']]
df = pd.DataFrame.from_dict(data_cleaned, orient='index')
df_result = pd.DataFrame
df['timestamp'] = df.index
df['year'] = [t[0:4] for t in df['timestamp']]
df['month'] = [t[4:6] for t in df['timestamp']]
df['pagecount_all_views'] = df[0] + df[1]
df['pagecount_desktop_views'] = df[0]
df['pagecount_mobile_views'] = df[1]
df['pageview_all_views'] = df[2] + df[3] + df[4]
df['pageview_desktop_views'] = df[2]
df['pageview_mobile_views'] = df[3] + df[4]
df = df.loc[:, 'year' : 'pageview_mobile_views']
df.to_csv('en-wikipedia_traffic_200801-201709.csv', index=False)
df
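# Added illustration (not part of the original assignment): a quick derived view of
# the cleaned table, the mobile share of all pageviews, restricted to months for
# which the Pageviews API reports data.
pv = df[df['pageview_all_views'] > 0]
print((pv['pageview_mobile_views'] / pv['pageview_all_views']).describe())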
dateRange = pd.date_range('2008-01', '2017-10', freq='M')
scale = 1e-6
sns.set_style("whitegrid")
fig = plt.figure(figsize=(18, 12))
plt.plot(dateRange, df['pagecount_all_views'] * scale, linestyle = ':', label='pagecount_all_views')
plt.plot(dateRange, df['pagecount_desktop_views'] * scale, label='pagecount_desktop_views')
plt.plot(dateRange, df['pagecount_mobile_views'] * scale, label='pagecount_mobile_views')
plt.plot(dateRange, df['pageview_all_views'] * scale, linestyle = ':', label='pageview_all_views')
plt.plot(dateRange, df['pageview_desktop_views'] * scale, label='pageview_desktop_views')
plt.plot(dateRange, df['pageview_mobile_views'] * scale, label='pageview_mobile_views')
plt.legend()
plt.xlabel('Year')
plt.ylabel('Amount of Traffic (* 1,000,000)')
fig.savefig('en-wikipedia_traffic_200801-201709.jpg')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now you can invoke f and pass the input values, i.e. f(1,1), f(10,-3) and the result for this operation is returned.
Step2: Printing of the graph
Step3: The graph fo z
Step4: Next we define some NumPy-Array with data and let Theano compute the result for $f(x,W,b)$
Step5: Don't confuse x, W, b with inputX, inputW, inputB. x, W, b contain pointers to your symbols in the compute graph; inputX, inputW, inputB contain your data.
Step6: Shared Variables
|
<ASSISTANT_TASK:>
Python Code:
import theano
import theano.tensor as T
#Put your code here
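# One possible solution sketch (added; the exercise leaves the definition open).
# Assumption: f simply adds two scalars, the classic introductory Theano example;
# the original notebook may have intended a different expression.
x = T.dscalar('x')
y = T.dscalar('y')
z = x + y
f = theano.function([x, y], z)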
print f(1,1)
print f(10,-3)
#Graph for z
theano.printing.pydotprint(z, outfile="pics/z_graph.png", var_with_name_simple=True)
#Graph for function f (after optimization)
theano.printing.pydotprint(f, outfile="pics/f_graph.png", var_with_name_simple=True)
import theano
import theano.tensor as T
import numpy as np
# Put your code here
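# One possible solution sketch (added; the exact model is an assumption). Given the
# shapes below (x: (3,), W: (3,2), b: (2,)), a natural choice is the affine map
# f(x, W, b) = x.dot(W) + b.
x = T.fvector('x')
W = T.fmatrix('W')
b = T.fvector('b')
z = T.dot(x, W) + b
f = theano.function([x, W, b], z)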
inputX = np.asarray([0.1, 0.2, 0.3], dtype='float32')
inputW = np.asarray([[0.1,-0.2],[-0.4,0.5],[0.6,-0.7]], dtype='float32')
inputB = np.asarray([0.1,0.2], dtype='float32')
print "inputX.shape",inputX.shape
print "inputW.shape",inputW.shape
f(inputX, inputW, inputB)
import theano
import theano.tensor as T
import numpy as np
#Define my internal state
init_value = 1
state = theano.shared(value=init_value, name='state')
#Define my operation f(x) = 2*x
x = T.lscalar('x')
z = 2*x
accumulator = theano.function(inputs=[], outputs=z, givens={x: state})
print accumulator()
print accumulator()
#New accumulator function, now with an update
# Put your code here to update the internal counter
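# One possible solution sketch (added; the exact update rule is an assumption):
# take an increment as input, return 2 * (current state) as before, and advance
# the internal counter by that increment on every call.
inc = T.lscalar('inc')
accumulator = theano.function(inputs=[inc], outputs=z,
                              givens={x: state},
                              updates=[(state, state + inc)])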
print accumulator(1)
print accumulator(1)
print accumulator(1)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Hamilton (1989) switching model of GNP
Step2: We plot the filtered and smoothed probabilities of a recession. Filtered refers to an estimate of the probability at time $t$ based on data up to and including time $t$ (but excluding time $t+1, ..., T$). Smoothed refers to an estimate of the probability at time $t$ using all the data in the sample.
Step3: From the estimated transition matrix we can calculate the expected duration of a recession versus an expansion.
Step4: In this case, it is expected that a recession will last about one year (4 quarters) and an expansion about two and a half years.
Step5: Below we plot the probabilities of being in each of the regimes; only in a few periods is a high-variance regime probable.
Step6: Filardo (1994) Time-Varying Transition Probabilities
Step7: The time-varying transition probabilities are specified by the exog_tvtp parameter.
Step8: Below we plot the smoothed probability of the economy operating in a low-production state, and again include the NBER recessions for comparison.
Step9: Using the time-varying transition probabilities, we can see how the expected duration of a low-production state changes over time
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import statsmodels.api as sm
import matplotlib.pyplot as plt
import requests
from io import BytesIO
# NBER recessions
from pandas_datareader.data import DataReader
from datetime import datetime
usrec = DataReader('USREC', 'fred', start=datetime(1947, 1, 1), end=datetime(2013, 4, 1))
# Get the RGNP data to replicate Hamilton
dta = pd.read_stata('http://www.stata-press.com/data/r14/rgnp.dta').iloc[1:]
dta.index = pd.DatetimeIndex(dta.date, freq='QS')
dta_hamilton = dta.rgnp
# Plot the data
dta_hamilton.plot(title='Growth rate of Real GNP', figsize=(12,3))
# Fit the model
mod_hamilton = sm.tsa.MarkovAutoregression(dta_hamilton, k_regimes=2, order=4, switching_ar=False)
res_hamilton = mod_hamilton.fit()
res_hamilton.summary()
fig, axes = plt.subplots(2, figsize=(7,7))
ax = axes[0]
ax.plot(res_hamilton.filtered_marginal_probabilities[0])
ax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='k', alpha=0.1)
ax.set_xlim(dta_hamilton.index[4], dta_hamilton.index[-1])
ax.set(title='Filtered probability of recession')
ax = axes[1]
ax.plot(res_hamilton.smoothed_marginal_probabilities[0])
ax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='k', alpha=0.1)
ax.set_xlim(dta_hamilton.index[4], dta_hamilton.index[-1])
ax.set(title='Smoothed probability of recession')
fig.tight_layout()
print(res_hamilton.expected_durations)
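# Added note (illustrative, plain arithmetic rather than output of the fit): the
# expected durations above follow from the regime persistence probabilities via
# E[duration of regime i] = 1 / (1 - p_ii). Persistence of roughly 0.75 (recession)
# and 0.90 (expansion) reproduces the 4 and 10 quarters quoted in the text.
for p_stay in (0.75, 0.90):
    print('p_ii = %.2f -> expected duration = %.1f quarters' % (p_stay, 1 / (1 - p_stay)))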
# Get the dataset
ew_excs = requests.get('http://econ.korea.ac.kr/~cjkim/MARKOV/data/ew_excs.prn').content
raw = pd.read_table(BytesIO(ew_excs), header=None, skipfooter=1, engine='python')
raw.index = pd.date_range('1926-01-01', '1995-12-01', freq='MS')
dta_kns = raw.loc[:'1986'] - raw.loc[:'1986'].mean()
# Plot the dataset
dta_kns[0].plot(title='Excess returns', figsize=(12, 3))
# Fit the model
mod_kns = sm.tsa.MarkovRegression(dta_kns, k_regimes=3, trend='nc', switching_variance=True)
res_kns = mod_kns.fit()
res_kns.summary()
fig, axes = plt.subplots(3, figsize=(10,7))
ax = axes[0]
ax.plot(res_kns.smoothed_marginal_probabilities[0])
ax.set(title='Smoothed probability of a low-variance regime for stock returns')
ax = axes[1]
ax.plot(res_kns.smoothed_marginal_probabilities[1])
ax.set(title='Smoothed probability of a medium-variance regime for stock returns')
ax = axes[2]
ax.plot(res_kns.smoothed_marginal_probabilities[2])
ax.set(title='Smoothed probability of a high-variance regime for stock returns')
fig.tight_layout()
# Get the dataset
filardo = requests.get('http://econ.korea.ac.kr/~cjkim/MARKOV/data/filardo.prn').content
dta_filardo = pd.read_table(BytesIO(filardo), sep=' +', header=None, skipfooter=1, engine='python')
dta_filardo.columns = ['month', 'ip', 'leading']
dta_filardo.index = pd.date_range('1948-01-01', '1991-04-01', freq='MS')
dta_filardo['dlip'] = np.log(dta_filardo['ip']).diff()*100
# Deflated pre-1960 observations by ratio of std. devs.
# See hmt_tvp.opt or Filardo (1994) p. 302
std_ratio = dta_filardo['dlip']['1960-01-01':].std() / dta_filardo['dlip'][:'1959-12-01'].std()
dta_filardo['dlip'][:'1959-12-01'] = dta_filardo['dlip'][:'1959-12-01'] * std_ratio
dta_filardo['dlleading'] = np.log(dta_filardo['leading']).diff()*100
dta_filardo['dmdlleading'] = dta_filardo['dlleading'] - dta_filardo['dlleading'].mean()
# Plot the data
dta_filardo['dlip'].plot(title='Standardized growth rate of industrial production', figsize=(13,3))
plt.figure()
dta_filardo['dmdlleading'].plot(title='Leading indicator', figsize=(13,3));
mod_filardo = sm.tsa.MarkovAutoregression(
dta_filardo.iloc[2:]['dlip'], k_regimes=2, order=4, switching_ar=False,
exog_tvtp=sm.add_constant(dta_filardo.iloc[1:-1]['dmdlleading']))
np.random.seed(12345)
res_filardo = mod_filardo.fit(search_reps=20)
res_filardo.summary()
fig, ax = plt.subplots(figsize=(12,3))
ax.plot(res_filardo.smoothed_marginal_probabilities[0])
ax.fill_between(usrec.index, 0, 1, where=usrec['USREC'].values, color='gray', alpha=0.2)
ax.set_xlim(dta_filardo.index[6], dta_filardo.index[-1])
ax.set(title='Smoothed probability of a low-production state');
res_filardo.expected_durations[0].plot(
title='Expected duration of a low-production state', figsize=(12,3));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data - Sinus + linear function
Step2: The is how it looks like
Step10: The Fitting class
|
<ASSISTANT_TASK:>
Python Code:
# == Basic import == #
# plot within the notebook
%matplotlib inline
# No annoying warnings
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import matplotlib.pyplot as mpl
x = np.linspace(0,20,100)
dy = np.random.normal(0,7,100)
y = 10*np.sin(x) + 4*x + dy
mpl.plot(x,y,"ob")
class Chi2Fit( object ):
def __init__(self, x, data, errors):
init the class
Parameters:
-----------
x: [array]
the x-axis used for the modeling.
data, errors: [arrays]
measurement and its associated errors
Note: x,data and errors must have the same size
Return
------
Void
self.x = np.asarray(x)
self.data = np.asarray(data)
self.errors = np.asarray(errors)
self.npoints = len(data)
def fit(self,guess):
fit the model to the data
The methods uses scipy.optimize.minize to fit the model
to the data. The fit output is saved as self.fitout, the
best fit parameters being self.fitout["x"]
Parameters
----------
guess: [array]
initial guess for the minimizer. It's size must correspond
to the amount of free parameters of the model.
Return
------
Void (create self.fitout)
from scipy.optimize import minimize
self.fitout = minimize(self.chi2, guess)
print self.fitout
def get_model(self,parameters):
YOU HAVE TO IMPLEMENT THIS METHOD
This method should return the model-array that will be
compared to self.data
raise NotImplementedError(" CREATE IT IN YOUR CLASS")
def chi2(self,parameters):
The chi2 of the model with the given `parameters`
in comparison to the object's data
Return
------
float (the chi2)
res = self.data - self.get_model(parameters)
chi2 = (res**2)/(self.errors**2)
return np.sum(chi2)
def plot(self, parameters):
        Visualize the data and the model for the given
parameters
Return
------
Void
fig = mpl.figure()
ax = fig.add_subplot(1,1,1)
ax.errorbar(self.x,self.data, yerr= self.errors,
ls="None",marker='o', color="b", ecolor="0.7")
ax.plot(self.x,self.get_model(parameters),'-r')
fig.show()
# ----------------- #
# The Actual Model #
# ----------------- #
class SinFit( Chi2Fit ):
def get_model(self,parameters):
the modeled array for the given parameters
The model is:
$$
A sin(x) + B*x
$$
such that A,B = parameters
Return
------
array
A,B = parameters
return A*np.sin(self.x) + B*self.x
class LinFit( Chi2Fit ):
def get_model(self,parameters):
the modeled array for the given parameters
The model is:
$$
A + B*x
$$
such that A,B = parameters
Return
------
array
A,B = parameters
return A + B*self.x
sinfit = SinFit(x,y,dy)
linfit = LinFit(x,y,dy)
sinfit.fit([2,3])
linfit.fit([2,3])
sinfit.plot(sinfit.fitout["x"])
print(sinfit.fitout["x"])
print("(the input being: 10, 4)")
linfit.plot(linfit.fitout["x"])
print(linfit.fitout["x"])
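# Added comparison (illustrative): both objects expose the same chi2 method, so the
# two models can be compared directly on the data they were fitted to; the
# sinusoidal model should give a much lower chi2.
print("chi2 (sin model): ", sinfit.chi2(sinfit.fitout["x"]))
print("chi2 (linear model):", linfit.chi2(linfit.fitout["x"]))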
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <span id="plat_prod">Choose Platforms and Products ▴</span>
Step2: <span id="extents">Get the Extents of the Cube ▴</span>
Step3: <span id="define_extents">Define the Extents of the Analysis ▴</span>
Step4: <span id="define_analysis_params">Define Analysis Parameters ▴</span>
Step5: <span id="load_data">Load and Clean Data from the Data Cube ▴</span>
Step6: Create a Mosaic for the Baseline and Analysis Time Periods
Step7: <span id="anomalies">Calculate Anomaly Product ▴</span>
Step8: Show Baseline RGB, Analysis RGB and Anomaly Products
Step9: Calculate Pixel Counts within Thresholds
|
<ASSISTANT_TASK:>
Python Code:
import sys
import os
sys.path.append(os.environ.get('NOTEBOOK_ROOT'))
import warnings
# Supress Warning
warnings.filterwarnings('ignore')
from datacube.utils.aws import configure_s3_access
configure_s3_access(requester_pays=True)
import utils.data_cube_utilities.data_access_api as dc_api
api = dc_api.DataAccessApi()
dc = api.dc
# Select a Product and Platform
product = 's2_google_vanuatu'
platform = 'SENTINEL-2'
# Print extents of the data cube
extents = api.get_full_dataset_extent(platform = platform, product = product)
latitude_extents = (min(extents['latitude'].values),max(extents['latitude'].values))
longitude_extents = (min(extents['longitude'].values),max(extents['longitude'].values))
time_extents = (min(extents['time'].values),max(extents['time'].values))
print(time_extents)
print(latitude_extents)
print(longitude_extents)
# Select an analysis region
# Vanuatu - Peninsula near Port Vila
latitude = (-17.75, -17.63)
longitude = (168.15, 168.25)
## The code below renders a map that can be used to orient yourself with the region.
from utils.data_cube_utilities.dc_display_map import display_map
display_map(latitude = latitude, longitude = longitude)
from datetime import datetime
# Select the start and end periods for your analysis products
# The datetime function is (Year,Month,Day)
# These time windows will be used to make a mosaic, so typically pick a year length (or more)
# or select a small window surrounding a clear single date (use Cloud Statistics notebook)
# Also, be sure to evaluate the RGB mosaics (below) to affirm they are not full of clouds
# Select the baseline time period (start and end)
baseline_time_period = (datetime(2019,7,27), datetime(2019,7,29))
# Select the analysis time period (start and end)
analysis_time_period = (datetime(2020,7,1), datetime(2020,7,3))
# Select the cloud-free mosaic type
# Options are: max_ndvi, median, most_recent_pixel
# Use "median" for longer time periods, such as a year
# Use "most_recent_pixel" for short time periods, such as one day
# Use "max_ndvi" for seasonal time periods to compare vegetation peaks
baseline_mosaic_function = "most_recent_pixel"
analysis_mosaic_function = "most_recent_pixel"
common_load_params = \
dict(latitude=latitude,longitude=longitude,platform=platform,product=product,
measurements = ['red', 'green', 'blue', 'nir', 'swir1', 'swir2', 'scl'],
group_by='solar_day', dask_chunks={'time':1, 'latitude':1000, 'longitude':1000})
baseline_ds = dc.load(**common_load_params,
time=baseline_time_period)
analysis_ds = dc.load(**common_load_params,
time=analysis_time_period)
cloud_mask_baseline = (baseline_ds.scl != 0) & (baseline_ds.scl != 1) & \
(baseline_ds.scl != 3) & (baseline_ds.scl != 8) & \
(baseline_ds.scl != 9) & (baseline_ds.scl != 10)
baseline_ds = baseline_ds.where(cloud_mask_baseline)
cloud_mask_analysis = (analysis_ds.scl != 0) & (analysis_ds.scl != 1) & \
(analysis_ds.scl != 3) & (analysis_ds.scl != 8) & \
(analysis_ds.scl != 9) & (analysis_ds.scl != 10)
analysis_ds = analysis_ds.where(cloud_mask_analysis)
from utils_special.data_cube_utilities.dc_mosaic import create_max_ndvi_mosaic, create_median_mosaic, create_mosaic
mosaic_function = {"median": create_median_mosaic,
"max_ndvi": create_max_ndvi_mosaic,
"most_recent_pixel": create_mosaic}
baseline_compositor = mosaic_function[baseline_mosaic_function]
analysis_compositor = mosaic_function[analysis_mosaic_function]
baseline_composite = baseline_compositor(baseline_ds, cloud_mask_baseline.values)
analysis_composite = analysis_compositor(analysis_ds, cloud_mask_analysis.values)
def NDVI(dataset):
return (dataset.nir - dataset.red)/(dataset.nir + dataset.red)
parameter_baseline_composite = NDVI(baseline_composite)
parameter_analysis_composite = NDVI(analysis_composite)
parameter_anomaly = parameter_analysis_composite - parameter_baseline_composite
import matplotlib.pyplot as plt
from utils.data_cube_utilities.dc_rgb import rgb
from matplotlib.cm import RdYlGn
RdYlGn.set_bad('black',1.)
# Define the significant anomaly range for Plot #4
loss_range = parameter_anomaly < -0.2
gain_range = parameter_anomaly > 0.2
import xarray as xr
import numpy as np
fig, ax = plt.subplots(2, 2, figsize=(12,12))
for sub_ax in ax.flatten():
sub_ax.set_facecolor('black')
baseline_rgb = baseline_composite[['red', 'green', 'blue']].to_array()
analysis_rgb = analysis_composite[['red', 'green', 'blue']].to_array()
# Use the middle values of the data (2% to 98%) to brighten the image
lw_qtl, up_qtl = 0.02, 0.98
rgb_vmin = min(baseline_rgb.quantile(lw_qtl).values,analysis_rgb.quantile(lw_qtl).values)
rgb_vmax = max(baseline_rgb.quantile(up_qtl).values,analysis_rgb.quantile(up_qtl).values)
# Plot the resulting 4 products ... Baseline RGB, Analysis RGB, Total Anomaly, Anomaly Threshold
# NOTE: Clouds in either the baseline or analysis images will be removed from the anomaly product
## Plot #1 = Baseline RGB (upper left)
axes_image = baseline_rgb.plot.imshow(ax=ax[0,0], vmin=rgb_vmin, vmax=rgb_vmax)
## Plot #2 = Analysis RGB (upper right)
analysis_rgb.plot.imshow(ax=ax[0,1], vmin=rgb_vmin, vmax=rgb_vmax)
## Plot #3 = Total Anomaly (lower left)
parameter_anomaly.plot(ax=ax[1,0], vmin=-0.4, vmax=0.4, cmap = RdYlGn, add_colorbar=False)
## Plot #4 = Anomaly Threshold (lower right)
# Analysis composite grayscale background
plt4_bkg_band = 'swir1' # The band to use as the background image.
plt4_rgb = np.repeat(analysis_composite[plt4_bkg_band].where(cloud_mask_baseline.squeeze('time'))\
.values[:,:,np.newaxis],3,axis=2)
# Selected a range of SWIR1 values (0.001 to 0.600) to lighten image background (vs. 0.02 and 0.98)
min_bkg = np.nanquantile(analysis_composite[plt4_bkg_band].values, 0.001)
max_bkg = np.nanquantile(analysis_composite[plt4_bkg_band].values, 0.600)
plt4_rgb = np.interp(plt4_rgb, (min_bkg, max_bkg), [0,1])
# Significant anomaly color overlays
color_green = np.array([0,1,0]) # green
color_red = np.array([1,0,0]) # red
plt4_rgb[loss_range] = color_red
plt4_rgb[gain_range] = color_green
# Plot
plt4_coords = dict(analysis_composite.coords)
rgb_coord_arr = np.array(['red', 'green', 'blue'])
rgb_coord_da = xr.DataArray(rgb_coord_arr,name='rgb',dims=['rgb'],coords={'rgb': rgb_coord_arr})
plt4_coords.update({'rgb': rgb_coord_da})
plt4_rgb_da = xr.DataArray(plt4_rgb, coords=plt4_coords,dims=list(analysis_composite.dims) + ['rgb'])
plt4_rgb_da.plot.imshow(ax=ax[1,1])
# Titles for all plots
ax[0,0].set_title('Baseline Composite'), ax[0,0].xaxis.set_visible(False), ax[0,0].yaxis.set_visible(False)
ax[0,1].set_title('Analysis Composite'), ax[0,1].xaxis.set_visible(False), ax[0,1].yaxis.set_visible(False)
ax[1,0].set_title('Vegetation Anomalies: Red=Loss, Green=Gain'), ax[1,0].xaxis.set_visible(False), ax[1,0].yaxis.set_visible(False)
ax[1,1].set_title('Locations of Significant Anomalies: Red=Loss, Green=Gain'), ax[1,1].xaxis.set_visible(False), ax[1,1].yaxis.set_visible(False)
plt.tight_layout()
plt.show()
def threshold_count(da, min_threshold, max_threshold, mask = None):
def count_not_nans(arr):
return np.count_nonzero(~np.isnan(arr))
in_threshold = np.logical_and( da.values > min_threshold, da.values < max_threshold)
total_non_cloudy = count_not_nans(da.values) if mask is None else np.sum(mask)
return dict(total = np.size(da.values),
total_non_cloudy = total_non_cloudy,
inside = np.nansum(in_threshold),
outside = total_non_cloudy - np.nansum(in_threshold)
)
def threshold_percentage(da, min_threshold, max_threshold, mask = None):
counts = threshold_count(da, min_threshold, max_threshold, mask = mask)
return dict(percent_inside_threshold = (counts["inside"] / counts["total"]) * 100.0,
percent_outside_threshold = (counts["outside"] / counts["total"]) * 100.0,
percent_clouds = ( 100.0-counts["total_non_cloudy"] / counts["total"] * 100.0))
# Select an Anomaly Threshold Range to calculate the results from the previous image
# Be sure to put the smallest value in the "minimum_change" location (be careful of negative values)
# This code block can be used for either vegetation loss or vegetation gain calculations
# NDVI losses might use the following limits: min=-1.0 and max=-0.2
# NDVI gains might use the following limits: min=0.2, max=1.0
minimum_change = -0.7
maximum_change = -0.2
# This output is a count of the pixels that fall within each threshold range
threshold_count(parameter_anomaly,minimum_change,maximum_change)
# This output is a percentage of the pixels that fall within each threshold range
threshold_percentage(parameter_anomaly,minimum_change,maximum_change)
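# A minimal sketch (assuming the same parameter_anomaly array): the corresponding vegetation-gain
# statistics can be computed with the gain limits suggested above (min=0.2, max=1.0).
gain_minimum_change = 0.2
gain_maximum_change = 1.0
threshold_percentage(parameter_anomaly, gain_minimum_change, gain_maximum_change)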
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: 2 - Outline of the Assignment
Step4: Expected output
Step6: Expected output
Step8: Expected output
Step10: Expected output
Step12:
Step14: Expected Output
Step16: Expected Output
Step18: Expected output with sigmoid
Step20: Expected Output
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v2 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
parameters -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
np.random.seed(1)
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros((n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros((n_y, 1))
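# Note: the weights are drawn from a small random Gaussian (scaled by 0.01) to break symmetry
# between hidden units, while the biases can safely be initialized to zero.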
### END CODE HERE ###
assert(W1.shape == (n_h, n_x))
assert(b1.shape == (n_h, 1))
assert(W2.shape == (n_y, n_h))
assert(b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters = initialize_parameters(2,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
# GRADED FUNCTION: initialize_parameters_deep
def initialize_parameters_deep(layer_dims):
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl -- bias vector of shape (layer_dims[l], 1)
np.random.seed(3)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
### END CODE HERE ###
assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
parameters = initialize_parameters_deep([5,4,3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
# GRADED FUNCTION: linear_forward
def linear_forward(A, W, b):
Implement the linear part of a layer's forward propagation.
Arguments:
A -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
Returns:
Z -- the input of the activation function, also called pre-activation parameter
cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently
### START CODE HERE ### (≈ 1 line of code)
Z = np.dot(W, A) + b
### END CODE HERE ###
assert(Z.shape == (W.shape[0], A.shape[1]))
cache = (A, W, b)
return Z, cache
A, W, b = linear_forward_test_case()
Z, linear_cache = linear_forward(A, W, b)
print("Z = " + str(Z))
# GRADED FUNCTION: linear_activation_forward
def linear_activation_forward(A_prev, W, b, activation):
Implement the forward propagation for the LINEAR->ACTIVATION layer
Arguments:
A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
A -- the output of the activation function, also called the post-activation value
cache -- a python dictionary containing "linear_cache" and "activation_cache";
stored for computing the backward pass efficiently
if activation == "sigmoid":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = sigmoid(Z)
### END CODE HERE ###
elif activation == "relu":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = relu(Z)
### END CODE HERE ###
assert (A.shape == (W.shape[0], A_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
A_prev, W, b = linear_activation_forward_test_case()
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid")
print("With sigmoid: A = " + str(A))
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu")
print("With ReLU: A = " + str(A))
# GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
Arguments:
X -- data, numpy array of shape (input size, number of examples)
parameters -- output of initialize_parameters_deep()
Returns:
AL -- last post-activation value
caches -- list of caches containing:
every cache of linear_relu_forward() (there are L-1 of them, indexed from 0 to L-2)
the cache of linear_sigmoid_forward() (there is one, indexed L-1)
caches = []
A = X
L = len(parameters) // 2 # number of layers in the neural network
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
for l in range(1, L):
A_prev = A
### START CODE HERE ### (≈ 2 lines of code)
A, cache = linear_activation_forward(A_prev, parameters['W' + str(l)], parameters['b' + str(l)], "relu")
caches.append(cache)
### END CODE HERE ###
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
### START CODE HERE ### (≈ 2 lines of code)
AL, cache = linear_activation_forward(A, parameters['W' + str(L)], parameters['b' + str(L)], "sigmoid")
caches.append(cache)
### END CODE HERE ###
assert(AL.shape == (1,X.shape[1]))
return AL, caches
X, parameters = L_model_forward_test_case()
AL, caches = L_model_forward(X, parameters)
print("AL = " + str(AL))
print("Length of caches list = " + str(len(caches)))
# GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
Implement the cost function defined by equation (7).
Arguments:
AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
Returns:
cost -- cross-entropy cost
m = Y.shape[1]
# Compute loss from aL and y.
### START CODE HERE ### (≈ 1 lines of code)
cost = -1 / m * np.sum(np.multiply(Y, np.log(AL)) + np.multiply((1 - Y), np.log(1 - AL)))
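# This implements the cross-entropy cost: J = -(1/m) * sum( y*log(a) + (1-y)*log(1-a) )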
### END CODE HERE ###
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
assert(cost.shape == ())
return cost
Y, AL = compute_cost_test_case()
print("cost = " + str(compute_cost(AL, Y)))
# GRADED FUNCTION: linear_backward
def linear_backward(dZ, cache):
Implement the linear portion of backward propagation for a single layer (layer l)
Arguments:
dZ -- Gradient of the cost with respect to the linear output (of current layer l)
cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
A_prev, W, b = cache
m = A_prev.shape[1]
### START CODE HERE ### (≈ 3 lines of code)
dW = 1 / m * np.dot(dZ, A_prev.T)
db = 1 / m * np.sum(dZ, axis=1, keepdims=True)
dA_prev = np.dot(W.T, dZ)
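# Standard linear-layer gradients: dW = (1/m) * dZ . A_prev^T, db = (1/m) * sum(dZ) over the
# examples, and dA_prev = W^T . dZ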
### END CODE HERE ###
assert (dA_prev.shape == A_prev.shape)
assert (dW.shape == W.shape)
assert (db.shape == b.shape)
return dA_prev, dW, db
# Set up some test inputs
dZ, linear_cache = linear_backward_test_case()
dA_prev, dW, db = linear_backward(dZ, linear_cache)
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
# GRADED FUNCTION: linear_activation_backward
def linear_activation_backward(dA, cache, activation):
Implement the backward propagation for the LINEAR->ACTIVATION layer.
Arguments:
dA -- post-activation gradient for current layer l
cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
linear_cache, activation_cache = cache
if activation == "relu":
### START CODE HERE ### (≈ 2 lines of code)
dZ = relu_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
elif activation == "sigmoid":
### START CODE HERE ### (≈ 2 lines of code)
dZ = sigmoid_backward(dA, activation_cache)
dA_prev, dW, db = linear_backward(dZ, linear_cache)
### END CODE HERE ###
return dA_prev, dW, db
AL, linear_activation_cache = linear_activation_backward_test_case()
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "sigmoid")
print ("sigmoid:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db) + "\n")
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "relu")
print ("relu:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
# GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
caches -- list of caches containing:
every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
Returns:
grads -- A dictionary with the gradients
grads["dA" + str(l)] = ...
grads["dW" + str(l)] = ...
grads["db" + str(l)] = ...
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
# Initializing the backpropagation
### START CODE HERE ### (1 line of code)
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))
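# dAL is the derivative of the cross-entropy cost with respect to AL: dJ/dAL = -(Y/AL - (1-Y)/(1-AL))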
### END CODE HERE ###
# Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "AL, Y, caches". Outputs: "grads["dAL"], grads["dWL"], grads["dbL"]
### START CODE HERE ### (approx. 2 lines)
current_cache = caches[L - 1]
grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = linear_activation_backward(dAL, current_cache, activation = "sigmoid")
### END CODE HERE ###
for l in reversed(range(L-1)):
# lth layer: (RELU -> LINEAR) gradients.
# Inputs: "grads["dA" + str(l + 2)], caches". Outputs: "grads["dA" + str(l + 1)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)]
### START CODE HERE ### (approx. 5 lines)
current_cache = caches[l]
dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l + 2)], current_cache, activation = "relu")
grads["dA" + str(l + 1)] = dA_prev_temp
grads["dW" + str(l + 1)] = dW_temp
grads["db" + str(l + 1)] = db_temp
### END CODE HERE ###
return grads
AL, Y_assess, caches = L_model_backward_test_case()
grads = L_model_backward(AL, Y_assess, caches)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dA1 = "+ str(grads["dA1"]))
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate):
Update parameters using gradient descent
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients, output of L_model_backward
Returns:
parameters -- python dictionary containing your updated parameters
parameters["W" + str(l)] = ...
parameters["b" + str(l)] = ...
L = len(parameters) // 2 # number of layers in the neural network
# Update rule for each parameter. Use a for loop.
### START CODE HERE ### (≈ 3 lines of code)
for l in range(L):
parameters["W" + str(l+1)] = parameters["W" + str(l+1)] - learning_rate * grads["dW" + str(l+1)]
parameters["b" + str(l+1)] = parameters["b" + str(l+1)] - learning_rate * grads["db" + str(l+1)]
### END CODE HERE ###
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads, 0.1)
print ("W1 = "+ str(parameters["W1"]))
print ("b1 = "+ str(parameters["b1"]))
print ("W2 = "+ str(parameters["W2"]))
print ("b2 = "+ str(parameters["b2"]))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Linear Regression
Step2: The blue line on the plot above is the number of page views and the orange line is the CPU load that viewing these pages generates on the server.
Step3: There clearly is a strong correlation between the page views and the CPU usage. Because of this correlation we can build a model to predict the CPU usage from the total page views. If we use a linear model we get a formula like the following: $cpu\_usage = c_0 + c_1 \cdot total\_page\_views$
Step4: Now we need to feed the data to the model to fit it.
Step5: We can now inspect the coefficient $c_1$ and constant term (intercept) $c_0$ of the model
Step6: So this means that each additional page view adds about 0.11% CPU load to the server and all the other processes running on the server consume on average 0.72% CPU.
Step7: What is the expected CPU usage when we have 1000 page views per second? Is this technically possible? Why does the model predict it this way?
Step8: Now we plot the linear model together with our data to verify it captures the relationship correctly (the predict method can accept the entire total_page_views array at once).
Step9: Our model can calculate the R2 score indicating how well the linear model captures the data. A score of 1 means there is perfect linear correlation and the model can fit the data perfectly, a score of 0 (or lower) means that there is no correlation at all (and it does not make sense to try to model it that way). The score method takes the same arguments as the fit method.
Step10: 2.3 Extrapolation
Step11: Now let's plot what you have done.
Step12: Is this what you would expect? Can you see what's wrong?
Step13: The spikes of CPU usage are actually backups that run at night and they can be ignored. So repeat the exercise again but ignore these data points.
Step14: So what you should have learned from the previous exercise is that you should always look at your data and/or write scripts to inspect your data. Additionally extrapolation does not always work because there are no training examples in that area.
Step15: Let's have a look at this data.
Step16: We start again by creating a LinearRegression model.
Step17: Next we fit the model on the data, using multi_lin_model.fit(X,y). In contrast to the case above our page_views variable already has the correct shape to pass as the X matrix
Step18: Now, given the coefficients calculated by the model, which capture the contribution of each page view to the total CPU usage, we can start to answer some interesting questions. For example,
Step19: From this table we see that 'resources/js/basket.js' consumes the most CPU per view. It generates about 0.30% CPU load for each additional page view. 'products/science.html' on the other hand is much leaner and only consumes about 0.04% CPU per view. Does this seem to be correct if you look at the scatter plot above?
Step20: As you can see this term is very similar to the result achieved in single linear regression, but it is not entirely the same. This means that these models are not perfect. However, they seem to be able to give a reliable estimate.
Step21: As you can see from the R2 score, this model performs better. It can explain 91.5% of the variance instead of just 90.5% of the variance. So this gives the impression that this model is more accurate.
Step22: For our training set, we will calculate 10 y values from evenly spaced x values using this function.
Step23: Now let's try to fit a model to this data with linear regression.
Step24: As you can see this fit is not optimal.
Step25: As you can see above this function transforms $x$ into [$x^0$, $x^1$, $x^2$, $x^3$] with $x^0=1$ and $x^1 = x$. If you have 2 inputs it will also take the cross products so that [$x_1$, $x_2$] is transformed into
Step26: In this example we only have 1 input so the number of features is always the degree + 1.
Step27: Now play with the degree of the polynomial expansion function below to create better features. Search for the optimal degree.
Step28: What do you notice? When does it work better? And when does it work best?
Step29: If everything is correct your score is very close to 1. Which means that we have built a model that can fit this data (almost) perfectly.
Step30: Now let's see what this results to in the test set.
Step31: As you can clearly see, this result is not that good. Why do you think this is?
Step32: Is this what you expect?
Step33: What did you observe? And what is the method learning? And how can you avoid this?
Step34: 5. Over-fitting and Cross-Validation
Step35: Now let's train on the entire train set (including the validation set) and test this result on the test set with the following code.
Step36: As you can see this approach works to select the optimal degree. Usually the test score is lower than the validation score, but in this case it is not because the test data doesn't contain noise.
Step37: Let's plot these results in a box plot to get an idea on how well the models performed on average.
Step38: Next we will compute the best degree.
Step39: Now let's train the model on the entire train set (including the validation set) and have a look at the result.
Step40: As you can see this automatic way of selecting the optimal degree has resulted in a good fit for the sine function.
Step41: As you can see above, the result of Ridge Regression is not as good as reducing the number of features in this example. However it works a lot better than without regularisation (try that). In the example above you will notice that it makes the result a lot smoother and removes the unwanted spikes. It will actually make sure that if you have too many features you still get a reasonable result. So this means that it should be in your standard toolkit.
Step42: As you can see, the extrapolation results for non-linear regression are even worse than those of linear regression. This is because models only work well in the input space they have been trained in.
Step43: In a colored image this is easy to do, but when you remove the color it becomes much harder. Can you do the classification in the image below?
Step44: As you can see classifying is very hard to do when you don't get the answer even if you saw the solution earlier. But you will see that machine learning algorithms can solve this quite well if they can learn from examples.
Step45: Now let's plot the result.
Step46: As you can see a linear classifier returns a linear decision boundary.
Step47: If everything went well you should get a validation/test accuracy very close to 0.8.
Step48: As you can see they are quite powerful right out of the box without any parameter tuning. But we can make the results even better with some fine-tuning.
Step49: The min_samples_leaf parameter sets the number of data points that can create a new branch/leaf in the tree. So in practice it limits the depth of the decision tree. The bigger this parameter is, the less deep the tree will be and less likely each tree will over-fit.
Step50: As you can see increasing the number of estimators improves the model and reduces over-fitting. This parameter actually sets the number of trees in the random forest. The more trees there are in the forest the better the result is. But obviously it requires more computing power so that is the limiting factor here.
Step51: As you have noticed by now it seems that random forests are less powerful than linear regression with polynomial feature extraction. This is because these polynomials are ideally suited for this task. This also means that you could get a better result if you would also apply polynomial expansion for random forests. Try that below.
Step52: As you may have noticed, it is hard to get results that are better than the ones obtained using logistic regression. This illustrates that linear techniques are very powerful and often underrated. But in some situations they are not powerful enough and you need something stronger like a random forest or even neural networks (check this simulator if you want to play with the latter).
Step53: In the graph above you can clearly see that there is a rising trend in the data.
Step54: As you can see from the score above, the model is not perfect but it seems to get a relatively high score. Now let's make a prediction into the future and plot this.
Step55: As you can see from the image above the model doesn't quite seem to fit the data well. Let's see how we can improve this.
Step56: As you can see from the print above, both x_train and y_train contain 303 data points. For x_train you see that there are now 5 features which contain the page views from the past 5 hours.
Step57: Now change the width parameter to see if you can get a better score.
Step58: As you can see in the image above the prediction is not what you would expect from a perfect model. What happened is that the model learned the training data by heart without 'understanding' what the data is really about. This phenomenon is called over-fitting and will always occur if you make your model too complex.
Step59: As you will have noticed by now, it is better to have a non-perfect score, which will give you a much better outcome. Now try the same thing for the following models
Step60: If everything is correct, the LassoCV method was selected.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (13.0, 8.0)
%matplotlib inline
import pickle
import sklearn
import sklearn.linear_model
import sklearn.preprocessing
import sklearn.gaussian_process
import sklearn.ensemble
import pickle # Pickle files allow us to easily save and load python objects.
with open('data/cpu_page_views.pickle', 'rb') as file:
cpu_usage, page_views, page_names, total_page_views = pickle.load(file, encoding='latin1')
print('Array shapes:')
print('-'*25)
print(f'cpu_usage\t {cpu_usage.shape}')
print(f'page_views\t {page_views.shape}')
print(f'page_names\t {page_names.shape}')
print(f'total_page_views {total_page_views.shape}')
plt.figure(figsize=(13, 6))
plt.plot(total_page_views, label='Total page views')
plt.plot(cpu_usage, label='CPU %')
plt.legend()
plt.show()
plt.figure(figsize=(13, 6))
plt.xlabel("Total page views")
plt.ylabel("CPU usage")
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# plt.scatter( ? , ? )
plt.show()
import sklearn.linear_model
simple_lin_model = sklearn.linear_model.LinearRegression()
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# simple_lin_model.fit( ? , ? )
print(f"Coefficient = {simple_lin_model.coef_[0]:.2f}\nConstant term = {simple_lin_model.intercept_:.2f}")
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# simple_lin_model.predict( [[ ? ]] )
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# simple_lin_model.predict( [[ ? ]] )
plt.figure(figsize=(13, 6))
plt.scatter(total_page_views, cpu_usage, color='black')
plt.plot(total_page_views, simple_lin_model.predict(total_page_views.reshape((-1, 1))), color='blue', linewidth=3)
plt.xlabel("Total page views")
plt.ylabel("CPU usage")
plt.show()
R2 = simple_lin_model.score(total_page_views.reshape((-1, 1)), cpu_usage)
print(f'R2 = {R2:.3f}')
with open('data/cpu_page_views_2.pickle', 'rb') as file:
cpu_usage, total_page_views = pickle.load(file, encoding='latin1')
print('Array shapes:')
print('-'*25)
print(f'cpu_usage\t {cpu_usage.shape}')
print(f'total_page_views {total_page_views.shape}')
simple_lin_model = sklearn.linear_model.LinearRegression()
simple_lin_model.fit(total_page_views, cpu_usage)
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# prediction = simple_lin_model.predict(?)
print(f'The predicted value is: {prediction}')
assert prediction < 25
all_page_views = np.concatenate((total_page_views, [[8]]))
plt.figure(figsize=(13, 6))
plt.scatter(total_page_views, cpu_usage, color='black')
plt.plot(all_page_views, simple_lin_model.predict(all_page_views), color='blue', linewidth=3)
plt.axvline(8, color='r')
plt.xlabel("Total page views")
plt.ylabel("CPU usage")
plt.show()
plt.figure(figsize=(16, 5))
plt.plot(total_page_views, label='Total page views')
plt.plot(cpu_usage, label='CPU %')
plt.legend()
plt.show()
x = np.array([1, 2, 3])
selection = np.array([True, False, True])
x[selection]
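# A boolean array of the same length acts as a mask: only the entries where it is True are kept
# (here the first and third elements of x), which is how we will filter out unwanted data points.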
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# selection = ?
assert selection.dtype == np.dtype('bool'), 'The selection variable should be an array of True/False values'
assert len(selection) == len(total_page_views)
simple_lin_model = sklearn.linear_model.LinearRegression()
simple_lin_model.fit(total_page_views[selection], cpu_usage[selection])
prediction = simple_lin_model.predict([[8]])
print(f'The predicted value is: {prediction}')
all_page_views = np.concatenate((total_page_views, [[8]]))
plt.figure(figsize=(13, 6))
plt.scatter(total_page_views, cpu_usage, c=selection, cmap='RdYlGn')
plt.plot(all_page_views, simple_lin_model.predict(all_page_views), color='blue', linewidth=3)
plt.axvline(8, color='r')
plt.xlabel("Total page views")
plt.ylabel("CPU usage")
plt.show()
assert prediction > 23
# load the data
with open('data/cpu_page_views.pickle', 'rb') as file:
cpu_usage, page_views, page_names, total_page_views = pickle.load(file, encoding='latin1')
print('Array shapes:')
print('-'*25)
print(f'cpu_usage\t {cpu_usage.shape}')
print(f'page_views\t {page_views.shape}')
print(f'page_names\t {page_names.shape}')
print(f'total_page_views {total_page_views.shape}\n')
print(page_names)
plt.figure(figsize=(13, 6))
for i in range(len(page_names)):
plt.plot(page_views[:,i], label=page_names[i])
plt.plot(cpu_usage, label= 'CPU %')
plt.legend()
plt.show()
plt.figure(figsize=(13, 6))
for i in range(len(page_names)):
plt.scatter(page_views[:,i], cpu_usage, label=page_names[i])
plt.xlabel("Page views")
plt.ylabel("CPU usage")
plt.legend()
plt.show()
multi_lin_model = sklearn.linear_model.LinearRegression()
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# multi_lin_model.fit( ? , ? )
# Some quick and dirty code to print the most consuming pages first
print('Index\tCPU (%)\t Page')
print('-'*41)
indices = np.argsort(-multi_lin_model.coef_)
for i in indices:
print(f"{i}\t{ multi_lin_model.coef_[i]:4.2f}\t {page_names[i]}")
print(f'The other processes on the server consume {multi_lin_model.intercept_:.2f}%')
R2 = multi_lin_model.score(page_views, cpu_usage)
print(f'R2 = {R2:.3f}')
x = np.arange(0,6, 0.01).reshape((-1, 1))
plt.figure(figsize=(13, 6))
plt.plot(x, np.sin(x))
plt.show()
# helper function to generate the data
def sine_train_data():
x_train = np.linspace(0, 6, 10).reshape((-1, 1))
y_train = np.sin(x_train)
return x_train, y_train
x_train, y_train = sine_train_data()
plt.figure(figsize=(13, 6))
plt.scatter(x_train, y_train)
plt.show()
x_train, y_train = sine_train_data()
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# model = ?
# model.fit( ? )
print(f'The R2 score of this model is: {model.score(x_train, y_train):.3}')
plt.figure(figsize=(13, 6))
plt.scatter(x_train, y_train)
plt.plot(x, model.predict(x))
plt.show()
import sklearn.preprocessing
x = [[2]]
pol_exp = sklearn.preprocessing.PolynomialFeatures(degree=3)
pol_exp.fit_transform(x)
x = [[2, 3]]
pol_exp = sklearn.preprocessing.PolynomialFeatures(degree=3)
pol_exp.fit_transform(x)
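# With two inputs and degree=3 the expansion returns the columns
# [1, x1, x2, x1^2, x1*x2, x2^2, x1^3, x1^2*x2, x1*x2^2, x2^3]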
x_train, y_train = sine_train_data()
pol_exp = sklearn.preprocessing.PolynomialFeatures(degree=3)
pol_exp.fit_transform(x_train)
x_train, y_train = sine_train_data()
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# pol_exp = sklearn.preprocessing.PolynomialFeatures(degree= ? )
model = sklearn.linear_model.LinearRegression()
model.fit(pol_exp.fit_transform(x_train), y_train)
train_score = model.score(pol_exp.fit_transform(x_train), y_train)
print(f'The R2 score of this model is: {train_score:.6f}')
plt.figure(figsize=(13, 6))
plt.scatter(x_train, y_train)
x = np.arange(0,6, 0.01).reshape((-1, 1))
plt.plot(x, model.predict(pol_exp.fit_transform(x)))
plt.show()
def sine_test_data():
x_test = 0.5 + np.arange(6).reshape((-1, 1))
y_test = np.sin(x_test)
return x_test, y_test
assert train_score > .99999, 'Adjust the degree parameter 2 cells above until the train_score > .99999'
x_test, y_test = sine_test_data()
plt.figure(figsize=(13, 6))
plt.scatter(x_train, y_train, label='train')
plt.scatter(x_test, y_test, color='r', label='test')
plt.legend()
x = np.arange(0, 6, 0.01).reshape((-1, 1))
plt.plot(x, model.predict(pol_exp.fit_transform(x)))
plt.show()
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# test_score = model.score( ? )
print(f'The R2 score of the model on the test set is: {test_score:.3f}')
assert test_score > 0.99
# a helper function to create the sine train set that can also add noise to the data
def noisy_sine_train_data(noise=None):
x_train = np.linspace(0, 6, 10).reshape((-1, 1))
y_train = np.sin(x_train)
# If fixed, set the random seed so that the next call of the
# random function always returns the same result
if noise == 'fixed':
np.random.seed(1)
x_train += np.random.randn(len(x_train)).reshape((-1, 1)) / 5
return x_train, y_train
x_train, y_train = noisy_sine_train_data(noise='fixed')
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# pol_exp = sklearn.preprocessing.PolynomialFeatures(degree= ? )
model = sklearn.linear_model.LinearRegression()
model.fit(pol_exp.fit_transform(x_train), y_train)
train_score = model.score(pol_exp.fit_transform(x_train), y_train)
print(f'The R2 score of this method on the train set is {train_score:.3f}')
assert train_score > 0.99
x_test, y_test = sine_test_data()
print(f'The R2 score of the model on the test set is: {model.score(pol_exp.fit_transform(x_test), y_test):.3f}')
plt.figure(figsize=(13, 6))
plt.scatter(x_train, y_train)
x = np.arange(0,6, 0.01).reshape((-1, 1))
plt.plot(x, model.predict(pol_exp.fit_transform(x)))
plt.show()
x_train, y_train = noisy_sine_train_data()
pol_exp = sklearn.preprocessing.PolynomialFeatures(degree=9)
model = sklearn.linear_model.LinearRegression()
model.fit(pol_exp.fit_transform(x_train), y_train)
print(f'The R2 score of this method on the train set is {model.score(pol_exp.fit_transform(x_train), y_train):.3f}')
plt.figure(figsize=(13, 6))
plt.scatter(x_train, y_train)
x = np.arange(x_train[0], x_train[-1], 0.01).reshape((-1, 1))
plt.plot(x, model.predict(pol_exp.fit_transform(x)))
plt.show()
x_train, y_train = noisy_sine_train_data(noise='fixed')
x_test, y_test = sine_test_data()
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# pol_exp = ?
# model = ?
# model.fit( ? )
print(f'The score of this method on the train set is: {model.score(pol_exp.fit_transform(x_train), y_train):.3f}')
plt.figure(figsize=(13, 6))
plt.scatter(x_train, y_train, label='train')
plt.scatter(x_test, y_test, color='r', label='test')
x = np.arange(0,6, 0.01).reshape((-1, 1))
plt.plot(x, model.predict(pol_exp.fit_transform(x)))
plt.legend()
plt.show()
test_score = model.score(pol_exp.fit_transform(x_test), y_test)
print(f'The score of the model on the test set is: {test_score:.3f}')
assert test_score > 0.99, 'Adjust the degree parameter until test_score > 0.99'
# create the data in case you skipped the previous exercise
# a helper function to create the sine train set that can also add noise to the data
def noisy_sine_train_data(noise=None):
x_train = np.linspace(0, 6, 10).reshape((-1, 1))
y_train = np.sin(x_train)
# If fixed, set the random seed so that the next call of the
# random function always returns the same result
if noise == 'fixed':
np.random.seed(1)
x_train += np.random.randn(len(x_train)).reshape((-1, 1)) / 5
return x_train, y_train
def sine_test_data():
x_test = 0.5 + np.arange(6).reshape((-1, 1))
y_test = np.sin(x_test)
return x_test, y_test
x_train, y_train = noisy_sine_train_data(noise='fixed')
# we randomly pick 3 data points to get a nice validation set
train_i = [0, 1, 3, 4, 6, 7, 9]
val_i = [2, 5, 8]
# create the train and validation sets
x_train_i = x_train[train_i, :]
y_train_i = y_train[train_i]
x_val_i = x_train[val_i, :]
y_val_i = y_train[val_i]
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# pol_exp = sklearn.preprocessing.PolynomialFeatures(degree= ? )
model = sklearn.linear_model.LinearRegression()
model.fit(pol_exp.fit_transform(x_train_i), y_train_i)
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# train_score = model.score( ? )
# validation_score = model.score( ? )
print(f'The R2 score of this model on the train set is: {train_score:.3f}')
print(f'The R2 score of this model on the validation set is: {validation_score:.3f}')
assert pol_exp.degree < 5, 'Select a polynomial degree < 5'
model = sklearn.linear_model.LinearRegression()
model.fit(pol_exp.fit_transform(x_train), y_train)
x_test, y_test = sine_test_data()
plt.figure(figsize=(13, 6))
plt.scatter(x_train, y_train, label='train')
plt.scatter(x_test, y_test, color='r', label='test')
plt.legend()
x = np.arange(0,6, 0.01).reshape((-1, 1))
plt.plot(x, model.predict(pol_exp.fit_transform(x)))
plt.show()
print(f'The score of the model on the test set is: {model.score(pol_exp.fit_transform(x_test), y_test):.3f}')
x_train, y_train = noisy_sine_train_data(noise='fixed')
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# results = np.inf * np.ones(( ? , ?))
# The results array should have a shape of "the number of data points" x "the number of polynomial degrees to try"
# The ones are multiplied with a very large number, np.inf, since we are looking for the smallest error
# for i in range( ? ):
train_i = np.where(np.arange(10) != i)[0]
x_train_i = x_train[train_i, :]
y_train_i = y_train[train_i]
x_val_i = x_train[i:i+1, :]
y_val_i = y_train[i:i+1]
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# for degree in range(?):
# pol_exp = sklearn.preprocessing.PolynomialFeatures(degree= ? )
model = sklearn.linear_model.LinearRegression()
model.fit(pol_exp.fit_transform(x_train_i), y_train_i)
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# Fill out the results for each validation set and each degree in the results matrix
# results[ ? ] = sklearn.metrics.mean_squared_error(model.predict(pol_exp.fit_transform(x_val_i)), y_val_i)
max_degree = 10
plt.boxplot(results[:, : max_degree])
plt.xticks(range(1, max_degree + 1), range(max_degree))
plt.xlabel('Polynomial degree')
plt.ylabel('Mean Squared Error')
plt.show()
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# average the results over all validation sets
# average_results = np.mean(results, axis= ? )
# find the optimal degree
# degree = np.argmin( ? )
print(f'The optimal degree for the polynomials is: {degree}')
assert degree == 3
pol_exp = sklearn.preprocessing.PolynomialFeatures(degree=degree)
model = sklearn.linear_model.LinearRegression()
model.fit(pol_exp.fit_transform(x_train), y_train)
print(f'The score of this method on the train set is: {model.score(pol_exp.fit_transform(x_train), y_train):.3f}')
plt.figure(figsize=(13, 6))
plt.scatter(x_train, y_train, label='train')
plt.scatter(x_test, y_test, color='r', label='test')
plt.legend()
x = np.arange(0,6, 0.01).reshape((-1, 1))
plt.plot(x, model.predict(pol_exp.fit_transform(x)))
plt.show()
print(f'The score of the model on the test set is: {model.score(pol_exp.fit_transform(x_test), y_test):.3f}')
x_train, y_train = noisy_sine_train_data(noise='fixed')
pol_exp = sklearn.preprocessing.PolynomialFeatures(degree=9)
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# model = sklearn.linear_model. ?
model.fit(pol_exp.fit_transform(x_train), y_train)
print(f'The R2 score of this method on the train set is: {model.score(pol_exp.fit_transform(x_train), y_train):.3f}')
plt.figure(figsize=(13,8))
plt.scatter(x_train, y_train, label='train')
plt.scatter(x_test, y_test, color='r', label='test')
plt.legend()
x = np.arange(0,6, 0.01).reshape((-1, 1))
plt.plot(x, model.predict(pol_exp.fit_transform(x)))
plt.show()
print(f'The R2 score of the model on the test set is: {model.score(pol_exp.fit_transform(x_test), y_test):.3f}')
x_train, y_train = noisy_sine_train_data(noise='fixed')
pol_exp = sklearn.preprocessing.PolynomialFeatures(degree=3)
model = sklearn.linear_model.RidgeCV()
model.fit(pol_exp.fit_transform(x_train), y_train)
print('The R2 score of this method on the train set is:',
f'{model.score(pol_exp.fit_transform(x_train), y_train):.3f}')
# Now test outside the area of the training
x_test_extended = np.array([-3,-2,-1,7,8,9]).reshape((-1, 1))
y_test_extended = np.sin(x_test_extended)
plt.figure(figsize=(13, 8))
plt.scatter(x_train, y_train, label='train')
plt.scatter(x_test_extended, y_test_extended, color='r', label='test')
plt.legend()
x = np.arange(-4,10, 0.01).reshape((-1, 1))
plt.plot(x, model.predict(pol_exp.fit_transform(x)))
plt.show()
print('The R2 score of the model on the test set outside the area used for training is:',
f'{model.score(pol_exp.fit_transform(x_test_extended), y_test_extended):.3f}')
# Some code to generate spirals. You can ignore this for now.
# To comply with standards in machine learning we use x1 and x2 as opposed to x and y for this graph
# because y is reserved for the output in Machine Learning (= 0 or 1 in this case)
r = np.arange(0.1, 1.5, 0.0001)
theta = 2 * np.pi * r
x1_0 = r * np.cos(theta)
x2_0 = r * np.sin(theta)
x1_1 = - r * np.cos(theta)
x2_1 = - r * np.sin(theta)
perm_indices = np.random.permutation(range(len(x1_0)))
x1_0_rand = x1_0[perm_indices[ : 1000]] + np.random.randn(1000) / 5
x2_0_rand = x2_0[perm_indices[ : 1000]] + np.random.randn(1000) / 5
x1_1_rand = x1_1[perm_indices[1000 : 2000]] + np.random.randn(1000) / 5
x2_1_rand = x2_1[perm_indices[1000 : 2000]] + np.random.randn(1000) / 5
plt.figure(figsize=(8, 8))
plt.scatter(x1_0_rand, x2_0_rand, color = 'b', alpha=0.6, linewidth=0)
plt.scatter(x1_1_rand, x2_1_rand, color = 'r', alpha=0.6, linewidth=0)
plt.plot(x1_0, x2_0, color = 'b', lw=3)
plt.plot(x1_1, x2_1, color='r', lw=3)
plt.xlim(-2, 2)
plt.ylim(-2, 2)
plt.xlabel('X1')
plt.ylabel('X2')
plt.show()
# Create a train and validation set
x_train_0 = np.concatenate((x1_0_rand[ : 800].reshape((-1,1)), x2_0_rand[ : 800].reshape((-1,1))), axis=1)
y_train_0 = np.zeros((len(x_train_0),))
x_train_1 = np.concatenate((x1_1_rand[ : 800].reshape((-1,1)), x2_1_rand[ : 800].reshape((-1,1))), axis=1)
y_train_1 = np.ones((len(x_train_1),))
x_val_0 = np.concatenate((x1_0_rand[800 : ].reshape((-1,1)), x2_0_rand[800 : ].reshape((-1,1))), axis=1)
y_val_0 = np.zeros((len(x_val_0),))
x_val_1 = np.concatenate((x1_1_rand[800 : ].reshape((-1,1)), x2_1_rand[800 : ].reshape((-1,1))), axis=1)
y_val_1 = np.ones((len(x_val_1),))
x_train = np.concatenate((x_train_0, x_train_1), axis=0)
y_train = np.concatenate((y_train_0, y_train_1), axis=0)
x_val = np.concatenate((x_val_0, x_val_1), axis=0)
y_val = np.concatenate((y_val_0, y_val_1), axis=0)
# Plot the train and test data
plt.figure(figsize=(8, 8))
plt.scatter(x_train[:, 0], x_train[:, 1], color='k', alpha=0.6, linewidth=0)
plt.scatter(x_val[:, 0], x_val[:, 1], color='y', alpha=0.6, linewidth=0)
plt.xlim(-2, 2)
plt.ylim(-2, 2)
plt.show()
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# model = sklearn.linear_model. ?
# model.fit( ? )
train_score = sklearn.metrics.accuracy_score(model.predict(x_train), y_train)
print(f'The train accuracy is: {train_score:.3f}')
val_score = sklearn.metrics.accuracy_score(model.predict(x_val), y_val)
print(f'The validation accuracy is: {val_score:.3f}')
assert val_score > 0.5
# A quick and dirty helper function to plot the decision boundaries
def plot_decision_boundary(model, pol_exp=None):
n=250
lin_space = np.linspace(-2, 2, num=n).reshape((-1, 1))
x1 = np.dot(lin_space, np.ones((1, n))).reshape((-1, 1))
x2 = np.dot(np.ones((n, 1)), lin_space.T).reshape((-1, 1))
x = np.concatenate((x1, x2), axis=1)
if pol_exp is None:
y = model.predict(x)
else:
y = model.predict(pol_exp.fit_transform(x))
i_0 = np.where(y < 0.5)
i_1 = np.where(y > 0.5)
plt.figure(figsize=(8,8))
plt.scatter(x[i_0, 0], x[i_0, 1], color='b', s=2, alpha=0.5, linewidth=0, marker='s')
plt.scatter(x[i_1, 0], x[i_1, 1], color='r',s=2, alpha=0.5, linewidth=0, marker='s')
plt.plot(x1_0, x2_0, color = 'b', lw=3)
plt.plot(x1_1, x2_1, color='r', lw=3)
plt.xlim(-2, 2)
plt.ylim(-2, 2)
# Call the function
plot_decision_boundary(model)
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# model = sklearn.linear_model. ?
# pol_exp = sklearn.preprocessing.PolynomialFeatures(degree= ? )
# model.fit( ? )
train_score = sklearn.metrics.accuracy_score(model.predict(pol_exp.fit_transform(x_train)), y_train)
print(f'The train accuracy is: {train_score:.3f}')
val_score = sklearn.metrics.accuracy_score(model.predict(pol_exp.fit_transform(x_val)), y_val)
print(f'The validation accuracy is: {val_score:.3f}')
plot_decision_boundary(model, pol_exp=pol_exp)
assert val_score >= 0.8
import sklearn.ensemble
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# model = ?
# model.fit( ? )
train_score = sklearn.metrics.accuracy_score(model.predict(x_train), y_train)
print(f'The train accuracy is: {train_score:.3f}')
val_score = sklearn.metrics.accuracy_score(model.predict(x_val), y_val)
print(f'The validation accuracy is: {val_score:.3f}')
plot_decision_boundary(model)
assert val_score > 0.7
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# model = sklearn.ensemble.RandomForestClassifier(min_samples_leaf= ? )
model.fit(x_train, y_train)
train_score = sklearn.metrics.accuracy_score(model.predict(x_train), y_train)
print(f'The train accuracy is: {train_score:.3f}')
val_score = sklearn.metrics.accuracy_score(model.predict(x_val), y_val)
print(f'The validation accuracy is: {val_score:.3f}')
plot_decision_boundary(model)
assert val_score > 0.5
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# model = sklearn.ensemble.RandomForestClassifier(n_estimators= ? )
model.fit(x_train, y_train)
train_score = sklearn.metrics.accuracy_score(model.predict(x_train), y_train)
print(f'The train accuracy is: {train_score:.3f}')
val_score = sklearn.metrics.accuracy_score(model.predict(x_val), y_val)
print(f'The validation accuracy is: {val_score:.3f}')
plot_decision_boundary(model)
assert val_score > 0.7
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# model = sklearn.ensemble.RandomForestClassifier(n_estimators= ? , min_samples_leaf= ? )
model.fit(x_train, y_train)
train_score = sklearn.metrics.accuracy_score(model.predict(x_train), y_train)
print(f'The train accuracy is: {train_score:.3f}')
val_score = sklearn.metrics.accuracy_score(model.predict(x_val), y_val)
print(f'The validation accuracy is: {val_score:.3f}')
plot_decision_boundary(model)
assert val_score > 0.7
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# model = sklearn.ensemble.RandomForestClassifier(n_estimators= ? , min_samples_leaf= ? )
# pol_exp = sklearn.preprocessing.PolynomialFeatures(degree= ?)
# model.fit( ? )
train_score = sklearn.metrics.accuracy_score(model.predict(pol_exp.fit_transform(x_train)), y_train)
print(f'The train accuracy is: {train_score:.3f}')
val_score = sklearn.metrics.accuracy_score(model.predict(pol_exp.fit_transform(x_val)), y_val)
print(f'The validation accuracy is: {val_score:.3f}')
plot_decision_boundary(model, pol_exp=pol_exp)
assert val_score > 0.7
with open('data/train_set_forecasting.pickle', 'rb') as file:
train_set = pickle.load(file, encoding='latin1')
print(f'Shape of the train set = {train_set.shape}')
plt.figure(figsize=(20,4))
plt.plot(train_set)
plt.show()
import sklearn
import sklearn.linear_model
import sklearn.gaussian_process
model = sklearn.linear_model.LinearRegression()
# the input x_train contains all the data except the last data point
x_train = train_set[ : -1].reshape((-1, 1)) # the reshape is necessary since sklearn requires a 2 dimensional array
# the output y_train contains all the data except the first data point
y_train = train_set[1 : ]
# this code fits the model on the train data
model.fit(x_train, y_train)
# this score gives you how well it fits on the train set
# higher is better and 1.0 is perfect
print(f'The R2 train score of the linear model is {model.score(x_train, y_train):.3f}')
n_predictions = 100
import copy
# use the last data point as the first input for the predictions
x_test = copy.deepcopy(train_set[-1]) # make a copy to avoid overwriting the training data
prediction = []
for i in range(n_predictions):
# predict the next data point
y_test = model.predict([[x_test]])[0] # sklearn requires a 2 dimensional array and returns a one-dimensional one
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# prediction.append( ? )
# x_test = ?
prediction = np.array(prediction)
plt.figure(figsize=(20,4))
plt.plot(np.concatenate((train_set, prediction)), 'g')
plt.plot(train_set, 'b')
plt.show()
def convert_time_series_to_train_data(ts, width):
x_train, y_train = [], []
for i in range(len(ts) - width - 1):
x_train.append(ts[i : i + width])
y_train.append(ts[i + width])
return np.array(x_train), np.array(y_train)
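# For example (illustrative): with width=3 and ts=[1, 2, 3, 4, 5, 6], x_train becomes
# [[1, 2, 3], [2, 3, 4]] and y_train becomes [4, 5] (the value right after each window).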
width = 5
x_train, y_train = convert_time_series_to_train_data(train_set, width)
print(x_train.shape, y_train.shape)
width = 5
x_train, y_train = convert_time_series_to_train_data(train_set, width)
model = sklearn.linear_model.LinearRegression()
model.fit(x_train, y_train)
print(f'The R2 score of the linear model with width={width} is {model.score(x_train, y_train):.3f}')
import copy
# this is a helper function to make the predictions
def predict(model, train_set, width, n_points):
prediction = []
# create the input data set for the first predicted output
# copy the data to make sure the original is not overwritten
x_test = copy.deepcopy(train_set[-width : ])
for i in range(n_points):
# predict only the next data point
prediction.append(model.predict(x_test.reshape((1, -1))))
# use the newly predicted data point as input for the next prediction
x_test[0 : -1] = x_test[1 : ]
x_test[-1] = prediction[-1]
return np.array(prediction)
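# Note that this is a recursive multi-step forecast: each predicted value is fed back in as input
# for the next step, so prediction errors can accumulate over the forecast horizon.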
n_predictions = 200
prediction = predict(model, train_set, width, n_predictions)
plt.figure(figsize=(20,4))
plt.plot(np.concatenate((train_set, prediction[:,0])), 'g')
plt.plot(train_set, 'b')
plt.show()
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# width = ?
x_train, y_train = convert_time_series_to_train_data(train_set, width)
model = sklearn.linear_model.LinearRegression()
model.fit(x_train, y_train)
print(f'The R2 score of the linear model with width={width} is {model.score(x_train, y_train):.3f}')
prediction = predict(model, train_set, width, 200)
plt.figure(figsize=(20,4))
plt.plot(np.concatenate((train_set, prediction[:,0])), 'g')
plt.plot(train_set, 'b')
plt.show()
assert width > 1
model_generators = [sklearn.linear_model.LinearRegression(),
sklearn.linear_model.RidgeCV(cv=3),
sklearn.linear_model.LassoCV(cv=3),
sklearn.ensemble.RandomForestRegressor(n_estimators=10)]
best_score = 0
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# for model_gen in ? :
# for width in range( ? , ? ):
x_train, y_train = convert_time_series_to_train_data(train_set, width)
# train the model on the first 48 hours
x_train_i, y_train_i = x_train[ : -48, :], y_train[ : -48]
# use the last 48 hours for validation
x_val_i, y_val_i = x_train[-48 : ], y_train[-48 : ]
# there is a try except clause here because some models do not converge for some data
try:
# Constructs a new, untrained, model with the same parameters
model = sklearn.base.clone(model_gen, safe=True)
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# model.fit( ? , ? )
# this_score = ?
if this_score > best_score:
best_score = this_score
# Constructs a new, untrained, model with the same parameters
best_model = sklearn.base.clone(model, safe=True)
best_width = width
except:
pass
print(f'{best_model.__class__.__name__} was selected as the best model with a width of {best_width}',
f'and a validation R2 score of {best_score:.3f}')
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# width = ?
# model = ?
x_train, y_train = convert_time_series_to_train_data(train_set, width)
##### Implement this part of the code #####
raise NotImplementedError("Code not implemented, follow the instructions.")
# model.fit( ? , ? )
n_predictions = 200
prediction = predict(model, train_set, width, n_predictions)
plt.figure(figsize=(20,4))
plt.plot(np.concatenate((train_set, prediction[:,0])), 'g')
plt.plot(train_set, 'b')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1:
Step2:
Step3:
Step4:
Step5:
Step6: Let us look at a typical row from our features
Step7: Let us also display our target values
Step8:
Step9:
Step10:
Step11:
Step12:
Step13:
Step14:
Step15:
Step16:
|
<ASSISTANT_TASK:>
Python Code:
import sqlite3
import pandas as pd
from sklearn.tree import DecisionTreeRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
from math import sqrt
# Create your connection.
cnx = sqlite3.connect(r'database.sqlite')
df = pd.read_sql_query("SELECT * FROM Player_Attributes", cnx)
df.head()
df.shape
df.columns
features = [
'potential', 'crossing', 'finishing', 'heading_accuracy',
'short_passing', 'volleys', 'dribbling', 'curve', 'free_kick_accuracy',
'long_passing', 'ball_control', 'acceleration', 'sprint_speed',
'agility', 'reactions', 'balance', 'shot_power', 'jumping', 'stamina',
'strength', 'long_shots', 'aggression', 'interceptions', 'positioning',
'vision', 'penalties', 'marking', 'standing_tackle', 'sliding_tackle',
'gk_diving', 'gk_handling', 'gk_kicking', 'gk_positioning',
'gk_reflexes']
target = ['overall_rating']
df = df.dropna()
X = df[features]
y = df[target]
X.iloc[2]
y
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=324)
regressor = LinearRegression()
regressor.fit(X_train, y_train)
y_prediction = regressor.predict(X_test)
y_prediction
y_test.describe()
RMSE = sqrt(mean_squared_error(y_true = y_test, y_pred = y_prediction))
print(RMSE)
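# The RMSE is expressed in the same units as overall_rating (rating points), so lower is better;
# comparing it with the spread reported by y_test.describe() gives a feel for the model quality.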
regressor = DecisionTreeRegressor(max_depth=20)
regressor.fit(X_train, y_train)
y_prediction = regressor.predict(X_test)
y_prediction
y_test.describe()
RMSE = sqrt(mean_squared_error(y_true = y_test, y_pred = y_prediction))
print(RMSE)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Visualize the steering dataset. We can see there are only about 8000 labelled samples, and the labels apply only to the center image, so we can also label the left and right images to obtain more data for the training stage. We can also see that most steering values lie between -0.25 and 0.25.
Step2: Set the hyper-parameters here, e.g. the initial learning rate and the number of training and validation samples.
Step3: Load some images for visualization purposes.
Step4: Analyze those images and steering values; we label the left image as Y[center] + 0.25 and the right image as Y[center] - 0.25.
Step5: Build the network, using a Lambda layer to normalize the data.
Step6: A batch generator is used here to produce training data on the fly rather than storing the whole training set in memory; for the data augmentation used here, please see util.py.
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
driving_log = pd.read_csv(PATH+data_csv, names = None)
print(driving_log.shape)
driving_log.head()
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline
plt.figure(figsize=(20,10));
driving_log['steering'].plot()
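# A histogram (illustrative addition) shows the distribution of steering angles more directly than
# the raw time series; most values fall between -0.25 and 0.25 as noted in the description.
plt.figure(figsize=(10, 5))
driving_log['steering'].hist(bins=50)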
#nb_epoch = 6
number_of_samples_per_epoch = 25728
number_of_validation_samples = 6432
learning_rate = 1e-4
activation_relu = 'elu'
#read train & validate image from dataset
def load_image(X):
image = plt.imread(PATH+X.strip(' '))
return image
downsamples = 10
centerImgs = np.array([load_image(imgLoc) for imgLoc in driving_log['center'][0:downsamples]], dtype=np.float32)
leftImgs = np.array([load_image(imgLoc) for imgLoc in driving_log['left'][0:downsamples]], dtype=np.float32)
rightImgs = np.array([load_image(imgLoc) for imgLoc in driving_log['right'][0:downsamples]], dtype=np.float32)
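# A minimal sketch of the side-camera labelling described above (the +/- 0.25 offset is taken from
# the description, not from this notebook's later training code):
y_center = driving_log['steering'][0:downsamples].values
y_left = y_center + 0.25
y_right = y_center - 0.25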
_, ax = plt.subplots( 10, 3, figsize=(20, 10))
col, row = 0, 0
print( )
for i in range(downsamples):
centerImg = centerImgs[i]
leftImg = leftImgs[i]
rightImg = rightImgs[i]
ax[i, 0].imshow(leftImg)
ax[i, 0].get_xaxis().set_ticks([])
ax[i, 0].get_yaxis().set_ticks([])
ax[i, 1].imshow(centerImg)
ax[i, 1].get_xaxis().set_ticks([])
ax[i, 1].get_yaxis().set_ticks([])
ax[i, 1].text(3, -5, driving_log['steering'][i], fontsize=9)
ax[i, 2].imshow(rightImg)
ax[i, 2].get_xaxis().set_ticks([])
ax[i, 2].get_yaxis().set_ticks([])
plt.show()
# Keras imports (1.x-style API, matching Convolution2D/border_mode/subsample below)
from keras.models import Sequential
from keras.layers import Lambda, Cropping2D, Convolution2D, MaxPooling2D, \
    Activation, Dropout, Flatten, Dense
from keras.layers.normalization import BatchNormalization
from keras.optimizers import Adam

def Nets_Build():
    # NVIDIA self-driving car model used here (modified)
model = Sequential()
#model.add(Cropping2D(cropping=((100,40), (0,0)), input_shape=(160,320,3)))
#Normalize
model.add(Lambda(lambda x: x / 127.5 - 1.0, input_shape=(160, 320, 3)))
#BatchNormalization here
model.add(BatchNormalization(epsilon=0.001, mode=2, axis=1, input_shape=(160, 320, 3)))
    # First, convolve the images
model.add(Convolution2D(3, 1, 1,border_mode='same', subsample=(2, 2)))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(1, 1)))
#Then we crop the filter maps
model.add(Cropping2D(cropping=((25,5),(0,0))))#75,25, 50,10
# start convolution layers here
model.add(Convolution2D(36, 3, 3,border_mode='same', subsample=(2, 2)))
model.add(Activation(activation_relu))
#model.add(Dropout(0.5))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(1, 1)))
model.add(Convolution2D(48, 3, 3,border_mode='same', subsample=(2, 2)))
model.add(Activation(activation_relu))
#model.add(Dropout(0.5))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(1, 1)))
model.add(Convolution2D(48, 3, 3,border_mode='same', subsample=(2, 2)))
model.add(Activation(activation_relu))
model.add(Dropout(0.5))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(1, 1)))
model.add(Convolution2D(64, 3, 3,border_mode='same', subsample=(1, 1)))
model.add(Activation(activation_relu))
model.add(Dropout(0.5))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(1, 1)))
model.add(Convolution2D(64, 3, 3,border_mode='same', subsample=(1, 1)))
model.add(Activation(activation_relu))
model.add(Dropout(0.5))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(1, 1)))
# use fcn to reduce the parameters, so replace the fc layers
model.add(Convolution2D(96, 3, 3,border_mode='same', subsample=(1, 1)))
model.add(Activation(activation_relu))
model.add(Convolution2D(96, 3, 3,border_mode='same', subsample=(1, 1)))
model.add(Activation(activation_relu))
model.add(Convolution2D(50, 1, 1,border_mode='same', subsample=(1, 1)))
model.add(Activation(activation_relu))
model.add(Convolution2D(10, 1, 1,border_mode='same', subsample=(1, 1)))
model.add(Activation(activation_relu))
model.add(Flatten())
# Next, five fully connected layers
#model.add(Dense(500))
#model.add(Activation(activation_relu))
#model.add(Dropout(0.5))
#model.add(Dense(50))
#model.add(Convolution2D(200, 1, 1,border_mode='same', subsample=(1, 1)))
#model.add(Activation(activation_relu))
#model.add(Flatten())
#model.add(Dropout(0.5))
#model.add(Dense(50))
#model.add(Activation(activation_relu))
#model.add(Dropout(0.5))
model.add(Dense(10))
model.add(Activation(activation_relu))
model.add(Dense(1))
model = opt(model)
return model
def opt(model, learning_rate = learning_rate):
model.summary()
model.compile(optimizer=Adam(learning_rate), loss="mse", )
return model
def train(model):
x,y = read_csv()
x,y = datashuffle(x,y)
#X_Tleft, X_Tright, X_Tcenter,Y_T, X_Vleft, X_Vright, X_Vcenter, Y_V = trainval_split(x, y)
X_train, X_val, Y_train, Y_val = trainval_split(x, y)
X_left = X_train['left'].as_matrix()
X_right = X_train['right'].as_matrix()
X_center = X_train['center'].as_matrix()
Y = Y_train.as_matrix()
#X_train, Y_train = load_data(X_train, Y_train)
#X_train, Y_train = data_augmentation(X_train, Y_train)
#X_val, Y_val = load_data(X_val, Y_val)
#X_val, Y_val = data_augmentation(X_val, Y_val)
#print(X_train.shape, Y_train.shape, X_val.shape, Y_val.shape)
train_gen = generate_train_batch(X_center, X_left, X_right,Y, batch_size = 64)
#validation_gen = generate_train_batchV1(X_val,Y_val, batch_size = 64)
X_left = X_val['left'].as_matrix()
X_right = X_val['right'].as_matrix()
X_center = X_val['center'].as_matrix()
Y = Y_val.as_matrix()
validation_gen = generate_train_batch(X_center, X_left, X_right,Y, batch_size = 64)
history = model.fit_generator(train_gen,
samples_per_epoch=number_of_samples_per_epoch,
nb_epoch=5,
validation_data=validation_gen,
nb_val_samples=number_of_validation_samples,
verbose=1)
return model
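# NOTE (added for clarity, not from the original project): the real batch
# generator `generate_train_batch` and the data augmentation live in util.py.
# The sketch below, assuming the same arguments as the call in train() and the
# +/-0.25 steering offsets for the left/right cameras described above, shows
# roughly what such a generator does. It is named *_sketch so it does not
# shadow the util.py implementation.
def generate_train_batch_sketch(X_center, X_left, X_right, Y, batch_size=64):
    n = len(Y)
    while True:
        idx = np.random.randint(0, n, batch_size)
        images, steerings = [], []
        for i in idx:
            cam = np.random.choice(['center', 'left', 'right'])
            if cam == 'left':
                images.append(load_image(X_left[i]))
                steerings.append(Y[i] + 0.25)
            elif cam == 'right':
                images.append(load_image(X_right[i]))
                steerings.append(Y[i] - 0.25)
            else:
                images.append(load_image(X_center[i]))
                steerings.append(Y[i])
        yield np.array(images), np.array(steerings)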
def train_model():
model = Nets_Build()
model = train(model)
#save_model(model)
json_string = model.to_json()
model_json = 'model.json'
model_weights = 'model.h5'
try:
os.remove(model_json)
os.remove(model_weights)
except OSError:
pass
with open(model_json, 'w') as jfile:
json.dump(json_string, jfile)
model.save_weights(model_weights)
return model
model = train_model()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: One strategy for choosing a change point is to pick a point that has a low p-value and also a large enough effect size. Note that a changepoint depends on two things: (a) effect size and (b) significance (p-value). Very small effect sizes can still be statistically significant, which is why we need to use both criteria to arrive at a final estimate. Here I used a threshold of 1.0 for the effect size (this depends on your data) and a significance level of 0.05
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from changepoint.mean_shift_model import MeanShiftModel
ts = np.concatenate([np.random.normal(0, 0.1, 10), np.random.normal(1, 0.1, 10)])
model = MeanShiftModel()
stats_ts, pvals, nums = model.detect_mean_shift(ts, B=10000)
%matplotlib inline
import pylab as pl
pl.plot(ts)
pl.plot(stats_ts)
pvals
np.argmin(pvals)
np.where(np.array(stats_ts)>1.0)
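# Combining both criteria from the text (p-value below 0.05 AND effect size
# above 1.0) gives the candidate changepoint indices; the exact thresholds are
# illustrative and depend on your data.
candidates = np.where((np.array(pvals) < 0.05) & (np.array(stats_ts) > 1.0))[0]
candidates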
import numpy as np
from changepoint.mean_shift_model import MeanShiftModel
ts = np.concatenate([np.random.normal(0, 0.1, 10), np.random.normal(0, 0.1, 10)])
model = MeanShiftModel()
stats_ts, pvals, nums = model.detect_mean_shift(ts, B=10000)
pl.plot(ts)
pvals
np.where(np.array(pvals)<0.05)
np.where(np.array(stats_ts)>1.0)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Relative intensity
Step2: Fold-change
|
<ASSISTANT_TASK:>
Python Code:
# First, we must perform the incantations.
%pylab inline
import pandas as pd
# Parse data file.
proteins = pd.read_table('data/pubs2015/proteinGroups.txt', low_memory=False)
# Find mass spec intensity columns.
intensity_cols = [c for c in proteins.columns if 'intensity '
in c.lower() and 'lfq' not in c.lower()]
# Find columns corresponding to experiment classes.
wcl_cols = [c for c in intensity_cols if '_wcl' in c.lower() and '_wclp' not in c.lower()]
wclp_cols = [c for c in intensity_cols if '_wclp' in c.lower()]
ub_cols = [c for c in intensity_cols if '_ub' in c.lower() and '_ubp' not in c.lower()]
ubp_cols = [c for c in intensity_cols if '_ubp' in c.lower()]
# Create a binary mask excluding reversed and contaminated samples.
mask = (proteins['Reverse'] != '+') & \
(proteins['Potential contaminant'] != '+')
# Apply reversed/contaminated mask and get intensity columns.
intensities = proteins[mask][intensity_cols]
# Sum down the columns (MS runs).
total_intensities = proteins[intensity_cols].sum(axis=0)
# Element-wise division with singleton expansion/broadcasting.
normed_intensities = intensities / total_intensities
# Indices of proteins which have non-zero intensity in at least one run.
idx = (normed_intensities != 0).any(axis=1)
# Get names and intensities of such proteins.
names = proteins[mask][idx]['Protein IDs']
nonzero_intensities = normed_intensities[idx]
# Separate the intensity DataFrame into separate DataFrames for each experiment class.
wcl = nonzero_intensities[wcl_cols]
wclp = nonzero_intensities[wclp_cols]
ub = nonzero_intensities[ub_cols]
ubp = nonzero_intensities[ubp_cols]
# Find control columns in each experiment class.
wcl_ctrl = [c for c in wcl.columns if 'control' in c.lower()]
wclp_ctrl = [c for c in wclp.columns if 'control' in c.lower()]
ub_ctrl = [c for c in ub.columns if 'control' in c.lower()]
ubp_ctrl = [c for c in ubp.columns if 'control' in c.lower()]
# Find experiment columns in each experiment class.
wcl_exp = [c for c in wcl.columns if 'control' not in c.lower()]
wclp_exp = [c for c in wclp.columns if 'control' not in c.lower()]
ub_exp = [c for c in ub.columns if 'control' not in c.lower()]
ubp_exp = [c for c in ubp.columns if 'control' not in c.lower()]
# Need to use underlying numpy arrays for singleton expansion ('broadcasting')
# and form new DataFrame using appropriate column names.
wcl_foldch = pd.DataFrame(log2(wcl[wcl_exp]).values - log2(wcl[wcl_ctrl]).values, columns=wcl_exp)
wclp_foldch = pd.DataFrame(log2(wclp[wclp_exp]).values - log2(wclp[wclp_ctrl]).values, columns=wclp_exp)
ub_foldch = pd.DataFrame(log2(ub[ub_exp]).values - log2(ub[ub_ctrl]).values, columns=ub_exp)
ubp_foldch = pd.DataFrame(log2(ubp[ubp_exp]).values - log2(ubp[ubp_ctrl]).values, columns=ubp_exp)
# 3rd-to-last element is Shmoo / CaCl2.
# Only histogram finite (non-inf, non-NaN) values.
hist(wcl_foldch[wcl_foldch.columns[-3]][isfinite(wcl_foldch[wcl_foldch.columns[-3]])].values, 100);
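# Added, illustrative summary of the same column: count how many proteins show
# at least a two-fold change (|log2 fold-change| > 1) in this experiment.
# The |log2 FC| > 1 threshold is only an example.
col = wcl_foldch.columns[-3]
finite_fc = wcl_foldch[col][isfinite(wcl_foldch[col])]
(abs(finite_fc) > 1).sum(), len(finite_fc)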
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then the module is imported as follows
Step2: Any function from this module is then called as plt.FunctionName.
Step3: Another option is to divide the interval $[a,b]$ by specifying the division step, h, rather than the number of division points
Step4: If the division step of the interval is not small enough, the graph of a differentiable function is not displayed smoothly but "with corners", because
Step5: Besides drawing graphs, the plt.plot function can also display a scatter (cloud) of points
Step6: The matplotlib.pyplot.plot function can be called either by passing only the mandatory arguments, or by passing arguments together with keyword arguments that have been assigned values.
Step7: To draw a segment between two points with coordinates $(2,4)$ and $(6,-1)$, we build the list of x-coordinates,
Step8: Another argument of the plt.plot function is the color used for drawing lines or marking points.
Step9: The code sequence above uses the function plt.subplot(nrlin, nrcol, nrfig) (number of rows, number of columns, figure number), which indicates that
Step10: Keyword arguments can be placed in any position after the mandatory arguments.
Step11: matplotlib contains many other modules, which will be presented as they are used.
Step12: If you copy the code lines from the previous cell, save them in a file spirala.py, and comment out the line
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
a=0
b=6*np.pi
n=300
x=np.linspace(a, b,n)
y=np.exp(-x/8)*np.cos(x)
plt.plot(x,y, 'r')
plt.title('Graph of a function')
plt.xlabel('x')
plt.ylabel('y=f(x)')
a=-5
b=7
h=0.01
X=np.arange(a,b, h)
Y=-2*X*X+X+1
plt.plot(X,Y, 'g')
plt.title('An arc of a parabola')
a=0
b=2*np.pi
h=0.5
xx=np.arange(a, b, h)
yy=np.sin(xx)
plt.plot(xx, yy)
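# Added comparison: the same function drawn with a much smaller step looks
# smooth, which is the point made in the text about the division step.
xx_fine = np.arange(a, b, 0.01)
plt.plot(xx, yy, 'o-', label='h=0.5')
plt.plot(xx_fine, np.sin(xx_fine), label='h=0.01')
plt.legend()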
x=np.random.random((1,10))
y=np.random.random((1,10))
plt.plot(x,y, 'b*')
y=[2,4,1,3,-2,5]
plt.plot(y)
x=[2,6]
y=[4,-1]
plt.plot(x,y)
x=[2,4,3,1,5,0.7,6]
y=[1,3,2,2.41,2.5, 3.13, 2]
plt.subplot(1,4,1)
plt.plot(x,y, 'o', color=(0.86, 0.10, 0.46))
plt.subplot(1,4,2)
plt.plot(x,y, 'o', color=(0.86, 0.10, 0.46, 0.45))
plt.subplot(1,4,3)
plt.plot(x,y, 'o', color='#33FF99')
plt.subplot(1,4,4)
plt.plot(x,y, color='k')
x=np.arange(0,5, 0.3)
y=x*x/8-x
plt.subplot(1,3,1)
plt.plot(x,y, lw=2, ls='dotted')
plt.subplot(1,3,2)
plt.plot(x,y, lw=3, ls='dashed')
plt.subplot(1,3,3)
plt.plot(x,y, 'r', ls='solid', lw=4)
import matplotlib
X=np.random.random((16,16)) # generates a 16x16 matrix of elements
# drawn at random from the interval [0,1)
img=plt.imshow(X,cmap=matplotlib.cm.spectral, interpolation='nearest') # displays the matrix as an
# image obtained by mapping the numbers in the interval [0,1] to
# a colormap (color palette); in newer matplotlib versions this colormap is called 'nipy_spectral'
X=plt.imread('Imag/Iony.png')
print 'The image has shape', X.shape
plt.imshow(X)
import numpy as np
import matplotlib.pyplot as plt
t=np.arange(0,4, 0.01)
x=np.exp(-t)*np.cos(2*np.pi*t)
y=np.exp(-t)*np.sin(2*np.pi*t)
plt.plot(x,y,'r')
plt.axis('equal')
plt.xlabel('x(t)')
plt.ylabel('y(t)')
plt.show()
from IPython.core.display import HTML
def css_styling():
styles = open("./custom.css", "r").read()
return HTML(styles)
css_styling()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: IV.1 Particle Metropolis-Hastings
Step2: Bootstrap particle filter giving an estimate $\widehat{z}_\theta$ of the joint likelihood $p(y_{1:T}\,\big|\,\theta)$
Step3: As a proposal we can use $q(\theta'\,\big|\,\theta[k - 1]) = \mathcal{N}\left(\theta';\,\theta[k - 1], \tau\right)$ with an appropriately chosen $\tau$.
Step4: IV.2 Conditional Particle Filter
Step5: Simulate from the model given above.
Step6: This is a Markov kernel which can be used in Gibbs sampling where the parameters and the hidden state are sampled repeatedly consecutively.
Step7: IV.3 Conditional importance sampling
Step8: Use that kernel to sample from the target distribution.
Step9: Run the sampler
Step10: Plot the result
Step11: b) Lower bound for probability that draw from cond. imp. sampling kernel falls in a set $A$
Step12: Visualize simulated observations and true $x_0$
Step13: b) Likelihood
Step14: c) Metropolis-Hastings kernel for $\pi_k$
Step15: The Metropolis-Hastings kernel produces one new sample of the Markov chain, conditional on the last sample.
Step16: e) Putting together the actual SMC sampler
Step17: f) Visualisation and testing of the SMC sampling
Step18: g) Comparison to a standard Metropolis-Hastings sampler
Step19: Some visualisations of the marginal distributions for the two coordinates determined by the Metropolis-Hastings run.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from scipy import stats
from tqdm import tqdm_notebook
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style()
T = 50
xs_sim = np.zeros((T + 1,))
ys_sim = np.zeros((T,))
# Initial state
xs_sim[0] = 0.
for t in range(T):
xs_sim[t + 1] = np.cos(xs_sim[t]) + stats.norm.rvs()
ys_sim = xs_sim[1:] + stats.norm.rvs(0, 1, T)
fig, axs = plt.subplots(2, 1, figsize=(10, 10))
axs[0].plot(xs_sim, 'o-')
axs[1].plot(range(1, T + 1), ys_sim, 'o-r')
def log_likelihood_bootstrap_pf(y, N=20, theta=1):
# Cumulatively build up log-likelihood
ll = 0.0
# Initialisation
samples = stats.norm.rvs(0, 1, N)
weights = 1 / N * np.ones((N,))
# Determine the number of time steps
T = len(y)
# Loop through all time steps
for t in range(T):
# Resample
ancestors = np.random.choice(samples, size=N,
replace=True, p=weights)
# Propagate
samples = stats.norm.rvs(0, 1, N) + np.cos(theta * ancestors)
# Weight
weights = stats.norm.logpdf(y[t], loc=samples, scale=1)
# Calculate the max of the weights
max_weights = np.max(weights)
# Subtract the max
weights = weights - max_weights
# Update log-likelihood
ll += max_weights + np.log(np.sum(np.exp(weights))) - np.log(N)
# Normalize weights to be probabilities
weights = np.exp(weights) / np.sum(np.exp(weights))
return ll
log_likelihood_bootstrap_pf(ys_sim, N=50, theta=3)
def particle_metropolis_hastings(y, M=10000, N=20, tau=1):
theta = np.zeros((M + 1,))
alpha = np.zeros((M,))
z = np.zeros((M + 1,))
# Initial state
theta[0] = 0
z[0] = log_likelihood_bootstrap_pf(y, N=N, theta=theta[0])
# Iterate the chain
t = tqdm_notebook(range(M))
for i in t:
# Sample a new value
theta_prop = stats.norm.rvs(theta[i], tau, 1)
# Sample to be compared to the acceptance probability
u = stats.uniform.rvs()
# Terms in the second part of the acceptance probability -
# Proposal is symmetric, so terms containing the proposal will
# cancel each other out
z_prop = log_likelihood_bootstrap_pf(y, N=N, theta=theta_prop)
num = z_prop + stats.norm.logpdf(theta_prop)
denom = z[i] + stats.norm.logpdf(theta[i])
# Acceptance probability
alpha[i] = min(1, np.exp(num - denom))
t.set_postfix({'a_mean': np.mean(alpha[:(i + 1)])})
# Set next state depending on acceptance probability
if u <= alpha[i]:
z[i + 1] = z_prop
theta[i + 1] = theta_prop
else:
z[i + 1] = z[i]
theta[i + 1] = theta[i]
return theta, alpha
theta, alpha = particle_metropolis_hastings(ys_sim, M=10000, N=50, tau=0.7)
np.mean(alpha)
fig, ax = plt.subplots()
ax.plot(theta, '.-')
fig, ax = plt.subplots()
ax.hist(theta[2000:], normed=True, bins=60);
def conditional_FAPF(x_ref, y, N=200):
# Determine length of data
T = len(y)
# Save the paths of all final particles
xs = np.zeros((N, T + 1))
# Initialisation
xs[:, 0] = stats.norm.rvs(0, 1, N)
# Replace last state with state from reference trajectory
xs[N - 1, 0] = x_ref[0]
for t in range(T):
# Calculate resampling weights in case of FAPF
ws = stats.norm.logpdf(y[t], loc=2*np.power(np.cos(xs[:, t]), 2),
scale=np.sqrt(4.01))
# Subtract maximum weight
ws -= np.max(ws)
# Normalize the resampling weights
ws = np.exp(ws) / np.sum(np.exp(ws))
# Resample
ancestors = np.random.choice(range(N), size=N, replace=True, p=ws)
# Propagate
xs[:, t + 1] = stats.norm.rvs(0, 1, N) * 0.1 / np.sqrt(4.01) + \
(2 / 4.01) * y[t] + (0.01 / 4.01) * \
np.power(np.cos(xs[ancestors, t]), 2)
# Replace last sample with reference trajectory
ancestors[N - 1] = N - 1
xs[N - 1, t + 1] = x_ref[t + 1]
# Update the ancestor lines
xs[:, 0:t] = xs[ancestors, 0:t]
# Randomly choose trajectory which will be returned
# All normalized weights are 1 / N, so that no draw from
# a categorical distribution is necessary. A uniform draw
# is satisfactory.
b = np.random.randint(N)
return xs[b, :]
T = 100
# Allocate arrays for results
ys_sim = np.zeros((T,))
xs_sim = np.zeros((T + 1,))
# Initial value for state
xs_sim[0] = 0.1
# Walk through all time steps
for t in range(T):
xs_sim[t + 1] = np.power(np.cos(xs_sim[t]), 2) + stats.norm.rvs(0, 1, 1)
ys_sim[t] = 2 * xs_sim[t + 1] + stats.norm.rvs(0, 0.1, 1)
fig, axs = plt.subplots(2, 1, figsize=(10, 10))
axs[0].plot(range(T + 1), xs_sim, 'o-');
axs[1].plot(range(1, T + 1), ys_sim, 'o-r');
xs = conditional_FAPF(xs_sim, ys_sim, N=1000)
fig, ax = plt.subplots()
ax.plot(xs_sim, 'o-')
ax.plot(xs, 'x-');
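# Added illustration: because conditional_FAPF is a Markov kernel that leaves
# the smoothing distribution invariant, it can be iterated, feeding each output
# trajectory back in as the next reference. In a full particle Gibbs sampler the
# model parameters would be re-sampled between these sweeps; here they are
# fixed, so only the state trajectory is refreshed.
n_iter = 50
trajectories = np.zeros((n_iter, T + 1))
x_ref = xs_sim
for m in range(n_iter):
    x_ref = conditional_FAPF(x_ref, ys_sim, N=200)
    trajectories[m, :] = x_ref

fig, ax = plt.subplots()
ax.plot(trajectories.mean(axis=0), 'x-', label='rough posterior mean (50 sweeps)')
ax.plot(xs_sim, 'o-', label='true state')
ax.legend();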
def cond_imp_sampling_kernel(x, N=2):
# Sample new proposals
xs = stats.norm.rvs(0, 1, N)
# Set the last sample to the reference
xs[N - 1] = x
# Calculate weights
ws = stats.norm.logpdf(xs, loc=1, scale=1) - \
stats.norm.logpdf(xs, loc=0, scale=1)
ws -= np.max(ws)
ws = np.exp(ws) / np.sum(np.exp(ws))
return xs[np.random.choice(range(N), size=1, p=ws)[0]]
def cond_imp_sampling_mcmc(M=1000, N=2):
# Initialisation
xs = np.zeros((M + 1,))
for m in tqdm_notebook(range(M)):
xs[m + 1] = cond_imp_sampling_kernel(xs[m], N=N)
return xs
xs = cond_imp_sampling_mcmc(M=70000)
fig, ax = plt.subplots()
ax.hist(xs, normed=True, bins=40);
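# Added check: the conditional importance sampling kernel targets N(1, 1)
# (the weights above are the ratio N(1,1)/N(0,1) under a N(0,1) proposal), so
# overlaying that density on the histogram gives a quick visual convergence check.
grid = np.linspace(-3, 5, 200)
fig, ax = plt.subplots()
ax.hist(xs, normed=True, bins=40)
ax.plot(grid, stats.norm.pdf(grid, loc=1, scale=1), 'r');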
M = 50
x0 = np.array([6.0, -5.5])
ns = np.reshape(stats.expon.rvs(scale=2, size=2 * M), (2, M))
bs = np.reshape(np.random.choice([-1, 1], size=2 * M,
replace=True, p=[0.5, 0.5]),
(2, M))
ys = np.reshape(np.repeat(x0, M), (2, M)) + ns * bs
ys = ys.T
fig, ax = plt.subplots(figsize=(8, 8))
ax.scatter(ys[:, 0], ys[:, 1])
ax.set_xlim([-12, 12])
ax.set_ylim([-12, 12])
ax.scatter(x0[0], x0[1], facecolors='none', edgecolors='r', s=100)
def log_likelihood(x, ys):
return np.sum(np.log(0.25) + 0.5 *
np.power(-1, ((ys - x) > 0).astype('int')) * (ys - x))
def tempered_logpdf(x, ys, k, K=10):
# k / K comes from likelihood tempering
return k / K * log_likelihood(x, ys) + \
stats.multivariate_normal.logpdf(x, mean=[0, 0],
cov=7 * np.eye(2))
def mh_kernel(x, ys, k, K=10, tau=0.5):
# Propose a new value
x_prop = stats.multivariate_normal.rvs(mean=x,
cov=tau**2 * np.eye(2),
size=1)
# Terms in the second part of the acceptance probability
# Proposal is symmetric, so terms containing the proposal will
# cancel each other out
# Acceptance probability
alpha = min(0, tempered_logpdf(x_prop, ys, k, K=K) -
tempered_logpdf(x, ys, k, K=K))
# Sample to be compared to the acceptance probability
u = stats.uniform.rvs()
# Set next state depending on acceptance probability
if np.log(u) <= alpha:
return x_prop, np.exp(alpha)
else:
return x, np.exp(alpha)
mh_kernel(x0, ys, 2)
def smc_sampler(ys, K=10, N=100, ess_min=50, tau=0.5, progressbar=True):
# Vectors for saving
xs = np.zeros((K + 1, N, 2))
ancs = np.zeros((K, N), dtype='int64')
ws = np.zeros((K + 1, N))
# Initialisation
xs[0, :, :] = stats.multivariate_normal.rvs(mean=[0, 0],
cov=7 * np.eye(2),
size=N)
ws[0, :] = 1 / N * np.ones((N,))
if progressbar:
t = tqdm_notebook(range(K))
else:
t = range(K)
for k in t:
# Update weights
for i in range(N):
ws[k + 1, i] = np.log(ws[k, i]) + \
tempered_logpdf(xs[k, i, :], ys, k=k + 1, K=K) - \
tempered_logpdf(xs[k, i, :], ys, k=k, K=K)
# and normalize them
ws[k + 1, :] -= np.max(ws[k + 1, :])
ws[k + 1, :] = np.exp(ws[k + 1, :]) / np.sum(np.exp(ws[k + 1, :]))
# Resample depending on ESS
if 1 / np.sum(np.power(ws[k + 1, :], 2)) < ess_min:
ancs[k, :] = np.random.choice(range(N), size=N,
replace=True, p=ws[k + 1, :])
ws[k + 1, :] = 1 / N * np.ones((N,))
else:
ancs[k, :] = range(N)
# Propagate / Sample from next element in the sequence
# Here, via a Metropolis-Hastings kernel
for i in range(N):
xs[k + 1, i, :] = mh_kernel(xs[k, ancs[k, i], :], ys,
k=k + 1, K=K, tau=tau)[0]
return xs, ancs, ws
xs, ancs, ws = smc_sampler(ys, N=1000, ess_min=750)
np.sum(xs[10, :, 0] * ws[10])
np.sum(xs[10, :, 1] * ws[10])
x = np.arange(-12, 12, 0.25)
y = np.arange(-12, 12, 0.25)
X, Y = np.meshgrid(x, y)
Z = np.zeros((len(x), len(y), 10))
for k in tqdm_notebook(range(10)):
for i in range(len(x)):
for j in range(len(y)):
Z[i, j, k] = tempered_logpdf(np.array([X[i, j], Y[i, j]]),
ys, k, K=10)
Z[:, :, k] -= np.max(Z[:, :, k])
Z[:, :, k] = np.exp(Z[:, :, k])
fig, axs = plt.subplots(5, 2, figsize=(8.5, 20))
for k in range(10):
levels=np.linspace(np.min(Z[:, :, k]),
np.max(Z[:, :, k]), 8)
axs[k // 2, k % 2].contour(X, Y, Z[:, :, k])
axs[k // 2, k % 2].scatter(x0[0], x0[1],
facecolors='none', edgecolors='r', s=100)
axs[k // 2, k % 2].scatter(xs[k, :, 0], xs[k, :, 1], color='k')
fig.tight_layout()
def mh_sampler(ys, k=10, K=10, M=1000, tau=0.5, progressbar=True):
# Prepare vectors for saving
xs = np.zeros((M + 1, 2))
alpha = np.zeros((M,))
# Initial state
# Choose zero as the initial state
# Iterate the chain
if progressbar:
t = tqdm_notebook(range(M))
else:
t = range(M)
for i in t:
xs[i + 1], alpha[i] = mh_kernel(xs[i], ys, k, K=K, tau=tau)
if progressbar:
t.set_postfix({'mean acc': np.mean(alpha[:(i + 1)])})
return xs, alpha
xs, _ = mh_sampler(ys, M=30000, tau=0.7, progressbar=True)
fig, axs = plt.subplots(2, 1, figsize=(8, 6))
burnin = 500
axs[0].hist(xs[burnin:, 0], normed=True, bins=50);
axs[0].axvline(np.mean(xs[burnin:, 0]), color='r', linestyle='--')
axs[0].axvline(np.median(xs[burnin:, 0]), color='k', linestyle='--')
axs[1].hist(xs[burnin:, 1], normed=True, bins=50);
axs[1].axvline(np.mean(xs[burnin:, 1]), color='r', linestyle='--')
axs[1].axvline(np.median(xs[burnin:, 1]), color='k', linestyle='--')
means_mh = np.zeros((10, 2))
means_smc = np.zeros((10, 2))
for m in tqdm_notebook(range(10)):
xs, _ = mh_sampler(ys, M=25000, tau=0.7, progressbar=True)
means_mh[m, :] = np.mean(xs[500:], axis=0)
xs, _, ws = smc_sampler(ys, N=2000, ess_min=1500, progressbar=True)
means_smc[m, :] = [np.sum(xs[10, :, 0] * ws[10]),
np.sum(xs[10, :, 1] * ws[10])]
np.mean(np.linalg.norm(means_smc - x0, axis=1, ord=1))
np.mean(np.linalg.norm(means_mh - x0, axis=1, ord=1))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note that in the line above we applied the split method directly to a string, without needing to name a variable holding the string's contents!
Step2: How do we turn the first letter of each item in the list into uppercase?
Step3: Note that x is a variable local to the list comprehension; it only exists so that we can refer to each item of the list being traversed in the new expression we want to generate. Outside the list comprehension, Python does not know what this x is
Step4: Example
Step5: To turn this list into a string, we can use the join method
Step6: An explanation of the join syntax
Step7: Using list comprehensions, we can generate other lists. For example, generating a list of odd numbers
Step8: Slicing
Step9: Note that, with one extra parameter, we can select the elements of the list two by two
Step10: We can also compute the length of a list of numbers
Step11: We can generate a new list containing pieces of the original list
Step12: Unfortunately, the result above is a list of lists
Step13: So, to access an individual element of the numeros list, we need to use one index for the outer list and another index for the inner list
Step14: To turn that list of lists into a flat list, we can use the command below (I admit it is a bit magical, but it works! ;))
Step15: Note that slicing also works for sequences of characters (strings)
Step16: Loops and conditionals
Step17: We do this explicitly when we use the for structure. Note that, in Python, we do not need to mark the end of a code block; in fact, indentation delimits a block. Note the difference between the cells below
Step18: Conditionals
Step19: We use == in the expression above because in Python it is necessary to distinguish between an assignment statement (assigning a value to a variable) and a comparison (testing whether two variables are equal). The if case is the latter.
Step20: Scripting
Step21: Warning
Step22: Next, let's list all the files in this directory.
Step23: So that we can loop over the list containing the names of all the files in this directory, let's save this list of file names
Step24: Now, let's perform the following operation
Step25: Checking that it worked
Step26: (of course, if you are in a directory with no files with the ".txt" extension, nothing will happen!)
Step27: Example 2
Step28: Now, to find out when the file was last modified, we need to use a function that does not return the last-modified date in the format we are used to. It returns the time, in seconds, elapsed since January 1st, 1970 (if you are on Unix). To get what we want, we then use the ctime function from the time module.
Step29: Now, let's undo the exercise to restore the directory to its original structure
|
<ASSISTANT_TASK:>
Python Code:
minhalista = "Como fazer uma list comprehension".split()
minhalista
minhalista = [x.capitalize() for x in minhalista]
minhalista
x
linguadope = ["Pe"+palavra for palavra in minhalista]
linguadope
" ".join(linguadope)
numeros = [n for n in range(0,10)]
print(numeros)
numeros = [2*n+1 for n in range(0,11)]
numeros
numeros[0]
numeros[-1]
numeros[3:4]
numeros[0:11:2]
len(numeros)
numeros[-3]
numeros = [numeros[4:6],numeros[3:8]]
print(numeros)
numeros[0]
numeros[0][1]
lista = [item for sublist in numeros for item in sublist]
lista
palavra = "teste"
palavra[1:]
numeros = [n for n in range(0,11)]
print(numeros)
for item in numeros: # Indentation
print("Numero "+str(item))
print("bla")
for item in numeros: # Indentation
print("Numero "+str(item))
print("bla")
palavra = "bla"
if palavra == "bla!":
print("Verdadeiro")
else:
print("Falso")
if palavra == "bla!" and 3 > 2:
print("Aha")
else:
print("Uhu")
if palavra == "bla!" or 3>2:
print("Aha")
else:
print("Uhu")
import os
diretorio = os.path.join(os.getcwd(), "..","exemplos/exemplo_1")
os.listdir(diretorio)
lista = os.listdir(diretorio)
for arquivo in lista:
if arquivo[-3:] == "txt":
os.rename(os.path.join(diretorio,arquivo),os.path.join(diretorio,arquivo.capitalize()))
os.listdir(diretorio)
lista = os.listdir(diretorio)
for arquivo in lista:
os.rename(os.path.join(diretorio,arquivo),os.path.join(diretorio,arquivo.lower()))
os.listdir(diretorio)
import os
diretorio = os.path.join(os.getcwd(), "..","exemplos/exemplo_2")
print(diretorio)
print(os.listdir(diretorio))
import time
print(os.path.getmtime(os.path.join(diretorio,"file1.txt")))
time.ctime(os.path.getmtime(os.path.join(diretorio,"file1.txt")))
lista = os.listdir(diretorio)
for arquivo in lista:
print(time.ctime(os.path.getmtime(os.path.join(diretorio,arquivo))))
os.mkdir(os.path.join(diretorio,"arquivos_setembro"))
os.mkdir(os.path.join(diretorio,"arquivos_agosto"))
for arquivo in lista:
if arquivo != "teste.txt":
data_modificacao = time.ctime(os.path.getmtime(os.path.join(diretorio,arquivo)))
if data_modificacao[4:7] == "Sep":
os.rename(os.path.join(diretorio,arquivo), os.path.join(diretorio,"arquivos_setembro",arquivo))
elif data_modificacao[4:7] == "Aug":
os.rename(os.path.join(diretorio,arquivo), os.path.join(diretorio,"arquivos_agosto",arquivo))
lista = os.listdir(diretorio)
for item in lista:
if os.path.isdir(os.path.join(diretorio,item)):
locais = os.listdir(os.path.join(diretorio,item))
print(locais)
for arquivo in locais:
os.rename(os.path.join(diretorio,item,arquivo),os.path.join(diretorio,arquivo))
os.rmdir(os.path.join(diretorio,item))
print(os.listdir(diretorio))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Explore data
Step2: From scratch
Step3: With sklearn
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
from IPython.core.display import display, HTML
display(HTML('''
<style>
.dataframe td, .dataframe th {
border: 1px solid black;
background: white;
}
.dataframe td {
text-align: left;
}
</style>
'''))
df = pd.DataFrame({
'Outlook': ['sunny', 'sunny', 'overcast', 'rain', 'rain', 'rain', 'overcast', 'sunny', 'sunny', 'rain', 'sunny', 'overcast', 'overcast', 'rain'],
'Temperature': ['hot', 'hot', 'hot', 'mild', 'cool', 'cool', 'cool', 'mild', 'cool', 'mild', 'mild', 'mild', 'hot', 'mild'],
'Humidity': ['high', 'high', 'high', 'high', 'normal', 'normal', 'normal', 'high', 'normal', 'normal', 'normal', 'high', 'normal','high'],
'Wind': ['weak', 'strong', 'weak', 'weak', 'weak', 'strong', 'strong', 'weak', 'weak', 'weak', 'strong', 'strong', 'weak', 'strong'],
'Play': ['no', 'no', 'yes', 'yes', 'yes', 'no', 'yes', 'no', 'yes', 'yes', 'yes', 'yes', 'yes', 'no']
})
HTML(df.to_html(index=False))
val, count = np.unique(df['Play'], return_counts=True)
n = np.sum(count)
for i,v in enumerate(val):
print('P(Play={:<3s}) = {:d}/{:d}'.format(v, count[i], n))
for column in df.drop('Play', axis=1).columns:
dftmp = pd.crosstab(df[column], df['Play'],
margins=False,
rownames=[None],colnames=[column])
dftmp.columns = 'Play=' + dftmp.columns
for i,v in enumerate(val):
dftmp.iloc[:,i] = dftmp.iloc[:,i].astype('string') + '/' + str(count[i])
display(HTML(dftmp.to_html()))
dfYes = df[df['Play'] == 'yes']
dfNo = df[df['Play'] == 'no']
nYes = len(dfYes)
nNo = len(dfNo)
print(nYes, nNo)
pYes = (dfYes['Outlook'] == 'sunny').sum()/nYes \
* (dfYes['Temperature'] == 'cool').sum()/nYes \
* (dfYes['Humidity'] == 'high').sum()/nYes \
* (dfYes['Wind'] == 'strong').sum()/nYes \
* nYes/len(df)
pYes
pNo = (dfNo['Outlook'] == 'sunny').sum()/nNo \
* (dfNo['Temperature'] == 'cool').sum()/nNo \
* (dfNo['Humidity'] == 'high').sum()/nNo \
* (dfNo['Wind'] == 'strong').sum()/nNo \
* nNo/len(df)
pNo
print('Prediction:', ('yes' if pYes > pNo else 'no'))
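# Added note: pYes and pNo above are unnormalized scores. Dividing by their sum
# gives proper posterior probabilities for this observation.
print('P(yes | x) =', pYes / (pYes + pNo))
print('P(no  | x) =', pNo / (pYes + pNo))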
from sklearn.naive_bayes import GaussianNB, BernoulliNB, MultinomialNB
from sklearn.preprocessing import LabelEncoder
# Encode labels to integers
encoder = LabelEncoder()
c = {}
Y = encoder.fit_transform(df['Play'])
c['Play'] = list(encoder.classes_)
X = df.drop('Play', axis=1)
for column in X.columns:
X[column] = encoder.fit_transform(X[column])
c[column] = list(encoder.classes_)
# Pre-compute likelihood tables
model = MultinomialNB()
model.fit(X, Y)
# Predict most likely outcome
res = model.predict([[
c['Outlook'].index('sunny'),
c['Temperature'].index('cool'),
c['Humidity'].index('high'),
c['Wind'].index('strong'),
]])[0]
print('Prediction:', c['Play'][res])
'''
# Evaluate
from sklearn.metrics import accuracy_score, confusion_matrix
y_pred = model.predict(X_test)
accuracy_score(y_test, y_pred, normalize=True)
confusion_matrix(y_test, y_pred)
'''
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Summarize
Step2: Summarize
Step3: Setting values in lists
Step4: Predict what this code does.
Step5: Predict what this code does.
Step6: Summarize
Step7: Copying lists
Step8: Predict what this code does.
|
<ASSISTANT_TASK:>
Python Code:
some_list = [10,20,30]
print(some_list[2])
some_list = [10,20,30]
print(some_list[0])
some_list = [10,20,30]
print(some_list[-1])
some_list = [10,20,30,40]
print(some_list[1:3])
some_list = [10,20,30]
print(some_list[:3])
some_list = [0,10,20,30,40,50,60,70]
print(some_list[2:4])
some_list = [10,20,30]
some_list[0] = 50
print(some_list)
some_list = []
for i in range(5):
some_list.append(i)
print(some_list)
some_list = [1,2,3]
some_list.insert(2,5)
print(some_list)
some_list = [10,20,30]
some_list.pop(1)
print(some_list)
some_list = [10,20,30]
some_list.remove(30)
print(some_list)
# You can put anything in a list
some_list = ["test",1,1.52323,print]
# You can even put a list in a list
some_list = [[1,2,3],[4,5,6],[7,8,9]] # a list of three lists!
# You can get the length of a list with len(some_list)
some_list = [10,20,30]
print(len(some_list))
some_list = [10,20,30]
another_list = some_list
some_list[0] = 50
print(some_list)
print(another_list)
import copy
some_list = [10,20,30]
another_list = copy.deepcopy(some_list)
some_list[0] = 50
print(some_list)
print(another_list)
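# Added example: for a flat list like this one, a shallow copy (slicing or
# copy.copy) is enough to get an independent list; deepcopy matters when the
# list contains nested mutable objects.
some_list = [10,20,30]
another_list = some_list[:]
some_list[0] = 50
print(some_list)
print(another_list)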
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Contents
Step2: Facility generation and CO2 emissions
Step3: EIA Facility level emissions (consolidate fuels/prime movers)
Step4: Total EIA generation and CO2 emissions
Step5: Consolidate total EIA to monthly gen and emissions
Step6: Pretty sure that I don't need to keep eia_total_annual
Step7: Load EPA data
Step8: Fill nan's with 0
Step9: Correct EPA facility emissions
Step10: Adjust CO2 emissions where we have a CO2 ratio value
Step11: Look back at this to ensure that I'm correctly accounting for edge cases
Step12: If CEMS reported CO2 emissions are 0 but heat inputs are >0 and calculated CO2 emissions are >0, change the adjusted CO2 to NaN. These NaN values will be replaced by the calculated value later. Do the same for low index records (<300 g/kWh). If there is a valid CO2 ratio, multiply the adjusted CO2 column by the CO2 ratio.
Step13: Emissions and gen not captured by facilities
Step14: Create a new df that groups the facility data into more general fuel types that match up with the EIA generation and fuel use totals.
Step15: Extra generation and fuel use
Step16: Calculate extra electric fuel CO2 emissions
Step17: We need to approximate some of the emission factors because the state-level EIA data is only available in the bulk download at an aggregated level. Natural gas usually makes up the bulk of this extra electric generation/fuel use (consumption not reported by facilities, estimated by EIA), and it is still a single fuel here.
Step18: Add EPA facility-level emissions back to the EIA facility df, use EIA emissions where EPA don't exist, add extra EIA emissions for state-level data
Step21: Final index values
Step22: Monthly Index
Step23: Quarterly Index
Step24: Annual Index
Step25: Export to Excel file
Step26: Generation by fuel
Step28: A function to estimate the emissions intensity of each fuel over time, making sure that they add up to the total emissions intensity.
Step29: Apply the function above to each generation dataframe
Step30: Export files
Step31: Export to Excel file
|
<ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from __future__ import division
import matplotlib.pyplot as plt
import seaborn as sns
# import plotly.plotly as py
# import plotly.graph_objs as go
# from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
import pandas as pd
import os
import numpy as np
# init_notebook_mode(connected=True)
import datetime as dt
def add_datetime(df, year='year', month='month'):
df['datetime'] = pd.to_datetime(df[year].astype(str) + '-' + df[month].astype(str),
format='%Y-%m')
def add_quarter(df, year='year', month='month'):
add_datetime(df, year, month)
df['quarter'] = df['datetime'].dt.quarter
path = os.path.join('Facility gen fuels and CO2.csv')
eia_facility = pd.read_csv(path, parse_dates=['datetime'], low_memory=False)
cols = ['all fuel fossil CO2 (kg)','elec fuel fossil CO2 (kg)',
'all fuel total CO2 (kg)','elec fuel total CO2 (kg)', 'generation (MWh)']
eia_facility_grouped = eia_facility.groupby(['year', 'month', 'plant id'])[cols].sum()
eia_facility_grouped.reset_index(inplace=True)
eia_facility_grouped['CO2 ratio'] = eia_facility_grouped['elec fuel fossil CO2 (kg)'] / eia_facility_grouped['all fuel total CO2 (kg)']
eia_facility_grouped['CO2 ratio'].fillna(0, inplace=True)
eia_facility_grouped.head()
path = os.path.join('EIA country-wide gen fuel CO2.csv')
eia_total = pd.read_csv(path, parse_dates=['datetime'], low_memory=False)
eia_total['type'].unique()
keep_types = [u'WWW', u'WND', u'WAS', u'SUN', 'DPV', u'NUC', u'NG',
u'PEL', u'PC', u'OTH', u'COW', u'OOG', u'HPS', u'HYC', u'GEO']
keep_cols = ['generation (MWh)', 'total fuel (mmbtu)', 'elec fuel (mmbtu)',
'all fuel CO2 (kg)', 'elec fuel CO2 (kg)']
eia_total_monthly = eia_total.loc[(eia_total['type'].isin(keep_types))].groupby(['type', 'year', 'month'])[keep_cols].sum()
eia_total_monthly.head()
keep_types = [u'WWW', u'WND', u'WAS', u'TSN', u'NUC', u'NG',
u'PEL', u'PC', u'OTH', u'COW', u'OOG', u'HPS', u'HYC', u'GEO']
eia_total_annual = eia_total_monthly.reset_index().groupby('year').sum()
eia_total_annual['index (g/kWh)'] = eia_total_annual['elec fuel CO2 (kg)'] / eia_total_annual['generation (MWh)']
path = os.path.join('Monthly EPA emissions.csv')
epa = pd.read_csv(path)
add_quarter(epa, year='YEAR', month='MONTH')
epa.head()
epa.loc[:,'CO2_MASS (kg)'].fillna(0, inplace=True)
eia_keep = ['month', 'year', 'all fuel total CO2 (kg)', 'CO2 ratio', 'plant id']
epa_adj = epa.merge(eia_facility_grouped[eia_keep], left_on=['ORISPL_CODE', 'YEAR', 'MONTH'],
right_on=['plant id', 'year', 'month'], how='inner') # how='left
epa_adj.drop(['month', 'year', 'plant id'], axis=1, inplace=True)
epa_adj['epa index'] = epa_adj.loc[:,'CO2_MASS (kg)'] / epa_adj.loc[:,'GLOAD (MW)']
epa_adj.head()
sns.jointplot('CO2_MASS (kg)', 'all fuel total CO2 (kg)', epa_adj, marker='.')
# Calaculated with an "inner" merge of the dataframes
for year in range(2001, 2017):
total_co2 = epa_adj.loc[epa_adj['YEAR']==year, 'CO2_MASS (kg)'].sum()
union_co2 = epa_adj.loc[(epa_adj['YEAR']==year) &
~(epa_adj['CO2 ratio'].isnull()), 'CO2_MASS (kg)'].sum()
missing = total_co2 - union_co2
print year, '{:.3%}'.format(union_co2/total_co2), 'accounted for', \
missing/1000, 'metric tons missing'
epa_adj['adj CO2 (kg)'] = epa_adj.loc[:,'CO2_MASS (kg)']
epa_adj.loc[~(epa_adj['CO2_MASS (kg)']>0) &
(epa_adj['HEAT_INPUT (mmBtu)']>0) &
(epa_adj['all fuel total CO2 (kg)']>0), 'adj CO2 (kg)'] = np.nan
epa_adj.loc[(epa_adj['epa index']<300) &
(epa_adj['HEAT_INPUT (mmBtu)']>0) &
(epa_adj['all fuel total CO2 (kg)']>0), 'adj CO2 (kg)'] = np.nan
epa_adj.loc[epa_adj['CO2 ratio'].notnull(), 'adj CO2 (kg)'] *= epa_adj.loc[epa_adj['CO2 ratio'].notnull(), 'CO2 ratio']
for year in range(2001,2017):
num_missing = len(epa_adj.loc[(epa_adj['adj CO2 (kg)'].isnull()) &
(epa_adj['YEAR']==year), 'ORISPL_CODE'].unique())
total = len(epa_adj.loc[epa_adj['YEAR']==year, 'ORISPL_CODE'].unique())
print 'In', str(year) + ',', num_missing, 'plants missing some data out of', total
eia_facility['fuel'].unique()
# OG and BFG are included in Other because I've included OOG in Other below
# Pet liquids and pet coke are included here because they line up with how the state-level
# EIA data are reported
facility_fuel_cats = {'COW' : ['SUB','BIT','LIG', 'WC','SC','RC','SGC'],
'NG' : ['NG'],
'PEL' : ['DFO', 'RFO', 'KER', 'JF', 'PG', 'WO', 'SGP'],
'PC' : ['PC'],
'HYC' : ['WAT'],
'HPS' : [],
'GEO' : ['GEO'],
'NUC' : ['NUC'],
'OOG' : ['BFG', 'OG', 'LFG'],
'OTH' : ['OTH', 'MSN', 'MSW', 'PUR', 'TDF', 'WH'],
'SUN' : ['SUN'],
'DPV' : [],
'WAS' : ['OBL', 'OBS', 'OBG', 'MSB', 'SLW'],
'WND' : ['WND'],
'WWW' : ['WDL', 'WDS', 'AB', 'BLQ']
}
eia_facility_fuel = eia_facility.copy()
for key in facility_fuel_cats.keys():
eia_facility_fuel.loc[eia_facility_fuel['fuel'].isin(facility_fuel_cats[key]),'type'] = key
eia_facility_fuel = eia_facility_fuel.groupby(['type', 'year', 'month']).sum()
# eia_facility_fuel.reset_index(inplace=True)
eia_facility_fuel.head()
eia_total_monthly.head()
iterables = [eia_total_monthly.index.levels[0], range(2001, 2017), range(1, 13)]
index = pd.MultiIndex.from_product(iterables=iterables, names=['type', 'year', 'month'])
eia_extra = pd.DataFrame(index=index, columns=['total fuel (mmbtu)', 'generation (MWh)',
'elec fuel (mmbtu)'])
idx = pd.IndexSlice
use_columns=['total fuel (mmbtu)', 'generation (MWh)',
'elec fuel (mmbtu)']
eia_extra = (eia_total_monthly.loc[idx[:,:,:], use_columns] -
eia_facility_fuel.loc[idx[:,:,:], use_columns])
# I have lumped hydro pumped storage in with conventional hydro in the facility data.
# Because of this, I need to add HPS rows so that the totals will add up correctly.
# Also need to add DPV because it won't show up otherwise
eia_extra.loc[idx[['HPS', 'DPV'],:,:], use_columns] = eia_total_monthly.loc[idx[['HPS', 'DPV'],:,:], use_columns]
# eia_extra = eia_extra.loc[idx[:, 2003:, :],:]
eia_extra.head()
eia_extra.loc[idx['DPV',:,:]]
path = os.path.join('Final emission factors.csv')
ef = pd.read_csv(path, index_col=0)
fuel_factors = {'NG' : ef.loc['NG', 'Fossil Factor'],
'PEL': ef.loc[['DFO', 'RFO'], 'Fossil Factor'].mean(),
'PC' : ef.loc['PC', 'Fossil Factor'],
'COW' : ef.loc[['BIT', 'SUB'], 'Fossil Factor'].mean(),
'OOG' : ef.loc['OG', 'Fossil Factor']}
# Start with 0 emissions in all rows
# For fuels where we have an emission factor, replace the 0 with the calculated value
eia_extra['all fuel CO2 (kg)'] = 0
eia_extra['elec fuel CO2 (kg)'] = 0
for fuel in fuel_factors.keys():
eia_extra.loc[idx[fuel,:,:],'all fuel CO2 (kg)'] = \
eia_extra.loc[idx[fuel,:,:],'total fuel (mmbtu)'] * fuel_factors[fuel]
eia_extra.loc[idx[fuel,:,:],'elec fuel CO2 (kg)'] = \
eia_extra.loc[idx[fuel,:,:],'elec fuel (mmbtu)'] * fuel_factors[fuel]
# eia_extra.reset_index(inplace=True)
# add_quarter(eia_extra)
eia_extra.loc[idx['NG',:,:],].tail()
epa_cols = ['ORISPL_CODE', 'YEAR', 'MONTH', 'adj CO2 (kg)']
final_co2_gen = eia_facility_grouped.merge(epa_adj.loc[:,epa_cols], left_on=['plant id', 'year', 'month'],
right_on=['ORISPL_CODE', 'YEAR', 'MONTH'], how='left')
final_co2_gen.drop(['ORISPL_CODE', 'YEAR', 'MONTH'], axis=1, inplace=True)
final_co2_gen['final CO2 (kg)'] = final_co2_gen['adj CO2 (kg)']
final_co2_gen.loc[final_co2_gen['final CO2 (kg)'].isnull(), 'final CO2 (kg)'] = final_co2_gen.loc[final_co2_gen['final CO2 (kg)'].isnull(), 'elec fuel fossil CO2 (kg)']
add_quarter(final_co2_gen)
final_co2_gen.head()
def g2lb(df):
    """Convert g/kWh to lb/MWh and add a column to the df."""
kg2lb = 2.2046
df['index (lb/MWh)'] = df['index (g/kWh)'] * kg2lb
def change_since_2005(df):
    """Calculate the % difference from 2005 and add as a column in the df."""
# first calculate the index in 2005
index_2005 = ((df.loc[df['year']==2005,'index (g/kWh)'] *
df.loc[df['year']==2005,'generation (MWh)']) /
df.loc[df['year']==2005,'generation (MWh)'].sum()).sum()
# Calculated index value in 2005 is 599.8484560355034
# If the value above is different throw an error
if (index_2005 > 601) or (index_2005 < 599.5):
raise ValueError('Calculated 2005 index value', index_2005,
'is outside expected range. Expected value is 599.848')
if type(index_2005) != float:
raise TypeError('index_2005 is', type(index_2005), 'rather than a float.')
df['change since 2005'] = (df['index (g/kWh)'] - index_2005) / index_2005
monthly_index = final_co2_gen.groupby(['year', 'month'])['generation (MWh)', 'final CO2 (kg)'].sum()
monthly_index.reset_index(inplace=True)
# Add extra generation and emissions not captured by facility-level data
monthly_index.loc[:,'final CO2 (kg)'] += eia_extra.reset_index().groupby(['year', 'month'])['elec fuel CO2 (kg)'].sum().values
monthly_index.loc[:,'generation (MWh)'] += eia_extra.reset_index().groupby(['year', 'month'])['generation (MWh)'].sum().values
add_quarter(monthly_index)
monthly_index['index (g/kWh)'] = monthly_index.loc[:, 'final CO2 (kg)'] / monthly_index.loc[:, 'generation (MWh)']
change_since_2005(monthly_index)
g2lb(monthly_index)
monthly_index.dropna(inplace=True)
monthly_index.tail()
path = os.path.join('Data for plots', 'Monthly index.csv')
monthly_index.to_csv(path, index=False)
quarterly_index = monthly_index.groupby(['year', 'quarter'])['generation (MWh)', 'final CO2 (kg)'].sum()
quarterly_index.reset_index(inplace=True)
quarterly_index['index (g/kWh)'] = quarterly_index.loc[:, 'final CO2 (kg)'] / quarterly_index.loc[:, 'generation (MWh)']
quarterly_index['year_quarter'] = quarterly_index['year'].astype(str) + ' Q' + quarterly_index['quarter'].astype(str)
change_since_2005(quarterly_index)
g2lb(quarterly_index)
quarterly_index.tail()
path = os.path.join('Data for plots', 'Quarterly index.csv')
quarterly_index.to_csv(path, index=False)
annual_index = quarterly_index.groupby('year')['generation (MWh)', 'final CO2 (kg)'].sum()
annual_index.reset_index(inplace=True)
annual_index['index (g/kWh)'] = annual_index.loc[:, 'final CO2 (kg)'] / annual_index.loc[:, 'generation (MWh)']
change_since_2005(annual_index)
g2lb(annual_index)
annual_index.tail()
path = os.path.join('Data for plots', 'Annual index.csv')
annual_index.to_csv(path, index=False)
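# Quick visual check (added): plot the annual emissions intensity index over time.
fig, ax = plt.subplots()
annual_index.plot(x='year', y='index (g/kWh)', ax=ax, legend=False)
ax.set_ylabel('g CO2 / kWh')
plt.show()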
'US POWER SECTOR CO2 EMISSIONS INTENSITY'.title()
path = os.path.join('..', 'Calculated values', 'US Power Sector CO2 Emissions Intensity.xlsx')
writer = pd.ExcelWriter(path)
monthly_index.to_excel(writer, sheet_name='Monthly', index=False)
quarterly_index.to_excel(writer, sheet_name='Quarterly', index=False)
annual_index.to_excel(writer, sheet_name='Annual', index=False)
writer.save()
fuel_cats = {'Coal' : [u'COW'],
'Natural Gas' : [u'NG'],
'Nuclear' : ['NUC'],
'Renewables' : [u'GEO', u'HYC', u'SUN', 'DPV',
u'WAS', u'WND', u'WWW'],
'Other' : [u'OOG', u'PC', u'PEL', u'OTH', u'HPS']
}
keep_types = [u'WWW', u'WND', u'WAS', u'SUN', 'DPV', u'NUC', u'NG',
u'PEL', u'PC', u'OTH', u'COW', u'OOG', u'HPS', u'HYC', u'GEO']
eia_gen_monthly = eia_total.loc[eia_total['type'].isin(keep_types)].groupby(['type', 'year', 'month']).sum()
eia_gen_monthly.reset_index(inplace=True)
eia_gen_monthly.drop(['end', 'sector', 'start'], inplace=True, axis=1)
for key, values in fuel_cats.iteritems():
eia_gen_monthly.loc[eia_gen_monthly['type'].isin(values),'fuel category'] = key
eia_gen_monthly = eia_gen_monthly.groupby(['fuel category', 'year', 'month']).sum()
eia_gen_monthly.reset_index(inplace=True)
add_quarter(eia_gen_monthly)
eia_gen_quarterly = eia_gen_monthly.groupby(['fuel category', 'year', 'quarter']).sum()
eia_gen_quarterly.reset_index(inplace=True)
eia_gen_quarterly['year_quarter'] = (eia_gen_quarterly['year'].astype(str) +
' Q' + eia_gen_quarterly['quarter'].astype(str))
eia_gen_quarterly.drop('month', axis=1, inplace=True)
eia_gen_annual = eia_gen_monthly.groupby(['fuel category', 'year']).sum()
eia_gen_annual.reset_index(inplace=True)
eia_gen_annual.drop(['month', 'quarter'], axis=1, inplace=True)
def generation_index(gen_df, index_df, group_by='year'):
    """Calculate the emissions intensity of each fuel in each time period. Use the
    adjusted total emissions from the index dataframe to ensure that the weighted
    sum of fuel emission intensities will equal the total index value.
    """
final_adj_co2 = index_df.loc[:,'final CO2 (kg)'].copy()
calc_total_co2 = gen_df.groupby(group_by)['elec fuel CO2 (kg)'].sum().values
# calc_total_co2 = calc_total_co2.reset_index()
for fuel in gen_df['fuel category'].unique():
gen_df.loc[gen_df['fuel category']==fuel, 'adjusted CO2 (kg)'] = (gen_df.loc[gen_df['fuel category']==fuel,
'elec fuel CO2 (kg)'] /
calc_total_co2 *
final_adj_co2.values)
gen_df['adjusted index (g/kWh)'] = gen_df['adjusted CO2 (kg)'] / gen_df['generation (MWh)']
gen_df['adjusted index (lb/MWh)'] = gen_df['adjusted index (g/kWh)'] * 2.2046
generation_index(eia_gen_annual, annual_index, 'year')
generation_index(eia_gen_monthly, monthly_index, ['year', 'month'])
generation_index(eia_gen_quarterly, quarterly_index, 'year_quarter')
eia_gen_annual.head()
path = os.path.join('Data for plots', 'Monthly generation.csv')
eia_gen_monthly.to_csv(path, index=False)
path = os.path.join('Data for plots', 'Quarterly generation.csv')
eia_gen_quarterly.to_csv(path, index=False)
path = os.path.join('Data for plots', 'Annual generation.csv')
eia_gen_annual.to_csv(path, index=False)
path = os.path.join('..', 'Calculated values', 'US Generation By Fuel Type.xlsx')
writer = pd.ExcelWriter(path, engine='xlsxwriter')
eia_gen_monthly.to_excel(writer, sheet_name='Monthly', index=False)
eia_gen_quarterly.to_excel(writer, sheet_name='Quarterly', index=False)
eia_gen_annual.to_excel(writer, sheet_name='Annual', index=False)
writer.save()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this workshop we will code up a genetic algorithm for a simple mathematical optimization problem.
Step11: The optimization problem
Step12: Great. Now that the class and its basic methods is defined, we move on to code up the GA.
Step13: Crossover
Step14: Mutation
Step16: Fitness Evaluation
Step17: Fitness and Elitism
Step18: Putting it all together and making the GA
Step19: Visualize
|
<ASSISTANT_TASK:>
Python Code:
# All the imports
from __future__ import print_function, division
from math import *
import random
import sys
import matplotlib.pyplot as plt
# TODO 1: Enter your unity ID here
__author__ = "dndesai"
class O:
    """
    Basic Class which
    - Helps dynamic updates
    - Pretty Prints
    """
def __init__(self, **kwargs):
self.has().update(**kwargs)
def has(self):
return self.__dict__
def update(self, **kwargs):
self.has().update(kwargs)
return self
def __repr__(self):
show = [':%s %s' % (k, self.has()[k])
for k in sorted(self.has().keys())
if k[0] is not "_"]
txt = ' '.join(show)
if len(txt) > 60:
show = map(lambda x: '\t' + x + '\n', show)
return '{' + ' '.join(show) + '}'
print("Unity ID: ", __author__)
# Few Utility functions
def say(*lst):
    """Print without going to a new line."""
print(*lst, end="")
sys.stdout.flush()
def random_value(low, high, decimals=2):
    """Generate a random number between low and high.
    decimals indicates the number of decimal places.
    """
return round(random.uniform(low, high),decimals)
def gt(a, b): return a > b
def lt(a, b): return a < b
def shuffle(lst):
    """Shuffle a list."""
random.shuffle(lst)
return lst
class Decision(O):
    """Class indicating Decision of a problem."""
def __init__(self, name, low, high):
        """
        @param name: Name of the decision
        @param low: minimum value
        @param high: maximum value
        """
O.__init__(self, name=name, low=low, high=high)
class Objective(O):
    """Class indicating Objective of a problem."""
def __init__(self, name, do_minimize=True):
        """
        @param name: Name of the objective
        @param do_minimize: Flag indicating if objective has to be minimized or maximized
        """
O.__init__(self, name=name, do_minimize=do_minimize)
class Point(O):
    """Represents a member of the population."""
def __init__(self, decisions):
O.__init__(self)
self.decisions = decisions
self.objectives = None
def __hash__(self):
return hash(tuple(self.decisions))
def __eq__(self, other):
return self.decisions == other.decisions
def clone(self):
new = Point(self.decisions)
new.objectives = self.objectives
return new
class Problem(O):
    """Class representing the cone problem."""
def __init__(self):
O.__init__(self)
# TODO 2: Code up decisions and objectives below for the problem
# using the auxilary classes provided above.
self.decisions = [Decision('r',0,10), Decision('h',0,20)]
self.objectives = [Objective('S', True), Objective('T', True)]
@staticmethod
def evaluate(point):
[r, h] = point.decisions
l = (r**2 + h**2)**0.5
sa = pi*r*l
ta = sa + pi * r**2
point.objectives = [sa, ta]
# TODO 3: Evaluate the objectives S and T for the point.
return point.objectives
@staticmethod
def is_valid(point):
[r, h] = point.decisions
# TODO 4: Check if the point has valid decisions
return pi * r**2 * h / 3.0 > 200
def generate_one(self):
# TODO 5: Generate a valid instance of Point.
while (True):
mypoint = Point([random_value(d.low, d.high) for d in self.decisions])
if Problem.is_valid(mypoint):
return mypoint
def populate(problem, size):
population = []
# TODO 6: Create a list of points of length 'size'
population = [problem.generate_one() for _ in xrange(size)]
return population
def crossover(mom, dad):
# TODO 7: Create a new point which contains decisions from
# the first half of mom and second half of dad
n = len(mom.decisions)
return Point(mom.decisions[:n//2] + dad.decisions[n//2:])
def mutate(problem, point, mutation_rate=0.01):
# TODO 8: Iterate through all the decisions in the point
# and if the probability is less than mutation rate
# change the decision(randomly set it between its max and min).
for i, d in enumerate(point.decisions):
        if random.random() < mutation_rate:
            point.decisions[i] = random_value(problem.decisions[i].low,
                                              problem.decisions[i].high)
return point
def bdom(problem, one, two):
    """Return if one dominates two."""
objs_one = problem.evaluate(one)
objs_two = problem.evaluate(two)
dominates = False
# TODO 9: Return True/False based on the definition
# of bdom above.
eq_or_better = True
for i, o in enumerate(objs_one):
eq_or_better = o <= objs_two[i] if problem.objectives[i].do_minimize else o >= objs_two[i]
if not eq_or_better:
return False
if not dominates:
dominates = o < objs_two[i] if problem.objectives[i].do_minimize else o > objs_two[i]
return dominates
def fitness(problem, population, point):
dominates = 0
# TODO 10: Evaluate fitness of a point.
# For this workshop define fitness of a point
# as the number of points dominated by it.
# For example point dominates 5 members of population,
# then fitness of point is 5.
for another in population:
if bdom(problem, point, another):
dominates += 1
return dominates
def elitism(problem, population, retain_size):
# TODO 11: Sort the population with respect to the fitness
# of the points and return the top 'retain_size' points of the population
fitlist = [fitness(problem, population, p) for p in population]
    # sort by the fitness value only, so Point objects are never compared directly
    new_pop = [y for x, y in sorted(zip(fitlist, population), reverse=True,
                                    key=lambda pair: pair[0])]
return new_pop[:retain_size]
def ga(pop_size = 100, gens = 250):
problem = Problem()
population = populate(problem, pop_size)
[problem.evaluate(point) for point in population]
initial_population = [point.clone() for point in population]
gen = 0
while gen < gens:
say(".")
children = []
for _ in range(pop_size):
mom = random.choice(population)
dad = random.choice(population)
while (mom == dad):
dad = random.choice(population)
child = mutate(problem, crossover(mom, dad))
if problem.is_valid(child) and child not in population+children:
children.append(child)
population += children
population = elitism(problem, population, pop_size)
gen += 1
print("")
return initial_population, population
def plot_pareto(initial, final):
initial_objs = [point.objectives for point in initial]
final_objs = [point.objectives for point in final]
initial_x = [i[0] for i in initial_objs]
initial_y = [i[1] for i in initial_objs]
final_x = [i[0] for i in final_objs]
final_y = [i[1] for i in final_objs]
plt.scatter(initial_x, initial_y, color='b', marker='+', label='initial')
plt.scatter(final_x, final_y, color='r', marker='o', label='final')
plt.title("Scatter Plot between initial and final population of GA")
plt.ylabel("Total Surface Area(T)")
plt.xlabel("Curved Surface Area(S)")
plt.legend(loc=9, bbox_to_anchor=(0.5, -0.175), ncol=2)
plt.show()
initial, final = ga()
plot_pareto(initial, final)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Loading images with tf.data
Step2: Downloading and inspecting the dataset
Step3: After the 218MB download you should have a copy of the flower photos available.
Step4: Inspecting the images
Step5: Determining the label for each image
Step6: Assign an index to each label.
Step7: Create a list of the files and their label indices.
Step8: Loading and formatting the images
Step9: Here is the raw data.
Step10: Decode it into an image tensor.
Step11: Resize it to fit the model.
Step12: Wrap these steps in a simple function for later use.
Step13: Building a tf.data.Dataset
Step14: The two fields output_shapes and output_types describe the contents of the elements in the dataset. In this case, it is a set of scalar binary strings.
Step15: By mapping preprocess_image over the dataset of file paths, we create a new dataset that loads and formats the images at runtime.
Step16: A dataset of (image, label) pairs
Step17: Since these datasets are in the same order, zipping them gives a dataset of (image, label) pairs.
Step18: The shapes and types of the new dataset are tuples of shapes and types describing each field.
Step19: Note: when you have arrays such as all_image_labels and all_image_paths, an alternative to the tf.data.Dataset.zip method is to slice the pair of arrays.
Step20: Basic methods for training
Step21: There are a few things to note here.
Step22: Piping the dataset to a model
Step23: This model expects its input to be normalized to the [-1,1] range.
Step24: MobileNet returns a 6x6 spatial grid of features for each image.
Step25: Build a model wrapped around MobileNet, and use tf.keras.layers.GlobalAveragePooling2D to average over the spatial dimensions before the tf.keras.layers.Dense output layer.
Step26: The output has the expected shape.
Step27: Compile the model to describe the training procedure.
Step28: There are two trainable variables: the Dense layer's weights and bias.
Step29: Train the model.
Step30: Performance
Step31: The performance of the current dataset is as follows.
Step32: Caching
Step33: One disadvantage of using an in-memory cache is that the cache must be rebuilt on every run, causing the same startup delay each time the dataset is started.
Step34: If the data does not fit in memory, use a cache file.
Step35: A cache file also has the advantage that the dataset can be restarted without rebuilding the cache. Let's see how much faster the second run is.
Step36: TFRecord files
Step37: Next, build a dataset that reads the TFRecord file and decodes/reformats the images using the preprocess_image function defined earlier.
Step38: Zip this with the label dataset defined earlier to get the expected (image, label) pairs.
Step39: This is slower than the cache version because the preprocessing has not been cached.
Step40: Instead of a dataset of .jpeg strings, this is a dataset of tensors.
Step41: Because the preprocessing was cached, the data can be loaded very efficiently from the TFRecord file. Just remember to deserialize the tensors before using them.
Step42: Next, add the labels and apply the same standard processing as before.
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow.compat.v1 as tf
tf.enable_eager_execution()  # the rest of this notebook iterates datasets and calls .numpy() on tensors
tf.__version__
AUTOTUNE = tf.data.experimental.AUTOTUNE
import pathlib
data_root = tf.keras.utils.get_file('flower_photos','https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz', untar=True)
data_root = pathlib.Path(data_root)
print(data_root)
for item in data_root.iterdir():
print(item)
import random
all_image_paths = list(data_root.glob('*/*'))
all_image_paths = [str(path) for path in all_image_paths]
random.shuffle(all_image_paths)
image_count = len(all_image_paths)
image_count
all_image_paths
import os
attributions = (data_root/"LICENSE.txt").open(encoding='utf-8').readlines()[4:]
attributions = [line.split(' CC-BY') for line in attributions]
attributions = dict(attributions)
import IPython.display as display
def caption_image(image_path):
image_rel = pathlib.Path(image_path).relative_to(data_root)
return "Image (CC BY 2.0) " + ' - '.join(attributions[str(image_rel)].split(' - ')[:-1])
for n in range(3):
image_path = random.choice(all_image_paths)
display.display(display.Image(image_path))
print(caption_image(image_path))
print()
label_names = sorted(item.name for item in data_root.glob('*/') if item.is_dir())
label_names
label_to_index = dict((name, index) for index,name in enumerate(label_names))
label_to_index
all_image_labels = [label_to_index[pathlib.Path(path).parent.name]
for path in all_image_paths]
print("First 10 labels indices: ", all_image_labels[:10])
img_path = all_image_paths[0]
img_path
img_raw = tf.read_file(img_path)
print(repr(img_raw)[:100]+"...")
img_tensor = tf.image.decode_image(img_raw)
print(img_tensor.shape)
print(img_tensor.dtype)
img_final = tf.image.resize_images(img_tensor, [192, 192])
img_final = img_final/255.0
print(img_final.shape)
print(img_final.numpy().min())
print(img_final.numpy().max())
def preprocess_image(image):
image = tf.image.decode_jpeg(image, channels=3)
image = tf.image.resize_images(image, [192, 192])
image /= 255.0 # normalize to [0,1] range
return image
def load_and_preprocess_image(path):
image = tf.read_file(path)
return preprocess_image(image)
import matplotlib.pyplot as plt
image_path = all_image_paths[0]
label = all_image_labels[0]
plt.imshow(load_and_preprocess_image(img_path))
plt.grid(False)
plt.xlabel(caption_image(img_path))
plt.title(label_names[label].title())
print()
path_ds = tf.data.Dataset.from_tensor_slices(all_image_paths)
print('shape: ', repr(path_ds.output_shapes))
print('type: ', path_ds.output_types)
print()
print(path_ds)
image_ds = path_ds.map(load_and_preprocess_image, num_parallel_calls=AUTOTUNE)
import matplotlib.pyplot as plt
plt.figure(figsize=(8,8))
for n,image in enumerate(image_ds.take(4)):
plt.subplot(2,2,n+1)
plt.imshow(image)
plt.grid(False)
plt.xticks([])
plt.yticks([])
plt.xlabel(caption_image(all_image_paths[n]))
label_ds = tf.data.Dataset.from_tensor_slices(tf.cast(all_image_labels, tf.int64))
for label in label_ds.take(10):
print(label_names[label.numpy()])
image_label_ds = tf.data.Dataset.zip((image_ds, label_ds))
print('image shape: ', image_label_ds.output_shapes[0])
print('label shape: ', image_label_ds.output_shapes[1])
print('types: ', image_label_ds.output_types)
print()
print(image_label_ds)
ds = tf.data.Dataset.from_tensor_slices((all_image_paths, all_image_labels))
# The tuples are unpacked into the positional arguments of the mapped function
def load_and_preprocess_from_path_label(path, label):
return load_and_preprocess_image(path), label
image_label_ds = ds.map(load_and_preprocess_from_path_label)
image_label_ds
BATCH_SIZE = 32
# Setting the shuffle buffer size as large as the dataset ensures that the data is
# completely shuffled.
ds = image_label_ds.shuffle(buffer_size=image_count)
ds = ds.repeat()
ds = ds.batch(BATCH_SIZE)
# `prefetch` lets the dataset fetch batches in the background while the model is training.
ds = ds.prefetch(buffer_size=AUTOTUNE)
ds
ds = image_label_ds.apply(
tf.data.experimental.shuffle_and_repeat(buffer_size=image_count))
ds = ds.batch(BATCH_SIZE)
ds = ds.prefetch(buffer_size=AUTOTUNE)
ds
mobile_net = tf.keras.applications.MobileNetV2(input_shape=(192, 192, 3), include_top=False)
mobile_net.trainable=False
def change_range(image,label):
return 2*image-1, label
keras_ds = ds.map(change_range)
# The dataset takes a few seconds to start while it fills its shuffle buffer.
image_batch, label_batch = next(iter(keras_ds))
feature_map_batch = mobile_net(image_batch)
print(feature_map_batch.shape)
model = tf.keras.Sequential([
mobile_net,
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(len(label_names))])
logit_batch = model(image_batch).numpy()
print("min logit:", logit_batch.min())
print("max logit:", logit_batch.max())
print()
print("Shape:", logit_batch.shape)
model.compile(optimizer=tf.train.AdamOptimizer(),
loss=tf.keras.losses.sparse_categorical_crossentropy,
metrics=["accuracy"])
len(model.trainable_variables)
model.summary()
steps_per_epoch=tf.ceil(len(all_image_paths)/BATCH_SIZE).numpy()
steps_per_epoch
model.fit(ds, epochs=1, steps_per_epoch=3)
import time
def timeit(ds, batches=2*steps_per_epoch+1):
overall_start = time.time()
    # Fetch one batch before starting the timer, to initialize the pipeline
    # (fill the shuffle buffer)
it = iter(ds.take(batches+1))
next(it)
start = time.time()
for i,(images,labels) in enumerate(it):
if i%10 == 0:
print('.',end='')
print()
end = time.time()
duration = end-start
print("{} batches: {} s".format(batches, duration))
print("{:0.5f} Images/s".format(BATCH_SIZE*batches/duration))
print("Total time: {}s".format(end-overall_start))
ds = image_label_ds.apply(
tf.data.experimental.shuffle_and_repeat(buffer_size=image_count))
ds = ds.batch(BATCH_SIZE).prefetch(buffer_size=AUTOTUNE)
ds
timeit(ds)
ds = image_label_ds.cache()
ds = ds.apply(
tf.data.experimental.shuffle_and_repeat(buffer_size=image_count))
ds = ds.batch(BATCH_SIZE).prefetch(buffer_size=AUTOTUNE)
ds
timeit(ds)
timeit(ds)
ds = image_label_ds.cache(filename='./cache.tf-data')
ds = ds.apply(
tf.data.experimental.shuffle_and_repeat(buffer_size=image_count))
ds = ds.batch(BATCH_SIZE).prefetch(1)
ds
timeit(ds)
timeit(ds)
image_ds = tf.data.Dataset.from_tensor_slices(all_image_paths).map(tf.read_file)
tfrec = tf.data.experimental.TFRecordWriter('images.tfrec')
tfrec.write(image_ds)
image_ds = tf.data.TFRecordDataset('images.tfrec').map(preprocess_image)
ds = tf.data.Dataset.zip((image_ds, label_ds))
ds = ds.apply(
tf.data.experimental.shuffle_and_repeat(buffer_size=image_count))
ds=ds.batch(BATCH_SIZE).prefetch(AUTOTUNE)
ds
timeit(ds)
paths_ds = tf.data.Dataset.from_tensor_slices(all_image_paths)
image_ds = paths_ds.map(load_and_preprocess_image)
image_ds
ds = image_ds.map(tf.serialize_tensor)
ds
tfrec = tf.data.experimental.TFRecordWriter('images.tfrec')
tfrec.write(ds)
RESTORE_TYPE = image_ds.output_types
RESTORE_SHAPE = image_ds.output_shapes
ds = tf.data.TFRecordDataset('images.tfrec')
def parse(x):
result = tf.parse_tensor(x, out_type=RESTORE_TYPE)
result = tf.reshape(result, RESTORE_SHAPE)
return result
ds = ds.map(parse, num_parallel_calls=AUTOTUNE)
ds
ds = tf.data.Dataset.zip((ds, label_ds))
ds = ds.apply(
tf.data.experimental.shuffle_and_repeat(buffer_size=image_count))
ds=ds.batch(BATCH_SIZE).prefetch(AUTOTUNE)
ds
timeit(ds)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we use a model atmosphere with a lower alpha enhancement
Step2: And a model atmosphere with a higher alpha enhancement
Step3: Compare
|
<ASSISTANT_TASK:>
Python Code:
atm= atlas9.Atlas9Atmosphere(teff=3500.,logg=2.5,metals=0.,am=0.,cm=0.)
synspec_correct= apogee.modelspec.turbospec.synth(modelatm=atm,
linelist='20150714',
lsf='all',cont='true',vmacro=0.,
isotopes='arcturus')
atm2= atlas9.Atlas9Atmosphere(teff=3500.,logg=2.5,metals=0.,am=-0.25,cm=0.)
synspec_minus= apogee.modelspec.turbospec.synth([8.,0.25],[12,0.25],[14.,0.25],[16.,0.25],[20.,0.25],[22.,0.25],
modelatm=atm2,
linelist='20150714',
lsf='all',cont='true',vmacro=0.,
isotopes='arcturus')
atm3= atlas9.Atlas9Atmosphere(teff=3500.,logg=2.5,metals=0.,am=+0.25,cm=0.)
synspec_plus= apogee.modelspec.turbospec.synth([8.,-0.25],[12,-0.25],[14.,-0.25],[16.,-0.25],[20.,-0.25],[22.,-0.25],
modelatm=atm3,
linelist='20150714',
lsf='all',cont='true',vmacro=0.,
isotopes='arcturus')
for panel in apogee.spec.plot.highres(synspec_minus[0]-synspec_correct[0],synspec_plus[0]-synspec_correct[0],
yrange=[-0.2,0.2],color=['r','b'],labelLines=True,cleanZero=False,fig_width=14.):
show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Set source and destination paths
Step2: 2. Instantiate a BatchGenerator
Step3: 3. Set the processing parameters and start the processing
|
<ASSISTANT_TASK:>
Python Code:
from batch_generator import BatchGenerator
from cityscapesscripts.helpers.labels import IDS_TO_TRAINIDS_ARRAY
# The directories that contain the train, val, and test images
train_images = '../../datasets/Cityscapes/leftImg8bit/train/'
train_extra_images = '../../datasets/Cityscapes/leftImg8bit/train_extra/'
val_images = '../../datasets/Cityscapes/leftImg8bit/val/'
test_images = '../../datasets/Cityscapes/leftImg8bit/test/'
# The directories that contain the train and val ground truth images
train_gt = '../../datasets/Cityscapes/gtFine/train/'
train_extra_gt = '../../datasets/Cityscapes/gtCoarse/train_extra/'
val_gt = '../../datasets/Cityscapes/gtFine/val/'
# Define which of the above to pass to the `BatchGenerator` constructor.
image_dirs = [train_images, val_images]
ground_truth_dirs = [train_gt, val_gt]
export_dir = '../../datasets/Cityscapes_small/' # The directory into which you want to export the processed images
root_dir = '../../datasets/Cityscapes/' # The root directory of the dataset.
offline_processor = BatchGenerator(image_dirs=image_dirs,
image_file_extension='png',
ground_truth_dirs=ground_truth_dirs,
image_name_split_separator='leftImg8bit',
ground_truth_suffix='gtFine_labelIds',
check_existence=True,
root_dir=root_dir,
export_dir=export_dir)
num_files = offline_processor.get_num_files()
print("Total number of files in all datasets: ", num_files)
offline_processor.process_all(convert_colors_to_ids=False,
convert_ids_to_ids=IDS_TO_TRAINIDS_ARRAY, # <-- Convert the class IDs to different ones.
convert_to_one_hot=False,
void_class_id=None,
random_crop=False,
crop=False,
resize=(256, 512), # <-- Resize the images.
brightness=False,
flip=False,
translate=False,
scale=False,
gray=False,
to_disk=True) # <-- Save the processed images to disk.
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Load, examine, and plot the data
Step2: 3. Fit (overlay?) an arbitrary straight line
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
# The data file is a txt in which each line
# contains two numbers separated by a comma.
# The first column represents x and the second column y
fname = 'data1.txt'
data = np.loadtxt(fname, delimiter = ',')
N = data.shape[0] # number of rows, hence the number of examples
X = data[:, 0]
y = data[:, 1]
# Take a quick look at the array dimensions
print('Shape of the data array:', data.shape)
print('Shape of the X array:', X.shape)
# plot the data
plt.plot(X, y, 'rx')
plt.xlim(-1, 10)
plt.ylim(-1, 6)
plt.title(fname)
plt.xlabel('x')
plt.ylabel('y')
plt.show()
# guess values for the weight vector
w = np.array((1,0.5))
# plot the data (X, y)
plt.plot(X, y, 'rx')
# plot the line
XX = np.vstack(list(zip(np.ones(N), X)))
plt.plot(X, XX.dot(w), '-')
# other plot settings
plt.xlim(-1, 10)
plt.ylim(-1, 6)
plt.title( fname + ' and the line %.1f+%.1fx'%(w[0],w[1]) )
plt.xlabel('x')
plt.ylabel('y')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plot the different time series and PSDs
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import fit_iir_model_raw
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
proj_fname = data_path + '/MEG/sample/sample_audvis_ecg-proj.fif'
raw = mne.io.read_raw_fif(raw_fname)
proj = mne.read_proj(proj_fname)
raw.info['projs'] += proj
raw.info['bads'] = ['MEG 2443', 'EEG 053'] # mark bad channels
# Set up pick list: Gradiometers - bad channels
picks = mne.pick_types(raw.info, meg='grad', exclude='bads')
order = 5 # define model order
picks = picks[:1]
# Estimate AR models on raw data
b, a = fit_iir_model_raw(raw, order=order, picks=picks, tmin=60, tmax=180)
d, times = raw[0, 10000:20000] # look at one channel from now on
d = d.ravel() # make flat vector
innovation = signal.convolve(d, a, 'valid')
d_ = signal.lfilter(b, a, innovation) # regenerate the signal
d_ = np.r_[d_[0] * np.ones(order), d_] # dummy samples to keep signal length
plt.close('all')
plt.figure()
plt.plot(d[:100], label='signal')
plt.plot(d_[:100], label='regenerated signal')
plt.legend()
plt.figure()
plt.psd(d, Fs=raw.info['sfreq'], NFFT=2048)
plt.psd(innovation, Fs=raw.info['sfreq'], NFFT=2048)
plt.psd(d_, Fs=raw.info['sfreq'], NFFT=2048, linestyle='--')
plt.legend(('Signal', 'Innovation', 'Regenerated signal'))
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Get layer data
Step2: Get $p, T, \rho$
Step3: Note that the geopotential height can be provided as a numpy array
Step4: If height is provided as a
Step5: Height computation
Step6: On the other hand, pressure-altitude yields a single value of the geopotential height
Step7: Array-numpy can be used to input more than one temperature or pressure, wich yields a geopoential height numpy array. Note that if more than one height value can be retrieved, then an array of arrays is returned. I.e
Step8: Finally, suppose $h=7500m, \Delta T_{ISA}=0K$ and $h=9000m, \Delta T_{ISA}=+10K$
Step9: Now retrieve geopotential height, indicating the ISA deviation
|
<ASSISTANT_TASK:>
Python Code:
# Import isa library
from pyturb.gas_models import isa
import numpy as np
from matplotlib import pyplot as plt
height = [0, 11000, 20000, 32000, 47000, 51000, 71000, 84852]
for i_layer, h in enumerate(height):
lapse_rate, Tbase, pbase, dbase, heightbase, layer_name = isa.get_atmosdata(h)
print('{5} -{4:>14s} - zbase={3:9.3f}m, alpha={0:9.2e}K/m, Tbase={1:6.2f}K, pbase={2:10.3f}Pa, dbase={6:6.3e}kg/m^3'.format(lapse_rate, Tbase, pbase, heightbase, layer_name, i_layer+1, dbase))
# For geopotential height of 1000m
T0 = isa.temperature_isa(1000)
p0 = isa.pressure_isa(1000)
rho0 = isa.density_state_eq(1000)
print('T=', T0, 'K; p=', p0, 'Pa; rho=', rho0, 'kg/m^3')
# Create a vector from troposphere to mesosphere3 (0m to 84852m).
h = np.linspace(0,84851, 100)
# Temperature:
T = isa.temperature_isa(h)
# Pressure
p = isa.pressure_isa(h)
# Density
d = isa.density_state_eq(h)
plt.figure()
plot_temp = plt.plot(h,T, '-x')
plt.ylabel("Temperature [K]")
plt.xlabel("geopotential height [m]")
fig, eje1 = plt.subplots()
eje2 = eje1.twinx()
plot1, = eje1.plot(h, p, "b-+", label="pressure")
plot2, = eje2.plot(h, d, "r-x", label="density_steq")
eje1.set_xlabel("geopotential height [m]")
eje1.set_ylabel("pressure [Pa]")
eje2.set_ylabel("density kg/m3")
eje1.yaxis.label.set_color(plot1.get_color())
eje2.yaxis.label.set_color(plot2.get_color())
tkw = dict(size=4, width=1.5)
eje1.tick_params(axis='y', colors=plot1.get_color(), **tkw)
eje2.tick_params(axis='y', colors=plot2.get_color(), **tkw)
eje1.tick_params(axis='x', **tkw)
lines = [plot1, plot2]
eje1.legend(lines, [l.get_label() for l in lines])
print('Temperature:')
print('isa_dev by default: T=', isa.temperature_isa([0, 1000, 2000, 3000]))
print('isa_dev=10, T=', isa.temperature_isa([0, 1000, 2000, 3000], 10))
print('isa_dev as array, T=', isa.temperature_isa([0, 1000, 2000, 3000], [5, 10, 15, 20]))
print('Pressure:')
print('isa_dev by default: ', isa.pressure_isa([5000, 8000, 10000, 0]))
print('isa_dev as array, p=', isa.pressure_isa([5000, 8000, 10000, 0], [5, 10, 15, 20]))
isa.height_from_temperature_isa(250)
plt.figure()
plot_temp = plt.plot(h,T, '-x')
plt.plot([0,84000], [250, 250])
plt.ylabel("Temperature [K]")
plt.xlabel("geopotential height [m]")
plt.figure()
plot_temp = plt.plot(h,p, '-x')
plt.ylabel("Pressure [Pa]")
plt.xlabel("geopotential height [m]")
isa.height_from_pressure_isa([90000, 50000, 2500])
isa.height_from_temperature_isa([200, 220, 250, 275])
T75 = isa.temperature_isa(7500)
print(T75)
T90 = isa.temperature_isa(9000, 10)
print(T90)
p75 = isa.pressure_isa(7500)
p90 = isa.pressure_isa(9000,10)
print(p75)
print(p90)
hp75 = isa.height_from_pressure_isa(p75)
hp90 = isa.height_from_pressure_isa(p90, 10)
print(hp75)
print(hp90)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using the cell function, we can create Cell widgets that are directly added to the current sheet.
Step2: Events
Step3: Cell ranges
Step4: Calculations
Step6: Renderers
Step7: If flexx is installed, Python code can be transpiled to JavaScript at runtime.
|
<ASSISTANT_TASK:>
Python Code:
import ipysheet
sheet = ipysheet.sheet()
sheet
sheet = ipysheet.sheet(rows=3, columns=4)
cell1 = ipysheet.cell(0, 0, 'Hello')
cell2 = ipysheet.cell(2, 0, 'World')
cell_value = ipysheet.cell(2,2, 42.)
sheet
import ipywidgets as widgets
sheet = ipysheet.sheet(rows=3, columns=2, column_headers=False, row_headers=False)
cell_a = ipysheet.cell(0, 1, 1, label_left='a')
cell_b = ipysheet.cell(1, 1, 2, label_left='b')
cell_sum = ipysheet.cell(2, 1, 3, label_left='sum', read_only=True)
# create a slider linked to cell a
slider = widgets.FloatSlider(min=-10, max=10, description='a')
widgets.jslink((cell_a, 'value'), (slider, 'value'))
# changes in a or b should trigger this function
def calculate(change):
cell_sum.value = cell_a.value + cell_b.value
cell_a.observe(calculate, 'value')
cell_b.observe(calculate, 'value')
widgets.VBox([sheet, slider])
sheet = ipysheet.sheet(rows=5, columns=4)
row = ipysheet.row(0, [0, 1, 2, 3], background_color="red")
column = ipysheet.column(1, ["a", "b", "c", "d"], row_start=1, background_color="green")
cells = ipysheet.cell_range([["hi", "ola"], ["ciao", "bonjour"], ["hallo", "guten tag"]],
row_start=1, column_start=2, background_color="yellow")
sheet
import ipywidgets as widgets
sheet = ipysheet.sheet(rows=3, columns=2, column_headers=False, row_headers=False)
cell_a = ipysheet.cell(0, 1, 1, label_left='a')
cell_b = ipysheet.cell(1, 1, 2, label_left='b')
cell_sum = ipysheet.cell(2, 1, 3, label_left='sum', read_only=True)
# create a slider linked to cell a
slider = widgets.FloatSlider(min=-10, max=10, description='a')
widgets.jslink((cell_a, 'value'), (slider, 'value'))
@ipysheet.calculation(inputs=[cell_a, cell_b], output=cell_sum)
def calculate(a, b):
return a + b
widgets.VBox([sheet, slider])
jscode_renderer_negative = """
function (instance, td, row, col, prop, value, cellProperties) {
Handsontable.renderers.TextRenderer.apply(this, arguments);
if (value < 0)
td.style.backgroundColor = 'red'
else
td.style.backgroundColor = 'green'
}
"""
ipysheet.renderer(code=jscode_renderer_negative, name='negative');
import random
s = ipysheet.sheet(rows=3, columns=4)
data = [[random.randint(-10, 10) for j in range(4)] for j in range(3)]
ipysheet.cell_range(data, renderer='negative')
s
def renderer_negative(instance, td, row, col, prop, value, cellProperties):
Handsontable.renderers.TextRenderer.apply(this, arguments);
if value < 0:
td.style.backgroundColor = 'orange'
else:
td.style.backgroundColor = ''
ipysheet.renderer(code=renderer_negative, name='negative_transpiled');
import random
s = ipysheet.sheet(rows=3, columns=4)
data = [[random.randint(-10, 10) for j in range(4)] for j in range(3)]
ipysheet.cell_range(data, renderer='negative_transpiled')
s
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data Structure for Recurrent Neural Networks
Step2: This essentially builds a CSV file from scratch; to see it as a data frame, use the following
Step3: You might want to put volume in with the stock price.
Step4: Now we get to sequence format. We want to predict something over a sequence, so the data format needs to add a dimension. A maximum sequence length must be specified, but the individual sequences can be of any length.
Step5: Even if there is only one feature (price), the 3rd dimension must be used
Step6: Recurrent Neural Networks
Step7: Both of these two functions compress their output to a specific range. For the sigmoid function, this range is 0 to 1. For the hyperbolic tangent function, this range is -1 to 1.
Step8: The following code trains on a data set (x) with a max sequence size of 6 (columns) and 6 training elements (rows)
Step9: Stock Market Example
Step10: Assignment 3 Solution
Step11: The following code uses a random forest to rank the importance of features. This can be used both to rank the original features and new ones created.
|
<ASSISTANT_TASK:>
Python Code:
from sklearn import preprocessing
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue)
def encode_text_dummy(df,name):
dummies = pd.get_dummies(df[name])
for x in dummies.columns:
dummy_name = "{}-{}".format(name,x)
df[dummy_name] = dummies[x]
df.drop(name, axis=1, inplace=True)
# Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue).
def encode_text_index(df,name):
le = preprocessing.LabelEncoder()
df[name] = le.fit_transform(df[name])
return le.classes_
# Encode a numeric column as zscores
def encode_numeric_zscore(df,name,mean=None,sd=None):
if mean is None:
mean = df[name].mean()
if sd is None:
sd = df[name].std()
df[name] = (df[name]-mean)/sd
# Convert all missing values in the specified column to the median
def missing_median(df, name):
med = df[name].median()
df[name] = df[name].fillna(med)
# Convert a Pandas dataframe to the x,y inputs that TensorFlow needs
def to_xy(df,target):
result = []
for x in df.columns:
if x != target:
result.append(x)
# find out the type of the target column. Is it really this hard? :(
target_type = df[target].dtypes
target_type = target_type[0] if hasattr(target_type, '__iter__') else target_type
print(target_type)
# Encode to int for classification, float otherwise. TensorFlow likes 32 bits.
if target_type in (np.int64, np.int32):
# Classification
return df.as_matrix(result).astype(np.float32),df.as_matrix([target]).astype(np.int32)
else:
# Regression
return df.as_matrix(result).astype(np.float32),df.as_matrix([target]).astype(np.float32)
# Nicely formatted time string
def hms_string(sec_elapsed):
h = int(sec_elapsed / (60 * 60))
m = int((sec_elapsed % (60 * 60)) / 60)
s = sec_elapsed % 60
return "{}:{:>02}:{:>05.2f}".format(h, m, s)
# Regression chart, we will see more of this chart in the next class.
def chart_regression(pred,y):
    t = pd.DataFrame({'pred' : pred.flatten(), 'y' : y.flatten()})
t.sort_values(by=['y'],inplace=True)
a = plt.plot(t['y'].tolist(),label='expected')
b = plt.plot(t['pred'].tolist(),label='prediction')
plt.ylabel('output')
plt.legend()
plt.show()
#
x = [
[32],
[41],
[39],
[20],
[15]
]
y = [
1,
-1,
0,
-1,
1
]
print(x)
print(y)
from IPython.display import display, HTML
import pandas as pd
import numpy as np
x = np.array(x)
print(x[:,0])
df = pd.DataFrame({'x':x[:,0], 'y':y})
display(df)
x = [
[32,1383],
[41,2928],
[39,8823],
[20,1252],
[15,1532]
]
y = [
1,
-1,
0,
-1,
1
]
print(x)
print(y)
# Again, very similar to what we did before. The following shows this as a data frame.
from IPython.display import display, HTML
import pandas as pd
import numpy as np
x = np.array(x)
print(x[:,0])
df = pd.DataFrame({'price':x[:,0], 'volume':x[:,1], 'y':y})
display(df)
x = [
[[32,1383],[41,2928],[39,8823],[20,1252],[15,1532]],
[[35,8272],[32,1383],[41,2928],[39,8823],[20,1252]],
[[37,2738],[35,8272],[32,1383],[41,2928],[39,8823]],
[[34,2845],[37,2738],[35,8272],[32,1383],[41,2928]],
[[32,2345],[34,2845],[37,2738],[35,8272],[32,1383]],
]
y = [
1,
-1,
0,
-1,
1
]
print(x)
print(y)
x = [
[[32],[41],[39],[20],[15]],
[[35],[32],[41],[39],[20]],
[[37],[35],[32],[41],[39]],
[[34],[37],[35],[32],[41]],
[[32],[34],[37],[35],[32]],
]
y = [
1,
-1,
0,
-1,
1
]
print(x)
print(y)
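# (Illustrative addition, not part of the original notebook.) A quick way to see the
# [samples, time steps, features] convention is to convert the nested list above to a
# numpy array and inspect its shape.
import numpy as np
x_arr = np.array(x)
print(x_arr.shape)  # (5, 5, 1): 5 samples, 5 time steps, 1 feature per step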
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
import math
def sigmoid(x):
a = []
for item in x:
a.append(1/(1+math.exp(-item)))
return a
def f2(x):
a = []
for item in x:
a.append(math.tanh(item))
return a
x = np.arange(-10., 10., 0.2)
y1 = sigmoid(x)
y2 = f2(x)
print("Sigmoid")
plt.plot(x,y1)
plt.show()
print("Hyperbolic Tangent(tanh)")
plt.plot(x,y2)
plt.show()
import numpy as np
import pandas
import tensorflow as tf
from sklearn import metrics
from tensorflow.models.rnn import rnn, rnn_cell
from tensorflow.contrib import skflow
SEQUENCE_SIZE = 6
HIDDEN_SIZE = 20
NUM_CLASSES = 4
def char_rnn_model(X, y):
byte_list = skflow.ops.split_squeeze(1, SEQUENCE_SIZE, X)
cell = rnn_cell.LSTMCell(HIDDEN_SIZE)
_, encoding = rnn.rnn(cell, byte_list, dtype=tf.float32)
return skflow.models.logistic_regression(encoding, y)
classifier = skflow.TensorFlowEstimator(model_fn=char_rnn_model, n_classes=NUM_CLASSES,
steps=100, optimizer='Adam', learning_rate=0.01, continue_training=True)
x = [
[[0],[1],[1],[0],[0],[0]],
[[0],[0],[0],[2],[2],[0]],
[[0],[0],[0],[0],[3],[3]],
[[0],[2],[2],[0],[0],[0]],
[[0],[0],[3],[3],[0],[0]],
[[0],[0],[0],[0],[1],[1]]
]
x = np.array(x,dtype=np.float32)
y = np.array([1,2,3,2,3,1])
classifier.fit(x, y)
test = [[[0],[0],[0],[0],[3],[3]]]
test = np.array(test)
classifier.predict(test)
# How to read data from the stock market.
from IPython.display import display, HTML
import pandas.io.data as web
import datetime
start = datetime.datetime(2014, 1, 1)
end = datetime.datetime(2014, 12, 31)
f=web.DataReader('tsla', 'yahoo', start, end)
display(f)
import numpy as np
prices = f.Close.pct_change().tolist() # to percent changes
prices = prices[1:] # skip the first, no percent change
SEQUENCE_SIZE = 5
x = []
y = []
for i in range(len(prices)-SEQUENCE_SIZE-1):
#print(i)
window = prices[i:(i+SEQUENCE_SIZE)]
after_window = prices[i+SEQUENCE_SIZE]
window = [[x] for x in window]
#print("{} - {}".format(window,after_window))
x.append(window)
y.append(after_window)
x = np.array(x)
print(len(x))
from tensorflow.contrib import skflow
from tensorflow.models.rnn import rnn, rnn_cell
import tensorflow as tf
HIDDEN_SIZE = 20
def char_rnn_model(X, y):
byte_list = skflow.ops.split_squeeze(1, SEQUENCE_SIZE, X)
cell = rnn_cell.LSTMCell(HIDDEN_SIZE)
_, encoding = rnn.rnn(cell, byte_list, dtype=tf.float32)
return skflow.models.linear_regression(encoding, y)
regressor = skflow.TensorFlowEstimator(model_fn=char_rnn_model, n_classes=1,
steps=100, optimizer='Adam', learning_rate=0.01, continue_training=True)
regressor.fit(x, y)
# Try an in-sample prediction
from sklearn import metrics
# Measure RMSE error. RMSE is common for regression.
pred = regressor.predict(x)
score = np.sqrt(metrics.mean_squared_error(pred,y))
print("Final score (RMSE): {}".format(score))
# Try out of sample
start = datetime.datetime(2015, 1, 1)
end = datetime.datetime(2015, 12, 31)
f=web.DataReader('tsla', 'yahoo', start, end)
import numpy as np
prices = f.Close.pct_change().tolist() # to percent changes
prices = prices[1:] # skip the first, no percent change
SEQUENCE_SIZE = 5
x = []
y = []
for i in range(len(prices)-SEQUENCE_SIZE-1):
window = prices[i:(i+SEQUENCE_SIZE)]
after_window = prices[i+SEQUENCE_SIZE]
window = [[x] for x in window]
x.append(window)
y.append(after_window)
x = np.array(x)
# Measure RMSE error. RMSE is common for regression.
pred = regressor.predict(x)
score = np.sqrt(metrics.mean_squared_error(pred,y))
print("Out of sample score (RMSE): {}".format(score))
import os
import pandas as pd
from sklearn.cross_validation import train_test_split
import tensorflow.contrib.learn as skflow
import numpy as np
from sklearn import metrics
path = "./data/"
filename = os.path.join(path,"t81_558_train.csv")
train_df = pd.read_csv(filename)
train_df.drop('id',1,inplace=True)
train_x, train_y = to_xy(train_df,'outcome')
train_x, test_x, train_y, test_y = train_test_split(
train_x, train_y, test_size=0.25, random_state=42)
# Create a deep neural network with 3 hidden layers of 50, 25, 10
regressor = skflow.TensorFlowDNNRegressor(hidden_units=[50, 25, 10], steps=5000)
# Early stopping
early_stop = skflow.monitors.ValidationMonitor(test_x, test_y,
early_stopping_rounds=200, print_steps=50)
# Fit/train neural network
regressor.fit(train_x, train_y, monitor=early_stop)
# Measure RMSE error. RMSE is common for regression.
pred = regressor.predict(test_x)
score = np.sqrt(metrics.mean_squared_error(pred,test_y))
print("Final score (RMSE): {}".format(score))
####################
# Build submit file
####################
from IPython.display import display, HTML
filename = os.path.join(path,"t81_558_test.csv")
submit_df = pd.read_csv(filename)
ids = submit_df.Id
submit_df.drop('Id',1,inplace=True)
submit_x = submit_df.as_matrix()
pred_submit = regressor.predict(submit_x)
submit_df = pd.DataFrame({'Id': ids, 'outcome': pred_submit[:,0]})
submit_filename = os.path.join(path,"t81_558_jheaton_submit.csv")
submit_df.to_csv(submit_filename, index=False)
display(submit_df)
import matplotlib.pyplot as plt
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.ensemble import RandomForestRegressor
# Build a forest and compute the feature importances
forest = RandomForestRegressor(n_estimators=50,
random_state=0, verbose = True)
print("Training random forest")
forest.fit(train_x, train_y)
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
axis=0)
indices = np.argsort(importances)[::-1]
# Print the feature ranking
#train_df.drop('outcome',1,inplace=True)
bag_cols = train_df.columns.values
print("Feature ranking:")
for f in range(train_x.shape[1]):
print("{}. {} ({})".format(f + 1, bag_cols[indices[f]], importances[indices[f]]))
# The following code uses engineered features.
import os
import pandas as pd
from sklearn.cross_validation import train_test_split
import tensorflow.contrib.learn as skflow
import numpy as np
from sklearn import metrics
path = "./data/"
filename = os.path.join(path,"t81_558_train.csv")
train_df = pd.read_csv(filename)
train_df.drop('id',1,inplace=True)
#train_df.drop('g',1,inplace=True)
#train_df.drop('e',1,inplace=True)
train_df.insert(0, "a-b", train_df.a - train_df.b)
#display(train_df)
train_x, train_y = to_xy(train_df,'outcome')
train_x, test_x, train_y, test_y = train_test_split(
train_x, train_y, test_size=0.25, random_state=42)
# Create a deep neural network with 3 hidden layers of 50, 25, 10
regressor = skflow.TensorFlowDNNRegressor(hidden_units=[50, 25, 10], steps=5000)
# Early stopping
early_stop = skflow.monitors.ValidationMonitor(test_x, test_y,
early_stopping_rounds=200, print_steps=50)
# Fit/train neural network
regressor.fit(train_x, train_y, monitor=early_stop)
# Measure RMSE error. RMSE is common for regression.
pred = regressor.predict(test_x)
score = np.sqrt(metrics.mean_squared_error(pred,test_y))
print("Final score (RMSE): {}".format(score))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: if clauses and embedded lists
Step2: dictionary and set comprehensions
Step3: <a id='exercise_1_1'></a>
Step4: <a id="string_formatting"></a>
Step5: You can identify the variables to be replaced by
Step6: <a id='exercise_2_1'></a>
Step7: You could enhance the readability of this message by formatting x and m to 2 decimal places and r as a percentage
Step8: Warning
Step9: <a id='lambda'></a>
Step10: When do you want to use lambda?
Step11: <a id='exercise_3_1'></a>
Step12: <a id='args'></a>
Step13: You can also use the *args and **kwargs syntax in the function calls
Step14: You can mix fixed arguments and variable length arguments
Step15: When can/shall we use them?
Step16: Whenever we inherit a class and override some of the methods of inherited class, we should use *args and **kwargs and pass the received positional and keyword arguments to the superclass method.
Step17: <a id='logical'></a>
Step18: <a id='exercise_5_1'></a>
Step19: Truth value testing
Step20: Be careful when handling this. It can lead to weird behaviours
Step21: Comparisons
Step22: In the next example, a and b point to different memory locations, each of them containing the same information, whereas c points to the same location as a.
Step23: When we modify a, we modify the memory location where it is pointing. So, when we modify a, we also modify c. On the other hand, nothing happens to b.
Step24: <a id='scope'></a>
Step25: You should not declare a local variable with the same name as a global one.
Step26: If what you want to do is modify the global variable, you can use the global keyword at the beginning of the function body.
Step27: Class and Instance variables
Step28: Not using class and instance variables as detailed above, can lead to weird behaviors, specially with mutable objects
Step29: <a id='mutable'></a>
Step30: Why does mutability matter?
Step31: What have we done here? Have we modified the value of x?
Step32: Exercise 7.1
Step33: <a id='floating'></a>
Step34: Order of operations can matter
Step35: Since $\pi$ cannot be exactly represented, it is not surprising that $sin(\pi)$ is not $0$
Step36: Unexpected cancellation due to loss of precision
Step37: Operating on numbers of very different magnitudes
Step38: Overflow and underflow (reaching the maximum and minimum limits)
Step39: Starting with version 3.1, when printing, python displays the shortest number that maps to the same floating-point representation.
Step40: Some hints
Step41: In case you really need correctly-rounded decimal floating point arithmetic, check the built-in decimal library.
Step42: <a id='decorators'></a>
Step43: Remember we are doing
Step44: It may look complicated (and it is), but don't panic. It's the same as in the previous example.
Step45: Let's see what happens if the authentication is not correct
Step46: And what happens if the authentication is correct
Step47: Example 9.3
Step48: Ok, now you can panic a little bit.
|
<ASSISTANT_TASK:>
Python Code:
N_SQUARES = 10
# Don't do this!!!
ugly_list = []
for i in range(N_SQUARES):
ugly_list.append(i**2)
print('ugly list = {}'.format(ugly_list))
# You can do the same in one line
wonderful_list = [ i**2 for i in range(N_SQUARES) ]
print('wonderful list = {}'.format(wonderful_list))
# List comprehensions can contain if clauses after the for clause
even_list = [ i**2 for i in range(N_SQUARES) if i % 2 == 0]
print('even list = {}'.format(even_list))
# List comprehensions can be embedded within one another
IN_LEN = 3
embedded_list = [ [ j**2 for j in range(i, i + IN_LEN) ] for i in range(0, N_SQUARES, IN_LEN)]
print('embedded list = {}'.format(embedded_list))
# You can use a similar syntax to create dictionaries
fancy_dict = {'square of {}'.format(i): i**2 for i in range(N_SQUARES)}
print('fancy dict = {}'.format(fancy_dict))
# and sets
fancy_set = {i**2 for i in range(N_SQUARES)}
print('fancy set = {}'.format(fancy_set))
N = 100
all_divisors_list = []
for i in range(1, N + 1):
divisors_list = []
for j in range(1, i + 1):
if i % j == 0:
divisors_list.append(j)
all_divisors_list.append(divisors_list)
print('list of divisors = {}'.format(all_divisors_list))
# %load -r 2:5 solutions/03_02_TipsAndTricks.py
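# (Illustrative addition.) One possible solution sketch -- not necessarily the code stored
# in solutions/03_02_TipsAndTricks.py -- rewriting the double loop above as a nested
# list comprehension:
all_divisors_list = [[j for j in range(1, i + 1) if i % j == 0] for i in range(1, N + 1)]
print('list of divisors = {}'.format(all_divisors_list))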
from datetime import datetime
an_int = 1
a_float = 0.123456
a_datetime = datetime.now()
a_string = 'foo'
a_list = list('abcd')
print('''This is a string formatting example:
* An integer formatted to a fixed length string filled with leading 0s: {0:06}
* A float formatted as a percentage with 2 decimal positions: {1:0.2%}, or as a float with 4 decimal places: {1:0.4f}
* Extract attributes from an object: the year "{2.year}" and the month "{2.month:02}" in a date
* Align a text filling it with hyphens
- to the left:
{3:-<32}
- to the right:
{3:->32}
* Access the values in a list: {4[0]}, {4[2]}
'''.format(an_int, a_float, a_datetime, a_string, a_list))
# an example of arguments referenced by their index
print('Example 1: 1st arg: {0}, 2nd arg: {1}, referencing the indexes'.format('a', 'b'))
# Empty brackets are filled with a list of indexes:
print('Example 2: 1st arg: {}, 2nd arg: {}, without referencing the indexes'.format('a', 'b'))
# If an argument is not referenced in the string, it is ignored
print('Example 3: 1st arg: {0}, 2nd arg: {1}, other arguments are ignored'.format('a', 'b', 'c', 'd', 'e', 'f'))
# You can also use keyword arguments:
print('Example 4: keyword arg "a": {a}, keyword arg "b": {b}, "c" is ignored'.format(a='a', b='b', c='c'))
# You can also mix non-keyword and keyword arguments
print('Example 5: 1st arg: {0}, keyword arg "b": {b}'.format('a', 'c', b='b', d='d'))
x = 12.3456789012345678901234567890
m = 0.98765432109876543210987654321
r = 0.05
print('The CI for mu is {} \xb1 {} with a significance level of {}'.format(x, m, r))
# %load -r 7 solutions/03_02_TipsAndTricks.py
print('The CI for mu is {:.2f} \xb1 {:.2f} with a significance level of {:.0%}'.format(x, m, r))
moan = ''.join([x*5 for x in 'ARGH!'])
print('In the future this may crash!!\n\n%s' %moan)
# These expressions are equivalent
sum_ = lambda x, y: x + y
print('This is the result of the lambda function: {}'.format(sum_(1., 2.)))
def sum__(x, y):
return x + y
print('And this is the result of the standard defined function: {}'.format(sum__(1., 2.)))
# Back to the squares example, using lambda and the map function
list_of_squares = list(map(lambda x: x**2, range(10))) # not a very good example ... better with comprehensions
print(list_of_squares)
# Let's try with another one: compute the sum of the squares of all the numbers up to 10
import functools
sum_of_list_of_squares = functools.reduce(lambda x, y: x + y, map(lambda x: x**2, range(10)))
print(sum_of_list_of_squares)
# Let's check if the result is ok
sum_of_list_of_squares == sum(list_of_squares)
# %load -r 10,11 solutions/03_02_TipsAndTricks.py
# A simple example
def foo(*bar, **barbar):
print('type of bar: {}'.format(type(bar)))
print('type of barbar: {}'.format(type(barbar)))
foo()
# An example
def sample_function(*args, **kwargs):
print('These are the arguments of my function:')
for i, arg in enumerate(args):
print(' Variable non keyword argument {}: {}'.format(i, arg))
for karg, varg in kwargs.items():
print(' Variable keyword argument: {}:{}'.format(karg, varg))
print('-'*36 + '\n')
sample_function(1, 2, kwarg_1=3, kwarg_2=4)
sample_function(6, 5, 4, 3, 2, 1)
args = range(5)
kwargs = {'kwarg_{}'.format(x): x for x in range(5)}
sample_function(*args, **kwargs)
# We want to force arg1 and arg2
def resample_function(arg1, arg2, *args, **kwargs):
return sample_function(arg1, arg2, *args, **kwargs)
sample_function()
resample_function(1, 2)
resample_function()
def multiplication(*args):
z = 1
for arg in args:
z *= arg
return z
print(multiplication(1, 2, 3, 4, 5, 6, 7, 8, 9, 10))
print(multiplication(0.1, 4, 6.7))
class ValueLogger():
def print_values(self, *values):
print('These are my values:')
for value in values:
print(' {}'.format(value))
self.print_separator()
def print_separator(self):
print('-'*64)
class AdvancedValueLogger(ValueLogger):
def print_values(self, *values):
if len(values) == 0:
print('There are no values')
self.print_separator()
else:
super().print_values(*values)
primitive_logger = ValueLogger()
primitive_logger.print_values()
primitive_logger.print_values(1, 2, 3, 4, 5)
advanced_logger = AdvancedValueLogger()
advanced_logger.print_values()
ex_dict_1 = {'k_1': {'k_1_1': 1, 'k_1_2':2}, 'k_2': 3}
# we can use this to simplify our code
# we can rewrite this
if ex_dict_1.get('k_1'):
if ex_dict_1['k_1'].get('k_1_1'):
print(ex_dict_1['k_1']['k_1_1'])
else:
print(False)
else:
print(False)
# like this
print(ex_dict_1.get('k_1') and ex_dict_1['k_1'].get('k_1_1') or False)
ex_dict_2 = {'type': 'sum', 'data': [1, 2, 3, 4, 5] }
# This code can be reformatted into a single line
if ex_dict_2['type'] == 'raw':
res_1 = ex_dict_2['data']
elif ex_dict_2['type'] == 'sum':
res_1 = sum(ex_dict_2['data'])
else:
res_1 = None
print(res_1)
# %load -r 14,15 solutions/03_02_TipsAndTricks.py
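# (Illustrative addition.) One possible one-line rewrite using logical operators -- not
# necessarily the code stored in solutions/03_02_TipsAndTricks.py. Note the usual caveat:
# it falls through to None if the selected value itself is falsy (an empty list, a sum of 0, ...).
res_2 = (ex_dict_2['type'] == 'raw' and ex_dict_2['data']) or (ex_dict_2['type'] == 'sum' and sum(ex_dict_2['data'])) or None
print(res_2)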
a = ''
if a:
res = 'true'
else:
res = 'false'
print('{!r} has been interpreted as {}'.format(a, res))
def sum_function(list_of_values):
if len(list_of_values) > 0:
return sum(list_of_values)
def wrap_around_sum(list_of_values):
s = sum_function(list_of_values)
if not s:
print('error!!')
else:
print('Here is your result {}'.format(s))
wrap_around_sum([1, 2, 3, 4, 5])
wrap_around_sum([-1, -3, 4])
a = 0
b = 0.
c = 4 - 4
print('0 and 0. are equal: {}'.format(a == b))
print('0 and 0. are not the same object: {}'.format(a is b))
print('But two different instances of 0, point to the same object: {}'.format(a is c))
class Dummy():
def __init__(self, val):
self.val = val
a = Dummy(0)
b = Dummy(0)
c = 0
print('Two instances of the same class are not equal: {}'.format(a == b))
print('or two instances of different classes: {}'.format(a == c))
# Unless we define the __eq__ method
class NotSoDummy(Dummy):
def __eq__(self, other):
if other == self.val:
return True
else:
return False
a = NotSoDummy(0)
b = NotSoDummy(0)
c = 0
print('Now the two instances of the same class are tested equal: {}'.format(a == b))
print('even the two of different classes: {}'.format(a == c))
print('But they are not the same object: {}'.format(a is b))
a = [1, 2, 3]
b = [1, 2, 3]
c = a
print('a and b are equal: {}, but are they the same instance? {}'.format(a == b, a is b))
print('but a and c are both equal: {}, and the same instance: {}'.format(a == c, a is c))
a[0] = 0
print('c = {}'.format(c))
print('b = {}'.format(b))
# This behaviour does not happen with numbers though
a_1 = 1
a_2 = a_1
a_1 = 2
print(a_2)
global_var = 'foo'
print('global outside: {}'.format(global_var))
def my_func():
local_var = 'bar'
print('global inside: {}'.format(global_var))
print('local inside: {}'.format(local_var))
my_func()
print('local outside: {}'.format(local_var))
global_var = 'foo'
def my_func():
print(global_var)
global_var = 'bar'
my_func()
global_var = 'foo'
def my_func():
global global_var
print('original global variable value: {}'.format(global_var))
global_var = 'bar'
print('new global variable value: {}'.format(global_var))
my_func()
class MySampleClass():
class_var = 'foo'
class_list = [] # wrong placement of a mutable object
def __init__(self, instance_var):
self.instance_var = instance_var
self.instance_list = []
inst_1 = MySampleClass('bar')
inst_2 = MySampleClass('bar bar')
print('Inst 1 - class var value: {}, instance var value: {}'.format(inst_1.class_var, inst_1.instance_var))
print('Inst 2 - class var value: {}, instance var value: {}'.format(inst_2.class_var, inst_2.instance_var))
inst_1.class_list.append('foo')
inst_1.instance_list.append('foo')
inst_2.class_list.append('bar')
inst_2.instance_list.append('bar')
print('class_list is shared by all instances. inst_1: {}, inst_2: {}'.format(inst_1.class_list, inst_2.class_list))
print('instance_list is not: inst_1: {}, inst_2: {}'.format(inst_1.instance_list, inst_2.instance_list))
my_list = [1, 2, 3]
my_tuple = (1, 2, 3)
# we can modify a value in a list
my_list[2] = 4
# or we can extend it, drop some value, ...
my_list.append(5)
my_list.remove(1)
print('my_list: {}'.format(my_list))
# We can not do that with tuples
my_tuple[2] = 4
# Tuples don't have any append, remove,... methods
# We can do this:
x = 'foo'
print('old value of x: {}'.format(x))
x += ' bar'
print('new value of x: {}'.format(x))
import csv
import time
with open('../resources/iris.csv', 'r') as f:
reader = csv.reader(f)
iris_lines = list(reader)
iris_lines = iris_lines*100 # artificially increase the size of the data
print(iris_lines[:10])
# if we want to build a string with the concatenation of all the 'species' we could to this
init = time.clock()
species = iris_lines[1][4]
for iris_line in iris_lines[2:]:
species += ',' + iris_line[4]
end = time.clock()
comp_time = end - init
print('computation took {:0.8} seconds\n'.format(comp_time))
# %load -r 18:25 solutions/03_02_TipsAndTricks.py
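# (Illustrative addition.) One possible solution sketch -- not necessarily the code stored
# in solutions/03_02_TipsAndTricks.py -- building the same string with str.join, which
# avoids creating a brand new string object on every iteration. time.clock() is kept only
# to match the timing code above.
init = time.clock()
species_joined = ','.join(line[4] for line in iris_lines[1:])
end = time.clock()
print('computation took {:0.8} seconds\n'.format(end - init))
print('same result: {}'.format(species_joined == species))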
0.1 + 0.1 + 0.1 == 0.3
b = 1e-16 + 1 - 1e-16
c = 1e-16 - 1e-16 + 1
b == c
from math import sin, pi, sqrt
sin(pi)
sqrt(1e-16 + 1) - 1 == 0.
1e+12 + 1e-5 == 1e+12
def fib(n):
return ((1. + sqrt(5.))**n - (1. - sqrt(5.))**n)/(2**n*sqrt(5.))
print([int(fib(i)) for i in range(1, 20)])
fib(700)
# This numbers have all the same binary representation
a = 0.1
b = 0.10000000000000000001
c = 0.1000000000000000055511151231257827021181583404541015625
print(a, b, c)
# However, they are not exactly 0.1
print('{:0.24} {:0.24} {:0.24}'.format(a, b, c))
from math import isclose, sin, pi
print(isclose(0.1 + 0.1 + 0.1, 0.3))
print(isclose(sin(pi), 0., abs_tol=1e-09))
from decimal import Decimal
a = Decimal(1)
b = Decimal(10)
c = Decimal(3)
print(a/b, c/b)
a/b + a/b + a/b == c/b
def p_a_d(some_func):
def wrapped(*args, **kwargs):
print("going to run function '{}' with arguments: {}, {}".format(some_func.__name__, args, kwargs))
res = some_func(*args, **kwargs)
print("the result of '{}' is: {}".format(some_func.__name__, res))
return wrapped
@p_a_d
def dummy(i):
return i*10
dummy(1)
auth_tokens = {'user1': 'CLb3MML7GEXoaElk0DFtxuS0uhzYsDOHmdsj',
'user2': 'uuR4QxFQtwMp5RCVEZTh93lAeLnV1sQF1ZTk'}
def check_authentication(request):
''' Check if the token in the request correspond to the one stored'''
user = request.get('user')
token = request.get('token')
if auth_tokens.get(user) and auth_tokens[user] == token:
return True
else:
return False
def authenticate(func):
'''Decorator to add authentication checking'''
def authenticate_and_call(request):
if not check_authentication(request):
raise Exception('Authentication Failed.')
return func(request)
return authenticate_and_call
@authenticate
def dummy_sum(request):
return request.get('param1', 0.0) + request.get('param2', 0.0)
dummy_sum({'user': 'user1', 'token': '7jHeWjvt5qAn281eLHbnwKApay2rggAlrbOk', 'param1': 2.0, 'param2': 3.0})
dummy_sum({'user': 'user1', 'token': 'CLb3MML7GEXoaElk0DFtxuS0uhzYsDOHmdsj', 'param1': 2.0, 'param2': 3.0})
def accepts(*wrapper_args):
def wrapper(f):
def check_input_and_call(*func_args):
for func_arg, wrapper_arg in zip(func_args, wrapper_args):
if type(func_arg) != wrapper_arg:
raise Exception('wrong type for argument {}'.format(func_arg))
return f(*func_args)
return check_input_and_call
return wrapper
@accepts(float, int)
def compute_root(base, degree):
return base**(1./float(degree))
compute_root(4., 2.)
compute_root(4., 2)
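# (Illustrative addition, not part of the original notebook.) A common refinement: the
# wrappers above replace the decorated function's metadata (__name__, __doc__). Decorating
# the inner wrapper with functools.wraps preserves it. The names accepts_v2 and
# compute_root_v2 are made up for this sketch.
import functools

def accepts_v2(*wrapper_args):
    def wrapper(f):
        @functools.wraps(f)  # keep f.__name__ and f.__doc__ on the wrapped function
        def check_input_and_call(*func_args):
            for func_arg, wrapper_arg in zip(func_args, wrapper_args):
                if type(func_arg) != wrapper_arg:
                    raise Exception('wrong type for argument {}'.format(func_arg))
            return f(*func_args)
        return check_input_and_call
    return wrapper

@accepts_v2(float, int)
def compute_root_v2(base, degree):
    return base**(1./float(degree))

print(compute_root_v2.__name__)  # 'compute_root_v2' instead of 'check_input_and_call'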
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run the next cell to load the "SIGNS" dataset you are going to use.
Step2: As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5.
Step3: In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it.
Step5: 1.1 - Create placeholders
Step7: Expected Output
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step14: Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code!
Step15: Expected output
|
<ASSISTANT_TASK:>
Python Code:
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
import tensorflow as tf
from tensorflow.python.framework import ops
from cnn_utils import *
%matplotlib inline
np.random.seed(1)
# Loading the data (signs)
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Example of a picture
index = 6
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
X_train = X_train_orig/255.
X_test = X_test_orig/255.
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
conv_layers = {}
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_H0, n_W0, n_C0, n_y):
    """
    Creates the placeholders for the tensorflow session.
Arguments:
n_H0 -- scalar, height of an input image
n_W0 -- scalar, width of an input image
n_C0 -- scalar, number of channels of the input
n_y -- scalar, number of classes
Returns:
X -- placeholder for the data input, of shape [None, n_H0, n_W0, n_C0] and dtype "float"
    Y -- placeholder for the input labels, of shape [None, n_y] and dtype "float"
    """
### START CODE HERE ### (≈2 lines)
X = tf.placeholder(dtype='float', shape=(None, n_H0, n_W0, n_C0), name='X')
Y = tf.placeholder(dtype='float', shape=(None, n_y), name='Y')
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(64, 64, 3, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
    """
    Initializes weight parameters to build a neural network with tensorflow. The shapes are:
W1 : [4, 4, 3, 8]
W2 : [2, 2, 8, 16]
Returns:
    parameters -- a dictionary of tensors containing W1, W2
    """
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 2 lines of code)
W1 = tf.get_variable('W1', [4, 4, 3, 8], initializer=tf.contrib.layers.xavier_initializer(seed=0))
W2 = tf.get_variable('W2', [2, 2, 8, 16], initializer=tf.contrib.layers.xavier_initializer(seed=0))
### END CODE HERE ###
parameters = {"W1": W1,
"W2": W2}
return parameters
tf.reset_default_graph()
with tf.Session() as sess_test:
parameters = initialize_parameters()
init = tf.global_variables_initializer()
sess_test.run(init)
print("W1 = " + str(parameters["W1"].eval()[1,1,1]))
print("W2 = " + str(parameters["W2"].eval()[1,1,1]))
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
    """
    Implements the forward propagation for the model:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "W2"
the shapes are given in initialize_parameters
Returns:
    Z3 -- the output of the last LINEAR unit
    """
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
W2 = parameters['W2']
### START CODE HERE ###
# CONV2D: stride of 1, padding 'SAME'
Z1 = tf.nn.conv2d(X, W1, strides = [1,1,1,1], padding = 'SAME')
# RELU
A1 = tf.nn.relu(Z1)
# MAXPOOL: window 8x8, sride 8, padding 'SAME'
P1 = tf.nn.max_pool(A1, ksize=[1,8,8,1], strides=[1,8,8,1], padding = 'SAME')
# CONV2D: filters W2, stride 1, padding 'SAME'
Z2 = tf.nn.conv2d(P1, W2, strides=[1,1,1,1], padding='SAME')
# RELU
A2 = tf.nn.relu(Z2)
# MAXPOOL: window 4x4, stride 4, padding 'SAME'
P2 = tf.nn.max_pool(A2, ksize=[1,4,4,1], strides=[1,4,4,1], padding='SAME')
# FLATTEN
P2 = tf.contrib.layers.flatten(P2)
# FULLY-CONNECTED without non-linear activation function (not not call softmax).
# 6 neurons in output layer. Hint: one of the arguments should be "activation_fn=None"
Z3 = tf.contrib.layers.fully_connected(P2, num_outputs=6, activation_fn=None)
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(Z3, {X: np.random.randn(2,64,64,3), Y: np.random.randn(2,6)})
print("Z3 = " + str(a))
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
    """
    Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
    cost - Tensor of the cost function
    """
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = Z3, labels = Y))
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(cost, {X: np.random.randn(4,64,64,3), Y: np.random.randn(4,6)})
print("cost = " + str(a))
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.009,
num_epochs = 100, minibatch_size = 64, print_cost = True):
    """
    Implements a three-layer ConvNet in Tensorflow:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X_train -- training set, of shape (None, 64, 64, 3)
Y_train -- test set, of shape (None, n_y = 6)
X_test -- training set, of shape (None, 64, 64, 3)
Y_test -- test set, of shape (None, n_y = 6)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
train_accuracy -- real number, accuracy on the train set (X_train)
test_accuracy -- real number, testing accuracy on the test set (X_test)
    parameters -- parameters learnt by the model. They can then be used to predict.
    """
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep results consistent (tensorflow seed)
seed = 3 # to keep results consistent (numpy seed)
(m, n_H0, n_W0, n_C0) = X_train.shape
n_y = Y_train.shape[1]
costs = [] # To keep track of the cost
# Create Placeholders of the correct shape
### START CODE HERE ### (1 line)
X, Y = create_placeholders(64, 64, 3, 6)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer that minimizes the cost.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables globally
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
minibatch_cost = 0.
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
                # Run the session to execute the optimizer and the cost; the feed_dict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line)
_ , temp_cost = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})
### END CODE HERE ###
minibatch_cost += temp_cost / num_minibatches
# Print the cost every epoch
if print_cost == True and epoch % 5 == 0:
print ("Cost after epoch %i: %f" % (epoch, minibatch_cost))
if print_cost == True and epoch % 1 == 0:
costs.append(minibatch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# Calculate the correct predictions
predict_op = tf.argmax(Z3, 1)
correct_prediction = tf.equal(predict_op, tf.argmax(Y, 1))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(accuracy)
train_accuracy = accuracy.eval({X: X_train, Y: Y_train})
test_accuracy = accuracy.eval({X: X_test, Y: Y_test})
print("Train Accuracy:", train_accuracy)
print("Test Accuracy:", test_accuracy)
return train_accuracy, test_accuracy, parameters
_, _, parameters = model(X_train, Y_train, X_test, Y_test)
fname = "images/thumbs_up.jpg"
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64))
plt.imshow(my_image)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bayesian Solution
Step2: Short Cut #1
Step3: Short Cut #2
Step4: How does the outcome depend on all those parameters?
Step5: Observation
Step6: Convergence tests
Step7: For comparison, run a second chain with different step sizes
Step8: The chain with the smaller step size does not look stationary yet, but exhibits features on a scale comparable to the chain length.
Step9: Diagnostic 1
Step10: What's wrong with chain3?
Step11: Diagnostic 2
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
from straightline_utils import *
%matplotlib inline
from matplotlib import rcParams
rcParams['savefig.dpi'] = 100
(x,y,sigmay) = get_data_no_outliers()
plot_yerr(x, y, sigmay)
def straight_line_log_likelihood(x, y, sigmay, m, b):
'''
Returns the log-likelihood of drawing data values *y* at
known values *x* given Gaussian measurement noise with standard
deviation with known *sigmay*, where the "true" y values are
*y_t = m * x + b*
x: list of x coordinates
y: list of y coordinates
sigmay: list of y uncertainties
m: scalar slope
b: scalar line intercept
Returns: scalar log likelihood
'''
return (np.sum(np.log(1./(np.sqrt(2.*np.pi) * sigmay))) +
np.sum(-0.5 * (y - (m*x + b))**2 / sigmay**2))
def straight_line_log_prior(m, b):
return 0.
def straight_line_log_posterior(x,y,sigmay, m,b):
return (straight_line_log_likelihood(x,y,sigmay, m,b) +
straight_line_log_prior(m, b))
# Evaluate log P(m,b | x,y,sigmay) on a grid.
# Set up grid
mgrid = np.linspace(mlo, mhi, 100)
bgrid = np.linspace(blo, bhi, 101)
log_posterior = np.zeros((len(mgrid),len(bgrid)))
# Evaluate log probability on grid
for im,m in enumerate(mgrid):
for ib,b in enumerate(bgrid):
log_posterior[im,ib] = straight_line_log_posterior(x, y, sigmay, m, b)
# Convert to probability density and plot
posterior = np.exp(log_posterior - log_posterior.max())
plt.imshow(posterior, extent=[blo,bhi, mlo,mhi],cmap='Blues',
interpolation='nearest', origin='lower', aspect=(bhi-blo)/(mhi-mlo),
vmin=0, vmax=1)
plt.contour(bgrid, mgrid, posterior, pdf_contour_levels(posterior), colors='k')
i = np.argmax(posterior)
i,j = np.unravel_index(i, posterior.shape)
print 'Grid maximum posterior values:', bgrid[j], mgrid[i]  # j indexes bgrid (axis 1), i indexes mgrid (axis 0)
plt.title('Straight line: posterior PDF for parameters');
#plt.plot(b_ls, m_ls, 'w+', ms=12, mew=4);
plot_mb_setup();
# Linear algebra: weighted least squares
N = len(x)
A = np.zeros((N,2))
A[:,0] = 1. / sigmay
A[:,1] = x / sigmay
b = y / sigmay
theta,nil,nil,nil = np.linalg.lstsq(A, b)
plot_yerr(x, y, sigmay)
b_ls,m_ls = theta
print 'Least Squares (maximum likelihood) estimator:', b_ls,m_ls
plot_line(m_ls, b_ls);
def straight_line_posterior(x, y, sigmay, m, b):
return np.exp(straight_line_log_posterior(x, y, sigmay, m, b))
def run_MC(m, mstep, b, bstep, nsteps, burn_in = 0):
chain = []
probs = []
naccept = 0
print 'Running MC for', nsteps, 'steps'
# First point:
L_old = straight_line_log_likelihood(x, y, sigmay, m, b)
p_old = straight_line_log_prior(m, b)
log_prob_old = L_old + p_old
for i in range(nsteps+burn_in):
# step
mnew = m + np.random.normal() * mstep
bnew = b + np.random.normal() * bstep
# evaluate probabilities
# prob_new = straight_line_posterior(x, y, sigmay, mnew, bnew)
L_new = straight_line_log_likelihood(x, y, sigmay, mnew, bnew)
p_new = straight_line_log_prior(mnew, bnew)
log_prob_new = L_new + p_new
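        # Metropolis acceptance rule: accept the proposal with probability
        # min(1, p_new / p_old); comparing exp(log_prob_new - log_prob_old)
        # against a Uniform(0,1) draw implements exactly that.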
if (np.exp(log_prob_new - log_prob_old) > np.random.uniform()):
# accept
m = mnew
b = bnew
L_old = L_new
p_old = p_new
log_prob_old = log_prob_new
if (i > burn_in): #measure acceptance rate after burn-in only
naccept += 1
else:
# Stay where we are; m,b stay the same, and we append them
# to the chain below.
pass
chain.append((b,m))
probs.append((L_old,p_old))
print 'Acceptance fraction:', naccept/float(nsteps)
return chain[burn_in:]
# initial m, b
m0,b0 = 0., 450.
# step sizes
mstep, bstep = 0.1, 10.
# how many steps?
nsteps = 5000
chain = run_MC(m0, mstep, b0, bstep, nsteps)
mm = [m for b,m in chain]
bb = [b for b,m in chain]
plt.clf()
# Plot trajectory of chain in (b,m) plane
plt.plot(bb, mm,linestyle='-',alpha=0.5)
for n in range(0,250,50):
plt.text(bb[n],mm[n],"%d"%(n))
#overplot posterior contours from grid based estimate
plt.contour(bgrid, mgrid, posterior, pdf_contour_levels(posterior), colors='k',linewidth = 3)
plt.show()
new_chain = run_MC(m0+5., mstep, b0-100, bstep*2., nsteps)
new_mm = [m for b,m in new_chain]
new_bb = [b for b,m in new_chain]
plt.clf()
plt.plot(new_bb, new_mm,linestyle='-',alpha=0.5)
for n in range(0,250,50):
plt.text(new_bb[n],new_mm[n],"%d"%(n))
plt.show()
#Run chain again, now with burn_in = 1000
chain = run_MC(m0, mstep, b0, bstep, nsteps, burn_in = 1000)
# Redo the same plotting as before
mm = [m for b,m in chain]
bb = [b for b,m in chain]
plt.clf()
# Plot trajectory of chain in (b,m) plane
plt.plot(bb, mm,linestyle='-',alpha=0.5)
for n in range(0,250,50):
plt.text(bb[n],mm[n],"%d"%(n))
#overplot posterior contours from grid based estimate
plt.contour(bgrid, mgrid, posterior, pdf_contour_levels(posterior), colors='k',linewidth = 3)
plt.show()
# 1 and 2D marginalised distributions:
import triangle
triangle.corner(chain, labels=['b','m'], range=[(blo,bhi),(mlo,mhi)],quantiles=[0.16,0.5,0.84],
show_titles=True, title_args={"fontsize": 12},
plot_datapoints=True, fill_contours=True, levels=[0.68, 0.95], color='b', bins=40, smooth=1.0);
plt.show()
# Traces, for convergence inspection:
plt.clf()
plt.subplot(2,1,1)
plt.plot(mm, 'b-')
plt.ylabel('m')
plt.subplot(2,1,2)
plt.plot(bb, 'b-')
plt.ylabel('b')
plt.show()
m0 = 1.0
b0 = 0.0
chain2 = run_MC(m0, mstep/10., b0, bstep, nsteps,burn_in = 1000)
mm2 = [m for b,m in chain2]
bb2 = [b for b,m in chain2]
# Scatterplot of m,b posterior samples
plt.clf()
plt.contour(bgrid, mgrid, posterior, pdf_contour_levels(posterior), colors='k')
#plt.gca().set_aspect((bhi-blo)/(mhi-mlo))
plt.plot(bb, mm, 'b.', alpha=0.1)
plt.plot(bb2, mm2, 'r.', alpha=0.1)
#plot_mb_setup()
plt.show()
# Traces, for convergence inspection:
plt.clf()
plt.subplot(2,1,1)
plt.plot(mm, 'b-')
plt.plot(mm2, 'r-')
#plt.ylim(mlo,mhi)
plt.ylabel('m')
plt.subplot(2,1,2)
plt.plot(bb, 'b-')
plt.plot(bb2, 'r-')
plt.ylabel('b')
#plt.ylim(blo,bhi)
plt.show()
chain3 = run_MC(m0, mstep/10., b0, bstep, nsteps*10,burn_in = 1000)
mm3 = [m for b,m in chain3]
bb3 = [b for b,m in chain3]
# Scatterplot of m,b posterior samples
plt.clf()
plt.contour(bgrid, mgrid, posterior, pdf_contour_levels(posterior), colors='k')
plt.plot(bb3, mm3, 'k.', alpha=0.1)
plt.plot(bb2, mm2, 'r.', alpha=0.1)
plt.show()
# Traces, for convergence inspection:
plt.clf()
plt.subplot(2,1,1)
plt.plot(mm3, 'k-')
plt.ylabel('m')
plt.subplot(2,1,2)
plt.plot(bb3, 'k-')
plt.ylabel('b')
plt.show()
def autocor (chain, kmax):
x = chain - np.mean(chain)
cor = np.zeros(kmax)
cor[0] =1.0
for k in range(1,kmax):
cor[k] = np.sum(x[0:-k]*x[k:])/np.sum(x*x)
return cor
plt.clf()
kmax = 500
plt.plot(autocor(bb,kmax), 'b-', label = "chain1")
plt.plot(autocor(bb2,kmax), 'r-',label = "chain2")
plt.plot(autocor(bb3,kmax), 'k-',label = "chain3")
plt.ylabel(r'$\rho(b)$')
plt.xlabel('lag [steps]')
plt.ylim(-0.2,1.0)
plt.legend()
plt.show()
def autocor_scaled (chain, kmax):
x = chain - np.mean(chain)
cor = np.zeros(kmax)
cor[0] = 1.0
for k in range(1,kmax):
cor[k] = np.sum(x[0:-k]*x[k:])/np.sum(x*x)
return np.arange(0,kmax)/float(len(chain)),cor
plt.clf()
plt.subplot(2,1,1)
kmax = 500
r, cor = autocor_scaled(mm,kmax)
plt.plot(r,cor, 'b-', label = "chain1")
r, cor = autocor_scaled(mm2,kmax)
plt.plot(r,cor, 'r-', label = "chain2")
r, cor = autocor_scaled(mm3,10*kmax)
plt.plot(r,cor, 'k-', label = "chain3")
plt.ylabel(r'$\rho(m)$')
plt.xlim(0,.1)
plt.ylim(-0.2,1.0)
plt.legend()
plt.subplot(2,1,2)
r, cor = autocor_scaled(bb,kmax)
plt.plot(r,cor, 'b-')
r, cor = autocor_scaled(bb2,kmax)
plt.plot(r,cor, 'r-')
r, cor = autocor_scaled(bb3,10*kmax)
plt.plot(r,cor, 'k-')
plt.xlim(0,.1)
plt.ylim(-0.2,1.0)
plt.ylabel(r'$\rho(b)$')
plt.xlabel('lag [chain length]')
plt.show()
def gelmanrubin(chains):
M = chains.shape[0]
N = chains.shape[1]
thetaJ = np.mean(chains,axis =1)
thetabar = np.mean(chains)
sJ = np.zeros(M)
for i in range(0,M):
sJ[i] = 1./(N-1.0)*np.sum(np.power(chains[i,:]-thetaJ[i],2.))
W = 1./float(M)*np.sum(sJ)
B = float(N)/(M-1.)*np.sum(np.power(thetaJ-thetabar,2.0))
vartheta = float(N-1)/float(N)*W +B/float(N)
return np.sqrt(vartheta/W)
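# For reference, gelmanrubin() above computes the Gelman-Rubin statistic:
# with W the mean within-chain variance and B the between-chain variance,
#   R = sqrt( ((N-1)/N * W + B/N) / W )
# Values close to 1 indicate the chains are sampling the same distribution.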
M = 10
burnin = 1000
nsteps = 1000
chains_m = np.zeros((M,nsteps))
chains_b = np.zeros((M,nsteps))
for J in range(0,M):
m0 = 5.*np.random.uniform()
b0 = 500.*np.random.uniform()
chaini = run_MC(m0, mstep, b0, bstep, nsteps, burn_in = burnin)
chains_m[J,:] = [m for b,m in chaini]
chains_b[J,:] = [b for b,m in chaini]
print "\n\nR(m) = %f" %(gelmanrubin(chains_m))
print "R(b) = %f" %(gelmanrubin(chains_b))
# As a test, add in one chain with a smaller step size, which is unlikely to have converged:
chain = run_MC(m0, mstep/10., b0, bstep/20., nsteps, burn_in = burnin)
chains_m[M-1,:] = [m for b,m in chain]
chains_b[M-1,:] = [b for b,m in chain]
print "\n\nR(m) = %f" %(gelmanrubin(chains_m))
print "R(b) = %f" %(gelmanrubin(chains_b))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The process is quite simple and only requires the user to store the .qasm file in an appropriate location and keep track of its absolute path; this makes reading the file simpler. For this demonstration, we already saved a few qasm circuit examples in the directory qasm_files. You can find more examples in the OpenQASM repository. Let's start off by reading one of the examples
Step2: Qasm Import
Step3: The mode refers to the internal way in which QuTiP processes the QASM files.
Step4: Custom Gates
Step5: Furthermore, the circuit also measures the two qubits q[0] and q[1] and stores the results in the classical registers c[0] and c[1]
Step6: We can now run the circuit to confirm that the circuit is correctly loaded and performs the correct operations. To do this, we can use the QubitCircuit.run function with the appropriate input state. In our case, we can take the state |01⟩.
Step7: As predicted, the output is the state after swapping, which is |10⟩
Step8: We can also read in a QASM file from a string by specifying strmode=True to read_qasm
Step9: Note
Step10: Note
|
<ASSISTANT_TASK:>
Python Code:
from qutip_qip.qasm import read_qasm
from qutip import rand_ket, tensor, basis
from qutip_qip.circuit import Measurement
import numpy as np
path = "qasm_files/swap.qasm"
qasm_file = open(path, "r")
print(qasm_file.read())
from qutip_qip.operations.gates import gate_sequence_product
from qutip import tensor, basis
qc = read_qasm(path, mode="qiskit", version="2.0")
gate_sequence_product(qc.propagators())
from qutip_qip.qasm import print_qasm
print_qasm(qc)
path = "qasm_files/swap_custom.qasm"
qasm_file = open(path, "r")
print(qasm_file.read())
qc = read_qasm(path)
from qutip import tensor, basis
qc.run(tensor(basis(2, 0), basis(2, 1)))
path = "qasm_files/teleportation.qasm"
qasm_file = open(path, "r")
qasm_str = qasm_file.read()
print(qasm_str)
teleportation = read_qasm(qasm_str, strmode=True)
state = tensor(rand_ket(2), basis(2, 0), basis(2, 0))
initial_measurement = Measurement("start", targets=[0])
_, initial_probabilities = initial_measurement.measurement_comp_basis(state)
state_final = teleportation.run(state)
final_measurement = Measurement("start", targets=[2])
_, final_probabilities = final_measurement.measurement_comp_basis(state_final)
np.testing.assert_allclose(initial_probabilities, final_probabilities)
from qutip.ipynbtools import version_table
version_table()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In Python, models should not be saved with pickle; the Stan backend attached to the model object will not pickle well, and will produce issues under certain versions of Python. Instead, you should use the built-in serialization functions to serialize the model to json
Step2: The json file will be portable across systems, and deserialization is backwards compatible with older versions of prophet.
Step4: Note that if this is used on a time series that doesn't have a constant trend, any trend will be fit with the noise term and so there will be high predictive uncertainty in the forecast.
|
<ASSISTANT_TASK:>
Python Code:
%%R
saveRDS(m, file="model.RDS") # Save model
m <- readRDS(file="model.RDS") # Load model
import json
from prophet.serialize import model_to_json, model_from_json
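# The cells below also use pandas and the Prophet class directly; those imports
# are not shown in this excerpt, so they are assumed here:
import pandas as pd
from prophet import Prophet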
with open('serialized_model.json', 'w') as fout:
json.dump(model_to_json(m), fout) # Save model
with open('serialized_model.json', 'r') as fin:
m = model_from_json(json.load(fin)) # Load model
%%R
m <- prophet(df, growth='flat')
m = Prophet(growth='flat')
def stan_init(m):
    """Retrieve parameters from a trained model.
    Retrieve parameters from a trained model in the format
    used to initialize a new Stan model.
    Parameters
    ----------
    m: A trained model of the Prophet class.
    Returns
    -------
    A Dictionary containing retrieved parameters of m.
    """
res = {}
for pname in ['k', 'm', 'sigma_obs']:
res[pname] = m.params[pname][0][0]
for pname in ['delta', 'beta']:
res[pname] = m.params[pname][0]
return res
df = pd.read_csv('../examples/example_wp_log_peyton_manning.csv')
df1 = df.loc[df['ds'] < '2016-01-19', :] # All data except the last day
m1 = Prophet().fit(df1) # A model fit to all data except the last day
%timeit m2 = Prophet().fit(df) # Adding the last day, fitting from scratch
%timeit m2 = Prophet().fit(df, init=stan_init(m1)) # Adding the last day, warm-starting from m1
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Data fetching and preprocessing
Step2: We can see from this example that the textual data are not very clean
Step3: Textual data preprocessing
Step4: define stop words
Step5: words to understand as one token
Step6: a high-dimensional vocabulary
Step7: Translate token to index
Step8: Illustration
Step9: Implementation of the Latent Dirichlet Allocation
Step10: Initialisation
Step11: Understanding the object
Step12: We can check that the occurrences of word_0 in the full corpus are dispatched among the topics.
Step13: Understanding the object doc_topic
Step14: We can check that the number of words classified matches the number of words in the document
Step15: Algorithm
Step16: The result
Step17: definition of topics with their related words
Step20: A complete analysis of the result is beyond our goal here, since the data are a little unclean due to the fetching method (other information that, we guess, surrounded the article on the web page is sometimes merged into the article text). Still, we can interpret topic 4 as relating to international matters, topic 3 to economics, topic 2 to juridical matters, topic 1 to the global picture in terms of the network, and topic 0 to the act itself.
Step21: Study of time convergence
Step22: Influence of Dirichlet parameters
Step23: Influence of the number of topics
|
<ASSISTANT_TASK:>
Python Code:
from jyquickhelper import add_notebook_menu
add_notebook_menu()
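# The original notebook's import cell is not included in this excerpt; the cells
# below rely on (at least) the modules assumed here. The exact sources are a
# best guess from usage (e.g. gammaln is taken from scipy.special, the
# Dictionary class from gensim.corpora).
import re
import time
from random import shuffle
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import nltk
from gensim import corpora
from scipy.special import gammaln
from sklearn.feature_extraction.text import TfidfVectorizer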
#get raw data
import xml.etree.ElementTree as ET
tree = ET.parse('../dataset/nysk.xml')
root = tree.getroot()
root1 = root.getchildren()[150].getchildren()
texts=[]
for document in root.iter('document'):
text = document.find('text').text
texts += [text]
#Example of text
texts[1]
# Sample texts
print(len(texts))
shuffle(texts)
texts_test = texts[:250]
print(len(texts_test))
from nltk.tokenize import MWETokenizer
from nltk.stem.snowball import SnowballStemmer
my_stop_words = nltk.corpus.stopwords.words('english')
# Add my stopwords
my_stop_words = my_stop_words + ["n't", "'s", "wednesday", "year",
"ve", "said", "a", "would", "may", "say", "saturday",
"thursday", "select", "one", "part"]
tokenizer = MWETokenizer([("world", "trade", "organisation"), ('dominique', 'strauss-kahn'),
("international", "monetary", "fund"), ('new', 'york'), ("wall", "street")])
stemmer = SnowballStemmer("english")
texts_tok = []
for text in texts_test:
tokens = [word.lower() for sent in nltk.sent_tokenize(text) for word in nltk.word_tokenize(sent)]
tokens = tokenizer.tokenize(tokens)
filtered_tokens = []
for token in tokens:
if token not in my_stop_words:
if re.search('[a-zA-Z]', token):
stemmed_token = stemmer.stem(token)
filtered_tokens.append(stemmed_token)
texts_tok += [filtered_tokens]
from collections import defaultdict
frequency = defaultdict(int)
for text in texts_tok:
for token in text:
frequency[token] += 1
df_freq = pd.DataFrame(frequency, index=['value']).T
print('Most frequent words')
print(df_freq.sort_values(['value'], ascending=False).head())
df_freq[df_freq['value'] <10].hist(['value'], bins=50)
plt.title('Distribution for the least frequent words')
print()
# Extract the most discrimnant tokens
def tokenStem(text):
tokens = [word.lower() for sent in nltk.sent_tokenize(text) for word in nltk.word_tokenize(sent)]
tokens = tokenizer.tokenize(tokens)
filtered_tokens = []
for token in tokens:
if re.search('[a-zA-Z]', token):
stemmed_token = stemmer.stem(token)
filtered_tokens.append(stemmed_token)
return filtered_tokens
tfidf_vectorizer = TfidfVectorizer(max_df=0.8, max_features=200000,
min_df=0.2, stop_words=my_stop_words,
use_idf=True, tokenizer=tokenStem, ngram_range=(1,1))
%time tfidf_matrix = tfidf_vectorizer.fit_transform(texts_test)
terms = tfidf_vectorizer.get_feature_names()
#Build
texts_red = [[token for token in text if token in terms] for text in texts_tok]
# Building the dictionnary
dictionary = corpora.Dictionary(texts_red)
# Store the translation rule in a pd.DataFrame
dict_df = pd.DataFrame(data=dictionary.token2id, index=['value']).T
dict_df.head()
# Translating our corpus
texts_idx = [dictionary.doc2idx(text) for text in texts_red]
# Example
texts_idx[2]
length = []
for text in range(len(texts_idx)):
length += [len(texts_idx[text])]
plt.figure()
plt.boxplot(length)
print(max(length), min(length))
n_docs = len(texts_idx) # Number of documents in corpus
n_words = len(dict_df) # Number of words in full corpus
n_topics =5 # Number of topics we want to find
n_iter =20
alpha = 0.1
beta =0.1
def initialisation(n_docs,n_topics,n_words,texts_idx):
doc_topic = np.zeros((n_docs, n_topics)) # number of words per topic for each doc
word_topic = np.zeros((n_topics, n_words)) # count of each word for each topic
doc = np.zeros(n_docs) # number of words for each doc/length of each doc
topic = np.zeros(n_topics) # number of words for each topic
topics_peridx = {} # topic assigned for each word for each document
for d in range(n_docs):
idx =0
for w in texts_idx[d]:
# generate random data for the first step
t=np.random.randint(n_topics)
doc_topic[d, t] +=1 #
doc[d] +=1
word_topic[t,w] +=1
topic[t] +=1
topics_peridx[(d,idx)] = t
idx +=1
output = [doc_topic, doc, word_topic, topic, topics_peridx]
return output
new = initialisation(n_docs,n_topics,n_words,texts_idx)
doc_topic = new[0]
doc = new[1]
word_topic = new[2]
topic = new[3]
topics_peridx =new[4]
print(word_topic.shape) # n_topics*n_words (number of topics * number of words in final dictionary)
print(word_topic)
# Find the word corresponding to the index 0
value0 = dict_df[dict_df.value==0].index[0]
print(value0)
# Look for its frequency inside the final vocabulary (of texts_red)
freq_red = defaultdict(int)
for text in texts_red:
for token in text:
freq_red[token] += 1
print('Number of occurences in full corpus:',freq_red[value0])
print('Dispatched word_0 to each topic:',freq_red[value0] == sum(word_topic[:,0]) )
print(doc_topic[0:10])
print()
print('Matrix shape',doc_topic.shape) # n_docs*n_topics
print('Number of words in document_0:', len(texts_idx[0]))
print('Equals to sum of words in each topic for document_0:',sum(doc_topic[0])==len(texts_idx[0]))
def pi(d,w, alpha, beta):
'''
Compute p(t|w, -t):
the full conditional distribution of topic t given the word w
'''
left = (word_topic[:,w] + beta) / (topic + beta*n_words)
right = (doc_topic[d,:] + alpha) / (doc[d] + alpha*n_topics)
p_t = left*right # is equivalent
p_t /= (np.sum(p_t)) # normalization to get a probability
return(p_t)
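# For reference, pi() above implements the collapsed Gibbs sampling conditional:
#   p(t | w, d, rest) is proportional to
#       (n_{t,w} + beta)  / (n_t + beta * n_words)      # how much topic t likes word w
#     * (n_{d,t} + alpha) / (n_d + alpha * n_topics)    # how much document d uses topic t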
start_time = time.time()
for iteration in range(n_iter):
print('iteration:',iteration)
for d in range(n_docs):
idx =0
for w in texts_idx[d]:
t = topics_peridx[(d,idx)]
# withdraw the current assignment of t
doc_topic[d, t] -=1
doc[d] -=1
word_topic[t,w] -=1
topic[t] -=1
# compute the conditional distribution
p_t = pi(d,w,alpha, beta)
# choose the topic for word w
t = np.random.multinomial(1,p_t)
t= t.argmax()
doc_topic[d, t] +=1
doc[d] +=1
word_topic[t,w] +=1
topic[t] +=1
topics_peridx[(d,idx)] = t
idx +=1
print("--- %s seconds ---" % (time.time() - start_time))
# Relative number of words per topic
pd.DataFrame(topic/sum(topic)*100).T
# Distribution of words per topic
word_topic_df = pd.DataFrame(word_topic)
word_topic_df.columns = dict_df.sort_values(['value']).index
word_topic_df
# Estimation of pi : P(w|t)
word_topic_df / word_topic_df.sum(axis=0)
for t in range(n_topics):
topic_ = word_topic_df.iloc[t]
print('topic', t)
print(topic_[topic_ >50].index)
def log_multi_beta(alpha, K=None):
    """Logarithm of the multinomial beta function."""
if K is None:
# alpha is assumed to be a vector
return np.sum(gammaln(alpha)) - gammaln(np.sum(alpha))
else:
# alpha is assumed to be a scalar
return K * gammaln(alpha) - gammaln(K*alpha)
def loglikelihood():
    """Compute the likelihood that the model generated the data."""
loglik = 0
for t in range(n_topics):
loglik += log_multi_beta(word_topic[t,:]+beta)
loglik -= log_multi_beta(beta, n_words)
for d in range(n_docs):
loglik += log_multi_beta(doc_topic[d,:]+alpha)
loglik -= log_multi_beta(alpha, n_topics)
return loglik
def LDA(n_iter,alpha,beta, verbose =False):
logliks = []
for iteration in range(n_iter):
for d in range(n_docs):
idx =0
for w in texts_idx[d]:
t = topics_peridx[(d,idx)]
# withdraw the current assignment of t
doc_topic[d, t] -=1
doc[d] -=1
word_topic[t,w] -=1
topic[t] -=1
p_t = pi(d,w, alpha, beta)
t = np.random.multinomial(1,p_t)
t= t.argmax()
doc_topic[d, t] +=1
doc[d] +=1
word_topic[t,w] +=1
topic[t] +=1
topics_peridx[(d,idx)] = t
idx +=1
if (iteration % 5==0):
print('iteration:',iteration)
if (verbose==True):
loglik = loglikelihood()
print("loglikelihood",round(loglik))
logliks += [loglik]
if verbose == False:
logliks = loglikelihood()
print("loglikelihood",round(logliks))
return(logliks)
new = initialisation(n_docs,n_topics,n_words,texts_idx)
doc_topic = new[0]
doc = new[1]
word_topic = new[2]
topic = new[3]
topics_peridx =new[4]
n_iter = 70
%time convergenceLDA = LDA(n_iter, alpha,beta, verbose = True)
x = range(n_iter)
fig = plt.figure()
plt.plot(x,convergenceLDA)
plt.ylabel('loglikelihood')
plt.xlabel('iterations')
plt.ylim(-290000, -210000)
#Study on alpha
lik_alpha = []
iter_alpha = np.linspace(0.1, 2.0, num=10).tolist()
for alpha in iter_alpha:
print('alpha:',alpha)
new = initialisation(n_docs,n_topics,n_words,texts_idx)
doc_topic = new[0]
doc = new[1]
word_topic = new[2]
topic = new[3]
topics_peridx =new[4]
lik = LDA(20, alpha, beta)
lik_alpha += [lik]
fig = plt.figure()
plt.plot(iter_alpha,lik_alpha)
plt.ylabel('loglikelihood')
plt.xlabel('alpha')
plt.ylim(-290000, -210000)
alpha=0.1
lik_beta = []
iter_beta = np.linspace(0.1, 2.0, num=10).tolist()
for beta in iter_beta:
print('beta:',beta)
new = initialisation(n_docs,n_topics,n_words,texts_idx)
doc_topic = new[0]
doc = new[1]
word_topic = new[2]
topic = new[3]
topics_peridx =new[4]
lik = LDA(20, alpha, beta)
lik_beta += [lik]
fig = plt.figure()
plt.plot(iter_beta,lik_beta)
plt.ylabel('pseudo loglikelihood')
plt.xlabel('beta')
plt.ylim(-77000, -57000)
print(min(lik_beta), max(lik_beta))
alpha=0.1
beta =0.75
iter_topics = range(2,6)
lik_topics = []
for n_topics in iter_topics:
print('n_topics:',n_topics)
new = initialisation(n_docs,n_topics,n_words,texts_idx)
doc_topic = new[0]
doc = new[1]
word_topic = new[2]
topic = new[3]
topics_peridx =new[4]
lik = LDA(20, alpha, beta)
lik_topics += [lik]
fig = plt.figure()
plt.plot(iter_topics,lik_topics)
plt.ylabel('loglikelihood')
plt.xlabel('number of topics')
plt.ylim(-290000, -210000)
print(min(lik_topics), max(lik_topics))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Recall that tables in relational databases are linked through primary keys and sometimes other fields that are common to multiple tables (as is the case with our Dognition data set). Our goal when we execute a JOIN or make a joined table is to use those common columns to let the database figure out which rows in one table match up to which rows in another table. Once that mapping is established using at least one common field or column, the database can pull any columns you want out of the mapped, or joined, tables and output the matched data to one common table.
Step2: You should receive an error message stating that the identity of dog_guid and user_guid in the field list is ambiguous. The reason is that columns with those names exist in both tables, and MySQL doesn't know which one we want. We have to specify the table name before stating the field name, and separate the two names by a period <mark>(NOTE
Step3: The query should execute quickly. This would NOT have been the case if you did not include the WHERE clause to combine the two tables. If you accidentally request a Cartesian product from datasets with billions of rows, you could be waiting for your query output for days (and will probably get in trouble with your database administrator). <mark>So always remember to tell the database how to join your tables!</mark>
Step4: It's clear from looking at this output that (A) not many customers provided ratings, and (B) when they did, they usually were not very surprised by their dog's performance. Therefore, these ratings are probably not going to provide a lot of instructive insight into how to improve Dognition's completion rate. However, the ratings table still provides a great opportunity to illustrate the results of different types of joins.
Step5: These counts indicate some important things
Step6: When you run the query by joining on the dog_guid only, you still get 389 rows in your output. When you run the query by joining on the user_guid only, you get 5586 rows in your output. This means that
Step7: 2. Joining More than 2 Tables
Step8: Practice inner joining your own tables!
Step9: Question 8
Step10: Question 9
|
<ASSISTANT_TASK:>
Python Code:
%load_ext sql
%sql mysql://studentuser:studentpw@mysqlserver/dognitiondb
%sql USE dognitiondb
%config SqlMagic.displaylimit=25
%%sql
SELECT d.dog_guid AS DogID, d.user_guid AS UserID, AVG(r.rating) AS AvgRating,
COUNT(r.rating) AS NumRatings, d.breed, d.breed_group, d.breed_type
FROM dogs d, reviews r
WHERE d.dog_guid=r.dog_guid AND d.user_guid=r.user_guid
GROUP BY d.user_guid
HAVING NumRatings >= 10
ORDER BY AvgRating DESC
LIMIT 200
%%sql
SELECT d.dog_guid AS DogID, d.user_guid AS UserID, AVG(r.rating) AS AvgRating,
COUNT(r.rating) AS NumRatings, d.breed, d.breed_group, d.breed_type
FROM dogs d, reviews r
WHERE d.dog_guid=r.dog_guid AND d.user_guid=r.user_guid
GROUP BY d.user_guid
ORDER BY AvgRating DESC
%%sql
SELECT COUNT(DISTINCT dog_guid) AS uniq_dog_guid, COUNT(DISTINCT user_guid) AS uniq_user_guid
FROM reviews
%%sql
SELECT COUNT(DISTINCT dog_guid) AS uniq_dog_guid, COUNT(DISTINCT user_guid) AS uniq_user_guid
FROM dogs
%%sql
SELECT d.dog_guid AS DogID, d.user_guid AS UserID, AVG(r.rating) AS AvgRating,
COUNT(r.rating) AS NumRatings, d.breed, d.breed_group, d.breed_type
FROM dogs d, reviews r
WHERE d.dog_guid=r.dog_guid
GROUP BY d.user_guid
ORDER BY AvgRating DESC
%%sql
SELECT d.dog_guid AS DogID, d.user_guid AS UserID, AVG(r.rating) AS AvgRating,
COUNT(r.rating) AS NumRatings, d.breed, d.breed_group, d.breed_type
FROM dogs d, reviews r
WHERE d.user_guid=r.user_guid
GROUP BY d.user_guid
ORDER BY AvgRating DESC
%%sql
SELECT d.user_guid, d.dog_guid, d.breed, d.breed_type, d.breed_group
FROM dogs d, complete_tests t
WHERE d.dog_guid = t.dog_guid AND t.test_name='Yawn Warm-up'
%%sql
show tables
%%sql
Describe complete_tests
%%sql
SELECT DISTINCT u.user_guid, u.membership_type, d.dog_guid
FROM users u, dogs d, complete_tests t
WHERE
d.dog_guid = t.dog_guid
AND u.user_guid = d.user_guid
AND d.breed = 'Golden Retriever'
%%sql
show tables
%%sql
Describe complete_tests
%%sql
Describe users
%%sql
Describe dogs
%%sql
SELECT count(breed)
from dogs
where breed = 'Golden Retriever'
limit 5
%%sql
SELECT distinct d.dog_guid
FROM dogs d, complete_tests t
WHERE
d.dog_guid = t.dog_guid
AND d.breed = 'Golden Retriever'
%%sql
SELECT COUNT(distinct d.dog_guid)
FROM users u, dogs d
WHERE
u.user_guid = d.user_guid
AND d.breed = 'Golden Retriever'
AND u.state = 'NC'
%%sql
SELECT COUNT(distinct d.dog_guid)
FROM users u, dogs d
WHERE
u.user_guid = d.user_guid
AND d.breed = 'Golden Retriever'
GROUP BY u.state
HAVING u.state='NC'
%%sql
SELECT COUNT(DISTINCT u.user_guid)
FROM users u, reviews r
WHERE u.user_guid = r.user_guid
GROUP BY u.membership_type
LIMIT 5;
%%sql
SELECT d.breed, COUNT(s.script_detail_id) AS site_activity_amount
FROM dogs d, site_activities s
WHERE d.dog_guid = s.dog_guid
AND s.script_detail_id IS NOT NULL
GROUP BY d.breed
ORDER BY site_activity_amount DESC
LIMIT 0, 5
%%sql
Describe site_activities
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Calculate the mean
Step2: Calculate the median
Step3: Calculate the mode
Step4: Calculate the standard deviation
Step5: Alternatively, we can import the whole statistics module at once (all the functions in the statistics module) using the line import statistics
|
<ASSISTANT_TASK:>
Python Code:
from statistics import mean, median, mode, stdev
test_scores = [60 , 83, 83, 91, 100]
mean(test_scores)
median(test_scores)
mode(test_scores)
stdev(test_scores)
import statistics
test_scores = [60 , 83, 83, 91, 100]
statistics.mean(test_scores)
statistics.median(test_scores)
statistics.mode(test_scores)
statistics.stdev(test_scores)
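# A small extra check (assumes Python 3.4+, where the statistics module and
# statistics.variance() were introduced): the sample standard deviation is the
# square root of the sample variance, so these agree up to floating-point rounding.
statistics.variance(test_scores)
statistics.stdev(test_scores) ** 2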
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2) local histogram equalization
Step2: 3) Filter out the noise in the image to get only the points we actually want
Step3: 4) Now that you've calculated the points, you have options.
Step4: Sample output from generate_plotly_html()
Step5: Sample output from generate_density_graph()
|
<ASSISTANT_TASK:>
Python Code:
from clarityviz import claritybase
token = 'Fear199'
source_directory = '/cis/home/alee/claritycontrol/code/data/raw'
# Initialize the claritybase object, the initial basis for all operations.
# After you initialize with a token and source directory, a folder will be created in your current directory
# with the token name, and all the output files will be stored there.
cb = claritybase(token, source_directory)
cb.applyLocalEq()
cb.loadGeneratedNii()
cb.calculatePoints(threshold = 0.9, sample = 0.1)
cb.generate_plotly_html()
# savePoints generates the csv file of all the points in the graph.
cb.savePoints()
# plot3d calculates all the edges between the nodes.
cb.plot3d()
# graphmlconvert() creates a graphml file based on the nodes and edges file generated in plot3d.
cb.graphmlconvert()
from clarityviz import densitygraph
# Uses the same token before, must be in the same directory as before.
dg = densitygraph(token)
# generates a 3d plotly with color representations of density
dg.generate_density_graph()
# generates a heat map, essentially a legend, telling how many edges a certain color represents,
# with number of edges representing how dense a certain node clustering may be.
dg.generate_heat_map()
from clarityviz import atlasregiongraph
regiongraph = atlasregiongraph(token)
regiongraph.generate_atlas_region_graph()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Note pairs of clusters with similar mean densities and y-values.
Step2: This clustering result is more along the lines of what we expected, as it partitions by different y-values and densities. Let's make some graphs of the previous two clusters.
Step3: Observe the similarity in clusters when projected onto the y-axis. Also note difference in x and z coordinate means for the first 2 and second 2 clusters. Let's try kmeans with k=2.
Step4: Fit 2 gaussians...
Step5: Clustering results, for the most part, do not seem to indicate a particular cut-off between layers, although they do indicate some correlation between y coordinates and density.
Step6: There is some negative correlation between density and the y coordinate, although it is not extreme. Let's compare the correlation for each coordinate.
|
<ASSISTANT_TASK:>
Python Code:
from sklearn import cluster
kmeans1 = cluster.KMeans(4)
kmeans1.fit_predict(data)
print kmeans1.cluster_centers_
data_yd = data[:, (1, 3)]
kmeans2 = cluster.KMeans(4)
kmeans2.fit_predict(data_yd)
print kmeans2.cluster_centers_
colors = ['b', 'g', 'r', 'c', 'm']
for i, c in zip(range(4), colors):
a = np.where(kmeans2.labels_ == i)
plt.scatter(data[a, 1], data[a, -1],
alpha = .2,
color = c,
label='cluster #' + str(i))
plt.legend(bbox_to_anchor=(1.4, 1.00))
plt.xlabel('y coordinate (normalized)')
plt.ylabel('synaptic density (normalized)')
plt.title('KMeans clustering, k=4, only y-coord and density considered')
plt.show()
print kmeans2.cluster_centers_
for i, c in zip(range(4), colors):
a = np.where(kmeans1.labels_ == i)
plt.scatter(data[a, 1], data[a, -1],
alpha = .2,
color = c,
label='cluster #' + str(i))
plt.legend(bbox_to_anchor=(1.4, 1.00))
plt.xlabel('y coordinate (normalized)')
plt.ylabel('synaptic density (normalized)')
plt.title('Kmeans clusters, k=4, x,y,z coords and density considered')
plt.show()
for i, c in zip(range(4), colors):
a = np.where(kmeans1.labels_ == i)
plt.scatter(data[a, 0], data[a, -1],
alpha = .2,
color = c,
label='cluster #' + str(i))
plt.legend(bbox_to_anchor=(1.4, 1.00))
plt.xlabel('x coordinate (normalized)')
plt.ylabel('synaptic density (normalized)')
plt.title('Kmeans clusters, k=4, x,y,z coords and density considered')
plt.show()
print kmeans1.cluster_centers_
kmeans3 = cluster.KMeans(2)
kmeans3.fit_predict(data)
for i, c in zip(range(2), colors):
a = np.where(kmeans3.labels_ == i)
plt.scatter(data[a, 1], data[a, -1],
alpha = .2,
color = c,
label='cluster #' + str(i))
plt.legend(bbox_to_anchor=(1.4, 1.00))
plt.xlabel('y coordinate (normalized)')
plt.ylabel('synaptic density (normalized)')
plt.title('KMeans clustering, k=2, xyz and density considered')
plt.show()
print kmeans3.cluster_centers_
kmeans4 = cluster.KMeans(2)
kmeans4.fit_predict(data[:, (1, 3)])
for i, c in zip(range(2), colors):
a = np.where(kmeans4.labels_ == i)
plt.scatter(data[a, 1], data[a, -1],
alpha = .2,
color = c,
label='cluster #' + str(i))
plt.legend(bbox_to_anchor=(1.4, 1.00))
plt.xlabel('y coordinate (normalized)')
plt.ylabel('synaptic density (normalized)')
plt.title('KMeans clustering, k=2, y coord and density considered')
plt.show()
print kmeans4.cluster_centers_
from sklearn import mixture
gmm = mixture.GMM(2)
labels = gmm.fit_predict(data)
print gmm.means_
print "compare with kmeans, k=2, all coordinates"
print kmeans3.cluster_centers_
for i, c in zip(range(2), colors):
a = np.where(labels == i)
plt.scatter(data[a, 1], data[a, -1],
alpha = .2,
color = c,
label='cluster #' + str(i))
plt.legend(bbox_to_anchor=(1.4, 1.00))
plt.xlabel('y coordinate (normalized)')
plt.ylabel('synaptic density (normalized)')
plt.title('2 Component Gaussian Mixture Model Prediction')
plt.show()
# now do GMM w/ 4 clusters, since Bock 2011 suggests 4 layers
gmm = mixture.GMM(4)
labels = gmm.fit_predict(data)
for i, c in zip(range(4), colors):
a = np.where(labels == i)
plt.scatter(data[a, 1], data[a, -1],
alpha = .2,
color = c,
label='cluster #' + str(i))
plt.legend(bbox_to_anchor=(1.4, 1.00))
plt.xlabel('y coordinate (normalized)')
plt.ylabel('synaptic density (normalized)')
plt.title('4 Component Gaussian Mixture Model Prediction')
plt.show()
for i, c in zip(range(4), colors):
a = np.where(labels == i)
plt.scatter(data[a, 0], data[a, -1],
alpha = .2,
color = c,
label='cluster #' + str(i))
plt.legend(bbox_to_anchor=(1.4, 1.00))
plt.xlabel('x coordinate (normalized)')
plt.ylabel('synaptic density (normalized)')
plt.title('4 Component Gaussian Mixture Model Prediction')
plt.show()
print gmm.means_
print np.cov(data[:, 1], data[:, -1])
for i, coord in enumerate(['cx', 'cy', 'cz']):
print "correlation between density and " + coord
print np.cov(data[:, i], data[:, -1])
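# Note: np.cov above prints covariance matrices, not normalized correlations.
# A small follow-up sketch (an addition, not part of the original analysis)
# using the Pearson correlation coefficient via np.corrcoef, which is easier
# to compare across coordinates because it is scale-free:
for i, coord in enumerate(['cx', 'cy', 'cz']):
    print("Pearson correlation between density and " + coord)
    print(np.corrcoef(data[:, i], data[:, -1])[0, 1])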
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps
Step2: Inline Question #1
Step3: You should expect to see approximately 27% accuracy. Now let's try out a larger k, say k = 5
Step5: You should expect to see a slightly better performance than with k = 1.
Step6: Cross-validation
|
<ASSISTANT_TASK:>
Python Code:
# Run some setup code for this notebook.
import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt
# This is a bit of magic to make matplotlib figures appear inline in the notebook
# rather than in a new window.
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# Some more magic so that the notebook will reload external python modules;
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
# Load the raw CIFAR-10 data.
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# As a sanity check, we print out the size of the training and test data.
print 'Training data shape: ', X_train.shape
print 'Training labels shape: ', y_train.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
# Visualize some examples from the dataset.
# We show a few examples of training images from each class.
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
num_classes = len(classes)
samples_per_class = 7
for y, cls in enumerate(classes):
idxs = np.flatnonzero(y_train == y)
idxs = np.random.choice(idxs, samples_per_class, replace=False)
for i, idx in enumerate(idxs):
plt_idx = i * num_classes + y + 1
plt.subplot(samples_per_class, num_classes, plt_idx)
plt.imshow(X_train[idx].astype('uint8'))
plt.axis('off')
if i == 0:
plt.title(cls)
plt.show()
# Subsample the data for more efficient code execution in this exercise
num_training = 5000
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
num_test = 500
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Reshape the image data into rows
X_train = np.reshape(X_train, (X_train.shape[0], -1))
X_test = np.reshape(X_test, (X_test.shape[0], -1))
print X_train.shape, X_test.shape
from cs231n.classifiers import KNearestNeighbor
# Create a kNN classifier instance.
# Remember that training a kNN classifier is a noop:
# the Classifier simply remembers the data and does no further processing
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
# Open cs231n/classifiers/k_nearest_neighbor.py and implement
# compute_distances_two_loops.
# Test your implementation:
dists = classifier.compute_distances_two_loops(X_test)
print dists.shape
# We can visualize the distance matrix: each row is a single test example and
# its distances to training examples
plt.imshow(dists, interpolation='none')
plt.show()
# Now implement the function predict_labels and run the code below:
# We use k = 1 (which is Nearest Neighbor).
y_test_pred = classifier.predict_labels(dists, k=1)
# Compute and print the fraction of correctly predicted examples
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
y_test_pred = classifier.predict_labels(dists, k=5)
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
# Now lets speed up distance matrix computation by using partial vectorization
# with one loop. Implement the function compute_distances_one_loop and run the
# code below:
dists_one = classifier.compute_distances_one_loop(X_test)
# To ensure that our vectorized implementation is correct, we make sure that it
# agrees with the naive implementation. There are many ways to decide whether
# two matrices are similar; one of the simplest is the Frobenius norm. In case
# you haven't seen it before, the Frobenius norm of two matrices is the square
# root of the squared sum of differences of all elements; in other words, reshape
# the matrices into vectors and compute the Euclidean distance between them.
difference = np.linalg.norm(dists - dists_one, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
# Now implement the fully vectorized version inside compute_distances_no_loops
# and run the code
dists_two = classifier.compute_distances_no_loops(X_test)
# check that the distance matrix agrees with the one we computed before:
difference = np.linalg.norm(dists - dists_two, ord='fro')
print 'Difference was: %f' % (difference, )
if difference < 0.001:
print 'Good! The distance matrices are the same'
else:
print 'Uh-oh! The distance matrices are different'
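# For reference only: one common fully vectorized way to get the pairwise
# Euclidean distances (the graded implementation itself lives in
# cs231n/classifiers/k_nearest_neighbor.py; this sketch just assumes numpy).
# It uses the identity ||a - b||^2 = ||a||^2 + ||b||^2 - 2 * a.b with broadcasting:
def l2_distances_no_loops(A, B):
    a_sq = np.sum(A ** 2, axis=1).reshape(-1, 1)   # shape (num_A, 1)
    b_sq = np.sum(B ** 2, axis=1)                   # shape (num_B,)
    cross = A.dot(B.T)                              # shape (num_A, num_B)
    return np.sqrt(np.maximum(a_sq + b_sq - 2 * cross, 0))  # clamp tiny negatives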
# Let's compare how fast the implementations are
def time_function(f, *args):
    """Call a function f with args and return the time (in seconds) that it took to execute."""
import time
tic = time.time()
f(*args)
toc = time.time()
return toc - tic
two_loop_time = time_function(classifier.compute_distances_two_loops, X_test)
print 'Two loop version took %f seconds' % two_loop_time
one_loop_time = time_function(classifier.compute_distances_one_loop, X_test)
print 'One loop version took %f seconds' % one_loop_time
no_loop_time = time_function(classifier.compute_distances_no_loops, X_test)
print 'No loop version took %f seconds' % no_loop_time
# you should see significantly faster performance with the fully vectorized implementation
num_folds = 5
k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100]
X_train_folds = np.array_split(X_train, num_folds)
y_train_folds = np.array_split(y_train, num_folds)
# A dictionary holding the accuracies for different values of k that we find
# when running cross-validation. After running cross-validation,
# k_to_accuracies[k] should be a list of length num_folds giving the different
# accuracy values that we found when using that value of k.
k_to_accuracies = {}
for k in k_choices:
k_to_accuracies[k] = []
for i in xrange(num_folds):
X_cross_test = X_train_folds[i]
y_cross_test = y_train_folds[i]
idx = [j for j in xrange(num_folds) if j != i]
        # start from the first *training* fold (not fold 0, which may be the held-out fold)
        X_cross_train = X_train_folds[idx[0]]
        y_cross_train = y_train_folds[idx[0]]
num_cross_test = y_cross_test.shape[0]
for j in xrange(1, len(idx)):
X_cross_train = np.concatenate((X_cross_train, X_train_folds[idx[j]]))
y_cross_train = np.concatenate((y_cross_train, y_train_folds[idx[j]]))
classifier.train(X_cross_train, y_cross_train)
dists = classifier.compute_distances_no_loops(X_cross_test)
y_test_pred = classifier.predict_labels(dists, k=k)
num_correct = np.sum(y_test_pred == y_cross_test)
accuracy = float(num_correct) / num_cross_test
k_to_accuracies[k].append(accuracy)
# Print out the computed accuracies
for k in sorted(k_to_accuracies):
for accuracy in k_to_accuracies[k]:
print 'k = %d, accuracy = %f' % (k, accuracy)
# plot the raw observations
for k in k_choices:
accuracies = k_to_accuracies[k]
plt.scatter([k] * len(accuracies), accuracies)
# plot the trend line with error bars that correspond to standard deviation
accuracies_mean = np.array(
[np.mean(v) for k, v in sorted(k_to_accuracies.items())])
accuracies_std = np.array(
[np.std(v) for k, v in sorted(k_to_accuracies.items())])
plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std)
plt.title('Cross-validation on k')
plt.xlabel('k')
plt.ylabel('Cross-validation accuracy')
plt.show()
# Based on the cross-validation results above, choose the best value for k,
# retrain the classifier using all the training data, and test it on the test
# data. You should be able to get above 28% accuracy on the test data.
best_k = 10
classifier = KNearestNeighbor()
classifier.train(X_train, y_train)
y_test_pred = classifier.predict(X_test, k=best_k)
# Compute and display the accuracy
num_correct = np.sum(y_test_pred == y_test)
accuracy = float(num_correct) / num_test
print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load raw data
Step2: Use tf.data to read the CSV files
Step3: Build a simple keras DNN model
Step4: Next, we can call the build_model to create the model. Here we'll have two hidden layers before our final output layer. And we'll train with the same parameters we used before.
Step5: Export and deploy model
|
<ASSISTANT_TASK:>
Python Code:
!sudo chown -R jupyter:jupyter /home/jupyter/training-data-analyst
import datetime
import os
import shutil
import pandas as pd
import tensorflow as tf
from matplotlib import pyplot as plt
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.callbacks import TensorBoard
print(tf.__version__)
%matplotlib inline
PROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'cloud-training-demos' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
# For Bash Code
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
%%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
!ls -l ../data/taxi-traffic*
!head ../data/taxi-traffic*
CSV_COLUMNS = [
'fare_amount',
'dayofweek',
'hourofday',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'traffic_last_5min'
]
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0], [0.0]]
def features_and_labels(row_data):
label = row_data.pop(LABEL_COLUMN)
features = row_data
return features, label
def create_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS)
dataset = dataset.map(features_and_labels)
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(buffer_size=1000).repeat()
# take advantage of multi-threading; 1=AUTOTUNE
dataset = dataset.prefetch(1)
return dataset
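# Optional sanity check of the input pipeline (kept commented out; it assumes
# the ../data/taxi-traffic-train* files referenced later in this notebook exist):
# for features, label in create_dataset('../data/taxi-traffic-train*', batch_size=2).take(1):
#     print({name: tensor.numpy() for name, tensor in features.items()})
#     print(label.numpy())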
INPUT_COLS = [
'dayofweek',
'hourofday',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'traffic_last_5min'
]
# Create input layer of feature columns
feature_columns = {
colname: tf.feature_column.numeric_column(colname)
for colname in INPUT_COLS
}
# Build a keras DNN model using Sequential API
def build_model(dnn_hidden_units):
model = Sequential(DenseFeatures(feature_columns=feature_columns.values()))
for num_nodes in dnn_hidden_units:
model.add(Dense(units=num_nodes, activation="relu"))
model.add(Dense(units=1, activation="linear"))
# Create a custom evalution metric
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
# Compile the keras model
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
return model
HIDDEN_UNITS = [32, 8]
model = build_model(dnn_hidden_units=HIDDEN_UNITS)
BATCH_SIZE = 1000
NUM_TRAIN_EXAMPLES = 10000 * 6 # training dataset will repeat, wrap around
NUM_EVALS = 60 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample
trainds = create_dataset(
pattern='../data/taxi-traffic-train*',
batch_size=BATCH_SIZE,
mode=tf.estimator.ModeKeys.TRAIN)
evalds = create_dataset(
pattern='../data/taxi-traffic-valid*',
batch_size=BATCH_SIZE,
mode=tf.estimator.ModeKeys.EVAL).take(NUM_EVAL_EXAMPLES//1000)
%%time
steps_per_epoch = NUM_TRAIN_EXAMPLES // (BATCH_SIZE * NUM_EVALS)
LOGDIR = "./taxi_trained"
history = model.fit(x=trainds,
steps_per_epoch=steps_per_epoch,
epochs=NUM_EVALS,
validation_data=evalds,
callbacks=[TensorBoard(LOGDIR)])
RMSE_COLS = ['rmse', 'val_rmse']
pd.DataFrame(history.history)[RMSE_COLS].plot()
model.predict(x={"dayofweek": tf.convert_to_tensor([6]),
"hourofday": tf.convert_to_tensor([17]),
"pickup_longitude": tf.convert_to_tensor([-73.982683]),
"pickup_latitude": tf.convert_to_tensor([40.742104]),
"dropoff_longitude": tf.convert_to_tensor([-73.983766]),
"dropoff_latitude": tf.convert_to_tensor([40.755174]),
"traffic_last_5min": tf.convert_to_tensor([114])},
steps=1)
OUTPUT_DIR = "./export/savedmodel"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(OUTPUT_DIR,
datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save(model, EXPORT_PATH) # with default serving function
os.environ['EXPORT_PATH'] = EXPORT_PATH
%%bash
PROJECT=${PROJECT}
BUCKET=${BUCKET}
REGION=${REGION}
MODEL_NAME=taxifare
VERSION_NAME=traffic
if [[ $(gcloud ai-platform models list --format='value(name)' --region=$REGION | grep "^$MODEL_NAME$") ]]; then
echo "$MODEL_NAME already exists"
else
# create model
echo "Creating $MODEL_NAME"
gcloud ai-platform models create --region=$REGION $MODEL_NAME
fi
if [[ $(gcloud ai-platform versions list --model $MODEL_NAME --format='value(name)' --region=$REGION | grep "^$VERSION_NAME$") ]]; then
echo "Deleting already existing $MODEL_NAME:$VERSION_NAME ... "
gcloud ai-platform versions delete --model=$MODEL_NAME $VERSION_NAME --region=$REGION
echo "Please run this cell again if you don't see a Creating message ... "
sleep 2
fi
# create model
echo "Creating $MODEL_NAME:$VERSION_NAME"
gcloud ai-platform versions create --model=$MODEL_NAME $VERSION_NAME --async \
--framework=tensorflow --python-version=3.5 --runtime-version=1.14 \
--origin=${EXPORT_PATH} --staging-bucket=gs://$BUCKET --region=$REGION
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Benchmarking
Step2: Parallelism
|
<ASSISTANT_TASK:>
Python Code:
import pescador
import numpy as np
np.set_printoptions(precision=4)
import sklearn
import sklearn.datasets
import sklearn.linear_model
import sklearn.metrics
import sklearn.model_selection
def batch_sampler(X, Y, batch_size=20, scale = 1e-1):
'''A gaussian noise generator for data
Parameters
----------
X : ndarray
features, n_samples by dimensions
Y : ndarray
labels, n_samples
batch_size : int
size of the minibatches to generate
scale : float > 0
scale of the noise to add
Generates
---------
data
An infinite stream of data dictionaries
batch = dict(X=X[i], Y=Y[i])
'''
X = np.atleast_2d(X)
Y = np.atleast_1d(Y)
n, d = X.shape
while True:
i = np.random.randint(0, n, size=batch_size)
noise = scale * np.random.randn(batch_size, d)
yield {'X': X[i] + noise, 'Y': Y[i]}
# Load up the iris dataset for the demo
data = sklearn.datasets.load_iris()
X, Y = data.data, data.target
classes = np.unique(Y)
# What does the data stream look like?
# First, we'll wrap the generator function in a Streamer object.
# This is necessary for a few reasons, notably so that we can re-instantiate
# the generator multiple times (eg once per epoch)
batches = pescador.Streamer(batch_sampler, X, Y)
for q in batches(max_iter=3):
print(q)
%%time
ss = sklearn.model_selection.ShuffleSplit(n_splits=2, test_size=0.2)
for train, test in ss.split(np.arange(len(X))):
# Make an SGD learner, nothing fancy here
classifier = sklearn.linear_model.SGDClassifier(verbose=0,
loss='log',
penalty='l1',
n_iter=1)
# Again, build a streamer object
batches = pescador.Streamer(batch_sampler, X[train], Y[train])
# And train the model on the stream.
n_steps = 0
for batch in batches(max_iter=5e3):
classifier.partial_fit(batch['X'], batch['Y'], classes=classes)
n_steps += 1
# How's it do on the test set?
print('Test-set accuracy: {:.3f}'.format(sklearn.metrics.accuracy_score(Y[test], classifier.predict(X[test]))))
print('# Steps: ', n_steps)
%%time
ss = sklearn.model_selection.ShuffleSplit(n_splits=2, test_size=0.2)
for train, test in ss.split(np.arange(len(X))):
# Make an SGD learner, nothing fancy here
classifier = sklearn.linear_model.SGDClassifier(verbose=0,
loss='log',
penalty='l1',
n_iter=1)
# First, turn the data_generator function into a Streamer object
batches = pescador.Streamer(batch_sampler, X[train], Y[train])
    # Then, run the streamer in a separate process that feeds batches back over ZeroMQ
zmq_stream = pescador.ZMQStreamer(batches, 5156)
# And train the model on the stream.
n_steps = 0
for batch in zmq_stream(max_iter=5e3):
classifier.partial_fit(batch['X'], batch['Y'], classes=classes)
n_steps += 1
# How's it do on the test set?
print('Test-set accuracy: {:.3f}'.format(sklearn.metrics.accuracy_score(Y[test], classifier.predict(X[test]))))
print('# Steps: ', n_steps)
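# A rough throughput check (not in the original notebook) helps decide whether the extra
# ZMQ hop pays off for a given sampler; the 2048-batch budget below is an arbitrary choice.
import time

def throughput(stream, n=2048):
    # consume n batches and report batches per second
    start = time.time()
    for _ in stream(max_iter=n):
        pass
    return n / (time.time() - start)

plain = pescador.Streamer(batch_sampler, X, Y)
print('plain streamer: {:.1f} batches/s'.format(throughput(plain)))
print('zmq streamer:   {:.1f} batches/s'.format(throughput(pescador.ZMQStreamer(plain, 5156))))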
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load & Read Datasets
Step2: Extracting features
Step3: Convert Occurrence to Frequency
Step4: In the above code, we first used the fit() method to fit our estimator and then the transform() method to transform our count-matrix to a tf-idf representation.
Step5: Train Classifier
Step6: Building a Pipeline
Step7: Let's use the stop-words filter in the CountVectorizer and see how it affects the classifier's accuracy. We see that this increases accuracy.
Step8: Classification Report (precision, recall, f1-score)
Step9: Confusion Matrix
Step10: Stochastic Gradient Descent (SGD) Classifier
Step11: Logistic Regression Classifier
Step12: OneVsOne Classifier
Step13: Create Submission
|
<ASSISTANT_TASK:>
Python Code:
import nltk
import pandas as pd
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression, SGDClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier
from sklearn.metrics import classification_report, confusion_matrix
train = pd.read_csv('train.tsv', delimiter='\t')
test = pd.read_csv('test.tsv', delimiter='\t')
train.shape, test.shape
train.head()
test.head()
# unique sentiment labels
train.Sentiment.unique()
train.info()
train.Sentiment.value_counts()
train.Sentiment.value_counts() / train.Sentiment.count()
X_train = train['Phrase']
y_train = train['Sentiment']
# Convert a collection of text documents to a matrix of token counts
count_vect = CountVectorizer()
# Fit followed by Transform
# Learn the vocabulary dictionary and return term-document matrix
X_train_counts = count_vect.fit_transform(X_train)
#X_train_count = X_train_count.toarray()
# 156060 rows of train data & 15240 features (one for each vocabulary word)
X_train_counts.shape
# get all words in the vocabulary
vocab = count_vect.get_feature_names()
print (vocab)
# get index of any word
count_vect.vocabulary_.get(u'100')
# Sum up the counts of each vocabulary word
dist = np.sum(X_train_counts, axis=0)
# print (dist) # matrix
dist = np.squeeze(np.asarray(dist))
print (dist) # array
zipped = sorted(zip(vocab, dist), key=lambda t: t[1], reverse=True) # sort words by highest number of occurrence
# For each, print the vocabulary word and the number of times it
# appears in the training set
for tag, count in zipped:
print (count, tag)
tf_transformer = TfidfTransformer(use_idf=False).fit(X_train_counts)
X_train_tf = tf_transformer.transform(X_train_counts)
# 156060 rows of train data & 15240 features (one for each vocabulary word)
X_train_tf.shape
# print some values of tf-idf transformed feature vector
print(X_train_tf[1:2])
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
X_train_tfidf.shape
clf = MultinomialNB().fit(X_train_tfidf, y_train)
predicted = clf.predict(X_train_tfidf)
np.mean(predicted == y_train)
text_clf = Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB()),
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_train)
np.mean(predicted == y_train)
text_clf = Pipeline([
('vect', CountVectorizer(stop_words='english')),
('tfidf', TfidfTransformer()),
('clf', MultinomialNB()),
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_train)
np.mean(predicted == y_train)
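# Note: the accuracies above are computed on the same data the pipeline was fit on, so they
# overstate real performance. A quick held-out check (a sketch; the 80/20 split is arbitrary):
from sklearn.model_selection import train_test_split
X_tr, X_val, y_tr, y_val = train_test_split(X_train, y_train, test_size=0.2, random_state=42)
text_clf.fit(X_tr, y_tr)
np.mean(text_clf.predict(X_val) == y_val)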
target_names = y_train.unique()
#np.array(map(str, target_names))
#np.char.mod('%d', target_names)
target_names = ['0', '1', '2', '3', '4']
print (classification_report(
y_train, \
predicted, \
target_names = target_names
))
print (confusion_matrix(y_train, predicted))
text_clf = Pipeline([
('vect', CountVectorizer(stop_words='english')),
('tfidf', TfidfTransformer()),
('clf', SGDClassifier(loss='modified_huber', shuffle=True, penalty='l2', alpha=1e-3, random_state=42, max_iter=5, tol=None)),
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_train)
np.mean(predicted == y_train)
text_clf = Pipeline([
('vect', CountVectorizer(stop_words='english', max_features=5000)),
('tfidf', TfidfTransformer()),
('clf', LogisticRegression())
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_train)
np.mean(predicted == y_train)
text_clf = Pipeline([
('vect', CountVectorizer(stop_words='english', max_features=5000)),
('tfidf', TfidfTransformer()),
('clf', OneVsOneClassifier(LinearSVC()))
])
text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_train)
np.mean(predicted == y_train)
test.info()
X_test = test['Phrase']
phraseIds = test['PhraseId']
predicted = text_clf.predict(X_test)
output = pd.DataFrame( data={"PhraseId":phraseIds, "Sentiment":predicted} )
#output.to_csv( "submission.csv", index=False, quoting=3 )
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Rise and Fall of the US Employment-Population Ratio
Step2: Source
Step3: Source
Step4: Source
|
<ASSISTANT_TASK:>
Python Code:
"""
Creates a figure using FRED data
Uses pandas Remote Data Access API
Documentation can be found at http://pandas.pydata.org/pandas-docs/stable/remote_data.html
"""
%matplotlib inline
import pandas as pd
import pandas.io.data as web
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
from dateutil.relativedelta import relativedelta
start, end = dt.datetime(1989, 1, 1), dt.datetime(2015, 6, 1) # Set the date range of the data
data = web.DataReader(['EMRATIO', 'UNRATE', 'USREC'],'fred', start, end) # Choose data series you wish to download
data.columns = ['Empl Pop Ratio', 'Unemployment Rate', 'Recession']
plt.figure(figsize=plt.figaspect(0.5))
data['Empl Pop Ratio'].plot()
plt.xlabel('')
plt.text(dt.datetime(1990, 1, 1), 64.25, 'Employment-', fontsize=11, weight='bold')
plt.text(dt.datetime(1990, 1, 1), 63.75, 'Population Ratio', fontsize=11, weight='bold')
data['Unemployment Rate'].plot(secondary_y=True, color = 'r')
plt.text(dt.datetime(1990, 1, 1), 4, 'Unemployment Rate', fontsize=11, weight='bold')
def get_recession_months():
rec_dates = data['Recession']
one_vals = np.where(rec_dates == 1)
rec_startind = rec_dates.index[one_vals]
return rec_startind
def shade_recession(dates):
for date in dates:
plt.axvspan(date, date+relativedelta(months=+1), color='gray', alpha=0.1, lw=0)
shade_recession(get_recession_months())
plt.suptitle('Figure 1. Employment-Population Ratio and Unemployment, 1989-2015', fontsize=12, weight='bold')
plt.show()
start, end = dt.datetime(1976, 1, 1), dt.datetime(2015, 3, 1)
data = web.DataReader(['CIVPART', 'USREC'], 'fred', start, end)
data.columns = ['LFPR', 'Recession']
plt.figure(figsize=plt.figaspect(0.5))
data['LFPR'].plot(color = 'k')
plt.xlabel('')
shade_recession(get_recession_months())
plt.suptitle('Figure 2. Labor Force Participation Rate, 1976-2015', fontsize=12, fontweight='bold')
plt.show()
#file = '/Users/davidcai/lfpr.csv'
file = 'https://raw.githubusercontent.com/DaveBackus/Data_Bootcamp/master/Code/Projects/lfpr.csv'
df = pd.read_csv(file, index_col=0)
start, end = dt.datetime(1980, 1, 1), dt.datetime(2010, 1, 1)
data = web.DataReader('USREC', 'fred', start, end)
data.columns=['Recession']
# Take simple averages of the ratios for men and women
df["Age 62"] = df[["M62-64", "W62-64"]].mean(axis=1)
df["Age 65"] = df[["M65-69", "W65-69"]].mean(axis=1)
df["Age 70"] = df[["M70-74", "W70-74"]].mean(axis=1)
df["Age 75"] = df[["M75-79", "W75-79"]].mean(axis=1)
# Convert years into datetime series
df.index = df.index.astype(str) + "-1-1"
df.index = pd.to_datetime(df.index)
plt.figure(figsize=(plt.figaspect(0.5)))
df["Age 62"].plot()
df["Age 65"].plot()
df["Age 70"].plot()
df["Age 75"].plot()
plt.text(dt.datetime(2007, 1, 1), 42, 'Age 62', fontsize=11, weight='bold')
plt.text(dt.datetime(2007, 1, 1), 25, 'Age 65', fontsize=11, weight='bold')
plt.text(dt.datetime(2007, 1, 1), 15, 'Age 70', fontsize=11, weight='bold')
plt.text(dt.datetime(2007, 1, 1), 6, 'Age 75', fontsize=11, weight='bold')
shade_recession(get_recession_months())
plt.suptitle('Figure 3. Labor Force Participation Rates, By Age, 1980-2010', fontsize=12, fontweight='bold')
plt.show()
start, end = dt.datetime(1970, 1, 1), dt.datetime(2015, 3, 1)
data = web.DataReader(['LNS12300001', 'EMRATIO','LNS12300002', 'USREC'], 'fred', start, end)
data.columns=['Men', 'Overall', 'Women', 'Recession']
plt.figure(figsize=plt.figaspect(0.5))
data["Men"].plot()
data["Overall"].plot()
data["Women"].plot()
plt.xlabel('')
plt.text(dt.datetime(1971, 1, 1), 71, 'Men', fontsize=11, weight='bold')
plt.text(dt.datetime(1971, 1, 1), 52, 'Overall', fontsize=11, weight='bold')
plt.text(dt.datetime(1971, 1, 1), 37, 'Women', fontsize=11, weight='bold')
shade_recession(get_recession_months())
plt.suptitle('Figure 4. Employment Population Ratios, Overall and by Sex, 1970-2015', fontsize=12, fontweight='bold')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Optimizer -
Step2: Initializing an Optimizer
Step3: cbs is a list of functions that will be composed when applying the step. For instance, you can compose a function making the SGD step, with another one applying weight decay. Additionally, each cb can have a defaults attribute that contains hyper-parameters and their default value. Those are all gathered at initialization, and new values can be passed to override those defaults with the defaults kwargs. The steppers will be called by Optimizer.step (which is the standard PyTorch name), and gradients can be cleared with Optimizer.zero_grad (also a standard PyTorch name).
Step4: For each hyper-parameter, you can pass a slice or a collection to set them, if there are multiple parameter groups. A slice will be converted to a log-uniform collection from its beginning to its end, or if it only has an end e, to a collection of as many values as there are parameter groups that are ...,e/10,e/10,e.
Step5: Basic steppers
Step6: Warning
Step7: This method will loop over all param groups, then all parameters for which grad is not None and call each function in stepper, passing it the parameter p with the hyper-parameters in the corresponding dict in hypers.
Step8: Some of the Optimizer cbs can be functions updating the state associated with a parameter. That state can then be used by any stepper. The best example is a momentum calculation.
Step9: Statistics
Step10: dampening=False gives the classical formula for momentum in SGD
Step11: dampening=False gives the classical formula for momentum in SGD
Step12: Freezing part of the model
Step13: Parameters such as batchnorm weights/bias can be marked to always be in training mode, just put force_train=True in their state.
Step14: Serializing
Step15: Optimizers
Step16: Optional weight decay of wd is applied, as true weight decay (decay the weights directly) if decouple_wd=True else as L2 regularization (add the decay to the gradients).
Step17: Test weight decay, notice how we can see that L2 regularization is different from weight decay even for simple SGD with momentum.
Step18: RMSProp
Step19: RMSProp was introduced by Geoffrey Hinton in his course. What is named sqr_mom here is the alpha in the course. Optional weight decay of wd is applied, as true weight decay (decay the weights directly) if decouple_wd=True else as L2 regularization (add the decay to the gradients).
Step20: Adam
Step21: Adam was introduced by Diederik P. Kingma and Jimmy Ba in Adam
Step22: RAdam
Step23: This is the effective correction reported to the adam step for 500 iterations in RAdam. We can see how it goes from 0 to 1, mimicking the effect of a warm-up.
Step24: QHAdam
Step25: LARS/LARC
Step26: The LARS optimizer was first introduced in Large Batch Training of Convolutional Networks then refined in its LARC variant (original LARS is with clip=False). A learning rate is computed for each individual layer with a certain trust_coefficient, then clipped to be always less than lr.
Step27: LAMB
Step28: LAMB was introduced in Large Batch Optimization for Deep Learning
Step29: Lookahead -
Step30: OptimWrapper -
Step31: To use an existing PyTorch optimizer, you can define an optimizer function like this
Step32: Or if you already have an existing one, pass in only opt
Step33: Export -
|
<ASSISTANT_TASK:>
Python Code:
#|export
class _BaseOptimizer():
"Common functionality between `Optimizer` and `OptimWrapper`"
def all_params(self,
n:(slice, int)=slice(None), # Extended slicing over the optimizer `param_lists`
with_grad:bool=False # Get all param tuples. If `True` select only those with a gradient
):
res = L((p,pg,self.state[p],hyper) for pg,hyper in zip(self.param_lists[n],self.hypers[n]) for p in pg)
return L(o for o in res if hasattr(o[0], 'grad') and o[0].grad is not None) if with_grad else res
def _set_require_grad(self,
rg:bool, # Requires grad: if `True` sets gradient for parameters, else uses state `state["force_train"]`
p:Tensor, # Parameters to set gradient
pg, # Param groups (unused but needed because unpack *o)
state: dict,
h # Hyperparameter (unused but needed because unpack *o)
):
p.requires_grad_(rg or state.get('force_train', False))
def freeze_to(self,
n:int # Freeze up to `n` layers
):
self.frozen_idx = n if n >= 0 else len(self.param_lists) + n
if self.frozen_idx >= len(self.param_lists):
warn(f"Freezing {self.frozen_idx} groups; model has {len(self.param_lists)}; whole model is frozen.")
for o in self.all_params(slice(n, None)): self._set_require_grad(True, *o)
for o in self.all_params(slice(None, n)): self._set_require_grad(False, *o)
def freeze(self):
assert(len(self.param_lists)>1)
self.freeze_to(-1)
def set_freeze(self,
n:int,
rg:bool, # Whether grad is required
ignore_force_train=False # Overwrites "force_train" or batch norm always trains even if frozen
):
for p in self.param_lists[n]: p.requires_grad_(rg or (state.get('force_train', False) and not ignore_force_train))
def set_hypers(self, **kwargs): L(kwargs.items()).starmap(self.set_hyper)
def _set_hyper(self,
k, # Hyperparameter key
v # Hyperparameter value
):
for v_,h in zip(v, self.hypers): h[k] = v_
def set_hyper(self,
k, # Hyperparameter key or slice of keys
v # Hyperparameter value or slice of values
):
if isinstance(v, slice):
if v.start: v = even_mults(v.start, v.stop, len(self.param_lists))
else: v = [v.stop/10]*(len(self.param_lists)-1) + [v.stop]
v = L(v, use_list=None)
if len(v)==1: v = v*len(self.param_lists)
assert len(v) == len(self.hypers), f"Trying to set {len(v)} values for {k} but there are {len(self.param_lists)} parameter groups."
self._set_hyper(k, v)
def unfreeze(self): self.freeze_to(0)
@property
def param_groups(self): return [{**{'params': pg}, **hp} for pg,hp in zip(self.param_lists, self.hypers)]
@param_groups.setter
def param_groups(self,
v:dict # List of dicts to set `params` and other hyper parameters
):
for pg,v_ in zip(self.param_lists,v): pg = v_['params']
for hyper,v_ in zip(self.hypers,v):
for k,t in v_.items():
if k != 'params': hyper[k] = t
add_docs(_BaseOptimizer,
all_params="List of param_groups, parameters, and hypers",
freeze_to="Freeze parameter groups up to `n`",
freeze="Freeze up to last parameter group",
set_freeze="Set `rg` for parameter group `n` only",
unfreeze="Unfreeze the entire model",
set_hypers="`set_hyper` for all `kwargs`",
set_hyper="Set the value(s) in `v` for hyper-parameter `k`")
#|export
def _update(
state:dict,
new=None # New values to update `state` dict
):
if new is None: return state
if isinstance(new, dict): state.update(new)
return state
#|export
class Optimizer(_BaseOptimizer):
"Base optimizer class for the fastai library, updating `params` with `cbs`"
_keep_on_clear = ['force_train', 'do_wd']
def __init__(self,
params:Tensor, # Parameters and hyper parameters
cbs:list, # `Optimizer` callbacks
train_bn:bool=True, # Batch normalization is always trained
**defaults # Default values to set on hyper parameters
):
params = L(params)
self.cbs,self.state,self.train_bn = L(cbs),defaultdict(dict),train_bn
defaults = merge(*self.cbs.attrgot('defaults'), defaults)
self.param_lists = L(L(p) for p in params) if isinstance(params[0], (L,list)) else L([params])
self.hypers = L({} for _ in range_of(self.param_lists))
self.set_hypers(**defaults)
self.frozen_idx = 0
def zero_grad(self):
for p,*_ in self.all_params(with_grad=True):
p.grad.detach_()
p.grad.zero_()
def step(self, closure=None):
if closure is not None: raise NotImplementedError("fastai optimizers currently do not support closure")
for p,pg,state,hyper in self.all_params(with_grad=True):
for cb in self.cbs: state = _update(state, cb(p, **{**state, **hyper}))
self.state[p] = state
def clear_state(self):
for p,pg,state,hyper in self.all_params():
self.state[p] = {k: state[k] for k in self._keep_on_clear if k in state}
def state_dict(self):
state = [self.state[p] for p,*_ in self.all_params()]
return {'state': state, 'hypers': self.hypers}
def load_state_dict(self,
sd:dict # State dict with `hypers` and `state` to load on the optimizer
):
assert len(sd["hypers"]) == len(self.param_lists)
assert len(sd["state"]) == sum([len(pg) for pg in self.param_lists])
self.hypers = sd['hypers']
self.state = {p: s for p,s in zip(self.all_params().itemgot(0), sd['state'])}
add_docs(Optimizer,
zero_grad="Standard PyTorch API: Zero all the grad attributes of the parameters",
step="Standard PyTorch API: Update the stats and execute the steppers in on all parameters that have a grad",
state_dict="Return the state of the optimizer in a dictionary",
load_state_dict="Load the content of `sd`",
clear_state="Reset the state of the optimizer")
opt = Optimizer([1,2,3], noop)
test_eq(opt.param_lists, [[1,2,3]])
opt = Optimizer(range(3), noop)
test_eq(opt.param_lists, [[0,1,2]])
opt = Optimizer([[1,2],[3]], noop)
test_eq(opt.param_lists, [[1,2],[3]])
opt = Optimizer(([o,o+1] for o in range(0,4,2)), noop)
test_eq(opt.param_lists, [[0,1],[2,3]])
def tst_arg(p, lr=0, **kwargs): return p
tst_arg.defaults = dict(lr=1e-2)
def tst_arg2(p, lr2=0, **kwargs): return p
tst_arg2.defaults = dict(lr2=1e-3)
def tst_arg3(p, mom=0, **kwargs): return p
tst_arg3.defaults = dict(mom=0.9)
def tst_arg4(p, **kwargs): return p
opt = Optimizer([1,2,3], [tst_arg,tst_arg2, tst_arg3])
test_eq(opt.hypers, [{'lr2': 1e-3, 'mom': 0.9, 'lr': 1e-2}])
opt = Optimizer([1,2,3], tst_arg, lr=0.1)
test_eq(opt.hypers, [{'lr': 0.1}])
opt = Optimizer([[1,2],[3]], tst_arg)
test_eq(opt.hypers, [{'lr': 1e-2}, {'lr': 1e-2}])
opt = Optimizer([[1,2],[3]], tst_arg, lr=0.1)
test_eq(opt.hypers, [{'lr': 0.1}, {'lr': 0.1}])
opt = Optimizer([[1,2],[3]], tst_arg, lr=[0.1,0.2])
test_eq(opt.hypers, [{'lr': 0.1}, {'lr': 0.2}])
opt = Optimizer([[1,2],[3],[4]], tst_arg, lr=slice(1e-2))
test_eq(opt.hypers, [{'lr': 1e-3}, {'lr': 1e-3}, {'lr': 1e-2}])
opt = Optimizer([[1,2],[3],[4]], tst_arg, lr=slice(1e-4,1e-2))
test_eq(opt.hypers, [{'lr': 1e-4}, {'lr': 1e-3}, {'lr': 1e-2}])
test_eq(opt.param_groups, [{'params': [1,2], 'lr': 1e-4}, {'params': [3], 'lr': 1e-3}, {'params': [4], 'lr': 1e-2}])
test_fail(lambda: Optimizer([[1,2],[3],[4]], tst_arg, lr=np.array([0.1,0.2])))
#|export
def sgd_step(p, lr, **kwargs):
p.data.add_(p.grad.data, alpha=-lr)
def tst_param(val, grad=None):
"Create a tensor with `val` and a gradient of `grad` for testing"
res = tensor([val]).float()
res.grad = tensor([val/10 if grad is None else grad]).float()
return res
p = tst_param(1., 0.1)
sgd_step(p, 1.)
test_eq(p, tensor([0.9]))
test_eq(p.grad, tensor([0.1]))
#|export
def weight_decay(p, lr, wd, do_wd=True, **kwargs):
"Weight decay as decaying `p` with `lr*wd`"
if do_wd and wd!=0: p.data.mul_(1 - lr*wd)
weight_decay.defaults = dict(wd=0.)
p = tst_param(1., 0.1)
weight_decay(p, 1., 0.1)
test_eq(p, tensor([0.9]))
test_eq(p.grad, tensor([0.1]))
#|export
def l2_reg(p, lr, wd, do_wd=True, **kwargs):
"L2 regularization as adding `wd*p` to `p.grad`"
if do_wd and wd!=0: p.grad.data.add_(p.data, alpha=wd)
l2_reg.defaults = dict(wd=0.)
p = tst_param(1., 0.1)
l2_reg(p, 1., 0.1)
test_eq(p, tensor([1.]))
test_eq(p.grad, tensor([0.2]))
show_doc(Optimizer.step)
#test basic step
r = L.range(4)
def tst_params(): return r.map(tst_param)
params = tst_params()
opt = Optimizer(params, sgd_step, lr=0.1)
opt.step()
test_close([p.item() for p in params], r.map(mul(0.99)))
#test two steps
params = tst_params()
opt = Optimizer(params, [weight_decay, sgd_step], lr=0.1, wd=0.1)
opt.step()
test_close([p.item() for p in params], r.map(mul(0.98)))
#test None gradients are ignored
params = tst_params()
opt = Optimizer(params, sgd_step, lr=0.1)
params[-1].grad = None
opt.step()
test_close([p.item() for p in params], [0., 0.99, 1.98, 3.])
#test discriminative lrs
params = tst_params()
opt = Optimizer([params[:2], params[2:]], sgd_step, lr=0.1)
opt.hypers[0]['lr'] = 0.01
opt.step()
test_close([p.item() for p in params], [0., 0.999, 1.98, 2.97])
show_doc(Optimizer.zero_grad)
params = tst_params()
opt = Optimizer(params, [weight_decay, sgd_step], lr=0.1, wd=0.1)
opt.zero_grad()
[test_eq(p.grad, tensor([0.])) for p in params];
def tst_stat(p, **kwargs):
s = kwargs.get('sum', torch.zeros_like(p)) + p.data
return {'sum': s}
tst_stat.defaults = {'mom': 0.9}
#Test Optimizer init
opt = Optimizer([1,2,3], tst_stat)
test_eq(opt.hypers, [{'mom': 0.9}])
opt = Optimizer([1,2,3], tst_stat, mom=0.99)
test_eq(opt.hypers, [{'mom': 0.99}])
#Test stat
x = torch.randn(4,5)
state = tst_stat(x)
assert 'sum' in state
test_eq(x, state['sum'])
state = tst_stat(x, **state)
test_eq(state['sum'], 2*x)
#|export
def average_grad(p, mom, dampening=False, grad_avg=None, **kwargs):
"Keeps track of the avg grads of `p` in `state` with `mom`."
if grad_avg is None: grad_avg = torch.zeros_like(p.grad.data)
damp = 1-mom if dampening else 1.
grad_avg.mul_(mom).add_(p.grad.data, alpha=damp)
return {'grad_avg': grad_avg}
average_grad.defaults = dict(mom=0.9)
p = tst_param([1,2,3], [4,5,6])
state = {}
state = average_grad(p, mom=0.9, **state)
test_eq(state['grad_avg'], p.grad)
state = average_grad(p, mom=0.9, **state)
test_eq(state['grad_avg'], p.grad * 1.9)
#Test dampening
state = {}
state = average_grad(p, mom=0.9, dampening=True, **state)
test_eq(state['grad_avg'], 0.1*p.grad)
state = average_grad(p, mom=0.9, dampening=True, **state)
test_close(state['grad_avg'], (0.1*0.9+0.1)*p.grad)
#|export
def average_sqr_grad(p, sqr_mom, dampening=True, sqr_avg=None, **kwargs):
if sqr_avg is None: sqr_avg = torch.zeros_like(p.grad.data)
damp = 1-sqr_mom if dampening else 1.
sqr_avg.mul_(sqr_mom).addcmul_(p.grad.data, p.grad.data, value=damp)
return {'sqr_avg': sqr_avg}
average_sqr_grad.defaults = dict(sqr_mom=0.99)
p = tst_param([1,2,3], [4,5,6])
state = {}
state = average_sqr_grad(p, sqr_mom=0.99, dampening=False, **state)
test_eq(state['sqr_avg'], p.grad.pow(2))
state = average_sqr_grad(p, sqr_mom=0.99, dampening=False, **state)
test_eq(state['sqr_avg'], p.grad.pow(2) * 1.99)
#Test dampening
state = {}
state = average_sqr_grad(p, sqr_mom=0.99, **state)
test_close(state['sqr_avg'], 0.01*p.grad.pow(2))
state = average_sqr_grad(p, sqr_mom=0.99, **state)
test_close(state['sqr_avg'], (0.01*0.99+0.01)*p.grad.pow(2))
show_doc(Optimizer.freeze, name="Optimizer.freeze")
show_doc(Optimizer.freeze_to, name="Optimizer.freeze_to")
show_doc(Optimizer.unfreeze, name="Optimizer.unfreeze")
#Freezing the first layer
params = [tst_params(), tst_params(), tst_params()]
opt = Optimizer(params, sgd_step, lr=0.1)
opt.freeze_to(1)
req_grad = Self.requires_grad()
test_eq(L(params[0]).map(req_grad), [False]*4)
for i in {1,2}: test_eq(L(params[i]).map(req_grad), [True]*4)
#Unfreezing
opt.unfreeze()
for i in range(2): test_eq(L(params[i]).map(req_grad), [True]*4)
#TODO: test warning
# opt.freeze_to(3)
params = [tst_params(), tst_params(), tst_params()]
opt = Optimizer(params, sgd_step, lr=0.1)
for p in L(params[1])[[1,3]]: opt.state[p] = {'force_train': True}
opt.freeze()
test_eq(L(params[0]).map(req_grad), [False]*4)
test_eq(L(params[1]).map(req_grad), [False, True, False, True])
test_eq(L(params[2]).map(req_grad), [True]*4)
show_doc(Optimizer.state_dict)
show_doc(Optimizer.load_state_dict)
p = tst_param([1,2,3], [4,5,6])
opt = Optimizer(p, average_grad)
opt.step()
test_eq(opt.state[p]['grad_avg'], tensor([[4., 5., 6.]]))
sd = opt.state_dict()
p1 = tst_param([10,20,30], [40,50,60])
opt = Optimizer(p1, average_grad, mom=0.99)
test_eq(opt.hypers[0]['mom'], 0.99)
test_eq(opt.state, {})
opt.load_state_dict(sd)
test_eq(opt.hypers[0]['mom'], 0.9)
test_eq(opt.state[p1]['grad_avg'], tensor([[4., 5., 6.]]))
show_doc(Optimizer.clear_state)
p = tst_param([1,2,3], [4,5,6])
opt = Optimizer(p, average_grad)
opt.state[p] = {'force_train': True}
opt.step()
test_eq(opt.state[p]['grad_avg'], tensor([[4., 5., 6.]]))
opt.clear_state()
test_eq(opt.state[p], {'force_train': True})
#|export
def momentum_step(p, lr, grad_avg, **kwargs):
"Step for SGD with momentum with `lr`"
p.data.add_(grad_avg, alpha=-lr)
#|export
def SGD(params, lr, mom=0., wd=0., decouple_wd=True):
"A `Optimizer` for SGD with `lr` and `mom` and `params`"
cbs = [weight_decay] if decouple_wd else [l2_reg]
if mom != 0: cbs.append(average_grad)
cbs.append(sgd_step if mom==0 else momentum_step)
return Optimizer(params, cbs, lr=lr, mom=mom, wd=wd)
#Vanilla SGD
params = tst_params()
opt = SGD(params, lr=0.1)
opt.step()
test_close([p.item() for p in params], [i*0.99 for i in range(4)])
opt.step()
[p.item() for p in params]
test_close([p.item() for p in params], [i*0.98 for i in range(4)])
#SGD with momentum
params = tst_params()
opt = SGD(params, lr=0.1, mom=0.9)
assert isinstance(opt, Optimizer)
opt.step()
test_close([p.item() for p in params], [i*0.99 for i in range(4)])
opt.step()
[p.item() for p in params]
test_close([p.item() for p in params], [i*(1 - 0.1 * (0.1 + 0.1*1.9)) for i in range(4)])
for i,p in enumerate(params): test_close(opt.state[p]['grad_avg'].item(), i*0.19)
params = tst_params()
#Weight decay
opt = SGD(params, lr=0.1, mom=0.9, wd=0.1)
opt.step()
test_close([p.item() for p in params], [i*0.98 for i in range(4)])
#L2 reg
opt = SGD(params, lr=0.1, mom=0.9, wd=0.1, decouple_wd=False)
opt.step()
#TODO: fix cause this formula was wrong
#test_close([p.item() for p in params], [i*0.97 for i in range(4)])
#|export
def rms_prop_step(p, lr, sqr_avg, eps, grad_avg=None, **kwargs):
"Step for SGD with momentum with `lr`"
denom = sqr_avg.sqrt().add_(eps)
p.data.addcdiv_((grad_avg if grad_avg is not None else p.grad), denom, value=-lr)
rms_prop_step.defaults = dict(eps=1e-8)
#|export
def RMSProp(params, lr, sqr_mom=0.99, mom=0., wd=0., decouple_wd=True):
"A `Optimizer` for RMSProp with `lr`, `sqr_mom`, `mom` and `params`"
cbs = [weight_decay] if decouple_wd else [l2_reg]
cbs += ([average_sqr_grad] if mom==0. else [average_grad, average_sqr_grad])
cbs.append(rms_prop_step)
return Optimizer(params, cbs, lr=lr, mom=mom, sqr_mom=sqr_mom, wd=wd)
#Without momentum
params = tst_param([1,2,3], [0.1,0.2,0.3])
opt = RMSProp(params, lr=0.1)
opt.step()
test_close(params[0], tensor([0.,1.,2.]))
opt.step()
step = - 0.1 * 0.1 / (math.sqrt((0.01*0.99+0.01) * 0.1**2) + 1e-8)
test_close(params[0], tensor([step, 1+step, 2+step]))
#With momentum
params = tst_param([1,2,3], [0.1,0.2,0.3])
opt = RMSProp(params, lr=0.1, mom=0.9)
opt.step()
test_close(params[0], tensor([0.,1.,2.]))
opt.step()
step = - 0.1 * (0.1 + 0.9*0.1) / (math.sqrt((0.01*0.99+0.01) * 0.1**2) + 1e-8)
test_close(params[0], tensor([step, 1+step, 2+step]))
#|export
def step_stat(p, step=0, **kwargs):
"Register the number of steps done in `state` for `p`"
step += 1
return {'step' : step}
p = tst_param(1,0.1)
state = {}
state = step_stat(p, **state)
test_eq(state['step'], 1)
for _ in range(5): state = step_stat(p, **state)
test_eq(state['step'], 6)
#|export
def debias(mom, damp, step): return damp * (1 - mom**step) / (1-mom)
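# Quick check (not in the original notebook): with damp = 1-mom, as `adam_step` below uses it,
# `debias` reduces to the standard Adam bias-correction term 1 - mom**step.
test_close(debias(0.9, 1-0.9, 1), 1 - 0.9**1)
test_close(debias(0.9, 1-0.9, 10), 1 - 0.9**10)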
#|export
def adam_step(p, lr, mom, step, sqr_mom, grad_avg, sqr_avg, eps, **kwargs):
"Step for Adam with `lr` on `p`"
debias1 = debias(mom, 1-mom, step)
debias2 = debias(sqr_mom, 1-sqr_mom, step)
p.data.addcdiv_(grad_avg, (sqr_avg/debias2).sqrt() + eps, value = -lr / debias1)
return p
adam_step._defaults = dict(eps=1e-5)
#|export
def Adam(params, lr, mom=0.9, sqr_mom=0.99, eps=1e-5, wd=0.01, decouple_wd=True):
"A `Optimizer` for Adam with `lr`, `mom`, `sqr_mom`, `eps` and `params`"
cbs = [weight_decay] if decouple_wd else [l2_reg]
cbs += [partial(average_grad, dampening=True), average_sqr_grad, step_stat, adam_step]
return Optimizer(params, cbs, lr=lr, mom=mom, sqr_mom=sqr_mom, eps=eps, wd=wd)
params = tst_param([1,2,3], [0.1,0.2,0.3])
opt = Adam(params, lr=0.1, wd=0)
opt.step()
step = -0.1 * 0.1 / (math.sqrt(0.1**2) + 1e-8)
test_close(params[0], tensor([1+step, 2+step, 3+step]))
opt.step()
test_close(params[0], tensor([1+2*step, 2+2*step, 3+2*step]), eps=1e-3)
#|export
def radam_step(p, lr, mom, step, sqr_mom, grad_avg, sqr_avg, eps, beta, **kwargs):
"Step for RAdam with `lr` on `p`"
debias1 = debias(mom, 1-mom, step)
debias2 = debias(sqr_mom, 1-sqr_mom, step)
r_inf = 2/(1-sqr_mom) - 1
r = r_inf - 2*step*sqr_mom**step/(1-sqr_mom**step)
if r > 5:
v = math.sqrt(((r-4) * (r-2) * r_inf)/((r_inf-4)*(r_inf-2)*r))
denom = (sqr_avg/debias2).sqrt()
if eps: denom += eps
if beta: denom = F.softplus(denom, beta)
p.data.addcdiv_(grad_avg, denom, value = -lr*v / debias1)
else: p.data.add_(grad_avg, alpha=-lr / debias1)
return p
radam_step._defaults = dict(eps=1e-5)
#|export
def RAdam(params, lr, mom=0.9, sqr_mom=0.99, eps=1e-5, wd=0., beta=0., decouple_wd=True):
"A `Optimizer` for Adam with `lr`, `mom`, `sqr_mom`, `eps` and `params`"
cbs = [weight_decay] if decouple_wd else [l2_reg]
cbs += [partial(average_grad, dampening=True), average_sqr_grad, step_stat, radam_step]
return Optimizer(params, cbs, lr=lr, mom=mom, sqr_mom=sqr_mom, eps=eps, wd=wd, beta=beta)
beta = 0.99
r_inf = 2/(1-beta) - 1
rs = np.array([r_inf - 2*s*beta**s/(1-beta**s) for s in range(5,500)])
v = np.sqrt(((rs-4) * (rs-2) * r_inf)/((r_inf-4)*(r_inf-2)*rs))
plt.plot(v);
params = tst_param([1,2,3], [0.1,0.2,0.3])
opt = RAdam(params, lr=0.1)
#The r factor is lower than 5 during the first 5 steps so updates use the average of gradients (all the same)
r_inf = 2/(1-0.99) - 1
for i in range(5):
r = r_inf - 2*(i+1)*0.99**(i+1)/(1-0.99**(i+1))
assert r <= 5
opt.step()
p = tensor([0.95, 1.9, 2.85])
test_close(params[0], p)
#The r factor is greater than 5 for the sixth step so we update with RAdam
r = r_inf - 2*6*0.99**6/(1-0.99**6)
assert r > 5
opt.step()
v = math.sqrt(((r-4) * (r-2) * r_inf)/((r_inf-4)*(r_inf-2)*r))
step = -0.1*0.1*v/(math.sqrt(0.1**2) + 1e-8)
test_close(params[0], p+step)
#|export
def qhadam_step(p, lr, mom, sqr_mom, sqr_avg, nu_1, nu_2, step, grad_avg, eps, **kwargs):
debias1 = debias(mom, 1-mom, step)
debias2 = debias(sqr_mom, 1-sqr_mom, step)
p.data.addcdiv_(((1-nu_1) * p.grad.data) + (nu_1 * (grad_avg / debias1)),
(((1 - nu_2) * (p.grad.data)**2) + (nu_2 * (sqr_avg / debias2))).sqrt() + eps,
value = -lr)
return p
qhadam_step._defaults = dict(eps=1e-8)
#|export
def QHAdam(params, lr, mom=0.999, sqr_mom=0.999, nu_1=0.7, nu_2 = 1.0, eps=1e-8, wd=0., decouple_wd=True):
"An `Optimizer` for Adam with `lr`, `mom`, `sqr_mom`, `nus`, eps` and `params`"
cbs = [weight_decay] if decouple_wd else [l2_reg]
cbs += [partial(average_grad, dampening=True), partial(average_sqr_grad, dampening=True), step_stat, qhadam_step]
return Optimizer(params, cbs, lr=lr, nu_1=nu_1, nu_2=nu_2 ,
mom=mom, sqr_mom=sqr_mom, eps=eps, wd=wd)
params = tst_param([1,2,3], [0.1,0.2,0.3])
opt = QHAdam(params, lr=0.1)
opt.step()
step = -0.1 * (((1-0.7) * 0.1) + (0.7 * 0.1)) / (
math.sqrt(((1-1.0) * 0.1**2) + (1.0 * 0.1**2)) + 1e-8)
test_close(params[0], tensor([1+step, 2+step, 3+step]))
opt.step()
test_close(params[0], tensor([1+2*step, 2+2*step, 3+2*step]), eps=1e-3)
#|export
def larc_layer_lr(p, lr, trust_coeff, wd, eps, clip=True, **kwargs):
"Computes the local lr before weight decay is applied"
p_norm,g_norm = torch.norm(p.data),torch.norm(p.grad.data)
local_lr = lr*trust_coeff * (p_norm) / (g_norm + p_norm * wd + eps)
return {'local_lr': min(lr, local_lr) if clip else local_lr}
larc_layer_lr.defaults = dict(trust_coeff=0.02, wd=0., eps=1e-8)
#|export
def larc_step(p, local_lr, grad_avg=None, **kwargs):
"Step for LARC `local_lr` on `p`"
p.data.add_(p.grad.data if grad_avg is None else grad_avg, alpha = -local_lr)
#|export
def Larc(params, lr, mom=0.9, clip=True, trust_coeff=0.02, eps=1e-8, wd=0., decouple_wd=True):
"A `Optimizer` for Adam with `lr`, `mom`, `sqr_mom`, `eps` and `params`"
cbs = [weight_decay] if decouple_wd else [l2_reg]
if mom!=0.: cbs.append(average_grad)
cbs += [partial(larc_layer_lr, clip=clip), larc_step]
return Optimizer(params, cbs, lr=lr, mom=mom, trust_coeff=trust_coeff, eps=eps, wd=wd)
params = [tst_param([1,2,3], [0.1,0.2,0.3]), tst_param([1,2,3], [0.01,0.02,0.03])]
opt = Larc(params, lr=0.1)
opt.step()
#First param local lr is 0.02 < lr so it's not clipped
test_close(opt.state[params[0]]['local_lr'], 0.02)
#Second param local lr is 0.2 > lr so it's clipped
test_eq(opt.state[params[1]]['local_lr'], 0.1)
test_close(params[0], tensor([0.998,1.996,2.994]))
test_close(params[1], tensor([0.999,1.998,2.997]))
params = [tst_param([1,2,3], [0.1,0.2,0.3]), tst_param([1,2,3], [0.01,0.02,0.03])]
opt = Larc(params, lr=0.1, clip=False)
opt.step()
#No clipping
test_close(opt.state[params[0]]['local_lr'], 0.02)
test_close(opt.state[params[1]]['local_lr'], 0.2)
test_close(params[0], tensor([0.998,1.996,2.994]))
test_close(params[1], tensor([0.998,1.996,2.994]))
#|export
def lamb_step(p, lr, mom, step, sqr_mom, grad_avg, sqr_avg, eps, **kwargs):
"Step for LAMB with `lr` on `p`"
debias1 = debias(mom, 1-mom, step)
debias2 = debias(sqr_mom, 1-sqr_mom, step)
r1 = p.data.pow(2).mean().sqrt()
step = (grad_avg/debias1) / ((sqr_avg/debias2).sqrt()+eps)
r2 = step.pow(2).mean().sqrt()
q = 1 if r1 == 0 or r2 == 0 else min(r1/r2,10)
p.data.add_(step, alpha = -lr * q)
lamb_step._defaults = dict(eps=1e-6, wd=0.)
#|export
def Lamb(params, lr, mom=0.9, sqr_mom=0.99, eps=1e-5, wd=0., decouple_wd=True):
"A `Optimizer` for Adam with `lr`, `mom`, `sqr_mom`, `eps` and `params`"
cbs = [weight_decay] if decouple_wd else [l2_reg]
cbs += [partial(average_grad, dampening=True), average_sqr_grad, step_stat, lamb_step]
return Optimizer(params, cbs, lr=lr, mom=mom, sqr_mom=sqr_mom, eps=eps, wd=wd)
params = tst_param([1,2,3], [0.1,0.2,0.3])
opt = Lamb(params, lr=0.1)
opt.step()
test_close(params[0], tensor([0.7840,1.7840,2.7840]), eps=1e-3)
#|export
class Lookahead(Optimizer, GetAttr):
"Wrap `opt` in a lookahead optimizer"
_default='opt'
def __init__(self, opt, k=6, alpha=0.5):
store_attr('opt,k,alpha')
self._init_state()
def step(self, closure=None):
if closure is not None: raise NotImplementedError("fastai optimizers currently do not support closure")
if self.slow_weights is None: self._copy_weights()
self.opt.step()
self.count += 1
if self.count%self.k != 0: return
for slow_pg,fast_pg in zip(self.slow_weights,self.param_lists):
for slow_p,fast_p in zip(slow_pg,fast_pg):
slow_p.data.add_(fast_p.data-slow_p.data, alpha=self.alpha)
fast_p.data.copy_(slow_p.data)
def clear_state(self):
self.opt.clear_state()
self._init_state()
def state_dict(self):
state = self.opt.state_dict()
state.update({'count': self.count, 'slow_weights': self.slow_weights})
return state
def load_state_dict(self, sd):
self.count = sd.pop('count')
self.slow_weights = sd.pop('slow_weights')
self.opt.load_state_dict(sd)
def _init_state(self): self.count,self.slow_weights = 0,None
def _copy_weights(self): self.slow_weights = L(L(p.clone().detach() for p in pg) for pg in self.param_lists)
@property
def param_lists(self): return self.opt.param_lists
@param_lists.setter
def param_lists(self, v): self.opt.param_lists = v
params = tst_param([1,2,3], [0.1,0.2,0.3])
p,g = params[0].data.clone(),tensor([0.1,0.2,0.3])
opt = Lookahead(SGD(params, lr=0.1))
for k in range(5): opt.step()
#first 5 steps are normal SGD steps
test_close(params[0], p - 0.5*g)
#Since k=6, sixth step is a moving average of the 6 SGD steps with the initial weight
opt.step()
test_close(params[0], p * 0.5 + (p-0.6*g) * 0.5)
#|export
@delegates(RAdam)
def ranger(p, lr, mom=0.95, wd=0.01, eps=1e-6, **kwargs):
"Convenience method for `Lookahead` with `RAdam`"
return Lookahead(RAdam(p, lr=lr, mom=mom, wd=wd, eps=eps, **kwargs))
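# Small smoke test of `ranger` on a toy parameter (not in the original notebook; the values
# and the 8-step budget are arbitrary). The sixth step triggers the Lookahead weight sync.
params = tst_param([1,2,3], [0.1,0.2,0.3])
opt = ranger(params, lr=1e-2)
for _ in range(8): opt.step()
params[0]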
#|export
def detuplify_pg(d):
res = {}
for k,v in d.items():
if k == 'params': continue
if is_listy(v): res.update(**{f'{k}__{i}': v_ for i,v_ in enumerate(v)})
else: res[k] = v
return res
tst = {'lr': 1e-2, 'mom': 0.9, 'params':[0,1,2]}
test_eq(detuplify_pg(tst), {'lr': 1e-2, 'mom': 0.9})
tst = {'lr': 1e-2, 'betas': (0.9,0.999), 'params':[0,1,2]}
test_eq(detuplify_pg(tst), {'lr': 1e-2, 'betas__0': 0.9, 'betas__1': 0.999})
#|export
def set_item_pg(pg, k, v):
if '__' not in k: pg[k] = v
else:
name,idx = k.split('__')
pg[name] = tuple(v if i==int(idx) else pg[name][i] for i in range_of(pg[name]))
return pg
tst = {'lr': 1e-2, 'mom': 0.9, 'params':[0,1,2]}
test_eq(set_item_pg(tst, 'lr', 1e-3), {'lr': 1e-3, 'mom': 0.9, 'params':[0,1,2]})
tst = {'lr': 1e-2, 'betas': (0.9,0.999), 'params':[0,1,2]}
test_eq(set_item_pg(tst, 'betas__0', 0.95), {'lr': 1e-2, 'betas': (0.95,0.999), 'params':[0,1,2]})
#|export
pytorch_hp_map = {'momentum': 'mom', 'weight_decay': 'wd', 'alpha': 'sqr_mom', 'betas__0': 'mom',
'betas__1': 'sqr_mom'}
if version.parse(torch.version.__version__)>version.parse('1.12.0'):
# Torch>=1.12 has a foreach param
pytorch_hp_map = merge(*(pytorch_hp_map,{'foreach': 'foreach'}))
#|export
def _convert_params(o:list) -> list:
splitter = []
for group in o:
if isinstance(group, dict): splitter.append(group)
else: splitter.append({'params':group})
return splitter
#|export
class OptimWrapper(_BaseOptimizer, GetAttr):
"A wrapper class for existing PyTorch optimizers"
_xtra=['zero_grad', 'step', 'state_dict', 'load_state_dict']
_default='opt'
def __init__(self,
params:list|dict=None, # Model parameters to pass to `opt`. If using an already built `opt`
opt:callable|torch.optim.Optimizer=None, # A torch optimizer constructor, or an already built optimizer
hp_map:dict=None, # A dictionary converting the keys of a built `opt` to the keys of fastai's Optimizer
convert_groups=True, # Whether to convert parameter groups
**kwargs
):
if params is None and opt is None: raise ValueError("Both `params` and `opt` cannot be None.")
if callable(opt):
self.opt = opt(_convert_params(params), **kwargs) if convert_groups else opt(params, **kwargs)
else:
if params is not None: raise ValueError("Tried using both `params` and a built optimizer. Just pass in `opt`.")
self.opt = opt
if hp_map is None: hp_map = pytorch_hp_map
self.fwd_map = {k: hp_map[k] if k in hp_map else k for k in detuplify_pg(self.opt.param_groups[0]).keys()}
self.bwd_map = {v:k for k,v in self.fwd_map.items()}
self.state = defaultdict(dict, {})
self.frozen_idx = 0
@property
def hypers(self):
return [{self.fwd_map[k]:v for k,v in detuplify_pg(pg).items() if k != 'params'} for pg in self.opt.param_groups]
def _set_hyper(self, k, v):
for pg,v_ in zip(self.opt.param_groups,v): pg = set_item_pg(pg, self.bwd_map[k], v_)
def clear_state(self): self.opt.state = defaultdict(dict, {})
@property
def param_lists(self): return [pg['params'] for pg in self.opt.param_groups]
@param_lists.setter
def param_lists(self, v):
for pg,v_ in zip(self.opt.param_groups,v): pg['params'] = v_
sgd = SGD([tensor([1,2,3])], lr=1e-3, mom=0.9, wd=1e-2)
tst_sgd = OptimWrapper([tensor([1,2,3])], torch.optim.SGD, lr=1e-3, momentum=0.9, weight_decay=1e-2)
#Access to param_groups
test_eq(tst_sgd.param_lists, sgd.param_lists)
#Set param_groups
tst_sgd.param_lists = [[tensor([4,5,6])]]
test_eq(tst_sgd.opt.param_groups[0]['params'], [tensor(4,5,6)])
#Access to hypers
_xtra_hypers = dict(dampening=0., nesterov=False, maximize=False)
if version.parse(torch.version.__version__)>version.parse('1.12.0'):
_xtra_hypers = merge(*(_xtra_hypers,dict(foreach=None)))
test_eq(tst_sgd.hypers, [{**sgd.hypers[0], **_xtra_hypers}])
#Set hypers
tst_sgd.set_hyper('mom', 0.95)
test_eq(tst_sgd.opt.param_groups[0]['momentum'], 0.95)
tst_sgd = OptimWrapper([{'params': [tensor([1,2,3])], 'lr': 1e-3},
{'params': [tensor([4,5,6])], 'lr': 1e-2}], torch.optim.SGD, momentum=0.9, weight_decay=1e-2)
sgd = SGD([[tensor([1,2,3])], [tensor([4,5,6])]], lr=[1e-3, 1e-2], mom=0.9, wd=1e-2)
#Access to param_groups
test_eq(tst_sgd.param_lists, sgd.param_lists)
#Set param_groups
tst_sgd.param_lists = [[tensor([4,5,6])], [tensor([1,2,3])]]
test_eq(tst_sgd.opt.param_groups[0]['params'], [tensor(4,5,6)])
test_eq(tst_sgd.opt.param_groups[1]['params'], [tensor(1,2,3)])
#Access to hypers
test_eq(tst_sgd.hypers, [{**sgd.hypers[i], **_xtra_hypers} for i in range(2)])
#Set hypers
tst_sgd.set_hyper('mom', 0.95)
test_eq([pg['momentum'] for pg in tst_sgd.opt.param_groups], [0.95,0.95])
tst_sgd.set_hyper('lr', [1e-4,1e-3])
test_eq([pg['lr'] for pg in tst_sgd.opt.param_groups], [1e-4,1e-3])
# Ensure we can use an already made optimizer
tst_sgd = torch.optim.SGD([{'params': [tensor([1,2,3])], 'lr': 1e-3},
{'params': [tensor([4,5,6])], 'lr': 1e-2}])
tst_sgd = OptimWrapper(opt = tst_sgd)
sgd = SGD([[tensor([1,2,3])], [tensor([4,5,6])]], lr=[1e-3, 1e-2])
#Access to param_groups
test_eq(tst_sgd.param_lists, sgd.param_lists)
#Set param_groups
tst_sgd.param_lists = [[tensor([4,5,6])], [tensor([1,2,3])]]
test_eq(tst_sgd.opt.param_groups[0]['params'], [tensor(4,5,6)])
test_eq(tst_sgd.opt.param_groups[1]['params'], [tensor(1,2,3)])
#Access to hypers
test_eq(tst_sgd.hypers, [{**sgd.hypers[i], **_xtra_hypers} for i in range(2)])
#Set hypers
tst_sgd.set_hyper('mom', 0.95)
test_eq([pg['momentum'] for pg in tst_sgd.opt.param_groups], [0.95,0.95])
tst_sgd.set_hyper('lr', [1e-4,1e-3])
test_eq([pg['lr'] for pg in tst_sgd.opt.param_groups], [1e-4,1e-3])
#|hide
#check it works with tuply hp names like in Adam
tst_adam = OptimWrapper([tensor([1,2,3])], torch.optim.Adam, lr=1e-2, betas=(0.9, 0.99))
tst_hypers = {'lr': 0.01, 'mom': 0.9, 'sqr_mom': 0.99, 'eps': 1e-08, 'wd': 0,
'amsgrad': False, 'maximize':False}
if version.parse(torch.version.__version__)>version.parse('1.12.0'):
tst_hypers = merge(*(tst_hypers,dict(foreach=None)))
test_eq(tst_adam.hypers, [tst_hypers])
tst_adam.set_hyper('mom', 0.95)
test_eq(tst_adam.opt.param_groups[0]['betas'], (0.95, 0.99))
tst_adam.set_hyper('sqr_mom', 0.9)
test_eq(tst_adam.opt.param_groups[0]['betas'], (0.95, 0.9))
tst_adam = torch.optim.Adam([tensor([1,2,3])], lr=1e-2, betas=(0.9, 0.99))
tst_adam = OptimWrapper(opt=tst_adam)
tst_hypers = {'lr': 0.01, 'mom': 0.9, 'sqr_mom': 0.99, 'eps': 1e-08, 'wd': 0, 'amsgrad': False,
'maximize':False}
if version.parse(torch.version.__version__)>version.parse('1.12.0'):
tst_hypers = merge(*(tst_hypers,dict(foreach=None)))
test_eq(tst_adam.hypers, [tst_hypers])
tst_adam.set_hyper('mom', 0.95)
test_eq(tst_adam.opt.param_groups[0]['betas'], (0.95, 0.99))
tst_adam.set_hyper('sqr_mom', 0.9)
test_eq(tst_adam.opt.param_groups[0]['betas'], (0.95, 0.9))
def _mock_train(m, x, y, opt):
m.train()
for i in range(0, 100, 25):
z = m(x[i:i+25])
loss = F.mse_loss(z, y[i:i+25])
loss.backward()
opt.step()
opt.zero_grad()
m = nn.Linear(4,5)
x = torch.randn(100, 3, 4)
y = torch.randn(100, 3, 5)
try:
torch.save(m.state_dict(), 'tmp.pth')
wgt,bias = m.weight.data.clone(),m.bias.data.clone()
m.load_state_dict(torch.load('tmp.pth'))
opt1 = OptimWrapper(m.parameters(), torch.optim.AdamW, betas=(0.9, 0.99), eps=1e-5, weight_decay=1e-2)
_mock_train(m, x.clone(), y.clone(), opt1)
wgt1,bias1 = m.weight.data.clone(),m.bias.data.clone()
m.load_state_dict(torch.load('tmp.pth'))
opt2 = Adam(m.parameters(), 1e-3, wd=1e-2)
_mock_train(m, x.clone(), y.clone(), opt2)
wgt2,bias2 = m.weight.data.clone(),m.bias.data.clone()
test_close(wgt1,wgt2,eps=1e-3)
test_close(bias1,bias2,eps=1e-3)
finally: os.remove('tmp.pth')
#hide
m = nn.Linear(4,5)
x = torch.randn(100, 3, 4)
y = torch.randn(100, 3, 5)
try:
torch.save(m.state_dict(), 'tmp.pth')
wgt,bias = m.weight.data.clone(),m.bias.data.clone()
m.load_state_dict(torch.load('tmp.pth'))
opt1 = torch.optim.AdamW(m.parameters(), betas=(0.9, 0.99), eps=1e-5, weight_decay=1e-2)
opt1 = OptimWrapper(opt=opt1)
_mock_train(m, x.clone(), y.clone(), opt1)
wgt1,bias1 = m.weight.data.clone(),m.bias.data.clone()
m.load_state_dict(torch.load('tmp.pth'))
opt2 = Adam(m.parameters(), 1e-3, wd=1e-2)
_mock_train(m, x.clone(), y.clone(), opt2)
wgt2,bias2 = m.weight.data.clone(),m.bias.data.clone()
test_close(wgt1,wgt2,eps=1e-3)
test_close(bias1,bias2,eps=1e-3)
finally: os.remove('tmp.pth')
m = nn.Linear(4,5)
x = torch.randn(100, 3, 4)
y = torch.randn(100, 3, 5)
try:
torch.save(m.state_dict(), 'tmp.pth')
wgt,bias = m.weight.data.clone(),m.bias.data.clone()
m.load_state_dict(torch.load('tmp.pth'))
opt1 = OptimWrapper(m.parameters(), torch.optim.Adam, betas=(0.9, 0.99), eps=1e-5, weight_decay=1e-2)
_mock_train(m, x.clone(), y.clone(), opt1)
wgt1,bias1 = m.weight.data.clone(),m.bias.data.clone()
m.load_state_dict(torch.load('tmp.pth'))
opt2 = Adam(m.parameters(), 1e-3, wd=1e-2, decouple_wd=False)
_mock_train(m, x.clone(), y.clone(), opt2)
wgt2,bias2 = m.weight.data.clone(),m.bias.data.clone()
test_close(wgt1,wgt2,eps=1e-3)
test_close(bias1,bias2,eps=1e-3)
finally: os.remove('tmp.pth')
#hide
m = nn.Linear(4,5)
x = torch.randn(100, 3, 4)
y = torch.randn(100, 3, 5)
try:
torch.save(m.state_dict(), 'tmp.pth')
wgt,bias = m.weight.data.clone(),m.bias.data.clone()
m.load_state_dict(torch.load('tmp.pth'))
opt1 = torch.optim.Adam(m.parameters(), betas=(0.9, 0.99), eps=1e-5, weight_decay=1e-2)
opt1 = OptimWrapper(opt=opt1)
_mock_train(m, x.clone(), y.clone(), opt1)
wgt1,bias1 = m.weight.data.clone(),m.bias.data.clone()
m.load_state_dict(torch.load('tmp.pth'))
opt2 = Adam(m.parameters(), 1e-3, wd=1e-2, decouple_wd=False)
_mock_train(m, x.clone(), y.clone(), opt2)
wgt2,bias2 = m.weight.data.clone(),m.bias.data.clone()
test_close(wgt1,wgt2,eps=1e-3)
test_close(bias1,bias2,eps=1e-3)
finally: os.remove('tmp.pth')
opt_func = partial(OptimWrapper, opt=torch.optim.SGD)
opt = torch.optim.SGD([tensor([1,2,3])], lr=1e-2)
opt_func = OptimWrapper(opt=opt)
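# Quick check (not in the original notebook) that the `partial` form behaves as a factory
# usable wherever fastai expects an `opt_func`; the toy parameter below is arbitrary.
p = tst_param([1,2,3], [0.1,0.2,0.3])
opt_sgd = partial(OptimWrapper, opt=torch.optim.SGD)([p], lr=1e-2)
opt_sgd.step(); opt_sgd.zero_grad()
test_close(p[0], tensor([0.999, 1.998, 2.997]))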
#|hide
from nbdev.export import *
notebook2script()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Inspecting unique values in each column
Step2: Selecting only relevant columns
Step3: still needs to be done
Step4: Saving selection
Step5: Selecting a subset of the data
Step6: New column with a summary of all votes, keyed by parliamentary actor id
Step7: Preparing data for d3
Step8: Selecting a subset of the data to export
|
<ASSISTANT_TASK:>
Python Code:
with open('d3/mapHemicycle/data/scrutins.json', 'r') as f:
json_data = json.load(f)
json_data.keys()
json_data['scrutins'].keys()
df = pd.io.json.json_normalize(json_data['scrutins']['scrutin'])
for col in df.columns:
print ('____________________')
print (col)
try:
print (df[col].unique())
except:
print ('error col ' + col)
df_sel = df[['dateScrutin', 'demandeur.texte',
'miseAuPoint.abstentions.votant',
'miseAuPoint.abstentions.votant.acteurRef',
'miseAuPoint.abstentions.votant.mandatRef',
'miseAuPoint.contres.votant', 'miseAuPoint.contres.votant.acteurRef',
'miseAuPoint.contres.votant.mandatRef',
'miseAuPoint.nonVotantsVolontaires.votant',
'miseAuPoint.pours.votant', 'miseAuPoint.pours.votant.acteurRef',
'miseAuPoint.pours.votant.mandatRef', 'modePublicationDesVotes',
'numero', 'objet.libelle',
'quantiemeJourSeance', 'seanceRef', 'sessionRef', 'sort.code',
'sort.libelle', 'syntheseVote.annonce',
'syntheseVote.decompte.abstention', 'syntheseVote.decompte.contre',
'syntheseVote.decompte.nonVotant', 'syntheseVote.decompte.pour',
'syntheseVote.nbrSuffragesRequis', 'syntheseVote.nombreVotants',
'syntheseVote.suffragesExprimes', 'titre', 'typeVote.codeTypeVote',
'typeVote.libelleTypeVote', 'typeVote.typeMajorite', 'uid',
'ventilationVotes.organe.groupes.groupe']]
df_sel.head()
def map_all_votes(all_votes):
dict_votes = {'pours':[], 'contres':[], 'nonVotants':[], 'abstentions':[]}
for parl_group in all_votes:
decompte_nominatif = parl_group['vote']['decompteNominatif']
for voteposition in decompte_nominatif.keys():
try:
dict_votes[voteposition] += [vote['acteurRef'] for vote in decompte_nominatif[voteposition]['votant']]
except:
dict_votes[voteposition] += []
return [dict_votes['pours'], dict_votes['contres'], dict_votes['nonVotants'], dict_votes['abstentions'] ]
df_sel['votes_pour'], df_sel['votes_contre'], df_sel['nonVotants'], df_sel['abstentions'] = zip(*df_sel['ventilationVotes.organe.groupes.groupe'].map(map_all_votes))
df_sel.head()
df_sel.to_pickle('data/scrutins/tabular_data_scrutin_sel.pkl')
df_sel.to_csv('data/scrutins/tabular_data_scrutin_sel.csv')
df_short = df_sel[['dateScrutin', 'demandeur.texte', 'modePublicationDesVotes', 'numero', 'objet.libelle', 'quantiemeJourSeance', 'sort.code', 'syntheseVote.decompte.abstention',
'syntheseVote.decompte.contre', 'syntheseVote.decompte.nonVotant', 'syntheseVote.decompte.pour',
'syntheseVote.nbrSuffragesRequis', 'syntheseVote.nombreVotants',
'syntheseVote.suffragesExprimes', 'titre', 'typeVote.libelleTypeVote', 'typeVote.typeMajorite', 'votes_pour', 'votes_contre',
'nonVotants', 'abstentions']]
df_short = df_short.rename(columns={'demandeur.texte':'demandeur', 'sort.code': 'resultat','syntheseVote.decompte.abstention': 'nombre_abstentions',
'syntheseVote.decompte.pour': 'nombre_pour',
'syntheseVote.decompte.contre': 'nombre_contre',
'syntheseVote.decompte.nonVotant': 'nombre_nonVotant',
'syntheseVote.nbrSuffragesRequis': 'nombre_suffrages_requis',
'syntheseVote.nombreVotants': 'nombre_votants',
'syntheseVote.suffragesExprimes': 'nombre_suffrages_exprimes',
'typeVote.libelleTypeVote': 'type_vote',
'typeVote.typeMajorite': 'type_majorite',
'objet.libelle':'libelle'})
df_short.info()
typevote_dict = {"votes_pour":"Pour", "votes_contre":"Contre", "nonVotants":"Non-votant", "abstentions":"Abstention"}
def populate_summary_col(row):
new_dict = {}
for keyvote, valuevote in typevote_dict.items():
for vote in row[keyvote]:
new_dict[vote] = valuevote
return new_dict
df_short['all_votes'] = df_short.apply(populate_summary_col, axis=1)
df_short.to_pickle('data/scrutins/tabular_data_scrutin_sh.pkl')
df_short.to_csv('data/scrutins/tabular_data_scrutin_sh.csv')
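# With `all_votes` keyed by the acteur id, the data can be pivoted into a deputy-by-scrutin
# matrix for later analysis (a sketch, not in the original notebook; using `numero` as the
# column label is an arbitrary choice).
votes_matrix = pd.DataFrame({row['numero']: row['all_votes'] for _, row in df_short.iterrows()})
votes_matrix.head()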
df_short_for_d3 = df_short.copy()
df_short_for_d3["votes_pour"] = df_short_for_d3.votes_pour.astype('str').map(lambda x: x.replace("'", '"'))
df_short_for_d3["votes_contre"] = df_short_for_d3.votes_contre.astype('str').map(lambda x: x.replace("'", '"'))
df_short_for_d3["nonVotants"] = df_short_for_d3.nonVotants.astype('str').map(lambda x: x.replace("'", '"'))
df_short_for_d3["abstentions"] = df_short_for_d3.abstentions.astype('str').map(lambda x: x.replace("'", '"'))
df_short_for_d3['all_votes'] = df_short_for_d3.all_votes.astype('str').map(lambda x: x.replace("'", '"'))
df_short_for_d3 = df_short_for_d3[['dateScrutin', 'demandeur', 'modePublicationDesVotes', 'numero',
'libelle', 'quantiemeJourSeance', 'resultat', 'nombre_abstentions',
'nombre_contre', 'nombre_nonVotant', 'nombre_pour',
'nombre_suffrages_requis', 'nombre_votants',
'nombre_suffrages_exprimes', 'titre', 'type_vote', 'type_majorite',
'all_votes']]
df_short_for_d3.to_csv('d3/mapHemicycle/data/scrutins_tabular.csv')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's start again with our text-classification problem, but for now we will only use a reduced number of instances. We will work only with 3,000 instances.
Step2: Then import the set of stop words and create a pipeline that combines the TF-IDF vectorizer and the Naïve Bayes algorithm (recall that we had a stopwords_en.txt file with a list of stop words).
Step3: If we evaluate our algorithm with a three-fold cross-validation, we obtain a mean score of around 0.81.
Step4: It looks like we should train the algorithm with a list of different parameter values and keep the parameter value that achieves the best results. Let's implement a helper function to do that. This function will train the algorithm with a list of values, each time obtaining an accuracy score calculated by performing k-fold cross-validation
Step5: Let's call this function; we will use numpy's logspace function to generate a list of alpha values spaced evenly on a log scale.
Step6: As expected, the training accuracy is always greater than the testing accuracy. The best results are obtained with an alpha value of 0.1 (accuracy of 0.81)
Step7: We created a very useful function to graph and obtain the best parameter value for a classifier. Let's use it to adjust another classifier that uses a Support Vector Machine (SVM) instead of MultinomialNB
Step8: For gamma < 1 we have underfitting. For gamma > 1 we have overfitting. So here, the best result is for gamma = 1, where we obtain a training accuracy of 0.999 and a testing accuracy of about 0.75
Step9: Let's execute our grid search and print the best parameter values and scores.
Step11: With the grid search we obtained a better combination of the C and gamma parameters: for values of 10.0 and 0.10 respectively, we obtained a 3-fold cross-validation accuracy of 0.828, much better than the best value (0.76) we obtained in the previous experiment by adjusting only gamma and keeping C at 1.0.
Step12: The following function loads a particular fold, fits the classifier with the specified parameter set, and finally returns the testing score. This function will be called by each of the parallel processes
Step14: This function executes the grid search in parallel processes. For each of the parameter combinations (returned by the ParameterGrid iterator), it iterates over the K folds and creates a task to compute the evaluation. It returns the parameter combinations along with the tasks list
|
<ASSISTANT_TASK:>
Python Code:
%pylab inline
import IPython
import sklearn as sk
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
print 'IPython version:', IPython.__version__
print 'numpy version:', np.__version__
print 'scikit-learn version:', sk.__version__
print 'matplotlib version:', matplotlib.__version__
from sklearn.datasets import fetch_20newsgroups
news = fetch_20newsgroups(subset='all')
n_samples = 3000
X = news.data[:n_samples]
y = news.target[:n_samples]
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
def get_stop_words():
result = set()
for line in open('data/stopwords_en.txt', 'r').readlines():
result.add(line.strip())
return result
stop_words = get_stop_words()
clf = Pipeline([
('vect', TfidfVectorizer(
stop_words=stop_words,
token_pattern=ur"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b",
)),
('nb', MultinomialNB(alpha=0.01)),
])
from sklearn.cross_validation import cross_val_score, KFold
from scipy.stats import sem
def evaluate_cross_validation(clf, X, y, K):
    # create a k-fold cross-validation iterator of K folds
cv = KFold(len(y), K, shuffle=True, random_state=0)
# by default the score used is the one returned by score method of the estimator (accuracy)
scores = cross_val_score(clf, X, y, cv=cv)
print scores
print ("Mean score: {0:.3f} (+/-{1:.3f})").format(
np.mean(scores), sem(scores))
evaluate_cross_validation(clf, X, y, 3)
def calc_params(X, y, clf, param_values, param_name, K):
# initialize training and testing scores with zeros
train_scores = np.zeros(len(param_values))
test_scores = np.zeros(len(param_values))
# iterate over the different parameter values
for i, param_value in enumerate(param_values):
print param_name, ' = ', param_value
# set classifier parameters
clf.set_params(**{param_name:param_value})
# initialize the K scores obtained for each fold
k_train_scores = np.zeros(K)
k_test_scores = np.zeros(K)
# create KFold cross validation
        cv = KFold(len(y), K, shuffle=True, random_state=0)
# iterate over the K folds
for j, (train, test) in enumerate(cv):
# fit the classifier in the corresponding fold
# and obtain the corresponding accuracy scores on train and test sets
clf.fit([X[k] for k in train], y[train])
k_train_scores[j] = clf.score([X[k] for k in train], y[train])
k_test_scores[j] = clf.score([X[k] for k in test], y[test])
# store the mean of the K fold scores
train_scores[i] = np.mean(k_train_scores)
test_scores[i] = np.mean(k_test_scores)
# plot the training and testing scores in a log scale
plt.semilogx(param_values, train_scores, alpha=0.4, lw=2, c='b')
plt.semilogx(param_values, test_scores, alpha=0.4, lw=2, c='g')
plt.xlabel(param_name + " values")
plt.ylabel("Mean cross validation accuracy")
# return the training and testing scores on each parameter value
return train_scores, test_scores
alphas = np.logspace(-7, 0, 8)
print alphas
train_scores, test_scores = calc_params(X, y, clf, alphas, 'nb__alpha', 3)
print 'training scores: ', train_scores
print 'testing scores: ', test_scores
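# Illustrative sketch (not from the original notebook): rather than reading the
# best alpha off the plot, it can be picked programmatically from the arrays
# returned by calc_params. Assumes `alphas`, `train_scores` and `test_scores`
# from the cells above are still in scope.
best_idx = np.argmax(test_scores)
print('best alpha: %g (mean cv accuracy %.3f)' % (alphas[best_idx], test_scores[best_idx]))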
from sklearn.svm import SVC
clf = Pipeline([
('vect', TfidfVectorizer(
stop_words=stop_words,
token_pattern=ur"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b",
)),
('svc', SVC()),
])
gammas = np.logspace(-2, 1, 4)
train_scores, test_scores = calc_params(X, y, clf, gammas, 'svc__gamma', 3)
print 'training scores: ', train_scores
print 'testing scores: ', test_scores
from sklearn.grid_search import GridSearchCV
parameters = {
'svc__gamma': np.logspace(-2, 1, 4),
'svc__C': np.logspace(-1, 1, 3),
}
clf = Pipeline([
('vect', TfidfVectorizer(
stop_words=stop_words,
token_pattern=ur"\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b",
)),
('svc', SVC()),
])
gs = GridSearchCV(clf, parameters, verbose=2, refit=False, cv=3)
%time _ = gs.fit(X, y)
gs.best_params_, gs.best_score_
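# Illustrative sketch (not from the original notebook): because the grid search
# above was built with refit=False, it keeps no fitted best estimator. One way
# to refit the pipeline on the winning parameter combination, assuming `clf`,
# `gs`, `X` and `y` are still in scope:
clf.set_params(**gs.best_params_)
clf.fit(X, y)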
from sklearn.externals import joblib
from sklearn.cross_validation import ShuffleSplit
import os
def persist_cv_splits(X, y, K=3, name='data', suffix="_cv_%03d.pkl"):
    """Dump K folds to filesystem."""
    cv_split_filenames = []
    # create KFold cross validation
    cv = KFold(len(y), K, shuffle=True, random_state=0)
# iterate over the K folds
for i, (train, test) in enumerate(cv):
cv_fold = ([X[k] for k in train], y[train], [X[k] for k in test], y[test])
cv_split_filename = name + suffix % i
cv_split_filename = os.path.abspath(cv_split_filename)
joblib.dump(cv_fold, cv_split_filename)
cv_split_filenames.append(cv_split_filename)
return cv_split_filenames
cv_filenames = persist_cv_splits(X, y, name='news')
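# Illustrative sanity check (not from the original notebook): load the first
# persisted fold back with joblib to confirm the dump worked. Assumes
# `cv_filenames` from the cell above is still in scope.
X_tr, y_tr, X_te, y_te = joblib.load(cv_filenames[0], mmap_mode='c')
print('fold 0: %d training and %d testing documents' % (len(X_tr), len(X_te)))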
def compute_evaluation(cv_split_filename, clf, params):
# All module imports should be executed in the worker namespace
from sklearn.externals import joblib
# load the fold training and testing partitions from the filesystem
X_train, y_train, X_test, y_test = joblib.load(
cv_split_filename, mmap_mode='c')
clf.set_params(**params)
clf.fit(X_train, y_train)
test_score = clf.score(X_test, y_test)
return test_score
from sklearn.grid_search import ParameterGrid
def parallel_grid_search(lb_view, clf, cv_split_filenames, param_grid):
all_tasks = []
all_parameters = list(ParameterGrid(param_grid))
# iterate over parameter combinations
for i, params in enumerate(all_parameters):
task_for_params = []
# iterate over the K folds
for j, cv_split_filename in enumerate(cv_split_filenames):
t = lb_view.apply(
compute_evaluation, cv_split_filename, clf, params)
task_for_params.append(t)
all_tasks.append(task_for_params)
return all_parameters, all_tasks
from sklearn.svm import SVC
from IPython.parallel import Client
client = Client()
lb_view = client.load_balanced_view()
all_parameters, all_tasks = parallel_grid_search(
lb_view, clf, cv_filenames, parameters)
def print_progress(tasks):
progress = np.mean([task.ready() for task_group in tasks
for task in task_group])
print "Tasks completed: {0}%".format(100 * progress)
print_progress(all_tasks)
def find_bests(all_parameters, all_tasks, n_top=5):
    """Compute the mean score of the completed tasks"""
mean_scores = []
for param, task_group in zip(all_parameters, all_tasks):
scores = [t.get() for t in task_group if t.ready()]
if len(scores) == 0:
continue
mean_scores.append((np.mean(scores), param))
return sorted(mean_scores, reverse=True)[:n_top]
print find_bests(all_parameters, all_tasks)
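# Illustrative sketch (not from the original notebook): find_bests only
# aggregates tasks that have already finished, so partial results can be
# inspected while the engines are still working. To block until every task is
# done and then rank the final results (assumes `all_tasks`, `all_parameters`
# and `find_bests` from the cells above):
for task_group in all_tasks:
    for task in task_group:
        task.wait()
print(find_bests(all_parameters, all_tasks))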
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: BNU 1
Step2: HNU Dataset
Step3: DC1 Dataset
Step4: NKI 1
|
<ASSISTANT_TASK:>
Python Code:
%%script false
## disklog.sh
#!/bin/bash -e
# run this in the background with nohup ./disklog.sh > disk.txt &
#
while true; do
echo "$(du -s $1 | awk '{print $1}')"
sleep 30
done
##cpulog.sh
import psutil
import time
import argparse
def cpulog(outfile):
with open(outfile, 'w') as outf:
while(True):
cores = psutil.cpu_percent(percpu=True)
corestr = ",".join([str(core) for core in cores])
outf.write(corestr + '\n')
outf.flush()
time.sleep(1) # delay for 1 second
def main():
parser = argparse.ArgumentParser()
parser.add_argument('outfile', help='the file to write core usage to.')
args = parser.parse_args()
cpulog(args.outfile)
if __name__ == "__main__":
main()
## memlog.sh
#!/bin/bash -e
# run this in the background with nohup ./memlog.sh > mem.txt &
#
while true; do
echo "$(free -m | grep buffers/cache | awk '{print $3}')"
sleep 1
done
## runonesub.sh
# A function for generating memory and cpu summaries for fngs pipeline.
#
# Usage: ./generate_statistics.sh /path/to/rest /path/to/anat /path/to/output
rm -rf $3
mkdir $3
./memlog.sh > ${3}/mem.txt &
memkey=$!
python cpulog.py ${3}/cpu.txt &
cpukey=$!
./disklog.sh $3 > ${3}/disk.txt &
diskkey=$!
res=2mm
atlas="/FNGS_server/atlases/atlas/MNI152_T1-${res}.nii.gz"
atlas_brain="/FNGS_server/atlases/atlas/MNI152_T1-${res}_brain.nii.gz"
atlas_mask="/FNGS_server/atlases/mask/MNI152_T1-${res}_brain_mask.nii.gz"
lv_mask="/FNGS_server/atlases/mask/HarvOx_lv_thr25-${res}.nii.gz"
label="/FNGS_server/atlases/label/desikan-${res}.nii.gz"
exec 4<$1
exec 5<$2
fngs_pipeline $1 $2 $atlas $atlas_brain $atlas_mask $lv_mask $3 none $label --fmt graphml
kill $memkey $cpukey $diskkey
%matplotlib inline
import numpy as np
import re
import matplotlib.pyplot as plt
from IPython.display import Image, display
def memory_function(infile, dataset):
with open(infile, 'r') as mem:
lines = mem.readlines()
testar = np.asarray([line.strip() for line in lines]).astype(float)/1000
fig=plt.figure()
ax = fig.add_subplot(111)
ax.plot(range(0, testar.shape[0]), testar - min(testar))
ax.set_ylabel('memory usage in GB')
ax.set_xlabel('Time (s)')
ax.set_title(dataset + ' Memory Usage; max = %.2f GB; mean = %.2f GB' % (max(testar), np.mean(testar)))
return fig
def cpu_function(infile, dataset):
with open(infile, 'r') as cpuf:
lines = cpuf.readlines()
testar = [re.split(',',line.strip()) for line in lines][0:-1]
corear = np.zeros((len(testar), len(testar[0])))
for i in range(0, len(testar)):
corear[i,:] = np.array([float(cpu) for cpu in testar[i]])
fig=plt.figure()
ax = fig.add_subplot(111)
lines = [ax.plot(corear[:,i], '--', label='cpu '+ str(i),
alpha=0.5)[0] for i in range(0, corear.shape[1])]
total = corear.sum(axis=1)
lines.append(ax.plot(total, label='all cores')[0])
labels = [h.get_label() for h in lines]
fig.legend(handles=lines, labels=labels, loc='lower right', prop={'size':6})
ax.set_ylabel('CPU usage (%)')
ax.set_ylim([0, max(total)+10])
ax.set_xlabel('Time (s)')
ax.set_title(dataset + ' Processor Usage; max = %.1f per; mean = %.1f per' % (max(total), np.mean(total)))
return fig
def disk_function(infile, dataset):
with open(infile, 'r') as disk:
lines = disk.readlines()
testar = np.asarray([line.strip() for line in lines]).astype(float)/1000000
fig=plt.figure()
ax = fig.add_subplot(111)
ax.plot(range(0, testar.shape[0]), testar - min(testar))
ax.set_ylabel('Disk usage GB')
ax.set_xlabel('Time (30 s)')
ax.set_title(dataset + ' Disk Usage; max = %.2f GB; mean = %.2f GB' % (max(testar), np.mean(testar)))
return fig
memfig = memory_function('/data/BNU_sub/BNU_single/mem.txt', 'BNU 1 single')
diskfig = disk_function('/data/BNU_sub/BNU_single/disk.txt', 'BNU 1 single')
cpufig = cpu_function('/data/BNU_sub/BNU_single/cpu.txt', 'BNU 1 single')
memfig.show()
diskfig.show()
cpufig.show()
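# Illustrative sketch (not from the original notebook): a compact textual
# summary of the same three logs, using the same unit and parsing conventions
# as the plotting helpers above (MB -> GB for memory, KB -> GB for disk, and
# the last, possibly incomplete, CPU line dropped). The directory layout is
# assumed to match the paths used elsewhere in this notebook.
def usage_summary(basedir, dataset):
    mem = np.loadtxt(basedir + '/mem.txt') / 1000.0
    disk = np.loadtxt(basedir + '/disk.txt') / 1000000.0
    with open(basedir + '/cpu.txt') as f:
        rows = [line.strip().split(',') for line in f][0:-1]
    total_cpu = np.array([[float(v) for v in row] for row in rows]).sum(axis=1)
    print('%s: peak mem %.2f GB, disk growth %.2f GB, mean total CPU %.1f%%'
          % (dataset, mem.max(), disk.max() - disk.min(), total_cpu.mean()))
usage_summary('/data/BNU_sub/BNU_single', 'BNU 1 single')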
memfig = memory_function('/data/HNU_sub/HNU_single/mem.txt', 'HNU 1 single')
diskfig = disk_function('/data/HNU_sub/HNU_single/disk.txt', 'HNU 1 single')
cpufig = cpu_function('/data/HNU_sub/HNU_single/cpu.txt', 'HNU 1 single')
memfig.show()
diskfig.show()
cpufig.show()
memfig = memory_function('/data/DC_sub/DC_single/mem.txt', 'DC 1 single')
diskfig = disk_function('/data/DC_sub/DC_single/disk.txt', 'DC 1 single')
cpufig = cpu_function('/data/DC_sub/DC_single/cpu.txt', 'DC 1 single')
memfig.show()
diskfig.show()
cpufig.show()
memfig = memory_function('/data/NKI_sub/NKI_single/mem.txt', 'NKI 1 single')
diskfig = disk_function('/data/NKI_sub/NKI_single/disk.txt', 'NKI 1 single')
cpufig = cpu_function('/data/NKI_sub/NKI_single/cpu.txt', 'NKI 1 single')
memfig.show()
diskfig.show()
cpufig.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bootstrap Comparisons
Step2: TOST Equivalence Tests
|
<ASSISTANT_TASK:>
Python Code:
# Import numpy and set random number generator
import numpy as np
np.random.seed(10)
# Import stats functions
from pymer4.stats import perm_test
# Generate two samples of data: X (M~2, SD~10, N=100) and Y (M~2.5, SD~1, N=100)
x = np.random.normal(loc=2, size=100)
y = np.random.normal(loc=2.5, size=100)
# Between groups t-test. The first value is the t-stat and the
# second is the permuted p-value
result = perm_test(x, y, stat="tstat", n_perm=500, n_jobs=1)
print(result)
# Spearman rank correlation. The first values is spearman's rho
# and the second is the permuted p-value
result = perm_test(x, y, stat="spearmanr", n_perm=500, n_jobs=1)
print(result)
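# Illustrative sketch (not from the original notebook): what a permutation test
# does conceptually -- build a null distribution by shuffling group membership
# and recomputing the statistic. This bare-bones numpy version uses a simple
# mean difference, not pymer4's implementation, and reuses `x` and `y` defined above.
observed = x.mean() - y.mean()
pooled = np.concatenate([x, y])
null = np.empty(500)
for i in range(500):
    perm = np.random.permutation(pooled)
    null[i] = perm[:x.size].mean() - perm[x.size:].mean()
print(observed, np.mean(np.abs(null) >= np.abs(observed)))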
# Import stats function
from pymer4.stats import boot_func
# Define a simple function for a median difference test
def med_diff(x, y):
return np.median(x) - np.median(y)
# Between groups median test with resampling
# The first value is the median difference and the
# second is the lower and upper 95% confidence interval
result = boot_func(x, y, func=med_diff)
print(result)
# Import stats function
from pymer4.stats import tost_equivalence
# Generate some data
lower, upper = -0.1, 0.1
x, y = np.random.normal(0.145, 0.025, 35), np.random.normal(0.16, 0.05, 17)
result = tost_equivalence(x, y, lower, upper, plot=True)
# Print the results dictionary nicely
for k, v in result.items():
print(f"{k}: {v}\n")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load time series data
Step2: There are a few supported file formats. AT2 files can be loaded as follows
Step3: Create site profile
Step4: Create the site response calculator
Step5: Specify the output
Step6: Perform the calculation
Step7: Calculate all of the outputs from the calculation object.
Step8: Plot the outputs
|
<ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pysra
%matplotlib inline
# Increased figure sizes
plt.rcParams["figure.dpi"] = 120
fname = "data/NIS090.AT2"
with open(fname) as fp:
next(fp)
description = next(fp).strip()
next(fp)
parts = next(fp).split()
time_step = float(parts[1])
accels = [float(p) for l in fp for p in l.split()]
ts = pysra.motion.TimeSeriesMotion(fname, description, time_step, accels)
ts.accels
ts = pysra.motion.TimeSeriesMotion.load_at2_file(fname)
ts.accels
fig, ax = plt.subplots()
ax.plot(ts.times, ts.accels)
ax.set(xlabel="Time (sec)", ylabel="Accel (g)")
fig.tight_layout();
profile = pysra.site.Profile(
[
pysra.site.Layer(pysra.site.SoilType("Soil", 18.0, None, 0.05), 30, 400),
pysra.site.Layer(pysra.site.SoilType("Rock", 24.0, None, 0.01), 0, 1200),
]
)
profile.plot('initial_shear_vel')
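# Back-of-the-envelope check (not from the original notebook): a single 30 m
# soil layer with Vs = 400 m/s has a fundamental site period of roughly 4*H/Vs,
# so the amplification peak should sit near 3.3 Hz in the outputs computed below.
H, vs = 30.0, 400.0
print('approximate site period: %.2f s (f0 ~ %.1f Hz)' % (4 * H / vs, vs / (4 * H)))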
calc = pysra.propagation.LinearElasticCalculator()
freqs = np.logspace(-1, 2, num=500)
outputs = pysra.output.OutputCollection(
[
pysra.output.ResponseSpectrumOutput(
# Frequency
freqs,
# Location of the output
pysra.output.OutputLocation("outcrop", index=0),
# Damping
0.05,
),
pysra.output.ResponseSpectrumRatioOutput(
# Frequency
freqs,
# Location in (denominator),
pysra.output.OutputLocation("outcrop", index=-1),
# Location out (numerator)
pysra.output.OutputLocation("outcrop", index=0),
# Damping
0.05,
),
pysra.output.FourierAmplitudeSpectrumOutput(
# Frequency
freqs,
# Location of the output
pysra.output.OutputLocation("outcrop", index=0),
# Bandwidth for Konno-Omachi smoothing window
ko_bandwidth=30,
)
]
)
calc(ts, profile, profile.location("outcrop", index=-1))
outputs(calc)
for o in outputs:
o.plot(style="indiv")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Series
Step2: We can also create a Series from a Python dictionary. Since we do not specify an index, it is built from the dictionary keys, in the same order in which they were inserted into the dictionary
Step3: We can specify an index to set the order (and to include non-existing elements). The standard way to mark missing data in Pandas is NaN.
Step4: Alternatively, we can view a Series as a (fixed-length) dictionary whose values can be accessed and changed through its index
Step5: Just as in NumPy, Series support vectorized operations. It is also interesting to see that operations on Series automatically align on the labels (using the union of the labels of the Series involved). When a label is present in one Series but not in the other, the result is marked as NaN.
Step6: A Series has a name, stored in the name attribute, which can be specified when it is created or changed with rename()
Step7: The value_counts function is very interesting because, given a Series, it returns a Series with the count of each distinct value (in our example this is trivial, because all the values are different).
Step8: 2. DataFrames
Step9: Note that the index and column names are created automatically, but we will probably want to specify them at creation time. In the previous example, we would like to name the columns (in this case, each row holds the characteristics of a weapon in the game Call of Duty)
Step10: We can inspect the index and the columns
Step11: Let's look at another way of creating DataFrames
Step12: Arrays are objects and have associated methods. Use dtype to find out the type of the elements of a
Step13: 3. Basic operations with DataFrames
Step14: It is possible to compute numeric functions over some columns
Step15: To delete a column, we use del
Step16: If the values passed to create a column are not enough, the missing ones are filled with NaN
Step17: As we saw, selecting a column of a DataFrame is very similar to selecting an element of a dictionary, with the column name as the key (it is also possible to select more than one column at a time)
Step18: Let's add the weapon names to our DataFrame and set them as the index.
Step19: There are several different ways to select a row. If we know its index label, we use loc
Step20: If we know its positional index, we use iloc
Step21: We can slice rows just as with ndarrays, by using a range in the selection (note that this selects rows, not columns, and that a DataFrame is returned)
Step22: We can select from a DataFrame the cells that satisfy a certain condition (just as we could with arrays), and use the result to select those cells (here the cells that were not selected are marked as NaN).
Step23: When performing operations between DataFrames, just as with Series, both the indexes and the columns are aligned, always returning the union of the indexes/columns of the DataFrames involved.
Step24: In the following operation we subtract a Series from the DataFrame. In that case, pandas aligns the columns with the index of the Series, which makes the subtraction apply to every element of each row.
Step25: DataFrames can be multiplied by scalars, and boolean operators can be applied, exactly as with ndarrays.
Step26: 4. Advanced selection operations.
Step27: 4.2 Selecting rows by boolean condition
Step28: What we are doing is building a Series of boolean values, so that all the rows holding True are returned
Step29: ... and then we pass this Series as an argument to select the rows whose value is True.
Step30: Let's create a new DataFrame that has, for each weapon, its type. To do that, we build a Series from a dictionary (whose index matches the one we already have) and simply assign it to a new column of the DataFrame we already built.
Step31: Let's list only the sniper rifles ('Fusil de Precisión')
Step32: It is possible to sort the results of a query (which is always a DataFrame)
Step33: 4.3 Operations over sets of rows
Step34: If we want the mean of every column, we use agg to tell it to apply np.mean to all the columns (the method allows more than one function, so we will also compute the standard deviation). In our example we are grouping by the value of a single column, but it is possible to group by more than one.
Step35: We can apply different functions to different columns...
Step36: And we can also apply it to every row of our DataFrame
Step37: 4.4 More element selection
Step38: We can also specify more than one row by using a list (note that it is a list inside another list)
Step39: If we want a single cell, the first element of loc selects the row and the second one the column
Step40: We can set the value of a cell using loc
Step41: Finally, we can specify slices, indicating, on each axis, the first and last element (both of which are included).
Step42: Alternatively, the iloc method allows doing the same thing, but specifying integer positions instead of labels (similarly to how NumPy works)
Step43: Which is the best weapon for a sniper? We sort by the average of range and accuracy
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
# This notebook was written with pandas version 1.0.3
pd.__version__
s= pd.Series(np.random.randn(5), index=['a','b','c','d','e'])
s
d = pd.Series({'b': 1, 'a': 0, 'c': 2})
d
s[s > s.median()] # Select the values greater than the median of the array.
s['a']
s['e']=12
s
'e' in s
s.get(['f'],np.nan) # Without get, this lookup would raise an error
s[1:] # without the first element
s[:-1] # without the last element
s[1:]+s[:-1]
s2=s.rename('My_index')
s2
s.value_counts()
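# Illustrative example (not from the original notebook): value_counts is more
# interesting when values repeat, which this small throwaway Series shows.
pd.Series(['a', 'b', 'a', 'a', 'c']).value_counts()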
a = np.array([
[65,60,60,45,60],
[75,35,50,75,40],
[85,80,30,20,75],
[75,45,30,70,80],
[80,55,90,40,45],
[90,60,95,15,45],
[60,55,45,55,40]
])
df=pd.DataFrame(a)
df
df=pd.DataFrame(a,columns=['Daño','Precisión','Alcance','Cadencia','Movilidad'])
df
df.index, df.columns
d = {'one': pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
'two': pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}
# Note that column 'one' has no value for row 'd'
df2 = pd.DataFrame(d)
df2
pd.DataFrame(d, columns=['two','three'])
df.describe()
df['Precisión']
df['Dummy']=df['Alcance']*df['Daño']
df['Es_preciso']=df['Precisión'] >= 60
df
df[['Precisión', 'Alcance']].mean()
del df['Dummy']
df
df2['one_trunc'] = df2['one'][:2]
df2
df['Precisión']
df[['Precisión', 'Alcance']]
df['Arma']=['M16 Evil Clown', 'S36 Evil Clown', 'BY15 SnowFlakes', 'MSMC Ancient Runes',
'XPR-50 April\'s Fool', 'DLQ33 DeepShark', 'M4LMG RibbonExplosion']
df.set_index('Arma', inplace=True)
df
df.index # The index has now changed
df.loc['BY15 SnowFlakes']
df.iloc[0]
df[1:3]
df>50
df[df>50]
del df['Es_preciso']
d = np.array([[85,52,95,30,50,-1],[80,55,90,40,45,-1],[65,60,60,45,60,-1],[85,52,95,30,50,-1],[48,65,90,63,60,-1],
[60,55,45,55,40,-1]
,[78,55,32,60,75,-1],[90,40,25,60,75,-1]])
arm_names=['Arctic.50 Bats','XPR-50 RedTriangle','M16 NeonTiger', 'Arctic.50 RedTriangle','BK57 JackFrost',
'M4MLG RedTriangle', 'AKS-74U NeonTiger','PDW-57 ZombieGene']
df2= pd.DataFrame(d,index=arm_names, columns=['Daño','Precisión','Alcance','Cadencia','Movilidad','Dummy'])
df3= df.append(df2, sort=False)
df3
del df3['Dummy']
df3.iloc[0]
df3 - df3.iloc[0]
df3['Movilidad']*1.5
df3[['Daño','Precisión']].head(5)
df3[df3['Precisión']>50]
df3['Precisión']>50
df3[df3['Precisión']>50][['Alcance', 'Cadencia']]
df3
d={ 'M16 Evil Clown':'Fusil de Asalto',
'S36 Evil Clown':'Ametralladora',
'BY15 SnowFlakes':'Escopeta',
'MSMC Ancient Runes':'Ametralladora Ligera',
'XPR-50 April\'s Fool':'Fusil de Precisión',
'DLQ33 DeepShark':'Fusil de Precisión',
'M4LMG RibbonExplosion':'Ametralladora',
'Arctic.50 Bats':'Fusil de Precisión',
'XPR-50 RedTriangle':'Fusil de Precisión',
'M16 NeonTiger':'Fusil de Asalto',
'Arctic.50 RedTriangle':'Fusil de Precisión',
'BK57 JackFrost':'Fusil de Asalto',
'M4MLG RedTriangle':'Ametralladora Ligera',
'AKS-74U NeonTiger':'Subfusil',
'PDW-57 ZombieGene':'Subfusil'}
df3['Tipo']=pd.Series(d)
df3
df3[df3['Tipo']=='Fusil de Precisión']
df3[df3['Tipo']=='Fusil de Precisión'].sort_values(['Daño', 'Precisión'])
df3.groupby('Tipo')['Precisión'].mean()
df3.groupby('Tipo').agg([np.mean, np.std])
df3.groupby('Tipo').agg({'Daño':[np.mean, np.std], 'Alcance':[np.mean]})
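# Illustrative alternative (not from the original notebook): since pandas 0.25
# the same aggregation can be written with "named aggregation", which gives the
# result flat, readable column names. The new column names here are arbitrary.
df3.groupby('Tipo').agg(mean_damage=('Daño', 'mean'), mean_range=('Alcance', 'mean'))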
df3.agg([np.mean, np.std])
df3['Precisión'].agg('mean')
df3.loc['PDW-57 ZombieGene']
df3.loc[['XPR-50 RedTriangle', 'PDW-57 ZombieGene']]
df3.loc['XPR-50 RedTriangle', 'Alcance']
df3.loc['XPR-50 RedTriangle', 'Alcance'] *=2
df3.loc[['XPR-50 RedTriangle', 'PDW-57 ZombieGene']]
df3.loc[['XPR-50 RedTriangle', 'PDW-57 ZombieGene'], 'Daño':'Alcance']
df4=pd.read_csv('https://raw.githubusercontent.com/gmonce/datascience/master/data/call_of_duty.csv')
df4
df4.set_index('Arma', inplace=True)
df4
df4['Alc&Prec']=(df4['Alcance']+ df4['Precision']+ df4['Daño'])/3
df4.sort_values('Alc&Prec', ascending=False)
# Now we average the rest, for close-range shooters
# Drop the ones with low damage or low fire rate
df4['Cerca']=(df4['Cadencia']+ df4['Alcance']+ df4['Daño']+ df4['Movilidad'])/4
df4[(df4['Daño']>50) & (df4['Cadencia']>50)].sort_values('Cerca', ascending=False)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise 1
Step2: Exercise 2
Step3: Exercise 3
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
emails = ['alawrence0@prlog.org',
'blynch1@businessweek.com',
'mdixon2@cmu.edu',
'rvasquez3@1688.com',
'astone4@creativecommons.org',
'mcarter5@chicagotribune.com',
'dcole6@vinaora.com',
'kpeterson7@topsy.com',
'ewebb8@cnet.com',
'jtaylor9@google.ru',
'ecarra@buzzfeed.com',
'jjonesb@arizona.edu',
'jbowmanc@disqus.com',
'eduardo_sanchezd@npr.org',
'emooree@prweb.com',
'eberryf@brandeis.edu',
'sgardnerh@wikipedia.org',
'balvarezi@delicious.com',
'blewisj@privacy.gov.au']
#Your code here...
emails_records = pd.Series(emails)
emails_records[emails_records.str.contains('.edu')].str.split('@').str[0]
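# Illustrative note (not part of the original solution): str.contains treats
# '.edu' as a regular expression in which '.' matches any character; using
# endswith is a stricter way to keep only .edu addresses.
emails_records[emails_records.str.endswith('.edu')].str.split('@').str[0]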
weights = [31.09, 46.48, 24.0, 39.99, 19.33, 39.61, 40.91, 52.24, 30.77, 17.23, 34.87 ]
pd.Series(weights).apply(lambda x: x * 0.45359237)
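# Illustrative note (not part of the original solution): arithmetic on a Series
# is vectorized, so the same pounds-to-kilograms conversion can be written as a
# plain multiplication without apply.
pd.Series(weights) * 0.45359237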
import ipaddress

df_hosts = pd.Series([
'192.168.1.2', '10.10.10.2', '172.143.23.34',
'34.34.35.34', '172.15.0.1', '172.17.0.1'])
df_hosts[df_hosts.apply(lambda x: ipaddress.ip_address(x).is_private)]
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Locally and Remote
Step2: Plot a Histogram of x
Step3: Customizable
Step4: Other Languages
Step5: Keep it all together
Step6: NBconvert examples
|
<ASSISTANT_TASK:>
Python Code:
2+4
print("hello")
print("Hello world!")
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
x = np.random.randn(10000)
print(x)
plt.hist(x, bins=50)
plt.show()
%lsmagic
%timeit y = np.random.randn(100000)
%ll
%%bash
ls -l
files = !ls # But glob is a better way
print(files[:5])
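# Illustrative sketch (not from the original notebook): the glob module hinted
# at in the comment above is the pure-Python way to list files.
import glob
print(glob.glob('*')[:5])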
%%writefile example.cpp
#include <iostream>
int main(){
std::cout << "hello from c++" << std::endl;
}
%ls
%%bash
g++ example.cpp -o example
./example
!ipython nbconvert --to 'PDF' 01_introduction-IPython-notebook.ipynb
!open 01_introduction-IPython-notebook.pdf
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Sample Code
Step2: Compile Model
Step3: Fit Model
|
<ASSISTANT_TASK:>
Python Code:
from IPython.display import YouTubeVideo
YouTubeVideo('mPFq5KMxKVw', width=800, height=450)
from tensorflow.python.keras.applications import ResNet50
from tensorflow.python.keras.models import Sequential
from tensorflow.python.keras.layers import Dense, Flatten, GlobalAveragePooling2D
num_classes = 2
resnet_weights_path = '../input/resnet50/resnet50_weights_tf_dim_ordering_tf_kernels_notop.h5'
my_new_model = Sequential()
my_new_model.add(ResNet50(include_top=False, pooling='avg', weights=resnet_weights_path))
my_new_model.add(Dense(num_classes, activation='softmax'))
# Say not to train first layer (ResNet) model. It is already trained
my_new_model.layers[0].trainable = False
my_new_model.compile(optimizer='sgd', loss='categorical_crossentropy', metrics=['accuracy'])
from tensorflow.python.keras.applications.resnet50 import preprocess_input
from tensorflow.python.keras.preprocessing.image import ImageDataGenerator
image_size = 224
data_generator = ImageDataGenerator(preprocessing_function=preprocess_input)
train_generator = data_generator.flow_from_directory(
'../input/urban-and-rural-photos/train',
target_size=(image_size, image_size),
batch_size=24,
class_mode='categorical')
validation_generator = data_generator.flow_from_directory(
'../input/urban-and-rural-photos/val',
target_size=(image_size, image_size),
class_mode='categorical')
my_new_model.fit_generator(
train_generator,
steps_per_epoch=3,
validation_data=validation_generator,
validation_steps=1)
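# Illustrative sketch (not from the original notebook): a quick sanity check of
# the fitted model -- pull one batch from the validation generator and inspect
# the predicted class probabilities (one row per image, one column per class).
images, labels = next(validation_generator)
print(my_new_model.predict(images)[:5])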
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: We're going to be building a model that recognizes these digits as 5, 0, and 4.
Step3: Working with the images
Step4: The first 10 pixels are all 0 values. Not very interesting, but also unsurprising. We'd expect most of the pixel values to be the background color, 0.
Step5: The large number of 0 values corresponds to the background of the image, another large mass of values at 255 is black, and there is a mix of grayscale transition values in between.
Step6: Great -- we've retained the correct image data while properly rescaling to the range [-0.5, 0.5].
Step8: Indeed, the first label of the test set is 7.
Step9: A crucial difference here is how we reshape the array of pixel values. Instead of one image that's 28x28, we now have a set of 60,000 images, each one being 28x28. We also include a number of channels, which for grayscale images as we have here is 1.
Step11: Looks good. Now we know how to index our full set of training and test images.
Step12: As with our image data, we'll double-check that our 1-hot encoding of the first few values matches our expectations.
Step13: The 1-hot encoding looks reasonable.
Step14: Defining the model
Step16: Now that we've defined the variables to be trained, we're ready to wire them together into a TensorFlow graph.
Step17: Having defined the basic structure of the graph, we're ready to stamp out multiple copies for training, testing, and validation.
Step18: Training and visualizing results
Step19: Now we're ready to perform operations on the graph. Let's start with one round of training. We're going to organize our training steps into batches for efficiency; i.e., training using a small set of examples at each step rather than a single example.
Step20: Let's take a look at the predictions. How did we do? Recall that the output will be probabilities over the possible classes, so let's look at those probabilities.
Step21: As expected without training, the predictions are all noise. Let's write a scoring function that picks the class with the maximum probability and compares with the example's label. We'll start by converting the probability vectors returned by the softmax into predictions we can match against the labels.
Step22: Next, we can do the same thing for our labels -- using argmax to convert our 1-hot encoding into a digit class.
Step23: Now we can compare the predicted and label classes to compute the error rate and confusion matrix for this batch.
Step25: Now let's wrap this up into our scoring function.
Step26: We'll need to train for some time to actually see useful predicted values. Let's define a loop that will go through our data. We'll print the loss and error periodically.
Step27: The error seems to have gone down. Let's evaluate the results using the test set.
Step28: We can see here that we're mostly accurate, with some errors you might expect, e.g., '9' is often confused as '4'.
|
<ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
from IPython.display import Image
import base64
Image(data=base64.decodestring("iVBORw0KGgoAAAANSUhEUgAAAMYAAABFCAYAAAARv5krAAAYl0lEQVR4Ae3dV4wc1bYG4D3YYJucc8455yCSSIYrBAi4EjriAZHECyAk3rAID1gCIXGRgIvASIQr8UTmgDA5imByPpicTcYGY+yrbx+tOUWpu2e6u7qnZ7qXVFPVVbv2Xutfce+q7hlasmTJktSAXrnn8vR/3/xXmnnadg1aTfxL3/7rwfSPmT+kf/7vf098YRtK+FnaZaf/SS++OjNNathufF9caiT2v/xxqbTGki/SXyM1nODXv/r8+7Tb+r+lnxZNcEFHEG/e3LnpoINXSh/PWzxCy/F9eWjOnDlLrr/++jR16tQakgylqdOWTZOGFqX5C/5IjXNLjdt7/NTvv/+eTjnllLT//vunr776Kl100UVpueWWq8n10lOmpSmTU5o/f0Fa3DDH1ry9p0/++eefaZ999slYYPS0005LK664Yk2eJ02ekqZNnZx+XzA/LfprYgGxePHitOqqq6YZM2akyfPmzUvXXXddHceoic2EOckxDj300CzPggUL0g033NC3OKy00krDer3pppv6FgcBIjvGUkv9u5paZZVVhoHpl4Mvv/wyhfxDQ0NZ7H7EQbacPHny39Tejzj88ccfacqUKRmHEecYf0Nr8GGAQJ8gMHCMPlH0QMzmEBg4RnN4DVr3CQIDx+gTRQ/EbA6BgWM0h9egdZ8g8PeliD4RutfF/Ouvfz9OtZy8aNGiNH/+/GGWl1122XzseYuVNKtqsaI23Ghw0DYCA8doG8JqO+AUG2+8cVq4cGHaY4890vLLL5/WXXfdfI6jvPDCC3lJ8amnnkoezP3000/pl19+GThHtWpIPekYomTxFS7HnkqKjMsss0yGgFE4r62tSBFVJ02aNPyconi9V4/JwzHwT9ZNNtkkeZ6w5ZZbph133DH99ttv6ccff8zXX3nllcRRnHNfv2cNGMQWGRaOrWbUrjsGBRLAA6U4Lhoqw9h2223ztRBq6aWXzsbgvueffz4Lu9NOO2UnYTgrr7xy7tO9nOH111/Pbb744ov0ww8/jAvngAdFMvQDDjggG/0GG2yQX1GZNm1aziCCwzrrrJPl3muvvXKwePnll9M333wzHDCKWPbLMbuAkfISjnvvvXcW/emnn85lqCBqa4a65hiYR/Gk2RNGRlwm3n7ggQfmdrKD9sqJtdZaKxvCnDlz8n3Tp09PXmPYeuutc0SVNQjvnmuvvTa3efzxx9N33303PGZ5rF75DBvvqq233nrp22+/TWeddVbyikpgxCE4vQDhlQUBRfDw2esbs2fPTquvvnqviNN1PuIdJ4GErVx44YUZowsuuCB9+umn6eeff84BspmsWqljhPFDxjGGYx/lDkN33udajCoVlAjRzl4U8LjefRwnPjsXG8OJqKBd8NB1LTU5IHyCd7LJGOYXNoGjFqaGIKtrERDIDKtukfGMH/zRZa1A101+YBF44KfMYzO8VOYYjDWiukiGqc022yyXOUqdzTffPJ/z1ialeqNVxA9gi0wzlOJ5juJlR8JeddVV+ZrIKTq4ZvJp/8EHH+SU+txzz+W2SqmxVFZRplrH5DTRXmGFFdKuu+6azjjjjOzosl5g6D54CQCI4mGjhNQO5occckh2LvLTA6fqJOEnyhU6kNlkZmUuvrtNcFx77bUzhsZWXgoSsm6t4Dsa/tp2DErCmA04HAI4FLjaaqtlBhmnSKiNY4rDtHZFB6jFMMH0RVDH+nCPYxtDCFJnKkniRbDitWjTK3sykQUuMLPn3DZGX8SFnCG/fVyz5zCCBtIHTLshdzif8fERn8cKXxjCNOwCTu3Qf6yqhV4AQokiP489//zzM0DxnQYKwqAtIkko1kQzFFxvaNcJ6u3Pe+65J/cRRvDee+9lA2BInIyRff/997nNO++8k7t0vl2A6vHWynmyiPJ43WKLLbIijz/++LTddtvlTCdzwIWSg9yjxBJ0GN/DDz+c7zv77LOzbEceeWSekwVGgsOsWbNyNo0+qt7DfPvtt8/dmtvIGnPnzk3PPPPMsJ6rHrNef/BBeJA90RprrJEDcNhctMkXR/mnbccwuCjNGTbaaKMc8TBZprITxOdgOvbuKxqGz6LSJ598kseJ9Gi1CYmSv/76a3YyJZWMZJ6Ceskp8EMusihFEAyUmVaa8G2rxTNHIrd733///eH7YeaLNe5xrEzlWNF/HqQDf0Tm+GIbvYdD43MsKAIo/JDgE0G5aFfN8NaWYxiUshikqGYTTUSt0TCkjXsYNqJQQso+rgGa0vX58ccf56hQTtk+48F92rmvlnE1A0on2uKP0Yrw+Nxzzz0zn+ZhjKwRXq6vueaa2TmUiRQfS7SyNeMks9IV9vrvJOl/q622yo4Mfw5Pvm6TMclLdit6shh+YAMnq1E29tEsteUYBgMSgxa5MOAzJZcVXQs4bUR8XxhCHIwzMALCBuCcx5q0tF3u133l8XrRMchFiRYNyMxBKM/5IjZlWVzjULKwACISytIWFsi56aab5mvOKyEikmdAO/iHY+BDCRUZuoPD1e1akECyLseA7d13352DhdKak8Cmlt3U7TSl9p58FwejYK8ncAwKpDTnGDcARbWiAUjHiNEHsITSPlagpEZChcfrZzwSOfBOiQwXLuR3PjAhtwAD08iAMCO/a+5xPTIm3ALjwERf0V+c69QeT7ZujVdLDhgKBrANXAMreMESRkU7rdVPrXNtZ4xIpSLH1VdfnR3j4IMPzkbw2Wefpa+//jovo5188slZsZjArAcvFP3YY4+lSy+9NEdTdTTy0I5xHHfccfm1CH2LtuORKEqmkwVlVU+sBY+IdJRmE0zeeOONnEXuu+++7AhnnnlmWn/99XMJ5brtzTffzHMJx/o555xzkgdb0U8rRtAKrnTYqtG1Ml6teyxInHDCCdlGYByBmG2Z97ChVvFo2zEwbHCRTbqP7EDxPjN2pUBEe86AXAcsg+f10TYMSTvnRM1ulQe1wG/nHEXZZEJZUIYQ5cgWMsEgMgqclFdkdh+MbFFyuddnWMLNfTYkcuuXHlBkpFYNI3dS+mMMfCHHsZWadfUjmQVn8iLywscG21apMscQwR555JEM3KuvvpoZ5LHOmzgjAvBwzFt2/Oijj3Lm4Ayin/MU/eGHH+b2N998c/5MGSaZ44nw7OEd5Rx77LE5+1EehYXxkpes5li2K6+8Mhv8Lrvsko381ltvzcEBfvHQKh5auk9GPvHEE3NJAx+/eKL/HXbYIQcbK3nwN067xAk4s5VHdbvsx0nxrYQeKxJMZAfBA7GlRx99NC9EtCN7JY4RoPBeAHIAyrB3jpHYwqu1d02d7HpZcfqINo5dL7eJMXtxTzk2sgWFM/gcsnCakI2cFOk+523O+Qw7WaeYHYpYRp9xn4BkbPdWSfgJXYYM+ne+2xRj2sdx8EDu8rm4Ntp9pY4RSmb0CIPOAVNGoLA47yU4S2xen37ppZdy9CkLE/3lm8bJHzJbbiavt2Q9p7AkK7o
yXAZOLk7gs9c4PJC0AOE8DDyrgJkaWgYQkSPYuAdpWySfteU8HhqKouYq+io6ZfGeZo7xpbT1+jt+jGULfprpq922ePHMBibwjWVq523KVrzBsIzTaMeu1DFi0HI0YyyYtAekY5MltbRyihFJiROBKIYTwMCTWJNubwdQFCXFapK9z96mtbjgs3thFKWnUgjBzNZIya5FOyUcPG36q4LwRgZ6Ix8HtBk3tirGGU0feAkslHfk5PzBh2cXSkvtWqWOOEaRGcoSHdXDMoYn1tK8yaON0ahbCWgFS/vxSnjn5F4ItLeiFAGAzCKc7MDA1OlIjc4pLFKE7FEyxb5ZPNTbtuiv2fvrtddfOFsYXcwj8d8qv/XGq3femLvvvnvOvrIYPPEjG+PDseDbDnXcMXiyiGiyyACOPvrovN95552zV3/++ef5zVveznlEo6CICvG5l/d4JSvHP+qoo7JjKDs4PkVSGPm9HSz9W5rlPEoCQYHjVFXyRGnBOcKA28VOP/qTBWX6YnS2IKB8qYL/enyGHPbKziOOOCLj6sGeslGW8L6Y4ANr2MY99fpsdL7jjmFwkSTSr6gDVCk+tmDQedcJ5LgdwaLPbu7xjJRRNlErSsiQhVHJlOEQoh182o1wRTnharwYs3itnWP9Rd/RD5mLW5yveh/YRhYMjItyBh/wjPat8tEVx6B00RKo5513XpIl7rzzzuwEourMmTOz95uIcyBfTSXYiy++mCOrSFS1klsFrNZ9eGPoJtmeyRx00EE5cpGbIi21XnbZZbkMee2117KMHIKMIVcotVb/vXoOz6I0+URoMlVFcBFE7L1+IjNYIo6v/fo+D3tC+FCR+FHuwNUCgfOtUlccI5hnJMoIBhN1sBICqMoNNaLP3pkiFGciIIBC4HaEbRWk0dyHb3Mp/EY0I6+NsytvyKxsKhpQr8ozGpm1IZ8IbV+PyllGuyh1YBXXOQEcy6R8M5eAHzuxxX3GRvbaCKJ4aRfXrjkG5jEbk00Prxi8SZTJKmc5/PDDc5v99tsvC+hBjWtqStmD0F4Ma1foMvDtfqZMUc3/lYjMSFFW3NS7JtyyoKzSiTocHoFJHMc+MlK7Mta7n9NbATJerbEYvQWIWCVitIyaXrV3nsG7H2Y2GVcbxyj6NX+waKEPmOvbfShwtjhQDDz5Ygt/uuoY+OPtnICDEMBTWsAQUu0NBBsDEgFEWOADAiDaVRERWsCq5i34IRN+TbTJgn8KwzOFuR4KDUXW7Kyik53Ep8w/+RkxWeO5S1EM5wVABguXMGp69dk1x87D0ObdL32GHI5tsDQGHtwbm/Hw4TpnKvNY5Ge0x113DEwT3tIsIdSnDIfxcxJAevCHfE9cXcmotHXfAw88kIFUdgFjLMn4HuZRuh9FExmjRCCnZxRqcPxz8ioUVk9eRhJkPAYHV8ZVFRkjjFSfAtw222yTy2OZ0iv15fHcQ4dKaMcwsBdEEL26RzaIh5+yK7LSBGPno8yOZX+vzRhfXzZ8cRrtyzzkzpr803XHwB8wTJYIRol+VY8zqMMBbP0f+cExE1qTdbU7x3jwwQdzVBYdesExKNiEWx2MfwoOAyCbJ9uRHZvUTcPmsENhGNE4HBKOHKNqZzQu3KNfX9H1nRABQZlbNkpt4SNo4DWIIesDj9qYnwki2giWqol3330348kZLPm7xvi1Pffcc7MzhA3gy/0oeIuxWtmPiWNgNCIFYwcCAa2FA1ikJZz1aeUVsBmge9TyoqGoIqKUFdEKCFXcU0/pHJizVMUnXBiBh6IicdTTzsEOnuZkDE/2rcJI4KMf/TF+0TucwDhkZ+DGL4/nGkPGV/AIC+2RvfP6ZPTI4gu5XNM/Um7RPzuIFyn1zW7wpQ9UHj+fbOHPmDlGCOGBGIeQQfwuq0jnISBQfOHft7JEHN94Q5xF6XLFFVfkyKIEGyuiGAo3r6BIx0imcM6k+6GHHspOEQbcDq+UTl4BwRu7PstUiPEJFsa9/PLL83nXg6d2xnUvoxS5L7744uGyh/wyRpRF9YwSHsHjE088kWWADQeRFThZkTgBstensZG5h4m56oEdcAp9CwTOVUlj6hgECcGBpA6XDazeiLKhVABQAhKB3cNxbEAL4KoEppm+gjf3OMafDf+UW7zeTL/ltqIiAxBMOIIxnLOHgbFsMGQ4InhE0nJfrXw2hnIRD3SFBKmYWDfqE49woFvOzZno3NxM0HDciMjBDsjEBgLTsJHYN+qjmWtj7hjBLKFFQgL7qRz14jHHHJPBcC2M3wRPVDT5ohzZRv0Z16O/sdozAKmdopUH5kftTrzJpl+lk29CcgpLw3BgpMbwwqF/S80pGJ6xO0WM+8Ybbxw2TuOEoTYakwyovB/JKdzDMVQOHvCRzXju890fL11aGhcMqqIxdwwCRkYQDZAaE7lWBhyosQEmQM439MgffDHm0Si8EcuBC0ezcQSZVKYktzFEW+3sfQ4natRvu9eMTS9F7IvHo+m/2fb6LNuCc0WsW+mzHq9j6hgE9YCHp5tkez2EAVjlMOmyUlU2Lis8ygVR0rykyoltPZCaOY9fr32Qp50X6xi7pWCGbsHBvwLgGIcddljGxvcsjOU1GseyiKjJQWydpiqNsBlei85BfhNxeJunVCl31x0jBOMAjJ9jRC3OEERDS7QMI0qQohIYgLSq7FJuMZbi9WZA7kRbvFAWx5Dyy449mjEDG/dyDPW4VSiy2iNvBcCSUdxyyy35OYHrqJUx843j8I/qQpA074BVVdR1x+AIHCIiIGewsqIuds41tSSlOxeOFHuOQ/E+2zPEuFYVKM32U3RMvGy44YbZMTg2B2+GOIXXJcjpR9lkUy/QyZ7GUU8zAD9RCiuR0oQYVv1IMAk7qFL+rjkGg7GZQPLufffdN69QKJtkCAKKjNGu1p7gMgWDYEDRpkpAmu0rnMLehie/RavcI49Sr1ZW0w6V91ac/IsxmdHPB0U5pQ+4+TExDudNUhPufnaKIn7N6m2k9h11jKLRqP+UQJb2eHh4uYjK0LW1D0MpCq0NR4g24RTR/0hCdvM6/m14FtljeTL4D/liedFeO7LYcyh7eMGDY8X16IM8Vp9kWjj2GwWG5IZb2FKVOHTMMTCvDKBgD2Z22223bNynnnpqVrZXBFxjQDZUFJiwIqKHN8qHO+64IxvN/fffn9vG/VWC0UpfeC5uZMEbg/ctM/8SzYOxZ599Nhs4ebSx0ECpcDFvMCdRggkesoQ+zaHU0N4EgAEnue2227JTON+LgaEVDFu5h+w2Wdl33GFkEUIQqYIqdYwwbJGO8q2xOydqUiTFWpJVPzsuUwhlzzFETxlGdFSCqaMB4XwvUzgKWU3AyW4uwFns4QMbilUyxbq8p/4cw3UEB8FDGQUDx/acqB8zRS2dw5qthe3VatPKucocg6JiYu3lP2nfawvekKVITzgJQLH24QTBtPZeE2D89957b27jwZ1IwIm8R2OMWHmJ+3pxTzaK8l+HyMrgTzrppMxqOIEsGoZvz0nsyWiliRMUl2G9aOk6POyLZVUvYtBpniL4wA1m9l
VSW46BOQqKpTLK9FnUsxftvW4swssa4dkhCGFCMNfcp08lhM9KKc4h0obgsa8ShHb6Cv5DJnu8IwHB9TB852DkOlzIRV6kXbSVMfQj48BWdhE0TLr1Fe3zQR/+gRMK5yjuq4KjZccQ2SlYjexHmCnSkiLjtsesmlnpQ5naFo1A5GMAHoJxBI709ttv54ygntZWmWEcQMS9VQleRT9kNmfAG0P3HRPGbHnVudg4gEyJOAYiE0wikHAAcxHyxndO4KI/WHEK/Qzo7wjAXfaFNdurikaNtIERRTqmYIYdE2tGEs8hfJ8iFB/3xV67MCjG8NZbb6Unn3wyC+XfDxfnDxFp496qhK6qn5CDA5twK/fIRH5Gb0MMOhxCFgkKjOBoHqKEkmWvueaanG04iTHcP3CKQO0/e3ZhgceP2smqcKyKRuUYlEKhPDL+d5z1c4qVFTDnmBIZMwZ9DiKAzTmvCetPNFR7W7fXXt/KLddqTcyjr17bRybkEF5XiQhPHnMuDlF07MCB3I49l4EDxTrnfsFBJBxQbQSKeGoROqjdurWzIzoGJqRxS2KUf/rpp2flcRDRjRKVCdpFhCwz7rOVKE5z++235/7uuuuuXDq5P5yKEY0np8B3TKb9K1/vLTF0/7MiJtyRPYrq4fx+7R2e7vFDDzDyfx1goPwcUGMEYG/rFI3oGAYW0UUyimQIcRwGzbgpVsZAUTYE065xCtc5GUeSHTyg4kzKs/FKoSBljyhvTz6y2gseZAwlwgI+cNBGtpV9ZRj4BobjFY9O8g0bQcXWaRpxBE5hHuFnJ0XB6dOn56ge2QGDlK2dFSSG4b8kxVzEdSWGVxgYQLzrxJkIGgbTaUE73b9MZ/KNfIMOJpdcckndYZWmFAwv+wgydW/o8wsCK3xnz56dFzx8oxPGtk7QiI5h0FBaeGzRKYIpjDN2ig6lB9OiprmI60qNieIMIXvsQy7yotjH9eI+2hbPDY4bI8D+2JdnWTYY+iwDs78qaUTHEM0sI1pClAVMnqX9ImGQszB6DHoNOLzZNZlGRlEq9JNB9JOsRXvoxDGnsDTudwFUHTNmzMjDqEaU9xYvGgWiZnka0TEo16CeNyCM1SLtwmt5cNEoCOUa5xjQAIFWEGBP5rbKdTRr1qwcfGUMthXVTCt917pnRMdwE6ZiQm0JckADBMYCgWLwtXjTSeq/d5Y7ieag7wmDwMAxJowqB4JUicDAMapEc9DXhEFgcjxcM7vvR4on7bHS1q84WNkpUr/iEL+aOLRw4cIlQCmuIhUBmsjHlpQ9c7EmzjEsN1vd6DeCg8UVT+qRd7b6EQey8wMT+6El8RSu36xhIO8AgQYI9F94bADG4NIAgUDg/wHX+3lgThDIegAAAABJRU5ErkJggg==".encode('utf-8')), embed=True)
import os
from six.moves.urllib.request import urlretrieve
SOURCE_URL = 'http://yann.lecun.com/exdb/mnist/'
WORK_DIRECTORY = "/tmp/mnist-data"
def maybe_download(filename):
  """A helper to download the data files if not present."""
if not os.path.exists(WORK_DIRECTORY):
os.mkdir(WORK_DIRECTORY)
filepath = os.path.join(WORK_DIRECTORY, filename)
if not os.path.exists(filepath):
filepath, _ = urlretrieve(SOURCE_URL + filename, filepath)
statinfo = os.stat(filepath)
print('Successfully downloaded', filename, statinfo.st_size, 'bytes.')
else:
print('Already downloaded', filename)
return filepath
train_data_filename = maybe_download('train-images-idx3-ubyte.gz')
train_labels_filename = maybe_download('train-labels-idx1-ubyte.gz')
test_data_filename = maybe_download('t10k-images-idx3-ubyte.gz')
test_labels_filename = maybe_download('t10k-labels-idx1-ubyte.gz')
import gzip, binascii, struct, numpy
import matplotlib.pyplot as plt
with gzip.open(test_data_filename) as f:
# Print the header fields.
for field in ['magic number', 'image count', 'rows', 'columns']:
# struct.unpack reads the binary data provided by f.read.
# The format string '>i' decodes a big-endian integer, which
# is the encoding of the data.
print(field, struct.unpack('>i', f.read(4))[0])
# Read the first 28x28 set of pixel values.
# Each pixel is one byte, [0, 255], a uint8.
buf = f.read(28 * 28)
image = numpy.frombuffer(buf, dtype=numpy.uint8)
# Print the first few values of image.
print('First 10 pixels:', image[:10])
%matplotlib inline
# We'll show the image and its pixel value histogram side-by-side.
_, (ax1, ax2) = plt.subplots(1, 2)
# To interpret the values as a 28x28 image, we need to reshape
# the numpy array, which is one dimensional.
ax1.imshow(image.reshape(28, 28), cmap=plt.cm.Greys);
ax2.hist(image, bins=20, range=[0,255]);
# Let's convert the uint8 image to 32 bit floats and rescale
# the values to be centered around 0, between [-0.5, 0.5].
#
# We again plot the image and histogram to check that we
# haven't mangled the data.
scaled = image.astype(numpy.float32)
scaled = (scaled - (255 / 2.0)) / 255
_, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(scaled.reshape(28, 28), cmap=plt.cm.Greys);
ax2.hist(scaled, bins=20, range=[-0.5, 0.5]);
with gzip.open(test_labels_filename) as f:
# Print the header fields.
for field in ['magic number', 'label count']:
print(field, struct.unpack('>i', f.read(4))[0])
print('First label:', struct.unpack('B', f.read(1))[0])
IMAGE_SIZE = 28
PIXEL_DEPTH = 255
def extract_data(filename, num_images):
  """Extract the images into a 4D tensor [image index, y, x, channels].

  For MNIST data, the number of channels is always 1.
  Values are rescaled from [0, 255] down to [-0.5, 0.5].
  """
print('Extracting', filename)
with gzip.open(filename) as bytestream:
# Skip the magic number and dimensions; we know these values.
bytestream.read(16)
buf = bytestream.read(IMAGE_SIZE * IMAGE_SIZE * num_images)
data = numpy.frombuffer(buf, dtype=numpy.uint8).astype(numpy.float32)
data = (data - (PIXEL_DEPTH / 2.0)) / PIXEL_DEPTH
data = data.reshape(num_images, IMAGE_SIZE, IMAGE_SIZE, 1)
return data
train_data = extract_data(train_data_filename, 60000)
test_data = extract_data(test_data_filename, 10000)
print('Training data shape', train_data.shape)
_, (ax1, ax2) = plt.subplots(1, 2)
ax1.imshow(train_data[0].reshape(28, 28), cmap=plt.cm.Greys);
ax2.imshow(train_data[1].reshape(28, 28), cmap=plt.cm.Greys);
NUM_LABELS = 10
def extract_labels(filename, num_images):
  """Extract the labels into a 1-hot matrix [image index, label index]."""
print('Extracting', filename)
with gzip.open(filename) as bytestream:
# Skip the magic number and count; we know these values.
bytestream.read(8)
buf = bytestream.read(1 * num_images)
labels = numpy.frombuffer(buf, dtype=numpy.uint8)
# Convert to dense 1-hot representation.
return (numpy.arange(NUM_LABELS) == labels[:, None]).astype(numpy.float32)
train_labels = extract_labels(train_labels_filename, 60000)
test_labels = extract_labels(test_labels_filename, 10000)
print('Training labels shape', train_labels.shape)
print('First label vector', train_labels[0])
print('Second label vector', train_labels[1])
VALIDATION_SIZE = 5000
validation_data = train_data[:VALIDATION_SIZE, :, :, :]
validation_labels = train_labels[:VALIDATION_SIZE]
train_data = train_data[VALIDATION_SIZE:, :, :, :]
train_labels = train_labels[VALIDATION_SIZE:]
train_size = train_labels.shape[0]
print('Validation shape', validation_data.shape)
print('Train size', train_size)
import tensorflow as tf
# We'll bundle groups of examples during training for efficiency.
# This defines the size of the batch.
BATCH_SIZE = 60
# We have only one channel in our grayscale images.
NUM_CHANNELS = 1
# The random seed that defines initialization.
SEED = 42
# This is where training samples and labels are fed to the graph.
# These placeholder nodes will be fed a batch of training data at each
# training step, which we'll write once we define the graph structure.
train_data_node = tf.placeholder(
tf.float32,
shape=(BATCH_SIZE, IMAGE_SIZE, IMAGE_SIZE, NUM_CHANNELS))
train_labels_node = tf.placeholder(tf.float32,
shape=(BATCH_SIZE, NUM_LABELS))
# For the validation and test data, we'll just hold the entire dataset in
# one constant node.
validation_data_node = tf.constant(validation_data)
test_data_node = tf.constant(test_data)
# The variables below hold all the trainable weights. For each, the
# parameter defines how the variables will be initialized.
conv1_weights = tf.Variable(
tf.truncated_normal([5, 5, NUM_CHANNELS, 32], # 5x5 filter, depth 32.
stddev=0.1,
seed=SEED))
conv1_biases = tf.Variable(tf.zeros([32]))
conv2_weights = tf.Variable(
tf.truncated_normal([5, 5, 32, 64],
stddev=0.1,
seed=SEED))
conv2_biases = tf.Variable(tf.constant(0.1, shape=[64]))
fc1_weights = tf.Variable( # fully connected, depth 512.
tf.truncated_normal([IMAGE_SIZE // 4 * IMAGE_SIZE // 4 * 64, 512],
stddev=0.1,
seed=SEED))
fc1_biases = tf.Variable(tf.constant(0.1, shape=[512]))
fc2_weights = tf.Variable(
tf.truncated_normal([512, NUM_LABELS],
stddev=0.1,
seed=SEED))
fc2_biases = tf.Variable(tf.constant(0.1, shape=[NUM_LABELS]))
print('Done')
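# Illustrative note (not from the original notebook): why fc1 expects
# IMAGE_SIZE // 4 * IMAGE_SIZE // 4 * 64 inputs -- each 2x2 max-pooling layer
# halves the spatial size (28 -> 14 -> 7) and the second convolution has depth
# 64, giving 7 * 7 * 64 = 3136 flattened features per image.
print(IMAGE_SIZE // 4 * IMAGE_SIZE // 4 * 64)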
def model(data, train=False):
  """The Model definition."""
# 2D convolution, with 'SAME' padding (i.e. the output feature map has
# the same size as the input). Note that {strides} is a 4D array whose
# shape matches the data layout: [image index, y, x, depth].
conv = tf.nn.conv2d(data,
conv1_weights,
strides=[1, 1, 1, 1],
padding='SAME')
# Bias and rectified linear non-linearity.
relu = tf.nn.relu(tf.nn.bias_add(conv, conv1_biases))
# Max pooling. The kernel size spec ksize also follows the layout of
# the data. Here we have a pooling window of 2, and a stride of 2.
pool = tf.nn.max_pool(relu,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
conv = tf.nn.conv2d(pool,
conv2_weights,
strides=[1, 1, 1, 1],
padding='SAME')
relu = tf.nn.relu(tf.nn.bias_add(conv, conv2_biases))
pool = tf.nn.max_pool(relu,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
# Reshape the feature map cuboid into a 2D matrix to feed it to the
# fully connected layers.
pool_shape = pool.get_shape().as_list()
reshape = tf.reshape(
pool,
[pool_shape[0], pool_shape[1] * pool_shape[2] * pool_shape[3]])
# Fully connected layer. Note that the '+' operation automatically
# broadcasts the biases.
hidden = tf.nn.relu(tf.matmul(reshape, fc1_weights) + fc1_biases)
# Add a 50% dropout during training only. Dropout also scales
# activations such that no rescaling is needed at evaluation time.
if train:
hidden = tf.nn.dropout(hidden, 0.5, seed=SEED)
return tf.matmul(hidden, fc2_weights) + fc2_biases
print('Done')
# Training computation: logits + cross-entropy loss.
logits = model(train_data_node, True)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
labels=train_labels_node, logits=logits))
# L2 regularization for the fully connected parameters.
regularizers = (tf.nn.l2_loss(fc1_weights) + tf.nn.l2_loss(fc1_biases) +
tf.nn.l2_loss(fc2_weights) + tf.nn.l2_loss(fc2_biases))
# Add the regularization term to the loss.
loss += 5e-4 * regularizers
# Optimizer: set up a variable that's incremented once per batch and
# controls the learning rate decay.
batch = tf.Variable(0)
# Decay once per epoch, using an exponential schedule starting at 0.01.
learning_rate = tf.train.exponential_decay(
0.01, # Base learning rate.
batch * BATCH_SIZE, # Current index into the dataset.
train_size, # Decay step.
0.95, # Decay rate.
staircase=True)
# Use simple momentum for the optimization.
optimizer = tf.train.MomentumOptimizer(learning_rate,
0.9).minimize(loss,
global_step=batch)
# Predictions for the minibatch, validation set and test set.
train_prediction = tf.nn.softmax(logits)
# We'll compute them only once in a while by calling their {eval()} method.
validation_prediction = tf.nn.softmax(model(validation_data_node))
test_prediction = tf.nn.softmax(model(test_data_node))
print('Done')
# Create a new interactive session that we'll use in
# subsequent code cells.
s = tf.InteractiveSession()
# Use our newly created session as the default for
# subsequent operations.
s.as_default()
# Initialize all the variables we defined above.
tf.global_variables_initializer().run()
BATCH_SIZE = 60
# Grab the first BATCH_SIZE examples and labels.
batch_data = train_data[:BATCH_SIZE, :, :, :]
batch_labels = train_labels[:BATCH_SIZE]
# This dictionary maps the batch data (as a numpy array) to the
# node in the graph it should be fed to.
feed_dict = {train_data_node: batch_data,
train_labels_node: batch_labels}
# Run the graph and fetch some of the nodes.
_, l, lr, predictions = s.run(
[optimizer, loss, learning_rate, train_prediction],
feed_dict=feed_dict)
print('Done')
print(predictions[0])
# The highest probability in the first entry.
print('First prediction', numpy.argmax(predictions[0]))
# But, predictions is actually a list of BATCH_SIZE probability vectors.
print(predictions.shape)
# So, we'll take the highest probability for each vector.
print('All predictions', numpy.argmax(predictions, 1))
print('Batch labels', numpy.argmax(batch_labels, 1))
correct = numpy.sum(numpy.argmax(predictions, 1) == numpy.argmax(batch_labels, 1))
total = predictions.shape[0]
print(float(correct) / float(total))
confusions = numpy.zeros([10, 10], numpy.float32)
bundled = zip(numpy.argmax(predictions, 1), numpy.argmax(batch_labels, 1))
for predicted, actual in bundled:
confusions[predicted, actual] += 1
plt.grid(False)
plt.xticks(numpy.arange(NUM_LABELS))
plt.yticks(numpy.arange(NUM_LABELS))
plt.imshow(confusions, cmap=plt.cm.jet, interpolation='nearest');
def error_rate(predictions, labels):
  """Return the error rate and confusions."""
correct = numpy.sum(numpy.argmax(predictions, 1) == numpy.argmax(labels, 1))
total = predictions.shape[0]
error = 100.0 - (100 * float(correct) / float(total))
confusions = numpy.zeros([10, 10], numpy.float32)
bundled = zip(numpy.argmax(predictions, 1), numpy.argmax(labels, 1))
for predicted, actual in bundled:
confusions[predicted, actual] += 1
return error, confusions
print('Done')
# Train for one pass (epoch) over the training set.
steps = train_size // BATCH_SIZE
for step in range(steps):
# Compute the offset of the current minibatch in the data.
# Note that we could use better randomization across epochs.
offset = (step * BATCH_SIZE) % (train_size - BATCH_SIZE)
batch_data = train_data[offset:(offset + BATCH_SIZE), :, :, :]
batch_labels = train_labels[offset:(offset + BATCH_SIZE)]
# This dictionary maps the batch data (as a numpy array) to the
# node in the graph it should be fed to.
feed_dict = {train_data_node: batch_data,
train_labels_node: batch_labels}
# Run the graph and fetch some of the nodes.
_, l, lr, predictions = s.run(
[optimizer, loss, learning_rate, train_prediction],
feed_dict=feed_dict)
# Print out the loss periodically.
if step % 100 == 0:
error, _ = error_rate(predictions, batch_labels)
print('Step %d of %d' % (step, steps))
print('Mini-batch loss: %.5f Error: %.5f Learning rate: %.5f' % (l, error, lr))
print('Validation error: %.1f%%' % error_rate(
validation_prediction.eval(), validation_labels)[0])
test_error, confusions = error_rate(test_prediction.eval(), test_labels)
print('Test error: %.1f%%' % test_error)
plt.xlabel('Actual')
plt.ylabel('Predicted')
plt.grid(False)
plt.xticks(numpy.arange(NUM_LABELS))
plt.yticks(numpy.arange(NUM_LABELS))
plt.imshow(confusions, cmap=plt.cm.jet, interpolation='nearest');
for i, cas in enumerate(confusions):
for j, count in enumerate(cas):
if count > 0:
xoff = .07 * len(str(count))
plt.text(j-xoff, i+.2, int(count), fontsize=9, color='white')
plt.xticks(numpy.arange(NUM_LABELS))
plt.hist(numpy.argmax(test_labels, 1));
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Re-arrange classes to 2 separate directories
Step2: Training configs
Step3: Set up generators to provide training and validation batches
Step4: Define LeNet model architecture
Step5: Check for TPU availability
Step6: Convert keras model to TPU model
Step7: Run training
Step8: Save the model weights
Step9: Download model weights locally
|
<ASSISTANT_TASK:>
Python Code:
!pip install kaggle
api_token = {"username":"xxxxx","key":"xxxxxxxxxxxxxxxxxxxxxxxx"}
import json
import zipfile
import os
os.mkdir('/root/.kaggle')
with open('/root/.kaggle/kaggle.json', 'w') as file:
json.dump(api_token, file)
!chmod 600 /root/.kaggle/kaggle.json
# !kaggle config path -p /root
!kaggle competitions download -c dogs-vs-cats
zip_ref = zipfile.ZipFile('/content/train.zip', 'r')
zip_ref.extractall()
zip_ref.close()
!mkdir train/cat train/dog
!mv train/*cat*.jpg train/cat
!mv train/*dog*.jpg train/dog
BATCH_SIZE = 64
IMG_DIM = (256, 256, 3)
NUM_EPOCHS = 1
import tensorflow as tf
from tensorflow import keras
print(keras.__version__)
print(tf.__version__)
datagen = keras.preprocessing.image.ImageDataGenerator(
rescale=1./255,
validation_split=0.2)
traingen = datagen.flow_from_directory(
'train',
batch_size = BATCH_SIZE,
target_size = IMG_DIM[:-1],
class_mode = 'categorical',
subset='training')
valgen = datagen.flow_from_directory(
'train',
batch_size = BATCH_SIZE,
target_size = IMG_DIM[:-1],
class_mode = 'categorical',
subset='validation')
input = keras.layers.Input(IMG_DIM, name="input")
conv1 = keras.layers.Conv2D(20, kernel_size=(5, 5), padding='same')(input)
pool1 = keras.layers.MaxPooling2D(pool_size=(2,2), strides=(2,2))(conv1)
conv2 = keras.layers.Conv2D(50, kernel_size=(5,5), padding='same')(pool1)
pool2 = keras.layers.MaxPooling2D(pool_size=(2,2), strides=(2,2))(conv2)
flatten1 = keras.layers.Flatten()(pool2)
fc1 = keras.layers.Dense(500, activation='relu')(flatten1)
fc2 = keras.layers.Dense(2, activation='softmax')(fc1)
model = keras.models.Model(inputs=input, outputs=fc2)
model.compile(
loss='categorical_crossentropy',
optimizer=keras.optimizers.SGD(lr=0.01),
metrics=['accuracy'])
print(model.summary())
import os
try:
device_name = os.environ['COLAB_TPU_ADDR']
TPU_ADDRESS = 'grpc://' + device_name
print('Found TPU at: {}'.format(TPU_ADDRESS))
except KeyError:
print('TPU not found')
tpu_model = tf.contrib.tpu.keras_to_tpu_model(
model,
strategy=tf.contrib.tpu.TPUDistributionStrategy(
tf.contrib.cluster_resolver.TPUClusterResolver(TPU_ADDRESS)))
tpu_model.fit_generator(
traingen,
steps_per_epoch=traingen.n//traingen.batch_size,
epochs=1,
validation_data=valgen,
validation_steps=valgen.n//valgen.batch_size)
tpu_model.save_weights('./lenet-catdog.h5', overwrite=True)
from google.colab import files
files.download("lenet-catdog.h5")
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Read raw data
Step2: Time-frequency beamforming based on DICS
|
<ASSISTANT_TASK:>
Python Code:
# Author: Roman Goj <roman.goj@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne.event import make_fixed_length_events
from mne.datasets import sample
from mne.time_frequency import csd_fourier
from mne.beamformer import tf_dics
from mne.viz import plot_source_spectrogram
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
noise_fname = data_path + '/MEG/sample/ernoise_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
subjects_dir = data_path + '/subjects'
label_name = 'Aud-lh'
fname_label = data_path + '/MEG/sample/labels/%s.label' % label_name
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.info['bads'] = ['MEG 2443'] # 1 bad MEG channel
# Pick a selection of magnetometer channels. A subset of all channels was used
# to speed up the example. For a solution based on all MEG channels use
# meg=True, selection=None and add mag=4e-12 to the reject dictionary.
left_temporal_channels = mne.read_selection('Left-temporal')
picks = mne.pick_types(raw.info, meg='mag', eeg=False, eog=False,
stim=False, exclude='bads',
selection=left_temporal_channels)
raw.pick_channels([raw.ch_names[pick] for pick in picks])
reject = dict(mag=4e-12)
# Re-normalize our empty-room projectors, which should be fine after
# subselection
raw.info.normalize_proj()
# Setting time windows. Note that tmin and tmax are set so that time-frequency
# beamforming will be performed for a wider range of time points than will
# later be displayed on the final spectrogram. This ensures that all time bins
# displayed represent an average of an equal number of time windows.
tmin, tmax, tstep = -0.5, 0.75, 0.05 # s
tmin_plot, tmax_plot = -0.3, 0.5 # s
# Read epochs
event_id = 1
events = mne.read_events(event_fname)
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
baseline=None, preload=True, proj=True, reject=reject)
# Read empty room noise raw data
raw_noise = mne.io.read_raw_fif(noise_fname, preload=True)
raw_noise.info['bads'] = ['MEG 2443'] # 1 bad MEG channel
raw_noise.pick_channels([raw_noise.ch_names[pick] for pick in picks])
raw_noise.info.normalize_proj()
# Create noise epochs and make sure the number of noise epochs corresponds to
# the number of data epochs
events_noise = make_fixed_length_events(raw_noise, event_id)
epochs_noise = mne.Epochs(raw_noise, events_noise, event_id, tmin_plot,
tmax_plot, baseline=None, preload=True, proj=True,
reject=reject)
epochs_noise.info.normalize_proj()
epochs_noise.apply_proj()
# then make sure the number of epochs is the same
epochs_noise = epochs_noise[:len(epochs.events)]
# Read forward operator
forward = mne.read_forward_solution(fname_fwd)
# Read label
label = mne.read_label(fname_label)
# Setting frequency bins as in Dalal et al. 2008
freq_bins = [(4, 12), (12, 30), (30, 55), (65, 300)] # Hz
win_lengths = [0.3, 0.2, 0.15, 0.1] # s
# Then set FFTs length for each frequency range.
# Should be a power of 2 to be faster.
n_ffts = [256, 128, 128, 128]
# Subtract evoked response prior to computation?
subtract_evoked = False
# Calculating noise cross-spectral density from empty room noise for each
# frequency bin and the corresponding time window length. To calculate noise
# from the baseline period in the data, change epochs_noise to epochs
noise_csds = []
for freq_bin, win_length, n_fft in zip(freq_bins, win_lengths, n_ffts):
noise_csd = csd_fourier(epochs_noise, fmin=freq_bin[0], fmax=freq_bin[1],
tmin=-win_length, tmax=0, n_fft=n_fft)
noise_csds.append(noise_csd.sum())
# Computing DICS solutions for time-frequency windows in a label in source
# space for faster computation, use label=None for full solution
stcs = tf_dics(epochs, forward, noise_csds, tmin, tmax, tstep, win_lengths,
freq_bins=freq_bins, subtract_evoked=subtract_evoked,
n_ffts=n_ffts, reg=0.05, label=label, inversion='matrix')
# Plotting source spectrogram for source with maximum activity
# Note that tmin and tmax are set to display a time range that is smaller than
# the one for which beamforming estimates were calculated. This ensures that
# all time bins shown are a result of smoothing across an identical number of
# time windows.
plot_source_spectrogram(stcs, freq_bins, tmin=tmin_plot, tmax=tmax_plot,
source_index=None, colorbar=True)
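# Quick structural check (an assumption about the return value): tf_dics returns
# one SourceEstimate per frequency bin, so the data shapes can be inspected.
for freq_bin, stc in zip(freq_bins, stcs):
    print(freq_bin, stc.data.shape)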
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Neural machine translation with attention
Step2: Download and prepare the dataset
Step3: Limit the size of the dataset to experiment faster (optional)
Step4: Create a tf.data dataset
Step5: Write the encoder and decoder model
Step6: Define the optimizer and the loss function
Step7: Checkpoints (object-based saving)
Step8: Training
Step9: Translation
Step10: Restore the latest checkpoint and test
|
<ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from sklearn.model_selection import train_test_split
import unicodedata
import re
import numpy as np
import os
import io
import time
# Download the file
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = os.path.dirname(path_to_zip)+"/spa-eng/spa.txt"
# Converts the unicode file to ascii
def unicode_to_ascii(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
def preprocess_sentence(w):
w = unicode_to_ascii(w.lower().strip())
  # Insert a space between a word and the punctuation following it
  # e.g.: "he is a boy." => "he is a boy ."
  # Reference: https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation
w = re.sub(r"([?.!,¿])", r" \1 ", w)
w = re.sub(r'[" "]+', " ", w)
  # Replace everything with a space except (a-z, A-Z, ".", "?", "!", ",")
w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w)
w = w.rstrip().strip()
  # Add a start and an end token to the sentence
  # so the model knows when to start and when to stop predicting
w = '<start> ' + w + ' <end>'
return w
en_sentence = u"May I borrow this book?"
sp_sentence = u"¿Puedo tomar prestado este libro?"
print(preprocess_sentence(en_sentence))
print(preprocess_sentence(sp_sentence).encode('utf-8'))
# 1. Remove the accents
# 2. Clean the sentences
# 3. Return word pairs in the format: [ENGLISH, SPANISH]
def create_dataset(path, num_examples):
lines = io.open(path, encoding='UTF-8').read().strip().split('\n')
word_pairs = [[preprocess_sentence(w) for w in l.split('\t')] for l in lines[:num_examples]]
return zip(*word_pairs)
en, sp = create_dataset(path_to_file, None)
print(en[-1])
print(sp[-1])
def max_length(tensor):
return max(len(t) for t in tensor)
def tokenize(lang):
lang_tokenizer = tf.keras.preprocessing.text.Tokenizer(
filters='')
lang_tokenizer.fit_on_texts(lang)
tensor = lang_tokenizer.texts_to_sequences(lang)
tensor = tf.keras.preprocessing.sequence.pad_sequences(tensor,
padding='post')
return tensor, lang_tokenizer
def load_dataset(path, num_examples=None):
  # Create cleaned input and output pairs
targ_lang, inp_lang = create_dataset(path, num_examples)
input_tensor, inp_lang_tokenizer = tokenize(inp_lang)
target_tensor, targ_lang_tokenizer = tokenize(targ_lang)
return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer
# Try experimenting with a dataset of this size
num_examples = 30000
input_tensor, target_tensor, inp_lang, targ_lang = load_dataset(path_to_file, num_examples)
# Calculate the max_length of the target tensors
max_length_targ, max_length_inp = max_length(target_tensor), max_length(input_tensor)
# Create training and validation sets using an 80-20 split
input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2)
# Show the lengths
print(len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val))
def convert(lang, tensor):
for t in tensor:
if t!=0:
print ("%d ----> %s" % (t, lang.index_word[t]))
print ("Input Language; index to word mapping")
convert(inp_lang, input_tensor_train[0])
print ()
print ("Target Language; index to word mapping")
convert(targ_lang, target_tensor_train[0])
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 64
steps_per_epoch = len(input_tensor_train)//BATCH_SIZE
embedding_dim = 256
units = 1024
vocab_inp_size = len(inp_lang.word_index)+1
vocab_tar_size = len(targ_lang.word_index)+1
dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
example_input_batch, example_target_batch = next(iter(dataset))
example_input_batch.shape, example_target_batch.shape
class Encoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):
super(Encoder, self).__init__()
self.batch_sz = batch_sz
self.enc_units = enc_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.enc_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
def call(self, x, hidden):
x = self.embedding(x)
output, state = self.gru(x, initial_state = hidden)
return output, state
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.enc_units))
encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
# sample input
sample_hidden = encoder.initialize_hidden_state()
sample_output, sample_hidden = encoder(example_input_batch, sample_hidden)
print ('Encoder output shape: (batch size, sequence length, units) {}'.format(sample_output.shape))
print ('Encoder Hidden state shape: (batch size, units) {}'.format(sample_hidden.shape))
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, query, values):
# hidden shape == (batch_size, hidden size)
# hidden_with_time_axis shape == (batch_size, 1, hidden size)
    # the addition is performed this way to calculate the score
hidden_with_time_axis = tf.expand_dims(query, 1)
# score shape == (batch_size, max_length, 1)
    # the last axis is 1 because the score is applied to self.V
    # the shape of the tensor before applying self.V is (batch_size, max_length, units)
score = self.V(tf.nn.tanh(
self.W1(values) + self.W2(hidden_with_time_axis)))
    # attention_weights shape == (batch_size, max_length, 1)
attention_weights = tf.nn.softmax(score, axis=1)
    # context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * values
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
attention_layer = BahdanauAttention(10)
attention_result, attention_weights = attention_layer(sample_hidden, sample_output)
print("Attention result shape: (batch size, units) {}".format(attention_result.shape))
print("Attention weights shape: (batch_size, sequence_length, 1) {}".format(attention_weights.shape))
class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):
super(Decoder, self).__init__()
self.batch_sz = batch_sz
self.dec_units = dec_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
self.fc = tf.keras.layers.Dense(vocab_size)
    # used for attention
self.attention = BahdanauAttention(self.dec_units)
def call(self, x, hidden, enc_output):
    # enc_output shape == (batch_size, max_length, hidden_size)
context_vector, attention_weights = self.attention(hidden, enc_output)
    # x shape after passing through the embedding layer == (batch_size, 1, embedding_dim)
x = self.embedding(x)
    # x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
    # pass the concatenated vector to the GRU
output, state = self.gru(x)
# output shape == (batch_size * 1, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size, vocab)
x = self.fc(output)
return x, state, attention_weights
decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)
sample_decoder_output, _, _ = decoder(tf.random.uniform((64, 1)),
sample_hidden, sample_output)
print ('Decoder output shape: (batch_size, vocab size) {}'.format(sample_decoder_output.shape))
optimizer = tf.keras.optimizers.Adam()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(
from_logits=True, reduction='none')
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_mean(loss_)
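# Small sanity check (illustrative, not part of the original tutorial): a padded
# target id (0) is masked out, so its loss contribution is exactly zero.
print(loss_function(tf.constant([0]), tf.random.uniform((1, vocab_tar_size))).numpy())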
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
checkpoint = tf.train.Checkpoint(optimizer=optimizer,
encoder=encoder,
decoder=decoder)
@tf.function
def train_step(inp, targ, enc_hidden):
loss = 0
with tf.GradientTape() as tape:
enc_output, enc_hidden = encoder(inp, enc_hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']] * BATCH_SIZE, 1)
    # Teacher forcing - feed the target as the next input
for t in range(1, targ.shape[1]):
# passing enc_output to the decoder
predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
loss += loss_function(targ[:, t], predictions)
      # use teacher forcing
dec_input = tf.expand_dims(targ[:, t], 1)
batch_loss = (loss / int(targ.shape[1]))
variables = encoder.trainable_variables + decoder.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return batch_loss
EPOCHS = 10
for epoch in range(EPOCHS):
start = time.time()
enc_hidden = encoder.initialize_hidden_state()
total_loss = 0
for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)):
batch_loss = train_step(inp, targ, enc_hidden)
total_loss += batch_loss
if batch % 100 == 0:
print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1,
batch,
batch_loss.numpy()))
  # Save (a checkpoint of) the model every 2 epochs
if (epoch + 1) % 2 == 0:
checkpoint.save(file_prefix = checkpoint_prefix)
print('Epoch {} Loss {:.4f}'.format(epoch + 1,
total_loss / steps_per_epoch))
print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
def evaluate(sentence):
attention_plot = np.zeros((max_length_targ, max_length_inp))
sentence = preprocess_sentence(sentence)
inputs = [inp_lang.word_index[i] for i in sentence.split(' ')]
inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs],
maxlen=max_length_inp,
padding='post')
inputs = tf.convert_to_tensor(inputs)
result = ''
hidden = [tf.zeros((1, units))]
enc_out, enc_hidden = encoder(inputs, hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']], 0)
for t in range(max_length_targ):
predictions, dec_hidden, attention_weights = decoder(dec_input,
dec_hidden,
enc_out)
    # store the attention weights to plot them later
attention_weights = tf.reshape(attention_weights, (-1, ))
attention_plot[t] = attention_weights.numpy()
predicted_id = tf.argmax(predictions[0]).numpy()
result += targ_lang.index_word[predicted_id] + ' '
if targ_lang.index_word[predicted_id] == '<end>':
return result, sentence, attention_plot
    # the predicted ID is fed back into the model
dec_input = tf.expand_dims([predicted_id], 0)
return result, sentence, attention_plot
# function for plotting the attention weights
def plot_attention(attention, sentence, predicted_sentence):
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(1, 1, 1)
ax.matshow(attention, cmap='viridis')
fontdict = {'fontsize': 14}
ax.set_xticklabels([''] + sentence, fontdict=fontdict, rotation=90)
ax.set_yticklabels([''] + predicted_sentence, fontdict=fontdict)
ax.xaxis.set_major_locator(ticker.MultipleLocator(1))
ax.yaxis.set_major_locator(ticker.MultipleLocator(1))
plt.show()
def translate(sentence):
result, sentence, attention_plot = evaluate(sentence)
print('Input: %s' % (sentence))
print('Predicted translation: {}'.format(result))
attention_plot = attention_plot[:len(result.split(' ')), :len(sentence.split(' '))]
plot_attention(attention_plot, sentence.split(' '), result.split(' '))
# restore the latest checkpoint in checkpoint_dir
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))
translate(u'hace mucho frio aqui.')
translate(u'esta es mi vida.')
translate(u'¿todavia estan en casa?')
# an example of a mistranslation
translate(u'trata de averiguarlo.')
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The shape of an ndarray gives us its dimensions. b is a 1-by-4 matrix, or a row vector. c is a 4-by-1 matrix, or a column vector. d is a 2-by-2 matrix.
Step2: Similarly, a matrix can be entered by first typing out a one-dimensional array and then putting the array through reshape
Step3: 5.1.4. There are a number of pre-built functions for special kinds of vectors and matrices.
Step4: 5.1.5. Matrix multiplications are performed via dot
Step5: If you are running Python 3.5 or higher, the binary operator @ may be used to denote matrix multiplication.
Step6: 5.1.6. ndarray supports coordinatewise operations. Observe, in particular, that * does not result in matrix multiplication.
Step7: Operations on arrays with mismatching dimensions can still be legitimate, thanks to broadcasting. For example, scalar multiplication works fine
Step8: Row-wise operations and column-wise operations are also possible
Step9: 5.2. Floating-Point Arithmetic
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.array([1, 2, 3, 4])
b = np.array([[1, 2, 3, 4]])
c = np.array([[1], [2], [3], [4]])
d = np.array([[1, 2], [3, 4]])
print(a)
print('shape of a: {}'.format(a.shape))
print()
print(b)
print('shape of b: {}'.format(b.shape))
print()
print(c)
print('shape of c: {}'.format(c.shape))
print()
print(d)
print('shape of d: {}'.format(d.shape))
print(b)
print('shape of b: {}'.format(b.shape))
print()
print(b.transpose())
print('shape of b.transpose(): {}'.format(b.transpose().shape))
print(b)
print('shape of b: {}'.format(b.shape))
print()
print(b.reshape((2,2)))
print('shape of b.reshape((2,2)): {}'.format(b.reshape((2,2)).shape))
print()
print(b.reshape((4,1)))
print('shape of b.reshape((4,1)): {}'.format(b.reshape((4,1)).shape))
print(np.arange(5))
print()
print(np.arange(2, 8))
print()
print(np.arange(2, 15, 3))
print()
print(np.eye(1))
print()
print(np.eye(2))
print()
print(np.eye(3))
print()
print(np.zeros(1))
print()
print(np.zeros(2))
print()
print(np.zeros(3))
print()
x = np.array([[2, 3], [5, 7]])
y = np.array([[1, -1], [-1, 1]])
print(np.dot(x,y))
print()
print(x.dot(y))
import sys
version_major, version_minor = sys.version_info[0:2]
if version_major >= 3 and version_minor >= 5:
print(x @ y)
else:
print('unsupported operation')
print(np.array([1, 2, 3]) + np.array([3,4,5]))
print()
print(np.array([[4,3],[2,1]]) - np.array([[1,1],[1,1]]))
print()
print(np.array([[1, 2, 3], [4, 5, 6]]) * np.array([[1, 2, 1], [3, -1, -1]]))
print()
print(np.array([[1], [3]]) / np.array([[2], [2]]))
3 * np.array([[2,4,3], [1,2,5], [-1, -1, -1]])
x = np.array([5, -1, 3])
y = np.arange(9).reshape((3, 3))
print(y)
print()
print(x.reshape((3, 1)) + y)
print()
print(x.reshape((1, 3)) + y)
e = 1.0
while (1.0 + 0.5 * e) != 1.0:
e = 0.5 * e
print(e)
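# Cross-check (assuming IEEE 754 double-precision floats): the value found by the
# loop above should match the machine epsilon reported by the standard library.
import sys
print(sys.float_info.epsilon)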
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Question 01
Step2: Question 02
Step3: Question 03
Step4: Question 04
Step5: Question 05
Step6: Question 06
Step7: Question 07
Step8: Question 08
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
% matplotlib inline
def f(x):
return (x**4 - 10 * x ** 3 - x**2 + 5 * x) / (x**4 + 1)
A = 8.00
B = 12.00
xa = A
xb = B
ga = f(xa)
gb = f(xb)
for i in range(10):
xmed = (xa + xb) / 2
gmed = f(xmed)
if gmed < 0:
xa, ga = xmed, gmed
elif gmed > 0:
xb, gb = xmed, gmed
    else:
print('%.6f' % xmed)
break
print('%.6f' % xmed)
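# Optional cross-check (not part of the original exercise): SciPy's Brent root
# finder should land on the same root of f inside [A, B].
from scipy.optimize import brentq
print('%.6f' % brentq(f, A, B))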
def g(x):
return x**4 - 10 * x**3 + 8 * x
def g_linha(x):
return 4*x**3 - 30*x**2 + 8
x0 = 8.00
x = x0
for i in range(10):
value = g(x)
diff = g_linha(x)
print('%.6f' % x)
x = x - value / diff
def y(x):
return (x**2 + 3*x - 3)/(x**2 + 1)
A = -1.00
B = 10.00
if A >= B:
    print("Invalid interval")
else:
xa = A
xb = B
ga = y(xa)
gb = y(xb)
for i in range(10):
x_pfalse = (gb * xa - ga * xb) / (gb - ga)
g_pfalse = y(x_pfalse)
if g_pfalse < 0:
xa = x_pfalse
ga = g_pfalse
elif g_pfalse > 0:
xb = x_pfalse
gb = g_pfalse
elif g_pfalse == 0:
print('%.6f' % x_pfalse)
break
print('%.6f' % x_pfalse)
def k(x):
return x ** 4 - 10 * x**3 + 8 * x
x0 = -5.00
x1 = 5.00
xa = x0
xb = x1
ga = k(xa)
gb = k(xb)
for i in range(10):
xi = (xa * gb - xb * ga) / (gb - ga)
    xa, ga = xb, gb
    xb = xi
    gb = k(xb)
print ('f(%.6f) = %.6f' % (xi, k(xi)))
def g(x):
return x * (x - 1) * (x + 1) * (x - 2)
A = 0.00
B = 10.0
for i in range(10):
xa = A + (B - A) / 3
xb = A + (B - A) * 2 / 3
fa = g(xa)
fb = g(xb)
if fa > fb:
A = xa
else:
B = xb
print('%.6f' % (xa if fa < fb else xb))
a = 1.0
b = 0.0
c = -1.0
d = 0.0
e = 1.0
def h(x):
    return a * x**4 + b * x**3 + c * x**2 + d * x + e
A = 0.0
B = 10.0
golden = (1 + 5 ** 0.5) / 2
xa = B - (B - A)/golden
xb = A + (B - A)/golden
fa = h(xa)
fb = h(xb)
print('%.6f' % (xa if fa < fb else xb))
for i in range(9):
if fa > fb:
A = xa
xa = xb
fa = fb
xb = A + (B - A)/golden
fb = h(xb)
else:
B = xb
xb = xa
fb = fa
xa = B - (B - A)/golden
fa = h(xa)
print('%.6f' % (xa if fa < fb else xb))
def l(x):
return -x * np.e ** -x
def l_linha(x):
return np.e**-x * (x - 1)
x0 = 7.5
k = 1.0
for i in range(9):
print('%d: f(%.6f) = %.6f' % (i, x0, l(x0)))
dx = l_linha(x0)
if dx > 0:
x0 -= k
else:
x0 += k
print('%d: f(%.6f) = %.6f' % (9, x0, l(x0)))
def w(x):
return x**4 - 10 * x**3 + 8 * x
def w_linha(x):
return 4 * x**3 - 30 * x**2 + 8
def w_2linha(x):
return 12 * x**2 - 60 * x
x0 = 8.00
xi = x0
for i in range(10):
xi = xi - w_linha(xi) / w_2linha(xi)
print('%d: f(%.6f) = %.6f' % (i, xi, w(xi)))
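# Optional cross-check (not part of the original exercise): SciPy's bounded scalar
# minimiser should agree with the Newton iteration above on [0, 10].
from scipy.optimize import minimize_scalar
res = minimize_scalar(w, bounds=(0, 10), method='bounded')
print('f(%.6f) = %.6f' % (res.x, w(res.x)))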
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: A number of Pandas functions are useful when cleaning up raw data and converting it to a data set ready for analysis and visualisation. In this notebook a selection of methods are introduced
Step2: drop
Step3: rename
Step4: replace
Step5: Similar to rename, one can use a dict with the dictionary keys the old data and the dictionary values the new data
Step6: explode
Step7: drop_duplicates
Step8: astype
Step9: unique
Step10: .str.-methods
Step11: Note
Step12: Note
Step13: Whereas the data is already well organised and structured, some adjustments are required to support further analysis
Step14: Exercise
Step15: Exercise
Step16: Exercise
Step17: Exercise
Step18: Exercise
Step19: Exercise
Step20: Exercise
|
<ASSISTANT_TASK:>
Python Code:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
countries = pd.DataFrame({'county name': ['Belgium', 'Flance', 'Germany', 'Netherlands', ['United Kingdom', 'Germany']],
'population': [11.3, 64.3, 81.3, 16.9, 64.9],
'area': [30510, 671308, 357050, 41526, [244820, np.nan]],
'capital': ['Brussels', ' Paris ', 'Barlin', 'Amsterdam', 'London']})
countries
countries.drop(columns=["area", "capital"])
countries = countries.rename(columns={"county name": "country"})
countries["capital"].replace("Barlin", "Berlin")
countries = countries.replace({"Barlin": "Berlin", "Flance": "France"})
countries
countries_exploded = countries.explode(["country", "area"])
countries_exploded
countries_exploded.duplicated(subset=["country"])
countries_exploded = countries_exploded.drop_duplicates(subset=["country"], keep="first").copy() # More on this copy later
countries_exploded
countries_exploded.dtypes
countries_exploded["area"] = countries_exploded["area"].astype(int)
countries_exploded.dtypes
countries_exploded["capital"].unique()
countries_exploded["capital"] = countries_exploded["capital"].str.strip()
countries_exploded["capital"].unique()
casualties_raw = pd.read_csv("./data/TF_ACCIDENTS_VICTIMS_2020.zip",
compression='zip',
sep="|",
low_memory=False)
casualties_raw.head()
pd.options.display.max_columns = 45
casualties_raw.head()
column_names_with_fr = [col for col in casualties_raw.columns if col.endswith("_FR")]
column_names_with_fr
# %load _solutions/pandas_06_data_cleaning1.py
def clean_column_name(name):
return name.removeprefix("TX_").removesuffix("_DESCR_NL")
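# Quick check of the helper on a typical raw column name (the exact column name is
# an assumption; the raw file uses the TX_..._DESCR_NL naming pattern).
print(clean_column_name("TX_ROAD_TYPE_DESCR_NL"))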
# %load _solutions/pandas_06_data_cleaning2.py
# %load _solutions/pandas_06_data_cleaning3.py
# %load _solutions/pandas_06_data_cleaning4.py
# %load _solutions/pandas_06_data_cleaning5.py
# %load _solutions/pandas_06_data_cleaning6.py
# %load _solutions/pandas_06_data_cleaning7.py
# %load _solutions/pandas_06_data_cleaning8.py
# %load _solutions/pandas_06_data_cleaning9.py
# %load _solutions/pandas_06_data_cleaning10.py
# Conversion to english weekday names
casualties["DAY_OF_WEEK"] = casualties["DAY_OF_WEEK"].replace({"maandag": "Monday",
"dinsdag": "Tuesday",
"woensdag": "Wednesday",
"donderdag": "Thursday",
"vrijdag": "Friday",
"zaterdag": "Saturday",
"zondag": "Sunday"})
# %load _solutions/pandas_06_data_cleaning11.py
# %load _solutions/pandas_06_data_cleaning12.py
# %load _solutions/pandas_06_data_cleaning13.py
# verify outcome
casualties["AGE_CLS"].unique()
# %load _solutions/pandas_06_data_cleaning14.py
# %load _solutions/pandas_06_data_cleaning15.py
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Get data (cAFM and SKPM images of a P3HT/PMMA blend) for this tutorial
Step2: 3. Import relevant packages and data into the notebook.
Step3: The data is stored in n-dimensional arrays where n = # data channels. The first layer (i.e, layer 0) is topography, the second, third, etc are the same as when the files are opened in Igor.
Step4: Note that the image is unflattened. We can use the flatten function in the imagealignment file (which you can look at via spyder, textedit, notepad, or whatever you fancy.)
Step5: Here's how we can display the bottom half of the 256x256 SKPM image
Step6: Image Registration with Affine Transformations
Step7: See the docstrings for details on the parameters. Docstrings are embedded between triple quotes in the raw source files.
Step8: 3. Now that the affine registration is set up, we can start registering images with respect to translation in the x-y plane, rotation, scaling, etc. with find_affine. Like the example on the dipy page, we'll first optimize the registration with a transformation with the fewest degrees of freedom (like RotationTransform2D() or TranslationTransform2D()) and then refine it.
Step9: You can apply the optimized translation transformation to the moving image, SKPMtopo, with apply_affine.
Step10: We can optimize the transformation with respect to translation and rotation by supplying the previously optimized translation transformation.
Step11: ...and apply it to the original SKPM topo.
Step12: Do the same with the full affine transformation.
Step13: To register the cAFM with the SKPM image, you can apply it to the SKPM layer
|
<ASSISTANT_TASK:>
Python Code:
#for igor files:
!curl -o util.py https://raw.githubusercontent.com/kongjy/hyperAFM/master/hyperAFM/util.py
#for image alignment:
!curl -o imagealignment.py https://raw.githubusercontent.com/kongjy/hyperAFM/master/hyperAFM/imagealignment.py
#the above will download the files at the specified URL and save them as the filenames specified after '-o'
#curl stands for "see url." to learn more about curl see this page:
#https://tecadmin.net/5-curl-commands-to-download-files/#
#SKPM file:
!curl -o SKPM.ibw https://raw.githubusercontent.com/kongjy/hyperAFM/master/Data/PolymerBlends/Image%20Alignment%20Tutorial/Film15SKPM_0000.ibw
#cAFM file:
!curl -o cAFM.ibw https://raw.githubusercontent.com/kongjy/hyperAFM/master/Data/PolymerBlends/Image%20Alignment%20Tutorial/Film15cAFM_1V_0001.ibw
#packages to load in data and for image alignment
from util import * #* means to import everything.
from imagealignment import *
#to plot
import matplotlib.pyplot as plt
#display graphs/plots in notebook
%matplotlib inline
#import data with load_ibw function in util
SKPMfile=load_ibw('SKPM.ibw')
cAFMfile=load_ibw('cAFM.ibw')
fig=plt.imshow(SKPMfile[:,:,0])
plt.colorbar()
SKPMtopo_flattened = flatten(SKPMfile[:,:,0])
plt.imshow(SKPMtopo_flattened)
plt.colorbar()
SKPM_bottomquarter=flatten(SKPMfile[128:,:,3])
plt.imshow(SKPM_bottomquarter)
mutualinformation=setup_mutualinformation(nbins=32, sampling_prop=None)
affreg=setup_affine(metric=mutualinformation, level_iters=None , sigmas=None, \
factors=None, method='L-BFGS-B')
cAFMtopo=flatten(cAFMfile[:,:,0])
SKPMtopo=flatten(SKPMfile[:,:,0])
translationtrans=find_affine(static=cAFMtopo, moving=SKPMtopo, affreg=affreg, \
transform=TranslationTransform2D(), params0=None, \
starting_affine=None)
SKPMtopo_translated = apply_affine(moving=SKPMtopo, transformation=translationtrans)
plt.imshow(SKPMtopo_translated)
rigidtrans=find_affine(static=cAFMtopo, moving=SKPMtopo, affreg=affreg, \
transform=RigidTransform2D(), params0=None, \
starting_affine=translationtrans)
#a rigid transform is one that includes rotations and translations
SKPMtopo_rigid=apply_affine(moving=SKPMtopo, transformation=rigidtrans)
plt.imshow(SKPMtopo_rigid)
affinetrans=find_affine(static=cAFMtopo, moving=SKPMtopo, affreg=affreg, \
transform=AffineTransform2D(), params0=None, \
starting_affine=rigidtrans)
SKPMtopo_affine=apply_affine(moving=SKPMtopo, transformation=affinetrans)
plt.imshow(SKPMtopo_affine)
SKPM_transformed=apply_affine(moving=flatten(SKPMfile[:,:,3]), transformation=affinetrans)
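# One way to eyeball the result (an illustrative sketch, not from the original
# notebook): overlay the registered SKPM channel on the static cAFM topography.
plt.imshow(cAFMtopo, cmap='gray')
plt.imshow(SKPM_transformed, cmap='viridis', alpha=0.5)
plt.colorbar()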
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Define adjacency for statistics
Step3: Compute statistic
Step4: View time-frequency plots
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import tfr_morlet
from mne.stats import permutation_cluster_1samp_test
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
tmin, tmax, event_id = -0.3, 0.6, 1
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
include = []
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, include=include, exclude='bads')
# Load condition 1
event_id = 1
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, eog=150e-6))
# just use right temporal sensors for speed
epochs.pick_channels(mne.read_selection('Right-temporal'))
evoked = epochs.average()
# Factor to down-sample the temporal dimension of the TFR computed by
# tfr_morlet. Decimation occurs after frequency decomposition and can
# be used to reduce memory usage (and possibly computational time of downstream
# operations such as nonparametric statistics) if you don't need high
# spectrotemporal resolution.
decim = 5
freqs = np.arange(8, 40, 2) # define frequencies of interest
sfreq = raw.info['sfreq'] # sampling in Hz
tfr_epochs = tfr_morlet(epochs, freqs, n_cycles=4., decim=decim,
average=False, return_itc=False, n_jobs=1)
# Baseline power
tfr_epochs.apply_baseline(mode='logratio', baseline=(-.100, 0))
# Crop in time to keep only what is between 0 and 400 ms
evoked.crop(-0.1, 0.4)
tfr_epochs.crop(-0.1, 0.4)
epochs_power = tfr_epochs.data
sensor_adjacency, ch_names = mne.channels.find_ch_adjacency(
tfr_epochs.info, 'grad')
# Subselect the channels we are actually using
use_idx = [ch_names.index(ch_name.replace(' ', ''))
for ch_name in tfr_epochs.ch_names]
sensor_adjacency = sensor_adjacency[use_idx][:, use_idx]
assert sensor_adjacency.shape == \
(len(tfr_epochs.ch_names), len(tfr_epochs.ch_names))
assert epochs_power.data.shape == (
len(epochs), len(tfr_epochs.ch_names),
len(tfr_epochs.freqs), len(tfr_epochs.times))
adjacency = mne.stats.combine_adjacency(
sensor_adjacency, len(tfr_epochs.freqs), len(tfr_epochs.times))
# our adjacency is square with each dim matching the data size
assert adjacency.shape[0] == adjacency.shape[1] == \
len(tfr_epochs.ch_names) * len(tfr_epochs.freqs) * len(tfr_epochs.times)
threshold = 3.
n_permutations = 50 # Warning: 50 is way too small for real-world analysis.
T_obs, clusters, cluster_p_values, H0 = \
permutation_cluster_1samp_test(epochs_power, n_permutations=n_permutations,
threshold=threshold, tail=0,
adjacency=adjacency,
out_type='mask', verbose=True)
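# Quick summary (illustrative): number of clusters passing the 0.05 level. With only
# 50 permutations this is a very rough indication, as noted above.
print('%d of %d clusters have p <= 0.05'
      % ((cluster_p_values <= 0.05).sum(), len(clusters)))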
evoked_data = evoked.data
times = 1e3 * evoked.times
plt.figure()
plt.subplots_adjust(0.12, 0.08, 0.96, 0.94, 0.2, 0.43)
# Create new stats image with only significant clusters
T_obs_plot = np.nan * np.ones_like(T_obs)
for c, p_val in zip(clusters, cluster_p_values):
if p_val <= 0.05:
T_obs_plot[c] = T_obs[c]
# Just plot one channel's data
ch_idx, f_idx, t_idx = np.unravel_index(
np.nanargmax(np.abs(T_obs_plot)), epochs_power.shape[1:])
# ch_idx = tfr_epochs.ch_names.index('MEG 1332') # to show a specific one
vmax = np.max(np.abs(T_obs))
vmin = -vmax
plt.subplot(2, 1, 1)
plt.imshow(T_obs[ch_idx], cmap=plt.cm.gray,
extent=[times[0], times[-1], freqs[0], freqs[-1]],
aspect='auto', origin='lower', vmin=vmin, vmax=vmax)
plt.imshow(T_obs_plot[ch_idx], cmap=plt.cm.RdBu_r,
extent=[times[0], times[-1], freqs[0], freqs[-1]],
aspect='auto', origin='lower', vmin=vmin, vmax=vmax)
plt.colorbar()
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title(f'Induced power ({tfr_epochs.ch_names[ch_idx]})')
ax2 = plt.subplot(2, 1, 2)
evoked.plot(axes=[ax2], time_unit='s')
plt.show()
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Process our simulated raw data (taking into account head movements)
|
<ASSISTANT_TASK:>
Python Code:
# Authors: Eric Larson <larson.eric.d@gmail.com>
#
# License: BSD (3-clause)
from os import path as op
import mne
from mne.preprocessing import maxwell_filter
print(__doc__)
data_path = op.join(mne.datasets.misc.data_path(verbose=True), 'movement')
pos = mne.chpi.read_head_pos(op.join(data_path, 'simulated_quats.pos'))
raw = mne.io.read_raw_fif(op.join(data_path, 'simulated_movement_raw.fif'))
raw_stat = mne.io.read_raw_fif(op.join(data_path,
'simulated_stationary_raw.fif'))
# extract our resulting events
events = mne.find_events(raw, stim_channel='STI 014')
events[:, 2] = 1
raw.plot(events=events)
topo_kwargs = dict(times=[0, 0.1, 0.2], ch_type='mag', vmin=-500, vmax=500)
# 0. Take average of stationary data (bilateral auditory patterns)
evoked_stat = mne.Epochs(raw_stat, events, 1, -0.2, 0.8).average()
evoked_stat.plot_topomap(title='Stationary', **topo_kwargs)
# 1. Take a naive average (smears activity)
evoked = mne.Epochs(raw, events, 1, -0.2, 0.8).average()
evoked.plot_topomap(title='Moving: naive average', **topo_kwargs)
# 2. Use raw movement compensation (restores pattern)
raw_sss = maxwell_filter(raw, head_pos=pos)
evoked_raw_mc = mne.Epochs(raw_sss, events, 1, -0.2, 0.8).average()
evoked_raw_mc.plot_topomap(title='Moving: movement compensated', **topo_kwargs)
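# Rough quantitative comparison (illustrative, not in the original example):
# global RMS of each average; movement compensation should bring the moving-data
# average closer to the stationary one.
import numpy as np
for name, ev in [('stationary', evoked_stat), ('naive', evoked),
                 ('compensated', evoked_raw_mc)]:
    print(name, np.sqrt((ev.data ** 2).mean()))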
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Display sequence logo of unaligned motifs
Step2: Multiple sequence alignment of motifs with Muscle
Step3: Display sequence logo of aligned motifs
Step4: Position weight matrices of motifs
Step5: Display PWM of a single motif
Step6: Scoring a single sequence w.r.t. a motif
Step7: Transform with HMM as scoring criteria
|
<ASSISTANT_TASK:>
Python Code:
# print the motifs as plain lists
for motif in glam2.motives_list:
    for m in motif:
        print(m)
    print()
glam2.display_logo(do_alignment=False)
glam2.display_logo(motif_num=1)
glam2.align_motives() #MSA with Muscle
motives1 = glam2.aligned_motives_list
for m in motives1:
    for i in m:
        print(i)
    print()
glam2.display_logo(do_alignment=True)
glam2.display()
glam2.matrix()
glam2.display(motif_num=3)
test_seq = 'GGAGAAAATACCGC' * 10
seq_score = glam2.score(motif_num=2, seq=test_seq)
print(seq_score)
glam_3 = Glam2(alphabet='dna', gap_in_alphabet=True, scoring_criteria='hmm', alignment_runs=3)
matches = glam_3.fit_transform(fasta_file="seq9.fa", return_match=True)
for m in matches: print(m)
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Creating a SEA model
Step2: We are only interested in a limited frequency range, e.g. the octave bands ranging from 20 to 4000 Hz.
Step3: We don't know the shear modulus of concrete, so let's calculate it. With the function modulus we can calculate for an isotropic material any elastic modulus given two other ones.
Step4: Just to be sure, we can list the properties of the concrete.
Step5: Rooms and wall
Step6: Given the material type and the volume we can for example calculate the mass of the air in the room
Step7: or plot the modal density of the subsystem representing longitudinal waves
Step8: We now add the concrete wall.
Step9: Let's have a look at the modal densities of the subsystems.
Step10: The modal density of the subsystem representing bending waves in the wall seems to remain constant.
Step11: What is shown is a table, but what is returned is in fact a pandas DataFrame. Pandas is a data analysis toolkit and offers powerful tools to analyse data and to export it to e.g. spreadsheet formats like Excel.
Step12: Now, when we call junction1.update_couplings it tries to determine all the couplings between the subsystems of the components that were added.
Step13: We can now for example see the coupling loss factors of all the couplings that were added.
Step14: Now that both the coupling loss factors and damping loss factors are known we can also list the total loss factor
Step15: The coupling loss factor of the coupling between the rooms is based on the non-resonant transmission coefficient.
Step16: Excitation
Step17: The input power $P$ depends on the volume velocity $U$ of the source and the real part of the radiation impedance, i.e. the radiation resistance $R$, so that $P = \frac{1}{2} |U|^2 R$.
Step18: The resistance increases with frequency and therefore the radiated power increases similarly.
Step19: Solving the system
Step20: We can have a look at the modal energy
Step21: but those values are generally hard to interpret. Instead, we could just request the sound pressure levels in the rooms
Step22: or plot them.
Step23: Let's consider the sound pressure level difference between the two rooms.
Step24: Obviously, we can also look at the modal energies
Step25: or see the level contributions of the individual subsystems.
Step26: Path analysis and graphs
Step27: As soon as a model gets a bit bigger it can be hard to track which objects are connected. One way to help with keeping an overview is by drawing graphs.
Step28: The following graph shows the relation between components and subsystems.
Step29: We can also show for example subsystems and couplings.
Step30: Path analysis
Step31: and to determine the possible paths between any two subsystems.
Step32: We can also calculate the level difference due to a transmission path.
Step33: Saving and restoring a model
Step34: YAML is a human-readable file format. Models can be implemented or edited in the YAML file if desired.
Step35: Loading is done using the load method.
Step36: To verify whether the models are similar we check the modal energy.
Step37: That looks correct. To be really sure we just calculate the modal energies again in the second model, to verify that other parameters have also been restored.
Step38: Same results.
|
<ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
pd.set_option('float_format', '{:.2e}'.format)
import matplotlib
%matplotlib inline
from seapy import System
from acoustics.signal import OctaveBand
f = OctaveBand(fstart=20.0, fstop=4000.0, fraction=1)
system1 = System(f)
air = system1.add_material('air',
'MaterialGas',
density = 1.296,
temperature = 293.0,
bulk = 1.01e5,
loss_factor=0.05)
concrete = system1.add_material('concrete',
'MaterialSolid',
young=3.0e10,
poisson=0.15,
density=2.3e3,
loss_factor=0.02)
from seapy.materials.materialsolid import modulus
concrete.shear = modulus('shear', young=3.0e10, poisson=0.15)
concrete.info(['density',
'poisson',
'young',
'shear',])
room1 = system1.add_component('room1',
'Component3DAcoustical',
material='air',
length=4.0,
height=2.5,
width=5.0)
room2 = system1.add_component('room2',
'Component3DAcoustical',
material='air',
length=5.0,
height=2.5,
width=5.0)
room1.mass
fig = room1.subsystem_long.plot("modal_density", yscale='log')
wall = system1.add_component('wall',
'Component2DPlate',
material='concrete',
length=3.0,
width=2.5,
height=0.05)
system1.info(system1.subsystems, 'modal_density')
wall.subsystem_bend.info(['soundspeed_group',
'soundspeed_phase',
'modal_density',
'average_frequency_spacing',
'power_input',
'dlf',
'tlf',])
junction1 = system1.add_junction('junction1', 'Junction', shape='Surface', components=['room1',
'room2',
'wall'])
junction1.update_couplings()
system1.info(system1.couplings, 'clf')
system1.info(system1.subsystems, 'tlf')
system1.get_object('room1_SubsystemLong_room2_SubsystemLong').info(['tau', 'sound_reduction_index'])
system1.get_object('wall_SubsystemBend_room1_SubsystemLong').info(['critical_frequency'])
excitation1 = room1.subsystem_long.add_excitation('excitation1',
'ExcitationPointVolume',
velocity=0.001,
radius=0.05)
excitation1.info(['resistance'])
fig = excitation1.plot('power_level')
system1.solve()
system1.info(system1.subsystems, 'modal_energy')
system1.info(['room1', 'room2'], 'pressure_level')
fig = system1.plot(['room1', 'room2'], 'pressure_level')
(room1.info(['pressure_level']) - room2.info(['pressure_level']))
fig = system1.get_object('room1_SubsystemLong_room2_SubsystemLong').plot('sound_reduction_index')
system1.info(system1.subsystems, 'modal_energy')
system1.info(system1.subsystems, 'velocity_level')
system1.info(system1.subsystems, 'pressure_level')
for obj in wall.linked_subsystems:
print(obj.name)
import networkx as nx
G = system1.path_analysis.graph(['components', 'subsystems'])
nx.draw_networkx(G)
G = system1.path_analysis.graph(['subsystems', 'couplings'])
fig = nx.draw_networkx(G)
from seapy.tools import graph_couplings
G = graph_couplings(system1)
fig = nx.draw_networkx(G)
system1.path_analysis.has_path('room1_SubsystemLong', 'room2_SubsystemLong')
for path in system1.path_analysis.paths('room1_SubsystemLong', 'room2_SubsystemLong'):
print(path)
for path in system1.path_analysis.paths('room1_SubsystemLong', 'room2_SubsystemLong'):
print(path.level_difference)
list(system1.path_analysis.paths('room1_SubsystemLong', 'room2_SubsystemLong'))[0].level_difference
system1.save("model.yaml")
!head -n 20 model.yaml
system2 = System.load("model.yaml")
system1.info(system1.subsystems, 'modal_energy')
system2.info(system2.subsystems, 'modal_energy')
system2.solve()
system2.info(system2.subsystems, 'modal_energy')
from IPython.display import IFrame
IFrame("https://seapy.readthedocs.io/en/latest/", width=800, height=600)
<END_TASK>
|