15,500 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise Sheet 7 (Übungsblatt 7)
In-class exercises
Exercise 1: CFG
Step3: Exercise 2: CFG
Step5: Homework
Exercise 7: Plural for the subject
Extend the grammar built in the in-class exercises so that the subject can also be put into the plural.
To do this, you need to do the following
Step6: Exercise 8: Adverbs and verb-second order
Now add the two adverbs "heute" and "morgen" to the grammar. In principle, adverbs can be placed very freely within a sentence. A peculiarity of German, however, is the so-called verb-second order (Verbzweitstellung), which becomes clear in sentences such as "Heute schläft der Mann."
Try to implement all possibilities | <ASSISTANT_TASK:>
Python Code:
grammar = """
S -> NP VP
NP -> DET[GEN=?x] NOM[GEN=?x]
NOM[GEN=?x] -> ADJ NOM[GEN=?x] | N[GEN=?x]
ADJ -> "schöne" | "kluge" | "dicke"
DET[GEN=mask,KAS=nom] -> "der"
DET[GEN=fem,KAS=dat] -> "der"
DET[GEN=fem,KAS=nom] -> "die"
DET[GEN=fem,KAS=akk] -> "die"
DET[GEN=neut,KAS=nom] -> "das"
DET[GEN=neut,KAS=akk] -> "das"
N[GEN=mask] -> "Mann"
N[GEN=fem] -> "Frau"
N[GEN=neut] -> "Buch"
VP -> V NP NP | V NP | V
V -> "gibt" | "schenkt" | "schläft" | "gefällt" | "kennt"
import nltk
from IPython.display import display
import sys
def test_grammar(grammar, sentences):
cfg = nltk.grammar.FeatureGrammar.fromstring(grammar)
parser = nltk.parse.FeatureEarleyChartParser(cfg)
for i, sent in enumerate(sentences, 1):
print("Satz {}: {}".format(i, sent))
sys.stdout.flush()
results = parser.parse(sent.split())
analyzed = False
for tree in results:
display(tree) # tree.draw() oder print(tree)
analyzed = True
if not analyzed:
print("Keine Analyse möglich", file=sys.stderr)
sys.stderr.flush()
pos_sentences = [
"der Mann schläft",
"der schöne Mann schläft",
"der Mann gibt der Frau das Buch"
]
neg_sentences = ["das Mann schläft", "das schöne Mann schläft"]
test_grammar(grammar, neg_sentences)
test_grammar(grammar, pos_sentences)
Explanation: Exercise Sheet 7 (Übungsblatt 7)
In-class exercises
Exercise 1 CFG: Agreement in noun phrases
The following grammar corresponds to the grammar from Exercise Sheet 4 at the end of the in-class exercises. (You may therefore also use the grammar you built yourself in that exercise as a starting point.)
Use the following table on the ambiguity of the forms of the German definite article as a guide, and adapt the grammar so that it only accepts grammatically correct noun phrases as parts of sentences. Concentrate on gender agreement between article and noun.
|Form|possible features|
|----|-----------------|
|der|[NUM=sg, GEN=mas, KAS=nom]|
||[NUM=sg, GEN=fem, KAS=dat]|
||[NUM=sg, GEN=fem, KAS=gen]|
||[NUM=pl, KAS=gen]|
|die|[NUM=sg, GEN=fem, KAS=nom]|
||[NUM=sg, GEN=fem, KAS=akk]|
||[NUM=pl, KAS=nom]|
||[NUM=pl, KAS=akk]|
|das|[NUM=sg, GEN=neu, KAS=nom]|
||[NUM=sg, GEN=neu, KAS=akk]|
End of explanation
grammar = """
S -> NP[KAS=nom] VP
NP[KAS=?y] -> DET[GEN=?x,KAS=?y] NOM[GEN=?x]
NOM[GEN=?x] -> ADJ NOM[GEN=?x] | N[GEN=?x]
ADJ -> "schöne" | "kluge" | "dicke"
DET[GEN=mask,KAS=nom] -> "der"
DET[GEN=fem,KAS=dat] -> "der"
DET[GEN=fem,KAS=nom] -> "die"
DET[GEN=fem,KAS=akk] -> "die"
DET[GEN=neut,KAS=nom] -> "das"
DET[GEN=neut,KAS=akk] -> "das"
N[GEN=mask] -> "Mann"
N[GEN=fem] -> "Frau"
N[GEN=neut] -> "Buch"
VP -> V[SUBCAT=ditr, VAL1=?x, VAL2=?y] NP[KAS=?x] NP[KAS=?y]
VP -> V[VAL=?x,SUBCAT=tr] NP[KAS=?x]
VP -> V[SUBCAT=intr]
V[SUBCAT=ditr, VAL1=dat, VAL2=akk] -> "gibt" | "schenkt"
V[SUBCAT=intr] -> "schläft"
V[SUBCAT=tr,VAL=dat] -> "gefällt"
V[SUBCAT=tr,VAL=akk] -> "kennt"
"""
pos_sentences.extend([
"das Buch gefällt der Frau",
"das Buch kennt die Frau"
])
neg_sentences.extend([
"der Mann schläft das Buch",
"die Frau gefällt das Buch",
"das Buch kennt",
"die Frau gibt das Buch",
"die Frau gibt die Frau das Buch"
])
test_grammar(grammar, pos_sentences)
test_grammar(grammar, neg_sentences)
Explanation: Exercise 2 CFG: Case
Next, case constraints are to be integrated into the grammar:
There is only one noun phrase in the nominative (the subject).
Depending on the verb's valency slots, only noun phrases in the correct cases are to be accepted.
Optional: try to account for the free word order of German.
End of explanation
grammar = """
PLEASE COPY THE GRAMMAR FROM (2) ABOVE AFTER COMPLETING IT
"""
pos_sentences.extend([
"die Männer geben der Frau das Buch",
"die Bücher gefallen der Frau",
"die Frauen schlafen"
])
neg_sentences.extend([
"der Mann geben der Frau das Buch",
"das Buch gefällt der Frauen",
"die Frauen schläft"
])
Explanation: Homework
Exercise 7: Plural for the subject
Extend the grammar built in the in-class exercises so that the subject can also be put into the plural.
To do this, you need to:
1. Create lexical rules for the plural forms of the verbs, adjectives and nouns (nominative is sufficient).
1. Complete the lexical rules for the article form "die" with the correct feature structure for the plural.
1. Formulate a number-agreement condition between verb and subject.
A hedged sketch of such additions is given right below.
End of explanation
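A minimal sketch of the kind of lexical and structural additions these steps call for. The concrete plural forms and rule shapes below are illustrative assumptions, not the official solution, and would still have to be merged into the grammar from exercise (2):
# Sketch only: possible plural additions (NUM feature) for the grammar above.
plural_fragment = """
S -> NP[KAS=nom,NUM=?n] VP[NUM=?n]
NP[KAS=?y,NUM=?n] -> DET[GEN=?x,KAS=?y,NUM=?n] NOM[GEN=?x,NUM=?n]
DET[KAS=nom,NUM=pl] -> "die"
N[GEN=mask,NUM=pl] -> "Männer"
N[GEN=fem,NUM=pl] -> "Frauen"
N[GEN=neut,NUM=pl] -> "Bücher"
VP[NUM=?n] -> V[SUBCAT=intr,NUM=?n]
V[SUBCAT=intr,NUM=sg] -> "schläft"
V[SUBCAT=intr,NUM=pl] -> "schlafen"
V[SUBCAT=ditr,VAL1=dat,VAL2=akk,NUM=pl] -> "geben"
V[SUBCAT=tr,VAL=dat,NUM=pl] -> "gefallen"
"""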
pos_sentences.extend([
"heute gibt der Mann der Frau das Buch",
"der Mann gibt heute der Frau das Buch",
"der Mann gibt der Frau heute das Buch",
"der Mann gibt der Frau das Buch heute"
])
neg_sentences.extend([
"heute der Mann gibt der Frau das Buch"
])
Explanation: Exercise 8: Adverbs and verb-second order
Now add the two adverbs "heute" and "morgen" to the grammar. In principle, adverbs can be placed very freely within a sentence. A peculiarity of German, however, is the so-called verb-second order (Verbzweitstellung), which becomes clear in sentences such as "Heute schläft der Mann."
Try to implement all possibilities (a hedged sketch follows below):
End of explanation
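A hedged sketch of one possible treatment of the adverbs and of verb-second order; it is illustrative only, and further rules are needed to cover every adverb position of the transitive and ditransitive patterns:
# Sketch only: adverbs and verb-second order.
adverb_fragment = """
ADV -> "heute" | "morgen"
# Adverb in the Vorfeld: the finite verb stays in second position.
S -> ADV V[SUBCAT=intr,NUM=?n] NP[KAS=nom,NUM=?n]
# Adverb in the Mittelfeld or at the end of the clause.
VP[NUM=?n] -> VP[NUM=?n] ADV
"""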
<END_TASK> |
15,501 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Creating TCGA cohorts (part 2)
This notebook will provide another example of building a cohort, this time based on the types of samples available.
Step1: Many different types of samples were obtained from the TCGA participants, and details about these samples are available in the Biospecimen data table. This next query shows how many samples exist of each type, as well as the full names and abbreviations of each type
Step2: Note that there are many types of tumor samples
Step3: In this example, let's assume that we would like to do a study that requires a primary tumor sample and a matched-normal (adjacent) tissue sample. In order to find out which patients provided which types of samples, we need to query the Biospecimen data table. This next query module uses two sub-queries, one to get all patients with TP samples and another to get all patients with NT samples. The final query joins these two and returns a single list of patients.
Step4: It might be interesting to find out what the distribution of tumor types is for this list of patients with matched tumor-normal sample pairs. We can define a new SQL module that refers to the results of a previously defined query as long as we pass that reference in when we call bq.Query() | <ASSISTANT_TASK:>
Python Code:
import gcp.bigquery as bq
Explanation: Creating TCGA cohorts (part 2)
This notebook will provide another example of building a cohort, this time based on the types of samples available.
End of explanation
%%sql
SELECT
SampleType,
SampleTypeLetterCode,
COUNT(*) AS n
FROM
[isb-cgc:tcga_201607_beta.Biospecimen_data]
GROUP BY
SampleType,
SampleTypeLetterCode,
ORDER BY
n DESC
Explanation: Many different types of samples were obtained from the TCGA participants, and details about these samples are available in the Biospecimen data table. This next query shows how many samples exist of each type, as well as the full names and abbreviations of each type:
End of explanation
%%sql
SELECT
SampleTypeLetterCode,
COUNT(*) AS n
FROM (
SELECT
SampleBarcode,
SampleTypeLetterCode
FROM
[isb-cgc:tcga_201607_beta.mRNA_UNC_HiSeq_RSEM]
GROUP BY
SampleBarcode,
SampleTypeLetterCode )
GROUP BY
SampleTypeLetterCode
ORDER BY
n DESC
Explanation: Note that there are many types of tumor samples: primary, metastatic, recurrent, etc, although the vast majority are samples from primary tumors. In the TCGA project, almost all tumor samples were assayed on multiple platforms for mRNA and miRNA expression, DNA methylation, DNA copy-number, and either exome- or whole-genome DNA sequence. For some tumor samples, protein activity was also measured using RPPA arrays. When available, adjacent "normal" tissue samples were also assayed on a subset of these platforms. The "blood normal" samples were primarily used only as a reference source of germline DNA in order to call somatic mutations.
We can do a similar counting exercise of the sample types represented in one of the molecular data tables, using one of the mRNA expression data tables:
End of explanation
%%sql --module patients_with_matched_samples
DEFINE QUERY patients_tp
SELECT
ParticipantBarcode
FROM
[isb-cgc:tcga_201607_beta.Biospecimen_data]
WHERE
( SampleTypeLetterCode="TP" )
GROUP BY
ParticipantBarcode
ORDER BY
ParticipantBarcode
DEFINE QUERY patients_nt
SELECT
ParticipantBarcode
FROM
[isb-cgc:tcga_201607_beta.Biospecimen_data]
WHERE
( SampleTypeLetterCode="NT" )
GROUP BY
ParticipantBarcode
ORDER BY
ParticipantBarcode
DEFINE QUERY patients_both
SELECT
nt.ParticipantBarcode AS ParticipantBarcode
FROM ( $patients_nt ) AS nt
JOIN
( $patients_tp ) AS tp
ON
nt.ParticipantBarcode = tp.ParticipantBarcode
ORDER BY
ParticipantBarcode
bq.Query(patients_with_matched_samples).results().to_dataframe()
Explanation: In this example, let's assume that we would like to do a study that requires a primary tumor sample and a matched-normal (adjacent) tissue sample. In order to find out which patients provided which types of samples, we need to query the Biospecimen data table. This next query module uses two sub-queries, one to get all patients with TP samples and another to get all patients with NT samples. The final query joins these two and returns a single list of patients.
End of explanation
%%sql --module tumor_type_counts
# now we'll use this list to find what types of tumors these patients
# belong to:
SELECT
Study,
COUNT(*) AS n
FROM
[isb-cgc:tcga_201607_beta.Clinical_data]
WHERE
ParticipantBarcode IN ($patients_both)
GROUP BY
Study
ORDER BY
n DESC
bq.Query(tumor_type_counts,
patients_nt=patients_with_matched_samples.patients_nt,
patients_tp=patients_with_matched_samples.patients_tp,
patients_both=patients_with_matched_samples.patients_both).results().to_dataframe()
Explanation: It might be interesting to find out what the distribution of tumor types is for this list of patients with matched tumor-normal sample pairs. We can define a new SQL module that refers to the results of a previously defined query as long as we pass that reference in when we call bq.Query():
End of explanation
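The gcp.bigquery (Cloud Datalab) API used in this notebook has since been deprecated. As a hedged sketch only, the final tumor-type count could be reproduced with the standard google-cloud-bigquery client roughly as follows; the client setup, credentials and standard-SQL table names are assumptions, not part of the original notebook:
# Sketch: the same patient join and tumor-type count with google-cloud-bigquery.
from google.cloud import bigquery

client = bigquery.Client()  # assumes application-default credentials are configured
sql = """
SELECT Study, COUNT(*) AS n
FROM `isb-cgc.tcga_201607_beta.Clinical_data`
WHERE ParticipantBarcode IN (
  SELECT nt.ParticipantBarcode
  FROM (SELECT ParticipantBarcode
        FROM `isb-cgc.tcga_201607_beta.Biospecimen_data`
        WHERE SampleTypeLetterCode = 'NT'
        GROUP BY ParticipantBarcode) AS nt
  JOIN (SELECT ParticipantBarcode
        FROM `isb-cgc.tcga_201607_beta.Biospecimen_data`
        WHERE SampleTypeLetterCode = 'TP'
        GROUP BY ParticipantBarcode) AS tp
  ON nt.ParticipantBarcode = tp.ParticipantBarcode)
GROUP BY Study
ORDER BY n DESC
"""
df = client.query(sql).to_dataframe()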
<END_TASK> |
15,502 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Testing Band 9
Creation of Data Cubes
Creation of synthetic ALMA-like data cubes using the ASYDO project.
Parameters
Step1: To select the isolist, the wavelength range of the cube is obtained and the theoretical Splatalogue catalog is searched. All isotopes that have spectral lines within the range of the cube are determined.
Step2: Then, we get all the possible combination sets of the previously determined isotopes.
Step3: Finally, randomly chosen sets among those previously determined are selected in order to generate the data cubes.
Step4: Generate Datacubes in Band 9, Fixed Width
Step5: Generate Datacubes in Band 9, Variable (TO DO
Step6: Creation of Dictionary
We create the words necessary to fit a sparse coding model to the observed spectra in the previously created cube.
It returns a DataFrame with a vector for each theoretical line for each isotope in molist
Step7: Recalibration of Dictionary
Step8: Testing Band 7
Generate Datacubes in Band 7, Fixed Width
Step9: Generate Datacubes in Band 7, Variable (TO DO
Step10: Creation of Dictionary
Step11: Training
Recalibration of the Dictionary
Step12: Testing
Step13: Blending case
Step14: Hyperfine lines case
Step16: Double peaks for single Line | <ASSISTANT_TASK:>
Python Code:
cube_params = {
'freq' : 604000,
'alpha' : 0,
'delta' : 0,
'spe_bw' : 4000,
'spe_res' : 1,
's_f' : 4,
's_a' : 0}
Explanation: Testing Band 9
Creation of Data Cubes
Creation of synthetic ALMA-like data cubes using the ASYDO project.
Parameters:
isolist : subset of the list of isotopes to generate a cube
cube_params:
freq : spectral center (frequency)
alpha : right-ascension center (degrees)
delta : declination center (degrees)
spe_res : spectral resolution (MHz)
spe_bw : spectral bandwidth (MHz)
s_f, s_a : skew-normal distribution; s_f: full width at half maximum, s_a (alpha): kurtosis parameter.
End of explanation
# freq_init = cube_params['freq'] - cube_params['spe_bw']/2.0
# freq_end = cube_params['freq'] + cube_params['spe_bw']/2.0
# molist_present = theoretical_presence(molist, freq_init, freq_end)
Explanation: To select the isolist, the wavelength range of the cube is obtained and the theoretical Splatalogue catalog is searched. All isotopes that have spectral lines within the range of the cube are determined.
End of explanation
# all_subsets = sum(map(lambda r: list(combinations(molist_present, r)),
# range(1, len(molist_present)+1)), [])
Explanation: Then, we get all the possible combination sets of the previously determined isotopes.
End of explanation
# random_index = np.random.randint(len(all_subsets), size=25)
# isolist = []
# for i in random_index:
# isolist.append(all_subsets[i])
# save_isolist(isolist)
isolist = load_isolist()
Explanation: Finally, randomly chosen sets among those previously determined are selected in order to generate the data cubes.
End of explanation
# log=open('data/isolist_fixed_width.log', 'w')
# cube_n = 0
# cube_name = 'data/cube_fixed_width_'
# for i in range(0, 25):
# Creation of the cube
# gen_cube(isolist[i], cube_params, cube_name + str(cube_n))
# log.write(cube_name + ': ' + str(isolist[i]) + '\n')
# cube_n += 1
# log.close()
Explanation: Generate Datacubes in Band 9, Fixed Width
End of explanation
# log=open('data/isolist_variable_width.log', 'w')
# cube_n = 25
# cube_name = 'data/cube_variable_width_'
# for i in range(0, 25):
# Creation of the cube
# gen_cube_variable_width(isolist[i], cube_params, cube_name + str(cube_n))
# log.write(cube_name + ': ' + str(isolist[i]) + '\n')
# cube_n += 1
# log.close()
Explanation: Generate Datacubes in Band 9, Variable (TO DO: Fix variable width in ASYDO) Width
End of explanation
# dictionary = gen_words(molist, cube_params)
# save_dictionary(dictionary, 'band_9')
# dictionary = gen_words(molist, cube_params, True)
# save_dictionary(dictionary, 'band_9_dual')
# dictionary = load_dictionary('band_9')
dictionary = load_dictionary('band_9_dual')
Explanation: Creation of Dictionary
We create the words necessary to fit a sparse coding model to the observed spectra in the previously created cube.
It returns a DataFrame with a vector for each theoretical line for each isotope in molist
End of explanation
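gen_words itself is defined elsewhere in the project, so the following is only a hedged sketch of the idea it implements: each dictionary column ("word") is a line profile sampled on the cube's frequency axis, centered on one theoretical line frequency. The Gaussian shape and the helper name are illustrative assumptions:
# Sketch only: what a single dictionary "word" could look like.
def sketch_word(line_freq, cube_params):
    # Frequency axis of the cube, in MHz.
    freqs = np.arange(cube_params['freq'] - cube_params['spe_bw'] / 2.0,
                      cube_params['freq'] + cube_params['spe_bw'] / 2.0,
                      cube_params['spe_res'])
    # Convert the FWHM s_f into a Gaussian sigma.
    sigma = cube_params['s_f'] / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    word = np.exp(-0.5 * ((freqs - line_freq) / sigma) ** 2)
    return pd.Series(word, index=freqs)

# e.g. sketch_word(604500.0, cube_params).plot() for a hypothetical line at 604500 MHz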
confusion_matrix = []
results = []
noise_pixel = (0,0)
train_pixel = (1,1)
for i in range(0, 1):
if (i == 0):
cube_name = 'data/cube_fixed_width_'
if (i == 25):
cube_name = 'data/cube_variable_width_'
file_path = cube_name + str(i) + '.fits'
train_pixel = (1, 1)
dictionary_recal, detected_peaks = recal_words(file_path, dictionary, cube_params, train_pixel, noise_pixel)
X = get_values_filtered_normalized(file_path, train_pixel, cube_params)
y_train = get_fortran_array(np.asmatrix(X))
dictionary_recal_fa = np.asfortranarray(dictionary_recal,
dtype= np.double)
lambda_param = 0
for idx in range(0, len(detected_peaks)):
if detected_peaks[idx] != 0:
lambda_param += 1
param = {
'lambda1' : lambda_param,
# 'L': 1,
'pos' : True,
'mode' : 0,
'ols' : True,
'numThreads' : -1}
alpha = spams.lasso(y_train, dictionary_recal_fa, **param).toarray()
total = np.inner(dictionary_recal_fa, alpha.T)
if i == 0:
confusion_matrix = [get_confusion_matrix(dictionary_recal, alpha,
file_path, cube_params, True)]
results = [get_results(confusion_matrix[i])]
else:
confusion_matrix.append(get_confusion_matrix(dictionary_recal, alpha,
file_path, cube_params, True))
results.append(get_results(confusion_matrix[i]))
print(i)
Explanation: Recalibration of Dictionary
End of explanation
cube_params['freq'] = 277000
# log=open('data/isolist_fixed_width.log', 'w')
# cube_n = 50
# cube_name = 'data/cube_fixed_width_'
# for i in range(0, 25):
# # Creation of the cube
# gen_cube(isolist[i], cube_params, cube_name + str(cube_n))
# log.write(cube_name + ': ' + str(isolist[i]) + '\n')
# cube_n += 1
# log.close()
Explanation: Testing Band 7
Generate Datacubes in Band 7, Fixed Width
End of explanation
# log=open('data/isolist_fixed_width.log', 'w')
# cube_n = 75
# cube_name = 'data/cube_variable_width_'
# for i in range(0, 25):
# Creation of the cube
# gen_cube_variable_width(isolist[i], cube_params, cube_name + str(cube_n))
# log.write(cube_name + ': ' + str(isolist[i]) + '\n')
# cube_n += 1
# log.close()
Explanation: Generate Datacubes in Band 7, Variable (TO DO: Fix variable width in ASYDO) Width
End of explanation
# dictionary = gen_words(molist, cube_params)
# save_dictionary(dictionary, 'band_7')
# dictionary = gen_words(molist, cube_params, True)
# save_dictionary(dictionary, 'band_7_dual')
# dictionary = load_dictionary('band_7')
dictionary = load_dictionary('band_7_dual')
Explanation: Creation of Dictionary
End of explanation
for i in range(50, 100):
if (i == 50):
cube_name = 'data/cube_fixed_width_'
if (i == 75):
cube_name = 'data/cube_variable_width_'
file_path = cube_name + str(i) + '.fits'
train_pixel = (1, 1)
dictionary_recal, detected_peaks = recal_words(file_path, dictionary, cube_params,
                                                   train_pixel, noise_pixel)
X = get_values_filtered_normalized(file_path, train_pixel, cube_params)
y_train = get_fortran_array(np.asmatrix(X))
dictionary_recal_fa = np.asfortranarray(dictionary_recal,
dtype= np.double)
lambda_param = 0
for idx in range(0, len(detected_peaks)):
if detected_peaks[idx] != 0:
lambda_param += 1
param = {
'lambda1' : lambda_param,
# 'L': 1,
'pos' : True,
'mode' : 0,
'ols' : True,
'numThreads' : -1}
alpha = spams.lasso(y_train, dictionary_recal_fa, **param).toarray()
total = np.inner(dictionary_recal_fa, alpha.T)
confusion_matrix.append(get_confusion_matrix(dictionary_recal, alpha,
file_path, cube_params, True))
results.append(get_results(confusion_matrix[i]))
print(i)
Explanation: Training
Recalibration of the Dictionary
End of explanation
latexify(fig_height=6.9)
plt.subplot(3, 1, 1)
plt.title("Precision of Predictions for Fixed Width")
plt.xlabel("Precision")
plt.ylabel("Amount")
plt.legend()
plt.hist([np.mean(means["Precision"]) for means in results[:25]], 25, normed=True, color='b', alpha=1, label='Band 9')
plt.hist([np.mean(means["Precision"]) for means in results[50:75]], 25, normed=True, color='r', alpha=0.75, label='Band 7')
plt.subplot(3, 1, 2)
plt.title("Recall of Predictions for Fixed Width")
plt.xlabel("Recall")
plt.ylabel("Amount")
plt.legend()
plt.hist([np.mean(means["Recall"]) for means in results[:25] if np.mean(means["Recall"]) > 0.3 and np.mean(means["Recall"]) < 1], 25, normed=True, color='b', alpha=1, label='Band 9')
plt.hist([np.mean(means["Recall"]) for means in results[50:75]], 25, normed=True, color='r', alpha=0.75, label='Band 7')
plt.subplot(3, 1, 3)
plt.title("F-Score of Predictions for Fixed Width")
plt.xlabel("F-Score")
plt.ylabel("Amount")
plt.legend()
plt.hist([np.mean(means["F-Score"]) for means in results[:25]], 25, normed=True, color='b', alpha=1, label='Band 9')
plt.hist([np.mean(means["F-Score"]) for means in results[50:75]], 25, normed=True, color='r', alpha=0.75, label='Band 7')
plt.tight_layout()
plt.savefig("images/hist1.pdf")
latexify(fig_height=6.9)
plt.subplot(3, 1, 1)
plt.title("Precision of Predictions for Variable Width")
plt.xlabel("Precision")
plt.ylabel("Amount")
plt.legend()
plt.hist([np.mean(means["Precision"]) for means in results[25:50]], 25, normed=True, color='g', alpha=1, label='Band 9')
plt.hist([np.mean(means["Precision"]) for means in results[75:]], 25, normed=True, color='y', alpha=0.75, label='Band 7')
plt.subplot(3, 1, 2)
plt.title("Recall of Predictions for Variable Width")
plt.xlabel("Recall")
plt.ylabel("Amount")
plt.legend()
plt.hist([np.mean(means["Recall"]) for means in results[25:50] if np.mean(means["Recall"]) > 0.3], 25, normed=True, color='g', alpha=1, label='Band 9')
plt.hist([np.mean(means["Recall"]) for means in results[75:]], 25, normed=True, color='y', alpha=0.75, label='Band 7')
plt.subplot(3, 1, 3)
plt.title("F-Score of Predictions for Variable Width")
plt.xlabel("F-Score")
plt.ylabel("Amount")
plt.legend()
plt.hist([np.mean(means["F-Score"]) for means in results[25:50]], 25, normed=True, color='g', alpha=1, label='Band 9')
plt.hist([np.mean(means["F-Score"]) for means in results[75:]], 25, normed=True, color='y', alpha=0.75, label='Band 7')
plt.tight_layout()
plt.savefig("images/hist2.pdf")
Explanation: Testing
End of explanation
latexify()
file_path = "data/cube_fixed_width_6.fits"
train_pixel = (1, 1)
x = get_freq_index_from_params(cube_params)
y = get_values_filtered_normalized(file_path, train_pixel, cube_params)
plt.plot(x, y)
plt.legend(loc='upper right')
plt.xlim(xmin = 605075, xmax = 605275)
plt.ylim(ymin = -1,ymax = 1)
lines = get_lines_from_fits(file_path)
current_isotopes = [""]
for line in lines:
isotope_frequency = int(line[1])
isotope_name = line[0] + "-f" + str(line[1])
if isotope_frequency in range(605075, 605275) \
and line[0] not in current_isotopes:
# Shows lines really present
plt.axvline(x=isotope_frequency, ymin=0, ymax= 3, color='g', linewidth=2, label='Present Line')
plt.text(isotope_frequency + 1.5, -0.125, isotope_name, size='8', rotation='vertical')
current_isotopes.append(line[0])
plt.title("Blending case")
plt.xlabel("Frequency [MHz]")
ax = plt.gca()
ax.get_xaxis().get_major_formatter().set_useOffset(False)
plt.ylabel("Normalized Temperature")
plt.legend()
plt.savefig("images/blending.pdf")
latexify()
file_path = "data/cube_fixed_width_6.fits"
train_pixel = (1, 1)
x = get_freq_index_from_params(cube_params)
y = get_values_filtered_normalized(file_path, train_pixel, cube_params)
plt.plot(x, y)
plt.legend(loc='upper right')
plt.xlim(xmin = 605140, xmax = 605200)
plt.ylim(ymin = -1,ymax = 1)
lines = get_lines_from_fits(file_path)
current_isotopes = [""]
for line in lines:
isotope_frequency = int(line[1])
isotope_name = line[0] + "-f" + str(line[1])
if isotope_frequency in range(605140, 605275) \
and line[0] not in current_isotopes:
# Shows lines really present
plt.axvline(x=isotope_frequency, ymin=0, ymax= 3, color='g', linewidth=2, label='Present Line', linestyle='--')
plt.text(isotope_frequency + 1.5, -0.125, isotope_name, size='8', rotation='vertical', color='g')
current_isotopes.append(line[0])
for idx in range(0, len(detected_peaks)):
if detected_peaks[idx] != 0 and x[idx] in range(605075, 605200):
plt.axvline(x=x[idx], ymin=0, ymax= 1, color='r')
frecuencia = x[idx]
m = 0
detections = alpha_columns[dictionary_recal.ix[frecuencia] != 0]
for pch, elements in enumerate(detections):
m = m + 0.75
# print(detections.index[pch])
probability = round(elements * 100 / np.sum(alpha_columns[dictionary_recal.ix[frecuencia] != 0]))
# print(probability)
match = "match " + str(int(probability)) + " \%"
if '33SO2-f605162.1267' in detections.index[pch]:
plt.text(frecuencia + 1.5, 0.525, match, size='10', rotation='vertical', color='r')
break
elif 'OS17O-f605172.0102' in detections.index[pch]:
plt.text(frecuencia + 1.5, 0.525, match, size='10', rotation='vertical', color='r')
plt.title("Blending case")
plt.xlabel("Frequency [MHz]")
ax = plt.gca()
ax.get_xaxis().get_major_formatter().set_useOffset(False)
plt.ylabel("Normalized Temperature")
plt.legend()
plt.savefig("images/blending.pdf")
Explanation: Blending case
End of explanation
file_path = "data/cube_fixed_width_6.fits"
dictionary = load_dictionary('band_9_dual')
train_pixel = (1, 1)
dictionary_recal, detected_peaks = recal_words(file_path, dictionary, cube_params, 4)
X = get_values_filtered_normalized(file_path, train_pixel, cube_params)
y_train = get_fortran_array(np.asmatrix(X))
dictionary_recal_fa = np.asfortranarray(dictionary_recal,
dtype= np.double)
lambda_param = 0
for idx in range(0, len(detected_peaks)):
if detected_peaks[idx] != 0:
lambda_param += 1
param = {
'lambda1' : lambda_param,
# 'L': 1,
'pos' : True,
'mode' : 0,
'ols' : True,
'numThreads' : -1}
alpha = spams.lasso(y_train, dictionary_recal_fa, **param).toarray()
total = np.inner(dictionary_recal_fa, alpha.T)
latexify()
file_path = "data/cube_fixed_width_6.fits"
train_pixel = (1, 1)
x = get_freq_index_from_params(cube_params)
y = get_values_filtered_normalized(file_path, train_pixel, cube_params)
plt.plot(x, y)
plt.legend(loc='upper right')
plt.xlim(xmin = 605350, xmax = 605390)
plt.ylim(ymin = -1,ymax = 1.)
lines = get_lines_from_fits(file_path)
for i in range(0, len(lines)):
isotope_frequency = int(lines[i][1])
isotope_name = lines[i][0] + "-f" + str(lines[i][1])
if isotope_frequency in range(605335, 605375):
# Shows lines really present
plt.axvline(x=isotope_frequency, ymin=0, ymax= 3, color='g', linewidth=2, label='Present Line', linestyle='--')
if (i == 27):
plt.text(isotope_frequency + 1.5, -0.125, isotope_name.split('.')[0], size='8', rotation='vertical', color='g')
elif (i == 28):
plt.text(isotope_frequency + 2.25, -0.125, isotope_name.split('.')[0], size='8', rotation='vertical', color='g')
else:
plt.text(isotope_frequency + 1, -0.125, isotope_name.split('.')[0], size='8', rotation='vertical', color='g')
alpha_columns = pd.Series(alpha[:,0])
alpha_columns.index = dictionary_recal.columns
alpha_columns = alpha_columns[alpha_columns > 0]
hardcoder = 0
for idx in range(0, len(detected_peaks)):
if detected_peaks[idx] != 0 and x[idx] in range(605350, 605390):
plt.axvline(x=x[idx], ymin=0, ymax= 1, color='r')
frecuencia = x[idx]
m = 0
detections = alpha_columns[dictionary_recal.ix[frecuencia] != 0]
for pch, elements in enumerate(detections):
m = m + 0.75
print(detections.index[pch])
probability = round(elements * 100 / np.sum(alpha_columns[dictionary_recal.ix[frecuencia] != 0]))
print(probability)
match = "match " + str(int(probability)) + " \%"
if hardcoder == 0:
plt.text(frecuencia + 1.5, 0.525, match, size='10', rotation='vertical', color='r')
hardcoder = hardcoder + 1
break
else:
hardcoder = hardcoder - 1
continue
plt.title("Hyperfine lines case")
plt.xlabel("Frequency [MHz]")
ax = plt.gca()
ax.get_xaxis().get_major_formatter().set_useOffset(False)
plt.ylabel("Normalized Temperature")
plt.legend()
plt.savefig("images/hyperfine.pdf")
file_path = "data/cube_fixed_width_1.fits"
dictionary = load_dictionary('band_9_dual')
train_pixel = (1, 1)
dictionary_recal, detected_peaks = recal_words(file_path, dictionary, cube_params, 4)
X = get_values_filtered_normalized(file_path, train_pixel, cube_params)
y_train = get_fortran_array(np.asmatrix(X))
dictionary_recal_fa = np.asfortranarray(dictionary_recal,
dtype= np.double)
lambda_param = 0
for idx in range(0, len(detected_peaks)):
if detected_peaks[idx] != 0:
lambda_param += 1
param = {
'lambda1' : lambda_param,
# 'L': 1,
'pos' : True,
'mode' : 0,
'ols' : True,
'numThreads' : -1}
alpha = spams.lasso(y_train, dictionary_recal_fa, **param).toarray()
total = np.inner(dictionary_recal_fa, alpha.T)
Explanation: Hyperfine lines case
End of explanation
latexify()
file_path = "data/cube_fixed_width_1.fits"
train_pixel = (1, 1)
x = get_freq_index_from_params(cube_params)
y = get_values_filtered_normalized(file_path, train_pixel, cube_params)
plt.plot(x, y)
plt.legend(loc='upper right')
plt.xlim(xmin = 604356, xmax = 604456)
plt.ylim(ymin = -1,ymax = 1)
lines = get_lines_from_fits(file_path)
for line in lines:
isotope_frequency = int(line[1])
isotope_name = line[0] + "-f" + str(line[1])
if isotope_frequency in range(604356, 604456):
# Shows lines really present
plt.axvline(x=isotope_frequency, ymin=0, ymax= 3, color='g', linewidth=2, label='Present Line', linestyle='--')
plt.text(isotope_frequency + 2, 0, isotope_name, size='8', rotation='vertical', color='g')
alpha_columns = pd.Series(alpha[:,0])
alpha_columns.index = dictionary_recal.columns
alpha_columns = alpha_columns[alpha_columns > 0]
for idx in range(0, len(detected_peaks)):
if detected_peaks[idx] != 0 and x[idx] in range(604356, 604456):
plt.axvline(x=x[idx], ymin=0, ymax= 1, color='r')
frecuencia = x[idx]
m = 0
detections = alpha_columns[dictionary_recal.ix[frecuencia] != 0]
for pch, elements in enumerate(detections):
m = m + 0.75
print(detections.index[pch])
probability = round(elements * 100 / np.sum(alpha_columns[dictionary_recal.ix[frecuencia] != 0]))
print(probability)
match = "match " + str(int(probability)) + " \%"
plt.text(frecuencia + 2.5, 0.725, match, size='10', rotation='vertical', color='r')
plt.title("Double peaks for single Line")
plt.xlabel("Frequency [MHz]")
ax = plt.gca()
ax.get_xaxis().get_major_formatter().set_useOffset(False)
plt.ylabel("Normalized Temperature")
plt.legend()
plt.savefig("images/doublepeak.pdf")
np.mean([np.mean(means["F-Score"]) for means in results])
min_distance_req_list = pd.DataFrame([])
for i in range(0, 100):
if (i == 0 or i == 50):
cube_name = 'data/cube_fixed_width_'
if (i == 25 or i == 75):
cube_name = 'data/cube_variable_width_'
file_path = cube_name + str(i) + '.fits'
lines = get_lines_from_fits(file_path)
sorted_lines = sorted([lines[idx][1] for idx in range(0, len(lines) )])
min_distance_req = True
last_freq = float(sorted_lines[0])
for idx in range(1, len(sorted_lines)):
distance = float(sorted_lines[idx]) - last_freq
if(distance <= 1):
min_distance_req = False
break
last_freq = float(sorted_lines[idx])
if len(min_distance_req_list) == 0:
if (min_distance_req):
min_distance_req_list = [i]
else:
if (min_distance_req):
min_distance_req_list.append(i)
min_distance_req_list
results_filtered = [results[min_distance_req_list[0]]]
for ix in min_distance_req_list[1:]:
results_filtered.append(results[ix])
np.mean([np.mean(means["F-Score"]) for means in results_filtered])
cf_filtered = [confusion_matrix[min_distance_req_list[0]]]
for ix in min_distance_req_list[1:]:
cf_filtered.append(confusion_matrix[ix])
confusion_matrix[0]
latexify()
n = 5
fig, axes = plt.subplots(nrows=4, ncols=5)
filtered_matrices = confusion_matrix[:20]
for ax, matrix in zip(axes.flat, filtered_matrices):
order_index = np.argsort([float(f.split('f')[1].split('&')[0]) for f in matrix.index])
order_columns = np.argsort([float(f.split('f')[1].split('&')[0]) for f in matrix.columns])
im = ax.matshow(matrix[order_columns].iloc[order_index], cmap='hot')
ax.set_xticklabels([])
ax.set_yticklabels([])
fig.suptitle("Modified Confusion Matrices")
fig.colorbar(im, ax=axes.ravel().tolist())
plt.savefig("images/confusion_matrix.pdf")
latexify()
# Plot Precision-Recall curve for each cube
precision_avg = [np.mean(means["Precision"]) for means in results[:50]]
recall_avg = [np.mean(means["Recall"]) for means in results[:50]]
area = simps(precision_avg, dx=0.01)
plt.clf()
plt.plot(np.sort(recall_avg),
-np.sort(-np.ones(1)*precision_avg),
label='Overall')
precision_avg = [np.mean(means["Precision"]) for means in results[:25]]
recall_avg = [np.mean(means["Recall"]) for means in results[:25]]
area = simps(precision_avg, dx=0.01)
plt.plot(np.sort(recall_avg),
-np.sort(-np.ones(1)*precision_avg),
label='Fixed width')
precision_avg = [np.mean(means["Precision"]) for means in results[25:50]]
recall_avg = [np.mean(means["Recall"]) for means in results[25:50]]
area = simps(precision_avg, dx=0.01)
plt.plot(np.sort(recall_avg),
-np.sort(-np.ones(1)*precision_avg),
label='Variable width ')
plt.xlim([0.2, 1.0])
plt.ylim([0.6, 1.01])
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precision-Recall Curves Band 9')
plt.legend(loc="lower left")
plt.savefig("images/results1.pdf")
latexify()
# Plot Precision-Recall curve for each cube
from scipy.integrate import simps
precision_avg = [np.mean(means["Precision"]) for means in results[50:100]]
recall_avg = [np.mean(means["Recall"]) for means in results[50:100]]
area = simps(precision_avg, dx=0.01)
plt.clf()
plt.plot(np.sort(recall_avg),
-np.sort(-np.ones(1)*precision_avg),
label='Overall')
precision_avg = [np.mean(means["Precision"]) for means in results[50:75]]
recall_avg = [np.mean(means["Recall"]) for means in results[50:75]]
area = simps(precision_avg, dx=0.01)
plt.plot(np.sort(recall_avg),
-np.sort(-np.ones(1)*precision_avg),
label='Fixed Width')
precision_avg = [np.mean(means["Precision"]) for means in results[75:100]]
recall_avg = [np.mean(means["Recall"]) for means in results[75:100]]
area = simps(precision_avg, dx=0.01)
plt.plot(np.sort(recall_avg),
-np.sort(-np.ones(1)*precision_avg),
label='Variable Width ')
plt.xlim([0.415, 0.854])
plt.ylim([0.745, 0.96])
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.title('Precision-Recall Curves Band 7')
plt.legend(loc="lower left")
plt.savefig("images/results2.pdf")
def latexify(fig_width=None, fig_height=None, columns=1):
    """Set up matplotlib's RC params for LaTeX plotting.
Call this before plotting a figure.
Parameters
----------
fig_width : float, optional, inches
fig_height : float, optional, inches
    columns : {1, 2}
    """
# code adapted from http://www.scipy.org/Cookbook/Matplotlib/LaTeX_Examples
# Width and max height in inches for IEEE journals taken from
# computer.org/cms/Computer.org/Journal%20templates/transactions_art_guide.pdf
assert(columns in [1,2])
if fig_width is None:
fig_width = 4.89 if columns==1 else 6.9 # width in inches
if fig_height is None:
golden_mean = (sqrt(5)-1.0)/2.0 # Aesthetic ratio
fig_height = fig_width*golden_mean # height in inches
MAX_HEIGHT_INCHES = 24.0
if fig_height > MAX_HEIGHT_INCHES:
print("WARNING: fig_height too large:" + fig_height +
"so will reduce to" + MAX_HEIGHT_INCHES + "inches.")
fig_height = MAX_HEIGHT_INCHES
params = {'backend': 'ps',
              'text.latex.preamble': [r'\usepackage{gensymb}'],
'axes.labelsize': 8, # fontsize for x and y labels (was 10)
'axes.titlesize': 8,
'text.fontsize': 8, # was 10
'legend.fontsize': 8, # was 10
'xtick.labelsize': 10,
'ytick.labelsize': 8,
'text.usetex': True,
'figure.figsize': [fig_width,fig_height],
'font.family': 'serif'
}
matplotlib.rcParams.update(params)
def format_axes(ax):
for spine in ['top', 'right']:
ax.spines[spine].set_visible(False)
for spine in ['left', 'bottom']:
ax.spines[spine].set_color(SPINE_COLOR)
ax.spines[spine].set_linewidth(0.5)
ax.xaxis.set_ticks_position('bottom')
ax.yaxis.set_ticks_position('left')
for axis in [ax.xaxis, ax.yaxis]:
axis.set_tick_params(direction='out', color=SPINE_COLOR)
return ax
Explanation: Double peaks for single Line
End of explanation
<END_TASK> |
15,503 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
A Decision Tree of Observable Operators
Part 1
Step1: ..that was returned from a function called at subscribe-time
Step2: ..that was returned from an Action, Callable, Runnable, or something of that sort, called at subscribe-time
Step3: ...after a specified delay
Step4: ...that emits a sequence of items repeatedly
Step6: ...from scratch, with custom logic and cleanup (calling a function again and again)
Step7: ...for each observer that subscribes OR according to a condition at subscription time
Step8: ...that emits a sequence of integers
Step9: ...at particular intervals of time
Step10: ...after a specified delay (see timer)
...that completes without emitting items
Step11: ...that does nothing at all
Step12: ...that excepts | <ASSISTANT_TASK:>
Python Code:
reset_start_time(O.just)
stream = O.just({'answer': rand()})
disposable = subs(stream)
sleep(0.5)
disposable = subs(stream) # same answer
# all stream ops work, its a real stream:
disposable = subs(stream.map(lambda x: x.get('answer', 0) * 2))
Explanation: A Decision Tree of Observable Operators
Part 1: NEW Observables.
source: http://reactivex.io/documentation/operators.html#tree.
(transcribed to RxPY 1.5.7, Py2.7 / 2016-12, Gunther Klessinger, axiros)
This tree can help you find the ReactiveX Observable operator you’re looking for.
<h2 id="tocheading">Table of Contents</h2>
<div id="toc"></div>
Usage
There are no configured behind the scenes imports or code except startup.py, which defines output helper functions, mainly:
rst, reset_start_time: resets a global timer, in order to have use cases starting from 0.
subs(observable): subscribes to an observable, printing notifications with time, thread, value
All other code is explicitly given in the notebook.
Since all initialisation of tools is in the first cell, you always have to run the first cell again after an IPython kernel restart.
All other cells are autonomous.
In the use case functions, in contrast to the official examples we simply use rand quite often (mapped to randint(0, 100)), to demonstrate when/how often observable sequences are generated and when their result is buffered for various subscribers.
When in doubt then run the cell again, you might have been "lucky" and got the same random.
RxJS
The (bold printed) operator functions are linked to the official documentation and created roughly analogous to the RxJS examples. The rest of the TOC lines links to anchors within the notebooks.
Output
When the output is not in marble format we display it like so:
```
new subscription on stream 276507289
3.4 M [next] 1.4: {'answer': 42}
3.5 T1 [cmpl] 1.6: fin
``
where the lines are synchronously printed as they happen. "M" and "T1" would be thread names ("M" is the main thread).
For each use case, reset_start_time() (alias rst) resets a global timer to 0, and we show the offset to it in milliseconds with one decimal, as well as the offset to the start of the stream subscription. In the example, 3.4 and 3.5 are milliseconds since the global counter reset, while 1.4 and 1.6 are offsets to the start of the subscription.
I want to create a NEW Observable...
... that emits a particular item: just
End of explanation
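The helper functions used throughout this notebook (rst/reset_start_time, subs, log, header, rand, sleep, ItemGetter) live in startup.py, which is not shown here. Below is a minimal, hedged sketch of what such helpers might look like; every detail is an assumption made only so the examples are easier to reproduce:
# Sketch only: rough stand-ins for the startup.py helpers.
import time, threading
from random import randint
import rx

O = rx.Observable
sleep = time.sleep
_t0 = [time.time()]

def log(*msg):
    # elapsed ms since last reset, current thread name, message
    print('%6.1f %s %s' % ((time.time() - _t0[0]) * 1000,
                           threading.current_thread().name,
                           ' '.join(str(m) for m in msg)))

def header(msg):
    print('\n----- %s -----' % msg)

def reset_start_time(*ops, **kw):
    # reset the global timer; optionally show a title or an operator's docstring
    _t0[0] = time.time()
    if kw.get('title'):
        header(kw['title'])
    for op in ops:
        print(op.__doc__)
rst = reset_start_time

def rand():
    return randint(0, 100)

def subs(stream, name='observer'):
    # subscribe and print every notification with its timing
    log('new subscription on stream', hash(stream))
    return stream.subscribe(on_next=lambda x: log(name, '[next]', x),
                            on_error=lambda e: log(name, '[err ]', e),
                            on_completed=lambda: log(name, '[cmpl] fin'))

class ItemGetter(dict):
    # tiny helper so '%(value)s %(interval)s' formatting works on notification objects
    def __init__(self, obj):
        dict.__init__(self, ((k, getattr(obj, k))
                             for k in ('value', 'interval') if hasattr(obj, k)))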
print('There is a little API difference to RxJS, see Remarks:\n')
rst(O.start)
def f():
log('function called')
return rand()
stream = O.start(func=f)
d = subs(stream)
d = subs(stream)
header("Exceptions are handled correctly (an observable should never except):")
def breaking_f():
return 1 / 0
stream = O.start(func=breaking_f)
d = subs(stream)
d = subs(stream)
# startasync: only in python3 and possibly here(?) http://www.tornadoweb.org/en/stable/concurrent.html#tornado.concurrent.Future
#stream = O.start_async(f)
#d = subs(stream)
Explanation: ..that was returned from a function called at subscribe-time: start
End of explanation
rst(O.from_iterable)
def f():
log('function called')
return rand()
# aliases: O.from_, O.from_list
# 1.: From a tuple:
stream = O.from_iterable((1,2,rand()))
d = subs(stream)
# d = subs(stream) # same result
# 2. from a generator
gen = (rand() for j in range(3))
stream = O.from_iterable(gen)
d = subs(stream)
rst(O.from_callback)
# in my words: In the on_next of the subscriber you'll have the original arguments,
# potentially objects, e.g. user original http requests.
# i.e. you could merge those with the result stream of a backend call to
# a webservice or db and send the request.response back to the user then.
def g(f, a, b):
f(a, b)
log('called f')
stream = O.from_callback(lambda a, b, f: g(f, a, b))('fu', 'bar')
d = subs(stream.delay(200))
# d = subs(stream.delay(200)) # does NOT work
Explanation: ..that was returned from an Action, Callable, Runnable, or something of that sort, called at subscribe-time: from
End of explanation
rst()
# start a stream of 0, 1, 2, .. after 200 ms, with a delay of 100 ms:
stream = O.timer(200, 100).time_interval()\
.map(lambda x: 'val:%s dt:%s' % (x.value, x.interval))\
.take(3)
d = subs(stream, name='observer1')
# intermix directly with another one
d = subs(stream, name='observer2')
Explanation: ...after a specified delay: timer
End of explanation
rst(O.repeat)
# repeat is over *values*, not function calls. Use generate or create for function calls!
subs(O.repeat({'rand': time.time()}, 3))
header('do while:')
l = []
def condition(x):
l.append(1)
return True if len(l) < 2 else False
stream = O.just(42).do_while(condition)
d = subs(stream)
Explanation: ...that emits a sequence of items repeatedly: repeat
End of explanation
rx = O.create
rst(rx)
def f(obs):
# this function is called for every observer
obs.on_next(rand())
obs.on_next(rand())
obs.on_completed()
def cleanup():
log('cleaning up...')
return cleanup
stream = O.create(f).delay(200) # the delay causes the cleanup called before the subs gets the vals
d = subs(stream)
d = subs(stream)
sleep(0.5)
rst(title='Exceptions are handled nicely')
l = []
def excepting_f(obs):
for i in range(3):
l.append(1)
obs.on_next('%s %s (observer hash: %s)' % (i, 1. / (3 - len(l)), hash(obs) ))
obs.on_completed()
stream = O.create(excepting_f)
d = subs(stream)
d = subs(stream)
rst(title='Feature or Bug?')
print('(where are the first two values?)')
l = []
def excepting_f(obs):
for i in range(3):
l.append(1)
obs.on_next('%s %s (observer hash: %s)' % (i, 1. / (3 - len(l)), hash(obs) ))
obs.on_completed()
stream = O.create(excepting_f).delay(100)
d = subs(stream)
d = subs(stream)
# I think its an (amazing) feature, preventing to process functions results of later(!) failing functions
rx = O.generate
rst(rx)
The basic form of generate takes four parameters:
the first item to emit
a function to test an item to determine whether to emit it (true) or terminate the Observable (false)
a function to generate the next item to test and emit based on the value of the previous item
a function to transform items before emitting them
def generator_based_on_previous(x): return x + 1.1
def doubler(x): return 2 * x
d = subs(rx(0, lambda x: x < 4, generator_based_on_previous, doubler))
rx = O.generate_with_relative_time
rst(rx)
stream = rx(1, lambda x: x < 4, lambda x: x + 1, lambda x: x, lambda t: 100)
d = subs(stream)
Explanation: ...from scratch, with custom logic and cleanup (calling a function again and again): create
End of explanation
rst(O.defer)
# plural! (unique per subscription)
streams = O.defer(lambda: O.just(rand()))
d = subs(streams)
d = subs(streams) # gets other values - created by subscription!
# evaluating a condition at subscription time in order to decide which of two streams to take.
rst(O.if_then)
cond = True
def should_run():
return cond
streams = O.if_then(should_run, O.return_value(43), O.return_value(56))
d = subs(streams)
log('condition will now evaluate falsy:')
cond = False
streams = O.if_then(should_run, O.return_value(43), O.return_value(rand()))
d = subs(streams)
d = subs(streams)
Explanation: ...for each observer that subscribes OR according to a condition at subscription time: defer / if_then
End of explanation
rst(O.range)
d = subs(O.range(0, 3))
Explanation: ...that emits a sequence of integers: range
End of explanation
rst(O.interval)
d = subs(O.interval(100).time_interval()\
.map(lambda x, v: '%(interval)s %(value)s' \
% ItemGetter(x)).take(3))
Explanation: ...at particular intervals of time: interval
(you can .publish() it to get an easy "hot" observable)
End of explanation
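As noted above, .publish() turns this cold interval into a connectable ("hot") observable that only starts emitting once connect() is called, and whose values are shared by all subscribers. A short, hedged sketch (timings illustrative):
rst(title='interval as a hot observable via publish/connect')
published = O.interval(100).take(3).publish()
d1 = subs(published, name='early observer')
published.connect()  # values start flowing now and are shared
sleep(0.15)
d2 = subs(published, name='late observer')  # misses items emitted before it subscribed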
rst(O.empty)
d = subs(O.empty())
Explanation: ...after a specified delay (see timer)
...that completes without emitting items: empty
End of explanation
rst(O.never)
d = subs(O.never())
Explanation: ...that does nothing at all: never
End of explanation
rst(O.on_error)
d = subs(O.on_error(ZeroDivisionError))
Explanation: ...that excepts: throw
End of explanation
<END_TASK> |
15,504 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Test for anisotropic water-table aquifer
This test is taken from examples presented in MLU tutorial.
Step1: Set basic parameters
Step2: Load datasets of observation wells
Step3: Check how well TTim can simulate drawdowns in a vertically anisotropic water-table aquifer
Step4: Try calibrating model to find the parameters
Step5: Try calibrating model with stratified kaq
Step6: Summary of calibrated values | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from ttim import *
import pandas as pd
Explanation: Test for anisotropic water-table aquifer
This test is taken from examples presented in MLU tutorial.
End of explanation
b = 10 #aquifer thickness in m
Q = 172.8 #constant discharge rate in m^3/d
rw = 0.1 #well radius in m
rc = 0.1 #casing radius in m
Explanation: Set basic parameters:
End of explanation
r1 = 3.16
r2 = 31.6
data0 = np.loadtxt('data/moench_pumped.txt', skiprows=1)
t0 = data0[:, 0] / 60 / 60 / 24 #convert time from seconds to days
h0 = -data0[:, 1]
data1 = np.loadtxt('data/moench_ps1.txt', skiprows=1)
t1 = data1[:, 0] / 60 / 60 / 24 #convert time from seconds to days
h1 = -data1[:, 1]
data2 = np.loadtxt('data/moench_pd1.txt', skiprows=1)
t2 = data2[:, 0] / 60 / 60 / 24 #convert time from seconds to days
h2 = -data2[:, 1]
data3 = np.loadtxt('data/moench_ps2.txt', skiprows=1)
t3 = data3[:, 0] / 60 / 60 / 24 #convert time from seconds to days
h3 = -data3[:, 1]
data4 = np.loadtxt('data/moench_pd2.txt', skiprows=1)
t4 = data4[:, 0] / 60 / 60 / 24 #convert time from seconds to days
h4 = -data4[:, 1]
Explanation: Load datasets of observation wells:
End of explanation
#Set kaq, Saq, Sy and kzoverkh as given in Moench (1997)
kaq = 1e-4 * 60 * 60 * 24 #convert from m/s to m/d
Sy = 0.2
Saq = 2e-5
zh = 0.5 #kzoverkh
ml1 = Model3D(kaq=kaq, z=[0, -0.1, -2.1, -5.1, -10.1], Saq=[Sy, Saq, Saq, Saq], \
kzoverkh=zh, tmin=1e-5, tmax=3)
w1 = Well(ml1, xw=0, yw=0, rw=rw, rc=rc, tsandQ=[(0, Q)], layers=3)
ml1.solve()
hm1 = ml1.head(r1, 0, t1, layers=1)[0]
hm2 = ml1.head(r1, 0, t2, layers=3)[0]
hm3 = ml1.head(r2, 0, t3, layers=1)[0]
hm4 = ml1.head(r2, 0, t4, layers=3)[0]
hm0 = ml1.head(0, 0, t0, layers=3)[0]
plt.figure(figsize=(8, 5))
plt.loglog(t0, -h0, '.', label='pumped well')
plt.loglog(t0, -hm0, label='ttim pumped well')
plt.loglog(t1, -h1, '.', label='PS1')
plt.loglog(t1, -hm1, label='ttim PS1')
plt.loglog(t2, -h2, '.', label='PD1')
plt.loglog(t2, -hm2, label='ttim PD1')
plt.loglog(t3, -h3, '.', label='PS2')
plt.loglog(t3, -hm3, label='ttim PS2')
plt.loglog(t4, -h4, '.', label='PD2')
plt.loglog(t4, -hm4, label='ttim PD2')
plt.legend();
res1 = 0
res2 = 0
res3 = 0
res4 = 0
res0 = 0
for i in range(len(h1)):
r = (h1[i] - hm1[i]) ** 2
res1 = res1 + r
for i in range(len(h2)):
r = (h2[i] - hm2[i]) ** 2
res2 = res2 + r
for i in range(len(h3)):
r = (h3[i] - hm3[i]) ** 2
res3 = res3 + r
for i in range(len(h4)):
r = (h4[i] - hm4[i]) ** 2
res4 = res4 + r
for i in range(len(h0)):
r = (h0[i] - hm0[i]) ** 2
res0 = res0 + r
n = len(h1) + len(h2) + len(h3) + len(h4) + len(h0)
residuals = res1 + res2 + res3 + res4 + res0
rmse = np.sqrt(residuals/n)
print('RMSE:', rmse)
Explanation: Check how well TTim can simulate drawdowns in a vertically anisotropic water-table aquifer:
End of explanation
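As a side note, the squared-residual loops above can be collapsed into a small NumPy helper; the sketch below computes exactly the same quantity:
# Sketch: vectorized version of the RMSE computation above.
def rmse_of(observed, modeled):
    squares = sum(np.sum((h - hm) ** 2) for h, hm in zip(observed, modeled))
    n = sum(len(h) for h in observed)
    return np.sqrt(squares / n)

print('RMSE:', rmse_of([h0, h1, h2, h3, h4], [hm0, hm1, hm2, hm3, hm4]))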
ml2 = Model3D(kaq=1, z=[0, -0.1, -2.1, -5.1, -10.1], Saq=[0.1, 1e-4, 1e-4, 1e-4], \
kzoverkh=1, tmin=1e-5, tmax=3)
w2 = Well(ml2, xw=0, yw=0, rw=rw, rc=rc, tsandQ=[(0, Q)], layers=3)
ml2.solve()
ca2 = Calibrate(ml2)
ca2.set_parameter(name='kaq0_3', initial=1)
ca2.set_parameter(name='Saq0', initial=0.2)
ca2.set_parameter(name='Saq1_3', initial=1e-4, pmin=0)
ca2.set_parameter_by_reference(name='kzoverkh', parameter=ml2.aq.kzoverkh, \
initial=0.1, pmin=0)
ca2.series(name='pumped', x=0, y=0, t=t0, h=h0, layer=3)
ca2.series(name='PS1', x=r1, y=0, t=t1, h=h1, layer=1)
ca2.series(name='PD1', x=r1, y=0, t=t2, h=h2, layer=3)
ca2.series(name='PS2', x=r2, y=0, t=t3, h=h3, layer=1)
ca2.series(name='PD2', x=r2, y=0, t=t4, h=h4, layer=3)
ca2.fit()
display(ca2.parameters)
print('RMSE:', ca2.rmse())
hm0_2 = ml2.head(0, 0, t0, layers=3)[0]
hm1_2 = ml2.head(r1, 0, t1, layers=1)[0]
hm2_2 = ml2.head(r1, 0, t2, layers=3)[0]
hm3_2 = ml2.head(r2, 0, t3, layers=1)[0]
hm4_2 = ml2.head(r2, 0, t4, layers=3)[0]
plt.figure(figsize=(8, 5))
plt.semilogx(t0, h0, '.', label='pumped')
plt.semilogx(t0, hm0_2, label='ttim pumped')
plt.semilogx(t1, h1, '.', label='PS1')
plt.semilogx(t1, hm1_2, label='ttim PS1')
plt.semilogx(t2, h2, '.', label='PD1')
plt.semilogx(t2, hm2_2, label='ttim PD1')
plt.semilogx(t3, h3, ',', label='PS2')
plt.semilogx(t3, hm3_2, label='ttim PS2')
plt.semilogx(t4, h4, '.', label='PD2')
plt.semilogx(t4, hm4_2, label='ttim PD2')
plt.legend();
Explanation: Try calibrating model to find the parameters:
End of explanation
ml3 = Model3D(kaq=1, z=[0, -0.1, -2.1, -5.1, -10.1], Saq=[0.1, 1e-4, 1e-4, 1e-4], \
kzoverkh=1, tmin=1e-5, tmax=3)
w3 = Well(ml3, xw=0, yw=0, rw=rw, rc=rc, tsandQ=[(0, Q)], layers=3)
ml3.solve()
ca3 = Calibrate(ml3)
ca3.set_parameter(name='kaq0', initial=1, pmin=0)
ca3.set_parameter(name='kaq1_3', initial=1)
ca3.set_parameter(name='Saq0', initial=0.2, pmin=0)
ca3.set_parameter(name='Saq1_3', initial=1e-4, pmin=0)
ca3.set_parameter_by_reference(name='kzoverkh', parameter=ml3.aq.kzoverkh, \
initial=0.1, pmin=0)
ca3.series(name='pumped', x=0, y=0, t=t0, h=h0, layer=3)
ca3.series(name='PS1', x=r1, y=0, t=t1, h=h1, layer=1)
ca3.series(name='PD1', x=r1, y=0, t=t2, h=h2, layer=3)
ca3.series(name='PS2', x=r2, y=0, t=t3, h=h3, layer=1)
ca3.series(name='PD2', x=r2, y=0, t=t4, h=h4, layer=3)
ca3.fit()
display(ca3.parameters)
print('RMSE:', ca3.rmse())
hm0_3 = ml3.head(0, 0, t0, layers=3)[0]
hm1_3 = ml3.head(r1, 0, t1, layers=1)[0]
hm2_3 = ml3.head(r1, 0, t2, layers=3)[0]
hm3_3 = ml3.head(r2, 0, t3, layers=1)[0]
hm4_3 = ml3.head(r2, 0, t4, layers=3)[0]
plt.figure(figsize=(8, 5))
plt.semilogx(t0, h0, '.', label='pumped')
plt.semilogx(t0, hm0_3, label='ttim pumped')
plt.semilogx(t1, h1, '.', label='PS1')
plt.semilogx(t1, hm1_3, label='ttim PS1')
plt.semilogx(t2, h2, '.', label='PD1')
plt.semilogx(t2, hm2_3, label='ttim PD1')
plt.semilogx(t3, h3, ',', label='PS2')
plt.semilogx(t3, hm3_3, label='ttim PS2')
plt.semilogx(t4, h4, '.', label='PD2')
plt.semilogx(t4, hm4_3, label='ttim PD2');
Explanation: Try calibrating model with stratified kaq:
End of explanation
ca3.parameters['optimal'].values
ta = pd.DataFrame(columns=['Moench', 'TTim', 'TTim-stratified'],\
index=['k0[m/d]', 'k[m/d]', 'Sy[-]', 'Ss[1/m]', 'kz/kh'])
ta.loc[:, 'TTim-stratified'] = ca3.parameters['optimal'].values
ta.loc[1:, 'TTim'] = ca2.parameters['optimal'].values
ta.loc[1:, 'Moench'] = [8.640, 0.2, 2e-5, 0.5]
ta.loc['RMSE'] = [0.061318, ca2.rmse(), ca3.rmse()]
ta
Explanation: Summary of calibrated values
End of explanation
<END_TASK> |
15,505 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: TF-Hub CORD-19 Swivel 埋め込みを探索する
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: 埋め込みを分析する
まず、異なる単語間の相関行列を計算してプロットし、埋め込みを分析してみましょう。異なる単語の意味をうまく捉えられるように埋め込みが学習できていれば、意味的に似た単語の埋め込みベクトルは近くにあるはずです。COVID-19 関連の用語をいくつか見てみましょう。
Step5: 埋め込みが異なる用語の意味をうまく捉えていることが分かります。それぞれの単語は所属するクラスタの他の単語に類似していますが(「コロナウイルス」は「SARS」や「MERS」と高い関連性がある)、ほかのクラスタの単語とは異なります(「SARS」と「スペイン」の類似度はゼロに近い)。
では、これらの埋め込みを使用して特定のタスクを解決する方法を見てみましょう。
SciCite
Step6: 引用の意図分類器をトレーニングする
分類器のトレーニングには、<a href="https
Step7: 上に分類レイヤーを持ち、CORD-19 埋め込みを使用するモデルを構築してみましょう。
Step8: モデルをトレーニングして評価する
モデルをトレーニングして評価を行い、SciCite タスクでのパフォーマンスを見てみましょう。
Step9: 損失はすぐに減少しますが、特に精度は急速に上がることが分かります。予測と真のラベルがどのように関係しているかを確認するために、いくつかの例をプロットしてみましょう。 | <ASSISTANT_TASK:>
Python Code:
# Copyright 2019 The TensorFlow Hub Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
Explanation: Copyright 2019 The TensorFlow Hub Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
import functools
import itertools
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
import pandas as pd
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
tf.logging.set_verbosity('ERROR')
import tensorflow_datasets as tfds
import tensorflow_hub as hub
try:
from google.colab import data_table
def display_df(df):
return data_table.DataTable(df, include_index=False)
except ModuleNotFoundError:
# If google-colab is not available, just display the raw DataFrame
def display_df(df):
return df
Explanation: Exploring the TF-Hub CORD-19 Swivel embeddings
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/hub/tutorials/cord_19_embeddings"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で実行</a></td>
<td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/cord_19_embeddings.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a>
</td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/cord_19_embeddings.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示</a></td>
<td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/hub/tutorials/cord_19_embeddings.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png"> ノートブックをダウンロード</a>
</td>
<td> <a href="https://tfhub.dev/tensorflow/cord-19/swivel-128d/1"><img src="https://www.tensorflow.org/images/hub_logo_32px.png">TF Hub モデルを参照</a> </td>
</table>
The CORD-19 Swivel text embedding module from TF-Hub (https://tfhub.dev/tensorflow/cord-19/swivel-128d/1) was built to support researchers analyzing natural-language text related to COVID-19. These embeddings were trained on the titles, authors, abstracts, body texts, and reference titles of the papers in the CORD-19 dataset.
In this Colab we will:
Analyze semantically similar words in the embedding space
Train a classifier on the SciCite dataset using the CORD-19 embeddings
Setup
End of explanation
# Use the inner product between two embedding vectors as the similarity measure
def plot_correlation(labels, features):
corr = np.inner(features, features)
corr /= np.max(corr)
sns.heatmap(corr, xticklabels=labels, yticklabels=labels)
with tf.Graph().as_default():
# Load the module
query_input = tf.placeholder(tf.string)
module = hub.Module('https://tfhub.dev/tensorflow/cord-19/swivel-128d/1')
embeddings = module(query_input)
with tf.train.MonitoredTrainingSession() as sess:
# Generate embeddings for some terms
queries = [
# Related viruses
"coronavirus", "SARS", "MERS",
# Regions
"Italy", "Spain", "Europe",
# Symptoms
"cough", "fever", "throat"
]
features = sess.run(embeddings, feed_dict={query_input: queries})
plot_correlation(queries, features)
Explanation: Analyzing the embeddings
Let's start by analyzing the embeddings: we compute and plot a correlation matrix between different terms. If the embeddings have learned to capture the meaning of different words, the embedding vectors of semantically similar words should lie close together. Let's look at some COVID-19 related terms.
End of explanation
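Building on the same graph-mode pattern, here is a small hedged sketch of how candidate terms could be ranked by similarity to a query term; the helper name and term lists are illustrative assumptions, not part of the original notebook:
# Sketch only: rank candidate terms by inner-product similarity to a query term.
def most_similar(query, candidates):
  with tf.Graph().as_default():
    terms = tf.placeholder(tf.string)
    module = hub.Module('https://tfhub.dev/tensorflow/cord-19/swivel-128d/1')
    embeddings = module(terms)
    with tf.train.MonitoredTrainingSession() as sess:
      vecs = sess.run(embeddings, feed_dict={terms: [query] + candidates})
  sims = np.inner(vecs[0], vecs[1:])
  return sorted(zip(candidates, sims), key=lambda pair: -pair[1])

most_similar("fever", ["cough", "Italy", "MERS"])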
#@title Set up the dataset from TFDS
class Dataset:
  """Build a dataset from a TFDS dataset."""
def __init__(self, tfds_name, feature_name, label_name):
self.dataset_builder = tfds.builder(tfds_name)
self.dataset_builder.download_and_prepare()
self.feature_name = feature_name
self.label_name = label_name
def get_data(self, for_eval):
splits = THE_DATASET.dataset_builder.info.splits
if tfds.Split.TEST in splits:
split = tfds.Split.TEST if for_eval else tfds.Split.TRAIN
else:
SPLIT_PERCENT = 80
split = "train[{}%:]".format(SPLIT_PERCENT) if for_eval else "train[:{}%]".format(SPLIT_PERCENT)
return self.dataset_builder.as_dataset(split=split)
def num_classes(self):
return self.dataset_builder.info.features[self.label_name].num_classes
def class_names(self):
return self.dataset_builder.info.features[self.label_name].names
def preprocess_fn(self, data):
return data[self.feature_name], data[self.label_name]
def example_fn(self, data):
feature, label = self.preprocess_fn(data)
return {'feature': feature, 'label': label}, label
def get_example_data(dataset, num_examples, **data_kw):
Show example data
with tf.Session() as sess:
batched_ds = dataset.get_data(**data_kw).take(num_examples).map(dataset.preprocess_fn).batch(num_examples)
it = tf.data.make_one_shot_iterator(batched_ds).get_next()
data = sess.run(it)
return data
TFDS_NAME = 'scicite' #@param {type: "string"}
TEXT_FEATURE_NAME = 'string' #@param {type: "string"}
LABEL_NAME = 'label' #@param {type: "string"}
THE_DATASET = Dataset(TFDS_NAME, TEXT_FEATURE_NAME, LABEL_NAME)
#@title Let's take a look at a few labeled examples from the training set
NUM_EXAMPLES = 20 #@param {type:"integer"}
data = get_example_data(THE_DATASET, NUM_EXAMPLES, for_eval=False)
display_df(
pd.DataFrame({
TEXT_FEATURE_NAME: [ex.decode('utf8') for ex in data[0]],
LABEL_NAME: [THE_DATASET.class_names()[x] for x in data[1]]
}))
Explanation: We can see that the embeddings capture the meaning of the different terms quite well. Each word is similar to the other words in its cluster (e.g. "coronavirus" correlates highly with "SARS" and "MERS"), while being dissimilar to terms from other clusters (the similarity between "SARS" and "Spain" is close to 0).
Now let's see how we can use these embeddings to solve a specific task.
SciCite: Citation Intent Classification
This section shows how to use the embeddings for a downstream task such as text classification. We'll use the <a href="https://www.tensorflow.org/datasets/catalog/scicite">SciCite dataset</a> from TensorFlow Datasets to classify citation intents in academic papers. Given a sentence containing a citation from an academic paper, we classify whether the main intent of the citation is background information, use of a method, or comparison of results.
End of explanation
def preprocessed_input_fn(for_eval):
data = THE_DATASET.get_data(for_eval=for_eval)
data = data.map(THE_DATASET.example_fn, num_parallel_calls=1)
return data
def input_fn_train(params):
data = preprocessed_input_fn(for_eval=False)
data = data.repeat(None)
data = data.shuffle(1024)
data = data.batch(batch_size=params['batch_size'])
return data
def input_fn_eval(params):
data = preprocessed_input_fn(for_eval=True)
data = data.repeat(1)
data = data.batch(batch_size=params['batch_size'])
return data
def input_fn_predict(params):
data = preprocessed_input_fn(for_eval=True)
data = data.batch(batch_size=params['batch_size'])
return data
Explanation: Training a citation intent classifier
We'll use an Estimator to train a classifier on the <a href="https://www.tensorflow.org/datasets/catalog/scicite">SciCite dataset</a>. Let's set up the input_fns to read the dataset into the model.
End of explanation
def model_fn(features, labels, mode, params):
# Embed the text
embed = hub.Module(params['module_name'], trainable=params['trainable_module'])
embeddings = embed(features['feature'])
# Add a linear layer on top
logits = tf.layers.dense(
embeddings, units=THE_DATASET.num_classes(), activation=None)
predictions = tf.argmax(input=logits, axis=1)
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(
mode=mode,
predictions={
'logits': logits,
'predictions': predictions,
'features': features['feature'],
'labels': features['label']
})
# Set up a multi-class classification head
loss = tf.nn.sparse_softmax_cross_entropy_with_logits(
labels=labels, logits=logits)
loss = tf.reduce_mean(loss)
if mode == tf.estimator.ModeKeys.TRAIN:
optimizer = tf.train.GradientDescentOptimizer(learning_rate=params['learning_rate'])
train_op = optimizer.minimize(loss, global_step=tf.train.get_or_create_global_step())
return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)
elif mode == tf.estimator.ModeKeys.EVAL:
accuracy = tf.metrics.accuracy(labels=labels, predictions=predictions)
precision = tf.metrics.precision(labels=labels, predictions=predictions)
recall = tf.metrics.recall(labels=labels, predictions=predictions)
return tf.estimator.EstimatorSpec(
mode=mode,
loss=loss,
eval_metric_ops={
'accuracy': accuracy,
'precision': precision,
'recall': recall,
})
#@title Hyperparameters { run: "auto" }
EMBEDDING = 'https://tfhub.dev/tensorflow/cord-19/swivel-128d/1' #@param {type: "string"}
TRAINABLE_MODULE = False #@param {type: "boolean"}
STEPS = 8000#@param {type: "integer"}
EVAL_EVERY = 200 #@param {type: "integer"}
BATCH_SIZE = 10 #@param {type: "integer"}
LEARNING_RATE = 0.01 #@param {type: "number"}
params = {
'batch_size': BATCH_SIZE,
'learning_rate': LEARNING_RATE,
'module_name': EMBEDDING,
'trainable_module': TRAINABLE_MODULE
}
Explanation: Let's build a model that uses the CORD-19 embeddings with a classification layer on top.
End of explanation
estimator = tf.estimator.Estimator(functools.partial(model_fn, params=params))
metrics = []
for step in range(0, STEPS, EVAL_EVERY):
estimator.train(input_fn=functools.partial(input_fn_train, params=params), steps=EVAL_EVERY)
step_metrics = estimator.evaluate(input_fn=functools.partial(input_fn_eval, params=params))
print('Global step {}: loss {:.3f}, accuracy {:.3f}'.format(step, step_metrics['loss'], step_metrics['accuracy']))
metrics.append(step_metrics)
global_steps = [x['global_step'] for x in metrics]
fig, axes = plt.subplots(ncols=2, figsize=(20,8))
for axes_index, metric_names in enumerate([['accuracy', 'precision', 'recall'],
['loss']]):
for metric_name in metric_names:
axes[axes_index].plot(global_steps, [x[metric_name] for x in metrics], label=metric_name)
axes[axes_index].legend()
axes[axes_index].set_xlabel("Global Step")
Explanation: Train and evaluate the model
Let's train and evaluate the model to see its performance on the SciCite task.
End of explanation
predictions = estimator.predict(functools.partial(input_fn_predict, params))
first_10_predictions = list(itertools.islice(predictions, 10))
display_df(
pd.DataFrame({
TEXT_FEATURE_NAME: [pred['features'].decode('utf8') for pred in first_10_predictions],
LABEL_NAME: [THE_DATASET.class_names()[pred['labels']] for pred in first_10_predictions],
'prediction': [THE_DATASET.class_names()[pred['predictions']] for pred in first_10_predictions]
}))
Explanation: We can see that the loss quickly decreases and that the accuracy in particular rises rapidly. Let's plot some examples to check how the predictions relate to the true labels.
End of explanation
<END_TASK> |
15,506 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Get Data set from here
Step1: Python 3 Encodings | <ASSISTANT_TASK:>
Python Code:
import os
import random

import nltk
import nltk.classify.util
from nltk import word_tokenize
from nltk.classify import NaiveBayesClassifier

rootdir = "C:\\Users\\Shantnu\\Desktop\\Data Sources\\Enron Spam"
# Loop through all the directories, sub directories and files in the above folder, and print them.
# For files, print number of files.
for directories, subdirs, files in os.walk(rootdir):
print(directories, subdirs, len(files))
print(os.path.split("C:\\Users\\Shantnu\\Desktop\\Data Sources\\Enron Spam\\enron1\\ham"))
print(os.path.split("C:\\Users\\Shantnu\\Desktop\\Data Sources\\Enron Spam\\enron1\\ham")[0])
print(os.path.split("C:\\Users\\Shantnu\\Desktop\\Data Sources\\Enron Spam\\enron1\\ham")[1])
# Same as before, but only print the ham and spam folders
for directories, subdirs, files in os.walk(rootdir):
if (os.path.split(directories)[1] == 'ham'):
print(directories, subdirs, len(files))
if (os.path.split(directories)[1] == 'spam'):
print(directories, subdirs, len(files))
ham_list = []
spam_list = []
# Same as before, but this time, read the files, and append them to the ham and spam list
for directories, subdirs, files in os.walk(rootdir):
if (os.path.split(directories)[1] == 'ham'):
for filename in files:
with open(os.path.join(directories, filename), encoding="latin-1") as f:
data = f.read()
ham_list.append(data)
if (os.path.split(directories)[1] == 'spam'):
for filename in files:
with open(os.path.join(directories, filename), encoding="latin-1") as f:
data = f.read()
spam_list.append(data)
print(ham_list[0])
print(spam_list[0])
Explanation: Get Data set from here: http://www.aueb.gr/users/ion/data/enron-spam/
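If you want to script the download instead of fetching the archives by hand, a sketch is below. The preprocessed/ path and the enron1..enron6 archive names are assumptions about the page layout, so adjust them to whatever you actually download:
```python
# Illustrative only: fetch and extract the preprocessed Enron-Spam archives.
import os
import tarfile
import urllib.request

base_url = "http://www.aueb.gr/users/ion/data/enron-spam/preprocessed/"
target_dir = "Enron Spam"
os.makedirs(target_dir, exist_ok=True)

for i in range(1, 7):
    archive = f"enron{i}.tar.gz"
    local_path = os.path.join(target_dir, archive)
    urllib.request.urlretrieve(base_url + archive, local_path)
    with tarfile.open(local_path) as tar:
        tar.extractall(target_dir)  # creates enron1/ham, enron1/spam, ...
```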
End of explanation
# Write a function that, when passed in words, will return a dictionary of the form
# {Word1: True, Word2: True, Word3: True}
# Removing stop words is optional
def create_word_features(words):
my_dict = dict( [ (word, True) for word in words] )
return my_dict
create_word_features(["the", "quick", "brown", "quick", "a", "fox"])
ham_list = []
spam_list = []
# Same as before, but this time:
# 1. Break the sentences into words using word_tokenize
# 2. Use the create_word_features() function you just wrote
for directories, subdirs, files in os.walk(rootdir):
if (os.path.split(directories)[1] == 'ham'):
for filename in files:
with open(os.path.join(directories, filename), encoding="latin-1") as f:
data = f.read()
# The data we read is one big string. We need to break it into words.
words = word_tokenize(data)
ham_list.append((create_word_features(words), "ham"))
if (os.path.split(directories)[1] == 'spam'):
for filename in files:
with open(os.path.join(directories, filename), encoding="latin-1") as f:
data = f.read()
# The data we read is one big string. We need to break it into words.
words = word_tokenize(data)
spam_list.append((create_word_features(words), "spam"))
print(ham_list[0])
print(spam_list[0])
combined_list = ham_list + spam_list
print(len(combined_list))
random.shuffle(combined_list)
# Create a test and train section.
# 70% of the data is training. 30% is test
training_part = int(len(combined_list) * .7)
print(len(combined_list))
training_set = combined_list[:training_part]
test_set = combined_list[training_part:]
print (len(training_set))
print (len(test_set))
# Create the Naive Bayes filter
classifier = NaiveBayesClassifier.train(training_set)
# Find the accuracy, using the test data
accuracy = nltk.classify.util.accuracy(classifier, test_set)
print("Accuracy is: ", accuracy * 100)
classifier.show_most_informative_features(20)
# Clasify the below as spam or ham
# Hint: 1. Break into words using word_tokenzise
# 2. create_word_features
# 3. Use the classify function
msg1 = '''Hello th̓ere seُx master :-)
i need c0ck ri͏ght noِw ..͏. don't tell my hǔbbٚy.ٚ. ))
My sc͕rٞeٚe̻nname is Dorry.
My accֺo֔unt is h֯ere: http:nxusxbnd.GirlsBadoo.ru
C u late٘r!'''
msg2 = '''As one of our top customers we are providing 10% OFF the total of your next used book purchase from www.letthestoriesliveon.com. Please use the promotional code, TOPTENOFF at checkout. Limited to 1 use per customer. All books have free shipping within the contiguous 48 United States and there is no minimum purchase.
We have millions of used books in stock that are up to 90% off MRSP and add tens of thousands of new items every day. Don’t forget to check back frequently for new arrivals.'''
msg3 = '''To start off, I have a 6 new videos + transcripts in the members section. In it, we analyse the Enron email dataset, half a million files, spread over 2.5GB. It's about 1.5 hours of video.
I have also created a Conda environment for running the code (both free and member lessons). This is to ensure everyone is running the same version of libraries, preventing the Works on my machine problems. If you get a second, do you mind trying it here?'''
words = word_tokenize(msg1)
features = create_word_features(words)
print("Message 1 is :" ,classifier.classify(features))
words = word_tokenize(msg2)
features = create_word_features(words)
print("Message 2 is :" ,classifier.classify(features))
words = word_tokenize(msg3)
features = create_word_features(words)
print("Message 3 is :" ,classifier.classify(features))
Explanation: Python 3 Encodings: http://python-notes.curiousefficiency.org/en/latest/python3/text_file_processing.html
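The reason for encoding="latin-1" above is that some Enron messages contain bytes that are not valid UTF-8. Latin-1 maps every possible byte value to a character, so reading never raises, whereas UTF-8 may fail. A tiny illustration:
```python
raw = b"premi\xe8re"          # 0xE8 is valid Latin-1 but starts an invalid UTF-8 sequence here
print(raw.decode("latin-1"))  # 'première'
try:
    raw.decode("utf-8")
except UnicodeDecodeError as err:
    print("UTF-8 decode failed:", err)
```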
End of explanation
<END_TASK> |
15,507 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Rossman-GBT-Modeling" data-toc-modified-id="Rossman-GBT-Modeling-1"><span class="toc-item-num">1 </span>Rossman GBT Modeling</a></span><ul class="toc-item"><li><span><a href="#Data-Preparation" data-toc-modified-id="Data-Preparation-1.1"><span class="toc-item-num">1.1 </span>Data Preparation</a></span></li><li><span><a href="#Model-Training" data-toc-modified-id="Model-Training-1.2"><span class="toc-item-num">1.2 </span>Model Training</a></span></li></ul></li></ul></div>
Step1: Rossman GBT Modeling
Data Preparation
We've done most of our data preparation and feature engineering in the previous notebook, we'll still perform some additional ones here, but this notebook focuses on getting the data ready for fitting a Gradient Boosted Tree model. For the model, we will be leveraging lightgbm.
Step2: We've pulled most of our configurable parameters outside into a json configuration file. In the ideal scenario, we can move all of our code into a python script and only change the configuration file to experiment with different type of settings to see which one leads to the best overall performance.
Step3: Here, we will remove all records where the store had zero sale / was closed (feel free to experiment with not excluding the zero sales record and see if improves performance)
We also perform a train/validation split. The validation split will be used in our hyper-parameter tuning process and for early stopping. Notice that because this is a time series application, where we are trying to predict different stores' daily sales. It's important to not perform a random train/test split, but instead divide the training and validation set based on time/date.
Our training data is already sorted by date in decreasing order, hence we can create the validation set by checking how big is our test set and select the top-N observations to create a validation set that has similar size to our test set. Here we're saying similar size and not exact size, because we make sure that all the records from the same date falls under either training or validation set.
Step4: The validation fold we're creating is used for sklearn's PredefinedSplit, where we set the index to 0 for all samples that are part of the validation set, and to -1 for all other samples.
Step5: Here, we assign the validation fold back to the original dataframe to illustrate the point, this is technically not required for the rest of the pipeline. Notice in the dataframe that we've printed out, the last record's date, 2015-06-18 is different from the rest, and the record's val_fold takes on a value of -1. This means that all records including/after the date 2015-06-19 will become our validation set.
Step6: We proceed to extracting the necessary columns both numerical and categorical that we'll use for modeling.
Step7: Model Training
We use a helper class to train a boosted tree model, generate the prediction on our test set, create the submission file, check the feature importance of the tree-based model and also make sure we can save and re-load the model. | <ASSISTANT_TASK:>
Python Code:
from jupyterthemes import get_themes
from jupyterthemes.stylefx import set_nb_theme
themes = get_themes()
set_nb_theme(themes[3])
# 1. magic for inline plot
# 2. magic to print version
# 3. magic so that the notebook will reload external python modules
# 4. magic to enable retina (high resolution) plots
# https://gist.github.com/minrk/3301035
%matplotlib inline
%load_ext watermark
%load_ext autoreload
%autoreload 2
%config InlineBackend.figure_format='retina'
import os
import json
import time
import numpy as np
import pandas as pd
%watermark -a 'Ethen' -d -t -v -p numpy,pandas,pyarrow,sklearn
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Rossman-GBT-Modeling" data-toc-modified-id="Rossman-GBT-Modeling-1"><span class="toc-item-num">1 </span>Rossman GBT Modeling</a></span><ul class="toc-item"><li><span><a href="#Data-Preparation" data-toc-modified-id="Data-Preparation-1.1"><span class="toc-item-num">1.1 </span>Data Preparation</a></span></li><li><span><a href="#Model-Training" data-toc-modified-id="Model-Training-1.2"><span class="toc-item-num">1.2 </span>Model Training</a></span></li></ul></li></ul></div>
End of explanation
data_dir = 'cleaned_data'
path_train = os.path.join(data_dir, 'train_clean.parquet')
path_test = os.path.join(data_dir, 'test_clean.parquet')
engine = 'pyarrow'
df_train = pd.read_parquet(path_train, engine)
df_test = pd.read_parquet(path_test, engine)
print('train dimension: ', df_train.shape)
print('test dimension: ', df_test.shape)
df_train.head()
Explanation: Rossman GBT Modeling
Data Preparation
We've done most of our data preparation and feature engineering in the previous notebook. We'll still perform a few additional steps here, but this notebook focuses on getting the data ready for fitting a Gradient Boosted Tree model. For the model, we will be leveraging lightgbm.
End of explanation
config_path = os.path.join('config', 'gbt_training_template.json')
with open(config_path) as f:
config_file = json.load(f)
config_file
# extract settings from the configuration file into local variables
columns = config_file['columns']
num_cols = columns['num_cols_pattern']
cat_cols = columns['cat_cols_pattern']
id_cols = columns['id_cols']
label_col = columns['label_col']
weights_col = columns['weights_col']
model_task = config_file['model_task']
model_type = config_file['model_type']
model_parameters = config_file['model_parameters'][model_type]
model_hyper_parameters = config_file['model_hyper_parameters'][model_type]
model_fit_parameters = config_file['model_fit_parameters'][model_type]
search_parameters = config_file['search_parameters']
Explanation: We've pulled most of our configurable parameters out into a json configuration file. In the ideal scenario, we can move all of our code into a python script and only change the configuration file to experiment with different types of settings to see which one leads to the best overall performance.
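For orientation, the structure the next cell expects looks roughly like the sketch below. The keys mirror what the code reads; the concrete values are illustrative placeholders, not the actual contents of gbt_training_template.json:
```json
{
  "columns": {
    "num_cols_pattern": ["CompetitionDistance", "Promo"],
    "cat_cols_pattern": ["StoreType", "Assortment"],
    "id_cols": ["Id"],
    "label_col": "Sales",
    "weights_col": null
  },
  "model_task": "regression",
  "model_type": "lgb",
  "model_parameters": {"lgb": {"n_jobs": -1}},
  "model_hyper_parameters": {"lgb": {"num_leaves": [31, 63]}},
  "model_fit_parameters": {"lgb": {"early_stopping_rounds": 50}},
  "search_parameters": {"n_iter": 5}
}
```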
End of explanation
df_train = df_train[df_train[label_col] != 0].reset_index(drop=True)
mask = df_train['Date'] == df_train['Date'].iloc[len(df_test)]
val_index = df_train.loc[mask, 'Date'].index.max()
val_index
Explanation: Here, we will remove all records where the store had zero sales / was closed (feel free to experiment with not excluding the zero-sales records and see if that improves performance).
We also perform a train/validation split. The validation split will be used in our hyper-parameter tuning process and for early stopping. Notice that because this is a time series application, where we are trying to predict different stores' daily sales, it's important not to perform a random train/test split, but instead to divide the training and validation sets based on time/date.
Our training data is already sorted by date in decreasing order, hence we can create the validation set by checking how big our test set is and selecting the top-N observations, so that the validation set has a similar size to our test set. We say similar size and not exact size because we make sure that all the records from the same date fall under either the training or the validation set.
End of explanation
val_fold = np.full(df_train.shape[0], fill_value=-1)
val_fold[:(val_index + 1)] = 0
val_fold
Explanation: The validation fold we're creating is used for sklearn's PredefinedSplit, where we set the index to 0 for all samples that are part of the validation set, and to -1 for all other samples.
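For reference, this is how such a fold array is typically consumed on the sklearn side (a minimal sketch; the actual pipeline may wire it up inside the model class):
```python
from sklearn.model_selection import PredefinedSplit

# -1 = always in the training split, 0 = member of the single validation fold
ps = PredefinedSplit(test_fold=val_fold)
train_idx, valid_idx = next(ps.split())
```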
End of explanation
df_train['val_fold'] = val_fold
df_train[(val_index - 2):(val_index + 2)]
Explanation: Here, we assign the validation fold back to the original dataframe to illustrate the point; this is technically not required for the rest of the pipeline. Notice in the dataframe that we've printed out, the last record's date, 2015-06-18, is different from the rest, and that record's val_fold takes on a value of -1. This means that all records on or after 2015-06-19 will become our validation set.
End of explanation
# the model id is used as the indicator when saving the model
model_id = 'gbt'
input_cols = num_cols + cat_cols
df_train = df_train[input_cols + [label_col]]
# we will perform the modeling at the log-scale
df_train[label_col] = np.log(df_train[label_col])
df_test = df_test[input_cols + id_cols]
print('train dimension: ', df_train.shape)
print('test dimension: ', df_test.shape)
df_train.head()
for cat_col in cat_cols:
df_train[cat_col] = df_train[cat_col].astype('category')
df_test[cat_col] = df_test[cat_col].astype('category')
df_train.head()
Explanation: We proceed to extracting the necessary columns, both numerical and categorical, that we'll use for modeling.
End of explanation
from gbt_module.model import GBTPipeline
model = GBTPipeline(input_cols, cat_cols, label_col, weights_col,
model_task, model_id, model_type, model_parameters,
model_hyper_parameters, search_parameters)
model
start = time.time()
model.fit(df_train, val_fold, model_fit_parameters)
elapsed = time.time() - start
print('elapsed minutes: ', elapsed / 60)
pd.DataFrame(model.model_tuned_.cv_results_)
# we logged our label, remember to exponentiate it back to the original scale
prediction_test = model.predict(df_test[input_cols])
df_test[label_col] = np.exp(prediction_test)
submission_cols = id_cols + [label_col]
df_test[submission_cols] = df_test[submission_cols].astype('int')
submission_dir = 'submission'
if not os.path.isdir(submission_dir):
os.makedirs(submission_dir, exist_ok=True)
submission_file = 'rossmann_submission_{}.csv'.format(model_id)
submission_path = os.path.join(submission_dir, submission_file)
df_test[submission_cols].to_csv(submission_path, index=False)
df_test[submission_cols].head()
model.get_feature_importance()
model_checkpoint = os.path.join('models', model_id + '.pkl')
model.save(model_checkpoint)
loaded_model = GBTPipeline.load(model_checkpoint)
# print the cv_results_ again to ensure the checkpointing works
pd.DataFrame(loaded_model.model_tuned_.cv_results_)
Explanation: Model Training
We use a helper class to train a boosted tree model, generate the prediction on our test set, create the submission file, check the feature importance of the tree-based model and also make sure we can save and re-load the model.
End of explanation
<END_TASK> |
15,508 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Executed
Step1: Load software and filenames definitions
Step2: Data folder
Step3: List of data files
Step4: Data load
Initial loading of the data
Step5: Load the leakage coefficient from disk
Step6: Load the direct excitation coefficient ($d_{exAA}$) from disk
Step7: Update d with the correction coefficients
Step8: Laser alternation selection
At this point we have only the timestamps and the detector numbers
Step9: We need to define some parameters
Step10: We should check if everithing is OK with an alternation histogram
Step11: If the plot looks good we can apply the parameters with
Step12: Measurements infos
All the measurement data is in the d variable. We can print it
Step13: Or check the measurements duration
Step14: Compute background
Compute the background using automatic threshold
Step15: Burst search and selection
Step16: Donor Leakage fit
Step17: Burst sizes
Step18: Fret fit
Max position of the Kernel Density Estimation (KDE)
Step19: Weighted mean of $E$ of each burst
Step20: Gaussian fit (no weights)
Step21: Gaussian fit (using burst size as weights)
Step22: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE)
Step23: The Maximum likelihood fit for a Gaussian population is the mean
Step24: Computing the weighted mean and weighted standard deviation we get
Step25: Save data to file
Step26: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
Step27: This is just a trick to format the different variables | <ASSISTANT_TASK:>
Python Code:
ph_sel_name = "None"
data_id = "22d"
# data_id = "7d"
Explanation: Executed: Mon Mar 27 11:39:00 2017
Duration: 7 seconds.
usALEX-5samples - Template
This notebook is executed through 8-spots paper analysis.
For a direct execution, uncomment the cell below.
End of explanation
from fretbursts import *
init_notebook()
from IPython.display import display
Explanation: Load software and filenames definitions
End of explanation
data_dir = './data/singlespot/'
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
Explanation: Data folder:
End of explanation
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
## Selection for POLIMI 2012-11-26 datatset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
data_id
Explanation: List of data files:
End of explanation
d = loader.photon_hdf5(filename=files_dict[data_id])
Explanation: Data load
Initial loading of the data:
End of explanation
leakage_coeff_fname = 'results/usALEX - leakage coefficient DexDem.csv'
leakage = np.loadtxt(leakage_coeff_fname)
print('Leakage coefficient:', leakage)
Explanation: Load the leakage coefficient from disk:
End of explanation
dir_ex_coeff_fname = 'results/usALEX - direct excitation coefficient dir_ex_aa.csv'
dir_ex_aa = np.loadtxt(dir_ex_coeff_fname)
print('Direct excitation coefficient (dir_ex_aa):', dir_ex_aa)
Explanation: Load the direct excitation coefficient ($d_{exAA}$) from disk:
End of explanation
d.leakage = leakage
d.dir_ex = dir_ex_aa
Explanation: Update d with the correction coefficients:
End of explanation
d.ph_times_t, d.det_t
Explanation: Laser alternation selection
At this point we have only the timestamps and the detector numbers:
End of explanation
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
Explanation: We need to define some parameters: donor and acceptor ch, excitation period and donor and acceptor excitations:
End of explanation
plot_alternation_hist(d)
Explanation: We should check if everything is OK with an alternation histogram:
End of explanation
loader.alex_apply_period(d)
Explanation: If the plot looks good we can apply the parameters with:
End of explanation
d
Explanation: Measurements infos
All the measurement data is in the d variable. We can print it:
End of explanation
d.time_max
Explanation: Or check the measurements duration:
End of explanation
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
Explanation: Compute background
Compute the background using automatic threshold:
End of explanation
d.burst_search(L=10, m=10, F=7, ph_sel=Ph_sel('all'))
print(d.ph_sel)
dplot(d, hist_fret);
# if data_id in ['7d', '27d']:
# ds = d.select_bursts(select_bursts.size, th1=20)
# else:
# ds = d.select_bursts(select_bursts.size, th1=30)
ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)
n_bursts_all = ds.num_bursts[0]
def select_and_plot_ES(fret_sel, do_sel):
ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel)
ds_do = ds.select_bursts(select_bursts.ES, **do_sel)
bpl.plot_ES_selection(ax, **fret_sel)
bpl.plot_ES_selection(ax, **do_sel)
return ds_fret, ds_do
ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1)
if data_id == '7d':
fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False)
do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '12d':
fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '17d':
fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '22d':
fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '27d':
fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
n_bursts_do = ds_do.num_bursts[0]
n_bursts_fret = ds_fret.num_bursts[0]
n_bursts_do, n_bursts_fret
d_only_frac = 1.*n_bursts_do/(n_bursts_do + n_bursts_fret)
print('D-only fraction:', d_only_frac)
dplot(ds_fret, hist2d_alex, scatter_alpha=0.1);
dplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False);
Explanation: Burst search and selection
End of explanation
bandwidth = 0.03
E_range_do = (-0.1, 0.15)
E_ax = np.r_[-0.2:0.401:0.0002]
E_pr_do_kde = bext.fit_bursts_kde_peak(ds_do, bandwidth=bandwidth, weights='size',
x_range=E_range_do, x_ax=E_ax, save_fitter=True)
mfit.plot_mfit(ds_do.E_fitter, plot_kde=True, bins=np.r_[E_ax.min(): E_ax.max(): bandwidth])
plt.xlim(-0.3, 0.5)
print("%s: E_peak = %.2f%%" % (ds.ph_sel, E_pr_do_kde*100))
Explanation: Donor Leakage fit
End of explanation
nt_th1 = 50
dplot(ds_fret, hist_size, which='all', add_naa=False)
xlim(-0, 250)
plt.axvline(nt_th1)
Th_nt = np.arange(35, 120)
nt_th = np.zeros(Th_nt.size)
for i, th in enumerate(Th_nt):
ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)
nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th
plt.figure()
plot(Th_nt, nt_th)
plt.axvline(nt_th1)
nt_mean = nt_th[np.where(Th_nt == nt_th1)][0]
nt_mean
Explanation: Burst sizes
End of explanation
E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size')
E_fitter = ds_fret.E_fitter
E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
E_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(E_fitter, ax=ax[0])
mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100))
display(E_fitter.params*100)
Explanation: Fret fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
ds_fret.fit_E_m(weights='size')
Explanation: Weighted mean of $E$ of each burst:
End of explanation
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None)
Explanation: Gaussian fit (no weights):
End of explanation
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size')
E_kde_w = E_fitter.kde_max_pos[0]
E_gauss_w = E_fitter.params.loc[0, 'center']
E_gauss_w_sig = E_fitter.params.loc[0, 'sigma']
E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0]))
E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err
Explanation: Gaussian fit (using burst size as weights):
End of explanation
S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True)
S_fitter = ds_fret.S_fitter
S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(S_fitter, ax=ax[0])
mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100))
display(S_fitter.params*100)
S_kde = S_fitter.kde_max_pos[0]
S_gauss = S_fitter.params.loc[0, 'center']
S_gauss_sig = S_fitter.params.loc[0, 'sigma']
S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0]))
S_kde, S_gauss, S_gauss_sig, S_gauss_err
Explanation: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
S = ds_fret.S[0]
S_ml_fit = (S.mean(), S.std())
S_ml_fit
Explanation: The Maximum likelihood fit for a Gaussian population is the mean:
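In formulas (matching S.mean() and S.std(), which use ddof=0): $\hat{\mu} = \frac{1}{N}\sum_i S_i$ and $\hat{\sigma} = \sqrt{\frac{1}{N}\sum_i (S_i - \hat{\mu})^2}$.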
End of explanation
weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.)
S_mean = np.dot(weights, S)/weights.sum()
S_std_dev = np.sqrt(
np.dot(weights, (S - S_mean)**2)/weights.sum())
S_wmean_fit = [S_mean, S_std_dev]
S_wmean_fit
Explanation: Computing the weighted mean and weighted standard deviation we get:
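Explicitly, with burst-size weights $w_i$: $\bar{S}_w = \frac{\sum_i w_i S_i}{\sum_i w_i}$ and $\sigma_w = \sqrt{\frac{\sum_i w_i (S_i - \bar{S}_w)^2}{\sum_i w_i}}$, which is what the cell above computes with np.dot.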
End of explanation
sample = data_id
Explanation: Save data to file
End of explanation
variables = ('sample n_bursts_all n_bursts_do n_bursts_fret '
'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err S_kde S_gauss S_gauss_sig S_gauss_err '
'E_pr_do_kde nt_mean\n')
Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
End of explanation
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-PR-leakage-dir-ex-all-ph.csv', 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
Explanation: This is just a trick to format the different variables:
End of explanation
<END_TASK> |
15,509 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
(numba_for_arviz)=
Numba - an overview
Numba is a just-in-time compiler for Python that works best on code that uses NumPy arrays and functions, and loops.
ArviZ includes {ref}Numba as an optional dependency <Optional-dependencies> and a number of functions have been included in utils.py for systems in which Numba is pre-installed. Additional functionality, {class}arviz.Numba, of disabling/re-enabling numba for systems that have Numba installed has also been included.
A simple example to display the effectiveness of Numba
Step1: That is almost 300 times faster!! Let's compare this to NumPy
Step2: In certain scenarios, Numba can even outperform NumPy!
Numba within ArviZ
Let's see Numba's effect on a few of ArviZ functions
Step3: The methods of the {class}~arviz.Numba class can be used to enable or disable numba. The attribute numba_flag indicates whether numba is enabled within ArviZ or not. | <ASSISTANT_TASK:>
Python Code:
import arviz as az
import numpy as np
import timeit
from arviz.utils import conditional_jit, Numba
from arviz.stats.diagnostics import ks_summary
data = np.random.randn(1000000)
def variance(data, ddof=0): # Method to calculate variance without using numba
a_a, b_b = 0, 0
for i in data:
a_a = a_a + i
b_b = b_b + i * i
var = b_b / (len(data)) - ((a_a / (len(data))) ** 2)
var = var * (len(data) / (len(data) - ddof))
return var
%timeit variance(data, ddof=1)
@conditional_jit
def variance_jit(data, ddof=0): # Calculating variance with numba
a_a, b_b = 0, 0
for i in data:
a_a = a_a + i
b_b = b_b + i * i
var = b_b / (len(data)) - ((a_a / (len(data))) ** 2)
var = var * (len(data) / (len(data) - ddof))
return var
%timeit variance_jit(data, ddof=1)
Explanation: (numba_for_arviz)=
Numba - an overview
Numba is a just-in-time compiler for Python that works best on code that uses NumPy arrays and functions, and loops.
ArviZ includes {ref}Numba as an optional dependency <Optional-dependencies> and a number of functions have been included in utils.py for systems in which Numba is pre-installed. Additional functionality, {class}arviz.Numba, of disabling/re-enabling numba for systems that have Numba installed has also been included.
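A conditional decorator of this kind can be sketched as follows; this is only an illustration of the idea, not ArviZ's actual implementation:
```python
def conditional_jit(function=None, **kwargs):
    """Apply numba.jit if numba is installed, otherwise return the function unchanged."""
    def wrapper(func):
        try:
            import numba
            return numba.jit(func, **kwargs)
        except ImportError:
            return func
    if function:
        return wrapper(function)
    return wrapper
```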
A simple example to display the effectiveness of Numba
End of explanation
%timeit np.var(data, ddof=1)
Explanation: That is almost 300 times faster!! Let's compare this to NumPy
End of explanation
summary_data = np.random.randn(1000, 100, 10)
school = az.load_arviz_data("centered_eight").posterior["mu"].values
Explanation: In certain scenarios, Numba can even outperform NumPy!
Numba within ArviZ
Let's see Numba's effect on a few of ArviZ functions
End of explanation
Numba.disable_numba()
Numba.numba_flag
%timeit ks_summary(summary_data)
%timeit ks_summary(school)
Numba.enable_numba()
Numba.numba_flag
%timeit ks_summary(summary_data)
%timeit ks_summary(school)
Explanation: The methods of the {class}~arviz.Numba class can be used to enable or disable numba. The attribute numba_flag indicates whether numba is enabled within ArviZ or not.
End of explanation
<END_TASK> |
15,510 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Time Series Prediction
Objectives
1. Build a linear, DNN and CNN model in keras to predict stock market behavior.
2. Build a simple RNN model and a multi-layer RNN model in keras.
3. Combine RNN and CNN architecture to create a keras model to predict stock market behavior.
In this lab we will build a custom Keras model to predict stock market behavior using the stock market dataset we created in the previous labs. We'll start with a linear, DNN and CNN model
Since the features of our model are sequential in nature, we'll next look at how to build various RNN models in keras. We'll start with a simple RNN model and then see how to create a multi-layer RNN in keras. We'll also see how to combine features of 1-dimensional CNNs with a typical RNN architecture.
We will be exploring a lot of different model types in this notebook. To keep track of your results, record the accuracy on the validation set in the table here. In machine learning there are rarely any "one-size-fits-all" so feel free to test out different hyperparameters (e.g. train steps, regularization, learning rates, optimizers, batch size) for each of the models. Keep track of your model performance in the chart below.
| Model | Validation Accuracy |
|----------|
Step2: Explore time series data
We'll start by pulling a small sample of the time series data from Big Query and write some helper functions to clean up the data for modeling. We'll use the data from the percent_change_sp500 table in BigQuery. The close_values_prior_260 column contains the close values for any given stock for the previous 260 days.
Step4: The function clean_data below does three things
Step7: Read data and preprocessing
Before we begin modeling, we'll preprocess our features by scaling to the z-score. This will ensure that the range of the feature values being fed to the model are comparable and should help with convergence during gradient descent.
Step8: Let's plot a few examples and see that the preprocessing steps were implemented correctly.
Step13: Make train-eval-test split
Next, we'll make repeatable splits for our train/validation/test datasets and save these datasets to local csv files. The query below will take a subsample of the entire dataset and then create a 70-15-15 split for the train/validation/test sets.
Step14: Modeling
For experimentation purposes, we'll train various models using data we can fit in memory using the .csv files we created above.
Step16: To monitor training progress and compare evaluation metrics for different models, we'll use the function below to plot metrics captured from the training job such as training and validation loss or accuracy.
Step17: Baseline
Before we begin modeling in keras, let's create a benchmark using a simple heuristic. Let's see what kind of accuracy we would get on the validation set if we predict the majority class of the training set.
Step18: Ok. So just naively guessing the most common outcome UP will give about 29.5% accuracy on the validation set.
Linear model
We'll start with a simple linear model, mapping our sequential input to a single fully dense layer.
Step19: The accuracy seems to level out pretty quickly. To report the accuracy, we'll average the accuracy on the validation set across the last few epochs of training.
Step20: Deep Neural Network
The linear model is an improvement on our naive benchmark. Perhaps we can do better with a more complicated model. Next, we'll create a deep neural network with keras. We'll experiment with a two-layer DNN here, but feel free to try a more complex model or add any other additional techniques to try to improve your performance.
Step21: Convolutional Neural Network
The DNN does slightly better. Let's see how a convolutional neural network performs.
A 1-dimensional convolution can be useful for extracting features from sequential data or deriving features from shorter, fixed-length segments of the data set. Check out the documentation for how to implement a Conv1D in Tensorflow. Max pooling is a downsampling strategy commonly used in conjunction with convolutional neural networks. Next, we'll build a CNN model in keras using Conv1D to create convolution layers and MaxPool1D to perform max pooling before passing to a fully connected dense layer.
Step22: Recurrent Neural Network
RNNs are particularly well-suited for learning sequential data. They retain state information from one iteration to the next by feeding the output from one cell as input for the next step. In the cell below, we'll build a RNN model in keras. The final state of the RNN is captured and then passed through a fully connected layer to produce a prediction.
Step23: Multi-layer RNN
Next, we'll build multi-layer RNN. Just as multiple layers of a deep neural network allow for more complicated features to be learned during training, additional RNN layers can potentially learn complex features in sequential data. For a multi-layer RNN the output of the first RNN layer is fed as the input into the next RNN layer.
Step24: Combining CNN and RNN architecture
Finally, we'll look at some model architectures which combine aspects of both convolutional and recurrent networks. For example, we can use a 1-dimensional convolution layer to process our sequences and create features which are then passed to an RNN model before prediction.
Step25: We can also try building a hybrid model which uses a 1-dimensional CNN to create features from the outputs of an RNN. | <ASSISTANT_TASK:>
Python Code:
import os
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
from google.cloud import bigquery
from tensorflow.keras.callbacks import ModelCheckpoint, TensorBoard
from tensorflow.keras.layers import (
GRU,
LSTM,
RNN,
Bidirectional,
Conv1D,
Dense,
MaxPool1D,
Reshape,
)
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.utils import to_categorical
# To plot pretty figures
%matplotlib inline
mpl.rc("axes", labelsize=14)
mpl.rc("xtick", labelsize=12)
mpl.rc("ytick", labelsize=12)
# For reproducible results.
from numpy.random import seed
seed(1)
tf.random.set_seed(2)
PROJECT = !(gcloud config get-value core/project)
PROJECT = PROJECT[0]
%env PROJECT = {PROJECT}
%env BUCKET = {PROJECT}
%env REGION = "us-central1"
Explanation: Time Series Prediction
Objectives
1. Build a linear, DNN and CNN model in keras to predict stock market behavior.
2. Build a simple RNN model and a multi-layer RNN model in keras.
3. Combine RNN and CNN architecture to create a keras model to predict stock market behavior.
In this lab we will build a custom Keras model to predict stock market behavior using the stock market dataset we created in the previous labs. We'll start with a linear, DNN and CNN model.
Since the features of our model are sequential in nature, we'll next look at how to build various RNN models in keras. We'll start with a simple RNN model and then see how to create a multi-layer RNN in keras. We'll also see how to combine features of 1-dimensional CNNs with a typical RNN architecture.
We will be exploring a lot of different model types in this notebook. To keep track of your results, record the accuracy on the validation set in the table here. In machine learning there are rarely any "one-size-fits-all" so feel free to test out different hyperparameters (e.g. train steps, regularization, learning rates, optimizers, batch size) for each of the models. Keep track of your model performance in the chart below.
| Model | Validation Accuracy |
|----------|:---------------:|
| Baseline | 0.295 |
| Linear | -- |
| DNN | -- |
| 1-d CNN | -- |
| simple RNN | -- |
| multi-layer RNN | -- |
| RNN using CNN features | -- |
| CNN using RNN features | -- |
Load necessary libraries and set up environment variables
End of explanation
%%time
bq = bigquery.Client(project=PROJECT)
bq_query =
#standardSQL
SELECT
symbol,
Date,
direction,
close_values_prior_260
FROM
`stock_market.eps_percent_change_sp500`
LIMIT
100
df_stock_raw = bq.query(bq_query).to_dataframe()
df_stock_raw.head()
Explanation: Explore time series data
We'll start by pulling a small sample of the time series data from Big Query and write some helper functions to clean up the data for modeling. We'll use the data from the percent_change_sp500 table in BigQuery. The close_values_prior_260 column contains the close values for any given stock for the previous 260 days.
End of explanation
def clean_data(input_df):
Cleans data to prepare for training.
Args:
input_df: Pandas dataframe.
Returns:
Pandas dataframe.
df = input_df.copy()
# TF doesn't accept datetimes in DataFrame.
df["Date"] = pd.to_datetime(df["Date"], errors="coerce")
df["Date"] = df["Date"].dt.strftime("%Y-%m-%d")
# TF requires numeric label.
df["direction_numeric"] = df["direction"].apply(
lambda x: {"DOWN": 0, "STAY": 1, "UP": 2}[x]
)
return df
df_stock = clean_data(df_stock_raw)
df_stock.head()
Explanation: The function clean_data below does three things:
1. First, we'll remove any inf or NA values
2. Next, we parse the Date field to read it as a string.
3. Lastly, we convert the label direction into a numeric quantity, mapping 'DOWN' to 0, 'STAY' to 1 and 'UP' to 2.
End of explanation
STOCK_HISTORY_COLUMN = "close_values_prior_260"
COL_NAMES = ["day_" + str(day) for day in range(0, 260)]
LABEL = "direction_numeric"
def _scale_features(df):
z-scale feature columns of Pandas dataframe.
Args:
features: Pandas dataframe.
Returns:
Pandas dataframe with each column standardized according to the
values in that column.
avg = df.mean()
std = df.std()
return (df - avg) / std
def create_features(df, label_name):
Create modeling features and label from Pandas dataframe.
Args:
df: Pandas dataframe.
label_name: str, the column name of the label.
Returns:
Pandas dataframe
# Expand 1 column containing a list of close prices to 260 columns.
time_series_features = df[STOCK_HISTORY_COLUMN].apply(pd.Series)
# Rename columns.
time_series_features.columns = COL_NAMES
time_series_features = _scale_features(time_series_features)
# Concat time series features with static features and label.
label_column = df[LABEL]
return pd.concat([time_series_features, label_column], axis=1)
df_features = create_features(df_stock, LABEL)
df_features.head()
Explanation: Read data and preprocessing
Before we begin modeling, we'll preprocess our features by scaling to the z-score. This will ensure that the range of the feature values being fed to the model are comparable and should help with convergence during gradient descent.
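Concretely, each feature column is standardized as z = (x - mean) / std, computed column-wise, which is what the _scale_features helper above does.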
End of explanation
ix_to_plot = [0, 1, 9, 5]
fig, ax = plt.subplots(1, 1, figsize=(15, 8))
for ix in ix_to_plot:
label = df_features["direction_numeric"].iloc[ix]
example = df_features[COL_NAMES].iloc[ix]
ax = example.plot(label=label, ax=ax)
ax.set_ylabel("scaled price")
ax.set_xlabel("prior days")
ax.legend()
Explanation: Let's plot a few examples and see that the preprocessing steps were implemented correctly.
End of explanation
def _create_split(phase):
Create string to produce train/valid/test splits for a SQL query.
Args:
phase: str, either TRAIN, VALID, or TEST.
Returns:
String.
floor, ceiling = "2002-11-01", "2010-07-01"
if phase == "VALID":
floor, ceiling = "2010-07-01", "2011-09-01"
elif phase == "TEST":
floor, ceiling = "2011-09-01", "2012-11-30"
return
WHERE Date >= '{}'
AND Date < '{}'
.format(
floor, ceiling
)
def create_query(phase):
Create SQL query to create train/valid/test splits on subsample.
Args:
phase: str, either TRAIN, VALID, or TEST.
sample_size: str, amount of data to take for subsample.
Returns:
String.
basequery =
#standardSQL
SELECT
symbol,
Date,
direction,
close_values_prior_260
FROM
`stock_market.eps_percent_change_sp500`
return basequery + _create_split(phase)
bq = bigquery.Client(project=PROJECT)
for phase in ["TRAIN", "VALID", "TEST"]:
# 1. Create query string
query_string = create_query(phase)
# 2. Load results into DataFrame
df = bq.query(query_string).to_dataframe()
# 3. Clean, preprocess dataframe
df = clean_data(df)
df = create_features(df, label_name="direction_numeric")
# 3. Write DataFrame to CSV
if not os.path.exists("../data"):
os.mkdir("../data")
df.to_csv(
f"../data/stock-{phase.lower()}.csv",
index_label=False,
index=False,
)
print(
"Wrote {} lines to {}".format(
len(df), f"../data/stock-{phase.lower()}.csv"
)
)
ls -la ../data
Explanation: Make train-eval-test split
Next, we'll make repeatable splits for our train/validation/test datasets and save these datasets to local csv files. The query below will take a subsample of the entire dataset and then create a 70-15-15 split for the train/validation/test sets.
End of explanation
N_TIME_STEPS = 260
N_LABELS = 3
Xtrain = pd.read_csv("../data/stock-train.csv")
Xvalid = pd.read_csv("../data/stock-valid.csv")
ytrain = Xtrain.pop(LABEL)
yvalid = Xvalid.pop(LABEL)
ytrain_categorical = to_categorical(ytrain.values)
yvalid_categorical = to_categorical(yvalid.values)
Explanation: Modeling
For experimentation purposes, we'll train various models using data we can fit in memory using the .csv files we created above.
End of explanation
def plot_curves(train_data, val_data, label="Accuracy"):
Plot training and validation metrics on single axis.
Args:
      train_data: list, metrics obtained from training data.
val_data: list, metrics obtained from validation data.
label: str, title and label for plot.
Returns:
Matplotlib plot.
plt.plot(
np.arange(len(train_data)) + 0.5,
train_data,
"b.-",
label="Training " + label,
)
plt.plot(
np.arange(len(val_data)) + 1,
val_data,
"r.-",
label="Validation " + label,
)
plt.gca().xaxis.set_major_locator(mpl.ticker.MaxNLocator(integer=True))
plt.legend(fontsize=14)
plt.xlabel("Epochs")
plt.ylabel(label)
plt.grid(True)
Explanation: To monitor training progress and compare evaluation metrics for different models, we'll use the function below to plot metrics captured from the training job such as training and validation loss or accuracy.
End of explanation
sum(yvalid == ytrain.value_counts().idxmax()) / yvalid.shape[0]
Explanation: Baseline
Before we begin modeling in keras, let's create a benchmark using a simple heuristic. Let's see what kind of accuracy we would get on the validation set if we predict the majority class of the training set.
End of explanation
# TODO 1a
model = Sequential()
model.add(
Dense(
units=N_LABELS,
activation="softmax",
kernel_regularizer=tf.keras.regularizers.l1(l=0.1),
)
)
model.compile(
optimizer=Adam(lr=0.001),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
history = model.fit(
x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=30,
verbose=0,
)
plot_curves(history.history["loss"], history.history["val_loss"], label="Loss")
plot_curves(
history.history["accuracy"],
history.history["val_accuracy"],
label="Accuracy",
)
Explanation: Ok. So just naively guessing the most common outcome UP will give about 29.5% accuracy on the validation set.
Linear model
We'll start with a simple linear model, mapping our sequential input to a single fully dense layer.
End of explanation
np.mean(history.history["val_accuracy"][-5:])
Explanation: The accuracy seems to level out pretty quickly. To report the accuracy, we'll average the accuracy on the validation set across the last few epochs of training.
End of explanation
# TODO 1b
dnn_hidden_units = [16, 8]
model = Sequential()
for layer in dnn_hidden_units:
model.add(Dense(units=layer, activation="relu"))
model.add(
Dense(
units=N_LABELS,
activation="softmax",
kernel_regularizer=tf.keras.regularizers.l1(l=0.1),
)
)
model.compile(
optimizer=Adam(lr=0.001),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
history = model.fit(
x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=10,
verbose=0,
)
plot_curves(history.history["loss"], history.history["val_loss"], label="Loss")
plot_curves(
history.history["accuracy"],
history.history["val_accuracy"],
label="Accuracy",
)
np.mean(history.history["val_accuracy"][-5:])
Explanation: Deep Neural Network
The linear model is an improvement on our naive benchmark. Perhaps we can do better with a more complicated model. Next, we'll create a deep neural network with keras. We'll experiment with a two-layer DNN here, but feel free to try a more complex model or add any other additional techniques to try to improve your performance.
End of explanation
# TODO 1c
model = Sequential()
# Convolutional layer
model.add(Reshape(target_shape=[N_TIME_STEPS, 1]))
model.add(
Conv1D(
filters=5,
kernel_size=5,
strides=2,
padding="valid",
input_shape=[None, 1],
)
)
model.add(MaxPool1D(pool_size=2, strides=None, padding="valid"))
# Flatten the result and pass through DNN.
model.add(tf.keras.layers.Flatten())
model.add(Dense(units=N_TIME_STEPS // 4, activation="relu"))
model.add(
Dense(
units=N_LABELS,
activation="softmax",
kernel_regularizer=tf.keras.regularizers.l1(l=0.1),
)
)
model.compile(
optimizer=Adam(lr=0.01),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
history = model.fit(
x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=10,
verbose=0,
)
plot_curves(history.history["loss"], history.history["val_loss"], label="Loss")
plot_curves(
history.history["accuracy"],
history.history["val_accuracy"],
label="Accuracy",
)
np.mean(history.history["val_accuracy"][-5:])
Explanation: Convolutional Neural Network
The DNN does slightly better. Let's see how a convolutional neural network performs.
A 1-dimensional convolution can be useful for extracting features from sequential data or deriving features from shorter, fixed-length segments of the data set. Check out the documentation for how to implement a Conv1D in Tensorflow. Max pooling is a downsampling strategy commonly used in conjunction with convolutional neural networks. Next, we'll build a CNN model in keras using Conv1D to create convolution layers and MaxPool1D to perform max pooling before passing to a fully connected dense layer.
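As a quick sanity check on the shapes in this model: with "valid" padding the convolution output length is floor((L_in - kernel_size) / strides) + 1, so the 260-step input becomes floor((260 - 5) / 2) + 1 = 128 steps with 5 filters each; MaxPool1D(pool_size=2) halves that to 64 steps, and flattening yields 64 * 5 = 320 features feeding the dense layers.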
End of explanation
# TODO 2a
model = Sequential()
# Reshape inputs to pass through RNN layer.
model.add(Reshape(target_shape=[N_TIME_STEPS, 1]))
model.add(LSTM(N_TIME_STEPS // 8, activation="relu", return_sequences=False))
model.add(
Dense(
units=N_LABELS,
activation="softmax",
kernel_regularizer=tf.keras.regularizers.l1(l=0.1),
)
)
# Create the model.
model.compile(
optimizer=Adam(lr=0.001),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
history = model.fit(
x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=40,
verbose=0,
)
plot_curves(history.history["loss"], history.history["val_loss"], label="Loss")
plot_curves(
history.history["accuracy"],
history.history["val_accuracy"],
label="Accuracy",
)
np.mean(history.history["val_accuracy"][-5:])
Explanation: Recurrent Neural Network
RNNs are particularly well-suited for learning sequential data. They retain state information from one iteration to the next by feeding the output from one cell as input for the next step. In the cell below, we'll build a RNN model in keras. The final state of the RNN is captured and then passed through a fully connected layer to produce a prediction.
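For a vanilla RNN cell, the recurrence is simply h_t = tanh(W_x * x_t + W_h * h_(t-1) + b); the LSTM cell used in this model follows the same idea but adds input, forget and output gates to control what is kept in the state.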
End of explanation
# TODO 2b
rnn_hidden_units = [N_TIME_STEPS // 16, N_TIME_STEPS // 32]
model = Sequential()
# Reshape inputs to pass through RNN layer.
model.add(Reshape(target_shape=[N_TIME_STEPS, 1]))
for layer in rnn_hidden_units[:-1]:
model.add(GRU(units=layer, activation="relu", return_sequences=True))
model.add(GRU(units=rnn_hidden_units[-1], return_sequences=False))
model.add(
Dense(
units=N_LABELS,
activation="softmax",
kernel_regularizer=tf.keras.regularizers.l1(l=0.1),
)
)
model.compile(
optimizer=Adam(lr=0.001),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
history = model.fit(
x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=50,
verbose=0,
)
plot_curves(history.history["loss"], history.history["val_loss"], label="Loss")
plot_curves(
history.history["accuracy"],
history.history["val_accuracy"],
label="Accuracy",
)
np.mean(history.history["val_accuracy"][-5:])
Explanation: Multi-layer RNN
Next, we'll build a multi-layer RNN. Just as multiple layers of a deep neural network allow for more complicated features to be learned during training, additional RNN layers can potentially learn complex features in sequential data. For a multi-layer RNN the output of the first RNN layer is fed as the input into the next RNN layer.
End of explanation
# TODO 3a
model = Sequential()
# Reshape inputs for convolutional layer
model.add(Reshape(target_shape=[N_TIME_STEPS, 1]))
model.add(
Conv1D(
filters=20,
kernel_size=4,
strides=2,
padding="valid",
input_shape=[None, 1],
)
)
model.add(MaxPool1D(pool_size=2, strides=None, padding="valid"))
model.add(
LSTM(
units=N_TIME_STEPS // 2,
return_sequences=False,
kernel_regularizer=tf.keras.regularizers.l1(l=0.1),
)
)
model.add(Dense(units=N_LABELS, activation="softmax"))
model.compile(
optimizer=Adam(lr=0.001),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
history = model.fit(
x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=30,
verbose=0,
)
plot_curves(history.history["loss"], history.history["val_loss"], label="Loss")
plot_curves(
history.history["accuracy"],
history.history["val_accuracy"],
label="Accuracy",
)
np.mean(history.history["val_accuracy"][-5:])
Explanation: Combining CNN and RNN architecture
Finally, we'll look at some model architectures which combine aspects of both convolutional and recurrent networks. For example, we can use a 1-dimensional convolution layer to process our sequences and create features which are then passed to an RNN model before prediction.
End of explanation
# TODO 3b
rnn_hidden_units = [N_TIME_STEPS // 32, N_TIME_STEPS // 64]
model = Sequential()
# Reshape inputs and pass through RNN layer.
model.add(Reshape(target_shape=[N_TIME_STEPS, 1]))
for layer in rnn_hidden_units:
model.add(LSTM(layer, return_sequences=True))
# Apply 1d convolution to RNN outputs.
model.add(Conv1D(filters=5, kernel_size=3, strides=2, padding="valid"))
model.add(MaxPool1D(pool_size=4, strides=None, padding="valid"))
# Flatten the convolution output and pass through DNN.
model.add(tf.keras.layers.Flatten())
model.add(
Dense(
units=N_TIME_STEPS // 32,
activation="relu",
kernel_regularizer=tf.keras.regularizers.l1(l=0.1),
)
)
model.add(
Dense(
units=N_LABELS,
activation="softmax",
kernel_regularizer=tf.keras.regularizers.l1(l=0.1),
)
)
model.compile(
optimizer=Adam(lr=0.001),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
history = model.fit(
x=Xtrain.values,
y=ytrain_categorical,
batch_size=Xtrain.shape[0],
validation_data=(Xvalid.values, yvalid_categorical),
epochs=80,
verbose=0,
)
plot_curves(history.history["loss"], history.history["val_loss"], label="Loss")
plot_curves(
history.history["accuracy"],
history.history["val_accuracy"],
label="Accuracy",
)
np.mean(history.history["val_accuracy"][-5:])
Explanation: We can also try building a hybrid model which uses a 1-dimensional CNN to create features from the outputs of an RNN.
End of explanation
<END_TASK> |
15,511 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step 0 - hyperparams
vocab_size (all the potential words you could have, i.e. the classification targets in the translation case)
and max sequence length are the SAME thing
decoder RNN hidden units are usually the same size as the encoder RNN hidden units in translation, but for our case there does not really seem to be such a relationship; we can experiment and find out later, as it is not a priority right now
Step1: Once generate data
Step2: Step 1 - collect data
Step3: Step 2 - Build model
Step4: Step 3 training the network
Step5: One epoch takes approximately 268 secs
If we want to let it run for ~8 hours = 8 * 3600 / 268 ~= 107 epochs
So let it run for 100 epochs and see how it behaves | <ASSISTANT_TASK:>
Python Code:
num_units = 400 #state size
input_len = 60
target_len = 30
batch_size = 64
with_EOS = False
total_size = 57994
train_size = 46400
test_size = 11584
Explanation: Step 0 - hyperparams
vocab_size (all the potential words you could have, i.e. the classification targets in the translation case)
and max sequence length are the SAME thing
decoder RNN hidden units are usually the same size as the encoder RNN hidden units in translation, but for our case there does not really seem to be such a relationship; we can experiment and find out later, as it is not a priority right now
End of explanation
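# --- Illustrative aside, not the PriceHistorySeq2SeqDynDecIns model used below: a generic Keras
# --- encoder-decoder sketch showing why encoder and decoder hidden sizes are usually tied. When
# --- the decoder is initialised from the encoder's final state, both must use the same num_units.
import tensorflow as tf  # assumed available, since this notebook trains a TensorFlow model
enc_in = tf.keras.Input(shape=(input_len, 1))
_, enc_h, enc_c = tf.keras.layers.LSTM(num_units, return_state=True)(enc_in)
dec_in = tf.keras.Input(shape=(target_len, 1))
dec_seq = tf.keras.layers.LSTM(num_units, return_sequences=True)(dec_in, initial_state=[enc_h, enc_c])
seq2seq_sketch = tf.keras.Model([enc_in, dec_in], tf.keras.layers.Dense(1)(dec_seq))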
data_folder = '../../../../Dropbox/data'
ph_data_path = '../data/price_history'
npz_full = ph_data_path + '/price_history_dp_60to30_57994.npz'
npz_train = ph_data_path + '/price_history_dp_60to30_57994_46400_train.npz'
npz_test = ph_data_path + '/price_history_dp_60to30_57994_11584_test.npz'
Explanation: Once generate data
End of explanation
# dp = PriceHistorySeq2SeqDataProvider(npz_path=npz_train, batch_size=batch_size, with_EOS=with_EOS)
# dp.inputs.shape, dp.targets.shape
# aa, bb = dp.next()
# aa.shape, bb.shape
Explanation: Step 1 - collect data
End of explanation
model = PriceHistorySeq2SeqDynDecIns(rng=random_state, dtype=dtype, config=config, with_EOS=with_EOS)
# graph = model.getGraph(batch_size=batch_size,
# num_units=num_units,
# input_len=input_len,
# target_len=target_len)
#show_graph(graph)
Explanation: Step 2 - Build model
End of explanation
best_params = [500,
tf.nn.tanh,
0.0001,
0.62488034788862112,
0.001]
num_units, activation, lamda2, keep_prob_input, learning_rate = best_params
batch_size
def experiment():
return model.run(npz_path=npz_train,
npz_test = npz_test,
epochs=100,
batch_size = batch_size,
num_units = num_units,
input_len=input_len,
target_len=target_len,
learning_rate = learning_rate,
preds_gather_enabled=True,
batch_norm_enabled = True,
activation = activation,
decoder_first_input = PriceHistorySeq2SeqDynDecIns.DECODER_FIRST_INPUT.ZEROS,
keep_prob_input = keep_prob_input,
lamda2 = lamda2,
)
#%%time
dyn_stats, preds_dict, targets = get_or_run_nn(experiment, filename='024_seq2seq_60to30_002',
nn_runs_folder= data_folder + '/nn_runs')
Explanation: Step 3 training the network
End of explanation
dyn_stats.plotStats()
plt.show()
data_len = len(targets)
mses = np.empty(data_len)
for ii, (pred, target) in enumerate(zip(preds_dict.values(), targets.values())):
mses[ii] = mean_squared_error(pred, target)
np.mean(mses)
huber_losses = np.empty(data_len)
for ii, (pred, target) in enumerate(zip(preds_dict.values(), targets.values())):
huber_losses[ii] = np.mean(huber_loss(pred, target))
np.mean(huber_losses)
targets_arr = np.array(targets.values())
targets_arr.shape
preds_arr = np.array(preds_dict.values())
preds_arr.shape
np.mean(huber_loss(y_true=targets_arr, y_pred=preds_arr))
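# --- Aside: the huber_loss helper used above comes from an import earlier in the original notebook
# --- and is not shown in this excerpt. For reference, an element-wise Huber loss can be written as
# --- below; delta=1.0 is an assumption here, and the actual helper may differ.
import numpy as np  # already imported earlier in the notebook; repeated so this aside stands alone
def _reference_huber(y_true, y_pred, delta=1.0):
    err = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    quadratic = np.minimum(err, delta)
    return 0.5 * quadratic ** 2 + delta * (err - quadratic)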
r2_scores = [r2_score(y_true=targets[ind], y_pred=preds_dict[ind])
for ind in range(len(targets))]
ind = np.argmin(r2_scores)
ind
reals = targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
#sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(targets[ind], preds_dict[ind])[0]
for ind in range(len(targets))]
np.mean(dtw_scores)
coint(preds, reals)
cur_ind = np.random.randint(len(targets))
reals = targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
Explanation: One epoch takes approximately 268 secs
If we want to let it run for ~8 hours = 8 * 3600 / 268 ~= 107 epochs
So let it run for 100 epochs and see how it behaves
End of explanation
<END_TASK> |
15,512 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Vertex SDK
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
Step8: Only if your bucket doesn't already exist
Step9: Finally, validate access to your Cloud Storage bucket by examining its contents
Step10: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Step11: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
Step12: Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify
Step13: Set pre-built containers
Set the pre-built Docker container image for training and prediction.
For the latest list, see Pre-built containers for training.
For the latest list, see Pre-built containers for prediction.
Step14: Set machine type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for training and prediction.
machine type
n1-standard
Step15: Tutorial
Now you are ready to start creating your own custom model and training for Boston Housing.
Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note that when we refer to it in the worker pool specification, we replace the directory slash with a dot (trainer.task) and drop the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
Step16: Task.py contents
In the next cell, you write the contents of the training script task.py. I won't go into detail; it's just there for you to browse. In summary
Step17: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
Step18: Create and run custom training job
To train a custom model, you perform two steps
Step19: Prepare your command-line arguments
Now define the command-line arguments for your custom training container
Step20: Run the custom training job
Next, you run the custom job to start the training job by invoking the method run, with the following parameters
Step21: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
Step22: Evaluate the model
Now let's find out how good the model is.
Load evaluation data
You will load the Boston Housing test (holdout) data from tf.keras.datasets, using the method load_data(). This returns the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements
Step23: Perform the model evaluation
Now evaluate how well the model in the custom job did.
Step24: Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
You also need to know the name of the serving function's input and output layer for constructing the explanation metadata -- which is discussed subsequently.
Step25: Explanation Specification
To get explanations when doing a prediction, you must enable the explanation capability and set corresponding settings when you upload your custom model to a Vertex Model resource. These settings are referred to as the explanation metadata, which consists of
Step26: Explanation Metadata
Let's first dive deeper into the explanation metadata, which consists of
Step27: Upload the model
Next, upload your model to a Model resource using Model.upload() method, with the following parameters
Step28: Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method, with the following parameters
Step29: Get test item
You will use an example out of the test (holdout) portion of the dataset as a test item.
Step30: Make the prediction with explanation
Now that your Model resource is deployed to an Endpoint resource, one can do online explanations by sending prediction requests to the Endpoint resource.
Request
The format of each instance is
Step31: Understanding the explanations response
First, you will look at what your model predicted and compare it to the actual value.
Step32: Examine feature attributions
Next you will look at the feature attributions for this particular example. Positive attribution values mean a particular feature pushed your model prediction up by that amount, and vice versa for negative attribution values.
Step33: Check your explanations and baselines
To better make sense of the feature attributions you're getting, you should compare them with your model's baseline. In most cases, the sum of your attribution values + the baseline should be very close to your model's predicted value for each input. Also note that for regression models, the baseline_score returned from AI Explanations will be the same for each example sent to your model. For classification models, each class will have its own baseline.
In this section you'll send 10 test examples to your model for prediction in order to compare the feature attributions with the baseline. Then you'll run each test example's attributions through a sanity check in the sanity_check_explanations method.
Get explanations
Step34: Sanity check
In the function below you perform a sanity check on the explanations.
Step35: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
Step36: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | <ASSISTANT_TASK:>
Python Code:
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
Explanation: Vertex SDK: Custom training tabular regression model for online prediction with explainability using get_metadata
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_online_explain_get_metadata.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_online_explain_get_metadata.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/ai/platform/notebooks/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/tree/master/notebooks/official/automl/sdk_custom_tabular_regression_online_explain_get_metadata.ipynb">
Open in Google Cloud Notebooks
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex SDK to train and deploy a custom tabular regression model for online prediction with explanation.
Dataset
The dataset used for this tutorial is the Boston Housing Prices dataset. The version of the dataset you will use in this tutorial is built into TensorFlow. The trained model predicts the median price of a house in units of 1K USD.
Objective
In this tutorial, you create a custom model from a Python script in a Google prebuilt Docker container using the Vertex SDK, and then do a prediction with explanations on the deployed model by sending data. You can alternatively create custom models using gcloud command-line tool or online using Cloud Console.
The steps performed include:
Create a Vertex custom job for training a model.
Train a TensorFlow model.
Retrieve and load the model artifacts.
View the model evaluation.
Set explanation parameters.
Upload the model as a Vertex Model resource.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction with explanation.
Undeploy the Model resource.
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
if os.getenv("IS_TESTING"):
! pip3 install --upgrade tensorflow $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
BUCKET_NAME = "gs://[your-bucket-name]" # @param {type:"string"}
if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
BUCKET_NAME = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
! gsutil mb -l $REGION $BUCKET_NAME
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
! gsutil ls -al $BUCKET_NAME
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
import google.cloud.aiplatform as aip
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_NAME)
Explanation: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
End of explanation
if os.getenv("IS_TESTING_TRAIN_GPU"):
TRAIN_GPU, TRAIN_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_TRAIN_GPU")),
)
else:
TRAIN_GPU, TRAIN_NGPU = (None, None)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
Explanation: Set hardware accelerators
You can set hardware accelerators for training and prediction.
Set the variables TRAIN_GPU/TRAIN_NGPU and DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
Otherwise specify (None, None) to use a container image to run on a CPU.
Learn more about hardware accelerator support for your region.
Note: For GPU support, TF releases before 2.3 will fail to load the custom model in this tutorial. This is a known issue, caused by static graph ops that are generated in the serving function, and it is fixed in TF 2.3. If you encounter this issue on your own custom models, use a container image for TF 2.3 with GPU support.
End of explanation
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2-1"
if TF[0] == "2":
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if TRAIN_GPU:
TRAIN_VERSION = "tf-gpu.{}".format(TF)
else:
TRAIN_VERSION = "tf-cpu.{}".format(TF)
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
TRAIN_IMAGE = "gcr.io/cloud-aiplatform/training/{}:latest".format(TRAIN_VERSION)
DEPLOY_IMAGE = "gcr.io/cloud-aiplatform/prediction/{}:latest".format(DEPLOY_VERSION)
print("Training:", TRAIN_IMAGE, TRAIN_GPU, TRAIN_NGPU)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
Explanation: Set pre-built containers
Set the pre-built Docker container image for training and prediction.
For the latest list, see Pre-built containers for training.
For the latest list, see Pre-built containers for prediction.
End of explanation
if os.getenv("IS_TESTING_TRAIN_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_TRAIN_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
TRAIN_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", TRAIN_COMPUTE)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
Explanation: Set machine type
Next, set the machine type to use for training and prediction.
Set the variables TRAIN_COMPUTE and DEPLOY_COMPUTE to configure the compute resources for the VMs you will use for training and prediction.
machine type
n1-standard: 3.75 GB of memory per vCPU
n1-highmem: 6.5 GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96]
Note: The following is not supported for training:
standard: 2 vCPUs
highcpu: 2, 4 and 8 vCPUs
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs.
End of explanation
# Make folder for Python training script
! rm -rf custom
! mkdir custom
# Add package information
! touch custom/README.md
setup_cfg = "[egg_info]\n\ntag_build =\n\ntag_date = 0"
! echo "$setup_cfg" > custom/setup.cfg
setup_py = "import setuptools\n\nsetuptools.setup(\n\n install_requires=[\n\n 'tensorflow_datasets==1.3.0',\n\n ],\n\n packages=setuptools.find_packages())"
! echo "$setup_py" > custom/setup.py
pkg_info = "Metadata-Version: 1.0\n\nName: Boston Housing tabular regression\n\nVersion: 0.0.0\n\nSummary: Demostration training script\n\nHome-page: www.google.com\n\nAuthor: Google\n\nAuthor-email: aferlitsch@google.com\n\nLicense: Public\n\nDescription: Demo\n\nPlatform: Vertex"
! echo "$pkg_info" > custom/PKG-INFO
# Make the training subfolder
! mkdir custom/trainer
! touch custom/trainer/__init__.py
Explanation: Tutorial
Now you are ready to start creating your own custom model and training for Boston Housing.
Examine the training package
Package layout
Before you start the training, you will look at how a Python package is assembled for a custom training job. When unarchived, the package contains the following directory/file layout.
PKG-INFO
README.md
setup.cfg
setup.py
trainer
__init__.py
task.py
The files setup.cfg and setup.py are the instructions for installing the package into the operating environment of the Docker image.
The file trainer/task.py is the Python script for executing the custom training job. Note that when we refer to it in the worker pool specification, we replace the directory slash with a dot (trainer.task) and drop the file suffix (.py).
Package Assembly
In the following cells, you will assemble the training package.
End of explanation
%%writefile custom/trainer/task.py
# Single, Mirror and Multi-Machine Distributed Training for Boston Housing
import tensorflow_datasets as tfds
import tensorflow as tf
from tensorflow.python.client import device_lib
import numpy as np
import argparse
import os
import sys
tfds.disable_progress_bar()
parser = argparse.ArgumentParser()
parser.add_argument('--model-dir', dest='model_dir',
default=os.getenv('AIP_MODEL_DIR'), type=str, help='Model dir.')
parser.add_argument('--lr', dest='lr',
default=0.001, type=float,
help='Learning rate.')
parser.add_argument('--epochs', dest='epochs',
default=20, type=int,
help='Number of epochs.')
parser.add_argument('--steps', dest='steps',
default=100, type=int,
help='Number of steps per epoch.')
parser.add_argument('--distribute', dest='distribute', type=str, default='single',
help='distributed training strategy')
parser.add_argument('--param-file', dest='param_file',
default='/tmp/param.txt', type=str,
help='Output file for parameters')
args = parser.parse_args()
print('Python Version = {}'.format(sys.version))
print('TensorFlow Version = {}'.format(tf.__version__))
print('TF_CONFIG = {}'.format(os.environ.get('TF_CONFIG', 'Not found')))
# Single Machine, single compute device
if args.distribute == 'single':
if tf.test.is_gpu_available():
strategy = tf.distribute.OneDeviceStrategy(device="/gpu:0")
else:
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
# Single Machine, multiple compute device
elif args.distribute == 'mirror':
strategy = tf.distribute.MirroredStrategy()
# Multiple Machine, multiple compute device
elif args.distribute == 'multi':
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
# Multi-worker configuration
print('num_replicas_in_sync = {}'.format(strategy.num_replicas_in_sync))
def make_dataset():
# Scaling Boston Housing data features
def scale(feature):
max = np.max(feature)
feature = (feature / max).astype(np.float)
return feature, max
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.boston_housing.load_data(
path="boston_housing.npz", test_split=0.2, seed=113
)
params = []
for _ in range(13):
x_train[_], max = scale(x_train[_])
x_test[_], _ = scale(x_test[_])
params.append(max)
# store the normalization (max) value for each feature
with tf.io.gfile.GFile(args.param_file, 'w') as f:
f.write(str(params))
return (x_train, y_train), (x_test, y_test)
# Build the Keras model
def build_and_compile_dnn_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(128, activation='relu', input_shape=(13,)),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dense(1, activation='linear')
])
model.compile(
loss='mse',
optimizer=tf.keras.optimizers.RMSprop(learning_rate=args.lr))
return model
NUM_WORKERS = strategy.num_replicas_in_sync
# Here the batch size scales up by number of workers since
# `tf.data.Dataset.batch` expects the global batch size.
BATCH_SIZE = 16
GLOBAL_BATCH_SIZE = BATCH_SIZE * NUM_WORKERS
with strategy.scope():
# Creation of dataset, and model building/compiling need to be within
# `strategy.scope()`.
model = build_and_compile_dnn_model()
# Train the model
(x_train, y_train), (x_test, y_test) = make_dataset()
model.fit(x_train, y_train, epochs=args.epochs, batch_size=GLOBAL_BATCH_SIZE)
model.save(args.model_dir)
Explanation: Task.py contents
In the next cell, you write the contents of the training script task.py. I won't go into detail; it's just there for you to browse. In summary:
Get the directory where to save the model artifacts from the command line (--model_dir), and if not specified, then from the environment variable AIP_MODEL_DIR.
Loads Boston Housing dataset from TF.Keras builtin datasets
Builds a simple deep neural network model using TF.Keras model API.
Compiles the model (compile()).
Sets a training distribution strategy according to the argument args.distribute.
Trains the model (fit()) with epochs specified by args.epochs.
Saves the trained model (save(args.model_dir)) to the specified model directory.
Saves the maximum value for each feature f.write(str(params)) to the specified parameters file.
End of explanation
! rm -f custom.tar custom.tar.gz
! tar cvf custom.tar custom
! gzip custom.tar
! gsutil cp custom.tar.gz $BUCKET_NAME/trainer_boston.tar.gz
Explanation: Store training script on your Cloud Storage bucket
Next, you package the training folder into a compressed tar ball, and then store it in your Cloud Storage bucket.
End of explanation
job = aip.CustomTrainingJob(
display_name="boston_" + TIMESTAMP,
script_path="custom/trainer/task.py",
container_uri=TRAIN_IMAGE,
requirements=["gcsfs==0.7.1", "tensorflow-datasets==4.4"],
)
print(job)
Explanation: Create and run custom training job
To train a custom model, you perform two steps: 1) create a custom training job, and 2) run the job.
Create custom training job
A custom training job is created with the CustomTrainingJob class, with the following parameters:
display_name: The human readable name for the custom training job.
container_uri: The training container image.
requirements: Package requirements for the training container image (e.g., pandas).
script_path: The relative path to the training script.
End of explanation
MODEL_DIR = "{}/{}".format(BUCKET_NAME, TIMESTAMP)
EPOCHS = 20
STEPS = 100
DIRECT = True
if DIRECT:
CMDARGS = [
"--model-dir=" + MODEL_DIR,
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
else:
CMDARGS = [
"--epochs=" + str(EPOCHS),
"--steps=" + str(STEPS),
]
Explanation: Prepare your command-line arguments
Now define the command-line arguments for your custom training container:
args: The command-line arguments to pass to the executable that is set as the entry point into the container.
--model-dir : For our demonstrations, we use this command-line argument to specify where to store the model artifacts.
direct: You pass the Cloud Storage location as a command line argument to your training script (set variable DIRECT = True), or
indirect: The service passes the Cloud Storage location as the environment variable AIP_MODEL_DIR to your training script (set variable DIRECT = False). In this case, you tell the service the model artifact location in the job specification.
"--epochs=" + EPOCHS: The number of epochs for training.
"--steps=" + STEPS: The number of steps per epoch.
End of explanation
if TRAIN_GPU:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
accelerator_type=TRAIN_GPU.name,
accelerator_count=TRAIN_NGPU,
base_output_dir=MODEL_DIR,
sync=True,
)
else:
job.run(
args=CMDARGS,
replica_count=1,
machine_type=TRAIN_COMPUTE,
base_output_dir=MODEL_DIR,
sync=True,
)
model_path_to_deploy = MODEL_DIR
Explanation: Run the custom training job
Next, you run the custom job to start the training job by invoking the method run, with the following parameters:
args: The command-line arguments to pass to the training script.
replica_count: The number of compute instances for training (replica_count = 1 is single node training).
machine_type: The machine type for the compute instances.
accelerator_type: The hardware accelerator type.
accelerator_count: The number of accelerators to attach to a worker replica.
base_output_dir: The Cloud Storage location to write the model artifacts to.
sync: Whether to block until completion of the job.
End of explanation
import tensorflow as tf
local_model = tf.keras.models.load_model(MODEL_DIR)
Explanation: Load the saved model
Your model is stored in a TensorFlow SavedModel format in a Cloud Storage bucket. Now load it from the Cloud Storage bucket, and then you can do some things, like evaluate the model, and do a prediction.
To load, you use the TF.Keras model.load_model() method passing it the Cloud Storage path where the model is saved -- specified by MODEL_DIR.
End of explanation
import numpy as np
from tensorflow.keras.datasets import boston_housing
(_, _), (x_test, y_test) = boston_housing.load_data(
path="boston_housing.npz", test_split=0.2, seed=113
)
def scale(feature):
max = np.max(feature)
feature = (feature / max).astype(np.float32)
return feature
# Let's save one data item that has not been scaled
x_test_notscaled = x_test[0:1].copy()
for _ in range(13):
x_test[_] = scale(x_test[_])
x_test = x_test.astype(np.float32)
print(x_test.shape, x_test.dtype, y_test.shape)
print("scaled", x_test[0])
print("unscaled", x_test_notscaled)
Explanation: Evaluate the model
Now let's find out how good the model is.
Load evaluation data
You will load the Boston Housing test (holdout) data from tf.keras.datasets, using the method load_data(). This returns the dataset as a tuple of two elements. The first element is the training data and the second is the test data. Each element is also a tuple of two elements: the feature data, and the corresponding labels (median value of owner-occupied home).
You don't need the training data, which is why we loaded it as (_, _).
Before you can run the data through evaluation, you need to preprocess it:
x_test:
1. Normalize (rescale) the data in each column by dividing each value by the maximum value of that column. This replaces each single value with a 32-bit floating point number between 0 and 1.
End of explanation
local_model.evaluate(x_test, y_test)
Explanation: Perform the model evaluation
Now evaluate how well the model in the custom job did.
End of explanation
loaded = tf.saved_model.load(model_path_to_deploy)
serving_input = list(
loaded.signatures["serving_default"].structured_input_signature[1].keys()
)[0]
print("Serving function input:", serving_input)
serving_output = list(loaded.signatures["serving_default"].structured_outputs.keys())[0]
print("Serving function output:", serving_output)
input_name = local_model.input.name
print("Model input name:", input_name)
output_name = local_model.output.name
print("Model output name:", output_name)
Explanation: Get the serving function signature
You can get the signatures of your model's input and output layers by reloading the model into memory, and querying it for the signatures corresponding to each layer.
When making a prediction request, you need to route the request to the serving function instead of the model, so you need to know the input layer name of the serving function -- which you will use later when you make a prediction request.
You also need to know the name of the serving function's input and output layer for constructing the explanation metadata -- which is discussed subsequently.
End of explanation
XAI = "ig" # [ shapley, ig, xrai ]
if XAI == "shapley":
PARAMETERS = {"sampled_shapley_attribution": {"path_count": 10}}
elif XAI == "ig":
PARAMETERS = {"integrated_gradients_attribution": {"step_count": 50}}
elif XAI == "xrai":
PARAMETERS = {"xrai_attribution": {"step_count": 50}}
parameters = aip.explain.ExplanationParameters(PARAMETERS)
Explanation: Explanation Specification
To get explanations when doing a prediction, you must enable the explanation capability and set corresponding settings when you upload your custom model to a Vertex Model resource. These settings are referred to as the explanation metadata, which consists of:
parameters: This is the specification for the explainability algorithm to use for explanations on your model. You can choose between:
Shapley - Note, not recommended for image data -- can be very long running
XRAI
Integrated Gradients
metadata: This is the specification for how the algoithm is applied on your custom model.
Explanation Parameters
Let's first dive deeper into the settings for the explainability algorithm.
Shapley
Assigns credit for the outcome to each feature, and considers different permutations of the features. This method provides a sampling approximation of exact Shapley values.
Use Cases:
- Classification and regression on tabular data.
Parameters:
path_count: This is the number of paths over the features that will be processed by the algorithm. An exact computation of the Shapley values requires M! paths, where M is the number of features; for the 13 features of this Boston Housing dataset that would already be 13! paths.
For any non-trivial number of features, this is too computationally expensive. You can reduce the number of paths over the features to M * path_count.
Integrated Gradients
A gradients-based method to efficiently compute feature attributions with the same axiomatic properties as the Shapley value.
Use Cases:
- Classification and regression on tabular data.
- Classification on image data.
Parameters:
step_count: This is the number of steps to approximate the remaining sum. The more steps, the more accurate the integral approximation. The general rule of thumb is 50 steps, but as you increase so does the compute time.
XRAI
Based on the integrated gradients method, XRAI assesses overlapping regions of the image to create a saliency map, which highlights relevant regions of the image rather than pixels.
Use Cases:
Classification on image data.
Parameters:
step_count: This is the number of steps to approximate the remaining sum. The more steps, the more accurate the integral approximation. The general rule of thumb is 50 steps, but as you increase so does the compute time.
In the next code cell, set the variable XAI to the explainability algorithm you will use on your custom model.
End of explanation
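# --- Illustrative aside: why exact Shapley is infeasible even for this small tabular model. With
# --- M = 13 features, an exact computation needs 13! feature orderings, while the sampled
# --- approximation configured above processes only M * path_count paths.
import math
print("exact orderings for 13 features:", math.factorial(13))  # 6227020800
print("sampled paths with path_count=10:", 13 * 10)            # 130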
from google.cloud.aiplatform.explain.metadata.tf.v2 import \
saved_model_metadata_builder
builder = saved_model_metadata_builder.SavedModelMetadataBuilder(MODEL_DIR)
metadata = builder.get_metadata_protobuf()
print(metadata)
Explanation: Explanation Metadata
Let's first dive deeper into the explanation metadata, which consists of:
outputs: A scalar value in the output to attribute -- what to explain.
inputs: The features for attribution -- how they contributed to the output.
You can either customize your metadata -- what to explain and what to attribute, or automatically generate the metadata using the method get_metadata_protobuf(). This method will construct metadata for explaining all outputs and attributing all inputs.
End of explanation
model = aip.Model.upload(
display_name="boston_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
explanation_parameters=parameters,
explanation_metadata=metadata,
sync=False,
)
model.wait()
Explanation: Upload the model
Next, upload your model to a Model resource using Model.upload() method, with the following parameters:
display_name: The human readable name for the Model resource.
artifact: The Cloud Storage location of the trained model artifacts.
serving_container_image_uri: The serving container image.
sync: Whether to execute the upload asynchronously or synchronously.
explanation_parameters: Parameters to configure explaining for Model's predictions.
explanation_metadata: Metadata describing the Model's input and output for explanation.
If the upload() method is run asynchronously, you can subsequently block until completion with the wait() method.
End of explanation
DEPLOYED_NAME = "boston-" + TIMESTAMP
TRAFFIC_SPLIT = {"0": 100}
MIN_NODES = 1
MAX_NODES = 1
if DEPLOY_GPU:
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=DEPLOY_NGPU,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
else:
endpoint = model.deploy(
deployed_model_display_name=DEPLOYED_NAME,
traffic_split=TRAFFIC_SPLIT,
machine_type=DEPLOY_COMPUTE,
accelerator_type=DEPLOY_GPU,
accelerator_count=0,
min_replica_count=MIN_NODES,
max_replica_count=MAX_NODES,
)
Explanation: Deploy the model
Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method, with the following parameters:
deployed_model_display_name: A human readable name for the deployed model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model to the deployed endpoint. The percents must add up to 100.
machine_type: The type of machine to use for serving.
accelerator_type: The hardware accelerator type.
accelerator_count: The number of accelerators to attach to a worker replica.
min_replica_count: The number of compute instances to initially provision.
max_replica_count: The maximum number of compute instances to scale to. In this tutorial, only one instance is provisioned.
End of explanation
test_item = x_test[0]
test_label = y_test[0]
print(test_item.shape)
Explanation: Get test item
You will use an example out of the test (holdout) portion of the dataset as a test item.
End of explanation
instances_list = [test_item.tolist()]
prediction = endpoint.explain(instances_list)
print(prediction)
Explanation: Make the prediction with explanation
Now that your Model resource is deployed to an Endpoint resource, one can do online explanations by sending prediction requests to the Endpoint resource.
Request
The format of each instance is:
[feature_list]
Since the explain() method can take multiple items (instances), send your single test item as a list of one test item.
Response
The response from the explain() call is a Python dictionary with the following entries:
ids: The internal assigned unique identifiers for each prediction request.
predictions: The prediction per instance.
deployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions.
explanations: The feature attributions
End of explanation
value = prediction[0][0][0]
print("Predicted Value:", value)
Explanation: Understanding the explanations response
First, you will look at what your model predicted and compare it to the actual value.
End of explanation
from tabulate import tabulate
feature_names = [
"crim",
"zn",
"indus",
"chas",
"nox",
"rm",
"age",
"dis",
"rad",
"tax",
"ptratio",
"b",
"lstat",
]
attributions = (
prediction.explanations[0].attributions[0].feature_attributions[serving_input]
)
rows = []
for i, val in enumerate(feature_names):
rows.append([val, test_item[i], attributions[i]])
print(tabulate(rows, headers=["Feature name", "Feature value", "Attribution value"]))
Explanation: Examine feature attributions
Next you will look at the feature attributions for this particular example. Positive attribution values mean a particular feature pushed your model prediction up by that amount, and vice versa for negative attribution values.
End of explanation
# Prepare 10 test examples to your model for prediction
instances = []
for i in range(10):
instances.append(x_test[i].tolist())
response = endpoint.explain(instances)
Explanation: Check your explanations and baselines
To better make sense of the feature attributions you're getting, you should compare them with your model's baseline. In most cases, the sum of your attribution values + the baseline should be very close to your model's predicted value for each input. Also note that for regression models, the baseline_score returned from AI Explanations will be the same for each example sent to your model. For classification models, each class will have its own baseline.
In this section you'll send 10 test examples to your model for prediction in order to compare the feature attributions with the baseline. Then you'll run each test example's attributions through a sanity check in the sanity_check_explanations method.
Get explanations
End of explanation
import numpy as np
def sanity_check_explanations(
explanation, prediction, mean_tgt_value=None, variance_tgt_value=None
):
passed_test = 0
total_test = 1
# `attributions` is a dict where keys are the feature names
# and values are the feature attributions for each feature
baseline_score = explanation.attributions[0].baseline_output_value
print("baseline:", baseline_score)
# Sanity check 1
# The prediction at the input is equal to that at the baseline.
# Please use a different baseline. Some suggestions are: random input, training
# set mean.
if abs(prediction - baseline_score) <= 0.05:
print("Warning: example score and baseline score are too close.")
print("You might not get attributions.")
else:
passed_test += 1
print("Sanity Check 1: Passed")
print(passed_test, " out of ", total_test, " sanity checks passed.")
i = 0
for explanation in response.explanations:
try:
prediction = np.max(response.predictions[i]["scores"])
except TypeError:
prediction = np.max(response.predictions[i])
sanity_check_explanations(explanation, prediction)
i += 1
Explanation: Sanity check
In the function below you perform a sanity check on the explanations.
End of explanation
endpoint.undeploy_all()
Explanation: Undeploy the model
When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
End of explanation
delete_all = True
if delete_all:
# Delete the dataset using the Vertex dataset object
try:
if "dataset" in globals():
dataset.delete()
except Exception as e:
print(e)
# Delete the model using the Vertex model object
try:
if "model" in globals():
model.delete()
except Exception as e:
print(e)
# Delete the endpoint using the Vertex endpoint object
try:
if "endpoint" in globals():
endpoint.delete()
except Exception as e:
print(e)
# Delete the AutoML or Pipeline training job
try:
if "dag" in globals():
dag.delete()
except Exception as e:
print(e)
# Delete the custom training job
try:
if "job" in globals():
job.delete()
except Exception as e:
print(e)
# Delete the batch prediction job using the Vertex batch prediction object
try:
if "batch_predict_job" in globals():
batch_predict_job.delete()
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex hyperparameter tuning object
try:
if "hpt_job" in globals():
hpt_job.delete()
except Exception as e:
print(e)
if "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
AutoML Training Job
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation
<END_TASK> |
15,513 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Income Inequality between high earners and low earners
A critique of http
Step1: Getting the data
Before going into the purely visual aspects and how effective they are at conveying a story, I want to understand what data we are dealing with. At the bottom of the graph, there is a bit.ly URL that points to a Google Drive document. Adding export?format=xlsx will allow us to download this document as an Excel spreadsheet, which can then be sliced and diced easily with the pandas analytics module.
Step2: First issue with the data: right away we can see the wide range of dates. Let's look at the date distribution. We probably would want to use only 2010 if it represents enough data. We will make a note of <b>39.99</b> as the average Gini coefficient over all those years.
Step3: We will get just the data for 2009. Not only is it recent, but it provides plenty of data points to represent at once. This will also address the other issue with the data
Step4: This is already way easier to compare than the original infographic. Perhaps not as snazzy, but at least it gives us a start in trying to understand the data. But it is just that, a start. One angle would be to investigate how much above average the Gini for the US is. But I would also want to have the measures, including the average from the same year. A quick comparison of the two distributions (2009 vs all the data) shows how sampling on 2009 skews toward a higher Gini.
Step5: Comparing with GDP, population, gender inequality, even subjective "satisfaction indexes" and the like would be much more interesting. To tell a real story, we need to show some correlation, and provide some narrative and/or visualization to explain Gini. At the end of the day, perhaps the real story is that Gini is not a great universal indicator.
Colors
Where the graph at http | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
sns.set(palette = sns.dark_palette("skyblue", 8, reverse=True))
Explanation: Income Inequality between high earners and low earners
A critique of http://www.informationisbeautiful.net/visualizations/what-are-wallst-protestors-angry-about/
End of explanation
!wget 'https://docs.google.com/spreadsheets/d/1N_Hc-xKr7DQc8bZAvLROGWr5Cr-A6MfGnH91fFW3ZwA/export?format=xlsx&id=1N_Hc-xKr7DQc8bZAvLROGWr5Cr-A6MfGnH91fFW3ZwA' -O wallstreet.xlsx
df = pd.read_excel('wallstreet.xlsx', skiprows=1, index_col = 'Country')
df.describe()
Explanation: Getting the data
Before going into the purely visual aspects and how effective they are at conveying a story, I want to understand what data we are dealing with. At the bottom of the graph, there is a bit.ly URL that points to a Google Drive document. Adding export?format=xlsx will allow us to download this document as an Excel spreadsheet, which can then be sliced and diced easily with the pandas analytics module.
End of explanation
df['Year'].hist(bins=22) # 22 bins so I get every year as a distinct sum
Explanation: First issue with the data: right away we can see the wide range of dates. Let's look at the date distribution. We probably would want to use only 2010 if it represents enough data. We will make a note of <b>39.99</b> as the average Gini coefficient over all those years.
End of explanation
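# Quick sanity check of the 39.99 figure quoted above: the mean Gini over every year and country.
print(df['Gini'].mean())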
gini_df = df[(df.Year==2009)|(df.index=='United States')]['Gini'] # Only 2009, and choose only the gini columns (and the index, country)
gini_df
current_ax = gini_df.plot(kind='barh', color=sns.color_palette()[0])
current_ax.set_title('Gini index (%) in 2009')
current_ax.vlines(39.99, 0, len(gini_df), color=sns.color_palette()[2])
Explanation: We will get just the data for 2009. Not only is it recent, but it also provides plenty of data points to represent at once. This will also address the other issue with the data: in the raw form, it is too numerous and will overload the reader if presented as is. We will also load the US data, since it is supposed to tell the story of <b>'occupy wallstreet'</b>. If we are missing further critical data, we can always add a specific data point later, as we are keeping the original data frame untouched.
End of explanation
ax = df['Gini'].plot(kind='kde')
gini_df.plot(kind='kde', ax=ax) #overlay 2009 vs all years/countries
Explanation: This is already way easier to compare than the original infographic. Perhaps not as snazzy, but at least it gives us a start in trying to understand the data. But it is just that, a start. One angle would be to investigate how much above average is the Gini for the US. But I would also want to have the measures, including the average from the same year. A quick comparison of the two distributions (2009 vs all the data) shows how sampling on 2009 skews toward a higher Gini.
End of explanation
current_ax = gini_df.plot(kind='barh', color=sns.color_palette()[0])
current_ax.patches[list(gini_df.index).index("United States")].set_facecolor('#cc5555')
current_ax.set_title('Gini index (%) in 2009')
current_ax.vlines(39.99, 0, len(gini_df), color=sns.color_palette()[2])
current_ax.annotate('Average for\n1989-2010',
(40, 2),
xytext=(20, 10),
textcoords='offset points',
arrowprops=dict(arrowstyle='-|>'))
Explanation: Comparing with GDP, population, gender inequality, even subjective "satisfaction indexes" and the like would be much more interesting. To tell a real story, we need to show some correlation, and provide some narrative and/or visualization to explain Gini. At the end of the day, perhaps the real story is that Gini is not a great universal indicator.
Colors
Where the graph at http://www.informationisbeautiful.net/visualizations/what-are-wallst-protestors-angry-about/ used a very gradual change in hue based on the value (redundant, since the width and the number already show this), the change is so subtle that it doesn't reveal any significant difference between two consecutive rows.
A better use of color is to highlight our focus, or for reference lines. With that in mind, let's enhance our bar plot with judicious use of color for making it quicker to spot the US data.
End of explanation
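# A small, runnable follow-up to the suggestion above, using only columns already in this
# spreadsheet (GDP, population or satisfaction indexes would have to be joined in from another
# source before a similar check could be made against them).
print(df[['Year', 'Gini']].corr())
df.plot(kind='scatter', x='Year', y='Gini', title='Gini vs. measurement year')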
<END_TASK> |
15,514 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Machine Learning Engineer Nanodegree
Introduction and Foundations
Project
Step1: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship
Step3: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think
Step5: Tip
Step6: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint
Step7: Answer
Step9: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction
Step10: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint
Step11: Answer
Step13: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction
Step14: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint
Step15: Answer
Step18: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint
Step19: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint | <ASSISTANT_TASK:>
Python Code:
# Import libraries necessary for this project
import numpy as np
import pandas as pd
from IPython.display import display # Allows the use of display() for DataFrames
# Import supplementary visualizations code visuals.py
import visuals as vs
# Pretty display for notebooks
%matplotlib inline
# Load the dataset
in_file = 'titanic_data.csv'
full_data = pd.read_csv(in_file)
# Print the first few entries of the RMS Titanic data
display(full_data.head())
Explanation: Machine Learning Engineer Nanodegree
Introduction and Foundations
Project: Titanic Survival Exploration
In 1912, the ship RMS Titanic struck an iceberg on its maiden voyage and sank, resulting in the deaths of most of its passengers and crew. In this introductory project, we will explore a subset of the RMS Titanic passenger manifest to determine which features best predict whether someone survived or did not survive. To complete this project, you will need to implement several conditional predictions and answer the questions below. Your project submission will be evaluated based on the completion of the code and your responses to the questions.
Tip: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook.
Getting Started
To begin working with the RMS Titanic passenger data, we'll first need to import the functionality we need, and load our data into a pandas DataFrame.
Run the code cell below to load our data and display the first few entries (passengers) for examination using the .head() function.
Tip: You can run a code cell by clicking on the cell and using the keyboard shortcut Shift + Enter or Shift + Return. Alternatively, a code cell can be executed using the Play button in the hotbar after selecting it. Markdown cells (text cells like this one) can be edited by double-clicking, and saved using these same shortcuts. Markdown allows you to write easy-to-read plain text that can be converted to HTML.
End of explanation
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis = 1)
# Show the new dataset with 'Survived' removed
display(data.head())
Explanation: From a sample of the RMS Titanic data, we can see the various features present for each passenger on the ship:
- Survived: Outcome of survival (0 = No; 1 = Yes)
- Pclass: Socio-economic class (1 = Upper class; 2 = Middle class; 3 = Lower class)
- Name: Name of passenger
- Sex: Sex of the passenger
- Age: Age of the passenger (Some entries contain NaN)
- SibSp: Number of siblings and spouses of the passenger aboard
- Parch: Number of parents and children of the passenger aboard
- Ticket: Ticket number of the passenger
- Fare: Fare paid by the passenger
- Cabin Cabin number of the passenger (Some entries contain NaN)
- Embarked: Port of embarkation of the passenger (C = Cherbourg; Q = Queenstown; S = Southampton)
Since we're interested in the outcome of survival for each passenger or crew member, we can remove the Survived feature from this dataset and store it as its own separate variable outcomes. We will use these outcomes as our prediction targets.
Run the code cell below to remove Survived as a feature of the dataset and store it in outcomes.
End of explanation
def accuracy_score(truth, pred):
    """Returns accuracy score for input truth and predictions."""
# Ensure that the number of predictions matches number of outcomes
if len(truth) == len(pred):
# Calculate and return the accuracy as a percent
return "Predictions have an accuracy of {:.2f}%.".format((truth == pred).mean()*100)
else:
return "Number of predictions does not match number of outcomes!"
# Test the 'accuracy_score' function
predictions = pd.Series(np.ones(5, dtype = int))
print accuracy_score(outcomes[:5], predictions)
Explanation: The very same sample of the RMS Titanic data now shows the Survived feature removed from the DataFrame. Note that data (the passenger data) and outcomes (the outcomes of survival) are now paired. That means for any passenger data.loc[i], they have the survival outcome outcomes[i].
To measure the performance of our predictions, we need a metric to score our predictions against the true outcomes of survival. Since we are interested in how accurate our predictions are, we will calculate the proportion of passengers where our prediction of their survival is correct. Run the code cell below to create our accuracy_score function and test a prediction on the first five passengers.
Think: Out of the first five passengers, if we predict that all of them survived, what would you expect the accuracy of our predictions to be?
End of explanation
def predictions_0(data):
    """Model with no features. Always predicts a passenger did not survive."""
predictions = []
for _, passenger in data.iterrows():
# Predict the survival of 'passenger'
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_0(data)
Explanation: Tip: If you save an iPython Notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the code blocks from your previous session to reestablish variables and functions before picking up where you last left off.
Making Predictions
If we were asked to make a prediction about any passenger aboard the RMS Titanic whom we knew nothing about, then the best prediction we could make would be that they did not survive. This is because we can assume that a majority of the passengers (more than 50%) did not survive the ship sinking.
The predictions_0 function below will always predict that a passenger did not survive.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 1
Using the RMS Titanic data, how accurate would a prediction be that none of the passengers survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
vs.survival_stats(data, outcomes, 'Sex')
Explanation: Answer: Predictions have an accuracy of 61.62%.
Let's take a look at whether the feature Sex has any indication of survival rates among passengers using the survival_stats function. This function is defined in the visuals.py Python script included with this project. The first two parameters passed to the function are the RMS Titanic data and passenger survival outcomes, respectively. The third parameter indicates which feature we want to plot survival statistics across.
Run the code cell below to plot the survival outcomes of passengers based on their sex.
End of explanation
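# A quick numeric companion to the plot above: survival rate by sex, computed from the same data.
print full_data.groupby('Sex')['Survived'].mean()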
def predictions_1(data):
    """Model with one feature:
        - Predict a passenger survived if they are female.
    """
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == "female":
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_1(data)
Explanation: Examining the survival statistics, a large majority of males did not survive the ship sinking. However, a majority of females did survive the ship sinking. Let's build on our previous prediction: If a passenger was female, then we will predict that they survived. Otherwise, we will predict the passenger did not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can access the values of each feature for a passenger like a dictionary. For example, passenger['Sex'] is the sex of the passenger.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 2
How accurate would a prediction be that all female passengers survived and the remaining passengers did not survive?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'male'"])
Explanation: Answer: Predictions have an accuracy of 78.68%.
Using just the Sex feature for each passenger, we are able to increase the accuracy of our predictions by a significant margin. Now, let's consider using an additional feature to see if we can further improve our predictions. For example, consider all of the male passengers aboard the RMS Titanic: Can we find a subset of those passengers that had a higher rate of survival? Let's start by looking at the Age of each male, by again using the survival_stats function. This time, we'll use a fourth parameter to filter out the data so that only passengers with the Sex 'male' will be included.
Run the code cell below to plot the survival outcomes of male passengers based on their age.
End of explanation
def predictions_2(data):
    """Model with two features:
        - Predict a passenger survived if they are female.
        - Predict a passenger survived if they are male and younger than 10.
    """
predictions = []
for _, passenger in data.iterrows():
# Remove the 'pass' statement below
# and write your prediction conditions here
if passenger['Sex'] == "female" or passenger['Age'] < 10:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_2(data)
Explanation: Examining the survival statistics, the majority of males younger than 10 survived the ship sinking, whereas most males age 10 or older did not survive the ship sinking. Let's continue to build on our previous prediction: If a passenger was female, then we will predict they survive. If a passenger was male and younger than 10, then we will also predict they survive. Otherwise, we will predict they do not survive.
Fill in the missing code below so that the function will make this prediction.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_1.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 3
How accurate would a prediction be that all female passengers and all male passengers younger than 10 survived?
Hint: Run the code cell below to see the accuracy of this prediction.
End of explanation
vs.survival_stats(data, outcomes, 'Age', ["Sex == 'female'","SibSp == 3"])
Explanation: Answer: Predictions have an accuracy of 79.35%.
Adding the feature Age as a condition in conjunction with Sex improves the accuracy by a small margin more than with simply using the feature Sex alone. Now it's your turn: Find a series of features and conditions to split the data on to obtain an outcome prediction accuracy of at least 80%. This may require multiple features and multiple levels of conditional statements to succeed. You can use the same feature multiple times with different conditions.
Pclass, Sex, Age, SibSp, and Parch are some suggested features to try.
Use the survival_stats function below to to examine various survival statistics.
Hint: To use multiple filter conditions, put each condition in the list passed as the last argument. Example: ["Sex == 'male'", "Age < 18"]
End of explanation
def predictions_3(data):
    """Model with multiple features. Makes a prediction with an accuracy of at least 80%."""
predictions = []
for _, passenger in data.iterrows():
        # Model with features of Sex, Age, Pclass, and SibSp.
if passenger['Sex'] == "female":
if passenger['Pclass'] == 3 and passenger['Age']>=40 and passenger['Age']<60:
predictions.append(0)
elif passenger['SibSp'] == 3 and passenger['Age'] <= 10:
predictions.append(0)
else:
predictions.append(1)
else:
if passenger['Age'] < 10:
predictions.append(1)
elif passenger['Pclass'] == 1 and passenger['Age']>=20 and passenger['Age'] < 40:
predictions.append(1)
else:
predictions.append(0)
# Return our predictions
return pd.Series(predictions)
# Make the predictions
predictions = predictions_3(data)
Explanation: After exploring the survival statistics visualization, fill in the missing code below so that the function will make your prediction.
Make sure to keep track of the various features and conditions you tried before arriving at your final prediction model.
Hint: You can start your implementation of this function using the prediction code you wrote earlier from predictions_2.
End of explanation
print accuracy_score(outcomes, predictions)
Explanation: Question 4
Describe the steps you took to implement the final prediction model so that it got an accuracy of at least 80%. What features did you look at? Were certain features more informative than others? Which conditions did you use to split the survival outcomes in the data? How accurate are your predictions?
Hint: Run the code cell below to see the accuracy of your predictions.
End of explanation
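# Optional sketch, not part of the original project: the hand-written rules above are in effect
# a small decision tree, so a library tree makes a useful cross-check. The preprocessing choices
# below (median age fill, 0/1 sex encoding, feature subset) are illustrative assumptions.
from sklearn.tree import DecisionTreeClassifier
features = data[['Pclass', 'Age', 'SibSp', 'Parch']].copy()
features['Age'] = features['Age'].fillna(features['Age'].median()) # fill missing ages
features['Sex'] = (data['Sex'] == 'female').astype(int)            # encode sex as 0/1
clf = DecisionTreeClassifier(max_depth=3, random_state=0)          # shallow tree, comparable to the manual rules
clf.fit(features, outcomes)
print accuracy_score(outcomes, pd.Series(clf.predict(features)))   # in-sample accuracy only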
<END_TASK> |
15,515 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<a href="https
Step1: Template matching
Let's first download an image and a template to search for. The template is a smaller part of the original image.
Step2: Both the image used for processing and the template are converted to grayscale images to boost efficiency.
Step3: Change the code above to plot grayscale images.
The OpenCV package has a function for template matching, so let's call it and display the result. The matchTemplate function supports six different formulas for scoring the match. With the TM_CCOEFF_NORMED method it calculates a normalized correlation coefficient in the range (-1, 1), where a perfect match gives the value 1.
Step4: Change the code above and try other methods, TM_CCORR_NORMED, TM_SQDIFF_NORMED, for instance.
Image transformation
If the pattern is rotated or scaled, the template might no longer match the image. This issue can be fixed by using a homography matrix. For more details see
Step5: Let's try to find the template on the rotated image.
Step7: Let's transform the image back to the perpendicular plane.
Step8: Recognition of ArUco markers
"An ArUco marker is a synthetic square marker composed by a wide black border and an inner binary matrix which determines its identifier (id). The black border facilitates its fast detection in the image and the binary codification allows its identification and the application of error detection and correction techniques. The marker size determines the size of the internal matrix. For instance a marker size of 4x4 is composed by 16 bits." (from OpenCV documentation)
There is a contrib package in OpenCV to detect ArUco markers called aruco.
Let's find six ArUco markers on a simple image.
Step9: Calibration
Low-cost cameras might have significant distortions (either radial or tangential). Therefore, we have to calibrate cameras before using in deformation and movement analysis.
Radial distortion
$$ x' = x (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) $$
$$ y' = y (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) $$
Tangential distortion
$$ x' = x + (2 p_1 x y + p_2 (r^2 + 2 x^2)) $$
$$ y' = y + (p_1 (r^2+2 y^2) + 2 p_2 x y) $$
Camera matrix
<table>
<tr><td>f<sub>x</sub></td><td>0</td><td>c<sub>x</sub></td></tr>
<tr><td>0</td><td>f<sub>y</sub></td><td>c<sub>y</sub></td></tr>
<tr><td>0</td><td>0</td><td>1</td></tr></table>
Distortion parameters are ($ k_1, k_2, k_3, p_1, p_2 $). Camera matrix contains focal length ($ f_x, f_y $) and optical centers ($ c_x, c_y $).
For the calibration we need a chessboard like figure and more than ten photos from different directions.
Let's download the images for calibration.
Step10: The first 6 images for calibration
Step11: Using the ArUco calibration, let's find the camera matrix and the associated radial and tangential distortion parameters.
Step12: Plot undistorted image and the one corrected by calibration parameters.
Step13: Complex example
We have a video of a moving object with an ArUco marker. Let's process the video frame by frame and make a plot of movements. During the process images are corrected by the calibration data.
Click here to watch video. | <ASSISTANT_TASK:>
Python Code:
import glob # to extend file name pattern to list
import cv2 # OpenCV for image processing
from cv2 import aruco # to find ArUco markers
import numpy as np # for matrices
import matplotlib.pyplot as plt # to show images
Explanation: <a href="https://colab.research.google.com/github/OSGeoLabBp/tutorials/blob/master/english/data_processing/lessons/img_def.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Movement and deformation analysis from images
Principles
Images/videos are made by a stable camera, to put it another way, the camera does not move during observations
Calibrated camera/system is necessary
Image resolution is enhanced by geodetic telescope
Methods
Template matching
Pattern recognition
Template matching characteristics
Pros
There is always a match
Simple algorithm
Special marker is not necessary
Cons
The chance of false match is higher
No or minimal rotation
No or minimal scale change
Pattern recognition charasteristics
Pros
Marker can rotate
Marker scale can change
Normal of the marker can be estimated
Cons
Special marker have to be fit to target
More sensitive for light conditions
First off, let's import the necessary Python packages.
End of explanation
!wget -q -O sample_data/monalisa.jpg https://raw.githubusercontent.com/OSGeoLabBp/tutorials/master/english/img_processing/code/monalisa.jpg
!wget -q -O sample_data/mona_temp4.png https://raw.githubusercontent.com/OSGeoLabBp/tutorials/master/english/img_processing/code/mona_temp4.png
Explanation: Template matching
Let's first download an image and a template to search for. The template is a smaller part of the original image.
End of explanation
img = cv2.imread('sample_data/monalisa.jpg') # load image
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # convert image to grayscale
templ = cv2.imread('sample_data/mona_temp4.png') # load template
templ_gray = cv2.cvtColor(templ, cv2.COLOR_BGR2GRAY) # convert template to grayscale
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5)) # show image and template
ax1.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
ax1.set_title('image to scan')
ax2.imshow(cv2.cvtColor(templ, cv2.COLOR_BGR2RGB)) # BGR vs. RGB
ax2.set_title('template to find')
ax2.set_xlim(ax1.get_xlim()) # set same scale
ax2.set_ylim(ax1.get_ylim())
print(f'image sizes: {img_gray.shape} template sizes: {templ_gray.shape}')
Explanation: Both the image used for processing and the template are converted to grayscale images to boost efficiency.
End of explanation
result = cv2.matchTemplate(img_gray, templ_gray, cv2.TM_CCOEFF_NORMED)
val, _, max = cv2.minMaxLoc(result)[1:4] # get position of best match
fr = np.array([max,
(max[0]+templ.shape[1], max[1]),
(max[0]+templ.shape[1], max[1]+templ.shape[0]),
(max[0], max[1]+templ.shape[0]),
max])
result_uint = ((result - np.min(result)) / (np.max(result) - np.min(result)) * 256).astype('uint8')
result_img = cv2.cvtColor(result_uint, cv2.COLOR_GRAY2BGR)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
ax1.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
ax1.set_title('Match on original image')
ax1.plot(fr[:,0], fr[:,1], 'r')
ax1.plot([max[0]],[max[1]], 'r*')
ax2.imshow(result_img)
ax2.plot(fr[:,0], fr[:,1], 'r')
ax2.plot([max[0]],[max[1]], 'r*')
ax2.set_title('Normalized coefficients')
ax2.set_xlim(ax1.get_xlim()) # set same scale
ax2.set_ylim(ax1.get_ylim())
print(f'best match at {max} value {val:.6f}')
Explanation: Change the code above to plot grayscale images.
The OpenCV package has a function for template matching, so let's call it and display the result. The matchTemplate function supports six different formulas for scoring the match. With the TM_CCOEFF_NORMED method it calculates a normalized correlation coefficient in the range (-1, 1), where a perfect match gives the value 1.
End of explanation
!wget -q -O sample_data/monalisa_tilt.jpg https://raw.githubusercontent.com/OSGeoLabBp/tutorials/master/english/img_processing/code/monalisa_tilt.jpg
Explanation: Change the code above and try other methods, TM_CCORR_NORMED, TM_SQDIFF_NORMED, for instance.
Image transformation
If the pattern is rotated or scaled, the template might no longer match the image. This issue can be fixed by using a homography matrix. For more details see: source
Let's download another image with a rotated Mona Lisa.
End of explanation
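# A short sketch for the exercise above: run the three normalised matching methods on the original
# grayscale image and template. Note that for TM_SQDIFF_NORMED the best match is the minimum.
for name in ('TM_CCOEFF_NORMED', 'TM_CCORR_NORMED', 'TM_SQDIFF_NORMED'):
    res = cv2.matchTemplate(img_gray, templ_gray, getattr(cv2, name))
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)
    best_val, best_loc = (min_val, min_loc) if name == 'TM_SQDIFF_NORMED' else (max_val, max_loc)
    print(f'{name}: best match at {best_loc} value {best_val:.4f}')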
img = cv2.imread('sample_data/monalisa_tilt.jpg', cv2.IMREAD_GRAYSCALE)
result = cv2.matchTemplate(img, templ_gray, cv2.TM_CCOEFF_NORMED)
val, _, max = cv2.minMaxLoc(result)[1:4]
fr = np.array([max,
(max[0]+templ.shape[1], max[1]),
(max[0]+templ.shape[1], max[1]+templ.shape[0]),
(max[0], max[1]+templ.shape[0]),
max])
plt.imshow(img, cmap="gray")
plt.plot(fr[:,0], fr[:,1], 'r')
plt.plot([max[0]],[max[1]], 'r*')
print(f'best match at {max} value {val:.6f} BUT FALSE!')
Explanation: Let's try to find the template on the rotated image.
End of explanation
def project_img(image, a_src, a_dst):
    """Calculate the projective transformation and warp the input image."""
    # get parameters of transformation
    projective_matrix = cv2.getPerspectiveTransform(a_src, a_dst)
    # transform image (use the function argument rather than the global img)
    transformed = cv2.warpPerspective(image, projective_matrix, image.shape)
# cut destination area
transformed = transformed[0:int(np.max(a_dst[:,1])),0:int(np.max(a_dst[:,0]))]
return transformed
# frame on warped image
src = [(240, 44), (700, 116), (703, 815), (243, 903)]
# frame on original
s = img_gray.shape
dst = [(0, 0), (s[1], 0), (s[1], s[0]), (0,s[0])]
a_src = np.float32(src)
a_dst = np.float32(dst)
# image transformation
img_dst = project_img(img, a_src, a_dst)
# template match
result = cv2.matchTemplate(img_dst, templ_gray, cv2.TM_CCOEFF_NORMED)
val, _, max = cv2.minMaxLoc(result)[1:4]
# frame around template on transformed image
fr = np.array([max,
(max[0]+templ.shape[1], max[1]),
(max[0]+templ.shape[1], max[1]+templ.shape[0]),
(max[0], max[1]+templ.shape[0]),
max])
fig, ax = plt.subplots(1,2, figsize=(13,8))
ax[0].imshow(img, cmap="gray");
ax[0].plot(a_src[:,0], a_src[:,1], 'r--')
ax[0].set_title('Original Image')
ax[1].imshow(img_dst, cmap="gray")
ax[1].plot(a_dst[:,0], a_dst[:,1], 'r--')
ax[1].set_title('Warped Image')
ax[1].plot(fr[:,0], fr[:,1], 'r')
ax[1].plot([max[0]],[max[1]], 'r*')
print(f'best match at {max} value {val:.2f}')
Explanation: Let's transform the image back to the perpendicular plane.
End of explanation
!wget -q -O sample_data/markers.png https://raw.githubusercontent.com/OSGeoLabBp/tutorials/master/english/img_processing/code/markers.png
img = cv2.imread('sample_data/markers.png')
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
aruco_dict = aruco.Dictionary_get(cv2.aruco.DICT_4X4_100)
params = aruco.DetectorParameters_create()
corners, ids, _ = aruco.detectMarkers(img_gray, aruco_dict, parameters=params)
x = np.zeros(ids.size)
y = np.zeros(ids.size)
img1 = img.copy()
for j in range(ids.size):
x[j] = int(round(np.average(corners[j][0][:, 0])))
y[j] = int(round(np.average(corners[j][0][:, 1])))
cv2.putText(img1, str(ids[j][0]), (int(x[j]+2), int(y[j])), cv2.FONT_HERSHEY_SIMPLEX, 1.5, (255, 0, 255), 3)
fig, ax = plt.subplots(1,2, figsize=(10,5))
ax[0].imshow(img)
ax[1].imshow(img1)
ax[1].plot(x, y, "ro")
print(list(zip(list(x), list(y))))
Explanation: Recognition of ArUco markers
"An ArUco marker is a synthetic square marker composed by a wide black border and an inner binary matrix which determines its identifier (id). The black border facilitates its fast detection in the image and the binary codification allows its identification and the application of error detection and correction techniques. The marker size determines the size of the internal matrix. For instance a marker size of 4x4 is composed by 16 bits." (from OpenCV documentation)
There is a contrib package in OpenCV to detect ArUco markers called aruco.
Let's find six ArUco markers on a simple image.
End of explanation
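# Optional check: OpenCV can also overlay the detections directly, which is a quick way to verify
# the result above (reuses img, corners and ids from the previous cell).
img_overlay = aruco.drawDetectedMarkers(img.copy(), corners, ids)
plt.figure()
plt.imshow(cv2.cvtColor(img_overlay, cv2.COLOR_BGR2RGB))
_ = plt.title('aruco.drawDetectedMarkers')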
!wget -q -O sample_data/cal.zip https://raw.githubusercontent.com/OSGeoLabBp/tutorials/master/english/img_processing/code/cal.zip
!unzip -q -o sample_data/cal.zip -d sample_data
width = 5 # Charuco board size
height = 7
board = cv2.aruco.CharucoBoard_create(width, height, .025, .0125, aruco_dict) # generate board in memory
img = board.draw((500, 700))
plt.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
_ = plt.title('Charuco board')
Explanation: Calibration
Low-cost cameras might have significant distortions (either radial or tangential). Therefore, we have to calibrate cameras before using in deformation and movement analysis.
Radial distortion
$$ x' = x (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) $$
$$ y' = y (1 + k_1 r^2 + k_2 r^4 + k_3 r^6) $$
Tangential distortion
$$ x' = x + (2 p_1 x y + p_2 (r^2 + 2 x^2)) $$
$$ y' = y + (p_1 (r^2+2 y^2) + 2 p_2 x y) $$
Camera matrix
<table>
<tr><td>f<sub>x</sub></td><td>0</td><td>c<sub>x</sub></td></tr>
<tr><td>0</td><td>f<sub>y</sub></td><td>c<sub>y</sub></td></tr>
<tr><td>0</td><td>0</td><td>1</td></tr></table>
Distortion parameters are ($ k_1, k_2, k_3, p_1, p_2 $). Camera matrix contains focal length ($ f_x, f_y $) and optical centers ($ c_x, c_y $).
For the calibration we need a chessboard like figure and more than ten photos from different directions.
Let's download the images for calibration.
End of explanation
fig, ax = plt.subplots(1, 6, figsize=(15, 2))
for i in range(6):
im = cv2.imread('sample_data/cal{:d}.jpg'.format(i+1))
ax[i].imshow(cv2.cvtColor(im, cv2.COLOR_BGR2RGB))
ax[i].set_title('cal{:d}.jpg'.format(i+1))
Explanation: The first 6 images for calibration:
End of explanation
allCorners = []
allIds = []
decimator = 0
for name in glob.glob("sample_data/cal*.jpg"):
frame = cv2.imread(name)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
ret, corners1, ids1 = cv2.aruco.interpolateCornersCharuco(corners, ids, gray, board)
allCorners.append(corners1)
allIds.append(ids1)
decimator += 1
ret, mtx, dist, rvecs, tvecs = cv2.aruco.calibrateCameraCharuco(allCorners, allIds, board, gray.shape, None, None)
print("Camera matrix [pixels]")
for i in range(mtx.shape[0]):
print(f'{mtx[i][0]:8.1f} {mtx[i][1]:8.1f} {mtx[i][2]:8.1f}')
print('Radial components (k1, k2, k3)')
print(30 * '-')
print(f'{dist[0][0]:10.5f} {dist[0][1]:10.5f} {dist[0][4]:10.5f}')
print(30 * '-')
print('Tangential components (p1, p2)')
print(f'{dist[0][2]:10.5f} {dist[0][3]:10.5f}')
Explanation: Using the ArUco calibration, let's find the camera matrix and the associated radial and tangential distortion parameters.
End of explanation
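# Illustrative sketch (not in the original notebook): push one ideal pixel through the radial and
# tangential model given by the formulas above, using the calibrated mtx and dist.
# OpenCV stores the distortion coefficients in the order (k1, k2, p1, p2, k3).
k1, k2, p1, p2, k3 = dist[0][:5]
fx, fy, cx, cy = mtx[0, 0], mtx[1, 1], mtx[0, 2], mtx[1, 2]
u, v = 100.0, 150.0                        # an arbitrary test pixel
xn, yn = (u - cx) / fx, (v - cy) / fy      # normalised image coordinates
r2 = xn * xn + yn * yn
radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
xd = xn * radial + 2 * p1 * xn * yn + p2 * (r2 + 2 * xn * xn)
yd = yn * radial + p1 * (r2 + 2 * yn * yn) + 2 * p2 * xn * yn
print(f'ideal pixel ({u:.0f}, {v:.0f}) -> distorted pixel ({fx * xd + cx:.1f}, {fy * yd + cy:.1f})')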
gray = cv2.imread('sample_data/cal1.jpg', cv2.IMREAD_GRAYSCALE)
fig, ax = plt.subplots(1, 2, figsize=(10,5))
ax[0].imshow(gray, cmap='gray')
ax[0].set_title('distorted image')
ax[1].imshow(cv2.undistort(gray, mtx, dist, None), cmap='gray')
_ = ax[1].set_title('undistorted image')
Explanation: Plot undistorted image and the one corrected by calibration parameters.
End of explanation
!wget -q -O sample_data/demo.mp4 https://raw.githubusercontent.com/OSGeoLabBp/tutorials/master/english/img_processing/code/demo.mp4
cap = cv2.VideoCapture('sample_data/demo.mp4')
frame = 0 # frame counter
xc = [] # for pixel coordinates of marker
yc = []
frames = []
while True:
ret, img = cap.read() # get next frame from video
if ret:
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) # convert image to grayscale
img_gray = cv2.undistort(gray, mtx, dist, None) # remove camera distortion using calibration
corners, ids, _ = aruco.detectMarkers(img_gray, aruco_dict, parameters=params) # find ArUco markers
        if ids is not None: # marker found?
            yc.append(img_gray.shape[0] - int(round(np.average(corners[0][0][:, 1])))) # flip y so up is positive (shape[0] is the image height)
frames.append(frame)
frame += 1 # frame count
else:
break # no more images
plt.plot(frames, yc)
plt.title('Vertical positions of ArUco marker from video frames')
plt.xlabel('frame count')
plt.grid()
_ = plt.ylabel('vertical position [pixel]')
Explanation: Complex example
We have a video of a moving object with an ArUco marker. Let's process the video frame by frame and make a plot of movements. During the process images are corrected by the calibration data.
Click here to watch video.
End of explanation
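# Illustrative closing step (pure assumption: suppose 1 pixel corresponds to 0.2 mm at the marker
# distance; in practice the scale comes from the known marker size or a measured object distance).
mm_per_pixel = 0.2
movement_mm = (np.array(yc) - yc[0]) * mm_per_pixel
print(f'Largest vertical movement relative to the first frame: {movement_mm.max():.1f} mm')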
<END_TASK> |
15,516 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Ocean
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Model Family
Is Required
Step7: 1.4. Basic Approximations
Is Required
Step8: 1.5. Prognostic Variables
Is Required
Step9: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required
Step10: 2.2. Eos Functional Temp
Is Required
Step11: 2.3. Eos Functional Salt
Is Required
Step12: 2.4. Eos Functional Depth
Is Required
Step13: 2.5. Ocean Freezing Point
Is Required
Step14: 2.6. Ocean Specific Heat
Is Required
Step15: 2.7. Ocean Reference Density
Is Required
Step16: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required
Step17: 3.2. Type
Is Required
Step18: 3.3. Ocean Smoothing
Is Required
Step19: 3.4. Source
Is Required
Step20: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required
Step21: 4.2. River Mouth
Is Required
Step22: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required
Step23: 5.2. Code Version
Is Required
Step24: 5.3. Code Languages
Is Required
Step25: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required
Step26: 6.2. Canonical Horizontal Resolution
Is Required
Step27: 6.3. Range Horizontal Resolution
Is Required
Step28: 6.4. Number Of Horizontal Gridpoints
Is Required
Step29: 6.5. Number Of Vertical Levels
Is Required
Step30: 6.6. Is Adaptive Grid
Is Required
Step31: 6.7. Thickness Level 1
Is Required
Step32: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required
Step33: 7.2. Global Mean Metrics Used
Is Required
Step34: 7.3. Regional Metrics Used
Is Required
Step35: 7.4. Trend Metrics Used
Is Required
Step36: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required
Step37: 8.2. Scheme
Is Required
Step38: 8.3. Consistency Properties
Is Required
Step39: 8.4. Corrected Conserved Prognostic Variables
Is Required
Step40: 8.5. Was Flux Correction Used
Is Required
Step41: 9. Grid
Ocean grid
9.1. Overview
Is Required
Step42: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required
Step43: 10.2. Partial Steps
Is Required
Step44: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required
Step45: 11.2. Staggering
Is Required
Step46: 11.3. Scheme
Is Required
Step47: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required
Step48: 12.2. Diurnal Cycle
Is Required
Step49: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required
Step50: 13.2. Time Step
Is Required
Step51: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required
Step52: 14.2. Scheme
Is Required
Step53: 14.3. Time Step
Is Required
Step54: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required
Step55: 15.2. Time Step
Is Required
Step56: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required
Step57: 17. Advection
Ocean advection
17.1. Overview
Is Required
Step58: 18. Advection --> Momentum
Properties of lateral momemtum advection scheme in ocean
18.1. Type
Is Required
Step59: 18.2. Scheme Name
Is Required
Step60: 18.3. ALE
Is Required
Step61: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required
Step62: 19.2. Flux Limiter
Is Required
Step63: 19.3. Effective Order
Is Required
Step64: 19.4. Name
Is Required
Step65: 19.5. Passive Tracers
Is Required
Step66: 19.6. Passive Tracers Advection
Is Required
Step67: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required
Step68: 20.2. Flux Limiter
Is Required
Step69: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required
Step70: 21.2. Scheme
Is Required
Step71: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required
Step72: 22.2. Order
Is Required
Step73: 22.3. Discretisation
Is Required
Step74: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean
23.1. Type
Is Required
Step75: 23.2. Constant Coefficient
Is Required
Step76: 23.3. Variable Coefficient
Is Required
Step77: 23.4. Coeff Background
Is Required
Step78: 23.5. Coeff Backscatter
Is Required
Step79: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required
Step80: 24.2. Submesoscale Mixing
Is Required
Step81: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required
Step82: 25.2. Order
Is Required
Step83: 25.3. Discretisation
Is Required
Step84: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required
Step85: 26.2. Constant Coefficient
Is Required
Step86: 26.3. Variable Coefficient
Is Required
Step87: 26.4. Coeff Background
Is Required
Step88: 26.5. Coeff Backscatter
Is Required
Step89: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required
Step90: 27.2. Constant Val
Is Required
Step91: 27.3. Flux Type
Is Required
Step92: 27.4. Added Diffusivity
Is Required
Step93: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required
Step94: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required
Step95: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
*Properties of boundary layer (BL) mixing on tracers in the ocean *
30.1. Type
Is Required
Step96: 30.2. Closure Order
Is Required
Step97: 30.3. Constant
Is Required
Step98: 30.4. Background
Is Required
Step99: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
*Properties of boundary layer (BL) mixing on momentum in the ocean *
31.1. Type
Is Required
Step100: 31.2. Closure Order
Is Required
Step101: 31.3. Constant
Is Required
Step102: 31.4. Background
Is Required
Step103: 32. Vertical Physics --> Interior Mixing --> Details
*Properties of interior mixing in the ocean *
32.1. Convection Type
Is Required
Step104: 32.2. Tide Induced Mixing
Is Required
Step105: 32.3. Double Diffusion
Is Required
Step106: 32.4. Shear Mixing
Is Required
Step107: 33. Vertical Physics --> Interior Mixing --> Tracers
*Properties of interior mixing on tracers in the ocean *
33.1. Type
Is Required
Step108: 33.2. Constant
Is Required
Step109: 33.3. Profile
Is Required
Step110: 33.4. Background
Is Required
Step111: 34. Vertical Physics --> Interior Mixing --> Momentum
*Properties of interior mixing on momentum in the ocean *
34.1. Type
Is Required
Step112: 34.2. Constant
Is Required
Step113: 34.3. Profile
Is Required
Step114: 34.4. Background
Is Required
Step115: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required
Step116: 35.2. Scheme
Is Required
Step117: 35.3. Embeded Seaice
Is Required
Step118: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required
Step119: 36.2. Type Of Bbl
Is Required
Step120: 36.3. Lateral Mixing Coef
Is Required
Step121: 36.4. Sill Overflow
Is Required
Step122: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required
Step123: 37.2. Surface Pressure
Is Required
Step124: 37.3. Momentum Flux Correction
Is Required
Step125: 37.4. Tracers Flux Correction
Is Required
Step126: 37.5. Wave Effects
Is Required
Step127: 37.6. River Runoff Budget
Is Required
Step128: 37.7. Geothermal Heating
Is Required
Step129: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required
Step130: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required
Step131: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required
Step132: 40.2. Ocean Colour
Is Required
Step133: 40.3. Extinction Depth
Is Required
Step134: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required
Step135: 41.2. From Sea Ice
Is Required
Step136: 41.3. Forced Mode Restoring
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'ncar', 'sandbox-2', 'ocean')
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: NCAR
Source ID: SANDBOX-2
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:22
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
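# Illustrative example only (a hypothetical answer, not an actual NCAR/SANDBOX-2 value):
# the 2.1 Eos Type property above would be filled in by replacing the TODO with one of the
# listed valid choices, e.g.
# DOC.set_value("TEOS 2010")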
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how isolated seas is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
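# Illustrative sketch only (hypothetical numbers, not taken from any documented model):
# a grid of 1442 x 1021 points, for example, would be entered as
# DOC.set_value(1442 * 1021)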
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
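# Illustrative sketch only (hypothetical value): a configuration with a
# 1 m thick surface level would be recorded as
# DOC.set_value(1.0)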
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process-oriented metrics, and the possible conflicts with parameterization-level tuning. In particular, describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
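# Illustrative sketch only: entries must be copied verbatim from the "Valid Choices"
# list above. For a multi-valued (1.N) property one might record, e.g.
# DOC.set_value("Salt")
# DOC.set_value("Volume of ocean")
# (this assumes repeated set_value calls accumulate values for 1.N properties;
#  check the notebook's general instructions for the exact convention)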
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
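# Illustrative sketch only: pick exactly one entry from the "Valid Choices" list
# above, copied verbatim, e.g.
# DOC.set_value("Z*-coordinate")
# A scheme that is not listed would be reported via the "Other: [Please specify]" entry.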
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different than active ? If so, describe.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e. Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
Properties of eddy diffusity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusity coeff type in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for tracers (i.e. is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Is the background interior mixing using a vertical profile for momentum (i.e. is NOT constant) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinctions depths for sunlight penetration scheme (if applicable).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmosphere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
<END_TASK> |
15,517 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: One nice feature of ipython notebooks is it's easy to make small changes to code and
then re-execute quickly, to see how things change. For example, printing the first 5 lines
of the labels dataframe (which is the default) isn't really ideal here, since there's a label
("functional needs repair") which doesn't appear in the first five lines. Type 20 in the
parentheses labels_df.head(), so it now reads labels_df.head(20), and press shift-enter to
rerun the code. See the difference?
Now take a quick look at the features, again by calling .head() (set up for you in the code box
below, or add your own code to the code box above). You can print as many or as few
rows as you like. Take a quick look at the data--approximately how many features are there?
Are they all numeric, or will you have to do work to transform non-numeric features into
numbers?
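A quick, hedged sketch of how you might answer those questions in a new code cell (the exact counts depend on the files you downloaded):
print( features_df.shape )                    # (number of rows, number of feature columns)
print( features_df.dtypes.value_counts() )    # numeric dtypes vs. object (string) columns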
Step2: Transforming string labels into integers
The machine learning algorithms downstream are not going to handle it well if the class labels
used for training are strings; instead, we'll want to use integers. The mapping that we'll use
is that "non functional" will be transformed to 0, "functional needs repair" will be 1, and
"functional" becomes 2.
There are a number of ways to do this; the framework below uses applymap() in pandas.
Here's
the documentation for applymap(); in the code below, you should fill in the function body for
label_map(y) so that if y is "functional", label_map returns 2; if y is "functional needs
repair" then it should return 1, and "non functional" is 0.
There's a print statement there to help you confirm that the label transformation is working
properly.
As an aside, you could also use apply() here if you like. The difference between apply()
and applymap() is that applymap() operates on a whole dataframe while apply() operates on a series
(or you can think of it as operating on one column of your dataframe). Since labels_df only has
one column (aside from the index column), either one will work here.
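For reference, a hedged sketch of the apply() variant described above, using the same label_map function you fill in below (the column name isn't hard-coded, since it depends on the downloaded file):
label_column = labels_df.columns[0]
labels_df[label_column] = labels_df[label_column].apply(label_map)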
Step3: Transforming string features into integers
Now that the labels are ready, we'll turn our attention to the features. Many of the features
are categorical, where a feature can take on one of a few discrete values, which are not ordered.
Fill in the function body of transform_feature( df, column ) below so that it takes our features_df and
the name of a column in that dataframe, and returns the same dataframe but with the indicated
feature encoded with integers rather than strings.
We've provided code to wrap your transformer function in a loop iterating through all the columns that should
be transformed.
Last, add a line of code at the bottom of the block below that removes the date_recorded column from features_df. Time-series information like dates and times need special treatment, which we won't be going into today.
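One way to write that last line, as a sketch (drop() with axis=1 removes a column and returns a new dataframe):
features_df = features_df.drop("date_recorded", axis=1)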
Step4: Ok, a couple last steps to get everything ready for sklearn. The features and labels are taken out of their dataframes and put into a numpy.ndarray and list, respectively.
Step5: Predicting well failures with logistic regression
The cheapest and easiest way to train on one portion of your dataset and test on another, and to get a measure of model quality at the same time, is to use sklearn.cross_validation.cross_val_score(). This splits your data into 3 equal portions, trains on two of them, and tests on the third. This process repeats 3 times. That's why 3 numbers get printed in the code block below.
You don't have to add anything to the code block, it's ready to go already. However, use it for reference in the next part of the tutorial, where you will be looking at other sklearn algorithms.
Heads up
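For reference, a minimal sketch of the kind of code that block contains (assuming the feature array and label list are named X and y; newer sklearn versions moved cross_val_score to sklearn.model_selection):
from sklearn.model_selection import cross_val_score   # sklearn.cross_validation in older releases
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression()
print( cross_val_score(clf, X, y) )    # one score per fold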
Step6: Comparing logistic regression to tree-based methods
We have a baseline logistic regression model for well failures. Let's compare to a couple of other classifiers, a decision tree classifier and a random forest classifier, to see which one seems to do the best.
Code this up on your own. You can use the code in the box above as a kind of template, and just drop in the new classifiers. The sklearn documentation might also be helpful
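Here is a hedged sketch of one way to do that comparison (again assuming X and y hold the features and labels):
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
for clf in [DecisionTreeClassifier(), RandomForestClassifier()]:
    print( clf.__class__.__name__, cross_val_score(clf, X, y) )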
Step7: Congratulations! You have a working data science setup, in which you have
Step8: Now we'll take the to_transform list that you populated above with categorical variables, and use that to loop through columns that will be one-hot encoded.
One note before you code that up
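As a sketch, one possible approach uses pandas.get_dummies (the workshop code may use sklearn's OneHotEncoder instead; to_transform is the list of categorical column names built in the earlier loop):
features_df = pd.get_dummies(features_df, columns=to_transform)
print( features_df.shape )    # the column count grows, one new column per category level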
Step9: Now that the features are a little fixed up, I'd invite you to rerun the models, and see if the cross_val_score goes up as a result. It is also a great chance to take some of the theory discussion from the workshop and play around with the parameters of your models, and see if you can increase their scores that way. There's a blank code box below where you can play around.
Step10: End-to-end workflows using Pipeline and GridSearchCV
So far we have put together a nice analysis by assembling a few ideas in a script-like workflow. A few spots remain where we can tighten things up though
Step11: Pipeline
After selecting the 100 best features, the natural next step would be to run our random forest again to see if it does a little better with fewer features. So we would have SelectKBest doing selection, with the output of that process going straight into a classifier. A Pipeline packages the transformation step of SelectKBest with the estimation step of RandomForestClassifier into a coherent workflow.
Why might you want to use Pipeline instead of keeping the steps separate?
makes code more readable
don't have to worry about keeping track of data during intermediate steps, for example between transforming and estimating
makes it trivial to move ordering of the pipeline pieces, or to swap pieces in and out
Allows you to do GridSearchCV on your workflow
This last point is, in my opinion, the most important. We will get to it very soon, but first let's get a pipeline up and running that does SelectKBest followed by RandomForestClassifier.
In the code box below, I've also set up a slightly better training/testing structure, where I am explicitly splitting the data into training and testing sets which we'll use below. The training/testing split before was handled automatically in cross_val_score, but we'll be using a different evaluation metric from here forward, the classification report, which requires us to handle the train/test split ourselves.
Note
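A hedged sketch of what that might look like (variable names X and y are assumed, and the parameter values are placeholders rather than the workshop's exact choices):
from sklearn.model_selection import train_test_split   # sklearn.cross_validation in older releases
from sklearn.feature_selection import SelectKBest
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.metrics import classification_report
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)
pipeline = Pipeline([("select", SelectKBest(k=100)),
                     ("clf", RandomForestClassifier())])
pipeline.fit(X_train, y_train)
print( classification_report(y_test, pipeline.predict(X_test)) )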
Step12: Reading the classification report
A brief aside--we've switched from cross_val_score to classification_report for evaluation, mostly to show you two different ways of evaluating a model. The classification report has the advantage of giving you a lot more information: it reports precision, recall and f1-score for each class separately, which matters if (for example) one class is more important to get right than the others (say you're trying to zero in on non-functional wells, so finding those correctly is more important than getting the functional wells right).
For more information, the sklearn docs on classification_report are, like all the sklearn docs, incredibly helpful. For interpreting the various metrics, this page may also help.
GridSearchCV
We're in the home stretch now. When we decided to select the 100 best features, setting that number to 100 was kind of a hand-wavey decision. Similarly, the RandomForestClassifier that we're using right now has all its parameters set to their default values, which might not be optimal.
So, a straightforward thing to do now is to try different values of k and any RandomForestClassifier parameters we want to tune (for the sake of concreteness, let's play with n_estimators and min_samples_split). Trying lots of values for each of these free parameters is tedious, and there can sometimes be interactions between the choices you make in one step and the optimal value for a downstream step. In other words, to avoid local optima, you should try all the combinations of parameters, and not just vary them independently. So if you want to try 5 different values each for k, n_estimators and min_samples_split, that means 5 x 5 x 5 = 125 different combinations to try. Not something you want to do by hand.
GridSearchCV allows you to construct a grid of all the combinations of parameters, tries each combination, and then reports back the best combination/model. | <ASSISTANT_TASK:>
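As a hedged sketch, building on the pipeline example above (the 'select' and 'clf' step names and the candidate values are placeholders; note the step__parameter naming convention for pipeline parameters):
from sklearn.model_selection import GridSearchCV
parameters = {"select__k": [50, 100, 200],
              "clf__n_estimators": [10, 50, 100],
              "clf__min_samples_split": [2, 10, 20]}
grid = GridSearchCV(pipeline, parameters)
grid.fit(X_train, y_train)
print( grid.best_params_ )
print( classification_report(y_test, grid.predict(X_test)) )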
Python Code:
import pandas as pd
import numpy as np
features_df = pd.DataFrame.from_csv("well_data.csv")
labels_df = pd.DataFrame.from_csv("well_labels.csv")
print( labels_df.head(20) )
Explanation: <style>
@font-face {
font-family: CharisSILW;
src: url(files/CharisSIL-R.woff);
}
@font-face {
font-family: CharisSILW;
font-style: italic;
src: url(files/CharisSIL-I.woff);
}
@font-face {
font-family: CharisSILW;
font-weight: bold;
src: url(files/CharisSIL-B.woff);
}
@font-face {
font-family: CharisSILW;
font-weight: bold;
font-style: italic;
src: url(files/CharisSIL-BI.woff);
}
div.cell, div.text_cell_render{
max-width:1000px;
}
h1 {
text-align:center;
font-family: Charis SIL, CharisSILW, serif;
}
.rendered_html {
font-size: 130%;
line-height: 1.3;
}
.rendered_html li {
line-height: 2;
}
.rendered_html h1{
line-height: 1.3;
}
.rendered_html h2{
line-height: 1.2;
}
.rendered_html h3{
line-height: 1.0;
}
.text_cell_render {
font-family: Charis SIL, CharisSILW, serif;
line-height: 145%;
}
li li {
font-size: 85%;
}
</style>
End-to-End Data Science in Python
<img src="scikit-learn.png" />
Introduction
This is the workbook for the "End-to-End Data Analysis in Python" workshop
at the Open Data Science Conference 2015, in beautiful San Francisco.
This notebook contains starter code only; the goal is that we will fill in the
gaps together as we progress through the workshop. If, however, you're doing this
asynchronously or you get stuck, you can reference the solutions workbook.
The objective is to complete the "Pump it Up: Mining the Water Table" challenge
on drivendata.org; the objective here is to predict
African wells that are non-functional or in need of repair. Per the rules of the
competition, you should register for an account with drivendata.org, at which point you
can download the training set values and labels. We will be working with those datasets
during this workshop. You should download those files to the directory in which this
notebook lives, and name them well_data.csv and well_labels.csv (to be consistent
with our nomenclature). You are also encouraged to continue developing your solution
after this workshop, and/or to enter your solution in the competition on the drivendata
website!
### Code requirements
Here's the environment you'll need to work with this code base:
python 3 (2.x may work with minor changes, but no guarantees)
pandas
scikit-learn
numpy
First Draft of an Analysis
End of explanation
print( features_df.head() )
Explanation: One nice feature of ipython notebooks is that it's easy to make small changes to code and
then re-execute quickly, to see how things change. For example, printing the first 5 lines
of the labels dataframe (which is the default) isn't really ideal here, since there's a label
("functional needs repair") which doesn't appear in the first five lines. Type 20 in the
parentheses of labels_df.head(), so it now reads labels_df.head(20), and press shift-enter to
rerun the code. See the difference?
Now take a quick look at the features, again by calling .head() (set up for you in the code box
below, or add your own code to the code box above). You can print as many or as few
rows as you like. Skim the data--approximately how many features are there?
Are they all numeric, or will you have to do work to transform non-numeric features into
numbers?
End of explanation
def label_map(y):
if y=="functional":
return 2
elif y=="functional needs repair":
return 1
else:
return 0
labels_df = labels_df.applymap(label_map)
print( labels_df.head() )
Explanation: Transforming string labels into integers
The machine learning algorithms downstream are not going to handle it well if the class labels
used for training are strings; instead, we'll want to use integers. The mapping that we'll use
is that "non functional" will be transformed to 0, "functional needs repair" will be 1, and
"functional" becomes 2.
There are a number of ways to do this; the framework below uses applymap() in pandas.
Here's
the documentation for applymap(); in the code below, you should fill in the function body for
label_map(y) so that if y is "functional", label_map returns 2; if y is "functional needs
repair" then it should return 1, and "non functional" is 0.
There's a print statement there to help you confirm that the label transformation is working
properly.
As an aside, you could also use apply() here if you like. The difference between apply()
and applymap() is that applymap() operates on a whole dataframe while apply() operates on a series
(or you can think of it as operating on one column of your dataframe). Since labels_df only has
one column (aside from the index column), either one will work here.
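For reference, the same transformation written with apply() on the single label column looks like this (a sketch; it uses the status_group column name that shows up later in this notebook):
labels_df["status_group"] = labels_df["status_group"].apply(label_map)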
End of explanation
def transform_feature( df, column_name ):
unique_values = set( df[column_name].tolist() )
transformer_dict = {}
for ii, value in enumerate(unique_values):
transformer_dict[value] = ii
def label_map(y):
return transformer_dict[y]
df[column_name] = df[column_name].apply( label_map )
return df
### list of column names indicating which columns to transform;
### this is just a start! Use some of the print( labels_df.head() )
### output upstream to help you decide which columns get the
### transformation
names_of_columns_to_transform = ["funder", "installer", "wpt_name", "basin", "subvillage",
"region", "lga", "ward", "public_meeting", "recorded_by",
"scheme_management", "scheme_name", "permit",
"extraction_type", "extraction_type_group",
"extraction_type_class",
"management", "management_group",
"payment", "payment_type",
"water_quality", "quality_group", "quantity", "quantity_group",
"source", "source_type", "source_class",
"waterpoint_type", "waterpoint_type_group"]
for column in names_of_columns_to_transform:
features_df = transform_feature( features_df, column )
print( features_df.head() )
### remove the "date_recorded" column--we're not going to make use
### of time-series data today
features_df.drop("date_recorded", axis=1, inplace=True)
print(features_df.columns.values)
Explanation: Transforming string features into integers
Now that the labels are ready, we'll turn our attention to the features. Many of the features
are categorical, where a feature can take on one of a few discrete values, which are not ordered.
Fill in the function body of transform_feature( df, column ) below so that it takes our features_df and
the name of a column in that dataframe, and returns the same dataframe but with the indicated
feature encoded with integers rather than strings.
We've provided code to wrap your transformer function in a loop iterating through all the columns that should
be transformed.
Last, add a line of code at the bottom of the block below that removes the date_recorded column from features_df. Time-series information like dates and times need special treatment, which we won't be going into today.
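As an aside, pandas can also produce integer codes for a categorical column in one line, which you could use in place of the hand-rolled transformer (a sketch; the codes are equivalent up to ordering):
df[column_name] = pd.factorize(df[column_name])[0]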
End of explanation
X = features_df.as_matrix()
y = labels_df["status_group"].tolist()
Explanation: Ok, a couple last steps to get everything ready for sklearn. The features and labels are taken out of their dataframes and put into a numpy.ndarray and list, respectively.
End of explanation
import sklearn.linear_model
import sklearn.cross_validation
clf = sklearn.linear_model.LogisticRegression()
score = sklearn.cross_validation.cross_val_score( clf, X, y )
print( score )
Explanation: Predicting well failures with logistic regression
The cheapest and easiest way to train on one portion of your dataset and test on another, and to get a measure of model quality at the same time, is to use sklearn.cross_validation.cross_val_score(). This splits your data into 3 equal portions, trains on two of them, and tests on the third. This process repeats 3 times. That's why 3 numbers get printed in the code block below.
You don't have to add anything to the code block, it's ready to go already. However, use it for reference in the next part of the tutorial, where you will be looking at other sklearn algorithms.
Heads up: it can be a little slow. This took a minute or two to evaluate on my MacBook Pro.
End of explanation
import sklearn.tree
import sklearn.ensemble
clf = sklearn.tree.DecisionTreeClassifier()
score = sklearn.cross_validation.cross_val_score( clf, X, y )
print( score )
clf = sklearn.ensemble.RandomForestClassifier()
score = sklearn.cross_validation.cross_val_score( clf, X, y )
print( score )
Explanation: Comparing logistic regression to tree-based methods
We have a baseline logistic regression model for well failures. Let's compare to a couple of other classifiers, a decision tree classifier and a random forest classifier, to see which one seems to do the best.
Code this up on your own. You can use the code in the box above as a kind of template, and just drop in the new classifiers. The sklearn documentation might also be helpful:
* Decision tree classifier
* Random forest classifier
We will talk about all three of these models more in the next part of the tutorial.
End of explanation
import sklearn.preprocessing
def hot_encoder(df, column_name):
column = df[column_name].tolist()
column = np.reshape( column, (len(column), 1) ) ### needs to be an N x 1 numpy array
enc = sklearn.preprocessing.OneHotEncoder()
enc.fit( column )
new_column = enc.transform( column ).toarray()
column_titles = []
### making titles for the new columns, and appending them to dataframe
for ii in range( len(new_column[0]) ):
this_column_name = column_name+"_"+str(ii)
df[this_column_name] = new_column[:,ii]
return df
Explanation: Congratulations! You have a working data science setup, in which you have:
* read in data
* transformed features and labels to make the data amenable to machine learning
* made a train/test split (this was done implicitly when you called cross_val_score)
* evaluated several models for identifying wells that are failed or in danger of failing
Paying down technical debt and tuning the models
We got things running really fast, which is great, but at the cost of being a little quick-and-dirty about some details. First, we got the features encoded as integers, but they really should be dummy variables. Second, it's worth going through the models a little more thoughtfully, to try to understand their performance and if there's any more juice we can get out of them.
One-hot encoding to make dummy variables
A problem with representing categorical variables as integers is that integers are ordered, while categories are not. The standard way to deal with this is to use dummy variables; one-hot encoding is a very common way of dummying. Each possible category becomes a new boolean feature. For example, if our dataframe looked like this:
index country
1 "United States"
2 "Mexico"
3 "Mexico"
4 "Canada"
5 "United States"
6 "Canada"
then after dummying it will look something like this:
index country_UnitedStates country_Mexico country_Canada
1 1 0 0
2 0 1 0
3 0 1 0
4 0 0 1
5 1 0 0
6 0 0 1
Hopefully the origin of the name is clear--each variable is now encoded over several boolean columns, one of which is true (hot) and the others are false.
Now we'll write a hot-encoder function that takes the data frame and the title of a column, and returns the same data frame but one-hot encoding performed on the indicated feature.
Protip: sklearn has a one-hot encoder function available that will be your friend here.
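For comparison, pandas can build the same kind of dummy columns directly with get_dummies(); here is a minimal sketch on the countries example above (the dataframe here is made up purely for illustration):
example_df = pd.DataFrame({"country": ["United States", "Mexico", "Mexico", "Canada", "United States", "Canada"]})
print(pd.get_dummies(example_df["country"], prefix="country"))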
End of explanation
print(features_df.columns.values)
features_df.drop( "funder", axis=1, inplace=True )
features_df.drop( "installer", axis=1, inplace=True )
features_df.drop( "wpt_name", axis=1, inplace=True )
features_df.drop( "subvillage", axis=1, inplace=True )
features_df.drop( "ward", axis=1, inplace=True )
names_of_columns_to_transform.remove("funder")
names_of_columns_to_transform.remove("installer")
names_of_columns_to_transform.remove("wpt_name")
names_of_columns_to_transform.remove("subvillage")
names_of_columns_to_transform.remove("ward")
for feature in names_of_columns_to_transform:
features_df = hot_encoder( features_df, feature )
print( features_df.head() )
Explanation: Now we'll take the names_of_columns_to_transform list that you populated above with categorical variables, and use that to loop through the columns that will be one-hot encoded.
One note before you code that up: one-hot encoding comes with the baggage that it makes your dataset bigger--sometimes a lot bigger. In the countries example above, one column that encoded the country has now been expanded out to three columns. You can imagine that this can sometimes get really, really big (imagine a column encoding all the counties in the United States, for example).
There are some columns in this example that will really blow up the dataset, so we'll remove them before proceeding with the one-hot encoding.
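If you want a quick read on how much each column would blow up the dataset before dummying, counting the unique values per column is enough; a sketch using the objects already defined in this notebook:
for column in names_of_columns_to_transform:
    print(column, features_df[column].nunique())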
End of explanation
X = features_df.as_matrix()
y = labels_df["status_group"].tolist()
clf = sklearn.ensemble.RandomForestClassifier()
score = sklearn.cross_validation.cross_val_score( clf, X, y )
print(score)
Explanation: Now that the features are a little fixed up, I'd invite you to rerun the models, and see if the cross_val_score goes up as a result. It is also a great chance to take some of the theory discussion from the workshop and play around with the parameters of your models, and see if you can increase their scores that way. There's a blank code box below where you can play around.
End of explanation
import sklearn.feature_selection
select = sklearn.feature_selection.SelectKBest(k=100)
selected_X = select.fit_transform(X, y)
print( selected_X.shape )
Explanation: End-to-end workflows using Pipeline and GridSearchCV
So far we have made a nice workflow using a few ideas assembled in a script-like workflow. A few spots remain where we can tighten things up though:
the best model, the random forest, has a lot of parameters that we'd have to work through if we really wanted to tune it
after dummying, we have lots of features, probably only a subset of which are really offering any discriminatory power (this is a version of the bias-variance tradeoff)
maybe there's a way to make the code more streamlined (hint: there is)
We will solve all these with two related and lovely tools in sklearn: Pipeline and GridSearchCV.
Pipeline in sklearn is a tool for chaining together multiple pieces of a workflow into a single coherent analysis. In our example, we will chain together a tool for feature selection, which will address the second point, and then feed our optimized feature set into the random forest model, all in a few lines of code (which addresses the third point).
To get to the first point, about finding the best parameters--that's where the magic of GridSearchCV comes in. But first we need to get the feature selector and pipeline up and running, so let's do that now.
In sklearn.feature_selection there is a useful tool, SelectKBest, that you should use. By default, this will select the 10 best features; that seems like it might be too few features to do well on this problem, so change the number of features to 100.
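Once the selector has been fit, you can also check which columns survived; a sketch, assuming the select object and the features_df from this notebook:
mask = select.get_support()
print(features_df.columns[mask])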
End of explanation
import sklearn.pipeline
select = sklearn.feature_selection.SelectKBest(k=100)
clf = sklearn.ensemble.RandomForestClassifier()
steps = [('feature_selection', select),
('random_forest', clf)]
pipeline = sklearn.pipeline.Pipeline(steps)
X_train, X_test, y_train, y_test = sklearn.cross_validation.train_test_split(X, y, test_size=0.33, random_state=42)
### fit your pipeline on X_train and y_train
pipeline.fit( X_train, y_train )
### call pipeline.predict() on your X_test data to make a set of test predictions
y_prediction = pipeline.predict( X_test )
### test your predictions using sklearn.classification_report()
report = sklearn.metrics.classification_report( y_test, y_prediction )
### and print the report
print(report)
Explanation: Pipeline
After selecting the 100 best features, the natural next step would be to run our random forest again to see if it does a little better with fewer features. So we would have SelectKBest doing selection, with the output of that process going straight into a classifier. A Pipeline packages the transformation step of SelectKBest with the estimation step of RandomForestClassifier into a coherent workflow.
Why might you want to use Pipeline instead of keeping the steps separate?
makes code more readable
don't have to worry about keeping track of data during intermediate steps, for example between transforming and estimating
makes it trivial to move ordering of the pipeline pieces, or to swap pieces in and out
Allows you to do GridSearchCV on your workflow
This last point is, in my opinion, the most important. We will get to it very soon, but first let's get a pipeline up and running that does SelectKBest followed by RandomForestClassifier.
In the code box below, I've also set up a slightly better training/testing structure, where I am explicitly splitting the data into training and testing sets which we'll use below. The training/testing split before was handled automatically in cross_val_score, but we'll be using a different evaluation metric from here forward, the classification report, which requires us to handle the train/test split ourselves.
Note: when you do SelectKBest, you might see a warning about a bunch of features that are constant. This isn't a problem. It's giving you a heads up that the indicated features don't show any variation, which could be a signal that something is wrong or that SelectKBest might be doing something unexpected.
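One more nicety: once the pipeline from the code box has been fit, the individual steps are still reachable if you want to poke at them; a sketch using named_steps:
fitted_select = pipeline.named_steps["feature_selection"]
print(fitted_select.get_support().sum(), "features kept")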
End of explanation
import sklearn.grid_search
#import warnings
#warnings.filterwarnings("ignore")
parameters = dict(feature_selection__k=[100, 200],
random_forest__n_estimators=[50],
random_forest__min_samples_split=[4])
cv = sklearn.grid_search.GridSearchCV(pipeline, param_grid=parameters)
print(pipeline.named_steps)
cv.fit(X_train, y_train)
y_predictions = cv.predict(X_test)
report = sklearn.metrics.classification_report( y_test, y_predictions )
### and print the report
print(report)
Explanation: Reading the classification report
A brief aside--we've switched from cross_val_score to classification_report for evaluation, mostly to show you two different ways of evaluating a model. The classification report has the advantage of giving you a lot more information, which matters if (for example) one class is more important to get right than the others (say you're trying to zero in on non-functional wells, so finding those correctly is more important than getting the functional wells right).
For more information, the sklearn docs on classification_report are, like all the sklearn docs, incredibly helpful. For interpreting the various metrics, this page may also help.
GridSearchCV
We're in the home stretch now. When we decided to select the 100 best features, setting that number to 100 was kind of a hand-wavey decision. Similarly, the RandomForestClassifier that we're using right now has all its parameters set to their default values, which might not be optimal.
So, a straightforward thing to do now is to try different values of k and any RandomForestClassifier parameters we want to tune (for the sake of concreteness, let's play with n_estimators and min_samples_split). Trying lots of values for each of these free parameters is tedious, and there can sometimes be interactions between the choices you make in one step and the optimal value for a downstream step. In other words, to avoid local optima, you should try all the combinations of parameters, and not just vary them independently. So if you want to try 5 different values each for k, n_estimators and min_samples_split, that means 5 x 5 x 5 = 125 different combinations to try. Not something you want to do by hand.
GridSearchCV allows you to construct a grid of all the combinations of parameters, tries each combination, and then reports back the best combination/model.
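Once cv.fit() has run, the winning parameter combination and its cross-validated score live on the fitted object; a sketch:
print(cv.best_params_)
print(cv.best_score_)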
End of explanation
<END_TASK> |
15,518 | <SYSTEM_TASK:>
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
<END_TASK>
<USER_TASK:>
Problem:
So in numpy arrays there is the built in function for getting the diagonal indices, but I can't seem to figure out how to get the diagonal starting from the top right rather than top left. | <ASSISTANT_TASK:>
Python Code:
import numpy as np
a = np.array([[ 0, 1, 2, 3, 4, 5],
[ 5, 6, 7, 8, 9, 10],
[10, 11, 12, 13, 14, 15],
[15, 16, 17, 18, 19, 20],
[20, 21, 22, 23, 24, 25]])
result = np.diag(np.fliplr(a))
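# For the array above this picks out the anti-diagonal values: array([ 5,  9, 13, 17, 21])
# Offsets work the same way if a different anti-diagonal is needed, e.g. np.diag(np.fliplr(a), k=1)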
<END_TASK> |
15,519 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Oscilloscope USBTMC utility
Step1: Capture notes
There is an FTP server that can save to CSV files.
Note
Step2: Read waveforms from the file
Step3: Capture waveforms and plot it
Step4: Reading buffer with stored records
Python Code:
import matplotlib.pyplot as plt
import sys
import os
import time
import h5py
import numpy as np
# TRYING TO ADAPT THE SCRIPT FOR THE MICSIG detector
# Create ownership rule as /etc/udev/rules.d/99-micsig.rules
# SUBSYSTEMS=="usb", ATTRS{idVendor}=="18d1", ATTRS{idProduct}=="0303", GROUP="medved", MODE="0666"
class UsbTmcDriver:
def __init__(self, device):
self.device = device
self.FILE = os.open(device, os.O_RDWR)
def write(self, command):
os.write(self.FILE, command);
def read(self, length = 2048):
return os.read(self.FILE, length)
def getName(self):
self.write("*IDN?")
return self.read(300)
def sendReset(self):
self.write("*RST") # Be carefull, this real resets an oscilloscope
# Looking for USBTMC device
def getDeviceList():
dirList=os.listdir("/dev")
result=list()
for fname in dirList:
if(fname.startswith("usbtmc")):
result.append("/dev/" + fname)
return result
# looking for oscilloscope
devices = getDeviceList()
# initiate oscilloscope
osc = UsbTmcDriver(devices[0])
print osc.getName()
#osc.write(":STOP")
#osc.write(":WAW:SOUR CH1")
#osc.write(":WAW:MODE NORMAL")
#osc.write(":WAW:BEGIN")
#print osc.read(2048)
#osc.write(":WAV:POIN")
#osc.write(":STORAGE:SAVE CH1 LOCAL")
#osc.write(":STOR:CAPT?")
#time.sleep(1.0)
#print osc.read()
%matplotlib notebook
fig,ax = plt.subplots(1,1)
osc.write(':WAV:SOUR CHAN1')
time.sleep(0.01)
osc.write(':WAV:MODE NORM')
time.sleep(0.01)
#osc.write(':WAV:FORM BYTE')
#osc.write(':WAV:POIN 700')
osc.write(':WAV:DATA?')
wave1 = bytearray(osc.read(500))
wave2 = bytearray(osc.read(500))
wave = np.concatenate((wave1[11:],wave2[:(700-489)]))
fig.clf()
plt.ylim(0, 255)
plt.plot(wave)
fig.canvas.draw()
Explanation: Oscilloscope USBTMC utility
End of explanation
runs = 10
%matplotlib notebook
fig,ax = plt.subplots(1,1)
osc.write(':WAV:XINC?')
xinc = float(osc.read(100))
print 'XINC:', xinc
osc.write(':WAV:YINC?')
yinc = float(osc.read(100))
print 'YINC:', yinc
osc.write(':WAV:SOUR CHAN1')
osc.write(':WAV:MODE NORM')
osc.write(':WAV:FORM BYTE')
osc.write(':WAV:POIN 700')
with h5py.File('data.h5', 'w') as hf:
hf.create_dataset('XINC', data=(xinc))
hf.create_dataset('YINC', data=(yinc))
#hf.create_dataset('YINC', bytearray(yinc))
for n in range(runs):
#osc.write(':RUN')
# waiting for SINGLE trigger
osc.write(':SING')
osc.write(':TRIG:STAT?')
while (osc.read(4) != 'STOP'):
osc.write(':TRIG:STAT?')
time.sleep(0.1)
#osc.write(':STOP')
osc.write(':WAV:DATA?')
wave1 = bytearray(osc.read(500))
wave2 = bytearray(osc.read(500))
wave = np.concatenate((wave1[11:],wave2[:(700-489)]))
#osc.write(':WAV:RES')
hf.create_dataset(str(n), data=wave)
fig.clf()
plt.ylim(0, 255)
plt.plot(wave)
fig.canvas.draw()
Explanation: Capture notes
There is a FTP server that can save to CSV files.
Note:
Set Display:Persist:None
Set Display:Waveform:Dots
8888 is for remote control.
See also:
https://www.eevblog.com/forum/testgear/micsig-to1104-(similar-to-rigol-1104z)/125/
Read the number of waveforms and write them to a file
End of explanation
%matplotlib inline
with h5py.File('data.h5','r') as hf:
#print('List of arrays in this file: \n', hf.keys())
print 'XINC:', np.array(hf.get('XINC')), ' YINC:', np.array(hf.get('YINC'))
for n in range(10):
data = hf.get(str(n))
np_data = np.array(data)
plt.plot(np_data)
#print np_data
maximum = 0
minimum = 255
suma = 0
for i in range(700):
if np_data[i] > maximum:
maximum = np_data[i]
if np_data[i] < minimum:
minimum = np_data[i]
suma = suma + np_data[i]
print minimum, maximum, suma
Explanation: Read waveforms from the file
End of explanation
%matplotlib notebook
fig,ax = plt.subplots(1,1)
# looking for oscilloscope
devices = getDeviceList()
# initiate oscilloscope
osc = UsbTmcDriver(devices[0])
df = pd.Series(0) # Create a new dataframe
for n in range(10):
# waiting for SINGLE trigger
osc.write(':SINGLE')
osc.write(':TRIG:STAT?')
while (osc.read(4) != 'STOP'):
osc.write(':TRIG:STAT?')
#osc.write(':STOP')
osc.write(':WAV:SOUR CHAN1')
osc.write(':WAV:MODE NORM')
osc.write(':WAV:FORM BYTE')
osc.write(':WAV:DATA?')
#wave = map(float, osc.read(100).split(','))
wave = bytearray(osc.read(1400))
fig.clf()
plt.ylim(0, 255)
plt.plot(wave[10:])
fig.canvas.draw()
time.sleep(0.1)
runs = 10
osc.write(':WAV:XINC?')
xinc = float(osc.read(100))
print 'XINC:', xinc
osc.write(':WAV:YINC?')
yinc = float(osc.read(100))
print 'YINC:', yinc
osc.write(':WAV:SOUR CHAN1')
osc.write(':WAV:MODE NORM')
osc.write(':WAV:FORM BYTE')
osc.write(':WAV:POIN 700')
with h5py.File('data.h5', 'w') as hf:
hf.create_dataset('XINC', data=(xinc))
hf.create_dataset('YINC', data=(yinc))
#hf.create_dataset('YINC', bytearray(yinc))
for n in range(runs):
#osc.write(':RUN')
# waiting for SINGLE trigger
osc.write(':SING')
osc.write(':TRIG:STAT?')
while (osc.read(4) != 'STOP'):
osc.write(':TRIG:STAT?')
time.sleep(0.2)
#osc.write(':STOP')
osc.write(':WAV:DATA?')
wave1 = bytearray(osc.read(500))
wave2 = bytearray(osc.read(500))
wave = np.concatenate((wave1[11:],wave2[:(700-489)]))
#osc.write(':WAV:END')
hf.create_dataset(str(n), data=wave)
%matplotlib inline
with h5py.File('data.h5','r') as hf:
#print('List of arrays in this file: \n', hf.keys())
print 'XINC:', np.array(hf.get('XINC')), ' YINC:', np.array(hf.get('YINC'))
for n in range(10):
data = hf.get(str(n))
np_data = np.array(data)
plt.plot(np_data)
#print np_data
maximum = 0
minimum = 255
suma = 0
for i in range(700):
if np_data[i] > maximum:
maximum = np_data[i]
if np_data[i] < minimum:
minimum = np_data[i]
suma = suma + np_data[i]
print minimum, maximum, suma
runs = 1
%matplotlib notebook
fig,ax = plt.subplots(1,1)
osc.write(':WAV:XINC?')
xinc = float(osc.read(100))
print 'XINC:', xinc
osc.write(':WAV:YINC?')
yinc = float(osc.read(100))
print 'YINC:', yinc
osc.write(':WAV:SOUR CHAN1')
osc.write(':WAV:MODE NORM')
osc.write(':WAV:FORM BYTE')
osc.write(':WAV:POIN 700')
with h5py.File('data.h5', 'w') as hf:
hf.create_dataset('XINC', data=(xinc))
hf.create_dataset('YINC', data=(yinc))
#hf.create_dataset('YINC', bytearray(yinc))
for n in range(runs):
#osc.write(':RUN')
# waiting for SINGLE trigger
#osc.write(':SING')
#osc.write(':TRIG:STAT?')
#while (osc.read(4) != 'STOP'):
# osc.write(':TRIG:STAT?')
#time.sleep(0.1)
#osc.write(':STOP')
osc.write(':WAV:DATA?')
wave1 = bytearray(osc.read(500))
wave2 = bytearray(osc.read(500))
wave = np.concatenate((wave1[11:],wave2[:(700-489)]))
#osc.write(':WAV:RES')
hf.create_dataset(str(n), data=wave)
fig.clf()
plt.ylim(0, 255)
plt.plot(wave)
fig.canvas.draw()
Explanation: Capture waveforms and plot it
End of explanation
with h5py.File('data.h5', 'w') as hf:
osc.write(':FUNC:WREP:FEND?') # get number of last frame
frames = int(osc.read(100))
print 'FRAMES:', frames
hf.create_dataset('FRAMES', data=(frames)) # write number of frames
hf.create_dataset('XINC', data=(xinc)) # write axis parameters
hf.create_dataset('YINC', data=(yinc))
for n in range(1,frames+1):
osc.write(':FUNC:WREP:FCUR ' + str(n)) # skip to n-th frame
time.sleep(0.1)
osc.write(':WAV:DATA?') # read data
#time.sleep(0.4)
wave1 = bytearray(osc.read(500))
wave2 = bytearray(osc.read(500))
wave = np.concatenate((wave1[11:],wave2[:(700-489)]))
hf.create_dataset(str(n), data=wave)
%matplotlib inline
with h5py.File('data.h5','r') as hf:
#print('List of arrays in this file: \n', hf.keys())
print 'XINC:', np.array(hf.get('XINC')), ' YINC:', np.array(hf.get('YINC')), ' FRAMES:', np.array(hf.get('FRAMES'))
frames = np.array(hf.get('FRAMES'))
for n in range(1,frames+1):
data = hf.get(str(n))
np_data = np.array(data)
plt.plot(np_data)
#print np_data
maximum = 0
minimum = 255
suma = 0
for i in range(700):
if np_data[i] > maximum:
maximum = np_data[i]
if np_data[i] < minimum:
minimum = np_data[i]
suma = suma + np_data[i]
print minimum, maximum, suma
%matplotlib inline
import matplotlib.pyplot as plt
import sys
import os
import time
import h5py
import numpy as np
with h5py.File('data2.h5','r') as hf:
#print('List of arrays in this file: \n', hf.keys())
print 'XINC:', np.array(hf.get('XINC')), ' YINC:', np.array(hf.get('YINC')), ' FRAMES:', np.array(hf.get('FRAMES'))
frames = np.array(hf.get('FRAMES'))
for n in range(1,frames+1):
data = hf.get(str(n))
np_data = np.array(data)
plt.plot(np_data)
#print np_data
maximum = 0
minimum = 255
suma = 0
for i in range(700):
if np_data[i] > maximum:
maximum = np_data[i]
if np_data[i] < minimum:
minimum = np_data[i]
suma = suma + np_data[i]
if n<10:
print n,',', minimum,',', maximum,',', suma
Explanation: Reading buffer with stored records
End of explanation
<END_TASK> |
15,520 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Markowitz Portfolio Optimization
Final Project for CPSC 458 by Connor Durkin
This project explores the use of a mean variance or Markowitz method of portfolio optimization. The goal is to employ this trading strategy for a portfolio of SPDR ETFs and track returns over historical data. More importantly, though, as this is a class in decision making, I have incorporated the ability for the functions here to explain their motivations to a human being--hopefully in a palatable manner. Below are the function definitions, and at the end of the notebook you will find an example of their use. These functions were written with default keyword argument values, but the operations are general enough to apply this strategy to any selection of securities with return data available via yahoo finance. Be sure to read the Results and Analysis at the end!
Step1: getTimeSeries( ticker, start_date, end_date)
What it does
Step2: getMultTimeSeries( tickers, start_date, end_date)
What it does
Step7: markowitzReturns( returns)
What it does
Step8: backtest( tickers, start_date, end_date, start, max_lookback, explain)
What it does
Step10: analyzeResults( weights_df, total_returns, naive_return, commission)
What it does | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import yahoo_finance
from yahoo_finance import Share
import numpy as np
import pandas
import matplotlib.pyplot as plt
import datetime
import cvxopt as opt
from cvxopt import blas, solvers
# We will do a lot of optimizations,
# and don't want to see each step.
solvers.options['show_progress'] = False
Explanation: Markowitz Portfolio Optimization
Final Project for CPSC 458 by Connor Durkin
This project explores the use of a mean variance or Markowitz method of portfolio optimization. The goal is to employ this trading strategy for a portfolio of SPDR ETFs and track returns over historical data. More importantly, though, as this is a class in decision making, I have incporated the ability for the functions here to explain their motivations to a human being--hopefully in a palatable manner. Below are the function definitions and at the end of the notebook you will find an example of their use. These functions were written with default key values but the operations are general enough to apply this strategy to any selection of securities with return data available via yahoo finance. Be sure to read the Results and Analysis at the end!
End of explanation
def getTimeSeries( ticker, start_date='2012-01-01', end_date='2012-02-01'):
# yahoo_finance API to load list of dictionaries
obj = Share(ticker)
ts = obj.get_historical(start_date,end_date)
# yahoo_finance indexes most recent date first, reverse this
ts = list(reversed(ts))
# Convert date strings to python datetime objects for easier manipulation
dates = [datetime.datetime.strptime(ts[i]['Date'],'%Y-%m-%d').date() for i in range(len(ts))]
# Convert close price strings to floats for numerical manipulation
prices = [float(ts[i]['Adj_Close']) for i in range(len(ts))]
# Create DataFrame from the list produced - python will recognize as Series
time_series = pandas.DataFrame( prices, index = dates, columns = [ticker])
return time_series
Explanation: getTimeSeries( ticker, start_date, end_date)
What it does:
getTimeSeries() takes in a date range and a ticker and returns a timeseries of adjusted closing prices.
Inputs:
ticker: a string indicating the security for which the time series will be generated.
start_date: a string of the form 'YYYY-MM-DD' declaring the beginning of the historical window.
end_date: a string of the form 'YYYY-MM-DD' declaring the end of the historical window
Returns:
time_series: a single column Pandas DataFrame containing the time series of adjusted close prices
for the indicated ticker.
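A quick usage sketch (this assumes the yahoo_finance endpoint is still reachable, which may no longer be the case):
xle = getTimeSeries('XLE', '2012-01-01', '2012-02-01')
print(xle.head())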
End of explanation
def getMultTimeSeries( tickers = ['XLY','XLP','XLE','XLF','XLV','XLI','XLB','XLK','XLU'],
start_date = '2012-01-01', end_date = '2012-02-01'):
# Initialize DataFrame
time_series_dataframe = pandas.DataFrame()
# Iterate over all tickers and append column to DataFrame
for ticker in tickers:
# Use helper function to get single column DataFrame
df = getTimeSeries( ticker, start_date, end_date)
# Concatanate on axis = 1
time_series_dataframe = pandas.concat([time_series_dataframe,df],axis = 1)
return time_series_dataframe
Explanation: getMultTimeSeries( tickers, start_date, end_date)
What it does:
getMultTimeSeries() takes in a list of tickers and a specified date range and returns a Pandas DataFrame containing timeseries of adjusted closing prices.
Inputs:
tickers: a list of strings indicating which tickers to include. Defaults to these 9 SPDR ETFs: 'XLY','XLP','XLE','XLF','XLV','XLI','XLB','XLK','XLU'.
start_date: a string of the form 'YYYY-MM-DD' declaring the beginning of the historical window.
end_date: a string of the form 'YYYY-MM-DD' declaring the end of the historical window
Returns:
time_series_dataframe: a dataframe of adjusted closing price timeseries over the specified date range for the specified group of tickers
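A usage sketch showing how the per-ticker price columns feed the rest of the analysis (same caveat about the data source as above):
prices = getMultTimeSeries(['XLE', 'XLF'], '2012-01-01', '2012-02-01')
returns = prices.pct_change().dropna()
print(returns.head())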
End of explanation
def markowitzReturns( returns, tickers, explain = False):
n = len(returns)
returns_df = returns
returns = np.asmatrix(returns)
mus = [10**(5.0 * t/50 - 1.0) for t in range(50)]
# Convert to cvxopt matrices
Sigma = opt.matrix(np.cov(returns))
q = opt.matrix(np.mean(returns, axis=1))
# Create constraint matrices
G = -opt.matrix(np.eye(n)) # negative n x n identity matrix
h = opt.matrix(0.0, (n ,1)) # -I*w < 0 i.e. no shorts
A = opt.matrix(1.0, (1, n)) # A is all ones so A*w = w
b = opt.matrix(1.0) # Dot product sums to 1
# Calculate efficient frontier weights using quadratic programming
ports = [solvers.qp(mu*Sigma, -q, G, h, A, b)['x'] for mu in mus]
# Calculate risks and returns of frontier
returns = [blas.dot(q, x) for x in ports]
risks = [np.sqrt(blas.dot(x, Sigma*x)) for x in ports]
# Fit polynomial to frontier curve
m = np.polyfit(returns, risks, 2)
x = np.sqrt(m[2]/m[0])
# Calculate optimal portfolio weights
optimal_weights = solvers.qp(opt.matrix(x * Sigma), -q, G, h, A, b)['x']
optimal_return = blas.dot(q, optimal_weights)
optimal_risk = np.sqrt(blas.dot(optimal_weights, Sigma*optimal_weights))
# Method to justify this portfolio distribution if asked for
if( explain ):
date_text =
--------------------------------------------------------------------------------------------------
Using returns data from {0} to {1} a careful mean - variance analysis was performed.
The analysis found a number of portfolios lying on the markowitz efficient frontier and they are
found below. The analysis indicates that the optimal portfolio for the next trading day will have
the following distribution:
print(date_text.format(returns_df.columns[0],returns_df.columns[len(returns_df.columns)-1]))
# Print optimal weights
weights = np.asarray(optimal_weights)
weights = [float(weights[i]) for i in range(len(weights))]
wts = dict(zip(tickers,weights))
for k in wts:
weight_text = "\t{0} : {1:.4f}%"
print(weight_text.format(str(k),float(wts[k])*100))
returns_text =
This portfolio distribution has an expected return of:
{0:.4f}%
print(returns_text.format(float(optimal_return)*100))
risk_text =
And the associated risk (standard deviation) is:
{0:.4f}
print(risk_text.format(float(optimal_risk)))
break_text=
--------------------------------------------------------------------------------------------------
print(break_text)
plt.plot(risks, returns, 'b-o')
plt.title('Efficient Portfolios on {}'.format(returns_df.columns[len(returns_df.columns)-1]))
plt.ylabel('Returns (%)')
plt.xlabel('Risk (STD)')
return np.asarray(optimal_weights), returns, risks
Explanation: markowitzReturns( returns)
What it does:
markowitzReturns() takes in a Pandas DataFrame (or any container which can be converted to a numpy matrix) of returns and uses mean-variance portfolio theory to return an optimally weighted portfolio. It does so by minimizing $\mu\,\omega^{T}\Sigma\omega - q^{T}\omega$ (the Markowitz mean-variance framework) for portfolio weights $\omega$, where $\Sigma$ is the covariance matrix of the securities, $q$ is the vector of mean returns, and $\mu$ is a risk-aversion parameter swept over a range of values to trace out the efficient frontier. The optimization is performed using the CVXOPT package, employing the solvers.qp() quadratic programming method. This method minimizes $(1/2)x^{T}Px + q^{T}x$ subject to $Gx \preceq h$ and $Ax = b$. It also utilizes CVXOPT's BLAS methods for performing linear algebra computations. Inspiration for this process was found in Dr. Thomas Starke, David Edwards and Dr. Thomas Wiecki's quantopian blog post located at: http://blog.quantopian.com/markowitz-portfolio-optimization-2/.
Inputs:
returns: a Pandas DataFrame (or other container which can be converted to a numpy matrix). NOTE: the dataframe produced by getMultTimeSeries must be transposed (returns.T) for meaningful results.
tickers: a list of ticker strings, used to label the optimal weights in the printed explanation.
explain: a True / False input determining whether to print a robust explanation of the chosen portfolio distribution.
Returns:
optimal_weights: the weights of the optimal portfolio in array form.
returns: the returns of all portfolios calculated across the efficient frontier.
risks: list of risks of all portfolios calculated across the efficient frontier.
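A minimal usage sketch, mirroring how backtest() calls this function (note the transpose, and that the tickers list is only used to label the printed weights); it assumes returns is a dates-by-tickers frame of daily returns such as getMultTimeSeries().pct_change().dropna():
weights, frontier_returns, frontier_risks = markowitzReturns(returns.T, list(returns.columns), explain=True)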
End of explanation
def backtest( tickers = ['XLY','XLP','XLE','XLF','XLV','XLI','XLB','XLK','XLU'],
start_date = '2012-01-01', end_date = '2012-01-20', start = 10, max_lookback = 100,
explain = False):
timeseries = getMultTimeSeries( tickers, start_date, end_date)
returns = timeseries.pct_change().dropna()
weights_df = pandas.DataFrame()
for i in range(len(returns)):
if ( i > start ):
if( i < max_lookback ):
returns_window = returns[0:i]
else:
returns_window = returns[(i-max_lookback):i]
try:
if( explain ):
weights, returns_window, risks = markowitzReturns(returns_window.T, tickers, explain = True)
else:
weights, returns_window, risks = markowitzReturns(returns_window.T, tickers, explain = False)
except ValueError as e:
# Sometimes CVXOPT fails (infrequently)
# "ValueError: Rank(A) < p or Rank([P; A; G]) < n"
# In this case just do nothing (keep current weights)
weights, returns_window, risks = weights_prev, returns_window_prev, risks_prev
weights = [float(weights[i]) for i in range(len(weights))]
wts = dict(zip(tickers,weights))
df = pandas.DataFrame(wts, index = [returns.index[i]])
weights_df = pandas.concat([weights_df, df])
weights_prev, returns_window_prev, risks_prev = weights, returns_window, risks
total_returns = pandas.DataFrame(weights_df.values*returns[(start+1)::],
columns = returns.columns, index = returns.index)
naive_returns = [np.sum(total_returns[[i]]) for i in range(len(total_returns.columns))]
naive_return = np.sum(naive_returns)
return weights_df, total_returns.dropna(), naive_return
weights, returns, naive_return = backtest(explain = True)
Explanation: backtest( tickers, start_date, end_date, start, max_lookback, explain)
What it does:
backtest() applies the mean-variance portfolio optimization trading strategy to a list of stocks. It applies the markowitzReturns() method over a range of dates and tracks the portfolio movement and returns, outputting a DataFrame describing the portfolio over time, a DataFrame describing the returns over time and a total return amount. Backtest does not take into account commission costs. Running backtest(explain = True) produces the output below. The default dates were carefully selected so that just one explain instance would print.
Inputs:
tickers: a list of strings indicating which tickers to include. Defaults to these 9 SPDR ETFs: 'XLY','XLP','XLE','XLF','XLV','XLI','XLB','XLK','XLU'.
start_date: a string of the form 'YYYY-MM-DD' declaring the beginning of the historical window.
end_date: a string of the form 'YYYY-MM-DD' declaring the end of the historical window
start: the minimum number of days to wait before beginning to trade (i.e. how much information is needed). Default is 10.
max_lookback: the maximum number of days to look back for data, i.e. the size of the input to markowitzReturns(). Default is 100.
Returns:
weights_df: a pandas DataFrame containing the portfolio weights over time, beginning start trading days after the start date.
total_returns: a pandas DataFrame containing the portfolio returns over time, beginning start trading days after the start date.
naive_return: the total naive return (numpy float).
End of explanation
weights, returns, naive_return = backtest(start_date='2012-01-01',end_date='2012-12-31')
def analyzeResults( weights_df, total_returns, naive_return, commission = .0004):
start_date = weights_df.index[0]
end_date = weights_df.index[len(weights_df.index)-1]
# Get cummulative sum of returns for plotting
return_sums = total_returns.cumsum()
return_sums['total_return'] = return_sums.sum(axis=1)
# Analyze data with commission costs
weights_diff = weights_df.diff()
weights_diff['total_delta'] = weights_diff.abs().sum(axis = 1)
portfolio_movement = pandas.DataFrame(weights_diff['total_delta']/2)
portfolio_movement['commissions'] = portfolio_movement['total_delta']*commission
portfolio_movement['naive_return'] = total_returns.sum(axis=1)
portfolio_movement['real_return'] = (portfolio_movement['naive_return'] - portfolio_movement['commissions'])
real_sums = portfolio_movement.cumsum()
real_return = portfolio_movement['real_return'].sum()
# Print naive_return and real_return + analysis
naive_return_text =
--------------------------------------------------------------------------------------------------
In trading from {0} to {1} the total return ignoring commission fees was:
{2:.4f}%
After factoring in commission fees of {3} the total return was:
{4:.4f}%
--------------------------------------------------------------------------------------------------
print(naive_return_text.format( start_date, end_date, naive_return*100, commission ,real_return*100) )
# Get plot of naive_returns and real returns over time
plt.figure(figsize=(12,6))
plt.plot(return_sums.index,return_sums['total_return'],label='Naive Returns')
plt.plot(real_sums.index,real_sums['real_return'],label='Real Returns')
plt.title('Returns over Time')
plt.xlabel('Time')
plt.ylabel('Returns (%)')
plt.xticks(rotation=70)
plt.legend()
plt.legend(bbox_to_anchor=(1.01, .5), loc=2, borderaxespad=0.)
return
analyzeResults( weights, returns, naive_return, commission = .0004)
Explanation: analyzeResults( weights_df, total_returns, naive_return, commission)
What it does:
analyzeResults() is the final function, which analyzes and displays the results of the backtest() function. It takes the output of backtest() plus an argument for the commission, which defaults to 4 basis points. It plots the real and naive returns over time and displays the total real and naive returns over the date range from backtest(). Below is an example from 2012.
Inputs:
weights_df: pandas DataFrame of portfolio weights over time, returned from backtest().
total_returns: pandas DataFrame of naive returns over time, returned from backtest().
naive_return: total naive_return as returned by backtest().
commission: basis point cost on trades; defaults to 4 basis points.
Returns:
nothing
End of explanation
<END_TASK> |
15,521 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Load necessary packages
Step1: Define functions for filtering, moving averages, and normalizing data
Step2: Read bandwidth and rain/temperature data and normalize them
Step3: Smoothing data (11-year running average)
Step4: Calculate the correlation and p-values while accounting for autocorrelation, along with the autocorrelation coefficients (coef)
Step5: Check the correlation results | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from scipy import interpolate
from scipy import special
from scipy.signal import butter, lfilter, filtfilt
import matplotlib.pyplot as plt
import numpy as np
from numpy import genfromtxt
from nitime import algorithms as alg
from nitime import utils
from scipy.stats import t
import pandas as pd
Explanation: Load necessary packages
End of explanation
def butter_lowpass(cutoff, fs, order=3):
nyq = 0.5 * fs
normal_cutoff = cutoff / nyq
b, a = butter(order, normal_cutoff, btype='low', analog=False)
return b, a
def filter(x, cutoff, axis, fs=1.0, order=3):
b, a = butter_lowpass(cutoff, fs, order=order)
y = filtfilt(b, a, x, axis=axis)
return y
def movingaverage(interval, window_size):
window = np.ones(int(window_size))/float(window_size)
return np.convolve(interval, window, 'valid')
def owncorr(x,y,n):
x_ano=np.ma.anomalies(x)
x_sd=np.sum(x_ano**2,axis=0)
y_ano=np.ma.anomalies(y)
y_sd=np.sum(y_ano**2,axis=0)
nomi = np.dot(x_ano,y_ano)
corr = nomi/np.sqrt(np.dot(x_sd[None],y_sd[None]))
# When using AR_est_YW, we should substract mean from
# time series first
x_coef, x_sigma = alg.AR_est_YW (x_ano, 1)
y_coef, y_sigma = alg.AR_est_YW (y_ano, 1)
if x_coef > 1:
eps = np.spacing(1.0)
x_coef = 1.0 - eps**(1/4)
elif x_coef < 0:
x_coef = 0.0
if y_coef > 1:
eps = np.spacing(1.0)
y_coef = 1.0 - eps**(1/4)
elif y_coef < 0:
y_coef = 0.0
neff = n*(1-x_coef*y_coef)/(1+x_coef*y_coef)
if neff <3:
neff = 3
coef = []
coef.append(x_coef)
coef.append(y_coef)
tval = corr/np.sqrt(1-corr**2)*np.sqrt(neff-2)
pval = t.sf(abs(tval),neff-2)*2
return corr,pval,coef
def gaussianize(X):
n = X.shape[0]
#p = X.shape[1]
Xn = np.empty((n,))
Xn[:] = np.NAN
nz = np.logical_not(np.isnan(X))
index = np.argsort(X[nz])
rank = np.argsort(index)
CDF = 1.*(rank+1)/(1.*n) -1./(2*n)
Xn[nz] = np.sqrt(2)*special.erfinv(2*CDF -1)
return Xn
Explanation: Define functions for filtering, moving averages, and normalizing data
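A tiny sanity check of the moving average helper (the 'valid' convolution trims the edges, so a window of 3 over 5 points returns 3 values):
print(movingaverage([1, 2, 3, 4, 5], 3))  # -> [ 2.  3.  4.]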
End of explanation
data = genfromtxt('data/scotland.csv', delimiter=',')
bandw = data[0:115,4] # band width (1879-1993), will be correlated with T/P
bandwl = data[3:129,4] # band width (1865-1990), will be correlated with winter NAO
bandwn = gaussianize(bandw) #normalized band width
bandwln = gaussianize(bandwl) #normalized band width
rain = genfromtxt('data/Assynt_P.txt') #precipitaiton
temp = genfromtxt('data/Assynt_T.txt') #temperature
wnao = genfromtxt('data/wnao.txt') #winter NAO
wnao = wnao[::-1]
rainn = gaussianize(rain)
tempn = gaussianize(temp)
#calculate the ratio of temperature over precipitation
ratio = temp/rain
ration = gaussianize(ratio)
Explanation: Read bandwidth and rain/temperature data and normalize them
End of explanation
bandw_fil = movingaverage(bandw, 11)
bandwn_fil = movingaverage(bandwn, 11)
bandwl_fil = movingaverage(bandwl, 11)
rain_fil = movingaverage(rain, 11)
rainn_fil = movingaverage(rainn, 11)
ratio_fil = movingaverage(ratio, 11)
wnao_fil = movingaverage(wnao, 11)
Explanation: Smoothing data (11-year running average)
End of explanation
corr_ratio,pval_ratio,coef = owncorr(bandw_fil,ratio_fil,115) #correlation between smoothed bandwidth and ratio
corr_nao,pval_nao,coef_nao = owncorr(bandwl_fil,wnao_fil,126) #correlation between smoothed bandwidth and winter NAO
corr_n,pval_n,coef_n = owncorr(bandwn,ration,115) #correlation between normalized bandwidth and ratio
corr_naon,pval_naon,coef_naon = owncorr(bandwln,wnao,126) #correlation between normalized bandwidth and winter NAO
Explanation: Calculate the correlation and p-values while accounting for autocorrelation, along with the autocorrelation coefficients (coef)
End of explanation
print(corr_ratio)
print(pval_ratio)
print(coef)
print(corr_nao)
print(pval_nao)
print(coef_nao)
print(corr_n)
print(pval_n)
print(coef_n)
print(corr_naon)
print(pval_naon)
print(coef_naon)
Explanation: Check the correlation results
End of explanation
<END_TASK> |
15,522 |
<SYSTEM_TASK:>
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
<END_TASK>
<USER_TASK:>
Problem:
how to draw a heatmap for a given dataframe
| <ASSISTANT_TASK:>
Python Code::
import seaborn as sns
sns.heatmap(df, cmap = 'magma', linecolor = 'white', linewidth = 1)
<END_TASK>
|
15,523 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2019 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
Step1: Image segmentation
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: Download the Oxford-IIIT Pets dataset
The dataset is available from TensorFlow Datasets. The segmentation masks are included in version 3 and above.
Step3: In addition, the image color values are normalized to the [0,1] range. Finally, as mentioned above, the pixels in the segmentation mask are labeled either {1, 2, 3}. For the sake of convenience, subtract 1 from the segmentation mask, resulting in labels that are {0, 1, 2}.
Step4: The dataset already contains the required training and test splits, so continue to use the same splits.
Step5: The following class performs a simple augmentation by randomly flipping an image. Go to the Image augmentation tutorial to learn more.
Step6: Build the input pipeline, applying the augmentation after batching the inputs.
Step7: Visualize an image example and its corresponding mask from the dataset.
Step8: Define the model
The model being used here is a modified U-Net. A U-Net consists of an encoder (downsampler) and decoder (upsampler). In order to learn robust features and reduce the number of trainable parameters, use a pretrained model, MobileNetV2, as the encoder. For the decoder, you will use the upsample block, which is already implemented in the pix2pix example in the TensorFlow Examples repo. (Check out the pix2pix
Step9: The decoder/upsampler is simply a series of upsample blocks implemented in TensorFlow examples.
Step10: Note that the number of filters on the last layer is set to the number of output_channels. This will be one output channel per class.
Train the model
Now, all that is left to do is to compile and train the model.
Since this is a multiclass classification problem, and the labels are scalar integers rather than vectors of per-class scores for each pixel, use the tf.keras.losses.SparseCategoricalCrossentropy loss function with the from_logits argument set to True.
When running inference, the label assigned to each pixel is the channel with the highest value. This is what the create_mask function is doing.
Step11: Take a quick look at the resulting model architecture.
Step12: Try out the model to check what it predicts before training.
Step13: The callback defined below is used to observe how the model improves while it is training.
Step14: Make predictions
Now, make some predictions. In the interest of saving time, keep the number of epochs small, but you may set it higher if more accurate results matter more than runtime.
Step15: Optional
Step16: So, in this case you need to implement the weighting yourself. You'll do this using sample weights: in addition to (data, label) pairs, Model.fit also accepts (data, label, sample_weight) triples.
Model.fit propagates the sample_weight to the losses and metrics, which also accept a sample_weight argument. The sample weight is multiplied by the sample's value before the reduction step. For example:
Step17: So, to make sample weights for this tutorial, you need a function that takes a (data, label) pair and returns a (data, label, sample_weight) triple, where sample_weight is a 1-channel image containing the class weight for each pixel.
To keep the implementation as simple as possible, use the label as an index into a class_weight list.
Step18: The resulting dataset elements contain 3 images each.
Step19: Now, you can train a model on this weighted dataset.
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
Licensed under the Apache License, Version 2.0 (the "License");
End of explanation
!pip install git+https://github.com/tensorflow/examples.git
import tensorflow as tf
import tensorflow_datasets as tfds
from tensorflow_examples.models.pix2pix import pix2pix
from IPython.display import clear_output
import matplotlib.pyplot as plt
Explanation: Image segmentation
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tutorials/images/segmentation"> <img src="https://www.tensorflow.org/images/tf_logo_32px.png"> View on TensorFlow.org</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/images/segmentation.ipynb"> <img src="https://www.tensorflow.org/images/colab_logo_32px.png"> Run in Google Colab</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/images/segmentation.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/images/segmentation.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">Download notebook</a></td>
</table>
This tutorial focuses on the task of image segmentation, using a modified <a href="https://lmb.informatik.uni-freiburg.de/people/ronneber/u-net/" class="external">U-Net</a>.
What is image segmentation?
In an image classification task, the network assigns a label (or class) to each input image. However, suppose you want to know the shape of that object, which pixel belongs to which object, and so on. In this case, you will want to assign a class to each pixel of the image. This task is known as segmentation. A segmentation model returns much more detailed information about the image. Image segmentation has many applications in medical imaging, self-driving cars, and satellite imaging, to name a few.
This tutorial uses the Oxford-IIIT Pet Dataset (Parkhi et al). The dataset consists of images of 37 pet breeds, with 200 images per breed (~100 each in the training and test splits). Each image includes the corresponding labels and pixel-wise masks. The masks are class labels for each pixel. Each pixel is given one of the following categories:
Class 1: Pixel belonging to the pet.
Class 2: Pixel bordering the pet.
Class 3: None of the above/a surrounding pixel.
End of explanation
dataset, info = tfds.load('oxford_iiit_pet:3.*.*', with_info=True)
Explanation: Download the Oxford-IIIT Pets dataset
The dataset is available from TensorFlow Datasets. The segmentation masks are included in version 3 and above.
End of explanation
def normalize(input_image, input_mask):
input_image = tf.cast(input_image, tf.float32) / 255.0
input_mask -= 1
return input_image, input_mask
def load_image(datapoint):
input_image = tf.image.resize(datapoint['image'], (128, 128))
input_mask = tf.image.resize(datapoint['segmentation_mask'], (128, 128))
input_image, input_mask = normalize(input_image, input_mask)
return input_image, input_mask
Explanation: In addition, the image color values are normalized to the [0,1] range. Finally, as mentioned above, the pixels in the segmentation mask are labeled either {1, 2, 3}. For the sake of convenience, subtract 1 from the segmentation mask, resulting in labels that are {0, 1, 2}.
End of explanation
TRAIN_LENGTH = info.splits['train'].num_examples
BATCH_SIZE = 64
BUFFER_SIZE = 1000
STEPS_PER_EPOCH = TRAIN_LENGTH // BATCH_SIZE
train_images = dataset['train'].map(load_image, num_parallel_calls=tf.data.AUTOTUNE)
test_images = dataset['test'].map(load_image, num_parallel_calls=tf.data.AUTOTUNE)
Explanation: The dataset already contains the required training and test splits, so continue to use the same splits.
End of explanation
class Augment(tf.keras.layers.Layer):
def __init__(self, seed=42):
super().__init__()
# both use the same seed, so they'll make the same random changes.
self.augment_inputs = tf.keras.layers.RandomFlip(mode="horizontal", seed=seed)
self.augment_labels = tf.keras.layers.RandomFlip(mode="horizontal", seed=seed)
def call(self, inputs, labels):
inputs = self.augment_inputs(inputs)
labels = self.augment_labels(labels)
return inputs, labels
Explanation: The following class performs a simple augmentation by randomly flipping an image. Go to the Image augmentation tutorial to learn more.
End of explanation
train_batches = (
train_images
.cache()
.shuffle(BUFFER_SIZE)
.batch(BATCH_SIZE)
.repeat()
.map(Augment())
.prefetch(buffer_size=tf.data.AUTOTUNE))
test_batches = test_images.batch(BATCH_SIZE)
Explanation: Build the input pipeline, applying the augmentation after batching the inputs.
End of explanation
def display(display_list):
plt.figure(figsize=(15, 15))
title = ['Input Image', 'True Mask', 'Predicted Mask']
for i in range(len(display_list)):
plt.subplot(1, len(display_list), i+1)
plt.title(title[i])
plt.imshow(tf.keras.utils.array_to_img(display_list[i]))
plt.axis('off')
plt.show()
for images, masks in train_batches.take(2):
sample_image, sample_mask = images[0], masks[0]
display([sample_image, sample_mask])
Explanation: Visualize an image example and its corresponding mask from the dataset.
End of explanation
base_model = tf.keras.applications.MobileNetV2(input_shape=[128, 128, 3], include_top=False)
# Use the activations of these layers
layer_names = [
'block_1_expand_relu', # 64x64
'block_3_expand_relu', # 32x32
'block_6_expand_relu', # 16x16
'block_13_expand_relu', # 8x8
'block_16_project', # 4x4
]
base_model_outputs = [base_model.get_layer(name).output for name in layer_names]
# Create the feature extraction model
down_stack = tf.keras.Model(inputs=base_model.input, outputs=base_model_outputs)
down_stack.trainable = False
Explanation: Define the model
The model being used here is a modified U-Net. A U-Net consists of an encoder (downsampler) and decoder (upsampler). In order to learn robust features and reduce the number of trainable parameters, use a pretrained model, MobileNetV2, as the encoder. For the decoder, you will use the upsample block, which is already implemented in the pix2pix example in the TensorFlow Examples repo. (Check out the pix2pix: Image-to-image translation with a conditional GAN tutorial in a notebook.)
As mentioned, the encoder is a pretrained MobileNetV2 model that is prepared and ready to use in tf.keras.applications. The encoder consists of specific outputs from intermediate layers in the model. Note that the encoder will not be trained during the training process.
End of explanation
up_stack = [
pix2pix.upsample(512, 3), # 4x4 -> 8x8
pix2pix.upsample(256, 3), # 8x8 -> 16x16
pix2pix.upsample(128, 3), # 16x16 -> 32x32
pix2pix.upsample(64, 3), # 32x32 -> 64x64
]
def unet_model(output_channels:int):
inputs = tf.keras.layers.Input(shape=[128, 128, 3])
# Downsampling through the model
skips = down_stack(inputs)
x = skips[-1]
skips = reversed(skips[:-1])
# Upsampling and establishing the skip connections
for up, skip in zip(up_stack, skips):
x = up(x)
concat = tf.keras.layers.Concatenate()
x = concat([x, skip])
# This is the last layer of the model
last = tf.keras.layers.Conv2DTranspose(
filters=output_channels, kernel_size=3, strides=2,
padding='same') #64x64 -> 128x128
x = last(x)
return tf.keras.Model(inputs=inputs, outputs=x)
Explanation: The decoder/upsampler is simply a series of upsample blocks implemented in the TensorFlow examples.
End of explanation
OUTPUT_CLASSES = 3
model = unet_model(output_channels=OUTPUT_CLASSES)
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
Explanation: The number of filters on the last layer is set to the number of output_channels. This will be one output channel per class.
Train the model
Now, all that is left to do is to compile and train the model.
Since this is a multiclass classification problem, and the labels are scalar integers rather than vectors of per-class scores for each pixel, use the tf.keras.losses.SparseCategoricalCrossentropy loss function with from_logits set to True.
When running inference, the label assigned to a pixel is the channel with the highest value. This is what the create_mask function does.
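A small toy illustration of that loss setup (values invented for the example): integer class ids and raw logits can be passed straight to the loss, with no one-hot encoding.
```python
# Toy example: two samples, three classes, labels given as integer class ids.
toy_labels = [0, 2]
toy_logits = [[2.0, 0.5, -1.0], [-1.0, 0.3, 2.5]]
tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)(toy_labels, toy_logits)
```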
End of explanation
tf.keras.utils.plot_model(model, show_shapes=True)
Explanation: Take a quick look at the resulting model architecture.
End of explanation
def create_mask(pred_mask):
pred_mask = tf.argmax(pred_mask, axis=-1)
pred_mask = pred_mask[..., tf.newaxis]
return pred_mask[0]
def show_predictions(dataset=None, num=1):
if dataset:
for image, mask in dataset.take(num):
pred_mask = model.predict(image)
display([image[0], mask[0], create_mask(pred_mask)])
else:
display([sample_image, sample_mask,
create_mask(model.predict(sample_image[tf.newaxis, ...]))])
show_predictions()
Explanation: Before training, try out the model to check what it predicts.
End of explanation
class DisplayCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
clear_output(wait=True)
show_predictions()
print ('\nSample Prediction after epoch {}\n'.format(epoch+1))
EPOCHS = 20
VAL_SUBSPLITS = 5
VALIDATION_STEPS = info.splits['test'].num_examples//BATCH_SIZE//VAL_SUBSPLITS
model_history = model.fit(train_batches, epochs=EPOCHS,
steps_per_epoch=STEPS_PER_EPOCH,
validation_steps=VALIDATION_STEPS,
validation_data=test_batches,
callbacks=[DisplayCallback()])
loss = model_history.history['loss']
val_loss = model_history.history['val_loss']
plt.figure()
plt.plot(model_history.epoch, loss, 'r', label='Training loss')
plt.plot(model_history.epoch, val_loss, 'bo', label='Validation loss')
plt.title('Training and Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss Value')
plt.ylim([0, 1])
plt.legend()
plt.show()
Explanation: The callback defined below is used to observe how the model improves while it is training.
End of explanation
show_predictions(test_batches, 3)
Explanation: Make predictions
Let's make a few predictions. If saving time is the priority, keep the number of epochs small; if more accurate results are the priority, set the number of epochs higher.
End of explanation
try:
model_history = model.fit(train_batches, epochs=EPOCHS,
steps_per_epoch=STEPS_PER_EPOCH,
class_weight = {0:2.0, 1:2.0, 2:1.0})
assert False
except Exception as e:
print(f"Expected {type(e).__name__}: {e}")
Explanation: Optional: Imbalanced classes and class weights
Semantic segmentation datasets can be highly imbalanced, meaning that pixels of a particular class can be present inside images in very different proportions than pixels of other classes. Since segmentation problems can be treated as per-pixel classification problems, you can deal with the imbalance by weighting the loss function to account for it. It's a simple and elegant way to handle this problem. Refer to the tutorial on classification with imbalanced data for details.
To avoid ambiguity, Model.fit does not support the class_weight argument for inputs with 3 or more dimensions.
End of explanation
label = [0,0]
prediction = [[-3., 0], [-3, 0]]
sample_weight = [1, 10]
loss = tf.losses.SparseCategoricalCrossentropy(from_logits=True,
reduction=tf.losses.Reduction.NONE)
loss(label, prediction, sample_weight).numpy()
Explanation: So, in this case you need to implement the weighting yourself. You'll do this using sample weights: in addition to (data, label) pairs, Model.fit also accepts (data, label, sample_weight) triples.
Model.fit propagates the sample_weight to the losses and metrics, which also accept a sample_weight argument. The sample weight is multiplied by the sample's value before the reduction step. For example:
End of explanation
def add_sample_weights(image, label):
# The weights for each class, with the constraint that:
# sum(class_weights) == 1.0
class_weights = tf.constant([2.0, 2.0, 1.0])
class_weights = class_weights/tf.reduce_sum(class_weights)
# Create an image of `sample_weights` by using the label at each pixel as an
  # index into the `class_weights` list.
sample_weights = tf.gather(class_weights, indices=tf.cast(label, tf.int32))
return image, label, sample_weights
Explanation: So, to make sample weights for this tutorial, you need a function that takes a (data, label) pair and returns a (data, label, sample_weight) triple, where sample_weight is a 1-channel image containing the class weight for each pixel.
To keep the implementation as simple as possible, use the label as an index into the class_weight list.
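As a tiny illustration of the indexing trick used above (toy values): `tf.gather` looks up one weight per integer label.
```python
# Illustrative: each integer label selects its class weight.
tf.gather(tf.constant([2.0, 2.0, 1.0]), indices=[0, 2, 1])  # -> [2.0, 1.0, 2.0]
```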
End of explanation
train_batches.map(add_sample_weights).element_spec
Explanation: As a result, each element of the dataset now contains 3 images.
End of explanation
weighted_model = unet_model(OUTPUT_CLASSES)
weighted_model.compile(
optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
weighted_model.fit(
train_batches.map(add_sample_weights),
epochs=1,
steps_per_epoch=10)
Explanation: Now you can train a model on this weighted dataset.
End of explanation
<END_TASK> |
15,524 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Combined Cycle Power Plant Regression
This is an analysis of the <a href="https
Step1: Attribute Information
Features consist of hourly average ambient variables
- Temperature (T) in the range 1.81°C and 37.11°C,
- Ambient Pressure (AP) in the range 992.89-1033.30 milibar,
- Relative Humidity (RH) in the range 25.56% to 100.16%
- Exhaust Vacuum (V) in the range 25.36-81.56 cm Hg
- Net hourly electrical energy output (EP) 420.26-495.76 MW
The averages are taken from various sensors located around the plant that record the ambient variables every second. The variables are given without normalization.
Univariate Analysis
Step2: Correlation With the Target column
Step3: Feature Correlations
Step4: Bivariate Analysis
Step5: OLS Regression
Step6: Cross-Validation | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
%pylab inline
pylab.style.use('ggplot')
import seaborn as sns
pp_data = pd.read_csv('ccpp.csv')
pp_data.head()
Explanation: Combined Cycle Power Plant Regression
This is an analysis of the <a href="https://archive.ics.uci.edu/ml/datasets/Combined+Cycle+Power+Plant">Combined Cycle Power Plant</a> dataset in the UCI archive.
End of explanation
for c in pp_data.columns:
_ = pylab.figure()
pp_data.loc[:, c].plot(kind='hist')
Explanation: Attribute Information
Features consist of hourly average ambient variables
- Temperature (T) in the range 1.81°C and 37.11°C,
- Ambient Pressure (AP) in the range 992.89-1033.30 milibar,
- Relative Humidity (RH) in the range 25.56% to 100.16%
- Exhaust Vacuum (V) in the range 25.36-81.56 cm Hg
- Net hourly electrical energy output (EP) 420.26-495.76 MW
The averages are taken from various sensors located around the plant that record the ambient variables every second. The variables are given without normalization.
Univariate Analysis
End of explanation
feature_data = pp_data.drop('AT', axis=1)
corrs = feature_data.corrwith(pp_data.loc[:, 'AT'])
corrs.sort_values(ascending=False).plot(kind='barh')
Explanation: Correlation With the Target column
End of explanation
f_corrs = feature_data.corr()
sns.heatmap(f_corrs, annot=True)
Explanation: Feature Correlations
End of explanation
fig, axes = pylab.subplots(1, 4, figsize=(16, 8))
for i, c in enumerate(feature_data.columns):
sns.regplot(x=c, y='AT', data=pp_data, ax=axes[i])
from sklearn.feature_selection import f_regression
f_scores, f_probs = f_regression(feature_data, pp_data.loc[:, 'AT'])
f_imp_df = pd.DataFrame({'scores': f_scores, 'probs': f_probs}, index=feature_data.columns)
f_imp_df.plot(kind='barh', subplots=True)
Explanation: Bivariate Analysis
End of explanation
import statsmodels.formula.api as sm
model = sm.ols(formula='AT ~ PE + RH + V', data=pp_data)
result = model.fit()
result.summary()
Explanation: OLS Regression
End of explanation
from sklearn.model_selection import KFold
from sklearn.metrics import r2_score
n_splits = 10
fold = KFold(n_splits=n_splits, shuffle=True)
scores = []
for train_idx, test_idx in fold.split(pp_data):
model = sm.ols(formula='AT ~ PE + RH + V', data=pp_data.loc[train_idx])
result = model.fit()
test_features = pp_data.loc[test_idx].drop('AT', axis=1)
predictions = result.predict(test_features)
actual = pp_data.loc[test_idx, 'AT']
score = r2_score(actual, predictions)
scores.append(score)
scores = pd.Series(scores)
scores.plot(kind='bar')
Explanation: Cross-Validation
End of explanation
<END_TASK> |
15,525 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
KNN
Motivation
The principle behind nearest neighbor methods is to find a predefined number of training samples closest in distance to the new point, and predict the label from these. The number of samples can be a user-defined constant (k-nearest neighbor learning), or vary based on the local density of points (radius-based neighbor learning). The distance can, in general, be any metric measure
Step1: Remove Columns
Step2: Which are the factors?
Step3: Pre-Processing | <ASSISTANT_TASK:>
Python Code:
import pandas
import numpy
import csv
#from scipy.stats import mode
from sklearn import neighbors
from sklearn.neighbors import DistanceMetric
from pprint import pprint
MY_TITANIC_TRAIN = 'train.csv'
MY_TITANIC_TEST = 'test.csv'
titanic_dataframe = pandas.read_csv(MY_TITANIC_TRAIN, header=0)
print('length: {0} '.format(len(titanic_dataframe)))
titanic_dataframe.head(5)
Explanation: KNN
Motivation
The principle behind nearest neighbor methods is to find a predefined number of training samples closest in distance to the new point, and predict the label from these. The number of samples can be a user-defined constant (k-nearest neighbor learning), or vary based on the local density of points (radius-based neighbor learning). The distance can, in general, be any metric measure: standard Euclidean distance is the most common choice. Neighbors-based methods are known as non-generalizing machine learning methods, since they simply “remember” all of its training data
~scikit-learn
It's a beautiful day in this neighborhood,
A beautiful day for a neighbor.
Would you be mine?
Could you be mine?
~ Mr. Rogers
Readings:
* openCV: http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_ml/py_knn/py_knn_understanding/py_knn_understanding.html
* dataquest: https://www.dataquest.io/blog/k-nearest-neighbors/
* k-d tree: https://ashokharnal.wordpress.com/2015/01/20/a-working-example-of-k-d-tree-formation-and-k-nearest-neighbor-algorithms/
* euclidean: http://machinelearningmastery.com/tutorial-to-implement-k-nearest-neighbors-in-python-from-scratch/
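As a minimal illustration of the principle described above (a brute-force Euclidean vote, for intuition only; the notebook itself relies on scikit-learn's `KNeighborsClassifier` below, and the names `X_train`, `y_train`, `x_new` are placeholders):
```python
# Illustrative brute-force k-NN; X_train is a 2-D numpy array, y_train a list of labels.
def knn_predict(X_train, y_train, x_new, k=5):
    dists = numpy.sqrt(((X_train - x_new) ** 2).sum(axis=1))  # Euclidean distances
    nearest = numpy.argsort(dists)[:k]                        # k closest training rows
    votes = [y_train[i] for i in nearest]
    return max(set(votes), key=votes.count)                   # majority vote
```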
Data
End of explanation
titanic_dataframe.drop(['Name', 'Ticket', 'Cabin'], axis=1, inplace=True)
print('dropped')
titanic_dataframe.describe()
Explanation: Remove Columns
End of explanation
titanic_dataframe.info()
Explanation: Which are the factors?
End of explanation
# age_mean = numpy.mean(titanic_dataframe['Age'])
titanic_dataframe['Age'].fillna(numpy.mean(titanic_dataframe['Age']),inplace=True)
# titanic_dataframe.fillna(value=age_mean, axis=0)
titanic_dataframe.info()
titanic_dataframe.info()
# titanic_dataframe = titanic_dataframe.dropna()
titanic_dataframe['Embarked'].fillna(titanic_dataframe['Embarked'].mode().item(),inplace=True)
titanic_dataframe['Port'] = titanic_dataframe['Embarked'].map({'C':1, 'S':2, 'Q':3}).astype(int)
titanic_dataframe['Gender'] = titanic_dataframe['Sex'].map({'female': 0, 'male': 1}).astype(int)
titanic_dataframe = titanic_dataframe.drop(['Sex', 'Embarked', 'PassengerId', ], axis=1)
titanic_dataframe.info()
#Convert Columns to List
cols = titanic_dataframe.columns.tolist()
titanic_dataframe = titanic_dataframe[cols]
train_cols = [x for x in cols if x != 'Survived']
target_cols = [cols[0]]
print(train_cols, target_cols)
train_data = titanic_dataframe[train_cols]
target_data = titanic_dataframe[target_cols]
algorithm_data_model = neighbors.KNeighborsClassifier()
algorithm_data_model.fit(train_data.values, [value[0] for value in target_data.values])
df_test = pandas.read_csv('test.csv')
ids = df_test.PassengerId.values
df_test.drop(['Name', 'Ticket', 'Cabin', 'PassengerId'], axis=1, inplace=True)
print(len(df_test))
df_test.info()
mean_age = df_test.Age.mean()
df_test.Age.fillna(mean_age, inplace=True)
mean_fare = df_test.Fare.mean()
df_test.Fare.fillna(mean_fare, inplace=True)
# Create Port before Gender so the test feature columns line up with the
# training column order the classifier was fit on (fit used raw .values).
df_test['Port'] = df_test['Embarked'].map({'C':1, 'S':2, 'Q':3}).astype(int)
df_test['Gender'] = df_test['Sex'].map({'female': 0, 'male': 1}).astype(int)
df_test = df_test.drop(['Sex', 'Embarked'], axis=1)
test_data = df_test.values
df_test.info()
titanic_dataframe.info()
output = algorithm_data_model.predict(df_test).astype(int)
print(output[:10])
result = numpy.c_[ids.astype(int), output]
print(result)
prediction_file = open('ourpredictions.csv', 'w')
open_file = csv.writer(prediction_file)
open_file.writerow(['PassengerId', 'Survived'])
open_file.writerows(zip(ids, output))
prediction_file.close()
%timeit algorithm_data_model.predict(df_test).astype(int)
Explanation: Pre-Processing
End of explanation
<END_TASK> |
15,526 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pairwise LD for functional SNPs
This notebook computes and plots pairwise LD between interesting functional SNPs.
Setup
Step2: Compute and plot allele frequencies
Step3: Compute and plot pairwise LD | <ASSISTANT_TASK:>
Python Code:
%run setup.ipynb
%matplotlib inline
# load in selected missense variants
tbl_variants_selected = etl.frompickle('../data/tbl_variants_missense_selected.pkl')
tbl_variants_selected.nrows()
# load in haplotypes
callset_haps = np.load('../data/haps_phase1.npz')
haps = allel.HaplotypeArray(callset_haps['haplotypes'])
pos = allel.SortedIndex(callset_haps['POS'])
pos.shape, haps.shape
def lewontin_d_prime(h, i, j, a=1, b=1):
    """Compute LD between a pair of alleles.
Parameters
----------
h : array
Haplotype array.
i : int
First variant index.
j : int
Second variant index.
a : int
First variant allele.
b : int
Second variant allele.
Returns
-------
ld : float
    """
    # setup
h = allel.HaplotypeArray(h)
n_a = n_b = 0 # allele counts
n_ab = 0 # haplotype counts
n = 0 # allele number (i.e., number of calls)
# iterate over haplotypes, counting alleles and haplotypes
for k in range(h.n_haplotypes):
# access alleles
allele_ik = h[i, k]
allele_jk = h[j, k]
# only count if allele non-missing at both sites
if allele_ik < 0 or allele_jk < 0:
continue
# accumulate
if allele_ik == a:
n_a += 1
if allele_jk == b:
n_b += 1
if allele_ik == a and allele_jk == b:
n_ab += 1
n += 1
# log('D_prime counts:', 'i', i, 'j', j, 'a', a, 'b', b, 'n', n, 'n_a', n_a, 'n_b', n_b)
# bail out if no data or either allele is absent or fixed
if n == 0 or n_a == 0 or n_b == 0 or n == n_a or n == n_b:
return None
# N.B., compute D prime using counts rather than frequencies to avoid floating-point errors
# N.B., preserve the sign of D prime to retain information about linkage versus repulsion
# compute coefficient of linkage disequilibrium * n**2
D_ab = (n * n_ab) - (n_a * n_b)
# compute normalisation coefficient * n**2
if D_ab >= 0:
D_max = min(n_a * (n - n_b), (n - n_a) * n_b)
else:
D_max = min(n_a * n_b, (n - n_a) * (n - n_b))
# compute D prime
D_prime = D_ab / D_max
# log('D_prime', D_prime, i, j, a, b, n, n_a, n_b, D_ab, D_max)
# if np.isnan(D_prime):
# log('nan')
# log(D_prime, i, j, a, b, n, n_a, n_b, D_ab, D_max)
return D_prime
pos_selected = allel.SortedIndex(sorted(tbl_variants_selected.values('POS').set()))
pos_selected
tbl_variants_selected
pos_selected.shape
loc_selected = pos.locate_keys(pos_selected)
np.count_nonzero(loc_selected)
haps_selected = haps[loc_selected]
haps_selected
ac = haps_selected.count_alleles()
ac.displayall()
Explanation: Pairwise LD for functional SNPs
This notebook computes and plots pairwise LD between interesting functional SNPs.
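For reference, the statistic computed by `lewontin_d_prime` above is Lewontin's $D'$. Written in allele/haplotype frequency terms (the code uses the equivalent count form to avoid floating-point error, and keeps the sign of $D'$):
$$D = p_{AB} - p_A p_B, \qquad D' = \frac{D}{D_{\max}}, \qquad D_{\max} = \begin{cases} \min\left(p_A(1-p_B),\; (1-p_A)p_B\right) & D \ge 0 \\ \min\left(p_A p_B,\; (1-p_A)(1-p_B)\right) & D < 0 \end{cases}$$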
Setup
End of explanation
def compute_allele_af(ax=None):
global allele_af
recs = list(tbl_variants_selected.records())
n = len(recs)
allele_af = np.zeros(n, dtype='f8')
for i in range(n):
i_pos = recs[i].POS
i_allele = recs[i].ALTIX + 1
i_vidx = pos_selected.locate_key(i_pos)
# log('row', i, i_vidx, i_pos, i_allele)
x = ac[i_vidx, i_allele] * 100 / haps_selected.shape[1]
allele_af[i] = x
compute_allele_af()
Explanation: Compute and plot allele frequencies
End of explanation
def compute_ld():
global ld
recs = list(tbl_variants_selected.records())
n = len(recs)
ld = np.zeros((n, n), dtype='f8')
for i in range(n):
i_pos = recs[i].POS
i_allele = recs[i].ALTIX + 1
i_vidx = pos_selected.locate_key(i_pos)
# log('row', i, i_vidx, i_pos, i_allele)
for j in range(i+1, n):
j_pos = recs[j].POS
j_allele = recs[j].ALTIX + 1
j_vidx = pos_selected.locate_key(j_pos)
# log('col', j, j_vidx, j_pos, j_allele)
v = lewontin_d_prime(haps_selected, i_vidx, j_vidx, i_allele, j_allele)
# log('D_prime', v)
ld[i, j] = v
ld[j, i] = v
compute_ld()
ld[11]
def plot_allele_af(ax=None, **kwargs):
n = len(allele_af)
if ax is None:
fig, ax = plt.subplots(figsize=(7, 2))
left = np.arange(n) + 0.2
ax.bar(left, allele_af, align='edge', width=0.6, **kwargs)
ax.set_ylabel('Allele frequency (%)')
ax.set_xlim(0, n)
ax.set_xticks([])
ax.set_yticks(range(0, 60, 10))
ax.set_xticklabels([])
plot_allele_af()
def fig_pw_ld():
fig = plt.figure(figsize=(7, 7.3), dpi=120)
gs = mpl.gridspec.GridSpec(2, 2, height_ratios=[1.3, 6], width_ratios=[7, .5])
# sns.despine(ax=ax, offset=5)
#sns.heatmap(ld, vmin=-1, vmax=1, center=0, square=True, ax=ax, cmap='Blues', cbar_kws=dict(ticks=[-1, -.5, 0, .5, 1]))
ax = fig.add_subplot(gs[0, 0])
sns.despine(ax=ax)
plot_allele_af(ax, color='k')
ax = fig.add_subplot(gs[1, 0])
im = ax.pcolormesh(ld, vmin=-1, vmax=1, cmap='Blues', shading='flat', edgecolors='gray', linewidths=.5, antialiased=True)
labels = ['%s:%s>%s %s' % (rec.POS, rec.REF, rec.ALT, rec['AGAP004707-RA'].rjust(6))
for rec in tbl_variants_selected.records()]
# ax.invert_yaxis()
ticks = np.arange(ld.shape[0]) + .5
ax.set_xticks(ticks)
ax.set_yticks(ticks)
ax.set_xticklabels(labels, rotation=90, ha='center', va='top', fontdict=dict(family='monospace'))
ax.set_yticklabels(labels, rotation=0, va='center', ha='right', fontdict=dict(family='monospace'));
ax.set_xlim(0, ld.shape[0])
ax.set_ylim(0, ld.shape[0])
ax.xaxis.set_tick_params(length=0)
ax.yaxis.set_tick_params(length=0)
for i in range(ld.shape[0] + 1):
ax.add_patch(plt.Rectangle((i-1, i-1), 1, 1, color='gray'))
cax = fig.add_subplot(gs[1, 1])
fig.colorbar(im, cax=cax, )
# cax.set_title("Linkage disequilibrium (D')", loc='left')
cax.set_ylabel("Linkage disequilibrium (D')", va='top')
fig.tight_layout(pad=0.1)
fig.savefig('../artwork/fig_ld.png', dpi=300, bbox_inches='tight')
fig_pw_ld()
Explanation: Compute and plot pairwise LD
End of explanation
<END_TASK> |
15,527 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Exercises Electric Machinery Fundamentals
Chapter 4
Problem 4-10
Step1: Description
Three physically identical synchronous generators are operating in parallel. They are all rated for a full load of 100 MW at $PF=0.8$ lagging. The no-load frequency of generator A is 61 Hz, and its speed droop is 3 percent. The no-load frequency of generator B is 61.5 Hz, and its speed droop is 3.4 percent. The no-load frequency of generator C is 60.5 Hz, and its speed droop is 2.6 percent.
Step2: (a)
If a total load consisting of 230 MW is being supplied by this power system
Step3: and the slopes of the power-frequency curves are
Step4: The total load is 230 MW, so the system frequency can be obtained from the load power as follows
Step5: The power supplied by each generator will be
Step6: (b)
Generate a vector of different load power values
Step7: Calculate the system frequency as function of $P_\text{load}$ using
$$f_\text{sys} = \frac{s_\text{PA} f_\text{nl,A} + s_\text{PB} f_{nl,B} + s_\text{PC}f_\text{nl,C} - P_\text{load}}{s_\text{PA} + s_\text{PB} + s_\text{PC}}$$
from part (a)
Step8: Calculate the power of each generator
Step9: Plot the power sharing versus load
Step10: This plot reveals that there are power sharing problems both for high loads and for low loads. Generator B is the first to exceed its ratings as load increases. Its rated power is reached at a total load of
Step11: MW.
On the other hand, Generator C gets into trouble as the total load is reduced. When the total load drops below | <ASSISTANT_TASK:>
Python Code:
%pylab notebook
%precision 2
Explanation: Exercises Electric Machinery Fundamentals
Chapter 4
Problem 4-10
End of explanation
Pn = 100e6 # [W]
PF = 0.8
f_nl_A = 61.0 # [Hz]
SD_A = 3 # [%]
f_nl_B = 61.5 # [Hz]
SD_B = 3.4 # [%]
f_nl_C = 60.5 # [Hz]
SD_C = 2.6 # [%]
Explanation: Description
Three physically identical synchronous generators are operating in parallel. They are all rated for a full load of 100 MW at $PF=0.8$ lagging. The no-load frequency of generator A is 61 Hz, and its speed droop is 3 percent. The no-load frequency of generator B is 61.5 Hz, and its speed droop is 3.4 percent. The no-load frequency of generator C is 60.5 Hz, and its speed droop is 2.6 percent.
End of explanation
f_fl_A = f_nl_A / (SD_A / 100.0 +1)
f_fl_B = f_nl_B / (SD_B / 100.0 +1)
f_fl_C = f_nl_C / (SD_C / 100.0 +1)
print ('f_fl_A = {:.3f} Hz'.format(f_fl_A))
print ('f_fl_B = {:.3f} Hz'.format(f_fl_B))
print ('f_fl_C = {:.3f} Hz'.format(f_fl_C))
Explanation: (a)
If a total load consisting of 230 MW is being supplied by this power system:
What will the system frequency be and how will the power be shared among the three generators?
(b)
Create a plot showing the power supplied by each generator as a function of the total power supplied to all loads (you may use Python to create this plot).
At what load does one of the generators exceed its ratings?
Which generator exceeds its ratings first?
(c)
Is this power sharing in (a) acceptable? Why or why not?
(d)
What actions could an operator take to improve the real power sharing among these generators?
SOLUTION
(a)
Speed droop is defined as:
$$SD = \frac{n_\text{nl}-n_\text{fl}}{n_\text{fl}} \cdot 100\% = \frac{f_\text{nl} - f_\text{fl}}{f_\text{fl}} \cdot 100\%$$
so,
$$f_n = \frac{f_\text{nl}}{\frac{SD}{100} + 1}$$
Thus, the full-load frequencies of generators A, B and C are:
End of explanation
sp_A = Pn / (f_nl_A - f_fl_A)
sp_B = Pn / (f_nl_B - f_fl_B)
sp_C = Pn / (f_nl_C - f_fl_C)
print('''
sp_A = {:.2f} MW/Hz
sp_B = {:.2f} MW/Hz
sp_C = {:.2f} MW/Hz
'''.format(sp_A/1e6, sp_B/1e6, sp_C/1e6))
Explanation: and the slopes of the power-frequency curves are:
$$s_P = \frac{P}{f_\text{nl} - f_\text{fl}}$$
End of explanation
Pload = 230e6 # [W]
f_sys = (sp_A*f_nl_A + sp_B*f_nl_B + sp_C*f_nl_C - Pload) / (sp_A + sp_B + sp_C)
print('''
f_sys = {:.2f} Hz
================'''.format(f_sys))
Explanation: The total load is 230 MW, so the system frequency can be obtained from the load power as follows:
$$P_\text{load} = s_\text{PA}(f_\text{nl,A} - f_\text{sys}) + s_\text{PB}(f_{nl,B} - f_\text{sys}) + s_\text{PC}(f_\text{nl,C} - f_\text{sys})$$
$$\leadsto f_\text{sys} = \frac{s_\text{PA} f_\text{nl,A} + s_\text{PB} f_{nl,B} + s_\text{PC}f_\text{nl,C} - P_\text{load}}{s_\text{PA} + s_\text{PB} + s_\text{PC}}$$
End of explanation
Pa = sp_A * (f_nl_A - f_sys)
Pb = sp_B * (f_nl_B - f_sys)
Pc = sp_C * (f_nl_C - f_sys)
print('''
Pa = {:.1f} MW
Pb = {:.1f} MW
Pc = {:.1f} MW
============'''.format(Pa/1e6, Pb/1e6, Pc/1e6))
Explanation: The power supplied by each generator will be:
$$P = s_{P_x} \cdot (f_{\text{nl}x} - f{\text{sys}_x})$$
End of explanation
Pload_plot = arange(0,300.1,5) * 1e6 # [W]
Explanation: (b)
Generate a vector of different load power values
End of explanation
f_sys = (sp_A*f_nl_A + sp_B*f_nl_B + sp_C*f_nl_C - Pload_plot) / (sp_A + sp_B + sp_C)
Explanation: Calculate the system frequency as a function of $P_\text{load}$ using
$$f_\text{sys} = \frac{s_\text{PA} f_\text{nl,A} + s_\text{PB} f_{nl,B} + s_\text{PC}f_\text{nl,C} - P_\text{load}}{s_\text{PA} + s_\text{PB} + s_\text{PC}}$$
from part (a):
End of explanation
PA = sp_A * (f_nl_A - f_sys)
PB = sp_B * (f_nl_B - f_sys)
PC = sp_C * (f_nl_C - f_sys)
Explanation: Calculate the power of each generator
End of explanation
title('Power Sharing Versus Total Load')
xlabel('Total Load [MW]')
ylabel('Generator Power [MW]')
plot(Pload_plot/1e6, PA/1e6, 'g--', linewidth = 2)
plot(Pload_plot/1e6, PB/1e6, 'b', linewidth = 2 )
plot(Pload_plot/1e6, PC/1e6, 'm.', linewidth = 2)
plot([0, 300], [Pn/1e6, Pn/1e6], 'r', linewidth = 2)
plot([0, 300], [0, 0], 'r:', linewidth = 2)
legend(('Generator A','Generator B','Generator C','upper power limit', 'lower power limit'), loc=4, framealpha=1);
grid()
Explanation: Plot the power sharing versus load:
End of explanation
interp(Pn, PB, Pload_plot)/1e6 # using the interpolate function to determine
# the exact crossover of PB and Pn @Pload
Explanation: This plot reveals that there are power sharing problems both for high loads and for low loads. Generator B is the first to exceed its ratings as load increases. Its rated power is reached at a total load of
End of explanation
interp(0, PC, Pload_plot)/1e6 # using the interpolate function to determine
# the exact crossover of PC and 0 @Pload
Explanation: MW.
On the other hand, Generator C gets into trouble as the total load is reduced. When the total load drops below
End of explanation
<END_TASK> |
15,528 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Functional Programming in Python
Interest in Functional Programming is currently growing as more developers become aware of the benefits it offers, particularly with respect to concurrency and scale.
For some background on Functional Programming see the paper 'Why Functional Programming Matters
Step1: Filter
filter takes in a sequence and returns a sequence containing only those elements of the sequence for which a function, when applied to the element, returns true.
filter(function, sequence)
For example
Step2: Reduce
reduce performs computations on a sequence and returns a single result.
reduce(function, sequence)
For example | <ASSISTANT_TASK:>
Python Code:
sequence = [1, 2, 3, 4, 5]
def square(x):
return x**2
result = list(map(square, sequence))
print(result)
Explanation: Functional Programming in Python
Interest in Functional Programming is currently growing as more developers become aware of the benefits it offers, particularly with respect to concurrency and scale.
For some background on Functional Programming see the paper 'Why Functional Programming Matters: http://www.cse.chalmers.se/~rjmh/Papers/whyfp.pdf
There are several aspects of Python that support a functional programming style.
For example: map, reduce and filter.
Map
map applies a function to all the items in an input sequence.
python
map(function, sequence)
For example:
```python
sequence = [1, 2, 3, 4, 5]
def square(x):
return x**2
result = list(map(square, sequence))
print(result)
```
Run the code below to see the output.
End of explanation
sequence = range(-10, 10)
greater_than_zero = list(filter(lambda x: x > 0, sequence))
print(greater_than_zero)
Explanation: Filter
filter takes in a sequence and returns a sequence containing only those elements of the sequence for which a function, when applied to the element, returns true.
filter(function, sequence)
For example:
python
sequence = range(-10, 10)
greater_than_zero = list(filter(lambda x: x > 0, sequence))
print(greater_than_zero)
End of explanation
from functools import reduce
product = reduce((lambda x, y: x * y), [1, 2, 3, 4])
print(product)
Explanation: Reduce
reduce performs computations on a sequence and returns a single result.
reduce(function, sequence)
For example:
python
from functools import reduce
product = reduce((lambda x, y: x * y), [1, 2, 3, 4])
print(product)
At each step the function passed to reduce is passed two arguments, the current reduced value and the next value from the sequence. By default, the first item in the sequence is used as the initial value of the reduction.
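A quick sketch of the same folding with an explicit initial value (the optional third argument to reduce):
```python
from functools import reduce

# Seeding the reduction with 10: ((((10 * 1) * 2) * 3) * 4) == 240.
print(reduce(lambda acc, x: acc * x, [1, 2, 3, 4], 10))
```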
End of explanation
<END_TASK> |
15,529 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
ES-DOC CMIP6 Model Properties - Land
MIP Era
Step1: Document Authors
Set document authors
Step2: Document Contributors
Specify document contributors
Step3: Document Publication
Specify document publication status
Step4: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required
Step5: 1.2. Model Name
Is Required
Step6: 1.3. Description
Is Required
Step7: 1.4. Land Atmosphere Flux Exchanges
Is Required
Step8: 1.5. Atmospheric Coupling Treatment
Is Required
Step9: 1.6. Land Cover
Is Required
Step10: 1.7. Land Cover Change
Is Required
Step11: 1.8. Tiling
Is Required
Step12: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required
Step13: 2.2. Water
Is Required
Step14: 2.3. Carbon
Is Required
Step15: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required
Step16: 3.2. Time Step
Is Required
Step17: 3.3. Timestepping Method
Is Required
Step18: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required
Step19: 4.2. Code Version
Is Required
Step20: 4.3. Code Languages
Is Required
Step21: 5. Grid
Land surface grid
5.1. Overview
Is Required
Step22: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required
Step23: 6.2. Matches Atmosphere Grid
Is Required
Step24: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required
Step25: 7.2. Total Depth
Is Required
Step26: 8. Soil
Land surface soil
8.1. Overview
Is Required
Step27: 8.2. Heat Water Coupling
Is Required
Step28: 8.3. Number Of Soil layers
Is Required
Step29: 8.4. Prognostic Variables
Is Required
Step30: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required
Step31: 9.2. Structure
Is Required
Step32: 9.3. Texture
Is Required
Step33: 9.4. Organic Matter
Is Required
Step34: 9.5. Albedo
Is Required
Step35: 9.6. Water Table
Is Required
Step36: 9.7. Continuously Varying Soil Depth
Is Required
Step37: 9.8. Soil Depth
Is Required
Step38: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required
Step39: 10.2. Functions
Is Required
Step40: 10.3. Direct Diffuse
Is Required
Step41: 10.4. Number Of Wavelength Bands
Is Required
Step42: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required
Step43: 11.2. Time Step
Is Required
Step44: 11.3. Tiling
Is Required
Step45: 11.4. Vertical Discretisation
Is Required
Step46: 11.5. Number Of Ground Water Layers
Is Required
Step47: 11.6. Lateral Connectivity
Is Required
Step48: 11.7. Method
Is Required
Step49: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required
Step50: 12.2. Ice Storage Method
Is Required
Step51: 12.3. Permafrost
Is Required
Step52: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required
Step53: 13.2. Types
Is Required
Step54: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required
Step55: 14.2. Time Step
Is Required
Step56: 14.3. Tiling
Is Required
Step57: 14.4. Vertical Discretisation
Is Required
Step58: 14.5. Heat Storage
Is Required
Step59: 14.6. Processes
Is Required
Step60: 15. Snow
Land surface snow
15.1. Overview
Is Required
Step61: 15.2. Tiling
Is Required
Step62: 15.3. Number Of Snow Layers
Is Required
Step63: 15.4. Density
Is Required
Step64: 15.5. Water Equivalent
Is Required
Step65: 15.6. Heat Content
Is Required
Step66: 15.7. Temperature
Is Required
Step67: 15.8. Liquid Water Content
Is Required
Step68: 15.9. Snow Cover Fractions
Is Required
Step69: 15.10. Processes
Is Required
Step70: 15.11. Prognostic Variables
Is Required
Step71: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required
Step72: 16.2. Functions
Is Required
Step73: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required
Step74: 17.2. Time Step
Is Required
Step75: 17.3. Dynamic Vegetation
Is Required
Step76: 17.4. Tiling
Is Required
Step77: 17.5. Vegetation Representation
Is Required
Step78: 17.6. Vegetation Types
Is Required
Step79: 17.7. Biome Types
Is Required
Step80: 17.8. Vegetation Time Variation
Is Required
Step81: 17.9. Vegetation Map
Is Required
Step82: 17.10. Interception
Is Required
Step83: 17.11. Phenology
Is Required
Step84: 17.12. Phenology Description
Is Required
Step85: 17.13. Leaf Area Index
Is Required
Step86: 17.14. Leaf Area Index Description
Is Required
Step87: 17.15. Biomass
Is Required
Step88: 17.16. Biomass Description
Is Required
Step89: 17.17. Biogeography
Is Required
Step90: 17.18. Biogeography Description
Is Required
Step91: 17.19. Stomatal Resistance
Is Required
Step92: 17.20. Stomatal Resistance Description
Is Required
Step93: 17.21. Prognostic Variables
Is Required
Step94: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required
Step95: 18.2. Tiling
Is Required
Step96: 18.3. Number Of Surface Temperatures
Is Required
Step97: 18.4. Evaporation
Is Required
Step98: 18.5. Processes
Is Required
Step99: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required
Step100: 19.2. Tiling
Is Required
Step101: 19.3. Time Step
Is Required
Step102: 19.4. Anthropogenic Carbon
Is Required
Step103: 19.5. Prognostic Variables
Is Required
Step104: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required
Step105: 20.2. Carbon Pools
Is Required
Step106: 20.3. Forest Stand Dynamics
Is Required
Step107: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required
Step108: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required
Step109: 22.2. Growth Respiration
Is Required
Step110: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required
Step111: 23.2. Allocation Bins
Is Required
Step112: 23.3. Allocation Fractions
Is Required
Step113: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required
Step114: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required
Step115: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required
Step116: 26.2. Carbon Pools
Is Required
Step117: 26.3. Decomposition
Is Required
Step118: 26.4. Method
Is Required
Step119: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required
Step120: 27.2. Carbon Pools
Is Required
Step121: 27.3. Decomposition
Is Required
Step122: 27.4. Method
Is Required
Step123: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required
Step124: 28.2. Emitted Greenhouse Gases
Is Required
Step125: 28.3. Decomposition
Is Required
Step126: 28.4. Impact On Soil Properties
Is Required
Step127: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required
Step128: 29.2. Tiling
Is Required
Step129: 29.3. Time Step
Is Required
Step130: 29.4. Prognostic Variables
Is Required
Step131: 30. River Routing
Land surface river routing
30.1. Overview
Is Required
Step132: 30.2. Tiling
Is Required
Step133: 30.3. Time Step
Is Required
Step134: 30.4. Grid Inherited From Land Surface
Is Required
Step135: 30.5. Grid Description
Is Required
Step136: 30.6. Number Of Reservoirs
Is Required
Step137: 30.7. Water Re Evaporation
Is Required
Step138: 30.8. Coupled To Atmosphere
Is Required
Step139: 30.9. Coupled To Land
Is Required
Step140: 30.10. Quantities Exchanged With Atmosphere
Is Required
Step141: 30.11. Basin Flow Direction Map
Is Required
Step142: 30.12. Flooding
Is Required
Step143: 30.13. Prognostic Variables
Is Required
Step144: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required
Step145: 31.2. Quantities Transported
Is Required
Step146: 32. Lakes
Land surface lakes
32.1. Overview
Is Required
Step147: 32.2. Coupling With Rivers
Is Required
Step148: 32.3. Time Step
Is Required
Step149: 32.4. Quantities Exchanged With Rivers
Is Required
Step150: 32.5. Vertical Grid
Is Required
Step151: 32.6. Prognostic Variables
Is Required
Step152: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required
Step153: 33.2. Albedo
Is Required
Step154: 33.3. Dynamics
Is Required
Step155: 33.4. Dynamic Lake Extent
Is Required
Step156: 33.5. Endorheic Basins
Is Required
Step157: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required | <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'niwa', 'sandbox-3', 'land')
Explanation: ES-DOC CMIP6 Model Properties - Land
MIP Era: CMIP6
Institute: NIWA
Source ID: SANDBOX-3
Topic: Land
Sub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes.
Properties: 154 (96 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:30
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
Explanation: Document Authors
Set document authors
End of explanation
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
Explanation: Document Contributors
Specify document contributors
End of explanation
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
Explanation: Document Publication
Specify document publication status
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Conservation Properties
3. Key Properties --> Timestepping Framework
4. Key Properties --> Software Properties
5. Grid
6. Grid --> Horizontal
7. Grid --> Vertical
8. Soil
9. Soil --> Soil Map
10. Soil --> Snow Free Albedo
11. Soil --> Hydrology
12. Soil --> Hydrology --> Freezing
13. Soil --> Hydrology --> Drainage
14. Soil --> Heat Treatment
15. Snow
16. Snow --> Snow Albedo
17. Vegetation
18. Energy Balance
19. Carbon Cycle
20. Carbon Cycle --> Vegetation
21. Carbon Cycle --> Vegetation --> Photosynthesis
22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
23. Carbon Cycle --> Vegetation --> Allocation
24. Carbon Cycle --> Vegetation --> Phenology
25. Carbon Cycle --> Vegetation --> Mortality
26. Carbon Cycle --> Litter
27. Carbon Cycle --> Soil
28. Carbon Cycle --> Permafrost Carbon
29. Nitrogen Cycle
30. River Routing
31. River Routing --> Oceanic Discharge
32. Lakes
33. Lakes --> Method
34. Lakes --> Wetlands
1. Key Properties
Land surface key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of land surface model.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of land surface model code (e.g. MOSES2.2)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.3. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the processes modelled (e.g. dynamic vegetation, prognostic albedo, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "water"
# "energy"
# "carbon"
# "nitrogen"
# "phospherous"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.4. Land Atmosphere Flux Exchanges
Is Required: FALSE Type: ENUM Cardinality: 0.N
Fluxes exchanged with the atmosphere.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.5. Atmospheric Coupling Treatment
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "bare soil"
# "urban"
# "lake"
# "land ice"
# "lake ice"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 1.6. Land Cover
Is Required: TRUE Type: ENUM Cardinality: 1.N
Types of land cover defined in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.land_cover_change')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.7. Land Cover Change
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe how land cover change is managed (e.g. the use of net or gross transitions)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 1.8. Tiling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.energy')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2. Key Properties --> Conservation Properties
TODO
2.1. Energy
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how energy is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.water')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.2. Water
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how water is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.conservation_properties.carbon')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 2.3. Carbon
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 3. Key Properties --> Timestepping Framework
TODO
3.1. Timestep Dependent On Atmosphere
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a time step dependent on the frequency of atmosphere coupling?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 3.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Overall timestep of land surface model (i.e. time between calls)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 3.3. Timestepping Method
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of time stepping method and associated time step(s)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4. Key Properties --> Software Properties
Software properties of land surface code
4.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 4.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 5. Grid
Land surface grid
5.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the grid in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 6. Grid --> Horizontal
The horizontal grid in the land surface
6.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the horizontal grid (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Does the horizontal grid match the atmosphere?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 7. Grid --> Vertical
The vertical grid in the soil
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the vertical grid in the soil (not including any tiling)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.grid.vertical.total_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 7.2. Total Depth
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The total depth of the soil (in metres)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8. Soil
Land surface soil
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of soil in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_water_coupling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.2. Heat Water Coupling
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the coupling between heat and water in the soil
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.number_of_soil layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 8.3. Number Of Soil layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 8.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the soil scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9. Soil --> Soil Map
Key properties of the land surface soil map
9.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of soil map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.structure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.2. Structure
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil structure map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.texture')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.3. Texture
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil texture map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.organic_matter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.4. Organic Matter
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil organic matter map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.5. Albedo
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil albedo map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.water_table')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.6. Water Table
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil water table map, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 9.7. Continuously Varying Soil Depth
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Do the soil properties vary continuously with depth?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.soil_map.soil_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 9.8. Soil Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil depth map
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 10. Soil --> Snow Free Albedo
TODO
10.1. Prognostic
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is snow free albedo prognostic?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "soil humidity"
# "vegetation state"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
If prognostic, describe the dependencies of the snow free albedo calculations
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "distinction between direct and diffuse albedo"
# "no distinction between direct and diffuse albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 10.3. Direct Diffuse
Is Required: FALSE Type: ENUM Cardinality: 0.1
If prognostic, describe the distinction between direct and diffuse albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 10.4. Number Of Wavelength Bands
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If prognostic, enter the number of wavelength bands used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11. Soil --> Hydrology
Key properties of the land surface soil hydrology
11.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of the soil hydrological model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river soil hydrology in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil hydrology tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 11.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 11.5. Number Of Ground Water Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of soil layers that may contain water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "perfect connectivity"
# "Darcian flow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.6. Lateral Connectivity
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe the lateral connectivity between tiles
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Bucket"
# "Force-restore"
# "Choisnel"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 11.7. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
The hydrological dynamics scheme in the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 12. Soil --> Hydrology --> Freezing
TODO
12.1. Number Of Ground Ice Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
How many soil layers may contain ground ice
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.2. Ice Storage Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the method of ice storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 12.3. Permafrost
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the treatment of permafrost, if any, within the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 13. Soil --> Hydrology --> Drainage
TODO
13.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how drainage is included in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.hydrology.drainage.types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Gravity drainage"
# "Horton mechanism"
# "topmodel-based"
# "Dunne mechanism"
# "Lateral subsurface flow"
# "Baseflow from groundwater"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 13.2. Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
Different types of runoff represented by the land surface model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14. Soil --> Heat Treatment
TODO
14.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General description of how heat treatment properties are defined
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 14.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of soil heat scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.3. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the soil heat treatment tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 14.4. Vertical Discretisation
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the typical vertical discretisation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.heat_storage')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Force-restore"
# "Explicit diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.5. Heat Storage
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the method of heat storage
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.soil.heat_treatment.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "soil moisture freeze-thaw"
# "coupling with snow temperature"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 14.6. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe processes included in the treatment of soil heat
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15. Snow
Land surface snow
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of snow in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the snow tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.number_of_snow_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 15.3. Number Of Snow Layers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of snow levels used in the land surface scheme/model
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.4. Density
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow density
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.water_equivalent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.5. Water Equivalent
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the snow water equivalent
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.heat_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.6. Heat Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of the heat content of snow
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.temperature')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.7. Temperature
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow temperature
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.liquid_water_content')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.8. Liquid Water Content
Is Required: TRUE Type: ENUM Cardinality: 1.1
Description of the treatment of snow liquid water
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_cover_fractions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "ground snow fraction"
# "vegetation snow fraction"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.9. Snow Cover Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify cover fractions used in the surface snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "snow interception"
# "snow melting"
# "snow freezing"
# "blowing snow"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 15.10. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Snow related processes in the land surface scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 15.11. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the snow scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "prescribed"
# "constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16. Snow --> Snow Albedo
TODO
16.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of snow-covered land albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.snow.snow_albedo.functions')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation type"
# "snow age"
# "snow density"
# "snow grain type"
# "aerosol deposition"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 16.2. Functions
Is Required: FALSE Type: ENUM Cardinality: 0.N
*If prognostic, specify what the snow albedo is a function of*
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17. Vegetation
Land surface vegetation
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vegetation in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 17.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of vegetation scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.dynamic_vegetation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.3. Dynamic Vegetation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there dynamic evolution of vegetation?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.4. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vegetation tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "vegetation types"
# "biome types"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.5. Vegetation Representation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Vegetation classification used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "broadleaf tree"
# "needleleaf tree"
# "C3 grass"
# "C4 grass"
# "vegetated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.6. Vegetation Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of vegetation types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biome_types')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "evergreen needleleaf forest"
# "evergreen broadleaf forest"
# "deciduous needleleaf forest"
# "deciduous broadleaf forest"
# "mixed forest"
# "woodland"
# "wooded grassland"
# "closed shrubland"
# "opne shrubland"
# "grassland"
# "cropland"
# "wetlands"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.7. Biome Types
Is Required: FALSE Type: ENUM Cardinality: 0.N
List of biome types in the classification, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_time_variation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed (not varying)"
# "prescribed (varying from files)"
# "dynamical (varying from simulation)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.8. Vegetation Time Variation
Is Required: TRUE Type: ENUM Cardinality: 1.1
How the vegetation fractions in each tile are varying with time
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.vegetation_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.9. Vegetation Map
Is Required: FALSE Type: STRING Cardinality: 0.1
If vegetation fractions are not dynamically updated, describe the vegetation map used (common name and reference, if possible)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.interception')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 17.10. Interception
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is vegetation interception of rainwater represented?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic (vegetation map)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.11. Phenology
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.phenology_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.12. Phenology Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation phenology
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prescribed"
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.13. Leaf Area Index
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.leaf_area_index_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.14. Leaf Area Index Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of leaf area index
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.15. Biomass
Is Required: TRUE Type: ENUM Cardinality: 1.1
*Treatment of vegetation biomass *
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biomass_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.16. Biomass Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biomass
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.17. Biogeography
Is Required: TRUE Type: ENUM Cardinality: 1.1
Treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.biogeography_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.18. Biogeography Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation biogeography
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "light"
# "temperature"
# "water availability"
# "CO2"
# "O3"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 17.19. Stomatal Resistance
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify what the vegetation stomatal resistance depends on
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.stomatal_resistance_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.20. Stomatal Resistance Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of the treatment of vegetation stomatal resistance
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.vegetation.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 17.21. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the vegetation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18. Energy Balance
Land surface energy balance
18.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of energy balance in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 18.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the energy balance tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 18.3. Number Of Surface Temperatures
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "alpha"
# "beta"
# "combined"
# "Monteith potential evaporation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.4. Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
Specify the formulation method for land surface evaporation, from soil and vegetation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.energy_balance.processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "transpiration"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 18.5. Processes
Is Required: TRUE Type: ENUM Cardinality: 1.N
Describe which processes are included in the energy balance scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19. Carbon Cycle
Land surface carbon cycle
19.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of carbon cycle in land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the carbon cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 19.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of carbon cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "grand slam protocol"
# "residence time"
# "decay time"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 19.4. Anthropogenic Carbon
Is Required: FALSE Type: ENUM Cardinality: 0.N
Describe the treatment of the anthropogenic carbon pool
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 19.5. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the carbon scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 20. Carbon Cycle --> Vegetation
TODO
20.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 20.3. Forest Stand Dynamics
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of forest stand dynamics
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 21. Carbon Cycle --> Vegetation --> Photosynthesis
TODO
21.1. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen dependence, etc.)
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22. Carbon Cycle --> Vegetation --> Autotrophic Respiration
TODO
22.1. Maintainance Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for maintenance respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 22.2. Growth Respiration
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the general method used for growth respiration
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 23. Carbon Cycle --> Vegetation --> Allocation
TODO
23.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the allocation scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "leaves + stems + roots"
# "leaves + stems + roots (leafy + woody)"
# "leaves + fine roots + coarse roots + stems"
# "whole plant (no distinction)"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.2. Allocation Bins
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify distinct carbon bins used in allocation
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "fixed"
# "function of vegetation type"
# "function of plant allometry"
# "explicitly calculated"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 23.3. Allocation Fractions
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe how the fractions of allocation are calculated
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 24. Carbon Cycle --> Vegetation --> Phenology
TODO
24.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the phenology scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 25. Carbon Cycle --> Vegetation --> Mortality
TODO
25.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general principle behind the mortality scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 26. Carbon Cycle --> Litter
TODO
26.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.litter.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 26.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 27. Carbon Cycle --> Soil
TODO
27.1. Number Of Carbon Pools
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.2. Carbon Pools
Is Required: FALSE Type: STRING Cardinality: 0.1
List the carbon pools used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.soil.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 27.4. Method
Is Required: FALSE Type: STRING Cardinality: 0.1
List the general method used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 28. Carbon Cycle --> Permafrost Carbon
TODO
28.1. Is Permafrost Included
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is permafrost included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.2. Emitted Greenhouse Gases
Is Required: FALSE Type: STRING Cardinality: 0.1
List the GHGs emitted
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.3. Decomposition
Is Required: FALSE Type: STRING Cardinality: 0.1
List the decomposition methods used
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 28.4. Impact On Soil Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the impact of permafrost on soil properties
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29. Nitrogen Cycle
Land surface nitrogen cycle
29.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of the nitrogen cycle in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the nitrogen cycle tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 29.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of nitrogen cycle in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 29.4. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the nitrogen scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30. River Routing
Land surface river routing
30.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of river routing in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.tiling')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.2. Tiling
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the river routing tiling, if any.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of river routing scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.4. Grid Inherited From Land Surface
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the grid inherited from land surface?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.grid_description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.5. Grid Description
Is Required: FALSE Type: STRING Cardinality: 0.1
General description of grid, if not inherited from land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.number_of_reservoirs')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 30.6. Number Of Reservoirs
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Enter the number of reservoirs
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.water_re_evaporation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "flood plains"
# "irrigation"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.7. Water Re Evaporation
Is Required: TRUE Type: ENUM Cardinality: 1.N
TODO
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 30.8. Coupled To Atmosphere
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Is river routing coupled to the atmosphere model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.coupled_to_land')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.9. Coupled To Land
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the coupling between land and rivers
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.10. Quantities Exchanged With Atmosphere
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to the atmosphere, which quantities are exchanged between river routing and the atmosphere model components?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.basin_flow_direction_map')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "present day"
# "adapted for other periods"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 30.11. Basin Flow Direction Map
Is Required: TRUE Type: ENUM Cardinality: 1.1
What type of basin flow direction map is being used?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.flooding')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.12. Flooding
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the representation of flooding, if any
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 30.13. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the river routing
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "direct (large rivers)"
# "diffuse"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31. River Routing --> Oceanic Discharge
TODO
31.1. Discharge Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify how rivers are discharged to the ocean
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 31.2. Quantities Transported
Is Required: TRUE Type: ENUM Cardinality: 1.N
Quantities that are exchanged from river-routing to the ocean model component
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32. Lakes
Land surface lakes
32.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lakes in the land surface
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.coupling_with_rivers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 32.2. Coupling With Rivers
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are lakes coupled to the river routing model component?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
Explanation: 32.3. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Time step of lake scheme in seconds
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "heat"
# "water"
# "tracers"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 32.4. Quantities Exchanged With Rivers
Is Required: FALSE Type: ENUM Cardinality: 0.N
If coupled to rivers, which quantities are exchanged between the lakes and rivers?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.vertical_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.5. Vertical Grid
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the vertical grid of lakes
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 32.6. Prognostic Variables
Is Required: TRUE Type: STRING Cardinality: 1.1
List the prognostic variables of the lake scheme
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.ice_treatment')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33. Lakes --> Method
TODO
33.1. Ice Treatment
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is lake ice included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "prognostic"
# "diagnostic"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.2. Albedo
Is Required: TRUE Type: ENUM Cardinality: 1.1
Describe the treatment of lake albedo
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamics')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "No lake dynamics"
# "vertical"
# "horizontal"
# "Other: [Please specify]"
# TODO - please enter value(s)
Explanation: 33.3. Dynamics
Is Required: TRUE Type: ENUM Cardinality: 1.N
Which dynamics of lakes are treated? horizontal, vertical, etc.
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.4. Dynamic Lake Extent
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is a dynamic lake extent scheme included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.method.endorheic_basins')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
Explanation: 33.5. Endorheic Basins
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are basins that do not flow to the ocean included?
End of explanation
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.land.lakes.wetlands.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
Explanation: 34. Lakes --> Wetlands
TODO
34.1. Description
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe the treatment of wetlands, if any
End of explanation
<END_TASK> |
15,530 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described.
<END_TASK>
<USER_TASK:>
Description:
Compute the minimum or maximum of two integers without branching
Python3 implementation of the approach ; Function to find minimum of x and y ; Function to find maximum of x and y ; Driver code
| <ASSISTANT_TASK:>
Python Code:
import sys ;
CHAR_BIT = 8 ;
INT_BIT = sys . getsizeof(int() ) ;
def Min(x , y ) :
return y +(( x - y ) &(( x - y ) >>(INT_BIT * CHAR_BIT - 1 ) ) ) ;
def Max(x , y ) :
return x -(( x - y ) &(( x - y ) >>(INT_BIT * CHAR_BIT - 1 ) ) ) ;
x = 15 ;
y = 6 ;
print("Minimum ▁ of ", x , "and ", y , "is ", Min(x , y ) ) ;
print("Maximum ▁ of ", x , "and ", y , "is ", Max(x , y ) ) ;
<END_TASK>
|
15,531 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Exercise 02 - CDS Feature Comparisons <img src="images/JHI_STRAP_Web.png" style="width
Step1: The first thing we do is load in the BLASTP output we generated, so that we can plot some of the key features. We do that using the ex02.read_data() function in the cell below. This puts the data into a dataframe called data_fwd.
Step2: <div class="alert alert-warning">
<b>NOTE
Step3: There are 5265 rows in this table, one for each of the query protein sequences in the P. syringae B728a annotation.
We can look at the distribution of values in the dataframe rows using the .hist() method for any column of interest. For example, data_fwd.subject_length.hist() plots a histogram of the values in the subject_length column.
<div class="alert alert-warning">
<b>NOTE
Step4: <div class="alert alert-warning">
<b>QUESTIONS
Step5: <div class="alert alert-warning">
<b>QUESTIONS
Step6: <div class="alert alert-warning">
<b>NOTE
Step7: We can inspect the dataframe of RBBH using the .head() and .describe() methods, by executing the cells below.
Step8: It is inevitable that the RBBH set will have the same or fewer protein pairs in it, than the number of proteins in the smallest of the forward and reverse protein sets. But how many proteins have been filtered in this comparison? We can find out by executing the cell below.
Step9: <div class="alert alert-warning">
<b>Approximately what proportion of best <b>BLAST</b> matches have been discarded?</b>
</div>
Visualising RBBH output
We can get a better idea of what this processing has done by looking at a visual representation of the percentage identity and coverage of RBBH, compared to the (forward) one-way matches. We can do this by executing the cells below.
First, let's look at the percentage identity of best BLAST matches
Step10: <div class="alert alert-warning">
<b>What has been the effect of excluding best matches that do not have an RBBH reverse match?</b>
</div>
Next, we can inspect the query and subject coverage of RBBH results, compared to the one-way forward BLAST matches by executing the cell below.
Step11: <div class="alert alert-warning">
<ul>
<li><b>Which one-way matches have been excluded by carrying out RBBH?</b><br />
<li><b>What is the biological significance of excluding those matches?</b>
<li><b>What would be a reasonable filter to exclude the remaining suspect matches?</b>
</ul>
</div>
Filtering RBBH output
The find_rbbh() function allows us to apply cutoff filters on percentage identity or coverage (or both) for an RBBH match - this, and visualisation of the results is done in the cells below.
<div class="alert alert-warning">
<b>NOTE
Step12: Visualising RBBH with ACT
Finally for this exercise, we will visualise the RBBH between P. syringae B728a and P. fluorescens NCIMB 11764 using ACT (as in exercise 01), comparing the output to that obtained by a BLASTN comparison of the chromosomes.
First, we need to generate an output file describing our (filtered) RBBH that ACT can read. We do this by executing the cell below. This does two things | <ASSISTANT_TASK:>
Python Code:
%pylab inline
# Import helper module
from helpers import ex02
Explanation: Exercise 02 - CDS Feature Comparisons <img src="images/JHI_STRAP_Web.png" style="width: 150px; float: right;">
Introduction
We often wish to establish an equivalence between the CDS features on two genomes - by which we mean some assertion that sequence A on genome 1 is the "same thing" (in some sense) as sequence B on genome 2. This equivalence can take many practical forms (same catalytic function, same binding interaction, same role in a pathway, and so on) but, given the volume of sequence data available today, is usually established on the basis of sequence similarity. This similarity is then taken as a proxy for the actual equivalence we're interested in.
When sequencing a new pathogen genome, or obtaining a novel transcriptome, we may want to annotate the coding sequences in that genome by determining orthologs - the equivalent sequences - in some other genome.
In this notebook, we will look at three methods (there are many others, but we are constrained by time!) of identifying equivalent sequence features in genomes, in bulk.
All three methods we will consider involve BLASTP comparisons between the protein complements of a plant pathogen genome and a related non-pathogenic isolate. They can be considered to fall under three categories, and all depend on initial BLASTP comparisons.
one-way pairwise comparison - best BLASTP match
two-way pairwise comparison - reciprocal best BLASTP match
clustering - Markov clustering (MCL) of BLASTP matches
We will also need to run some Python code to process and visualise the clustering output.
Learning outcomes
Conduct BLASTP comparisons between protein complements for prokaryotes
Using Python and Pandas to collect, examine and visualise tabular format data
Identify reciprocal best BLAST matches
Visualise and interpret genome-wide reciprocal best BLAST matches.
Running cells in this notebook
<div class="alert alert-info" role="alert">
This is an interactive notebook, which means you are able to run the code that is written in each of the cells.
<br /><br />
To run the code in a cell, you should:
<br /><br />
<ol>
<li>Place your mouse cursor in the cell, and click (this gives the cell *focus*) to make it active
<li>Hold down the <b>Shift</b> key, and press the <b>Return</b> key.
</ol>
</div>
If this is successful, you should see the input marker to the left of the cell change from
In [ ]:
to (for example)
In [1]:
and you may see output appear below the cell.
Requirements
<div class="alert alert-success">
To complete this exercise, you will need:
<ul>
<li>an active internet connection
<li>a local installation of <a href="https://blast.ncbi.nlm.nih.gov/Blast.cgi?PAGE_TYPE=BlastDocs&DOC_TYPE=Download"><b>BLAST+</b></a>
</ul>
</div>
Related online documentation/publications/software
Software
* CRB-BLAST - conditional reciprocal best BLAST
* OrthoMCL - a database of predicted orthologs obtained using MCL.
* OrthoFinder - a program for finding orthologous protein sequence families
Publications
* Aubrey et al. (2014) PLoS Genet. doi:10.1371/journal.pgen.1004365
Blogs
* On Reciprocal Best Blast Hits
One-Way Best BLAST matches (BBH)
It is still common to see one-way matches used - even if only informally, or as a first attempt - as a means of identifying equivalent proteins/features in a genome. In this section, we'll carry out a one-way BLAST search between the protein complements of the plant pathogen P. syringae B728a and its non-pathogenic relative P. fluorescens NCIMB 11764, and inspect the results graphically.
Performing the BLASTP query
We will use the blastp command at the terminal to use every protein sequence in the P. syringae B728a annotation as a query against the predicted proteome of P. fluorescens NCIMB 11764.
The BLAST databases have already been created for you to save time (using the scripts/02-cds_feature_comparisons.sh script), and the results are in the pseudomonas_blastp directory:
$ tree ./pseudomonas_blastp
./pseudomonas_blastp
├── GCF_000012245.1_ASM1224v1_protein.phr
├── GCF_000012245.1_ASM1224v1_protein.pin
├── GCF_000012245.1_ASM1224v1_protein.psq
├── GCF_000293885.2_ASM29388v3_protein.phr
├── GCF_000293885.2_ASM29388v3_protein.pin
├── GCF_000293885.2_ASM29388v3_protein.psq
├── GCF_000988485.1_ASM98848v1_protein.phr
├── GCF_000988485.1_ASM98848v1_protein.pin
└── GCF_000988485.1_ASM98848v1_protein.psq
We will use some custom settings to make our analysis easier to carry out.
<div class="alert alert-warning">
<ul>
<li> We will want to limit our matches to only the best hit, so we specify <b>-max_target_seqs 1</b>
<li> We want our output in tab-separated tabular particular format so we can import it easily into other tools (like <b>R</b> and <b>Python</b>), so use <b>-outfmt 6</b>.
<li> We want some specific non-standard columns (e.g. query sequence coverage) in that table so we can carry out some useful calculations and visualisation. We therefore specify <b>-outfmt "6 qseqid sseqid qlen slen length nident pident qcovs evalue bitscore"</b>
<li> To make the comparisons quicker, we should create <b>BLAST</b> databases for each of the three proteomes, with the <b>makeblastdb</b> command.
</ul>
</div>
To carry out the one-way BLASTP search of P. syringae B728a against P. fluorescens NCIMB 11764, we would execute the following command in the terminal:
blastp -query pseudomonas/GCF_000988485.1_ASM98848v1_protein.faa \
-db pseudomonas_blastp/GCF_000293885.2_ASM29388v3_protein \
-max_target_seqs 1 \
-outfmt "6 qseqid sseqid qlen slen length nident pident qcovs evalue bitscore" \
-out pseudomonas_blastp/B728a_vs_NCIMB_11764.tab
This will take a few minutes to complete, so to save time the comparison has already been made for you, with the result file being placed in pseudomonas_blastp/B728a_vs_NCIMB_11764.tab.
Importing and visualising the results
The Python module helpers is included in this directory, to provide useful helper functions so that we can read and view the BLASTP output generated above. To make the functions available, we import it by running the Python code cell below.
<div class="alert alert-warning">
<b>NOTE:</b> The <b>%pylab inline</b> "magic" below allows us to see plots of the <b>BLAST</b> data we load, <i>inline</i> in this notebook.
</div>
End of explanation
# Load one-way BLAST results into a data frame called data_fwd
data_fwd = ex02.read_data("pseudomonas_blastp/B728a_vs_NCIMB_11764.tab")
Explanation: The first thing we do is load in the BLASTP output we generated, so that we can plot some of the key features. We do that using the ex02.read_data() function in the cell below. This puts the data into a dataframe called data_fwd.
End of explanation
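The ex02.read_data() helper itself is not shown in this notebook; conceptually it just reads the tab-separated BLAST table into pandas with column names matching the -outfmt string used above, and derives a subject coverage column. The sketch below is an assumption about what such a loader could look like (the identifier and score column names here are hypothetical, not taken from the helper):
import pandas as pd

def read_blast_tab(filename):
    """Load custom-format tabular BLAST output into a dataframe (illustrative sketch)."""
    # Column order must match the -outfmt string used for the BLASTP search;
    # query_id, subject_id, identical_sites, Evalue and bitscore are assumed names.
    columns = ["query_id", "subject_id", "query_length", "subject_length",
               "alignment_length", "identical_sites", "identity",
               "query_coverage", "Evalue", "bitscore"]
    df = pd.read_csv(filename, sep="\t", names=columns)
    # Assumed derivation: BLAST's qcovs gives query coverage only, so subject
    # coverage is approximated from the alignment and subject lengths.
    df["subject_coverage"] = 100 * df["alignment_length"] / df["subject_length"]
    return df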
# Show first few lines of the loaded data
data_fwd.head()
# Show descriptive statistics for the table
data_fwd.describe()
Explanation: <div class="alert alert-warning">
<b>NOTE:</b> In the cell below, the <b>data.head()</b> function shows us the first few lines of the one-way <b>BLASTP</b> results, one per match; the <b>data.describe()</b> function shows us some summary data for the table.
</div>
End of explanation
# Plot a histogram of alignment lengths for the BLAST data
data_fwd.alignment_length.hist(bins=100)
# Plot a histogram of percentage identity for the BLAST data
data_fwd.identity.hist(bins=100)
# Plot a histogram of query_coverage for the BLAST data
data_fwd.query_coverage.hist(bins=100)
# Plot a histogram of percentage coverage for the BLAST data
data_fwd.subject_coverage.hist(bins=100)
Explanation: There are 5265 rows in this table, one for each of the query protein sequences in the P. syringae B728a annotation.
We can look at the distribution of values in the dataframe rows using the .hist() method for any column of interest. For example, data_fwd.subject_length.hist() plots a histogram of the values in the subject_length column.
<div class="alert alert-warning">
<b>NOTE:</b> The <b>bins=100</b> option sets the number of value bins used in the histogram
</div>
End of explanation
# Plot a 2D histogram of query sequence coverage against subject
# sequence (match) coverage
ex02.plot_hist2d(data_fwd.query_coverage, data_fwd.subject_coverage,
"one-way query COV", "one-way subject COV",
"one-way coverage comparison")
ex02.plot_hist2d(data_fwd.query_coverage, data_fwd.identity,
"one-way query COV", "one-way match PID",
"one-way coverage/identity comparison")
Explanation: <div class="alert alert-warning">
<b>QUESTIONS:</b>
<ul>
<li><b>What size are most one-way best `BLAST` alignments?</b>
<li><b>What is the typical query coverage?</b>
<li><b>What is the typical subject coverage?</b>
<li><b>What is the typical best `BLAST` match identity?</b>
</ul>
</div>
We can view the relationship between query coverage and subject coverage, and query coverage and match identity for these one-way best BLAST hits by plotting a 2D histogram, with the helper function ex02.plot_hist2d() in the cell below.
End of explanation
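To reproduce this kind of plot without the helper module, a plain matplotlib 2D histogram along the following lines gives a similar picture (an illustrative sketch, not the actual ex02.plot_hist2d() implementation):
import matplotlib.pyplot as plt

def plot_hist2d_sketch(xdata, ydata, xlabel, ylabel, title, bins=100):
    """Draw a simple 2D histogram of two dataframe columns."""
    fig, ax = plt.subplots(figsize=(6, 6))
    counts, xedges, yedges, image = ax.hist2d(xdata, ydata, bins=bins)
    fig.colorbar(image, ax=ax, label="count")
    ax.set_xlabel(xlabel)
    ax.set_ylabel(ylabel)
    ax.set_title(title)
    return fig, ax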
# Load the reverse one-way BLAST results into a data frame called data_rev
data_rev = ex02.read_data("pseudomonas_blastp/NCIMB_11764_vs_B728a.tab")
Explanation: <div class="alert alert-warning">
<b>QUESTIONS:</b>
<ul>
<li>**What is the query/subject coverage for most one-way best `BLAST` matches?**
<li>**Why do some one-way `BLAST` matches not have the same coverage for query and subject?**
<li>**What is the typical query coverage of a high percentage identity match?**
<li>**What is the typical query coverage of a low percentage identity match?**
</ul>
</div>
<div class="alert alert-danger" role="alert">
<b>QUESTION:</b><br />
<b>Do one-way best `BLAST` matches always identify equivalent proteins (<i>orthologs</i>)?</b>
</div>
Reciprocal (Two-Way) Best BLAST matches (RBBH)
To perform a reciprocal BLAST search between two sets of proteins S1 and S2 (say), we need to carry out the forward search of S1 vs S2, and the reverse search S2 vs S1.
Reciprocal best BLAST matches are those where the sequence G(S1) (a gene/CDS from sequence set S1) used as a query makes its best BLAST match to sequence G(S2) (a gene/CDS from sequence set S2), and when sequence G(S2) is used as a query it makes its best match to sequence G(S1) (see figure below).
We carried out the forward search above, for P. syringae B728a (our sequence set S1) against P. fluorescens NCIMB 11764 (our sequence set S2), and now we will carry out the corresponding reverse search by executing the command below at the terminal:
blastp -query pseudomonas/GCF_000293885.2_ASM29388v3_protein.faa \
-db pseudomonas_blastp/GCF_000988485.1_ASM98848v1_protein \
-max_target_seqs 1 \
-outfmt "6 qseqid sseqid qlen slen length nident pident qcovs evalue bitscore" \
-out pseudomonas_blastp/NCIMB_11764_vs_B728a.tab
As before, this would take a few minutes to complete, so to save some time the comparison has already been made for you, with the result file being placed in pseudomonas_blastp/NCIMB_11764_vs_B728a.tab.
We'll load the results into a dataframe called data_rev using the helper function ex02.read_data() in the cell below.
End of explanation
# Calculate RBBH for the two Pseudomonas datasets
# This returns three dataframes: df1 and df2 are the forward and reverse BLAST
# results (filtered, if any filters were used), and rbbh is the dataframe of
# reciprocal best BLAST hits
df1, df2, rbbh = ex02.find_rbbh(data_fwd, data_rev)
Explanation: <div class="alert alert-warning">
<b>NOTE:</b> You could inspect <b>data_rev</b> using the <b>.head()</b> and <b>.describe()</b> methods, just as you did for <b>data_fwd</b>
</div>
The ex02 module provides a function called find_rbbh() which calculates reciprocal best BLAST hits from forward and reverse BLAST searches. The calculation can be performed by executing the cell below.
End of explanation
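Under the hood, an RBBH calculation like this can be expressed as a pandas merge of the forward and reverse best-hit tables on the (query, subject) identifier pairs. The sketch below illustrates the idea only; the query_id, subject_id and bitscore column names are assumptions, and this is not the actual ex02.find_rbbh() code (which also supports the identity and coverage filters used later):
import pandas as pd

def find_rbbh_sketch(fwd, rev):
    """Keep forward best hits that are matched by the corresponding reverse best hit."""
    # Retain a single best hit per query in each direction (highest bitscore)
    fwd_best = fwd.sort_values("bitscore", ascending=False).drop_duplicates("query_id")
    rev_best = rev.sort_values("bitscore", ascending=False).drop_duplicates("query_id")
    # A reciprocal pair satisfies fwd.query == rev.subject and fwd.subject == rev.query
    return pd.merge(fwd_best, rev_best,
                    left_on=["query_id", "subject_id"],
                    right_on=["subject_id", "query_id"],
                    suffixes=("_x", "_y"))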
# Peek at the first few lines of the RBBH results
rbbh.head()
# Show summary statistics for RBBH
rbbh.describe()
Explanation: We can inspect the dataframe of RBBH using the .head() and .describe() methods, by executing the cells below.
End of explanation
# Report the size of each of the forward and reverse input, and rbbh output dataframes
s = '\n'.join(["Forward BLAST input: {0} proteins",
"Reverse BLAST input: {1} proteins",
"RBBH output: {2} proteins"])
print(s.format(len(data_fwd), len(data_rev), len(rbbh)))
print("(min difference = {0})".format(min(len(data_fwd), len(data_rev)) - len(rbbh)))
Explanation: It is inevitable that the RBBH set will contain the same number of protein pairs as, or fewer than, the number of proteins in the smaller of the forward and reverse protein sets. But how many proteins have been filtered in this comparison? We can find out by executing the cell below.
End of explanation
# Histogram of forward match percentage identity (one-way)
data_fwd.identity.hist(bins=100)
# Histogram of forward match percentage identity (RBBH)
rbbh.identity_x.hist(bins=100)
Explanation: <div class="alert alert-warning">
<b>Approximately what proportion of best <b>BLAST</b> matches have been discarded?</b>
</div>
Visualising RBBH output
We can get a better idea of what this processing has done by looking at a visual representation of the percentage identity and coverage of RBBH, compared to the (forward) one-way matches. We can do this by executing the cells below.
First, let's look at the percentage identity of best BLAST matches:
End of explanation
# Plot 2D histograms of query coverage against subject coverage for the
# one-way forward matches, and those retained after calculating RBBH
ex02.plot_hist2d(data_fwd.query_coverage, data_fwd.subject_coverage,
"one-way query COV", "one-way subject COV",
"one-way coverage comparison")
ex02.plot_hist2d(rbbh.query_coverage_x, rbbh.subject_coverage_x,
"RBBH (fwd) query COV", "RBBH (fwd) subject COV",
"RBBH coverage comparison")
Explanation: <div class="alert alert-warning">
<b>What has been the effect of excluding best matches that do not have an RBBH reverse match?</b>
</div>
Next, we can inspect the query and subject coverage of RBBH results, compared to the one-way forward BLAST matches by executing the cell below.
End of explanation
# Calculate ID and coverage-filtered RBBH for the two Pseudomonas datasets
# This returns three dataframes: df1_filtered and df2_filtered are the
# filtered forward and reverse BLAST results, and rbbh_filtered is the
# dataframe of reciprocal best BLAST hits
df1_filtered, df2_filtered, rbbh_filtered = ex02.find_rbbh(data_fwd, data_rev, pid=40, cov=70)
# Histogram of forward match percentage identity (RBBH, filtered)
rbbh_filtered.identity_x.hist(bins=100)
# Plot 2D histograms of query coverage against subject coverage for the
# one-way forward matches retained after calculating RBBH and
# filtering on percentage identity and coverage
ex02.plot_hist2d(rbbh_filtered.query_coverage_x, rbbh_filtered.subject_coverage_x,
"filtered RBBH (fwd) query COV", "filtered_RBBH (fwd) subject COV",
"filtered RBBH coverage comparison")
Explanation: <div class="alert alert-warning">
<ul>
<li><b>Which one-way matches have been excluded by carrying out RBBH?</b><br />
<li><b>What is the biological significance of excluding those matches?</b>
<li><b>What would be a reasonable filter to exclude the remaining suspect matches?</b>
</ul>
</div>
Filtering RBBH output
The find_rbbh() function allows us to apply cutoff filters on percentage identity or coverage (or both) for an RBBH match - this, and visualisation of the results is done in the cells below.
<div class="alert alert-warning">
<b>NOTE:</b> There is a software tool (<a href="https://github.com/cboursnell/crb-blast"><b>CRB-BLAST</b></a> - Conditional Reciprocal Best BLAST) available that calculates reciprocal best matches, and statistically evaluates an 'optimal' E-value cutoff, in order to improve accuracy of ortholog assignment.
</div>
End of explanation
# Read feature locations for each Pseudomonas file
features = ex02.read_genbank("pseudomonas/GCF_000988485.1_ASM98848v1_genomic.gbff",
"pseudomonas/GCF_000293885.2_ASM29388v3_genomic.gbff")
# Write a .crunch file of filtered RBBH for the Pseudomonas comparisons
ex02.write_crunch(rbbh_filtered, features,
fwd="GCF_000988485.1_ASM98848v1_genomic",
rev="GCF_000293885.2_ASM29388v3_genomic",
outdir="pseudomonas_blastp",
filename="B728a_rbbh_NCIMB_11764.crunch")
Explanation: Visualising RBBH with ACT
Finally for this exercise, we will visualise the RBBH between P. syringae B728a and P. fluorescens NCIMB 11764 using ACT (as in exercise 01), comparing the output to that obtained by a BLASTN comparison of the chromosomes.
First, we need to generate an output file describing our (filtered) RBBH that ACT can read. We do this by executing the cell below. This does two things:
Gets the locations of protein features on the chromosome of each organism from a .gbff file, using the helper function read_genbank(), putting them in a variable called features.
Writes the RBBH to a .crunch format file (pseudomonas_blastp/B728a_rbbh_NCIMB_11764.crunch), which ACT can read.
End of explanation
<END_TASK> |
15,532 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
LQ Approximation with QuantEcon.py
Step2: We consider a dynamic maximization problem with
reward function $f(s, x)$,
state transition function $g(s, x)$, and
discount rate $\delta$,
where $s$ and $x$ are the state and the control variables, respectively
(we follow Miranda-Fackler in notation).
Let $(s^, x^)$ denote the steady state state-control pair,
and write
$f^ = f(s^, x^)$, $f_i^ = f_i(s^, x^)$, $f_{ij}^ = f_{ij}(s^, x^)$,
$g^ = g(s^, x^)$, and $g_i^ = g_i(s^, x^*)$ for $i, j = s, x$.
First-order expansion of $g$ around $(s^, x^)$
Step3: Optimal Economic Growth
We consider the following optimal growth model from Miranda and Fackler, Section 9.7.1
Step4: Function definitions
Step5: Steady state
Step6: (s_star, x_star) satisfies the Euler equations
Step7: Construct $f^$, $\nabla f^$, $D^2 f^$, $g^$, and $\nabla g^*$
Step8: LQ Approximation
Generate an LQ instance that approximates our dynamic optimization problem
Step9: Solution by LQ.stationary_values
Solve the LQ problem
Step10: The optimal value function (of the LQ minimization problem)
Step11: The value at $s^*$
Step12: The optimal policy function
Step13: The optimal choice at $s^*$
Step14: Renewable Resource Management
Consider the renewable resource management model from Miranda and Fackler, Section 9.7.2 | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import quantecon as qe
# matplotlib settings
plt.rcParams['axes.xmargin'] = 0
plt.rcParams['axes.ymargin'] = 0
Explanation: LQ Approximation with QuantEcon.py
End of explanation
def approx_lq(s_star, x_star, f_star, Df_star, DDf_star, g_star, Dg_star, discount):
Return an approximating LQ instance.
Gradient of f: Df_star = np.array([f_s, f_x])
Hessian of f: DDf_star = np.array([[f_ss, f_sx], [f_sx, f_xx]])
Gradient of g: Dg_star = np.array([g_s, g_x])
n = 2
k = 1
sx_star = np.array([s_star, x_star])
# (1, s)' R (1, s) + 2 x N (1, s) + x Q x
Q = np.empty((k, k))
R = np.empty((n, n))
N = np.empty((k, n))
R[0, 0] = -(f_star - Df_star @ sx_star + (sx_star @ DDf_star @ sx_star) / 2)
R[1, 1], N[0, 1], N[0, 1], Q[0, 0] = -DDf_star.ravel() / 2
R[1, 0], N[0, 0] = -(Df_star - DDf_star @ sx_star).ravel() / 2
R[0, 1] = R[1, 0]
# A (1, s) + B x + C w
A = np.empty((n, n))
B = np.empty((n, k))
C = np.zeros((n, 1))
A[0, 0], A[0, 1], B[0, 0] = 1, 0, 0
A[1, 0] = g_star - Dg_star @ sx_star
A[1, 1], B[1, 0] = Dg_star.ravel()
lq = qe.LQ(Q, R, A, B, C, N, beta=discount)
return lq
Explanation: We consider a dynamic maximization problem with
reward function $f(s, x)$,
state transition function $g(s, x)$, and
discount rate $\delta$,
where $s$ and $x$ are the state and the control variables, respectively
(we follow Miranda-Fackler in notation).
Let $(s^, x^)$ denote the steady state state-control pair,
and write
$f^ = f(s^, x^)$, $f_i^ = f_i(s^, x^)$, $f_{ij}^ = f_{ij}(s^, x^)$,
$g^ = g(s^, x^)$, and $g_i^ = g_i(s^, x^*)$ for $i, j = s, x$.
First-order expansion of $g$ around $(s^, x^)$:
$$
\begin{align}
g(s, x)
&\approx g^ + g_s^ (s - s^) + g_x^ (x - x^) \
&= A \begin{pmatrix}1 \ s\end{pmatrix} + B x,
\end{align*}
$$
where
$A =
\begin{pmatrix}
1 & 0 \
g^ - \nabla g^{\mathrm{T}} z^ & g_s^
\end{pmatrix}$,
$B =
\begin{pmatrix}
0 \ g_x^*
\end{pmatrix}$
with $z^ = (s^, x^)^{\mathrm{T}}$ and $\nabla g^ = (g_s^, g_x^)^{\mathrm{T}}$.
Second-order expansion of $f$ around $(s^, x^)$:
$$
\begin{align}
f(s, x)
&\approx f^ + f_s^ (s - s^) + f_x^ (x - x^) +
\frac{1}{2} f_{ss}^ (s - s^)^2 + f_{sx}^ (s - s^) (x - x^) +
\frac{1}{2} f_{xx}^ (x - x^)^2 \
&= \begin{pmatrix}
1 & s & x
\end{pmatrix}
\begin{pmatrix}
f^ - \nabla f^{\mathrm{T}} z^ + \frac{1}{2} z^{\mathrm{T}} D^2 f^ z^ &
\frac{1}{2} (\nabla f^ - D^2 f^ z^)^{\mathrm{T}} \
\frac{1}{2} (\nabla f^ - D^2 f^ z^) & \frac{1}{2} D^2 f^
\end{pmatrix}
\begin{pmatrix}
1 \ s \ x
\end{pmatrix},
\end{align}
$$
where
$\nabla f^ = (f_s^, f_x^)^{\mathrm{T}}$ and
$$
D^2 f^ =
\begin{pmatrix}
f_{ss}^ & f_{sx}^ \
f_{sx}^ & f_{xx}^*
\end{pmatrix}.
$$
Let
$$
\begin{align}
r(s, x)
&= -
\begin{pmatrix}
1 & s & x
\end{pmatrix}
\begin{pmatrix}
f^ - \nabla f^{\mathrm{T}} z^ + \frac{1}{2} z^{\mathrm{T}} D^2 f^ z^ &
\frac{1}{2} (\nabla f^ - D^2 f^ z^)^{\mathrm{T}} \
\frac{1}{2} (\nabla f^ - D^2 f^ z^) & \frac{1}{2} D^2 f^
\end{pmatrix}
\begin{pmatrix}
1 \ s \ x
\end{pmatrix} \
&= \begin{pmatrix}
1 & s
\end{pmatrix}
R
\begin{pmatrix}
1 \ s
\end{pmatrix} +
2 x N
\begin{pmatrix}
1 \ s
\end{pmatrix} +
Q x,
\end{align*}
$$
where
$R = -
\begin{pmatrix}
f^ - \nabla f^{\mathrm{T}} z^ + \frac{1}{2} z^{\mathrm{T}} D^2 f^ z^ &
\frac{1}{2} [f_s^ - (f_{ss}^ s^ + f_{sx}^ x^)] \
\frac{1}{2} [f_s^ - (f_{ss}^ s^ + f_{sx}^ x^)] & \frac{1}{2} f_{ss}^*
\end{pmatrix}$,
$N = -
\begin{pmatrix}
\frac{1}{2} [f_x^ - (f_{sx}^ s^ + f_{xx}^ x^)] & \frac{1}{2} f_{sx}^
\end{pmatrix}$.
$Q = -\frac{1}{2} f_{xx}^*$.
Remarks:
We are going to minimize the objective function.
End of explanation
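For reference, the qe.LQ object built above solves a minimization problem of the form
$$\min_{\{u_t\}}\ \mathbb{E}\sum_{t=0}^{\infty}\beta^t\bigl(y_t' R\, y_t + u_t' Q\, u_t + 2 u_t' N y_t\bigr),\qquad y_{t+1} = A y_t + B u_t + C w_{t+1},$$
which is why the reward terms are negated when filling $R$, $Q$ and $N$; here the LQ state is $y_t = (1, s_t)'$ and the control is $u_t = x_t$.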
alpha = 0.2
beta = 0.5
gamma = 0.9
discount = 0.9
Explanation: Optimal Economic Growth
We consider the following optimal growth model from Miranda and Fackler, Section 9.7.1:
$f(s, x) = \dfrac{(s - x)^{1-\alpha}}{1-\alpha}$,
$g(s, x) = \gamma x + x^{\beta}$.
End of explanation
f = lambda s, x: (s - x)**(1 - alpha) / (1 - alpha)
f_s = lambda s, x: (s - x)**(-alpha)
f_x = lambda s, x: -f_s(s, x)
f_ss = lambda s, x: -alpha * (s - x)**(-alpha - 1)
f_sx = lambda s, x: -f_ss(s, x)
f_xx = lambda s, x: f_ss(s, x)
g = lambda s, x: gamma * x + x**beta
g_s = lambda s, x: 0
g_x = lambda s, x: gamma + beta * x**(beta - 1)
Explanation: Function definitions:
End of explanation
x_star = ((discount * beta) / (1 - discount * gamma))**(1 / (1 - beta))
s_star = gamma * x_star + x_star**beta
s_star, x_star
Explanation: Steady state:
End of explanation
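A quick derivation of the closed forms used in the cell above: at the steady state the Euler equation $f_x(s^*, x^*) + \delta f_s(s^*, x^*)\, g_x(s^*, x^*) = 0$ becomes
$$-(s^*-x^*)^{-\alpha} + \delta\,(s^*-x^*)^{-\alpha}\bigl(\gamma + \beta (x^*)^{\beta-1}\bigr) = 0
\;\Longrightarrow\; \delta\bigl(\gamma + \beta (x^*)^{\beta-1}\bigr) = 1,$$
so $x^* = \bigl(\delta\beta/(1-\delta\gamma)\bigr)^{1/(1-\beta)}$ and, from $s^* = g(s^*, x^*)$, $s^* = \gamma x^* + (x^*)^{\beta}$.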
f_x(s_star, x_star) + discount * f_s(g(s_star, x_star), x_star) * g_x(s_star, x_star)
Explanation: (s_star, x_star) satisfies the Euler equations:
End of explanation
f_star = f(s_star, x_star)
Df_star = np.array([f_s(s_star, x_star), f_x(s_star, x_star)])
DDf_star = np.array([[f_ss(s_star, x_star), f_sx(s_star, x_star)],
[f_sx(s_star, x_star), f_xx(s_star, x_star)]])
g_star = g(s_star, x_star)
Dg_star = np.array([g_s(s_star, x_star), g_x(s_star, x_star)])
Explanation: Construct $f^$, $\nabla f^$, $D^2 f^$, $g^$, and $\nabla g^*$:
End of explanation
lq = approx_lq(s_star, x_star, f_star, Df_star, DDf_star, g_star, Dg_star, discount)
Explanation: LQ Approximation
Generate an LQ instance that approximates our dynamic optimization problem:
End of explanation
P, F, d = lq.stationary_values()
P, F, d
Explanation: Solution by LQ.stationary_values
Solve the LQ problem:
End of explanation
V = lambda s: np.array([1, s]) @ P @ np.array([1, s]) + d
Explanation: The optimal value function (of the LQ minimization problem):
End of explanation
V(s_star)
-f_star / (1 - lq.beta)
Explanation: The value at $s^*$:
End of explanation
X = lambda s: -(F @ np.array([1, s]))[0]
Explanation: The optimal policy function:
End of explanation
X(s_star)
x_star
X = np.vectorize(X)
s_min, s_max = 5, 10
ss = np.linspace(s_min, s_max, 50)
title = "Optimal Investment Policy"
xlabel = "Wealth"
ylabel = "Investment (% of Wealth)"
fig, ax = plt.subplots(figsize=(8,5))
ax.plot(ss, X(ss)/ss, label='L-Q')
ax.plot(s_star, x_star/s_star, '*', color='k', markersize=10)
ax.set_xlim(s_min, s_max)
ax.set_ylim(0.65, 0.9)
ax.set_title(title)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.tick_params(right='on')
ax.legend()
plt.show()
Explanation: The optimal choice at $s^*$:
End of explanation
alpha = 4.0
beta = 1.0
gamma = 0.5
kappa = 0.2
discount = 0.9
f = lambda s, x: (s - x)**(1 - gamma) / (1 - gamma) - kappa * (s - x)
f_s = lambda s, x: (s - x)**(-gamma) - kappa
f_x = lambda s, x: -f_s(s, x)
f_ss = lambda s, x: -gamma * (s - x)**(-gamma - 1)
f_sx = lambda s, x: -f_ss(s, x)
f_xx = lambda s, x: f_ss(s, x)
g = lambda s, x: alpha * x - 0.5 * beta * x**2
g_s = lambda s, x: 0
g_x = lambda s, x: alpha - beta * x
x_star = (discount * alpha - 1) / (discount * beta)
s_star = (alpha**2 - 1/discount**2) / (2 * beta)
s_star, x_star
f_x(s_star, x_star) + discount * f_s(g(s_star, x_star), x_star) * g_x(s_star, x_star)
f_star = f(s_star, x_star)
Df_star = np.array([f_s(s_star, x_star), f_x(s_star, x_star)])
DDf_star = np.array([[f_ss(s_star, x_star), f_sx(s_star, x_star)],
[f_sx(s_star, x_star), f_xx(s_star, x_star)]])
g_star = g(s_star, x_star)
Dg_star = np.array([g_s(s_star, x_star), g_x(s_star, x_star)])
lq = approx_lq(s_star, x_star, f_star, Df_star, DDf_star, g_star, Dg_star, discount)
P, F, d = lq.stationary_values()
P, F, d
V = lambda s: np.array([1, s]) @ P @ np.array([1, s]) + d
V(s_star)
-f_star / (1 - lq.beta)
X = lambda s: -(F @ np.array([1, s]))[0]
X(s_star)
x_star
X = np.vectorize(X)
s_min, s_max = 6, 9
ss = np.linspace(s_min, s_max, 50)
harvest = ss - X(ss)
h_star = s_star - x_star
title = "Optimal Harvest Policy"
xlabel = "Available Stock"
ylabel = "Harvest (% of Stock)"
fig, ax = plt.subplots(figsize=(8,5))
ax.plot(ss, harvest/ss, label='L-Q')
ax.plot(s_star, h_star/s_star, '*', color='k', markersize=10)
ax.set_xlim(s_min, s_max)
ax.set_ylim(0.5, 0.75)
ax.set_title(title)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.tick_params(right='on')
ax.legend()
plt.show()
shadow_price = lambda s: -2 * (P @ [1, s])[1]
shadow_price = np.vectorize(shadow_price)
title = "Shadow Price Function"
ylabel = "Price"
fig, ax = plt.subplots(figsize=(8,5))
ax.plot(ss, shadow_price(ss), label='L-Q')
ax.plot(s_star, shadow_price(s_star), '*', color='k', markersize=10)
ax.set_xlim(s_min, s_max)
ax.set_ylim(0.2, 0.4)
ax.set_title(title)
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.tick_params(right='on')
ax.legend()
plt.show()
Explanation: Renewable Resource Management
Consider the renewable resource management model from Miranda and Fackler, Section 9.7.2:
$f(s, x) = \dfrac{(s - x)^{1-\gamma}}{1-\gamma} - \kappa (s - x)$,
$g(s, x) = \alpha x - 0.5 \beta x^2$.
End of explanation
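As a check on the closed forms used above: cancelling the common $\bigl((s^*-x^*)^{-\gamma}-\kappa\bigr)$ factor in the steady-state Euler equation leaves $\delta(\alpha - \beta x^*) = 1$, so
$$x^* = \frac{\delta\alpha - 1}{\delta\beta},\qquad
s^* = \alpha x^* - \tfrac{1}{2}\beta (x^*)^2 = \frac{\alpha^2 - 1/\delta^2}{2\beta}.$$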
<END_TASK> |
15,533 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Regression Week 5
Step1: Unzipping files with house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
Step2: Load in house sales data
Step3: Create new features
As in Week 2, we consider features that are some transformations of inputs.
Step4: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this variable will mostly affect houses with many bedrooms.
On the other hand, taking square root of sqft_living will decrease the separation between big house and small house. The owner may not be exactly twice as happy for getting a house that is twice as big.
Learn regression weights with L1 penalty
Let us fit a model with all the features available, plus the features we just created above.
Step5: Using the entire house dataset, learn regression weights using an L1 penalty of 5e2. Make sure to add "normalize=True" when creating the Lasso object.
Step6: Note that a majority of the weights have been set to zero. So by setting an L1 penalty that's large enough, we are performing a subset selection.
Step7: Note that a majority of the weights have been set to zero. So by setting an L1 penalty that's large enough, we are performing a subset selection.
QUIZ QUESTION
Step8: Selecting an L1 penalty
To find a good L1 penalty, we will explore multiple values using a validation set. Let us do three way split into train, validation, and test sets
Step9: Make sure to create the 4 features as we did above
Step10: Next, we write a loop that does the following
Step11: Creating a dictionary to store the regression models for each L1 penalty. The key of the dictionary will be the index of the l1_pen_val array, passed as a string
Step12: Creating a regression model for each L1 penalty
Step13: Making a function to compute the RSS on the validation data
Step14: Making a list to store tuples of the form (RSS value for a L1 penalty, index of L1 penalty array)
Step15: In this loop, we use the repression model to calculate the predicted output values. We then use the predicted values and observed output value to calculate the RSS error. We then fill in values for the RSS_L1_vals.
Step16: QUIZ QUESTIONS
Q1. What was the best value for the l1_penalty?
Step17: QUIZ QUESTION
Also, using this value of L1 penalty, how many nonzero weights do you have?
Step18: Limit the number of nonzero weights
What if we absolutely wanted to limit ourselves to, say, 7 features? This may be important if we want to derive "a rule of thumb" --- an interpretable model that has only a few features in them.
In this section, you are going to implement a simple, two phase procedure to achive this goal
Step19: Exploring the larger range of values to find a narrow range with the desired sparsity
Let's define a wide range of possible l1_penalty_values
Step20: Now, implement a loop that search through this space of possible l1_penalty values
Step21: Creating a regression model for each L1 penalty. Then, finding the non-zero entries for the regression models. If number of non-zero weights are larger or smaller than max_nonzeros, store the number of non_zero weights
Step22: Out of this large range, we want to find the two ends of our desired narrow range of l1_penalty. At one end, we will have l1_penalty values that have too few non-zeros, and at the other end, we will have an l1_penalty that has too many non-zeros.
More formally, find
Step23: Exploring the narrow range of values to find the solution with the right number of non-zeros that has lowest RSS on the validation set
We will now explore the narrow region of l1_penalty values we found
Step24: For l1_penalty in np.linspace(l1_penalty_min,l1_penalty_max,20)
Step25: Creating a regression model for each L1 penalty. If the the number of non-zero weights is equal to max_nonzeros, storing the RSS on the validation set and the index for this L1 penalty in the l1_penalty_values list
Step26: QUIZ QUESTIONS
Q1. What value of l1_penalty in our narrow range has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros?
Step27: Q2. What features in this model have non-zero coefficients?
Re-learning the model with this L1 penalty
Step28: Printing the features with non-zero weights and the values of the weights. | <ASSISTANT_TASK:>
Python Code:
import os
import zipfile
from math import log, sqrt
import numpy as np
import pandas as pd
from sklearn import linear_model
import matplotlib as mpl
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('darkgrid')
%matplotlib inline
Explanation: Regression Week 5: Feature Selection and LASSO (Interpretation)
In this notebook, we will use LASSO to select features. You will:
* Run LASSO with different L1 penalties.
* Choose best L1 penalty using a validation set.
* Choose best L1 penalty using a validation set, with additional constraint on the size of subset.
In the second notebook, you will implement your own LASSO solver, using coordinate descent.
Importing Libraries
End of explanation
# Put files in the current directory into a list
files_list = [f for f in os.listdir('.') if os.path.isfile(f)]
# Filenames of unzipped files
unzip_files = ['kc_house_data.csv','wk3_kc_house_train_data.csv',
'wk3_kc_house_test_data.csv', 'wk3_kc_house_train_data.csv',
'wk3_kc_house_valid_data.csv']
# If upzipped file not in files_list, unzip the file
for filename in unzip_files:
if filename not in files_list:
zip_file = filename + '.zip'
unzipping = zipfile.ZipFile(zip_file)
unzipping.extractall()
unzipping.close
Explanation: Unzipping files with house sales data
Dataset is from house sales in King County, the region where the city of Seattle, WA is located.
End of explanation
# Dictionary with the correct dtypes for the DataFrame columns
dtype_dict = {'bathrooms':float, 'waterfront':int, 'sqft_above':int,
'sqft_living15':float, 'grade':int, 'yr_renovated':int,
'price':float, 'bedrooms':float, 'zipcode':str,
'long':float, 'sqft_lot15':float, 'sqft_living':float,
'floors':float, 'condition':int, 'lat':float, 'date':str,
'sqft_basement':int, 'yr_built':int, 'id':str,
'sqft_lot':int, 'view':int}
sales = pd.read_csv('kc_house_data.csv', dtype=dtype_dict)
Explanation: Load in house sales data
End of explanation
sales['sqft_living_sqrt'] = sales['sqft_living'].apply(sqrt)
sales['sqft_lot_sqrt'] = sales['sqft_lot'].apply(sqrt)
sales['bedrooms_square'] = sales['bedrooms']*sales['bedrooms']
sales['floors_square'] = sales['floors']*sales['floors']
Explanation: Create new features
As in Week 2, we consider features that are some transformations of inputs.
End of explanation
all_features = ['bedrooms', 'bedrooms_square',
'bathrooms',
'sqft_living', 'sqft_living_sqrt',
'sqft_lot', 'sqft_lot_sqrt',
'floors', 'floors_square',
'waterfront', 'view', 'condition', 'grade',
'sqft_above',
'sqft_basement',
'yr_built', 'yr_renovated']
Explanation: Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this variable will mostly affect houses with many bedrooms.
On the other hand, taking square root of sqft_living will decrease the separation between big house and small house. The owner may not be exactly twice as happy for getting a house that is twice as big.
Learn regression weights with L1 penalty
Let us fit a model with all the features available, plus the features we just created above.
End of explanation
model_all = linear_model.Lasso(alpha=5e2, normalize=True) # set parameters
model_all.fit(sales[all_features], sales['price']) # learn weights
Explanation: Using the entire house dataset, learn regression weights using an L1 penalty of 5e2. Make sure to add "normalize=True" when creating the Lasso object.
End of explanation
print model_all.coef_
Explanation: Note that a majority of the weights have been set to zero. So by setting an L1 penalty that's large enough, we are performing a subset selection.
End of explanation
for feat, weight in zip(all_features, model_all.coef_):
if weight != 0.0:
print feat + ':', weight
Explanation: Note that a majority of the weights have been set to zero. So by setting an L1 penalty that's large enough, we are performing a subset selection.
QUIZ QUESTION:
For the model_all model, which of the features have been chosen, i.e. what features had non-zero weights?
End of explanation
testing = pd.read_csv('wk3_kc_house_test_data.csv', dtype=dtype_dict)
training = pd.read_csv('wk3_kc_house_train_data.csv', dtype=dtype_dict)
validation = pd.read_csv('wk3_kc_house_valid_data.csv', dtype=dtype_dict)
Explanation: Selecting an L1 penalty
To find a good L1 penalty, we will explore multiple values using a validation set. Let us do three way split into train, validation, and test sets:
End of explanation
testing['sqft_living_sqrt'] = testing['sqft_living'].apply(sqrt)
testing['sqft_lot_sqrt'] = testing['sqft_lot'].apply(sqrt)
testing['bedrooms_square'] = testing['bedrooms']*testing['bedrooms']
testing['floors_square'] = testing['floors']*testing['floors']
training['sqft_living_sqrt'] = training['sqft_living'].apply(sqrt)
training['sqft_lot_sqrt'] = training['sqft_lot'].apply(sqrt)
training['bedrooms_square'] = training['bedrooms']*training['bedrooms']
training['floors_square'] = training['floors']*training['floors']
validation['sqft_living_sqrt'] = validation['sqft_living'].apply(sqrt)
validation['sqft_lot_sqrt'] = validation['sqft_lot'].apply(sqrt)
validation['bedrooms_square'] = validation['bedrooms']*validation['bedrooms']
validation['floors_square'] = validation['floors']*validation['floors']
Explanation: Make sure to create the 4 features as we did above:
End of explanation
l1_pen_val = np.logspace(1, 7, num=13)
Explanation: Next, we write a loop that does the following:
* For l1_penalty in [10^1, 10^1.5, 10^2, 10^2.5, ..., 10^7] (to get this in Python, type np.logspace(1, 7, num=13).)
* Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list.
* Compute the RSS on VALIDATION data (here you will want to use .predict()) for that l1_penalty
* Report which l1_penalty produced the lowest RSS on validation data.
End of explanation
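As a side note (not required for the assignment), scikit-learn can also choose the penalty by cross-validation. A rough sketch using the same candidate penalties — note that LassoCV cross-validates within the training data rather than using the fixed validation split above, so the selected penalty may differ:
from sklearn.linear_model import LassoCV

lasso_cv = LassoCV(alphas=np.logspace(1, 7, num=13), normalize=True, cv=5)
lasso_cv.fit(training[all_features], training['price'])
print lasso_cv.alpha_  # penalty chosen by cross-validation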
models_diff_l1 = {}
Explanation: Creating a dictionary to store the regression models for each L1 penalty. The key of the dictionary will be the index of the l1_pen_val array, passed as a string
End of explanation
for i in range(len(l1_pen_val)):
key_val = str(i)
models_diff_l1[key_val] = linear_model.Lasso(alpha=l1_pen_val[i], normalize=True) # set parameters
models_diff_l1[key_val].fit(training[all_features], training['price']) # learn weights
Explanation: Creating a regression model for each L1 penalty
End of explanation
def RSS_val(output_vals, predictions):
RSS_error = sum( (output_vals - predictions)**2.0 )
return RSS_error
Explanation: Making a function to compute the RSS on the validation data
End of explanation
RSS_L1_vals = []
Explanation: Making a list to store tuples of the form (RSS value for a L1 penalty, index of L1 penalty array)
End of explanation
for i in range(len(l1_pen_val)):
key_val = str(i)
pred_vals = models_diff_l1[key_val].predict(validation[all_features])
RSS = RSS_val(validation['price'], pred_vals)
RSS_L1_vals.append( (RSS, i) )
Explanation: In this loop, we use the regression model to calculate the predicted output values. We then use the predicted values and observed output values to calculate the RSS error. We then fill in values for the RSS_L1_vals.
End of explanation
print l1_pen_val[ min(RSS_L1_vals)[1] ]
print '%.2e' % ( min(RSS_L1_vals)[0] )
Explanation: QUIZ QUESTIONS
Q1. What was the best value for the l1_penalty?
End of explanation
print ( np.count_nonzero(models_diff_l1[ str(min(RSS_L1_vals)[1]) ].coef_) +
np.count_nonzero(models_diff_l1[ str(min(RSS_L1_vals)[1]) ].intercept_) )
Explanation: QUIZ QUESTION
Also, using this value of L1 penalty, how many nonzero weights do you have?
End of explanation
max_nonzeros = 7
Explanation: Limit the number of nonzero weights
What if we absolutely wanted to limit ourselves to, say, 7 features? This may be important if we want to derive "a rule of thumb" --- an interpretable model that has only a few features in them.
In this section, you are going to implement a simple, two-phase procedure to achieve this goal:
1. Explore a large range of l1_penalty values to find a narrow region of l1_penalty values where models are likely to have the desired number of non-zero weights.
2. Further explore the narrow region you found to find a good value for l1_penalty that achieves the desired sparsity. Here, we will again use a validation set to choose the best value for l1_penalty.
End of explanation
l1_penalty_values = np.logspace(1, 4, num=20)
Explanation: Exploring the larger range of values to find a narrow range with the desired sparsity
Let's define a wide range of possible l1_penalty_values:
End of explanation
list_l1_pen_n_less_nmax = []
list_l1_pen_n_larger_nmax = []
Explanation: Now, implement a loop that search through this space of possible l1_penalty values:
For l1_penalty in np.logspace(8, 10, num=20):
Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list.
Extract the weights of the model and count the number of nonzeros. Save the number of nonzeros to a list.
Creating lists to store L1 penalties for models with features less than max_nonzeros and for models with features more than max_nonzeros
End of explanation
for i in range(len(l1_penalty_values)):
mod_diff_l1_n7 = linear_model.Lasso(alpha=l1_penalty_values[i], normalize=True) # set parameters
mod_diff_l1_n7.fit(training[all_features], training['price']) # learn weights
non_0_weights = ( np.count_nonzero(mod_diff_l1_n7.coef_) +
np.count_nonzero(mod_diff_l1_n7.intercept_) )
if non_0_weights<max_nonzeros:
list_l1_pen_n_less_nmax.append(l1_penalty_values[i])
if non_0_weights>max_nonzeros:
list_l1_pen_n_larger_nmax.append(l1_penalty_values[i])
Explanation: Creating a regression model for each L1 penalty. Then, finding the non-zero entries for the regression models. If number of non-zero weights are larger or smaller than max_nonzeros, store the number of non_zero weights
End of explanation
l1_penalty_min = max(list_l1_pen_n_larger_nmax)
l1_penalty_max = min(list_l1_pen_n_less_nmax)
print 'l1_penalty_min: ', round(l1_penalty_min,0)
print 'l1_penalty_max: ', round(l1_penalty_max,0)
Explanation: Out of this large range, we want to find the two ends of our desired narrow range of l1_penalty. At one end, we will have l1_penalty values that have too few non-zeros, and at the other end, we will have an l1_penalty that has too many non-zeros.
More formally, find:
* The largest l1_penalty that has more non-zeros than max_nonzero (if we pick a penalty smaller than this value, we will definitely have too many non-zero weights)
* Store this value in the variable l1_penalty_min (we will use it later)
* The smallest l1_penalty that has fewer non-zeros than max_nonzero (if we pick a penalty larger than this value, we will definitely have too few non-zero weights)
* Store this value in the variable l1_penalty_max (we will use it later)
QUIZ QUESTIONS
What values did you find for l1_penalty_min and l1_penalty_max?
End of explanation
l1_penalty_values = np.linspace(l1_penalty_min,l1_penalty_max,20)
Explanation: Exploring the narrow range of values to find the solution with the right number of non-zeros that has lowest RSS on the validation set
We will now explore the narrow region of l1_penalty values we found:
End of explanation
RSS_L1_vals_ref = []
Explanation: For l1_penalty in np.linspace(l1_penalty_min,l1_penalty_max,20):
Fit a regression model with a given l1_penalty on TRAIN data. Specify l1_penalty=l1_penalty and l2_penalty=0. in the parameter list. When you call linear_regression.create() make sure you set validation_set = None
Measure the RSS of the learned model on the VALIDATION set
Find the model that has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzero.
Creating a list to store RSS values when the number of non-zero weights is equal to max_nonzeros
End of explanation
for i in range(len(l1_penalty_values)):
mod_diff_l1_ref = linear_model.Lasso(alpha=l1_penalty_values[i], normalize=True) # set parameters
mod_diff_l1_ref.fit(training[all_features], training['price']) # learn weights
non_0_weights = ( np.count_nonzero(mod_diff_l1_ref.coef_) +
np.count_nonzero(mod_diff_l1_ref.intercept_) )
if non_0_weights==max_nonzeros:
pred_vals = mod_diff_l1_ref.predict(validation[all_features])
RSS = RSS_val(validation['price'], pred_vals)
RSS_L1_vals_ref.append( (RSS, i) )
Explanation: Creating a regression model for each L1 penalty. If the the number of non-zero weights is equal to max_nonzeros, storing the RSS on the validation set and the index for this L1 penalty in the l1_penalty_values list
End of explanation
print round( l1_penalty_values[ min(RSS_L1_vals_ref)[1] ] , 0 )
Explanation: QUIZ QUESTIONS
Q1. What value of l1_penalty in our narrow range has the lowest RSS on the VALIDATION set and has sparsity equal to max_nonzeros?
End of explanation
best_L1_index = min(RSS_L1_vals_ref)[1]
mod_diff_l1_ref = linear_model.Lasso(alpha=l1_penalty_values[ best_L1_index ], normalize=True) # set parameters
mod_diff_l1_ref.fit(training[all_features], training['price']) # learn weights
Explanation: Q2. What features in this model have non-zero coefficients?
Re-learning the model with this L1 penalty
End of explanation
if mod_diff_l1_ref.intercept_ != 0:
print 'intercept: %.2e' % (mod_diff_l1_ref.intercept_)
for feat, weight in zip(all_features, mod_diff_l1_ref.coef_):
if weight != 0.0:
print feat + ':', weight
Explanation: Printing the features with non-zero weights and the values of the weights.
End of explanation
<END_TASK> |
15,534 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this workshop we will code up a genetic algorithm for a simple mathematical optimization problem.
Genetic Algorithm is a
* Meta-heuristic
* Inspired by Natural Selection
* Traditionally works on binary data. Can be adopted for other data types as well.
You can find an example illustrating GA below
Step11: The optimization problem
The problem we are considering is a mathematical one
<img src="cone.png" width=500px/>
Decisions
Step12: Great. Now that the class and its basic methods is defined, we move on to code up the GA.
Population
First up is to create an initial population.
Step13: Crossover
We perform a single point crossover between two points
Step14: Mutation
Randomly change a decision such that
Step16: Fitness Evaluation
To evaluate fitness between points we use binary domination. Binary Domination is defined as follows
Step17: Fitness and Elitism
In this workshop we will count the number of points of the population P dominated by a point A as the fitness of point A. This is a very naive measure of fitness since we are using binary domination.
Few prominent alternate methods are
1. Continuous Domination - Section 3.1
2. Non-dominated Sort
3. Non-dominated Sort + Niching
Elitism
Step18: Putting it all together and making the GA
Step19: Visualize
Lets plot the initial population with respect to the final frontier. | <ASSISTANT_TASK:>
Python Code:
# All the imports
from __future__ import print_function, division
from math import *
import random
import sys
import matplotlib.pyplot as plt
# TODO 1: Enter your unity ID here
__author__ = "dndesai"
class O:
"""Basic Class which
- Helps dynamic updates
- Pretty Prints"""
def __init__(self, **kwargs):
self.has().update(**kwargs)
def has(self):
return self.__dict__
def update(self, **kwargs):
self.has().update(kwargs)
return self
def __repr__(self):
show = [':%s %s' % (k, self.has()[k])
for k in sorted(self.has().keys())
if k[0] is not "_"]
txt = ' '.join(show)
if len(txt) > 60:
show = map(lambda x: '\t' + x + '\n', show)
return '{' + ' '.join(show) + '}'
print("Unity ID: ", __author__)
Explanation: In this workshop we will code up a genetic algorithm for a simple mathematical optimization problem.
Genetic Algorithm is a
* Meta-heuristic
* Inspired by Natural Selection
* Traditionally works on binary data. Can be adopted for other data types as well.
You can find an example illustrating GA below
End of explanation
# Few Utility functions
def say(*lst):
"""Print without going to a new line"""
print(*lst, end="")
sys.stdout.flush()
def random_value(low, high, decimals=2):
"""Generate a random number between low and high.
decimals indicates the number of decimal places"""
return round(random.uniform(low, high),decimals)
def gt(a, b): return a > b
def lt(a, b): return a < b
def shuffle(lst):
"""Shuffle a list"""
random.shuffle(lst)
return lst
class Decision(O):
"""Class indicating Decision of a problem"""
def __init__(self, name, low, high):
"""@param name: Name of the decision
@param low: minimum value
@param high: maximum value"""
O.__init__(self, name=name, low=low, high=high)
class Objective(O):
"""Class indicating Objective of a problem"""
def __init__(self, name, do_minimize=True):
"""@param name: Name of the objective
@param do_minimize: Flag indicating if objective has to be minimized or maximized"""
O.__init__(self, name=name, do_minimize=do_minimize)
class Point(O):
"""Represents a member of the population"""
def __init__(self, decisions):
O.__init__(self)
self.decisions = decisions
self.objectives = None
def __hash__(self):
return hash(tuple(self.decisions))
def __eq__(self, other):
return self.decisions == other.decisions
def clone(self):
new = Point(self.decisions)
new.objectives = self.objectives
return new
class Problem(O):
"""Class representing the cone problem."""
def __init__(self):
O.__init__(self)
# TODO 2: Code up decisions and objectives below for the problem
# using the auxilary classes provided above.
self.decisions = [Decision('r',0,10), Decision('h',0,20)]
self.objectives = [Objective('S', True), Objective('T', True)]
@staticmethod
def evaluate(point):
[r, h] = point.decisions
l = (r**2 + h**2)**0.5
sa = pi*r*l
ta = sa + pi * r**2
point.objectives = [sa, ta]
# TODO 3: Evaluate the objectives S and T for the point.
return point.objectives
@staticmethod
def is_valid(point):
[r, h] = point.decisions
# TODO 4: Check if the point has valid decisions
return pi * r**2 * h / 3.0 > 200
def generate_one(self):
# TODO 5: Generate a valid instance of Point.
while (True):
mypoint = Point([random_value(d.low, d.high) for d in self.decisions])
if Problem.is_valid(mypoint):
return mypoint
Explanation: The optimization problem
The problem we are considering is a mathematical one
<img src="cone.png" width=500px/>
Decisions: r in [0, 10] cm; h in [0, 20] cm
Objectives: minimize S, T
Constraints: V > 200cm<sup>3</sup>
End of explanation
def populate(problem, size):
population = []
# TODO 6: Create a list of points of length 'size'
population = [problem.generate_one() for _ in xrange(size)]
return population
Explanation: Great. Now that the class and its basic methods are defined, we move on to code up the GA.
Population
First up is to create an initial population.
End of explanation
def crossover(mom, dad):
# TODO 7: Create a new point which contains decisions from
# the first half of mom and second half of dad
n = len(mom.decisions)
return Point(mom.decisions[:n//2] + dad.decisions[n//2:])
Explanation: Crossover
We perform a single point crossover between two points
End of explanation
def mutate(problem, point, mutation_rate=0.01):
# TODO 8: Iterate through all the decisions in the point
# and if the probability is less than mutation rate
# change the decision(randomly set it between its max and min).
for i in range(len(point.decisions)):
# random.random() returns a float in [0, 1); randrange(0, 1) always returns 0
if random.random() < mutation_rate:
point.decisions[i] = random_value(problem.decisions[i].low, problem.decisions[i].high)
return point
Explanation: Mutation
With probability equal to the mutation rate, randomly change a decision such that its new value still lies between that decision's minimum and maximum.
End of explanation
def bdom(problem, one, two):
"""Return if one dominates two"""
objs_one = problem.evaluate(one)
objs_two = problem.evaluate(two)
dominates = False
# TODO 9: Return True/False based on the definition
# of bdom above.
eq_or_better = True
for i, o in enumerate(objs_one):
eq_or_better = o <= objs_two[i] if problem.objectives[i].do_minimize else o >= objs_two[i]
if not eq_or_better:
return False
if not dominates:
dominates = o < objs_two[i] if problem.objectives[i].do_minimize else o > objs_two[i]
return dominates
Explanation: Fitness Evaluation
To evaluate fitness between points we use binary domination. Binary Domination is defined as follows:
* Consider two points one and two.
* For every objective o and t of one and two respectively, o is at least as good as t (o <= t when minimizing).
* For at least one objective, o is strictly better than t (o < t when minimizing).
Note: Binary Domination is not the best method to evaluate fitness but due to its simplicity we choose to use it for this workshop.
End of explanation
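A quick illustrative check of bdom on two random valid points (both calls may return False when neither point dominates the other):
_p = Problem()
_a, _b = _p.generate_one(), _p.generate_one()
print(bdom(_p, _a, _b), bdom(_p, _b, _a))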
def fitness(problem, population, point):
dominates = 0
# TODO 10: Evaluate fitness of a point.
# For this workshop define fitness of a point
# as the number of points dominated by it.
# For example point dominates 5 members of population,
# then fitness of point is 5.
for another in population:
if bdom(problem, point, another):
dominates += 1
return dominates
def elitism(problem, population, retain_size):
# TODO 11: Sort the population with respect to the fitness
# of the points and return the top 'retain_size' points of the population
fitlist = [fitness(problem, population, p) for p in population]
new_pop = [y for x, y in sorted(zip(fitlist, population), key=lambda pair: pair[0], reverse=True)]
return new_pop[:retain_size]
Explanation: Fitness and Elitism
In this workshop we will count the number of points of the population P dominated by a point A as the fitness of point A. This is a very naive measure of fitness since we are using binary domination.
Few prominent alternate methods are
1. Continuous Domination - Section 3.1
2. Non-dominated Sort
3. Non-dominated Sort + Niching
Elitism: Sort points with respect to the fitness and select the top points.
End of explanation
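For the curious, a sketch of one common form of continuous domination (not used in this workshop): each point's objectives are compared through a mean exponential loss, and one dominates two if losing from one hurts less. Objective values are usually normalized to [0, 1] first, which this sketch skips.
import math

def cdom(problem, one, two):
    def loss(xs, ys):
        n = len(xs)
        weights = [-1 if obj.do_minimize else 1 for obj in problem.objectives]
        return sum(-math.e ** (w * (x - y) / n) for w, x, y in zip(weights, xs, ys)) / n
    x, y = problem.evaluate(one), problem.evaluate(two)
    return loss(x, y) < loss(y, x)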
def ga(pop_size = 100, gens = 250):
problem = Problem()
population = populate(problem, pop_size)
[problem.evaluate(point) for point in population]
initial_population = [point.clone() for point in population]
gen = 0
while gen < gens:
say(".")
children = []
for _ in range(pop_size):
mom = random.choice(population)
dad = random.choice(population)
while (mom == dad):
dad = random.choice(population)
child = mutate(problem, crossover(mom, dad))
if problem.is_valid(child) and child not in population+children:
children.append(child)
population += children
population = elitism(problem, population, pop_size)
gen += 1
print("")
return initial_population, population
Explanation: Putting it all together and making the GA
End of explanation
def plot_pareto(initial, final):
initial_objs = [point.objectives for point in initial]
final_objs = [point.objectives for point in final]
initial_x = [i[0] for i in initial_objs]
initial_y = [i[1] for i in initial_objs]
final_x = [i[0] for i in final_objs]
final_y = [i[1] for i in final_objs]
plt.scatter(initial_x, initial_y, color='b', marker='+', label='initial')
plt.scatter(final_x, final_y, color='r', marker='o', label='final')
plt.title("Scatter Plot between initial and final population of GA")
plt.ylabel("Total Surface Area(T)")
plt.xlabel("Curved Surface Area(S)")
plt.legend(loc=9, bbox_to_anchor=(0.5, -0.175), ncol=2)
plt.show()
initial, final = ga()
plot_pareto(initial, final)
Explanation: Visualize
Let's plot the initial population with respect to the final frontier.
End of explanation
<END_TASK> |
15,535 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
你的第一个神经网络
在此项目中,你将构建你的第一个神经网络,并用该网络预测每日自行车租客人数。我们提供了一些代码,但是需要你来实现神经网络(大部分内容)。提交此项目后,欢迎进一步探索该数据和模型。
Step1: 加载和准备数据
构建神经网络的关键一步是正确地准备数据。不同尺度级别的变量使网络难以高效地掌握正确的权重。我们在下方已经提供了加载和准备数据的代码。你很快将进一步学习这些代码!
Step2: 数据简介
此数据集包含的是从 2011 年 1 月 1 日到 2012 年 12 月 31 日期间每天每小时的骑车人数。骑车用户分成临时用户和注册用户,cnt 列是骑车用户数汇总列。你可以在上方看到前几行数据。
下图展示的是数据集中前 10 天左右的骑车人数(某些天不一定是 24 个条目,所以不是精确的 10 天)。你可以在这里看到每小时租金。这些数据很复杂!周末的骑行人数少些,工作日上下班期间是骑行高峰期。我们还可以从上方的数据中看到温度、湿度和风速信息,所有这些信息都会影响骑行人数。你需要用你的模型展示所有这些数据。
Step3: 虚拟变量(哑变量)
下面是一些分类变量,例如季节、天气、月份。要在我们的模型中包含这些数据,我们需要创建二进制虚拟变量。用 Pandas 库中的 get_dummies() 就可以轻松实现。
Step4: 调整目标变量
为了更轻松地训练网络,我们将对每个连续变量标准化,即转换和调整变量,使它们的均值为 0,标准差为 1。
我们会保存换算因子,以便当我们使用网络进行预测时可以还原数据。
Step5: 将数据拆分为训练、测试和验证数据集
我们将大约最后 21 天的数据保存为测试数据集,这些数据集会在训练完网络后使用。我们将使用该数据集进行预测,并与实际的骑行人数进行对比。
Step6: 我们将数据拆分为两个数据集,一个用作训练,一个在网络训练完后用来验证网络。因为数据是有时间序列特性的,所以我们用历史数据进行训练,然后尝试预测未来数据(验证数据集)。
Step7: 开始构建网络
下面你将构建自己的网络。我们已经构建好结构和反向传递部分。你将实现网络的前向传递部分。还需要设置超参数:学习速率、隐藏单元的数量,以及训练传递数量。
<img src="assets/neural_network.png" width=300px>
该网络有两个层级,一个隐藏层和一个输出层。隐藏层级将使用 S 型函数作为激活函数。输出层只有一个节点,用于递归,节点的输出和节点的输入相同。即激活函数是 $f(x)=x$。这种函数获得输入信号,并生成输出信号,但是会考虑阈值,称为激活函数。我们完成网络的每个层级,并计算每个神经元的输出。一个层级的所有输出变成下一层级神经元的输入。这一流程叫做前向传播(forward propagation)。
我们在神经网络中使用权重将信号从输入层传播到输出层。我们还使用权重将错误从输出层传播回网络,以便更新权重。这叫做反向传播(backpropagation)。
提示:你需要为反向传播实现计算输出激活函数 ($f(x) = x$) 的导数。如果你不熟悉微积分,其实该函数就等同于等式 $y = x$。该等式的斜率是多少?也就是导数 $f(x)$。
你需要完成以下任务:
实现 S 型激活函数。将 __init__ 中的 self.activation_function 设为你的 S 型函数。
在 train 方法中实现前向传递。
在 train 方法中实现反向传播算法,包括计算输出错误。
在 run 方法中实现前向传递。
Step8: 单元测试
运行这些单元测试,检查你的网络实现是否正确。这样可以帮助你确保网络已正确实现,然后再开始训练网络。这些测试必须成功才能通过此项目。
Step9: 训练网络
现在你将设置网络的超参数。策略是设置的超参数使训练集上的错误很小但是数据不会过拟合。如果网络训练时间太长,或者有太多的隐藏节点,可能就会过于针对特定训练集,无法泛化到验证数据集。即当训练集的损失降低时,验证集的损失将开始增大。
你还将采用随机梯度下降 (SGD) 方法训练网络。对于每次训练,都获取随机样本数据,而不是整个数据集。与普通梯度下降相比,训练次数要更多,但是每次时间更短。这样的话,网络训练效率更高。稍后你将详细了解 SGD。
选择迭代次数
也就是训练网络时从训练数据中抽样的批次数量。迭代次数越多,模型就与数据越拟合。但是,如果迭代次数太多,模型就无法很好地泛化到其他数据,这叫做过拟合。你需要选择一个使训练损失很低并且验证损失保持中等水平的数字。当你开始过拟合时,你会发现训练损失继续下降,但是验证损失开始上升。
选择学习速率
速率可以调整权重更新幅度。如果速率太大,权重就会太大,导致网络无法与数据相拟合。建议从 0.1 开始。如果网络在与数据拟合时遇到问题,尝试降低学习速率。注意,学习速率越低,权重更新的步长就越小,神经网络收敛的时间就越长。
选择隐藏节点数量
隐藏节点越多,模型的预测结果就越准确。尝试不同的隐藏节点的数量,看看对性能有何影响。你可以查看损失字典,寻找网络性能指标。如果隐藏单元的数量太少,那么模型就没有足够的空间进行学习,如果太多,则学习方向就有太多的选择。选择隐藏单元数量的技巧在于找到合适的平衡点。
Step10: 检查预测结果
使用测试数据看看网络对数据建模的效果如何。如果完全错了,请确保网络中的每步都正确实现。 | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
Explanation: Your first neural network
In this project, you will build your first neural network and use it to predict daily bike rental ridership. We provide some of the code, but the implementation of the neural network (most of it) is left to you. After submitting this project, feel free to explore the data and the model further.
End of explanation
data_path = 'Bike-Sharing-Dataset/hour.csv'
rides = pd.read_csv(data_path)
rides.head()
Explanation: Load and prepare the data
A critical step in building a neural network is getting the data into the right shape. Variables on different scales make it hard for the network to efficiently learn the correct weights. The code for loading and preparing the data is provided below. You will learn more about this code soon!
End of explanation
rides[:24*10].plot(x='dteday', y='cnt')
Explanation: A brief look at the data
This dataset contains hourly ride counts for every day from January 1, 2011 to December 31, 2012. Riders are split into casual and registered users, and the cnt column is the total count of riders. You can see the first few rows of the data above.
The plot below shows the number of riders over roughly the first 10 days of the dataset (some days do not have exactly 24 entries, so it is not precisely 10 days). You can see the hourly rentals here. The data is complicated! Ridership is lower on weekends and peaks during commuting hours on weekdays. The data above also includes temperature, humidity and wind speed, all of which affect ridership. You will need to capture all of this with your model.
End of explanation
dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']
for each in dummy_fields:
dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)
rides = pd.concat([rides, dummies], axis=1)
fields_to_drop = ['instant', 'dteday', 'season', 'weathersit',
'weekday', 'atemp', 'mnth', 'workingday', 'hr']
data = rides.drop(fields_to_drop, axis=1)
data.head()
Explanation: Dummy variables
Some of the fields are categorical, such as season, weather and month. To include them in our model we need to create binary dummy variables. This is easy to do with Pandas get_dummies().
End of explanation
quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']
# Store scalings in a dictionary so we can convert back later
scaled_features = {}
for each in quant_features:
mean, std = data[each].mean(), data[each].std()
scaled_features[each] = [mean, std]
data.loc[:, each] = (data[each] - mean)/std
Explanation: Scaling the target variables
To make training easier, we standardize each continuous variable, shifting and scaling it to have a mean of 0 and a standard deviation of 1.
We save the scaling factors so that we can undo the transformation when we use the network for predictions.
End of explanation
# Save data for approximately the last 21 days
test_data = data[-21*24:]
# Now remove the test data from the data set
data = data[:-21*24]
# Separate the data into features and targets
target_fields = ['cnt', 'casual', 'registered']
features, targets = data.drop(target_fields, axis=1), data[target_fields]
test_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]
Explanation: Splitting the data into training, testing and validation sets
We save roughly the last 21 days of data as a test set, to be used after the network is trained. We will use it to make predictions and compare them with the actual ride counts.
End of explanation
# Hold out the last 60 days or so of the remaining data as a validation set
train_features, train_targets = features[:-60*24], targets[:-60*24]
val_features, val_targets = features[-60*24:], targets[-60*24:]
Explanation: We split the remaining data into two sets, one for training and one for validating the network once it has been trained. Because the data has a time-series character, we train on historical data and then try to predict future data (the validation set).
End of explanation
class NeuralNetwork(object):
def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
# Set number of nodes in input, hidden and output layers.
self.input_nodes = input_nodes
self.hidden_nodes = hidden_nodes
self.output_nodes = output_nodes
# Initialize weights
self.weights_input_to_hidden = np.random.normal(0.0, self.input_nodes**-0.5,
(self.input_nodes, self.hidden_nodes))
self.weights_hidden_to_output = np.random.normal(0.0, self.hidden_nodes**-0.5,
(self.hidden_nodes, self.output_nodes))
self.lr = learning_rate
#### TODO: Set self.activation_function to your implemented sigmoid function ####
#
# Note: in Python, you can define a function with a lambda expression,
# as shown below.
self.activation_function = lambda x : 1/(1+np.exp(-x)) # Replace 0 with your sigmoid calculation.
### If the lambda code above is not something you're familiar with,
# You can uncomment out the following three lines and put your
# implementation there instead.
#
#def sigmoid(x):
# return 0 # Replace 0 with your sigmoid calculation here
#self.activation_function = sigmoid
def train(self, features, targets):
''' Train the network on batch of features and targets.
Arguments
---------
features: 2D array, each row is one data record, each column is a feature
targets: 1D array of target values
'''
n_records = features.shape[0]
delta_weights_i_h = np.zeros(self.weights_input_to_hidden.shape)
delta_weights_h_o = np.zeros(self.weights_hidden_to_output.shape)
for X, y in zip(features, targets):
#### Implement the forward pass here ####
### Forward pass ###
# TODO: Hidden layer - Replace these values with your calculations.
hidden_inputs = np.dot(X, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with your calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # # signals from final output layer
#### Implement the backward pass here ####
### Backward pass ###
# TODO: Output error - Replace this value with your calculations.
error = y - final_outputs # Output layer error is the difference between desired target and actual output.
output_error_term = error
# TODO: Calculate the hidden layer's contribution to the error
hidden_error = np.dot(output_error_term, self.weights_hidden_to_output.T)
# TODO: Backpropagated error terms - Replace these values with your calculations.
hidden_error_term = hidden_error * hidden_outputs * (1 - hidden_outputs)
# Weight step (input to hidden)
delta_weights_i_h += hidden_error_term * X[:, None]
# Weight step (hidden to output)
delta_weights_h_o += output_error_term * hidden_outputs[:,None]
# TODO: Update the weights - Replace these values with your calculations.
self.weights_hidden_to_output += self.lr * delta_weights_h_o / n_records # update hidden-to-output weights with gradient descent step
self.weights_input_to_hidden += self.lr * delta_weights_i_h / n_records # update input-to-hidden weights with gradient descent step
def run(self, features):
''' Run a forward pass through the network with input features
Arguments
---------
features: 1D array of feature values
'''
#### Implement the forward pass here ####
# TODO: Hidden layer - replace these values with the appropriate calculations.
hidden_inputs = np.dot(features, self.weights_input_to_hidden) # signals into hidden layer
hidden_outputs = self.activation_function(hidden_inputs) # signals from hidden layer
# TODO: Output layer - Replace these values with the appropriate calculations.
final_inputs = np.dot(hidden_outputs, self.weights_hidden_to_output) # signals into final output layer
final_outputs = final_inputs # signals from final output layer
return final_outputs
def MSE(y, Y):
return np.mean((y-Y)**2)
Explanation: Build the network
Below you will build your own network. The structure and the backward pass are already in place; you will implement the forward pass. You also need to set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.
<img src="assets/neural_network.png" width=300px>
The network has two layers, a hidden layer and an output layer. The hidden layer uses the sigmoid as its activation function. The output layer has a single node and is used for regression: the node's output is the same as its input, i.e. the activation function is $f(x)=x$. A function that takes input signals and produces an output signal, taking a threshold into account, is called an activation function. We work through each layer of the network, computing the output of every neuron. All outputs of one layer become the inputs of the neurons in the next layer. This process is called forward propagation.
We use weights to propagate signals from the input layer to the output layer, and we also use weights to propagate the error from the output layer back through the network so the weights can be updated. This is called backpropagation.
Hint: for backpropagation you need the derivative of the output activation function $f(x) = x$. If you are not familiar with calculus, this function is simply the line $y = x$. What is its slope? That is the derivative of $f(x)$.
You need to complete the following tasks:
Implement the sigmoid activation function. Set self.activation_function in __init__ to your sigmoid.
Implement the forward pass in the train method.
Implement the backpropagation algorithm in the train method, including computing the output error.
Implement the forward pass in the run method.
End of explanation
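A small illustrative check (not part of the project) of the sigmoid derivative used in the backward pass — hidden_error_term multiplies by hidden_outputs * (1 - hidden_outputs) because d/dx sigmoid(x) = sigmoid(x) * (1 - sigmoid(x)):
sigmoid = lambda x: 1 / (1 + np.exp(-x))
x0, h = 0.5, 1e-6
numeric = (sigmoid(x0 + h) - sigmoid(x0 - h)) / (2 * h)   # central difference
analytic = sigmoid(x0) * (1 - sigmoid(x0))
print(numeric, analytic)  # the two values agree to ~6 decimal places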
import unittest
inputs = np.array([[0.5, -0.2, 0.1]])
targets = np.array([[0.4]])
test_w_i_h = np.array([[0.1, -0.2],
[0.4, 0.5],
[-0.3, 0.2]])
test_w_h_o = np.array([[0.3],
[-0.1]])
class TestMethods(unittest.TestCase):
##########
# Unit tests for data loading
##########
def test_data_path(self):
# Test that file path to dataset has been unaltered
self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')
def test_data_loaded(self):
# Test that data frame loaded
self.assertTrue(isinstance(rides, pd.DataFrame))
##########
# Unit tests for network functionality
##########
def test_activation(self):
network = NeuralNetwork(3, 2, 1, 0.5)
# Test that the activation function is a sigmoid
self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))
def test_train(self):
# Test that weights are updated correctly on training
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
network.train(inputs, targets)
self.assertTrue(np.allclose(network.weights_hidden_to_output,
np.array([[ 0.37275328],
[-0.03172939]])))
self.assertTrue(np.allclose(network.weights_input_to_hidden,
np.array([[ 0.10562014, -0.20185996],
[0.39775194, 0.50074398],
[-0.29887597, 0.19962801]])))
def test_run(self):
# Test correctness of run method
network = NeuralNetwork(3, 2, 1, 0.5)
network.weights_input_to_hidden = test_w_i_h.copy()
network.weights_hidden_to_output = test_w_h_o.copy()
self.assertTrue(np.allclose(network.run(inputs), 0.09998924))
suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner().run(suite)
Explanation: Unit tests
Run these unit tests to check that your network is implemented correctly. This helps you make sure the network is correct before you start training it. These tests must pass for the project to be accepted.
End of explanation
import sys
### Set the hyperparameters here ###
iterations = 5200
learning_rate = 0.8
hidden_nodes = 9
output_nodes = 1
N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)
losses = {'train':[], 'validation':[]}
for ii in range(iterations):
# Go through a random batch of 128 records from the training data set
batch = np.random.choice(train_features.index, size=128)
X, y = train_features.ix[batch].values, train_targets.ix[batch]['cnt']
network.train(X, y)
# Printing out the training progress
train_loss = MSE(network.run(train_features).T, train_targets['cnt'].values)
val_loss = MSE(network.run(val_features).T, val_targets['cnt'].values)
sys.stdout.write("\rProgress: {:2.1f}".format(100 * ii/float(iterations)) \
+ "% ... Training loss: " + str(train_loss)[:5] \
+ " ... Validation loss: " + str(val_loss)[:5])
sys.stdout.flush()
losses['train'].append(train_loss)
losses['validation'].append(val_loss)
plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
_ = plt.ylim()
Explanation: Train the network
Now you will set the network's hyperparameters. The strategy is to pick hyperparameters that give a low error on the training set without overfitting. If you train the network too long, or use too many hidden nodes, it may become too specialized to the training set and fail to generalize to the validation set; when that happens, the training loss keeps falling while the validation loss starts to rise.
You will also train the network with stochastic gradient descent (SGD): for each training pass, a random sample of the data is used instead of the whole dataset. This requires more passes than ordinary gradient descent, but each pass is faster, so training is more efficient overall. You will learn more about SGD later.
Choosing the number of iterations
This is the number of batches sampled from the training data during training. More iterations mean a better fit to the data, but too many and the model will not generalize well to other data — that is overfitting. Pick a number where the training loss is low and the validation loss stays moderate. Once you start overfitting, you will see the training loss keep dropping while the validation loss starts to climb.
Choosing the learning rate
This scales the size of the weight updates. If it is too large the weights blow up and the network fails to fit the data. A good starting point is 0.1. If the network has trouble fitting the data, try lowering the learning rate. Note that the lower the learning rate, the smaller the weight-update steps and the longer the network takes to converge.
Choosing the number of hidden nodes
The more hidden nodes, the more accurate the model's predictions can be. Try different numbers of hidden nodes and see how it affects performance. You can look at the losses dictionary as a measure of how well the network is doing. With too few hidden units the model does not have enough capacity to learn; with too many there are too many directions to explore. The trick is to find a good balance.
End of explanation
fig, ax = plt.subplots(figsize=(8,4))
mean, std = scaled_features['cnt']
predictions = network.run(test_features).T*std + mean
ax.plot(predictions[0], label='Prediction')
ax.plot((test_targets['cnt']*std + mean).values, label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()
dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Explanation: Check out your predictions
Use the test data to see how well the network models the data. If the predictions are completely off, make sure every step of the network is implemented correctly.
End of explanation
<END_TASK> |
15,536 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Let's look at a traditional logistic regression model for some mildly complicated data.
Step1: Another pair of metrics
Step2: F1 is the harmonic mean of precision and recall | <ASSISTANT_TASK:>
Python Code:
# synthetic data
X, y = make_classification(n_samples=10000, n_features=50, n_informative=12,
n_redundant=2, n_classes=2, random_state=0)
# statsmodels uses logit, not logistic
lm = sm.Logit(y, X).fit()
results = lm.summary()
print(results)
# hard problem
lm = sm.Logit(y, X).fit(maxiter=1000)
results = lm.summary()
print(results)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.2, random_state=0)
# 'null' prediction
print(np.sum(yte) / len(yte))
null_preds = np.ones(len(yte))
print('{:.3f}'.format(accuracy_score(yte, null_preds)))
# linear model - logistic regression
lm = LogisticRegression().fit(Xtr, ytr)
lm.coef_
figsize(12, 6)
plt.scatter(range(len(lm.coef_[0])), lm.coef_[0])
plt.xlabel('predictor')
plt.ylabel('coefficient');
preds = lm.predict(Xte)
prob_preds = lm.predict_proba(Xte)
print(preds[:5])
print(prob_preds[:5])
accuracy_score(yte, preds)
import pandas as pd
pd.DataFrame(confusion_matrix(yte, preds)).apply(lambda x: x / sum(x), axis=1)
def plot_roc(actual, predicted):
fpr, tpr, thr = roc_curve(actual, predicted)
roc_auc = auc(fpr, tpr)
# exercise: add code to color curve by threshold value
figsize(12, 8)
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC (Class 1)')
plt.legend(loc="lower right");
return
plot_roc(yte, null_preds)
plot_roc(yte, prob_preds[:,1])
# more synthetic data
Xub, yub = make_classification(n_samples=10000, n_features=50, n_informative=12,
n_redundant=2, n_classes=2, weights=(0.99, 0.01), random_state=0)
np.sum(yub)
Xtrub, Xteub, ytrub, yteub = train_test_split(Xub, yub, test_size=0.2, random_state=0)
lm = LogisticRegression().fit(Xtrub, ytrub)
accuracy_score(yteub, lm.predict(Xteub))
plot_roc(yteub, lm.predict_proba(Xteub)[:,1])
Explanation: Let's look at a traditional logistic regression model for some mildly complicated data.
End of explanation
# using data from balanced classes
prec, rec, thresh = precision_recall_curve(yte, prob_preds[:,1])
figsize(12, 6)
plt.plot(rec, prec, label='AUC={0:0.2f}'.format(average_precision_score(yte, prob_preds[:,1])))
plt.title('Precision-Recall')
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.legend(loc='best');
# classification_report
print(classification_report(yte, preds))
Explanation: Another pair of metrics: Precision and Recall:
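In terms of true positives (TP), false positives (FP) and false negatives (FN), the standard definitions are
$$precision = \frac{TP}{TP + FP}, \qquad recall = \frac{TP}{TP + FN}.$$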
These are sometimes also plotted against each other:
End of explanation
# if time - l1 vs l2 penalty
lm = LogisticRegression(penalty='l1', solver='liblinear').fit(Xtr, ytr)
plt.scatter(range(len(lm.coef_[0])), lm.coef_[0]);
Explanation: F1 is the harmonic mean of precision and recall:
$$F1 = \frac{2\cdot precision\cdot recall}{precision + recall}.$$
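With the held-out predictions from earlier (preds and yte), scikit-learn reports these metrics directly; a small illustrative snippet, not part of the original notebook:
```python
from sklearn.metrics import precision_score, recall_score, f1_score

# Class-1 precision, recall and F1 for the balanced-classes model above
print('precision: {:.3f}'.format(precision_score(yte, preds)))
print('recall:    {:.3f}'.format(recall_score(yte, preds)))
print('F1:        {:.3f}'.format(f1_score(yte, preds)))
```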
End of explanation
<END_TASK> |
15,537 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2020 The TensorFlow Authors.
Step1: Wiki Talk Comments Toxicity Prediction
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Note that depending on when you run the cell below, you may receive a warning about the default version of TensorFlow in Colab switching to TensorFlow 2.X soon. You can safely ignore that warning as this notebook was designed to be compatible with TensorFlow 1.X and 2.X.
Step3: Though TFCO is compatible with eager and graph execution, this notebook assumes that eager execution is enabled by default. To ensure that nothing breaks, eager execution will be enabled in the cell below.
Step4: Hyper-parameters
First, we set some hyper-parameters needed for the data preprocessing and model training.
Step5: Load and pre-process dataset
Next, we download the dataset and preprocess it. The train, test and validation sets are provided as separate CSV files.
Step6: The comment column contains the discussion comments and is_toxic column indicates whether or not a comment is annotated as toxic.
In the following, we
Step7: Next, we tokenize the textual comments using the Tokenizer provided by Keras. We use the training set comments alone to build a vocabulary of tokens, and use them to convert all the comments into a (padded) sequence of tokens of the same length.
Step8: Finally, we identify comments related to certain sensitive topic groups. We consider a subset of the <a href="https
Step9: We then create separate group membership matrices for the train, test and validation sets, where the rows correspond to comments, the columns correspond to the four sensitive groups, and each entry is a boolean indicating whether the comment contains a term from the topic group.
Step10: As shown below, all four topic groups constitute only a small fraction of the overall dataset, and have varying proportions of toxic comments.
Step11: We see that only 1.3% of the dataset contains comments related to sexuality. Among them, 37% of the comments have been annotated as being toxic. Note that this is significantly larger than the overall proportion of comments annotated as toxic. This could be because the few comments that used those identity terms did so in pejorative contexts. As mentioned above, this could cause our model to disporportionately misclassify comments as toxic when they include those terms. Since this is the concern, we'll make sure to look at the False Positive Rate when we evaluate the model's performance.
Build CNN toxicity prediction model
Having prepared the dataset, we now build a Keras model for prediction toxicity. The model we use is a convolutional neural network (CNN) with the same architecture used by the Conversation AI project for their debiasing analysis. We adapt <a href="https
Step12: We use the downloaded GloVe embeddings to create an embedding matrix, where the rows contain the word embeddings for the tokens in the Tokenizer's vocabulary.
Step13: We are now ready to specify the Keras layers. We write a function to create a new model, which we will invoke whenever we wish to train a new model.
Step14: We also define a method to set random seeds. This is done to ensure reproducible results.
Step15: Fairness indicators
We also write functions to plot fairness indicators.
Step16: Train unconstrained model
For the first model we train, we optimize a simple cross-entropy loss without any constraints..
Step17: Note
Step18: Having trained the unconstrained model, we plot various evaluation metrics for the model on the test set.
Step19: As explained above, we are concentrating on the false positive rate. In their current version (0.1.2), Fairness Indicators select false negative rate by default. After running the line below, go ahead and deselect false_negative_rate and select false_positive_rate to look at the metric we are interested in.
Step20: While the overall false positive rate is less than 2%, the false positive rate on the sexuality-related comments is significantly higher. This is because the sexuality group is very small in size, and has a disproportionately higher fraction of comments annotated as toxic. Hence, training a model without constraints results in the model believing that sexuality-related terms are a strong indicator of toxicity.
Train with constraints on false positive rates
To avoid large differences in false positive rates across different groups, we
next train a model by constraining the false positive rates for each group to be within a desired limit. In this case, we will optimize the error rate of the model subject to the per-group false positive rates being lesser or equal to 2%.
Training on minibatches with per-group constraints can be challenging for this dataset, however, as the groups we wish to constraint are all small in size, and it's likely that the individual minibatches contain very few examples from each group. Hence the gradients we compute during training will be noisy, and result in the model converging very slowly.
To mitigate this problem, we recommend using two streams of minibatches, with the first stream formed as before from the entire training set, and the second stream formed solely from the sensitive group examples. We will compute the objective using minibatches from the first stream and the per-group constraints using minibatches from the second stream. Because the batches from the second stream are likely to contain a larger number of examples from each group, we expect our updates to be less noisy.
We create separate features, labels and groups tensors to hold the minibatches from the two streams.
Step21: We instantiate a new model, and compute predictions for minibatches from the two streams.
Step22: We then set up a constrained optimization problem with the error rate as the objective and with constraints on the per-group false positive rate.
Step23: We are ready to train the model. We maintain a separate counter for the two minibatch streams. Every time we perform a gradient update, we will have to copy the minibatch contents from the first stream to the tensors features_tensor and labels_tensor, and the minibatch contents from the second stream to the tensors features_tensor_sen, labels_tensor_sen and groups_tensor_sen.
Note
Step24: Having trained the constrained model, we plot various evaluation metrics for the model on the test set.
Step25: As with last time, remember to select false_positive_rate. | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2020 The TensorFlow Authors.
End of explanation
#@title pip installs
!pip install git+https://github.com/google-research/tensorflow_constrained_optimization
!pip install git+https://github.com/tensorflow/fairness-indicators
Explanation: Wiki Talk Comments Toxicity Prediction
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/responsible_ai/fairness_indicators/tutorials/Fairness_Indicators_TFCO_Wiki_Case_Study"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/fairness-indicators/blob/master/g3doc/tutorials/Fairness_Indicators_TFCO_Wiki_Case_Study.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/fairness-indicators/tree/master/g3doc/tutorials/Fairness_Indicators_TFCO_Wiki_Case_Study.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/fairness-indicators/g3doc/tutorials/Fairness_Indicators_TFCO_Wiki_Case_Study.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
In this example, we consider the task of predicting whether a discussion comment posted on a Wiki talk page contains toxic content (i.e. contains content that is “rude, disrespectful or unreasonable”). We use a public <a href="https://figshare.com/articles/Wikipedia_Talk_Labels_Toxicity/4563973">dataset</a> released by the <a href="https://conversationai.github.io/">Conversation AI</a> project, which contains over 100k comments from the English Wikipedia that are annotated by crowd workers (see paper for labeling methodology).
One of the challenges with this dataset is that a very small proportion of the comments cover sensitive topics such as sexuality or religion. As such, training a neural network model on this dataset leads to disparate performance on the smaller sensitive topics. This can mean that innocuous statements about those topics might get incorrectly flagged as 'toxic' at higher rates, causing speech to be unfairly censored.
By imposing constraints during training, we can train a fairer model that performs more equitably across the different topic groups.
We will use the TFCO library to optimize for our fairness goal during training.
Installation
Let's first install and import the relevant libraries. Note that you may have to restart your colab once after running the first cell because of outdated packages in the runtime. After doing so, there should be no further issues with imports.
End of explanation
#@title Import Modules
import io
import os
import shutil
import sys
import tempfile
import time
import urllib
import zipfile
import apache_beam as beam
from IPython.display import display
from IPython.display import HTML
import numpy as np
import pandas as pd
import tensorflow as tf
import tensorflow.keras as keras
from tensorflow.keras import layers
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.preprocessing import text
import tensorflow_constrained_optimization as tfco
import tensorflow_model_analysis as tfma
import fairness_indicators as fi
from tensorflow_model_analysis.addons.fairness.view import widget_view
from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_evaluate_graph
from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_extractor
from tensorflow_model_analysis.model_agnostic_eval import model_agnostic_predict as agnostic_predict
Explanation: Note that depending on when you run the cell below, you may receive a warning about the default version of TensorFlow in Colab switching to TensorFlow 2.X soon. You can safely ignore that warning as this notebook was designed to be compatible with TensorFlow 1.X and 2.X.
End of explanation
#@title Enable Eager Execution and Print Versions
if tf.__version__ < "2.0.0":
tf.enable_eager_execution()
print("Eager execution enabled.")
else:
print("Eager execution enabled by default.")
print("TensorFlow " + tf.__version__)
print("TFMA " + tfma.__version__)
print("FI " + fi.version.__version__)
Explanation: Though TFCO is compatible with eager and graph execution, this notebook assumes that eager execution is enabled by default. To ensure that nothing breaks, eager execution will be enabled in the cell below.
End of explanation
hparams = {
"batch_size": 128,
"cnn_filter_sizes": [128, 128, 128],
"cnn_kernel_sizes": [5, 5, 5],
"cnn_pooling_sizes": [5, 5, 40],
"constraint_learning_rate": 0.01,
"embedding_dim": 100,
"embedding_trainable": False,
"learning_rate": 0.005,
"max_num_words": 10000,
"max_sequence_length": 250
}
Explanation: Hyper-parameters
First, we set some hyper-parameters needed for the data preprocessing and model training.
End of explanation
toxicity_data_url = ("https://github.com/conversationai/unintended-ml-bias-analysis/"
"raw/e02b9f12b63a39235e57ba6d3d62d8139ca5572c/data/")
data_train = pd.read_csv(toxicity_data_url + "wiki_train.csv")
data_test = pd.read_csv(toxicity_data_url + "wiki_test.csv")
data_vali = pd.read_csv(toxicity_data_url + "wiki_dev.csv")
data_train.head()
Explanation: Load and pre-process dataset
Next, we download the dataset and preprocess it. The train, test and validation sets are provided as separate CSV files.
End of explanation
labels_train = data_train["is_toxic"].values.reshape(-1, 1) * 1.0
labels_test = data_test["is_toxic"].values.reshape(-1, 1) * 1.0
labels_vali = data_vali["is_toxic"].values.reshape(-1, 1) * 1.0
Explanation: The comment column contains the discussion comments, and the is_toxic column indicates whether or not a comment is annotated as toxic.
In the following, we:
1. Separate out the labels
2. Tokenize the text comments
3. Identify comments that contain sensitive topic terms
First, we separate the labels from the train, test and validation sets. The labels are all binary (0 or 1).
End of explanation
tokenizer = text.Tokenizer(num_words=hparams["max_num_words"])
tokenizer.fit_on_texts(data_train["comment"])
def prep_text(texts, tokenizer, max_sequence_length):
# Turns text into into padded sequences.
text_sequences = tokenizer.texts_to_sequences(texts)
return sequence.pad_sequences(text_sequences, maxlen=max_sequence_length)
text_train = prep_text(data_train["comment"], tokenizer, hparams["max_sequence_length"])
text_test = prep_text(data_test["comment"], tokenizer, hparams["max_sequence_length"])
text_vali = prep_text(data_vali["comment"], tokenizer, hparams["max_sequence_length"])
Explanation: Next, we tokenize the textual comments using the Tokenizer provided by Keras. We use the training set comments alone to build a vocabulary of tokens, and use them to convert all the comments into a (padded) sequence of tokens of the same length.
End of explanation
terms = {
'sexuality': ['gay', 'lesbian', 'bisexual', 'homosexual', 'straight', 'heterosexual'],
'gender identity': ['trans', 'transgender', 'cis', 'nonbinary'],
'religion': ['christian', 'muslim', 'jewish', 'buddhist', 'catholic', 'protestant', 'sikh', 'taoist'],
'race': ['african', 'african american', 'black', 'white', 'european', 'hispanic', 'latino', 'latina',
'latinx', 'mexican', 'canadian', 'american', 'asian', 'indian', 'middle eastern', 'chinese',
'japanese']}
group_names = list(terms.keys())
num_groups = len(group_names)
Explanation: Finally, we identify comments related to certain sensitive topic groups. We consider a subset of the <a href="https://github.com/conversationai/unintended-ml-bias-analysis/blob/master/unintended_ml_bias/bias_madlibs_data/adjectives_people.txt">identity terms</a> provided with the dataset and group them into
four broad topic groups: sexuality, gender identity, religion, and race.
End of explanation
def get_groups(text):
# Returns a boolean NumPy array of shape (n, k), where n is the number of comments,
# and k is the number of groups. Each entry (i, j) indicates if the i-th comment
# contains a term from the j-th group.
groups = np.zeros((text.shape[0], num_groups))
for ii in range(num_groups):
groups[:, ii] = text.str.contains('|'.join(terms[group_names[ii]]), case=False)
return groups
groups_train = get_groups(data_train["comment"])
groups_test = get_groups(data_test["comment"])
groups_vali = get_groups(data_vali["comment"])
Explanation: We then create separate group membership matrices for the train, test and validation sets, where the rows correspond to comments, the columns correspond to the four sensitive groups, and each entry is a boolean indicating whether the comment contains a term from the topic group.
End of explanation
print("Overall label proportion = %.1f%%" % (labels_train.mean() * 100))
group_stats = []
for ii in range(num_groups):
group_proportion = groups_train[:, ii].mean()
group_pos_proportion = labels_train[groups_train[:, ii] == 1].mean()
group_stats.append([group_names[ii],
"%.2f%%" % (group_proportion * 100),
"%.1f%%" % (group_pos_proportion * 100)])
group_stats = pd.DataFrame(group_stats,
columns=["Topic group", "Group proportion", "Label proportion"])
group_stats
Explanation: As shown below, all four topic groups constitute only a small fraction of the overall dataset, and have varying proportions of toxic comments.
End of explanation
zip_file_url = "http://nlp.stanford.edu/data/glove.6B.zip"
zip_file = urllib.request.urlopen(zip_file_url)
archive = zipfile.ZipFile(io.BytesIO(zip_file.read()))
Explanation: We see that only 1.3% of the dataset contains comments related to sexuality. Among them, 37% of the comments have been annotated as being toxic. Note that this is significantly larger than the overall proportion of comments annotated as toxic. This could be because the few comments that used those identity terms did so in pejorative contexts. As mentioned above, this could cause our model to disproportionately misclassify comments as toxic when they include those terms. Since this is the concern, we'll make sure to look at the False Positive Rate when we evaluate the model's performance.
Build CNN toxicity prediction model
Having prepared the dataset, we now build a Keras model for predicting toxicity. The model we use is a convolutional neural network (CNN) with the same architecture used by the Conversation AI project for their debiasing analysis. We adapt <a href="https://github.com/conversationai/unintended-ml-bias-analysis/blob/master/unintended_ml_bias/model_tool.py">code</a> provided by them to construct the model layers.
The model uses an embedding layer to convert the text tokens to fixed-length vectors. This layer converts the input text sequence into a sequence of vectors, and passes them through several layers of convolution and pooling operations, followed by a final fully-connected layer.
We make use of pre-trained GloVe word vector embeddings, which we download below. This may take a few minutes to complete.
End of explanation
embeddings_index = {}
glove_file = "glove.6B.100d.txt"
with archive.open(glove_file) as f:
for line in f:
values = line.split()
word = values[0].decode("utf-8")
coefs = np.asarray(values[1:], dtype="float32")
embeddings_index[word] = coefs
embedding_matrix = np.zeros((len(tokenizer.word_index) + 1, hparams["embedding_dim"]))
num_words_in_embedding = 0
for word, i in tokenizer.word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
num_words_in_embedding += 1
embedding_matrix[i] = embedding_vector
Explanation: We use the downloaded GloVe embeddings to create an embedding matrix, where the rows contain the word embeddings for the tokens in the Tokenizer's vocabulary.
End of explanation
def create_model():
model = keras.Sequential()
# Embedding layer.
embedding_layer = layers.Embedding(
embedding_matrix.shape[0],
embedding_matrix.shape[1],
weights=[embedding_matrix],
input_length=hparams["max_sequence_length"],
trainable=hparams['embedding_trainable'])
model.add(embedding_layer)
# Convolution layers.
for filter_size, kernel_size, pool_size in zip(
hparams['cnn_filter_sizes'], hparams['cnn_kernel_sizes'],
hparams['cnn_pooling_sizes']):
conv_layer = layers.Conv1D(
filter_size, kernel_size, activation='relu', padding='same')
model.add(conv_layer)
pooled_layer = layers.MaxPooling1D(pool_size, padding='same')
model.add(pooled_layer)
# Add a flatten layer, a fully-connected layer and an output layer.
model.add(layers.Flatten())
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dense(1))
return model
Explanation: We are now ready to specify the Keras layers. We write a function to create a new model, which we will invoke whenever we wish to train a new model.
End of explanation
def set_seeds():
np.random.seed(121212)
tf.compat.v1.set_random_seed(212121)
Explanation: We also define a method to set random seeds. This is done to ensure reproducible results.
End of explanation
def create_examples(labels, predictions, groups, group_names):
# Returns tf.examples with given labels, predictions, and group information.
examples = []
sigmoid = lambda x: 1/(1 + np.exp(-x))
for ii in range(labels.shape[0]):
example = tf.train.Example()
example.features.feature['toxicity'].float_list.value.append(
labels[ii][0])
example.features.feature['prediction'].float_list.value.append(
sigmoid(predictions[ii][0])) # predictions need to be in [0, 1].
for jj in range(groups.shape[1]):
example.features.feature[group_names[jj]].bytes_list.value.append(
b'Yes' if groups[ii, jj] else b'No')
examples.append(example)
return examples
def evaluate_results(labels, predictions, groups, group_names):
# Evaluates fairness indicators for given labels, predictions and group
# membership info.
examples = create_examples(labels, predictions, groups, group_names)
# Create feature map for labels, predictions and each group.
feature_map = {
'prediction': tf.io.FixedLenFeature([], tf.float32),
'toxicity': tf.io.FixedLenFeature([], tf.float32),
}
for group in group_names:
feature_map[group] = tf.io.FixedLenFeature([], tf.string)
# Serialize the examples.
serialized_examples = [e.SerializeToString() for e in examples]
BASE_DIR = tempfile.gettempdir()
OUTPUT_DIR = os.path.join(BASE_DIR, 'output')
with beam.Pipeline() as pipeline:
model_agnostic_config = agnostic_predict.ModelAgnosticConfig(
label_keys=['toxicity'],
prediction_keys=['prediction'],
feature_spec=feature_map)
slices = [tfma.slicer.SingleSliceSpec()]
for group in group_names:
slices.append(
tfma.slicer.SingleSliceSpec(columns=[group]))
extractors = [
model_agnostic_extractor.ModelAgnosticExtractor(
model_agnostic_config=model_agnostic_config),
tfma.extractors.slice_key_extractor.SliceKeyExtractor(slices)
]
metrics_callbacks = [
tfma.post_export_metrics.fairness_indicators(
thresholds=[0.5],
target_prediction_keys=['prediction'],
labels_key='toxicity'),
tfma.post_export_metrics.example_count()]
# Create a model agnostic aggregator.
eval_shared_model = tfma.types.EvalSharedModel(
add_metrics_callbacks=metrics_callbacks,
construct_fn=model_agnostic_evaluate_graph.make_construct_fn(
add_metrics_callbacks=metrics_callbacks,
config=model_agnostic_config))
# Run Model Agnostic Eval.
_ = (
pipeline
| beam.Create(serialized_examples)
| 'ExtractEvaluateAndWriteResults' >>
tfma.ExtractEvaluateAndWriteResults(
eval_shared_model=eval_shared_model,
output_path=OUTPUT_DIR,
extractors=extractors,
compute_confidence_intervals=True
)
)
fairness_ind_result = tfma.load_eval_result(output_path=OUTPUT_DIR)
# Also evaluate accuracy of the model.
accuracy = np.mean(labels == (predictions > 0.0))
return fairness_ind_result, accuracy
def plot_fairness_indicators(eval_result, title):
fairness_ind_result, accuracy = eval_result
display(HTML("<center><h2>" + title +
" (Accuracy = %.2f%%)" % (accuracy * 100) + "</h2></center>"))
widget_view.render_fairness_indicator(fairness_ind_result)
def plot_multi_fairness_indicators(multi_eval_results):
multi_results = {}
multi_accuracy = {}
for title, (fairness_ind_result, accuracy) in multi_eval_results.items():
multi_results[title] = fairness_ind_result
multi_accuracy[title] = accuracy
title_str = "<center><h2>"
for title in multi_eval_results.keys():
title_str+=title + " (Accuracy = %.2f%%)" % (multi_accuracy[title] * 100) + "; "
title_str=title_str[:-2]
title_str+="</h2></center>"
# fairness_ind_result, accuracy = eval_result
display(HTML(title_str))
widget_view.render_fairness_indicator(multi_eval_results=multi_results)
Explanation: Fairness indicators
We also write functions to plot fairness indicators.
End of explanation
# Set random seed for reproducible results.
set_seeds()
Explanation: Train unconstrained model
For the first model we train, we optimize a simple cross-entropy loss without any constraints.
End of explanation
# Optimizer and loss.
optimizer = tf.keras.optimizers.Adam(learning_rate=hparams["learning_rate"])
loss = lambda y_true, y_pred: tf.keras.losses.binary_crossentropy(
y_true, y_pred, from_logits=True)
# Create, compile and fit model.
model_unconstrained = create_model()
model_unconstrained.compile(optimizer=optimizer, loss=loss)
model_unconstrained.fit(
x=text_train, y=labels_train, batch_size=hparams["batch_size"], epochs=2)
Explanation: Note: The following code cell can take ~8 minutes to run.
End of explanation
scores_unconstrained_test = model_unconstrained.predict(text_test)
eval_result_unconstrained = evaluate_results(
labels_test, scores_unconstrained_test, groups_test, group_names)
Explanation: Having trained the unconstrained model, we plot various evaluation metrics for the model on the test set.
End of explanation
plot_fairness_indicators(eval_result_unconstrained, "Unconstrained")
Explanation: As explained above, we are concentrating on the false positive rate. In their current version (0.1.2), Fairness Indicators select false negative rate by default. After running the line below, go ahead and deselect false_negative_rate and select false_positive_rate to look at the metric we are interested in.
End of explanation
# Set random seed.
set_seeds()
# Features tensors.
batch_shape = (hparams["batch_size"], hparams['max_sequence_length'])
features_tensor = tf.Variable(np.zeros(batch_shape, dtype='int32'), name='x')
features_tensor_sen = tf.Variable(np.zeros(batch_shape, dtype='int32'), name='x_sen')
# Labels tensors.
batch_shape = (hparams["batch_size"], 1)
labels_tensor = tf.Variable(np.zeros(batch_shape, dtype='float32'), name='labels')
labels_tensor_sen = tf.Variable(np.zeros(batch_shape, dtype='float32'), name='labels_sen')
# Groups tensors.
batch_shape = (hparams["batch_size"], num_groups)
groups_tensor_sen = tf.Variable(np.zeros(batch_shape, dtype='float32'), name='groups_sen')
Explanation: While the overall false positive rate is less than 2%, the false positive rate on the sexuality-related comments is significantly higher. This is because the sexuality group is very small in size, and has a disproportionately higher fraction of comments annotated as toxic. Hence, training a model without constraints results in the model believing that sexuality-related terms are a strong indicator of toxicity.
Train with constraints on false positive rates
To avoid large differences in false positive rates across different groups, we next train a model by constraining the false positive rates for each group to be within a desired limit. In this case, we will optimize the error rate of the model subject to the per-group false positive rates being less than or equal to 2%.
Training on minibatches with per-group constraints can be challenging for this dataset, however, as the groups we wish to constrain are all small in size, and it's likely that the individual minibatches contain very few examples from each group. Hence the gradients we compute during training will be noisy, and result in the model converging very slowly.
To mitigate this problem, we recommend using two streams of minibatches, with the first stream formed as before from the entire training set, and the second stream formed solely from the sensitive group examples. We will compute the objective using minibatches from the first stream and the per-group constraints using minibatches from the second stream. Because the batches from the second stream are likely to contain a larger number of examples from each group, we expect our updates to be less noisy.
We create separate features, labels and groups tensors to hold the minibatches from the two streams.
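To make the constrained quantity concrete, the per-group false positive rate can be computed directly from labels, thresholded predictions and group membership. The following is an illustrative NumPy sketch only (the notebook itself relies on TFCO rates and Fairness Indicators for this), and the threshold-at-zero convention assumes the model outputs logits, as the Keras model above does:
```python
def group_false_positive_rate(labels, predicted_positive, mask=None):
    # FPR = FP / (FP + TN): fraction of actual negatives that were flagged positive.
    labels = np.squeeze(np.asarray(labels))
    predicted_positive = np.squeeze(np.asarray(predicted_positive))
    if mask is not None:
        labels = labels[mask]
        predicted_positive = predicted_positive[mask]
    negatives = labels == 0
    if negatives.sum() == 0:
        return float('nan')
    return float(predicted_positive[negatives].mean())

# Example: FPR on the sexuality group (column 0 of groups_test) at a 0.5 threshold
# flagged = np.squeeze(scores_unconstrained_test) > 0.0   # logit 0.0 corresponds to p = 0.5
# print(group_false_positive_rate(labels_test, flagged, mask=groups_test[:, 0] > 0))
```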
End of explanation
# Create model, and separate prediction functions for the two streams.
# For the predictions, we use a nullary function returning a Tensor to support eager mode.
model_constrained = create_model()
def predictions():
return model_constrained(features_tensor)
def predictions_sen():
return model_constrained(features_tensor_sen)
Explanation: We instantiate a new model, and compute predictions for minibatches from the two streams.
End of explanation
epsilon = 0.02 # Desired false-positive rate threshold.
# Set up separate contexts for the two minibatch streams.
context = tfco.rate_context(predictions, lambda:labels_tensor)
context_sen = tfco.rate_context(predictions_sen, lambda:labels_tensor_sen)
# Compute the objective using the first stream.
objective = tfco.error_rate(context)
# Compute the constraint using the second stream.
# Subset the examples belonging to the "sexuality" group from the second stream
# and add a constraint on the group's false positive rate.
context_sen_subset = context_sen.subset(lambda: groups_tensor_sen[:, 0] > 0)
constraint = [tfco.false_positive_rate(context_sen_subset) <= epsilon]
# Create a rate minimization problem.
problem = tfco.RateMinimizationProblem(objective, constraint)
# Set up a constrained optimizer.
optimizer = tfco.ProxyLagrangianOptimizerV2(
optimizer=tf.keras.optimizers.Adam(learning_rate=hparams["learning_rate"]),
num_constraints=problem.num_constraints)
# List of variables to optimize include the model weights,
# and the trainable variables from the rate minimization problem and
# the constrained optimizer.
var_list = (model_constrained.trainable_weights + list(problem.trainable_variables) +
optimizer.trainable_variables())
Explanation: We then set up a constrained optimization problem with the error rate as the objective and with constraints on the per-group false positive rate.
End of explanation
# Indices of sensitive group members.
protected_group_indices = np.nonzero(groups_train.sum(axis=1))[0]
num_examples = text_train.shape[0]
num_examples_sen = protected_group_indices.shape[0]
batch_size = hparams["batch_size"]
# Number of steps needed for one epoch over the training sample.
num_steps = int(num_examples / batch_size)
start_time = time.time()
# Loop over minibatches.
for batch_index in range(num_steps):
# Indices for current minibatch in the first stream.
batch_indices = np.arange(
batch_index * batch_size, (batch_index + 1) * batch_size)
batch_indices = [ind % num_examples for ind in batch_indices]
# Indices for current minibatch in the second stream.
batch_indices_sen = np.arange(
batch_index * batch_size, (batch_index + 1) * batch_size)
batch_indices_sen = [protected_group_indices[ind % num_examples_sen]
for ind in batch_indices_sen]
# Assign features, labels, groups from the minibatches to the respective tensors.
features_tensor.assign(text_train[batch_indices, :])
labels_tensor.assign(labels_train[batch_indices])
features_tensor_sen.assign(text_train[batch_indices_sen, :])
labels_tensor_sen.assign(labels_train[batch_indices_sen])
groups_tensor_sen.assign(groups_train[batch_indices_sen, :])
# Gradient update.
optimizer.minimize(problem, var_list=var_list)
# Record and print batch training stats every 10 steps.
if (batch_index + 1) % 10 == 0 or batch_index in (0, num_steps - 1):
hinge_loss = problem.objective()
max_violation = max(problem.constraints())
elapsed_time = time.time() - start_time
sys.stdout.write(
"\rStep %d / %d: Elapsed time = %ds, Loss = %.3f, Violation = %.3f" %
(batch_index + 1, num_steps, elapsed_time, hinge_loss, max_violation))
Explanation: We are ready to train the model. We maintain a separate counter for the two minibatch streams. Every time we perform a gradient update, we will have to copy the minibatch contents from the first stream to the tensors features_tensor and labels_tensor, and the minibatch contents from the second stream to the tensors features_tensor_sen, labels_tensor_sen and groups_tensor_sen.
Note: The following code cell may take ~12 minutes to run.
End of explanation
scores_constrained_test = model_constrained.predict(text_test)
eval_result_constrained = evaluate_results(
labels_test, scores_constrained_test, groups_test, group_names)
Explanation: Having trained the constrained model, we plot various evaluation metrics for the model on the test set.
End of explanation
plot_fairness_indicators(eval_result_constrained, "Constrained")
multi_results = {
'constrained':eval_result_constrained,
'unconstrained':eval_result_unconstrained,
}
plot_multi_fairness_indicators(multi_eval_results=multi_results)
Explanation: As with last time, remember to select false_positive_rate.
End of explanation
<END_TASK> |
15,538 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Chapter 2 Distributions
Histograms
The most common representation of a distribution is a histogram, which is a graph that shows the frequency of each value
Step1: NSFG variables
Step2: Histogram of pregnancy length in weeks
Step3: Histogram of pregnancy lengths
Step4: Summarizing distributions
Some of the characteristics we might want to report are
Step5: Make a histogram of totincr the total income for the respondent's family.
Step6: Make a histogram of age_r, the respondent's age at the time of interview.
Step7: Use totincr to select the respondents with the highest income. Compute the distribution of parity for just the high income respondents.
Step8: Compare the mean parity for high income respondents and others.
Step9: Exercise 4
Using the variable totalwgt_lb, investigate whether first babies are lighter or heavier than others. Compute Cohen’s d to quantify the difference between the groups. How does it compare to the difference in pregnancy length? | <ASSISTANT_TASK:>
Python Code:
from matplotlib import pyplot as plt
%matplotlib inline
import seaborn as sns
import numpy as np
import pandas as pd
import thinkstats2
import thinkplot
hist = thinkstats2.Hist([1, 2, 2, 3, 5])
hist
hist.Freq(2) # hist[2]
hist.Values()
thinkplot.Hist(hist)
thinkplot.Show(xlabel='value', ylabel='frequency')
Explanation: Chapter 2 Distributions
Histograms
The most common representation of a distribution is a histogram, which is a graph that shows the frequency of each value.
End of explanation
import nsfg
Explanation: NSFG variables
End of explanation
preg = nsfg.ReadFemPreg()
live = preg[preg.outcome == 1]
hist = thinkstats2.Hist(live.birthwgt_lb, label='birthwgt_lb')
thinkplot.Hist(hist)
thinkplot.Show(xlabel='pounds', ylabel='frequency')
Explanation: Histogram of pregnancy length in weeks
End of explanation
firsts = live[live.birthord == 1]
others = live[live.birthord != 1]
firsts.prglngth.plot(kind='hist', width=2)
others.prglngth.plot(kind='hist', width=2)
Explanation: Histogram of pregnancy lengths
End of explanation
import thinkstats2
resp = thinkstats2.ReadStataDct('2002FemResp.dct').ReadFixedWidth('2002FemResp.dat.gz', compression='gzip')
Explanation: Summarizing distributions
Some of the characteristics we might want to report are:
- central tendency
- modes
- spread
- tails
- outliers
mean
$$\overline{x} = \frac{1}{n}\sum_{i} x_{i}$$
Variance
$$S^{2} = \frac{1}{n}\sum_i(x_{i}-\overline{x})^{2}$$
$x_{i}-\overline{x}$ is called the “deviation from the mean”
$S = \sqrt{S^{2}}$, the square root of the variance, is the standard deviation.
Pandas data structures provide methods to compute the mean, variance and standard deviation:
```python
mean = live.prglngth.mean()
var = live.prglngth.var() # variance
std = live.prglngth.std() # standard deviation
```
Effect size
An effect size is a quantitative measure of the strength of an effect.
One obvious choice is the difference in the means.
Another way to convey the size of the effect is to compare the difference between groups to the variability within groups.
Cohen's d
$$d = \frac{\overline{x_1} -\overline{x_2}}{s}$$
s is the “pooled standard deviation”
$$s=\sqrt{\frac{(n_1-1)S_1^2 + (n_2-1)S_2^2}{n_1 +n_2 -2}}$$
$n_i$ is the sample size of group $i$, and $S_i^2$ is its variance.
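A quick worked example with hypothetical numbers (not taken from the NSFG data): with group means of $7.3$ and $7.2$ pounds and a pooled standard deviation of $1.4$ pounds, $d = (7.3 - 7.2)/1.4 \approx 0.07$, which is a very small effect.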
Reporting results
Who
A scientist might be interested in any (real) effect, no matter how small.
A doctor might only care about effects that are clinically significant.
How
Goals
Exercise 2
End of explanation
resp.totincr.plot.hist(bins=range(17))
Explanation: Make a histogram of totincr, the total income for the respondent's family.
End of explanation
resp.ager.plot.hist(bins=range(15,46))
Explanation: Make a histogram of age_r, the respondent's age at the time of interview.
End of explanation
rich = resp[resp.totincr == resp.totincr.max() ]
rich.parity.plot.hist(bins=range(10))
Explanation: Use totincr to select the respondents with the highest income. Compute the distribution of parity for just the high income respondents.
End of explanation
rich = resp[resp.totincr == resp.totincr.max() ]
notrich = resp[resp.totincr < resp.totincr.max()]
rich.parity.mean(), notrich.parity.mean()
Explanation: Compare the mean parity for high income respondents and others.
End of explanation
preg = nsfg.ReadFemPreg()
first = preg[preg.birthord ==1 ]
others = preg[preg.birthord >1 ]
first.totalwgt_lb.mean(), others.totalwgt_lb.mean()
def CohenEffectSize(group1, group2):
mean_diff = group1.mean() - group2.mean()
n1= len(group1)
n2 = len(group2)
pooled_var = (n1*group1.var() + n2* group2.var())/(n1+n2)
d = mean_diff / np.math.sqrt(pooled_var)
return d
CohenEffectSize(first.totalwgt_lb, others.totalwgt_lb)
Explanation: Exercise 4
Using the variable totalwgt_lb, investigate whether first babies are lighter or heavier than others. Compute Cohen’s d to quantify the difference between the groups. How does it compare to the difference in pregnancy length?
End of explanation
<END_TASK> |
15,539 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Naming a Slice
Problem
Your program has become an unreadable mess of hardcoded slice indices and you want to clean it up.
Solution
Use slice()
Step1: Why use slice()
If you have a slice instance s, you can get more information about it by looking at its s.start, s.stop, and s.step attributes, respectively
Step2: In addition, you can map a slice onto a sequence of a specific size by using its indices(size) method. | <ASSISTANT_TASK:>
Python Code:
###### 0123456789012345678901234567890123456789012345678901234567890'
record = '....................100 .......513.25 ..........'
cost = int(record[20:32]) * float(record[40:48])
print(cost)
SHARES = slice(20,32)
PRICE = slice(40,48)
cost = int(record[SHARES]) * float(record[PRICE])
print(cost)
Explanation: Naming a Slice
Problem
Your program has become an unreadable mess of hardcoded slice indices and you want to clean it up.
Solution
Use slice()
End of explanation
a = slice(10, 50, 2)
print(a.start)
print(a.stop)
print(a.step)
Explanation: Why use slice()
If you have a slice instance s, you can get more information about it by looking at its s.start, s.stop, and s.step attributes.
End of explanation
s = 'HelloWorld'
a = slice(5, 10, 2)
a.indices(len(s))
for i in range(*a.indices(len(s))):
print(s[i])
Explanation: In addition, you can map a slice onto a sequence of a specific size by using its indices(size) method.
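A useful side effect, shown in this small illustrative example (not part of the original recipe), is that indices(size) clips the slice to the sequence bounds, so the resulting range never runs past the end:
```python
s = 'HelloWorld'
b = slice(5, 50)           # stop is far past the end of s
print(b.indices(len(s)))   # (5, 10, 1) -- stop is clipped to len(s)
print(s[b])                # 'World'
```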
End of explanation
<END_TASK> |
15,540 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
monocultural--1973, 1978, 1985
monoethnic -- 2008 --plateaus, increases slightly in current year.
Step1: bicultural
Step2: mixed race
Step3: polyculturalism has no frequency in NYT | <ASSISTANT_TASK:>
Python Code:
words = ['biracial', 'biethnic', 'bicultural', 'interracial']
plot(words)
_ = plt.xlim(1890, 2015)
_ = plt.ylim(10e-7, 10e-2)
savefig('NYT2.png')
Explanation: monocultural--1973, 1978, 1985
monoethnic -- 2008 --plateaus, increases slightly in current year.
End of explanation
words = ['mixed race', 'mixed ethnicity', 'other race', 'other ethnicity']
plot(words)
_ = plt.ylim(2e-4, 3e-1)
savefig('NYT3.png')
Explanation: bicultural: starts 1920-1940 (spike in 1930); starts again in 1950
biethnic: small plateau in 1995-2005
End of explanation
words = ['multiracial', 'multiethnic', 'multicultural', 'polyracial', 'polyethnic', 'polycultural']
plot(words)
_ = plt.xlim(1950, 2015)
_ = plt.ylim(1e-4, 3e-1)
savefig('NYT4.png')
words = ['multiculturalism', 'polyculturalism', 'cultural pluralism']
plot(words)
_ = plt.xlim(1920, 2015)
_ = plt.ylim(3e-4, 3e-1)
savefig('NYT5.png')
Explanation: mixed race: dips in 1950
mixed ethnicity: 1985 upward
other ethnicity: 1990 upward
End of explanation
words = ['monocultural', 'bicultural', 'interracial','multiracial', 'multicultural','mixed race']
plot(words)
_ = plt.ylim(3e-4, 3e-1)
savefig('NYT6.png')
Explanation: polyculturalism has no frequency in NYT
End of explanation
<END_TASK> |
15,541 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2019 The Google Research Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http
Step1: Necessary packages and functions call
load_tabular_data
Step2: Data loading & Sample corruption
Create training dataset, validation and testing datasets, and save as train.csv, valid.csv, test.csv under './repo/data_files/' directory.
In this notebook, we corrupt a certain portion of samples in training set to create "artificially" low-quality data.
If you have your own train.csv (low-quality data), valid.csv (ideally high-quality data), test.csv (ideally similar to validation distribution), you can skip this cell and just save those files to './repo/data_files/' directory.
Step3: Data preprocessing
Extract features and labels from train.csv, valid.csv, test.csv in './repo/data_files/' directory.
Normalize the features of training, validation, and testing sets.
Step4: Run DVRL
Input
Step5: Evaluations
In this notebook, we use LightGBM as the predictive model in DVRL (but we can also replace it with another method for evaluation purposes.
Here, we use average accuracy as the performance metric (we can also replace with other metrics like AUC, see metrics.py).
1. Robust learning
DVRL learns robustly although the training data contains low quality/noisy samples, using the guidance from the high quality/clean validation data via reinforcement learning.
* Train predictive model with weighted optimization using estimated data values by DVRL as the weights.
Step6: 2. Removing high/low valued samples
Removing low value samples from the training dataset can improve the predictor model performance, especially in the cases where the training dataset contains corrupted samples. On the other
hand, removing high value samples, especially if the dataset is small, would decrease the performance significantly. Overall, the performance after removing high/low value samples is a strong
indicator for the quality of data valuation.
DVRL can rank the training data samples according to their estimated data value, and by removing the low value samples we can significantly improve performance, whereas removing the high value samples degrades the performance severely. Thus for a high performance data valuation method, a large gap is expected in the performance curves with removal of high vs. low value samples
* Train predictive models after removing certain portions of high/low valued training samples.
* Visualize the results using line graphs (set plot = True).
* x-axis
Step7: 3. Corrupted sample discovery
For our synthetically-generated noisy training dataset, we can assess the performance of our method in finding the noisy samples by using the known noise indices. Note that unlike the first two evaluations, this cell is only for academic purposes because you need the ground truth noisy sample indices so if users come with their own .csv files, they cannot use this cell.
Report True Positive Rates (TPR) of corrupted sample discovery.
Visualize the results using line graphs (set plot = True).
x-axis | <ASSISTANT_TASK:>
Python Code:
# Uses pip3 to install necessary package (lightgbm)
!pip3 install lightgbm
# Resets the IPython kernel to import the installed package.
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
import os
from git import Repo
# Current working directory
repo_dir = os.getcwd() + '/repo'
if not os.path.exists(repo_dir):
os.makedirs(repo_dir)
# Clones github repository
if not os.listdir(repo_dir):
git_url = "https://github.com/google-research/google-research.git"
Repo.clone_from(git_url, repo_dir)
Explanation: Copyright 2019 The Google Research Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Corrupted Sample Discovery & Robust Learning using DVRL
Jinsung Yoon, Sercan O Arik, Tomas Pfister, "Data Valuation using Reinforcement Learning", arXiv preprint arXiv:1909.11671 (2019) - https://arxiv.org/abs/1909.11671
This notebook is a user guide for the corrupted sample discovery and robust learning applications of "Data Valuation using Reinforcement Learning (DVRL)".
There are some scenarios where training samples may contain corrupted samples, e.g. due to cheap label collection methods. An automated corrupted sample discovery method would be highly beneficial for distinguishing samples with clean vs. noisy labels. Data valuation can be used in this setting by having a small clean validation set to assign low data values to the potential samples with noisy labels. With an optimal data value estimator, all noisy labels would get the lowest data values.
DVRL can also reliably learn with noisy data in an end-to-end way. Ideally, noisy samples should get low data values as DVRL converges and a high performance model can be returned.
You need:
Training set (low-quality data (e.g. noisy data)) / Validation set (high-quality data (e.g. clean data)) / Testing set (high-quality data (e.g. clean data))
* If there is no explicit validation set, you can split off a small portion of the testing set as the validation set.
* Note that the training set does not have to be low quality for DVRL; however, in this notebook, we use a low quality training set for a clearer demonstration, as the samples are easier to distinguish in terms of their value.
* If you have your own training / validation / testing datasets, you can put them under './repo/data_files/' directory with 'train.csv', 'valid.csv', 'test.csv' names.
* In this notebook, we use adult income dataset (https://archive.ics.uci.edu/ml/datasets/Adult) as an example.
Prerequisite
Download lightgbm package.
Clone https://github.com/google-research/google-research.git to the current directory.
End of explanation
import numpy as np
import tensorflow as tf
from sklearn import linear_model
import lightgbm
# Sets current directory
os.chdir(repo_dir)
from dvrl.data_loading import load_tabular_data, preprocess_data
from dvrl import dvrl
from dvrl.dvrl_metrics import discover_corrupted_sample, remove_high_low, learn_with_dvrl
Explanation: Necessary packages and function calls
load_tabular_data: Data loader for tabular datasets.
data_preprocess: Data extraction and normalization.
dvrl_classification: Data valuation function for classification problem.
metrics: Evaluation of the quality of data valuation using various metrics.
End of explanation
# Data name: 'adult' in this notebook
data_name = 'adult'
# The number of training and validation samples
dict_no = dict()
dict_no['train'] = 1000
dict_no['valid'] = 400
# Label noise ratio
noise_rate = 0.2
# Loads data and corrupts labels
noise_idx = load_tabular_data(data_name, dict_no, noise_rate)
# noise_idx: ground truth noisy sample indices
print('Finished data loading.')
Explanation: Data loading & Sample corruption
Create training dataset, validation and testing datasets, and save as train.csv, valid.csv, test.csv under './repo/data_files/' directory.
In this notebook, we corrupt a certain portion of samples in training set to create "artificially" low-quality data.
If you have your own train.csv (low-quality data), valid.csv (ideally high-quality data), test.csv (ideally similar to validation distribution), you can skip this cell and just save those files to './repo/data_files/' directory.
End of explanation
# Normalization methods: 'minmax' or 'standard'
normalization = 'minmax'
# Extracts features and labels. Then, normalizes features.
x_train, y_train, x_valid, y_valid, x_test, y_test, _ = \
preprocess_data(normalization, 'train.csv', 'valid.csv', 'test.csv')
print('Finished data preprocess.')
Explanation: Data preprocessing
Extract features and labels from train.csv, valid.csv, test.csv in './repo/data_files/' directory.
Normalize the features of training, validation, and testing sets.
End of explanation
# Resets the graph
tf.reset_default_graph()
# Network parameters
parameters = dict()
parameters['hidden_dim'] = 100
parameters['comb_dim'] = 10
parameters['iterations'] = 2000
parameters['activation'] = tf.nn.relu
parameters['layer_number'] = 5
parameters['batch_size'] = 2000
parameters['learning_rate'] = 0.01
# Sets checkpoint file name
checkpoint_file_name = './tmp/model.ckpt'
# Defines predictive model
pred_model = linear_model.LogisticRegression(solver='lbfgs')
problem = 'classification'
# Flags for using stochastic gradient descent / pre-trained model
flags = {'sgd': False, 'pretrain': False}
# Initializes DVRL
dvrl_class = dvrl.Dvrl(x_train, y_train, x_valid, y_valid,
problem, pred_model, parameters, checkpoint_file_name, flags)
# Trains DVRL
dvrl_class.train_dvrl('auc')
print('Finished dvrl training.')
# Estimates data values
dve_out = dvrl_class.data_valuator(x_train, y_train)
# Predicts with DVRL
y_test_hat = dvrl_class.dvrl_predictor(x_test)
print('Finished data valuation.')
Explanation: Run DVRL
Input:
data valuator network parameters: Set network parameters of data valuator.
pred_model: The predictor model that maps the input features to the output labels. Any machine learning model (e.g. a neural network or ensemble decision tree) can be used as the predictor model, as long as it has fit and predict (for regression)/predict_proba (for classification) as its subfunctions. Fit can be implemented using multiple backpropagation iterations.
Output:
data_valuator: Function that uses training set as inputs to estimate data values.
dvrl_predictor: Function that predicts labels of the testing samples.
dve_out: Estimated data values for all training samples.
End of explanation
# Defines evaluation model
eval_model = lightgbm.LGBMClassifier()
# Robust learning (DVRL-weighted learning)
robust_perf = learn_with_dvrl(dve_out, eval_model,
x_train, y_train, x_valid, y_valid, x_test, y_test, 'accuracy')
print('DVRL-weighted learning performance: ' + str(np.round(robust_perf, 4)))
Explanation: Evaluations
In this notebook, we use LightGBM as the predictive model in DVRL (but we can also replace it with another method for evaluation purposes).
Here, we use average accuracy as the performance metric (we can also replace with other metrics like AUC, see metrics.py).
1. Robust learning
DVRL learns robustly although the training data contains low quality/noisy samples, using the guidance from the high quality/clean validation data via reinforcement learning.
* Train predictive model with weighted optimization using estimated data values by DVRL as the weights.
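Conceptually, this weighted optimization just passes the estimated data values as per-sample weights when fitting the evaluation model. The helper learn_with_dvrl used below wraps this up, so the exact details may differ; a rough sketch of the idea:
```python
# dve_out holds one estimated data value per training sample.
weighted_model = lightgbm.LGBMClassifier()
weighted_model.fit(x_train, np.ravel(y_train), sample_weight=np.ravel(dve_out))
weighted_acc = np.mean(weighted_model.predict(x_test) == np.ravel(y_test))
print('Weighted-training accuracy: {:.4f}'.format(weighted_acc))
```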
End of explanation
# Evaluates performance after removing high/low valued samples
remove_high_low_performance = remove_high_low(dve_out, eval_model, x_train, y_train,
x_valid, y_valid, x_test, y_test, 'accuracy', plot = True)
Explanation: 2. Removing high/low valued samples
Removing low value samples from the training dataset can improve the predictor model performance, especially in the cases where the training dataset contains corrupted samples. On the other
hand, removing high value samples, especially if the dataset is small, would decrease the performance significantly. Overall, the performance after removing high/low value samples is a strong
indicator for the quality of data valuation.
DVRL can rank the training data samples according to their estimated data value, and by removing the low value samples we can significantly improve performance, whereas removing the high value samples degrades the performance severely. Thus, for a high-performance data valuation method, a large gap is expected in the performance curves with removal of high vs. low value samples.
* Train predictive models after removing certain portions of high/low valued training samples.
* Visualize the results using line graphs (set plot = True).
* x-axis: Portions of removed samples.
* y-axis: Prediction performance (accuracy).
* Blue line: Removing low value data, Orange line: Removing high value data.
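The ranking behind these curves is just an argsort over the estimated values; a rough sketch of what remove_high_low automates, using an arbitrary 20% removal fraction:
```python
remove_frac = 0.2
values = np.ravel(dve_out)
order = np.argsort(values)                       # ascending: lowest-valued samples first
n_remove = int(len(values) * remove_frac)
keep_after_low_removal = order[n_remove:]        # drop the lowest-valued 20%
keep_after_high_removal = order[:-n_remove]      # drop the highest-valued 20%
x_keep, y_keep = x_train[keep_after_low_removal], y_train[keep_after_low_removal]
```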
End of explanation
# If noise_rate is positive value.
if noise_rate > 0:
# Evaluates true positive rates (TPR) of corrupted sample discovery and plot TPR
noise_discovery_performance = discover_corrupted_sample(dve_out, noise_idx, noise_rate, plot = True)
Explanation: 3. Corrupted sample discovery
For our synthetically-generated noisy training dataset, we can assess the performance of our method in finding the noisy samples by using the known noise indices. Note that unlike the first two evaluations, this cell is only for academic purposes: it requires the ground-truth noisy sample indices, so users who bring their own .csv files cannot use this cell.
Report True Positive Rates (TPR) of corrupted sample discovery.
Visualize the results using line graphs (set plot = True).
x-axis: Portions of inspected samples.
y-axis: True positive rates (TPR) of corrupted sample discovery.
Blue line: DVRL, Orange line: Optimal, Green line: Random.
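For intuition, the TPR at a given inspection budget can be reproduced by checking how many of the known noisy indices fall in the lowest-valued portion of the training data; a rough sketch of what discover_corrupted_sample reports (using an arbitrary 20% inspection budget):
```python
inspect_frac = 0.2
n_inspect = int(len(np.ravel(dve_out)) * inspect_frac)
lowest_valued = np.argsort(np.ravel(dve_out))[:n_inspect]
tpr = len(set(lowest_valued) & set(np.ravel(noise_idx))) / float(len(noise_idx))
print('TPR after inspecting the lowest-valued {:.0%}: {:.3f}'.format(inspect_frac, tpr))
```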
End of explanation
<END_TASK> |
15,542 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Drift detection with TensorFlow Data Validation
This tutorial shows how to use TensorFlow Data Validation (TFDV) to identify and analyze different data skews in request-response serving data logged by AI Platform Prediction in BigQuery.
The tutorial has three parts
Step1: Configure Google Cloud environment settings
Step2: Authenticate your GCP account
This is required if you run the notebook in Colab
Step3: Import libraries
Step4: Create a local workspace
Step5: Part 1
Step6: 2. Compute baseline statistics
Step7: 3. Generate reference schema
Step8: Display the reference schema
Step9: Visualize baseline statistics
Step10: Part 2
Step12: 2.1. Read serving data from BigQuery
Step13: 2.2. Save serving data to CSV
Step14: 3. Compute Statistics from Serving Data
In addition to calculating statistics for the full dataset, we also configure TFDV to calculate statistics for each time window
Step15: 4. Validate Serving Statistics
Step16: Part 2
Step17: Visualize statistics for a time window with skewed data points
Step18: 2. Display Anomalies
Step19: 3. Analyze Statistics Change Over time
3.1. Numerical feature means over time
Step20: Get mean values from baseline statistics
Step21: 3.3. Categorical feature distribution over time | <ASSISTANT_TASK:>
Python Code:
!pip install -U -q tensorflow
!pip install -U -q tensorflow_data_validation
!pip install -U -q pandas
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Drift detection with TensorFlow Data Validation
This tutorial shows how to use TensorFlow Data Validation (TFDV) to identify and analyze different data skews in request-response serving data logged by AI Platform Prediction in BigQuery.
The tutorial has three parts:
Part 1: Produce baseline statistics and a reference schema
Download training data.
Compute baseline statistics from the training data.
Generate a reference schema using the baseline statistics.
Part 2: Detect data skews
Generate baseline statistics and a reference schema from training data using TFDV.
Read request-response serving data from BigQuery and save it to CSV files.
Compute statistics from the serving data.
Validate serving statistics against the reference schema and baseline statistics to detect anomalies (if any).
Part 3: Analyze statistics and anomalies
Use TFDV to visualize and display the statistics and anomalies.
Analyze how statistics change over time.
We use the covertype from UCI Machine Learning Repository.
The dataset has been preprocessed, split, and uploaded to a public Cloud Storage location:
gs://workshop-datasets/covertype
The notebook code uses this version of the preprocessed dataset. For more information, see Cover Type Dataset on GitHub.
In this notebook, you use the training data split to generate a reference schema and to gather statistics from for validating serving data.
Setup
Install packages and dependencies
End of explanation
PROJECT_ID = "sa-data-validation"
BUCKET = "sa-data-validation"
BQ_DATASET_NAME = 'prediction_logs'
BQ_VIEW_NAME = 'vw_covertype_classifier_logs_v1'
MODEL_NAME = 'covertype_classifier'
MODEL_VERSION = 'v1'
!gcloud config set project $PROJECT_ID
Explanation: Configure Google Cloud environment settings
End of explanation
try:
from google.colab import auth
auth.authenticate_user()
print("Colab user is authenticated.")
except: pass
Explanation: Authenticate your GCP account
This is required if you run the notebook in Colab
End of explanation
import os
import tensorflow as tf
import tensorflow_data_validation as tfdv
from tensorflow_metadata.proto.v0 import schema_pb2, statistics_pb2, anomalies_pb2
import apache_beam as beam
import pandas as pd
from datetime import datetime
import json
import numpy as np
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
print("TF version: {}".format(tf.__version__))
print("TFDV version: {}".format(tfdv.__version__))
print("Beam version: {}".format(beam.__version__))
Explanation: Import libraries
End of explanation
WORKSPACE = './workspace'
DATA_DIR = os.path.join(WORKSPACE, 'data')
TRAIN_DATA = os.path.join(DATA_DIR, 'train.csv')
if tf.io.gfile.exists(WORKSPACE):
print("Removing previous workspace artifacts...")
tf.io.gfile.rmtree(WORKSPACE)
print("Creating a new workspace...")
tf.io.gfile.makedirs(WORKSPACE)
tf.io.gfile.makedirs(DATA_DIR)
Explanation: Create a local workspace
End of explanation
!gsutil cp gs://workshop-datasets/covertype/data_validation/training/dataset.csv {TRAIN_DATA}
!wc -l {TRAIN_DATA}
sample = pd.read_csv(TRAIN_DATA).head()
sample.T
Explanation: Part 1: Generate Baseline Statistics and Reference Schema
We use TDV to generate baseline statistics, based on the training data, as well as a reference schema, to validate the serving data against.
1. Download data
End of explanation
baseline_stats = tfdv.generate_statistics_from_csv(
data_location=TRAIN_DATA,
stats_options = tfdv.StatsOptions(
sample_count=10000
)
)
Explanation: 2. Compute baseline statistics
End of explanation
reference_schema = tfdv.infer_schema(baseline_stats)
# Set Soil_Type to be categorical
tfdv.set_domain(reference_schema, 'Soil_Type', schema_pb2.IntDomain(
name='Soil_Type', is_categorical=True))
# Set Cover_Type to be categorical
tfdv.set_domain(reference_schema, 'Cover_Type', schema_pb2.IntDomain(
name='Cover_Type', is_categorical=True))
baseline_stats = tfdv.generate_statistics_from_csv(
data_location=TRAIN_DATA,
stats_options=tfdv.StatsOptions(
schema=reference_schema,
sample_count=10000
)
)
reference_schema = tfdv.infer_schema(baseline_stats)
# Set Soil_Type to be categorical
tfdv.set_domain(reference_schema, 'Soil_Type', schema_pb2.IntDomain(
name='Soil_Type', is_categorical=True))
# Set Cover_Type to be categorical
tfdv.set_domain(reference_schema, 'Cover_Type', schema_pb2.IntDomain(
name='Cover_Type', is_categorical=True))
# Set max and min values for Elevation
tfdv.set_domain(reference_schema,
'Elevation',
tfdv.utils.schema_util.schema_pb2.IntDomain(
min=1000,
max=5000))
# Allow no missing values
tfdv.get_feature(reference_schema,
'Slope').presence.min_fraction = 1.0
# Set distribution skew detector for Wilderness_Area
tfdv.get_feature(reference_schema,
'Wilderness_Area').skew_comparator.infinity_norm.threshold = 0.05
Explanation: 3. Generate reference schema
End of explanation
tfdv.display_schema(
schema=reference_schema)
Explanation: Display the reference schema
End of explanation
tfdv.visualize_statistics(baseline_stats)
Explanation: Visualize baseline statistics
End of explanation
TARGET_FEATURE_NAME = 'Cover_Type'
FEATURE_NAMES = [feature.name for feature in reference_schema.feature
if feature.name != TARGET_FEATURE_NAME]
Explanation: Part 2: Detecting Serving Data Skews
2. Export Serving Data from BigQuery
Although TFDV provides a utility function to calculate statistics on a Pandas dataframe - tfdv.generate_statistics_from_dataframe - that would simplify interactive analysis, the function does not support slicing. Since we need slicing for calculating statistics over different time windows, we will use tfdv.generate_statistics_from_csv instead.
Thus, we read the request-response serving logs from BigQuery and save the results to CSV files, in order to use tfdv.generate_statistics_from_csv.
End of explanation
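For a quick, slice-free look at the serving data once it is loaded into a DataFrame (the serving_data frame created just below), the DataFrame utility mentioned above can still be handy; this is only an interactive-exploration sketch.
# Interactive-only check: statistics straight from the DataFrame, without time slicing
quick_stats = tfdv.generate_statistics_from_dataframe(serving_data)
tfdv.visualize_statistics(quick_stats)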
def generate_query(source, features, target, start_time, end_time):
query = '''
SELECT
FORMAT_TIMESTAMP('%Y-%m-%d', time) AS time,
{},
predicted_class AS {}
FROM `{}`
WHERE time BETWEEN '{}' AND '{}'
;
'''.format(features, target, source, start_time, end_time)
return query
start_time = '2020-05-01 00:00:00 UTC'
end_time = '2020-07-01 00:50:00 UTC'
source = "{}.{}".format(BQ_DATASET_NAME, BQ_VIEW_NAME)
features = ', '.join(FEATURE_NAMES)
query = generate_query(source, features, TARGET_FEATURE_NAME, start_time, end_time)
serving_data = pd.io.gbq.read_gbq(
query, project_id=PROJECT_ID)
print(len(serving_data.index))
serving_data.head(5).T
Explanation: 2.1. Read serving data from BigQuery
End of explanation
serving_data_file = os.path.join(DATA_DIR, 'serving.csv')
serving_data.to_csv(serving_data_file, index=False)
Explanation: 2.2. Save serving data to CSV
End of explanation
slice_fn = tfdv.get_feature_value_slicer(features={'time': None})
serving_stats_list = tfdv.generate_statistics_from_csv(
data_location=serving_data_file,
stats_options=tfdv.StatsOptions(
slice_functions=[slice_fn],
schema=reference_schema
)
)
slice_keys = sorted([dataset.name for dataset in serving_stats_list.datasets])
slice_keys
Explanation: 3. Compute Statistics from Serving Data
In addition to calculating statistics for the full dataset, we also configure TFDV to calculate statistics for each time window, using a feature-value slicing function keyed on the time column.
End of explanation
anomalies_list = []
for slice_key in slice_keys[1:]:
serving_stats = tfdv.get_slice_stats(serving_stats_list, slice_key)
anomalies = tfdv.validate_statistics(
serving_stats,
schema=reference_schema,
previous_statistics=baseline_stats
)
anomalies_list.append(anomalies)
Explanation: 4. Validate Serving Statistics
End of explanation
slice_key = slice_keys[1]
serving_stats = tfdv.get_slice_stats(serving_stats_list, slice_key)
tfdv.visualize_statistics(
baseline_stats, serving_stats, 'baseline', 'current')
Explanation: Part 3: Analyzing Serving Data Statistics and Anomalies
1. Visualize Statistics
Visualize statistics for a time window with normal data points
End of explanation
slice_key = slice_keys[-1]
serving_stats = tfdv.get_slice_stats(serving_stats_list, slice_key)
tfdv.visualize_statistics(
baseline_stats, serving_stats, 'baseline', 'current')
Explanation: Visualize statistics for a time window with skewed data points
End of explanation
for i, anomalies in enumerate(anomalies_list):
tfdv.utils.anomalies_util.remove_anomaly_types(
anomalies, [anomalies_pb2.AnomalyInfo.SCHEMA_NEW_COLUMN])
print("Anomalies for {}".format(slice_keys[i+1]), )
tfdv.display_anomalies(anomalies)
Explanation: 2. Display Anomalies
End of explanation
categorical_features = [
feature.steps()[0]
for feature in tfdv.utils.schema_util.get_categorical_features(
reference_schema)
]
Explanation: 3. Analyze Statistics Change Over time
3.1. Numerical feature means over time
End of explanation
baseline_means = dict()
for feature in baseline_stats.datasets[0].features:
if feature.path.step[0] == 'time': continue
if feature.path.step[0] not in categorical_features:
mean = feature.num_stats.mean
baseline_means[feature.path.step[0]] = mean
from collections import defaultdict
feature_means = defaultdict(list)
for slice_key in slice_keys[1:]:
ds = tfdv.get_slice_stats(serving_stats_list, slice_key).datasets[0]
for feature in ds.features:
if feature.path.step[0] == 'time': continue
if feature.path.step[0] not in categorical_features:
mean = feature.num_stats.mean
feature_means[feature.path.step[0]].append(mean)
import matplotlib.pyplot as plt
dataframe = pd.DataFrame(feature_means, index=slice_keys[1:])
num_features = len(feature_means)
ncolumns = 3
nrows = int(num_features // ncolumns) + 1
fig, axes = plt.subplots(nrows=nrows, ncols=ncolumns, figsize=(25, 25))
for i, col in enumerate(dataframe.columns[:num_features]):
r = i // ncolumns
c = i % ncolumns
p = dataframe[col].plot.line(ax=axes[r][c], title=col, rot=10)
p.hlines(baseline_means[col], xmin=0, xmax=len(dataframe.index), color='red')
p.text(0, baseline_means[col], 'baseline mean', fontsize=15)
Explanation: Get mean values from baseline statistics
End of explanation
categorical_feature_stats = dict()
for feature_name in categorical_features:
categorical_feature_stats[feature_name] = dict()
for slice_key in slice_keys[1:]:
categorical_feature_stats[feature_name][slice_key] = dict()
ds = tfdv.get_slice_stats(serving_stats_list, slice_key).datasets[0]
for feature in ds.features:
if feature.path.step[0] == feature_name:
val_freq = list(feature.string_stats.top_values)
for item in val_freq:
categorical_feature_stats[feature_name][slice_key][item.value] = item.frequency
break
num_features = len(categorical_features)
ncolumns = 2
nrows = int(num_features // ncolumns) + 1
fig, axes = plt.subplots(nrows=nrows, ncols=ncolumns, figsize=(25, 15))
for i, feature_name in enumerate(categorical_features):
dataframe = pd.DataFrame(
categorical_feature_stats[feature_name]).T
r = i // ncolumns
c = i % ncolumns
dataframe.plot.bar(ax=axes[r][c], stacked=True, rot=10)
Explanation: 3.3. Categorical feature distribution over time
End of explanation
<END_TASK> |
15,543 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Using GeoData With Madgrigal
This notebook will give an example of how to use GeoData plotting functions with data from Madgrigal. The example takes ISR data, plots out plasma parameters over both range and time for one beam and then the values from the same beam over one time.
Step1: Read in Data
The first step is to read the data into the GeoData format. The user does this by creating a GeoData instance, as input it takes the read function, in this case readMad_hdf5, and a list of inputs. The subsequent lines are taking the log scaled electron density and putting them in a linear scale.
Step2: Range Time Plots
This set of plots will show the fitted plasma parameter values as a function of range and time. The specific beam chosen is pointed toward zenith so the slant range can be thought of as altitude.
Step3: Range vs Parameter Plots
Now ploting data from a single time. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib
import os
import scipy as sp
import matplotlib.pyplot as plt
from GeoData.GeoData import GeoData
from GeoData.utilityfuncs import readMad_hdf5
from GeoData.plotting import rangevsparam, rangevstime
Explanation: Using GeoData With Madrigal
This notebook gives an example of how to use GeoData plotting functions with data from Madrigal. The example takes ISR data, plots plasma parameters over both range and time for one beam, and then plots the values from the same beam at a single time.
End of explanation
madfile = os.path.join( 'pfa140105.004.hdf5')
data1 = GeoData(readMad_hdf5,[madfile,['nel','te','ti','vo','dnel','dte','dti','dvo']])
data1.data['ne']=sp.power(10.,data1.data['nel'])
data1.data['dne']=sp.power(10.,data1.data['dnel'])
Explanation: Read in Data
The first step is to read the data into the GeoData format. The user does this by creating a GeoData instance; as input it takes the read function (in this case readMad_hdf5) and a list of inputs. The subsequent lines convert the log-scaled electron density to a linear scale.
End of explanation
fig1,axmat =plt.subplots(2,2,figsize = (12,9),sharey=True,sharex=True)
axvec = axmat.flatten()
desired_beam = data1.dataloc[0,1:]
rangevstime(data1,desired_beam,vbounds=[1e9,8e11],gkey='ne',fig=fig1,ax=axvec[0])
rangevstime(data1,desired_beam,vbounds=[500.,2500.],gkey='te',fig=fig1,ax=axvec[1])
rangevstime(data1,desired_beam,vbounds=[500.,2500.],gkey='ti',fig=fig1,ax=axvec[2])
rangevstime(data1,desired_beam,vbounds=[-50.,50.],gkey='vo',fig=fig1,ax=axvec[3])
plt.tight_layout()
plt.subplots_adjust(top=0.9)
Explanation: Range Time Plots
This set of plots will show the fitted plasma parameter values as a function of range and time. The specific beam chosen is pointed toward zenith so the slant range can be thought of as altitude.
End of explanation
fig2,axmat2 =plt.subplots(2,2,figsize = (12,9),sharey=True,sharex=False)
axvec2 = axmat2.flatten()
paramlist = ['ne','te','ti','vo']
time1=335
handvec = []
pbounds = [[1e10,8e11],[200.,3500.],[200.,3500.],[-100.,100.]]
for inum, iax in enumerate(axvec2):
ploth = rangevsparam(data1,desired_beam,time1,gkey=paramlist[inum],gkeyerr='d'+paramlist[inum],fig=fig1,ax=iax,it=False)
iax.grid(True)
iax.set_xlim(pbounds[inum])
handvec.append(ploth)
plt.tight_layout()
Explanation: Range vs Parameter Plots
Now plotting data from a single time.
End of explanation
<END_TASK> |
15,544 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
모형 최적화 분산 처리
ipyparallel
http
Step1: Map / Reduce
빅데이터 할 때 듣는 용어. 분산처리 할 때
* map(function, data)
Step2: Parallel Map
map/reduce 연산을 engine process들에게 맡겨서 동시 실행
Step3: 모형 저장
모형을 분산처리하기 위해서는 sklearn.externals 서브패키지의 joblib.dump 명령과 joblib.load 명령 사용
pikle형태로 지금의 모델 안에 어트리뷰트 가진 형태대로 세이브 하고 긁어오고 한다.
ipyparalle 을 사용한 분산 모형 최적화 | <ASSISTANT_TASK:>
Python Code:
from ipyparallel import Client
c = Client()
c.ids
dview = c[:]
dview
Explanation: 모형 최적화 분산 처리
ipyparallel
http://ipyparallel.readthedocs.org/en/latest/index.html
Engine <-> Client
Engine: 실제 계산이 실행되는 프로세스
Client: 엔진을 제어하기 위한 인터페이스
$ conda install ipyparallel
Engine 가동/중지
가동
$ ipcluster start -n 4
중지
Control-C
코어가 1개당 워커는 1개로 지정. 8개라고 해서 8개 다 돌리면 안 된다.
프론트엔드를 가진 프로그램이 있고 가지지 않은 프로그램이 있다.
코어가 1개면 무의미. 코어가 여러개 있을 경우에 Ipython Clusters에서 engines을 여러개로 동시에 돌릴 수 있다.
프론트엔드? 일반적으로 프론트엔드와 벡엔드라는 용어는 프로세스의 처음과 마지막 단계를 가리킨다. 프론트엔드는 사용자로부터 다양한 형태의 입력을 받아 벡엔드가 사용할 수 있는 규격을 따라 처리할 책임을 진다. 프론트엔드 개발자는 사용자가 접하게 되는 외적 부분의 구성 및 기능 구현에 중점을 두는 개발자, 백엔드개발자는 DB나 서버처럼 사용자가 접하지 않는 부분의 내부 지원을 담당하는 개발자
원래 크롤링은 동시에 수백개를 돌린다는 의미다. 크롤링 할 수 있는 것은 scrapy가 파이썬에서 유일하다.
그런데 이거 왜 안 배웠지? scrapy 설정을 잘 해야 한다. 안 그러면 디도스로 판단되어 막혀버린다.
중간에 어디다가 저장하는 것이 파싱? LXML? BS는 너무 느려서 사실 잘 안 쓰인다. 스크래피는 자체 파서가 있다. 그래서 그걸 쓰면 된다.
XPath? [@id="comment_wrapper"]/div[2]/div/div[2] 이런식으로 복사해서 쓰면 된다.
CSS selecter 문법과 유사하다.
robots.txt에서 크롤링해도 된다. 안된다가 나온다. 법적으로
Client
End of explanation
def fahrenheit(T):
return 9 / 5 * T + 32
temp = np.arange(0, 110, 10)
temp
F = map(fahrenheit, temp)
F, list(F)
def create_prime(primes, n):
for p in primes:
if n % p == 0:
return primes
primes.append(n)
return primes
from functools import reduce
reduce(create_prime, np.arange(2, 100), [2])
Explanation: Map / Reduce
빅데이터 할 때 듣는 용어. 분산처리 할 때
* map(function, data): data 각각에 function을 실행하여 결과 출력. data를 리스트로 넣는다.
* reduce(function, data): function을 실행할 때 마다 결과의 수가 감소. 최종적으로 하나의 수가 남는다. 대표적인 것이 counting(예를 들어 뉴스그룹. 몇 십 년치 데이터면 컴퓨터 몇 십대인데 이럴 경우에)
End of explanation
def pyprimes(kmax): #의미 생각하지 말고 소수 구하는 복잡한 함수다 정도만 알아두어라
p = np.zeros(1000)
result = []
if kmax > 1000:
kmax = 1000
k = 0
n = 2
while k < kmax:
i = 0
while i < k and n % p[i] != 0:
i = i + 1
if i == k:
p[k] = n
k = k + 1
result.append(n)
n = n + 1
return result
%time result = map(pyprimes, range(700, 1000)) #도커 안이라서 아래와 이것과 시간이 같게 나올 것이다. 아래 거는 서버에서 돌리면 다를듯
%time parallel_result = dview.map_sync(pyprimes, range(700, 1000)) #6명 중 1명이라도 답을 안준다면 안 주고 다 끝나고 나서 끝이다.
parallel_result == result
async_result = dview.map_async(pyprimes, range(700, 1000)) #안 끝나도 중간에 제어권 돌려주고 모니터링 알아서 해라.
async_result.progress #몇 명이 완성했는지 알려준다.
async_result.get()[0][-10:]
Explanation: Parallel Map
map/reduce 연산을 engine process들에게 맡겨서 동시 실행
End of explanation
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC
from sklearn.pipeline import Pipeline
news = fetch_20newsgroups(subset="all")
n_samples = 3000
X_train = news.data[:n_samples]
y_train = news.target[:n_samples]
model = Pipeline([
('vect', TfidfVectorizer(stop_words="english", token_pattern="\b[a-z0-9_\-\.]+[a-z][a-z0-9_\-\.]+\b")),
('svc', SVC()),
])
from sklearn.externals import joblib
import os
from sklearn.cross_validation import KFold, cross_val_score
def persist_cv_splits(X, y, K=3, name="data", suffix="_cv_%03d.pkl"): #데이터를 나눈다. 나눠서 저장한다.
cv_split_filenames = []
cv = KFold(n_samples, K, shuffle=True, random_state=0)
for i, (train, test) in enumerate(cv):
cv_fold = ([X[k] for k in train], y[train],
[X[k] for k in test], y[test])
cv_split_filename = name + suffix % i
cv_split_filename = os.path.abspath(cv_split_filename)
joblib.dump(cv_fold, cv_split_filename)
cv_split_filenames.append(cv_split_filename)
return cv_split_filenames
cv_filenames = persist_cv_splits(X_train, y_train, name="news")
cv_filenames
def compute_evaluation(cv_split_filename, model, params):
from sklearn.externals import joblib
X_train_, y_train_, X_test_, y_test_ = joblib.load(cv_split_filename, mmap_mode="c")
model.set_params(**params)
model.fit(X_train_, y_train_)
test_scores = model.score(X_test_, y_test_)
return test_scores
from sklearn.grid_search import ParameterGrid
def parallel_grid_search(lb_view, model, cv_split_filenames, param_grid): #lb_view 엔진에 대한 view.
all_tasks = []
all_parameters = list(ParameterGrid(param_grid))
for i, params in enumerate(all_parameters):
task_for_params = []
for j, cv_split_filename in enumerate(cv_split_filenames):
t = lb_view.apply(compute_evaluation, cv_split_filename, model, params) #map이랑 유사. apply는 하나짜리 함수 실행. 여기 말고 엔진에 가서 실행
task_for_params.append(t)
all_tasks.append(task_for_params)
return all_parameters, all_tasks
import datetime
def print_progress(tasks):
progress = np.mean([task.ready() for task_group in tasks for task in task_group])
print("{0}:{1}%".format(datetime.datetime.now(), progress * 100.0))
return int(progress * 100.0)
from ipyparallel import Client
client = Client()
print(client.ids)
lb_view = client.load_balanced_view()
from sklearn.grid_search import GridSearchCV
parameters = {
"svc__gamma": np.logspace(-2, 1, 4),
"svc__C": np.logspace(-1, 1, 3),
}
all_parameters, all_tasks = parallel_grid_search(lb_view, model, cv_filenames, parameters)
import time
start_time = datetime.datetime.now()
while True:
progress = print_progress(all_tasks)
if progress >= 100:
break
time.sleep(1)
print("finish")
end_time = datetime.datetime.now()
print((end_time - start_time).total_seconds())
Explanation: Saving models
To distribute a model for parallel processing, use the joblib.dump and joblib.load commands from the sklearn.externals subpackage.
The model is saved as a pickle, with whatever attributes it currently has, and can be loaded back in the same state.
Distributed model optimization with ipyparallel
End of explanation
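The save/load cycle on its own is short; a minimal sketch is shown below, reusing the document's sklearn.externals import (the filename is illustrative).
from sklearn.externals import joblib
# Persist the fitted pipeline and load it back later, attributes intact
joblib.dump(model, 'news_model.pkl')
restored_model = joblib.load('news_model.pkl')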
<END_TASK> |
15,545 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Week 5 - Crafting the public interface.
Learning Objectives
Explain what a public interface is
Discuss the advantages of defining a public interface
Compare different public interfaces
Design a simple public interface
Inheritance
Last week we looked at inheritance, building a general class that we could then extend with additional functionality for special situations.
Each of the classes we create inheriting from our general class can be thought of as having a 'is-a' relationship with the general class. For example, looking at our Item example from last week Equipment is a Item, Consumable is a Item.
Step5: Composition
In week 3 we took example projects and broke them down into a collection of different classes. Many of you chose the cookbook example for the assignment and questioned whether things like ingredients should be attributes on the recipe class or classes in their own right. Often the answer is both. These are the interactions that change a collection of different classes into a functioning program. This is called composition. The Recipe object is a composite object, it has ingredients, it has instructions, etc.
This week we will look at how we can design our classes to be easy to use, for both programmer-class and class-class interactions.
Step6: This has the basic functionality implemented but there are some improvements we can make.
Before we look at making changes we can seek inspiration. Requests and Pandas are two packages well regarded for having well implemented interfaces.
Requests
Step7: The API documentation for requests
The Response class
Some useful features
Step12: The API documentation for the DataFrame object.
The actual code.
Some useful features
Step17: Viewing the ingredients now looks much better. Let's now look at the get_nutrition method.
There are still a number of areas that could be improved
When we call get_nutrition it is not clear what the different values returned actually are
We don't use the get_nutrition method when calculating the nutrition values in the Recipe class
There is no way to add additional types of nutrient
Ingredient and Recipe return different types from get_nutrition, tuple and list respectively
Recipe could not be used as an ingredient for another Recipe
Step18: WSGI
The value of building and documenting a interface to our code is not unique to object oriented programming.
Next week we will look at creating websites as an alternative to command line programs and GUIs. Python has a rich ecosystem of web servers and frameworks for creating web applications. Importantly, the vast majority use a common interface called WSGI.
WSGI is based on a simple exchange. The example below use the wsgiref package for the web server with the application implemented without using external packages. Next week, we will look at some of the more commonly used web servers and use a web framework to develop a more substantial web project.
Step24: Assignments
Modify the Ingredient and Recipe classes so that the following code works. | <ASSISTANT_TASK:>
Python Code:
class Item(object):
def __init__(self, name, description, location):
self.name = name
self.description = description
self.location = location
def update_location(self, new_location):
pass
class Equipment(Item):
pass
class Consumable(Item):
def __init__(self, name, description, location, initial_quantity, current_quantity, storage_temp, flammability):
self.name = name
self.description = description
self.location = location
self.initial_quantity = initial_quantity
self.current_quantity = current_quantity
self.flammability = flammability
def update_quantity_remaining(self, amount):
pass
Explanation: Week 5 - Crafting the public interface.
Learning Objectives
Explain what a public interface is
Discuss the advantages of defining a public interface
Compare different public interfaces
Design a simple public interface
Inheritance
Last week we looked at inheritance, building a general class that we could then extend with additional functionality for special situations.
Each of the classes we create inheriting from our general class can be thought of as having a 'is-a' relationship with the general class. For example, looking at our Item example from last week Equipment is a Item, Consumable is a Item.
End of explanation
class Ingredient(object):
The ingredient object that contains nutritional information
def __init__(self, name, carbs, protein, fat):
self.name = name
self.carbs = carbs
self.protein = protein
self.fat = fat
def get_nutrition(self):
Returns the nutritional information for the ingredient
return (self.carbs, self.protein, self.fat)
class Recipe(object):
The Recipe object containing the ingredients
def __init__(self, name, ingredients):
self.name = name
self.ingredients = ingredients
def get_nutrition(self):
Returns the nutritional information for the recipe
nutrition = [0, 0, 0]
for amount, ingredient in self.ingredients:
nutrition[0] += amount * ingredient.carbs
nutrition[1] += amount * ingredient.protein
nutrition[2] += amount * ingredient.fat
return nutrition
bread = Recipe('Bread', [(820, Ingredient('Flour', 0.77, 0.10, 0.01)),
(30, Ingredient('Oil', 0, 0, 1)),
(36, Ingredient('Sugar', 1, 0, 0)),
(7, Ingredient('Yeast', 0.3125, 0.5, 0.0625)),
(560, Ingredient('Water', 0, 0, 0))])
print(bread.ingredients)
print(bread.get_nutrition())
Explanation: Composition
In week 3 we took example projects and broke them down into a collection of different classes. Many of you chose the cookbook example for the assignment and questioned whether things like ingredients should be attributes on the recipe class or classes in their own right. Often the answer is both. These are the interactions that change a collection of different classes into a functioning program. This is called composition. The Recipe object is a composite object, it has ingredients, it has instructions, etc.
This week we will look at how we can design our classes to be easy to use, for both programmer-class and class-class interactions.
End of explanation
import requests
r = requests.get('https://api.github.com/repos/streety/biof509/events')
print(r.status_code)
print(r.headers['content-type'])
print(r.text[:1000])
print(r.json()[0]['payload']['commits'][0]['message'])
type(r)
Explanation: This has the basic functionality implemented but there are some improvements we can make.
Before we look at making changes we can seek inspiration. Requests and Pandas are two packages well regarded for having well implemented interfaces.
Requests: HTTP for Humans
Requests is a package used for making HTTP requests. There are options in the python standard library for making http requests but they can seem difficult to use.
End of explanation
import pandas as pd
data = pd.DataFrame([[0,1,2,3], [4,5,6,7], [8,9,10,11]], index=['a', 'b', 'c'], columns=['col1', 'col2', 'col3', 'col4'])
data
print(data.shape)
print(data['col1'])
print(data.col1)
import matplotlib.pyplot as plt
%matplotlib inline
data.plot()
data.to_csv('Wk05-temp.csv')
data2 = pd.read_csv('Wk05-temp.csv', index_col=0)
print(data2)
Explanation: The API documentation for requests
The Response class
Some useful features:
property
Pandas
pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and
data analysis tools for the Python programming language.
End of explanation
class Ingredient(object):
The ingredient object that contains nutritional information
def __init__(self, name, carbs, protein, fat):
self.name = name
self.carbs = carbs
self.protein = protein
self.fat = fat
def __repr__(self):
return 'Ingredient({0}, {1}, {2}, {3})'.format(self.name, self.carbs, self.protein, self.fat)
def get_nutrition(self):
Returns the nutritional information for the ingredient
return (self.carbs, self.protein, self.fat)
class Recipe(object):
The Recipe object containing the ingredients
def __init__(self, name, ingredients):
self.name = name
self.ingredients = ingredients
def get_nutrition(self):
Returns the nutritional information for the recipe
nutrition = [0, 0, 0]
for amount, ingredient in self.ingredients:
nutrition[0] += amount * ingredient.carbs
nutrition[1] += amount * ingredient.protein
nutrition[2] += amount * ingredient.fat
return nutrition
bread = Recipe('Bread', [(820, Ingredient('Flour', 0.77, 0.10, 0.01)),
(30, Ingredient('Oil', 0, 0, 1)),
(36, Ingredient('Sugar', 1, 0, 0)),
(7, Ingredient('Yeast', 0.3125, 0.5, 0.0625)),
(560, Ingredient('Water', 0, 0, 0))])
print(bread.ingredients)
print(bread.get_nutrition())
Explanation: The API documentation for the DataFrame object.
The actual code.
Some useful features:
* classmethod
* property
* __getitem__
* Public and private attributes/methods
* __getattr__
Cookbook
We can now return to our cookbook example.
Displaying the ingredients needs to be improved.
End of explanation
class Ingredient(object):
The ingredient object that contains nutritional information
def __init__(self, name, carbs, protein, fat):
self.name = name
self.carbs = carbs
self.protein = protein
self.fat = fat
def __repr__(self):
return 'Ingredient({0}, {1}, {2}, {3})'.format(self.name, self.carbs, self.protein, self.fat)
def get_nutrition(self):
Returns the nutritional information for the ingredient
return (self.carbs, self.protein, self.fat)
class Recipe(object):
The Recipe object containing the ingredients
def __init__(self, name, ingredients):
self.name = name
self.ingredients = ingredients
def get_nutrition(self):
Returns the nutritional information for the recipe
nutrition = [0, 0, 0]
for amount, ingredient in self.ingredients:
nutrition[0] += amount * ingredient.carbs
nutrition[1] += amount * ingredient.protein
nutrition[2] += amount * ingredient.fat
return nutrition
bread = Recipe('Bread', [(820, Ingredient('Flour', 0.77, 0.10, 0.01)),
(30, Ingredient('Oil', 0, 0, 1)),
(36, Ingredient('Sugar', 1, 0, 0)),
(7, Ingredient('Yeast', 0.3125, 0.5, 0.0625)),
(560, Ingredient('Water', 0, 0, 0))])
print(bread.ingredients)
print(bread.get_nutrition())
Explanation: Viewing the ingredients now looks much better. Let's now look at the get_nutrition method.
There are still a number of areas that could be improved
When we call get_nutrition it is not clear what the different values returned actually are
We don't use the get_nutrition method when calculating the nutrition values in the Recipe class
There is no way to add additional types of nutrient
Ingredient and Recipe return different types from get_nutrition, tuple and list respectively
Recipe could not be used as an ingredient for another Recipe
End of explanation
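One possible direction for the first point is to return labelled values instead of a bare tuple; this is only a sketch, and the assignment below takes the idea further.
class LabelledIngredient(Ingredient):
    def get_nutrition(self):
        # name each value so callers do not have to remember the ordering
        return {'carbs': self.carbs, 'protein': self.protein, 'fat': self.fat}
print(LabelledIngredient('Flour', 0.77, 0.10, 0.01).get_nutrition())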
!cat Wk05-wsgi.py
Explanation: WSGI
The value of building and documenting an interface to our code is not unique to object oriented programming.
Next week we will look at creating websites as an alternative to command line programs and GUIs. Python has a rich ecosystem of web servers and frameworks for creating web applications. Importantly, the vast majority use a common interface called WSGI.
WSGI is based on a simple exchange. The example below uses the wsgiref package for the web server, with the application implemented without any external packages. Next week, we will look at some of the more commonly used web servers and use a web framework to develop a more substantial web project.
End of explanation
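The exchange itself is small enough to sketch here; this is a minimal, hypothetical stand-in for the Wk05-wsgi.py file printed above, built only on the standard library.
from wsgiref.simple_server import make_server

def application(environ, start_response):
    # environ carries the request details; start_response sends status and headers
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from a minimal WSGI app\n']

server = make_server('', 8000, application)
server.handle_request()  # serve a single request and exit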
class Ingredient(object):
The ingredient object that contains nutritional information
def __init__(self, name, *args, **kwargs):
self.name = name
self.nums = []
for a in [*args]:
if isinstance(a, dict):
for key in a.keys():
setattr(self, key, a[key])
elif isinstance(a, float):
self.nums.append(a)
if len(self.nums) in [3,4]:
for n, val in zip(['carbs', 'protein', 'fat', 'cholesterol'], self.nums):
setattr(self, n, val)
elif isinstance(a, int):
self.nums.append(a)
if len(self.nums) in [3,4]:
for n, val in zip(['carbs', 'protein', 'fat', 'cholesterol'], self.nums):
setattr(self, n, val)
else:
print('Need correct nutritional information format')
def __repr__(self):
if getattr(self, 'cholesterol', False):
return 'Ingredient({0}, {1}, {2}, {3}, {4})'.format(self.name,
self.carbs,
self.protein,
self.fat,
self.cholesterol)
else:
return 'Ingredient({0}, {1}, {2}, {3})'.format(self.name,
self.carbs,
self.protein,
self.fat)
def get_nutrition(self):
Returns the nutritional information for the ingredient
return (self.carbs, self.protein, self.fat, self.cholesterol)
def get_name(self):
Returns the ingredient name
return self.name
class Recipe(object):
The Recipe object containing the ingredients
def __init__(self, name, *ingredients):
self.name = name
self.ingredients = [*ingredients][0]
self.number = len(*ingredients)
self.nutrition_ = {'carbs': 0, 'protein': 0, 'fat':0, 'cholesterol':0}
def __repr__(self):
return 'Recipe({0}, {1})'.format(self.name, self.ingredients)
def get_nutrition(self):
Returns the nutritional information for the recipe
#for _ in range(self.number):
nutrition = [0,0,0,0] # need to be length of dict
for amount, ingredient in self.ingredients:
# print(type(ingredient), ingredient) # test
try:
if getattr(ingredient, 'cholesterol', False):
nutrition[0] += amount * ingredient.carbs
nutrition[1] += amount * ingredient.protein
nutrition[2] += amount * ingredient.fat
nutrition[3] += amount * ingredient.cholesterol
else:
nutrition[0] += amount * ingredient.carbs
nutrition[1] += amount * ingredient.protein
nutrition[2] += amount * ingredient.fat
except AttributeError: # in case another recipe is in the ingredients (nested)
nu = ingredient.get_nutrition()
nu = [amount * x for x in nu]
nutrition[0] += nu[0]
nutrition[1] += nu[1]
nutrition[2] += nu[2]
nutrition[3] += nu[3]
return nutrition
@property
def nutrition(self):
facts = self.get_nutrition()
self.nutrition_['carbs'] = facts[0]
self.nutrition_['protein'] = facts[1]
self.nutrition_['fat'] = facts[2]
self.nutrition_['cholesterol'] = facts[3]
return self.nutrition_
def get_name(self):
return self.name
bread = Recipe('Bread', [(820, Ingredient('Flour', 0.77, 0.10, 0.01)),
(30, Ingredient('Oil', 0, 0, 1)),
(36, Ingredient('Sugar', 1, 0, 0)),
(7, Ingredient('Yeast', 0.3125, 0.5, 0.0625)),
(560, Ingredient('Water', 0, 0, 0))])
print(bread.ingredients)
# Should be roughly [(820, Ingredient(Flour, 0.77, 0.1, 0.01)), (30, Ingredient(Oil, 0, 0, 1)),
# (36, Ingredient(Sugar, 1, 0, 0)), (7, Ingredient(Yeast, 0.3125, 0.5, 0.0625)), (560, Ingredient(Water, 0, 0, 0))]
print(bread.nutrition)
#Should be roughly {'carbs': 669.5875, 'protein': 85.5, 'fat': 38.6375} the order is not important
eggs = Ingredient('Egg', {'carbs': 0.0077, 'protein': 0.1258, 'fat': 0.0994, 'cholesterol': 0.00423, 'awesome':100})
#eggs = Ingredient('Egg', {'carbs': 0.0077, 'protein': 0.1258, 'fat': 0.0994})
#eggs = Ingredient('Egg', 0.0077, 0.1258, 0.0994, 0.00423)
print(eggs)
#Points to note:
# - The different call to Ingredient, you can use isinstance or type to change the
# behaviour depending on the arguments supplied
# - Cholesterol as an extra nutrient, your implementation should accept any nutrient
# - Use of Recipe (bread) as an ingredient
basic_french_toast = Recipe('Basic French Toast', [(300, Ingredient('Egg', {'carbs': 0.0077, 'protein': 0.1258,
'fat': 0.0994, 'cholesterol': 0.00423})),
(0.25, bread)])
print(basic_french_toast.ingredients)
# Should be roughly:
# [(300, Ingredient(Egg, 0.0077, 0.1258, 0.0994)), (0.25, Recipe(Bread, [(820, Ingredient(Flour, 0.77, 0.1, 0.01)),
# (30, Ingredient(Oil, 0, 0, 1)), (36, Ingredient(Sugar, 1, 0, 0)), (7, Ingredient(Yeast, 0.3125, 0.5, 0.0625)),
# (560, Ingredient(Water, 0, 0, 0))]))]
# Note the formatting for the Recipe object, a __repr__ method will be needed
print(basic_french_toast.nutrition)
# Should be roughly {'protein': 59.115, 'carbs': 169.706875, 'cholesterol': 1.2690000000000001, 'fat': 39.479375000000005}
# The order is not important
Explanation: Assignments
Modify the Ingredient and Recipe classes so that the following code works.
End of explanation
<END_TASK> |
15,546 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Table of Contents
<p><div class="lev1"><a href="#SC-2---Can-We-Distinguish-"Close"-Model-Variants?"><span class="toc-item-num">1 </span>SC-2 - Can We Distinguish "Close" Model Variants?</a></div><div class="lev2"><a href="#Feature-Engineering"><span class="toc-item-num">1.1 </span>Feature Engineering</a></div><div class="lev2"><a href="#Classifier"><span class="toc-item-num">1.2 </span>Classifier</a></div><div class="lev2"><a href="#Finding-Optimal-Hyperparameters"><span class="toc-item-num">1.3 </span>Finding Optimal Hyperparameters</a></div><div class="lev2"><a href="#Observations"><span class="toc-item-num">1.4 </span>Observations</a></div>
# SC-2 - Can We Distinguish "Close" Model Variants? #
In this second experiment in seriation classification, the question is whether we can distinguish variants of the same basic structural model which differ in their details. In this case, I look at "lineage" models with four variants
Step2: In addition to needing a train/test split, we need to ensure reasonable class balance. A simple approach to this is simply to shuffle both lists before taking a random sample without replacement.
Step4: Feature Engineering
The goal here is to construct a standard training and test data matrix of numeric values, which will contain the sorted Laplacian eigenvalues of the graphs in each data set. One feature will thus represent the largest eigenvalue for each graph, a second feature will represent the second largest eigenvalue, and so on.
We do not necessarily assume that all of the graphs have the same number of vertices, although if there are marked differences, we would need to handle missing data for those graphs which had many fewer eigenvalues (or restrict our slice of the spectrum to the smallest number of eigenvalues present).
Step5: Classifier
We're going to be using a gradient boosted classifier, which has some of best accuracy of any of the standard classifier methods. Ultimately we'll figure out the best hyperparameters using cross-validation, but first we just want to see whether the approach gets us anywhere in the right ballpark -- remember, we can 80% accuracy with just eigenvalue distance, so we have to be in that neighborhood or higher to be worth the effort of switching to a more complex model.
Step6: Overall, the accuracy is low, but interestingly, there is a pattern. We never mistake seriations which have an "early" event from those with a "late" event, but we have trouble telling a early split from an early coalescence, and trouble telling a late split from a late coalescence. This is a slightly weird result, actually.
Finding Optimal Hyperparameters | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import networkx as nx
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import cPickle as pickle
from copy import deepcopy
from sklearn.utils import shuffle
%matplotlib inline
plt.style.use("fivethirtyeight")
sns.set()
all_graphs = pickle.load(open("train-cont-graphs.pkl",'r'))
all_labels = pickle.load(open("train-cont-labels.pkl",'r'))
Explanation: Table of Contents
<p><div class="lev1"><a href="#SC-2---Can-We-Distinguish-"Close"-Model-Variants?"><span class="toc-item-num">1 </span>SC-2 - Can We Distinguish "Close" Model Variants?</a></div><div class="lev2"><a href="#Feature-Engineering"><span class="toc-item-num">1.1 </span>Feature Engineering</a></div><div class="lev2"><a href="#Classifier"><span class="toc-item-num">1.2 </span>Classifier</a></div><div class="lev2"><a href="#Finding-Optimal-Hyperparameters"><span class="toc-item-num">1.3 </span>Finding Optimal Hyperparameters</a></div><div class="lev2"><a href="#Observations"><span class="toc-item-num">1.4 </span>Observations</a></div>
# SC-2 - Can We Distinguish "Close" Model Variants? #
In this second experiment in seriation classification, the question is whether we can distinguish variants of the same basic structural model which differ in their details. In this case, I look at "lineage" models with four variants: lineages that split early and evolve for a longer period, split late and have more time as a single unified lineage, lineages that coalesce early and evolve for a longer period, and late coalescence with a longer period of separate evolution. These models are clearly interesting from an archaeological perspective, but since distinguishing them visually in seriations may rely upon additional information to orient things, they may be topologically equivalent. Thus, my expectation in this experiment is very low classification performance, close to chance.
I use the same approach as sc-1: a gradient boosted classifier with the Laplacian eigenvalues as features.
End of explanation
def train_test_split(graph_list, label_list, test_fraction=0.20):
Randomly splits a set of graphs and labels into training and testing data sets. We need a custom function
because the dataset isn't a numeric matrix, but a list of NetworkX Graph objects. In case there is class
structure (i.e., we filled the arrays first with instances of one class, then another class...) we consistently
shuffle both lists.
graph_list, label_list = shuffle(graph_list, label_list)
rand_ix = np.random.randint(0, len(graph_list), size=int(len(graph_list) * test_fraction))  # randint excludes the high value, so every index is valid
print "random indices: %s" % rand_ix
test_graphs = []
test_labels = []
train_graphs = []
train_labels = []
# first copy the chosen test values, without deleting anything since that would alter the indices
for ix in rand_ix:
test_graphs.append(graph_list[ix])
test_labels.append(label_list[ix])
# now copy the indices that are NOT in the test index list
for ix in range(0, len(graph_list)):
if ix in rand_ix:
continue
train_graphs.append(graph_list[ix])
train_labels.append(label_list[ix])
return (train_graphs, train_labels, test_graphs, test_labels)
Explanation: In addition to needing a train/test split, we need to ensure reasonable class balance. A simple approach to this is simply to shuffle both lists before taking a random sample without replacement.
End of explanation
train_graphs, train_labels, test_graphs, test_labels = train_test_split(all_graphs, all_labels, test_fraction=0.1)
print "train size: %s" % len(train_graphs)
print "test size: %s" % len(test_graphs)
def graphs_to_eigenvalue_matrix(graph_list, num_eigenvalues = None):
Given a list of NetworkX graphs, returns a numeric matrix where rows represent graphs,
and columns represent the reverse sorted eigenvalues of the Laplacian matrix for each graph,
possibly trimmed to only use the num_eigenvalues largest values. If num_eigenvalues is
unspecified, all eigenvalues are used.
# we either use all of the eigenvalues, or the number requested (and zero-pad if needed)
if num_eigenvalues is None:
ev_used = max(g.number_of_nodes() for g in graph_list)  # the largest graph determines how many eigenvalues can exist
else:
ev_used = num_eigenvalues
data_mat = np.zeros((len(graph_list),ev_used))
for ix in range(0, len(graph_list)):
spectrum = sorted(nx.spectrum.laplacian_spectrum(graph_list[ix], weight=None), reverse=True)
# if the spectrum is shorter than the number of eigenvalues used (due to multiplicity), zero pad the result
if len(spectrum) < ev_used:
spectrum = np.lib.pad(spectrum, (0,ev_used-len(spectrum)), 'constant', constant_values=(0,0))
data_mat[ix,:] = spectrum[0:ev_used]
return data_mat
Explanation: Feature Engineering
The goal here is to construct a standard training and test data matrix of numeric values, which will contain the sorted Laplacian eigenvalues of the graphs in each data set. One feature will thus represent the largest eigenvalue for each graph, a second feature will represent the second largest eigenvalue, and so on.
We do not necessarily assume that all of the graphs have the same number of vertices, although if there are marked differences, we would need to handle missing data for those graphs which had many fewer eigenvalues (or restrict our slice of the spectrum to the smallest number of eigenvalues present).
End of explanation
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
train_matrix = graphs_to_eigenvalue_matrix(train_graphs, num_eigenvalues=20)
test_matrix = graphs_to_eigenvalue_matrix(test_graphs, num_eigenvalues=20)
print train_matrix.shape
print test_matrix.shape
clf = GradientBoostingClassifier(n_estimators = 250)
clf.fit(train_matrix, train_labels)
pred_label = clf.predict(test_matrix)
cm = confusion_matrix(test_labels, pred_label)
cmdf = pd.DataFrame(cm)
cmdf.columns = map(lambda x: 'predicted {}'.format(x), cmdf.columns)
cmdf.index = map(lambda x: 'actual {}'.format(x), cmdf.index)
print cmdf
print classification_report(test_labels, pred_label)
print "Accuracy on test: %0.3f" % accuracy_score(test_labels, pred_label)
Explanation: Classifier
We're going to be using a gradient boosted classifier, which has some of best accuracy of any of the standard classifier methods. Ultimately we'll figure out the best hyperparameters using cross-validation, but first we just want to see whether the approach gets us anywhere in the right ballpark -- remember, we can 80% accuracy with just eigenvalue distance, so we have to be in that neighborhood or higher to be worth the effort of switching to a more complex model.
End of explanation
from sklearn.pipeline import Pipeline
from sklearn.grid_search import GridSearchCV
pipeline = Pipeline([
('clf', GradientBoostingClassifier())
])
params = {
'clf__learning_rate': [5.0,2.0,1.0, 0.75, 0.5, 0.25, 0.1, 0.05, 0.01, 0.005],
'clf__n_estimators': [10,25,50,100,250,500,1000]
}
grid_search = GridSearchCV(pipeline, params, cv=5, n_jobs = -1, verbose = 1)
grid_search.fit(train_matrix, train_labels)
print("Best score: %0.3f" % grid_search.best_score_)
print("Best parameters:")
best_params = grid_search.best_estimator_.get_params()
for param in sorted(params.keys()):
print("param: %s: %r" % (param, best_params[param]))
pred_label = grid_search.predict(test_matrix)
cm = confusion_matrix(test_labels, pred_label)
cmdf = pd.DataFrame(cm)
cmdf.columns = map(lambda x: 'predicted {}'.format(x), cmdf.columns)
cmdf.index = map(lambda x: 'actual {}'.format(x), cmdf.index)
print cmdf
print classification_report(test_labels, pred_label)
print "Accuracy on test: %0.3f" % accuracy_score(test_labels, pred_label)
Explanation: Overall, the accuracy is low, but interestingly, there is a pattern. We never mistake seriations which have an "early" event from those with a "late" event, but we have trouble telling a early split from an early coalescence, and trouble telling a late split from a late coalescence. This is a slightly weird result, actually.
Finding Optimal Hyperparameters
End of explanation
<END_TASK> |
15,547 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
TensorFlow
TensorFlow is an open source library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation on almost any platforms.
Programming Models for Deep Learning
Symbolic v.s. Imperative style programs
If you are a python or C++ programmer, then you are already familiar with imperative programs. Imperative style programs conduct the computation as we run them. Most of the code you write in python is imperative, for example
Step1: Get familiar with the following basic tensorflow methods
Step2: Linear Regression example | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
Explanation: TensorFlow
TensorFlow is an open source library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation on almost any platforms.
Programming Models for Deep Learning
Symbolic v.s. Imperative style programs
If you are a python or C++ programmer, then you are already familiar with imperative programs. Imperative style programs conduct the computation as we run them. Most of the code you write in python is imperative, for example:
import numpy as np
a = np.ones(10)
b = np.ones(10) * 2
c = b * a
Symbolic programs are different. The following lines are an equivalent symbolic style program that achieves the same goal:
A = Variable()
B = Constant()
C = B * A
# compiles the function
f = compile(C)
# run the function
c = f.run(A=np.ones(10), B=np.ones(10)*2)
when C = B * A is executed, there is no actual computation happening. Instead, these operations generate a computation graph (symbolic graph) that represents the computation. Symbolic programs separates computation graph (1)definition, (2)compiling, and (3)running step.
Generally speaking, imperative programs are more flexible, while symblic programs are more efficient (graph optimizations, better garbage collections).
End of explanation
# Define C=B*A in a symbolic way
A = tf.Variable(tf.ones([10]))
B = tf.constant(np.ones(10)*2, tf.float32)
C = tf.multiply(A, B)
init = tf.global_variables_initializer()
# Launch the graph
with tf.Session() as sess:
# initialize variables
sess.run(init)
# run the graph and evaluate C
c = sess.run([C])
print('c:', c)
Explanation: Get familiar with the following basic tensorflow methods:
# define constant
tf.Constant()
# define trainable parameters
tf.Variable()
# holding mini-batch input data to the graph
tf.placeholder()
# common neural network layers
tf.nn.*()
# Launch the existing graph
tf.Session()
Now let's first implement 'C=B*A' in TensorFlow!
End of explanation
# Generate ground truth 100 x, y data points in NumPy, y = 3.0 * x + 1.0
# Regress for W and b that compute y_data = W * x_data + b
x_data = np.random.rand(100).astype(np.float32)
y_data = 3.0 * x_data + 1.0
plt.plot(x_data, y_data)
# define trainable variables
W = tf.Variable(tf.random_uniform([1], -1.0, 1.0))
b = tf.Variable(tf.zeros([1]))
# define graph operations
y = tf.multiply(W, x_data) + b
# define loss, L2
loss = tf.reduce_mean(tf.square(y - y_data))
# define optimizer for training
train_optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.5).minimize(loss)
# define the operation that initializes variables
init = tf.global_variables_initializer()
# Launch the graph
with tf.Session() as sess:
# initialization
sess.run(init)
# starting training
training_iters = 100
for step in range(training_iters):
if step % 20 == 0 or (step+1)==training_iters:
print(step, sess.run(W), sess.run(b))
# run optimizer during training
_ = sess.run([train_optimizer])
Explanation: Linear Regression example
End of explanation
<END_TASK> |
15,548 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
VIZBI Tutorial Session
Part 2
Step1: Run System Command
You can run system commands by adding ! at the beggining.
Remember
Step2: Magic!
In IPython Notebook, there is a nice feature called magic. Magic is a group of commands to execute some usuful functions just like system commands.
Two types of Magic
Line magics
Step3: Create file manually with %%writefile
Step4: Execute under other interpreters | <ASSISTANT_TASK:>
Python Code:
print('Hello IPython World!')
!pip list
result1 = 1+1
result2 = 2*3
result2
result1
print('2nd = ' + str(result2))
print('1st = ' + str(result1))
print('2nd = ' + str(result2))
print('1st = ' + str(result1))
Explanation: VIZBI Tutorial Session
Part 2: Cytoscape, IPython, Docker, and reproducible network data visualization workflows
Tuesday, 3/24/2015
Lesson 0: Introduction to IPython Notebook
by Keiichiro Ono
IPython Notebook is a simple tool to run your code in human-frienfly documents (Notebooks), and you can boost your productivity by learning some basic commands.
Keyboard Shortcuts
There are many keyboard shortcuts, but for now, you just need to learn the following:
Shift-Enter - Run cell
Ctrl-Enter - Run cell in-place
Alt-Enter - Run cell, insert below
Esc and Enter - Command mode and edit mode
Basic Commands in Command Mode
x - cut cell
v - paste cell below
SHIFT + v - paste cell above
dd - dlete cell
Complete list of shortcuts is available under Help menu:
OK, let's start!
End of explanation
!ls -alh
!ifconfig
Explanation: Run System Command
You can run system commands by adding ! at the beggining.
Remember: you are running this notebook in Linux container. You cannot use Windows/Mac commands even if you are using those machines!
End of explanation
# Import NetworkX library, which is already installed in your Docker container
import networkx as nx
# Create a ranom graph with 100 nodes using Barabashi-Albert Model ()
ba=nx.barabasi_albert_graph(100,5)
# Check the performance of a NetworkX function (calculate betweenness centrality) by running 10 times
%timeit -n 10 nx.betweenness_centrality(ba)
%%timeit -n 10
# Or, check performance of the entire cell
ws = nx.watts_strogatz_graph(100,3,0.1)
btw = nx.betweenness_centrality(ws)
Explanation: Magic!
In IPython Notebook, there is a nice feature called magic. Magic is a group of commands to execute some usuful functions just like system commands.
Two types of Magic
Line magics: Prepended by one % character, only to the end of the current line.
Cell magics: Start with %% and applied to the entire cell
Here is the list of handy magics:
Simple performance test with %timeit
End of explanation
%%writefile data/small_network.sif
node1 is_a node2
node2 child_of node3
node3 child_of node1
!cat data/small_network.sif
Explanation: Create file manually with %%writefile
End of explanation
%%bash
export FOO='Env var 1'
echo $FOO
%%javascript
var foo = function(a) {
return a+1;
};
console.log(foo(2));
Explanation: Execute under other interpreters
End of explanation
<END_TASK> |
15,549 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Spam Filter Implementation
Step3: Download the following files and extract them into a folder which we will reference below | <ASSISTANT_TASK:>
Python Code:
def tokenize(message):
message = message.lower()
all_words = re.findall('[a-z0-9]+', message)
return set(all_words)
from collections import defaultdict
def count_words(training_set):
training set consists of pairs (message, is_spam)
counts = defaultdict(lambda: [0, 0])
for message, is_spam in training_set:
for word in tokenize(message):
counts[word][0 if is_spam else 1] += 1
return counts
def word_probabilities(counts, total_spams, total_non_spams, k=0.5):
turn the word_counts into a list of triplets
w, p(w | spam) and p(w | ~spam)
return [(w,
(spam + k) / (total_spams + 2 * k),
(non_spam + k) / (total_non_spams + 2 * k))
for w, (spam, non_spam) in counts.items()]
def spam_probability(word_probs, message):
message_words = tokenize(message)
log_prob_if_spam = log_prob_if_not_spam = 0.0
# iterate through each word in our vocabulary
for word, prob_if_spam, prob_if_not_spam in word_probs:
# if *word* appears in the message,
# add the log probability of seeing it
if word in message_words:
log_prob_if_spam += math.log(prob_if_spam)
log_prob_if_not_spam += math.log(prob_if_not_spam)
# if *word* doesn't appear in the message
# add the log probability of _not_ seeing it
# which is log(1 - probability of seeing it)
else:
log_prob_if_spam += math.log(1.0 - prob_if_spam)
log_prob_if_not_spam += math.log(1.0 - prob_if_not_spam)
prob_if_spam = math.exp(log_prob_if_spam)
prob_if_not_spam = math.exp(log_prob_if_not_spam)
return prob_if_spam / (prob_if_spam + prob_if_not_spam)
class NaiveBayesClassifier:
def __init__(self, k=0.5):
self.k = k
self.word_probs = []
def train(self, training_set):
# count spam and non-spam messages
num_spams = len([is_spam for message, is_spam in training_set if is_spam])
num_non_spams = len(training_set) - num_spams
# run training data through our "pipeline"
word_counts = count_words(training_set)
self.word_probs = word_probabilities(word_counts, num_spams, num_non_spams, self.k)
def classify(self, message):
return spam_probability(self.word_probs, message)
Explanation: Spam Filter Implementation
End of explanation
import glob, re
import random
# modify the path with wherever you've put the files
path = r'../../data/*/*'
data = []
# glob.glob returns every filename that matches the wildcarded path
for fn in glob.glob(path):
is_spam = "ham" not in fn
with open(fn,'r') as file:
try:
for line in file:
if line.startswith("Subject:"):
# remove the leading "Subject: " and keep what's left
subject = re.sub(r"^Subject: ", "", line).strip()
data.append((subject, is_spam))
except UnicodeDecodeError:
pass
random.seed(0) # just so you get the same answers as me
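# split_data is not defined in this notebook; a minimal sketch of the usual
# train/test splitting helper (an assumption, not necessarily the original):
def split_data(data, prob):
    """Randomly split data into two lists with probabilities [prob, 1 - prob]."""
    results = [], []
    for row in data:
        results[0 if random.random() < prob else 1].append(row)
    return results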
train_data, test_data = split_data(data, 0.75)
classifier = NaiveBayesClassifier()
classifier.train(train_data)
from collections import Counter
# triplets (subject, actual is_spam, predicted spam probability)
classified = [(subject, is_spam, classifier.classify(subject))
for subject, is_spam in test_data]
# assume that spam_probability > 0.5 corresponds to spam prediction
# and count the combinations of (actual is_spam, predicted is_spam)
counts = Counter((is_spam, spam_probability > 0.5)
for _, is_spam, spam_probability in classified)
counts
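# From these (actual, predicted) counts we can sketch precision and recall;
# the keys follow the Counter built above, illustrative only:
true_positives = counts[(True, True)]
false_positives = counts[(False, True)]
false_negatives = counts[(True, False)]
precision = true_positives / float(true_positives + false_positives)
recall = true_positives / float(true_positives + false_negatives)
print("precision:", precision, "recall:", recall)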
# sort by spam_probability from smallest to largest
classified.sort(key=lambda row: row[2])
# the highest predicted spam probabilities among the non-spams
spammiest_hams = list(filter(lambda row: not row[1], classified))[-5:]
# the lowest predicted spam probabilities among the actual spams
hammiest_spams = list(filter(lambda row: row[1], classified))[:5]
spammiest_hams
hammiest_spams
Explanation: Download the following files and extract them into a folder which we will reference below:
- http://spamassassin.apache.org/old/publiccorpus/20021010_easy_ham.tar.bz2
- http://spamassassin.apache.org/old/publiccorpus/20021010_hard_ham.tar.bz2
- http://spamassassin.apache.org/old/publiccorpus/20021010_spam.tar.bz2
End of explanation
<END_TASK> |
15,550 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Searching and Sorting
Searching
Process of selecting information from a collection based on specific criteria.
Linear Search
The simplest approach to the searching problem is sequential or linear search: start from the first element and go through the items one by one until the item is found.
In Python, a specific item can be found with the in operator
Step1: Okay, using the in operator gives us a great deal of simplicity, but we should know what happens behind the scenes of the in operator.
Step2: Finding a specific item in an unsorted list results in a worst-case time of O(n), since the search may have to go all the way to the end, taking n iterations.
What if we search through a sorted sequence? Since we know that the list is sorted we might avoid going all the way to the end: we can terminate the search as soon as we reach a number bigger than our target. This produces a somewhat better version than the unsorted linear search, but the worst-case complexity is still the same.
Step3: Finding the Smallest Value
It is equivalent to the min() function in Python. To accomplish this we still do a linear search, but this time we keep track of the smallest number seen so far. Complexity is O(n) again.
Step4: We can do the same thing to find the maximum number, the equivalent of Python's max() implementation
Step5: The Binary Search
Step6: Sorting
Sorting is the process of ordering or arranging a collection of items.
Bubble Sort
Step7: Selection Sort
Step8: Insertion Sort
Step9: Working with Sorted Lists
We can increase the efficiency of some algorithms by keeping the input list sorted.
Maintaining a Sorted List
To maintain a sorted list new items must be inserted into their proper position. Instead of using append() method we have to locate proper position and use insert() method. | <ASSISTANT_TASK:>
Python Code:
theArray = range(0,100)
key = 101
if key in theArray:
print("The key is in the array.")
else:
print("The key is not in the array.")
Explanation: Searching and Sorting
Searching
Process of selecting information from a collection based on specific criteria.
Linear Search
The simplest approach to the searching problem is sequential or linear search: start from the first element and go through the items one by one until the item is found.
In Python, a specific item can be found with the in operator
End of explanation
def linearSearch(theValues, target):
n = len(theValues)
for i in range(n):
# If the target is in the ith element, return True
if theValues[i] == target:
return True
# If not found, return False.
return False
Explanation: Okay, using the in operator gives us a great deal of simplicity, but we should know what happens behind the scenes of the in operator.
End of explanation
def sortedLinearSearch(theValues, item):
n = len(theValues)
for i in range(n):
# If the target is found in the ith element, return True
if theValues[i] == item:
return True
# If the target is larger than the ith element, it's not in the sequence.
elif theValues[i] > item:
return False
# The item is not in the sequence.
return False
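# Quick checks with made-up lists (illustrative only):
print(linearSearch([4, 1, 9, 7], 9))        # True
print(sortedLinearSearch([1, 4, 7, 9], 5))  # False, stops early at 7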
Explanation: Finding a specific item in an unsorted list results in a worst-case time of O(n), since the search may have to go all the way to the end, taking n iterations.
What if we search through a sorted sequence? Since we know that the list is sorted we might avoid going all the way to the end: we can terminate the search as soon as we reach a number bigger than our target. This produces a somewhat better version than the unsorted linear search, but the worst-case complexity is still the same.
End of explanation
def findSmallest(theValues):
n = len(theValues)
# Assume the first item is the smallest value
smallest = theValues[0]
# Determine if any other item in the sequence is smaller.
for i in range(1,n):
if theValues[i] < smallest:
smallest = theValues[i]
# Return the smallest found.
return smallest
Explanation: Finding the Smallest Value
It is equivalent to the min() function in Python. To accomplish this we still do a linear search, but this time we keep track of the smallest number seen so far. Complexity is O(n) again.
End of explanation
def findBiggest(theValues):
n = len(theValues)
# Assuming the first item is the biggest value
biggest = theValues[0]
# Determine if any other item in the sequence is bigger.
for i in range(1, n):
if theValues[i] > biggest:
biggest = theValues[i]
#Return the biggest found.
return biggest
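# Quick sanity check for both helpers (illustrative values):
print(findSmallest([5, 3, 8, 1, 9]))  # 1
print(findBiggest([5, 3, 8, 1, 9]))   # 9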
Explanation: We can do the same thing to find the maximum number, the equivalent of Python's max() implementation
End of explanation
def binarySearch(theValues, target):
# Start with the entire sequence of elements. 0:length
low = 0
high = len(theValues) - 1
# Repeatedly subdivide the sequence in half until the target is found.
while low <= high:
# Find the midpoint of the sequence.
mid = (high + low) // 2
# Does the midpoint contain the target?
if theValues[mid] == target:
return True
# Or does the target precede the midpoint?
elif target < theValues[mid]:
high = mid - 1 # Update the upper bound
# Or does it follow the midpoint
else:
low = mid + 1 # Update the lower bound
# If the sequence cannot be subdivided further, we're done.
return False
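# Quick check on a sorted list of even numbers (illustrative only):
print(binarySearch(list(range(0, 100, 2)), 42))  # True
print(binarySearch(list(range(0, 100, 2)), 43))  # False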
Explanation: The Binary Search
End of explanation
# Sorts a sequence in ascending order using the bubble sort algorithm.
def bubbleSort(seq):
not_sorted = True
n = len(seq)
print "At the beginning: "
print seq
while not_sorted:
# If no swap is made in the pass below, not_sorted stays False and the loop stops
not_sorted = False
for i in range(n-1):
if seq[i] <= seq[i+1]:
continue;
else:
temp = seq[i]
seq[i] = seq[i+1]
seq[i+1] = temp
not_sorted = True
print seq
return seq
import random
_list = random.sample(xrange(1, 101), 10)
_list
bubbleSort(_list)
Explanation: Sorting
Sorting is the process of ordering or arranging a collection of items.
Bubble Sort
End of explanation
# Sorts a sequence in ascending order using the selection sort algorithm
def selectionSort(theSeq):
n = len(theSeq)
for i in range(n-1):
# Assume the ith element is the smallest.
smallNdx = i
for j in range(i+1, n):
if theSeq[j] < theSeq[smallNdx]:
smallNdx = j
# Swap the ith value and smallNdx value only if the smallest value is
# not really in its proper position. Some implementations omit testing
# the condition and always swap the two values.
if smallNdx != i:
tmp = theSeq[i]
theSeq[i] = theSeq[smallNdx]
theSeq[smallNdx] = tmp
return theSeq
import random
_list = random.sample(xrange(1, 101), 10)
print _list
selectionSort(_list)
Explanation: Selection Sort
End of explanation
# Sorts a sequence in ascending order using the insertion sort algorithm.
def insertionSort(theSeq):
n = len(theSeq)
# Starts with the first item as the only sorted entry.
for i in range(1, n):
# Save the value to be positioned.
value = theSeq[i]
# Find the position where value fits in the ordered part of the list.
pos = i
while pos > 0 and value < theSeq[pos - 1]:
# Shift the items to the right during the search
theSeq[pos] = theSeq[pos - 1]
pos -= 1
theSeq[pos] = value
return theSeq
import random
_list = random.sample(xrange(1, 101), 10)
_list
insertionSort(_list)
Explanation: Insertion Sort
End of explanation
# Modified version of the binary search that returns the index within
# a sorted sequence indicating where the target should be located.
def findSortedPosition(theList, target):
low = 0
high = len(theList) - 1
while low <= high:
mid = (high + low) // 2
if theList[mid] == target:
# Index of the target
return mid
elif target < theList[mid]:
high = mid - 1
else:
low = mid + 1
# Index where the target value should be.
return low
_list = range(1,24,2)
print(_list)
print("Index is ", findSortedPosition(_list, 12))
Explanation: Working with Sorted Lists
We can increase the efficiency of some algorithms by keeping the input list sorted.
Maintaining a Sorted List
To maintain a sorted list, new items must be inserted into their proper position. Instead of using the append() method, we have to locate the proper position and use the insert() method.
End of explanation
<END_TASK> |
15,551 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Baseline of new song recommendation - MF + CNN
Step1: Matrix Factorisation
Step2: Let $S \in \mathbb{R}^{M \times D}, P \in \mathbb{R}^{N \times D}, Y \in \mathbb{R}^{M \times N}$ be the latent factors of songs and playlists, respectively.
The optimisation objective
Step3: Sanity check, RMSE
Step4: Map song features to song latent factors | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os, sys, time, gzip
import pickle as pkl
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix, issparse
import matplotlib.pyplot as plt
import seaborn as sns
from tqdm import tqdm
from tools import calc_metrics, diversity, pairwise_distance_hamming, softmax
np.seterr(all='raise')
TOPs = [5, 10, 20, 30, 50, 100, 200, 300, 500, 700, 1000]
datasets = ['aotm2011', '30music']
dix = 1
dataset_name = datasets[dix]
dataset_name
data_dir = 'data/%s/coldstart/setting1' % dataset_name
X_trndev = pkl.load(gzip.open(os.path.join(data_dir, 'X_trndev.pkl.gz'), 'rb'))
Y_trndev = pkl.load(gzip.open(os.path.join(data_dir, 'Y_trndev.pkl.gz'), 'rb'))
X_test = pkl.load(gzip.open(os.path.join(data_dir, 'X_test.pkl.gz'), 'rb'))
Y_test = pkl.load(gzip.open(os.path.join(data_dir, 'Y_test.pkl.gz'), 'rb'))
songs1 = pkl.load(gzip.open(os.path.join(data_dir, 'songs_train_dev_test_s1.pkl.gz'), 'rb'))
train_songs = songs1['train_song_set']
dev_songs = songs1['dev_song_set']
test_songs = songs1['test_song_set']
song2index_trndev = {sid: ix for ix, (sid, _) in enumerate(train_songs + dev_songs)}
song2index_test = {sid: ix for ix, (sid, _) in enumerate(test_songs)}
index2song_test = {ix: sid for ix, (sid, _) in enumerate(test_songs)}
_song2artist = pkl.load(gzip.open('data/msd/song2artist.pkl.gz', 'rb'))
song2artist = {sid: _song2artist[sid] for sid, _ in train_songs + dev_songs + test_songs if sid in _song2artist}
all_playlists = pkl.load(gzip.open(os.path.join(data_dir, 'playlists_s1.pkl.gz'), 'rb'))
artist2pop = dict()
test_songset = set(test_songs)
for pl, _ in all_playlists:
for sid in [sid for sid in pl if sid not in test_songset]:
if sid in song2artist:
aid = song2artist[sid]
try:
artist2pop[aid] += 1
except KeyError:
artist2pop[aid] = 1
song2genre = pkl.load(gzip.open('data/msd/song2genre.pkl.gz', 'rb'))
cliques_all = pkl.load(gzip.open(os.path.join(data_dir, 'cliques_trndev.pkl.gz'), 'rb'))
U = len(cliques_all)
pl2u = np.zeros(Y_test.shape[1], dtype=np.int32)
for u in range(U):
clq = cliques_all[u]
pl2u[clq] = u
song2pop = pkl.load(gzip.open(os.path.join(data_dir, 'song2pop.pkl.gz'), 'rb'))
Y_test.shape
Explanation: Baseline of new song recommendation - MF + CNN
End of explanation
X_trndev.shape
Y_trndev.shape
Explanation: Matrix Factorisation
End of explanation
M, N = Y_trndev.shape
D = 80
C = 1
n_sweeps = 200
np.random.seed(0)
S = np.random.rand(M, D)
P = np.random.rand(N, D)
# alternating least squares
for sweep in range(n_sweeps):
# fix S, optimise P
SS = np.dot(S.T, S) # D by D
np.fill_diagonal(SS, C + SS.diagonal())
P_new = np.dot(Y_trndev.transpose().dot(S), np.linalg.inv(SS).T) # N by D
pdiff = (P_new - P).ravel()
P = P_new
# fix P, optimise S
PP = np.dot(P.T, P) # D by D
np.fill_diagonal(PP, C + PP.diagonal())
S_new = np.dot(Y_trndev.dot(P), np.linalg.inv(PP).T) # M by D
sdiff = (S_new - S).ravel()
S = S_new
print('P diff: {:8.6f}, S diff: {:8.6f}'.format(np.sqrt(pdiff.dot(pdiff)), np.sqrt(sdiff.dot(sdiff))))
Explanation: Let $S \in \mathbb{R}^{M \times D}$ and $P \in \mathbb{R}^{N \times D}$ be the latent factors of songs and playlists, respectively, and let $Y \in \mathbb{R}^{M \times N}$ be the song-playlist matrix.
The optimisation objective:
$
\begin{aligned}
J = \sum_{m=1}^M \sum_{n=1}^N \left( y_{m,n} - \mathbf{s}_m^\top \mathbf{p}_n \right)^2
+ C \left( \sum_{m=1}^M \mathbf{s}_m^\top \mathbf{s}_m + \sum_{n=1}^N \mathbf{p}_n^\top \mathbf{p}_n \right)
\end{aligned}
$
Use the alternating least squares optimisation method:
Fix $S$, then let
$
\begin{aligned}
\mathbf{0}
= \frac{\partial J}{\partial \mathbf{p}_n}
= \sum_{m=1}^M 2 \left( y_{m,n} - \mathbf{s}_m^\top \mathbf{p}_n \right) (-\mathbf{s}_m) + 2 C \mathbf{p}_n
\end{aligned}
$
in other words
$
\begin{aligned}
\sum_{m=1}^M y_{m,n} \mathbf{s}_m
= \sum_{m=1}^M (\mathbf{s}_m^\top \mathbf{p}_n^*) \mathbf{s}_m + C \mathbf{p}_n^*
= \sum_{m=1}^M \mathbf{s}_m \mathbf{s}_m^\top \mathbf{p}_n^* + C \mathbf{p}_n^*
= \left( \sum_{m=1}^M \mathbf{s}_m \mathbf{s}_m^\top + C \mathbf{I} \right) \mathbf{p}_n^*
\end{aligned}
$
where $\mathbf{I} \in \mathbb{R}^{D \times D}$ is the identity matrix (every diagonal element is 1).
So
$
\begin{aligned}
\mathbf{p}_n^* = \left( \sum_{m=1}^M \mathbf{s}_m \mathbf{s}_m^\top + C \mathbf{I} \right)^{-1} \sum_{m=1}^M y_{m,n} \mathbf{s}_m
\end{aligned}
$
or equivalently
$
\begin{aligned}
\mathbf{p}_n^*
= \left( S^\top S + C \mathbf{I} \right)^{-1} \left( \mathbf{y}_{:n}^\top S \right)^\top
= \left( S^\top S + C \mathbf{I} \right)^{-1} S^\top \mathbf{y}_{:n}
\end{aligned}
$
The matrix form is
$
\begin{aligned}
P'
= \left( \left( S^\top S + C \mathbf{I} \right)^{-1} S^\top Y \right)^\top
= Y^\top S \left( \left( S^\top S + C \mathbf{I} \right)^{-1} \right)^\top
\end{aligned}
$
Fix $P$, then let
$
\begin{aligned}
\mathbf{0}
= \frac{\partial J}{\partial \mathbf{s}_m}
= \sum_{n=1}^N 2 \left( y_{m,n} - \mathbf{s}_m^\top \mathbf{p}_n \right) (-\mathbf{p}_n) + 2 C \mathbf{s}_m
\end{aligned}
$
by symmetry, we have
$
\begin{aligned}
\mathbf{s}_m^* = \left( \sum_{n=1}^N \mathbf{p}_n \mathbf{p}_n^\top + C \mathbf{I} \right)^{-1} \sum_{n=1}^N y_{m,n} \mathbf{p}_n
\end{aligned}
$
The matrix form is
$
\begin{aligned}
S'
= \left( \left( P^\top P + C \mathbf{I} \right)^{-1} (Y P)^\top \right)^\top
= Y P \left( \left( P^\top P + C \mathbf{I} \right)^{-1} \right)^\top
\end{aligned}
$
End of explanation
Y_trndev_coo = Y_trndev.tocoo()
loss = 0.
for row, col in tqdm(zip(Y_trndev_coo.row, Y_trndev_coo.col)):
diff = S[row, :].dot(P[col, :]) - 1
loss += diff * diff
loss /= Y_trndev_coo.nnz
print('RMSE:', np.sqrt(loss))
Explanation: Sanity check, RMSE
End of explanation
rps = []
hitrates = {top: [] for top in TOPs}
aucs = []
spreads = []
novelties = {top: dict() for top in TOPs}
artist_diversities = {top: [] for top in TOPs}
genre_diversities = {top: [] for top in TOPs}
np.random.seed(0)
npos = Y_test.sum(axis=0).A.reshape(-1)
assert Y_test.shape[0] == len(test_songs)
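# The feature-to-factor mapping is left as a placeholder inside the loop below.
# As a rough stand-in (an assumption for illustration only, not the CNN used in
# the actual baseline), one could fit a ridge-regression map from song features
# to the MF factors learned above. This assumes the rows of X_trndev align with
# the rows of S, and the rows of X_test align with index2song_test.
X_trn = X_trndev.toarray() if issparse(X_trndev) else np.asarray(X_trndev)
X_tst = X_test.toarray() if issparse(X_test) else np.asarray(X_test)
ridge = 1e-3
W = np.linalg.solve(X_trn.T.dot(X_trn) + ridge * np.eye(X_trn.shape[1]), X_trn.T.dot(S))
S_test_hat = X_tst.dot(W)  # estimated latent factors for the test songs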
for j in range(Y_test.shape[1]):
if (j+1) % 100 == 0:
sys.stdout.write('\r%d / %d' % (j+1, Y_test.shape[1]))
sys.stdout.flush()
if npos[j] < 1:
continue
y_true = Y_test[:, j].A.reshape(-1)
y_pred = np.zeros(len(test_songs))
for ix in range(len(test_songs)):
sid = index2song_test[ix]
# map song feature to song latent factor
# score (song, playlist) pair by the dot product of their latent factors
rp, hr_dict, auc = calc_metrics(y_true, y_pred, tops=TOPs)
rps.append(rp)
for top in TOPs:
hitrates[top].append(hr_dict[top])
aucs.append(auc)
# spread
y_pred_prob = softmax(y_pred)
spreads.append(-np.dot(y_pred_prob, np.log(y_pred_prob)))
# novelty
sortix = np.argsort(-y_pred)
u = pl2u[j]
for top in TOPs:
nov = np.mean([-np.log2(song2pop[index2song_test[ix]]) for ix in sortix[:top]])
try:
novelties[top][u].append(nov)
except KeyError:
novelties[top][u] = [nov]
# artist/genre diversity
for top in TOPs:
artist_vec = np.array([song2artist[index2song_test[ix]] for ix in sortix[:top]])
genre_vec = np.array([song2genre[index2song_test[ix]] if index2song_test[ix] in song2genre \
else str(np.random.rand()) for ix in sortix[:top]])
artist_diversities[top].append( diversity(artist_vec) )
genre_diversities[top].append( diversity(genre_vec) )
print('\n%d / %d' % (len(rps), Y_test.shape[1]))
perf = {dataset_name: {'Test': {'R-Precision': np.mean(rps),
'Hit-Rate': {top: np.mean(hitrates[top]) for top in TOPs},
'AUC': np.mean(aucs),
'Spread': np.mean(spreads),
'Novelty': {t: np.mean([np.mean(novelties[t][u]) for u in novelties[t]])
for t in TOPs},
'Artist-Diversity': {top: np.mean(artist_diversities[top]) for top in TOPs},
'Genre-Diversity': {top: np.mean(genre_diversities[top]) for top in TOPs}},
'Test_All': {'R-Precision': rps,
'Hit-Rate': {top: hitrates[top] for top in TOPs},
'AUC': aucs,
'Spread': spreads,
'Novelty': novelties,
'Artist-Diversity': artist_diversities,
'Genre-Diversity': genre_diversities}}}
perf[dataset_name]['Test']
fperf = os.path.join(data_dir, 'perf-mfcnn.pkl')
print(fperf)
pkl.dump(perf, open(fperf, 'wb'))
pkl.load(open(fperf, 'rb'))[dataset_name]['Test']
Explanation: Map song features to song latent factors
End of explanation
<END_TASK> |
15,552 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Ansible is
configuration manager
simple
extensible via modules
written in python
broad community
many external tools
playbook repository
used by openstack, openshift & tonns of project
# Configuration Manager
Explain infrastructure as code
# Advantages
No agents
Step1: ansible.cfg
It's the main configuration file. While all ansible are in yaml, ansible.cfg is in .ini format. Eg.
```
[stanza]
key = value
```
Let's check the content of a sample ansible.cfg
Step2: Inventories
a simple inventory file contains a static list of nodes to contact.
Generally, an inventory can be static or dynamic, as we will see in the following lessons.
Step3: Environment variables
N.B. ansible environment variables are not related with process environment
You defined your host groups in the environment, eg
Step4: Exercise
Dump env_name tied to the staging inventory.
which is the expected output?
what ties the "staging" inventory file to group_vars/staging?
Step5: Exercise | <ASSISTANT_TASK:>
Python Code:
cd /notebooks/exercise-00/
# Let's check our ansible directory
!tree
Explanation: Ansible is
configuration manager
simple
extensible via modules
written in python
broad community
many external tools
playbook repository
used by openstack, openshift & tonns of project
# Configuration Manager
Explain infrastructure as code
# Advantages
No agents: ansible copies python and all deployment scripts/modules to the target machine via ssh and executes them remotely. Some modules though require that target hosts contain specific python libraries.
Jobs are executed in parallel, but you can configure for serialization using different strategies for speed up, rollout or other purposes: (link)
Authentication can be passwordless (ssh/pki, kerberos) or with password.
Automation jobs (Playbooks) are described via YAML - a very concise and simple language. You can validate and lint files with yamllint and ansible-lint.
```
this_is:
a: yaml
file:
- with dict
- a list
```
Passwords are supported, but SSH keys with ssh-agent are one of the best ways to use Ansible. Though if you want to use Kerberos, that's good too.
You have a lot of options! Root logins are not required, you can login as any user, and then su or sudo to any user.
End of explanation
!cat ansible.cfg
Explanation: ansible.cfg
It's the main configuration file. While all ansible are in yaml, ansible.cfg is in .ini format. Eg.
```
[stanza]
key = value
```
Let's check the content of a sample ansible.cfg:
there's a lot of stuff in there
there will be more ;)
for now let's check only the uncommented ones.
End of explanation
!cat inventory
# You can have many inventory files
!cat staging
Explanation: Inventories
a simple inventory file contains a static list of nodes to contact.
Generally, an inventory can be static or dynamic, as we will see in the following lessons.
End of explanation
# group_vars - a directory containing environment files for various host groups.
!tree group_vars
# I set env_name in two different files
!grep env_name -r group_vars/
!cat group_vars/staging
# The debug module (-m debug) shows variables' content or dumps messages.
# by default uses the inventory set into ansible.cfg, thus writing
!ansible all -m debug -a 'var=env_name'
Explanation: Environment variables
N.B. ansible environment variables are not related with process environment
You defined your host groups in the environment, eg:
course
ansible
staging
Ansible defines two default groups: all and ungrouped.
You can assign variables to all hosts using the all group.
End of explanation
# Solution
!ansible all -i staging -m debug -a 'var=env_name'
# Use this cell for the exercise
Explanation: Exercise
Dump env_name tied to the staging inventory.
which is the expected output?
what ties the "staging" inventory file to group_vars/staging?
End of explanation
#
# Read the inventory and try to predict the output of
#
!ansible course -i staging -m debug -a 'var=proxy_env'
Explanation: Exercise
End of explanation
<END_TASK> |
15,553 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Assignment 4b-BA
Step3: Exercise 3
In this exercise, we are going to run VADER on our own tweets and evaluate it against the sentiment labels that we manually annotated for each tweet. We are going to make use of the following two functions
Step4: Exercise 3a
You will now run VADER on the tweets you've collected. You will process each tweet using the code we have shown you above. The goal is add information about each tweet (i.e. in every iteration of the loop) to each of the three lists listed below. We can use these lists to compare the Vader output to the correct labels you provided.
tweets | <ASSISTANT_TASK:>
Python Code:
import json
my_tweets = json.load(open('my_tweets.json'))
for id_, tweet_info in my_tweets.items():
print(id_, tweet_info)
break
Explanation: Assignment 4b-BA: Sentiment analysis using VADER
Due: Friday October 15, 2021, before 14:30
Please note that this is Assignment 4 for the Bachelor version of the Python course: Introduction to Python for Humanities and Social Sciences (L_AABAALG075)
Please submit your assignment (notebooks of parts 4a + 4b + JSON file) as a single .zip file.
Please name your zip file with the following naming convention: ASSIGNMENT_4_FIRSTNAME_LASTNAME.zip
Please submit your assignment on Canvas: Assignments --> Assignment 4
If you have questions about this chapter, please contact us at cltl.python.course@gmail.com. Questions and answers will be collected in this Q&A document, so please check if your question has already been answered.
Credits
The notebooks in this block have been originally created by Marten Postma and Isa Maks. Adaptations were made by Filip Ilievski.
Part I: VADER assignments
Preparation (nothing to submit):
To be able to answer the VADER questions you need to know how the tool works.
* Read more about the VADER tool in this blog.
* VADER provides 4 scores (positive, negative, neutral, compound). Be sure to understand what they mean and how they are calculated.
* VADER uses rules to handle linguistic phenomena such as negation and intensification. Be sure to understand which rules are used, how they work, and why they are important.
* VADER makes use of a sentiment lexicon. Have a look at the lexicon. Be sure to understand which information can be found there (lemma?, wordform?, part-of-speech?, polarity value?, word meaning?) What do all scores mean? You can inspect the README of the VADER system for more information.
Exercise 1
Consider the following sentences and their output as given by VADER. Analyze sentences 1 to 7, and explain the outcome for each sentence. Take into account both the rules applied by VADER and the lexicon that is used. You will find that some of the results are reasonable, but others are not. Explain what is going wrong or not when correct and incorrect results are produced.
```
INPUT SENTENCE 1 I love apples
VADER OUTPUT {'neg': 0.0, 'neu': 0.192, 'pos': 0.808, 'compound': 0.6369}
INPUT SENTENCE 2 I don't love apples
VADER OUTPUT {'neg': 0.627, 'neu': 0.373, 'pos': 0.0, 'compound': -0.5216}
INPUT SENTENCE 3 I love apples :-)
VADER OUTPUT {'neg': 0.0, 'neu': 0.133, 'pos': 0.867, 'compound': 0.7579}
INPUT SENTENCE 4 These houses are ruins
VADER OUTPUT {'neg': 0.492, 'neu': 0.508, 'pos': 0.0, 'compound': -0.4404}
INPUT SENTENCE 5 These houses are certainly not considered ruins
VADER OUTPUT {'neg': 0.0, 'neu': 0.51, 'pos': 0.49, 'compound': 0.5867}
INPUT SENTENCE 6 He lies in the chair in the garden
VADER OUTPUT {'neg': 0.286, 'neu': 0.714, 'pos': 0.0, 'compound': -0.4215}
INPUT SENTENCE 7 This house is like any house
VADER OUTPUT {'neg': 0.0, 'neu': 0.667, 'pos': 0.333, 'compound': 0.3612}
```
Exercise 2: Collecting 25 tweets for evaluation
Collect 25 tweets. Try to find tweets that are interesting for sentiment analysis, e.g., very positive, neutral, and negative tweets. These could be your own tweets (typed in) or collected from the Twitter stream. You can simply copy-paste tweets into the JSON file. Do not attempt to crawl them!
We will store the tweets in the file my_tweets.json (use a text editor to edit).
For each tweet, you should insert:
sentiment analysis label: negative | neutral | positive (this you determine yourself, this is not done by a computer)
the text of the tweet
the Tweet-URL
from:
"1": {
"sentiment_label": "",
"text_of_tweet": "",
"tweet_url": "",
to:
"1": {
"sentiment_label": "positive",
"text_of_tweet": "All across America people chose to get involved, get engaged and stand up. Each of us can make a difference, and all of us ought to try. So go keep changing the world in 2018.",
"tweet_url" : "https://twitter.com/BarackObama/status/946775615893655552",
},
You can load your tweets with the sentiment labels you provided in the following way:
End of explanation
def run_vader(nlp,
textual_unit,
lemmatize=False,
parts_of_speech_to_consider=set(),
verbose=0):
Run VADER on a sentence from spacy
:param str textual unit: a textual unit, e.g., sentence, sentences (one string)
(by looping over doc.sents)
:param bool lemmatize: If True, provide lemmas to VADER instead of words
:param set parts_of_speech_to_consider:
-empty set -> all parts of speech are provided
-non-empty set: only these parts of speech are considered
:param int verbose: if set to 1, information is printed
about input and output
:rtype: dict
:return: vader output dict
doc = nlp(textual_unit)
input_to_vader = []
for sent in doc.sents:
for token in sent:
if verbose >= 2:
print(token, token.pos_)
to_add = token.text
if lemmatize:
to_add = token.lemma_
if to_add == '-PRON-':
to_add = token.text
if parts_of_speech_to_consider:
if token.pos_ in parts_of_speech_to_consider:
input_to_vader.append(to_add)
else:
input_to_vader.append(to_add)
scores = vader_model.polarity_scores(' '.join(input_to_vader))
if verbose >= 1:
print()
print('INPUT SENTENCE', sent)
print('INPUT TO VADER', input_to_vader)
print('VADER OUTPUT', scores)
return scores
def vader_output_to_label(vader_output):
map vader output e.g.,
{'neg': 0.0, 'neu': 0.0, 'pos': 1.0, 'compound': 0.4215}
to one of the following values:
a) positive float -> 'positive'
b) 0.0 -> 'neutral'
c) negative float -> 'negative'
:param dict vader_output: output dict from vader
:rtype: str
:return: 'negative' | 'neutral' | 'positive'
compound = vader_output['compound']
if compound < 0:
return 'negative'
elif compound == 0.0:
return 'neutral'
elif compound > 0.0:
return 'positive'
assert vader_output_to_label( {'neg': 0.0, 'neu': 0.0, 'pos': 1.0, 'compound': 0.0}) == 'neutral'
assert vader_output_to_label( {'neg': 0.0, 'neu': 0.0, 'pos': 1.0, 'compound': 0.01}) == 'positive'
assert vader_output_to_label( {'neg': 0.0, 'neu': 0.0, 'pos': 1.0, 'compound': -0.01}) == 'negative'
import spacy
! python -m spacy download en_core_web_sm
nlp = spacy.load('en_core_web_sm')
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
vader_model = SentimentIntensityAnalyzer()
my_annotation = 'positive' # what you annotate yourself
sentence = "I like Python"
vader_output = run_vader(nlp, sentence)
vader_label = vader_output_to_label(vader_output)
accurate = my_annotation == vader_label
print()
print('SENTENCE', sentence) # the sentence
print('VADER OUTPUT', vader_output) # the VADER output
print('VADER LABEL', vader_label) # the VADER output mapped to a label, in this case 'positive'
print('MY ANNOTATION', my_annotation) # my annotation
print('ACCURACY', accurate) # did VADER predict the same label as my manual annotation?
Explanation: Exercise 3
In this exercise, we are going to run VADER on our own tweets and evaluate it against the sentiment labels that we manually annotated for each tweet. We are going to make use of the following two functions:
End of explanation
import json
my_tweets = json.load(open('my_tweets.json'))
tweets = []
all_vader_output = []
manual_annotation = []
for id_, tweet_info in my_tweets.items():
the_tweet = tweet_info['text_of_tweet']
vader_output = ''# run vader
vader_label = ''# convert vader output to category
tweets.append(the_tweet)
all_vader_output.append(vader_label)
manual_annotation.append(tweet_info['sentiment_label'])
Explanation: Exercise 3a
You will now run VADER on the tweets you've collected. You will process each tweet using the code we have shown you above. The goal is add information about each tweet (i.e. in every iteration of the loop) to each of the three lists listed below. We can use these lists to compare the Vader output to the correct labels you provided.
tweets: append your tweet
all_vader_output: append the vader_label: negative | neutral | positive
manual_annotation: append your annotation: negative | neutral | positive
You can use the code snippet below as a starting point.
End of explanation
<END_TASK> |
15,554 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Hello, I'm IPython-Dashboar, which is inspired by one of the greatest packages, IPython.
I'm a stand alone, light-weight web server for building, sharing graphs created in ipython. Aiming at building for data science, data analysis guys. Building an interactive visualization, collaborated dashboard, and real-time streaming graph.
If you are doing data exploring in IPython notebook, and draw some graph or select some metrics from database, which will be used to present to others, but you don't like to share the code or the complicated logic [ for private or page cleaning ], I think I can help you to do the present part job.
In this tutorial, I'll show you how to do data exploring in ipython notebook and how to share it to others without ipython.
1. Load packages [ One can't make bricks without straw ]
I recommend import package in a readable and reasonable order, which will be useful as the project gets larger.
Step2: 2. Load data
I've prepared a test data on this repo.
Don't forget to take a look at the data before exploring it, that's a good habit.
Step3: 3. Traditionaly way of plotting [ I really like ipython indeed, but ... ]
matplotlib + seaborn + mpld3
Step4: 4. Not enough even arm matplotlib with seaborn, mpld3
If you just wanna share this graph above to others ?
if you wanna hidden/show a specific field, eg, just show one years' population ?
If you wanna know the exact number of a bar when the hovering on the bar ?
if ...
if ...
5. How IPython-Dashboard make it simple
No need to code complicated graph settings
Flexible to define
Able to share one graph, multi-graph in a dashboard
More ? coming soon ...
5.1 Firstly, send you data to the IPython-Dashboard | <ASSISTANT_TASK:>
Python Code:
# built-in package
import os
import sys
import json
import time
import datetime as dt
# third-parth package
import dashboard as dash
import pandas as pd
import matplotlib as plt
import seaborn
import mpld3
# package configre
pd.options.display.max_columns = 100
pd.options.display.max_rows = 500
Explanation: Hello, I'm IPython-Dashboar, which is inspired by one of the greatest packages, IPython.
I'm a stand alone, light-weight web server for building, sharing graphs created in ipython. Aiming at building for data science, data analysis guys. Building an interactive visualization, collaborated dashboard, and real-time streaming graph.
If you are doing data exploring in IPython notebook, and draw some graph or select some metrics from database, which will be used to present to others, but you don't like to share the code or the complicated logic [ for private or page cleaning ], I think I can help you to do the present part job.
In this tutorial, I'll show you how to do data exploring in ipython notebook and how to share it to others without ipython.
1. Load packages [ One can't make bricks without straw ]
I recommend import package in a readable and reasonable order, which will be useful as the project gets larger.
End of explanation
url = https://github.com/litaotao/IPython-Dashboard/raw/v-0.1.2-visualiza-table/docs/people_number_by_province_lateset_10_years.csv
data = pd.read_csv(url)
data.head(3)
Explanation: 2. Load data
I've prepared a test data on this repo.
Don't forget to take a look at the data before exploring it, that's a good habit.
End of explanation
data.plot(x="地区", y=["2014年", "2013年"], kind="bar", figsize=(12, 5))
mpld3.display()
Explanation: 3. Traditionaly way of plotting [ I really like ipython indeed, but ... ]
matplotlib + seaborn + mpld3 : I really like it very much, but ...
raw html : ok, it a way indeed, but ...
End of explanation
dash.client.sender(data, key='chinese_population', force=True)
Explanation: 4. Not enough even arm matplotlib with seaborn, mpld3
If you just wanna share this graph above to others ?
if you wanna hidden/show a specific field, eg, just show one years' population ?
If you wanna know the exact number of a bar when the hovering on the bar ?
if ...
if ...
5. How IPython-Dashboard make it simple
No need to code complicated graph settings
Flexible to define
Able to share one graph, multi-graph in a dashboard
More ? coming soon ...
5.1 Firstly, send you data to the IPython-Dashboard
End of explanation
<END_TASK> |
15,555 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Creating Custom Groupings
We wanted the ability to compare school districts based on similar school districts as well as by statewide. This notebook creates the groupings. The output files for these groupings were used in the Tableau visualization.
The columns for these groupings were chosen based on the New York State Education Department's definition of similar schools. Link
Step1: We need to use the cleaned file instead of the filtered file because we need to get some of the columns that we dropped from the filtered file.
Step2: Apply the filtering logic that we used in the main notebook.
Step3: Mean, min, max graduation rates by state for export to the visualization.
Step4: Calculate the ratio of free and reduced lunch students to total students.
Step5: Determine the top quartile and bottom quartile boundaries for Large and Small school size.
Step6: Determine the top quartile and bottom quartile boundaries for free and reduced price lunch ratio.
Step7: Check even distribution of grouped similar schools row counts. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import scipy as sp
import matplotlib as mpl
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import pandas as pd
pd.set_option('display.width', 500)
pd.set_option('display.max_columns', 100)
pd.set_option('display.notebook_repr_html', True)
import seaborn as sns
sns.set_style("whitegrid")
sns.set_context("poster")
Explanation: Creating Custom Groupings
We wanted the ability to compare school districts based on similar school districts as well as by statewide. This notebook creates the groupings. The output files for these groupings were used in the Tableau visualization.
The columns for these groupings were chosen based on the New York State Education Department's definition of similar schools. Link: http://www.p12.nysed.gov/repcrd2004/information/similar-schools/guide.shtml
Start with the standard imports we have used for every notebook in this class.
End of explanation
districts = pd.read_csv("data/finaldata/cleaned.csv")
Explanation: We need to use the cleaned file instead of the filtered file because we need to get some of the columns that we dropped from the filtered file.
End of explanation
import math
print 'Total number of unique school districts: ' + str(len(np.unique(districts['Agency ID - NCES Assigned [District] Latest available year'])))
districts = districts[districts['Highest Grade Offered [District] 2009-10']=='12th Grade']
print 'Total number of school districts that have high schools: ' + str(len(districts))
districts = districts[districts['SURVYEAR']!='–']
print 'Total number of school districts that have a row on raw graduation data: ' + str(len(districts))
districts = districts[districts['AFGR']>=0]
print 'Total number of school districts with valid graduation data: ' + str(len(districts))
districts=districts.replace([np.inf, -np.inf], np.nan)
Explanation: Apply the filtering logic that we used in the main notebook.
End of explanation
#CITATION: http://stackoverflow.com/questions/17578115/pass-percentiles-to-pandas-agg-function
def percentile(n):
def percentile_(x):
return np.percentile(x, n)
percentile_.__name__ = 'percentile_%s' % n
return percentile_
stateresults = districts.groupby('State Name [District] Latest available year')['AFGR'].aggregate([np.min, percentile(25), np.mean, percentile(75), np.max])
stateresults.to_csv("data/finaldata/stateresults.csv")
stateresults.head(100)
districts['Urban-centric Locale [District] 2009-10'].value_counts()
Explanation: Mean, min, max graduation rates by state for export to the visualization.
End of explanation
districts['r_lunch_free_reduced'] = districts['Total Free and Reduced Lunch Students [Public School] 2009-10']/districts['Total Students [Public School] 2009-10']
Explanation: Calculate the ratio of free and reduced lunch students to total students.
End of explanation
topquartile = districts.groupby('Urban-centric Locale [District] 2009-10')[['Total Students [Public School] 2009-10']]
topq = topquartile.quantile(q=.75)
topq.head(20)
bottomquartile = districts.groupby('Urban-centric Locale [District] 2009-10')[['Total Students [Public School] 2009-10']]
bottomq = bottomquartile.quantile(q=.25)
bottomq.head(20)
groups = []
for name, group in topquartile:
groups.append(name)
districts['Student_Size'] = 'Medium'
for index in range(len(groups)):
districts['Student_Size'] = np.where((districts['Urban-centric Locale [District] 2009-10']==groups[index]) & (districts['Total Students [Public School] 2009-10']>topq.iloc[index]['Total Students [Public School] 2009-10']), 'Large', districts['Student_Size'])
districts['Student_Size'] = np.where((districts['Urban-centric Locale [District] 2009-10']==groups[index]) & (districts['Total Students [Public School] 2009-10']<=bottomq.iloc[index]['Total Students [Public School] 2009-10']), 'Small', districts['Student_Size'])
print districts['Student_Size'].value_counts()
Explanation: Determine the top quartile and bottom quartile boundaries for Large and Small school size.
End of explanation
topquartile = districts.groupby(['Urban-centric Locale [District] 2009-10', 'Student_Size'])[['r_lunch_free_reduced']]
topq = topquartile.quantile(q=.75)
topq.head(20)
bottomquartile = districts.groupby(['Urban-centric Locale [District] 2009-10', 'Student_Size'])[['r_lunch_free_reduced']]
bottomq = bottomquartile.quantile(q=.25)
bottomq.head(20)
groups = []
for name, group in topquartile:
groups.append(name)
districts['Lunch_Status'] = 'Average'
for index in range(len(groups)):
districts['Lunch_Status'] = np.where((districts['Student_Size']==groups[index][1]) & (districts['Urban-centric Locale [District] 2009-10']==groups[index][0]) & (districts['r_lunch_free_reduced']>topq.iloc[index]['r_lunch_free_reduced']), 'High', districts['Lunch_Status'])
districts['Lunch_Status'] = np.where((districts['Student_Size']==groups[index][1]) & (districts['Urban-centric Locale [District] 2009-10']==groups[index][0]) & (districts['r_lunch_free_reduced']<=bottomq.iloc[index]['r_lunch_free_reduced']), 'Low', districts['Lunch_Status'])
print districts['Lunch_Status'].value_counts()
Explanation: Determine the top quartile and bottom quartile boundaries for free and reduced price lunch ratio.
End of explanation
districts.groupby(['Urban-centric Locale [District] 2009-10', 'Student_Size', 'Lunch_Status'])['AFGR'].count()
similardistrictresults = districts.groupby(['Urban-centric Locale [District] 2009-10', 'Student_Size', 'Lunch_Status'])['AFGR'].aggregate([np.min, percentile(25), np.mean, percentile(75), np.max])
similardistrictresults.to_csv("data/finaldata/similardistrictresults.csv")
similardistrictresults.head(100)
districts.head()
highfull = districts[['Agency ID - NCES Assigned [District] Latest available year', 'Latitude [District] 2009-10', 'Longitude [District] 2009-10', 'Agency Name', 'State Name [District] Latest available year', 'Urban-centric Locale [District] 2009-10', 'Student_Size', 'Lunch_Status', 'AFGR', 'i_agency_type_regional_education_services', 'i_lgo_PK', 'i_lgo_K', 'Total Number Operational Schools [Public School] 2009-10', 'i_fin_sdlc_sec', 'r_frev_ao', 'r_frev_dis', 'Total Expenditures (TOTALEXP) per Pupil (V33) [District Finance] 2009-10', 'r_stud_reg_12_W_M', 'r_stud_reg_12_W_F']]
lowfull = districts[['Agency ID - NCES Assigned [District] Latest available year', 'i_fin_sdlc_elem_sec', 'i_fin_sdlc_voc', 'r_stud_reg_12_W_F', 'i_lgo_PK', 'r_stud_reg_12_W_M', 'i_lgo_K', 'i_agency_type_local_school_district', 'r_frev_ao', 'r_stud_re_B', 'r_stud_912']]
highnoge = districts[['Agency ID - NCES Assigned [District] Latest available year', 'i_lgo_K', 'i_lgo_PK', 'i_fin_sdlc_sec', 'r_st_TS', 'Total Number Operational Schools [Public School] 2009-10', 'Total Expenditures (TOTALEXP) per Pupil (V33) [District Finance] 2009-10', 'r_frev_ao', 'r_frev_dis', 'r_stud_912', 'i_fin_sdlc_voc']]
lownoge = districts[['Agency ID - NCES Assigned [District] Latest available year', 'i_lgo_K', 'i_lgo_PK', 'r_st_TS', 'i_fin_sdlc_voc', 'Total Expenditures (TOTALEXP) per Pupil (V33) [District Finance] 2009-10', 'r_stud_912', 'r_frev_ao', 'i_ma_metropolitan', 'i_ucl_city_small', 'i_ma_micropolitan']]
finalvisdistrict = highfull.merge(lowfull, 'left', 'Agency ID - NCES Assigned [District] Latest available year', suffixes=('', '_LOWFULL'))
finalvisdistrict = finalvisdistrict.merge(highnoge, 'left', 'Agency ID - NCES Assigned [District] Latest available year', suffixes=('', '_HIGHNOGE'))
finalvisdistrict = finalvisdistrict.merge(lownoge, 'left', 'Agency ID - NCES Assigned [District] Latest available year', suffixes=('', '_LOWNOGE'))
finalvisdistrict.head()
finalvisdistrict.to_csv("data/finaldata/tableaudistricts.csv", index=False)
Explanation: Check even distribution of grouped similar schools row counts.
End of explanation
<END_TASK> |
15,556 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Fire up graphlab create
Step1: Load some house value vs. crime rate data
Dataset is from Philadelphia, PA and includes average house sales price in a number of neighborhoods. The attributes of each neighborhood we have include the crime rate ('CrimeRate'), miles from Center City ('MilesPhila'), town name ('Name'), and county name ('County').
Step2: Exploring the data
The house price in a town is correlated with the crime rate of that town. Low crime towns tend to be associated with higher house prices and vice versa.
Step3: Fit the regression model using crime as the feature
Step4: Let's see what our fit looks like
Matplotlib is a Python plotting library that is also useful for plotting. You can install it with
Step5: Above
Step6: Refit our simple regression model on this modified dataset
Step7: Look at the fit
Step8: Compare coefficients for full-data fit versus no-Center-City fit
Visually, the fit seems different, but let's quantify this by examining the estimated coefficients of our original fit and that of the modified dataset with Center City removed.
Step9: Above
Step10: Do the coefficients change much? | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import graphlab
Explanation: Fire up graphlab create
End of explanation
sales = graphlab.SFrame.read_csv('Philadelphia_Crime_Rate_noNA.csv/')
sales
Explanation: Load some house value vs. crime rate data
Dataset is from Philadelphia, PA and includes average house sales price in a number of neighborhoods. The attributes of each neighborhood we have include the crime rate ('CrimeRate'), miles from Center City ('MilesPhila'), town name ('Name'), and county name ('County').
End of explanation
graphlab.canvas.set_target('ipynb')
sales.show(view="Scatter Plot", x="CrimeRate", y="HousePrice")
Explanation: Exploring the data
The house price in a town is correlated with the crime rate of that town. Low crime towns tend to be associated with higher house prices and vice versa.
End of explanation
crime_model = graphlab.linear_regression.create(sales, target='HousePrice', features=['CrimeRate'],validation_set=None,verbose=False)
Explanation: Fit the regression model using crime as the feature
End of explanation
import matplotlib.pyplot as plt
plt.plot(sales['CrimeRate'],sales['HousePrice'],'.',
sales['CrimeRate'],crime_model.predict(sales),'-')
Explanation: Let's see what our fit looks like
Matplotlib is a Python plotting library that is also useful for plotting. You can install it with:
'pip install matplotlib'
End of explanation
sales_noCC = sales[sales['MilesPhila'] != 0.0]
sales_noCC.show(view="Scatter Plot", x="CrimeRate", y="HousePrice")
Explanation: Above: blue dots are original data, green line is the fit from the simple regression.
Remove Center City and redo the analysis
Center City is the one observation with an extremely high crime rate, yet house prices are not very low. This point does not follow the trend of the rest of the data very well. A question is how much including Center City is influencing our fit on the other datapoints. Let's remove this datapoint and see what happens.
End of explanation
crime_model_noCC = graphlab.linear_regression.create(sales_noCC, target='HousePrice', features=['CrimeRate'],validation_set=None, verbose=False)
Explanation: Refit our simple regression model on this modified dataset:
End of explanation
plt.plot(sales_noCC['CrimeRate'],sales_noCC['HousePrice'],'.',
sales_noCC['CrimeRate'],crime_model.predict(sales_noCC),'-')
Explanation: Look at the fit:
End of explanation
crime_model.get('coefficients')
crime_model_noCC.get('coefficients')
Explanation: Compare coefficients for full-data fit versus no-Center-City fit
Visually, the fit seems different, but let's quantify this by examining the estimated coefficients of our original fit and that of the modified dataset with Center City removed.
End of explanation
sales_nohighend = sales_noCC[sales_noCC['HousePrice'] < 350000]
crime_model_nohighend = graphlab.linear_regression.create(sales_nohighend, target='HousePrice', features=['CrimeRate'],validation_set=None, verbose=False)
Explanation: Above: We see that for the "no Center City" version, per unit increase in crime, the predicted decrease in house prices is 2,287. In contrast, for the original dataset, the drop is only 576 per unit increase in crime. This is significantly different!
High leverage points:
Center City is said to be a "high leverage" point because it is at an extreme x value where there are not other observations. As a result, recalling the closed-form solution for simple regression, this point has the potential to dramatically change the least squares line since the center of x mass is heavily influenced by this one point and the least squares line will try to fit close to that outlying (in x) point. If a high leverage point follows the trend of the other data, this might not have much effect. On the other hand, if this point somehow differs, it can be strongly influential in the resulting fit.
Influential observations:
An influential observation is one where the removal of the point significantly changes the fit. As discussed above, high leverage points are good candidates for being influential observations, but need not be. Other observations that are not leverage points can also be influential observations (e.g., strongly outlying in y even if x is a typical value).
Remove high-value outlier neighborhoods and redo analysis
Based on the discussion above, a question is whether the outlying high-value towns are strongly influencing the fit. Let's remove them and see what happens.
End of explanation
crime_model_noCC.get('coefficients')
crime_model_nohighend.get('coefficients')
Explanation: Do the coefficients change much?
End of explanation
<END_TASK> |
15,557 | <SYSTEM_TASK:>
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
<END_TASK>
<USER_TASK:>
Problem:
Given a distance matrix, with similarity between various professors : | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import scipy.cluster
data_matrix = load_data()
Z = scipy.cluster.hierarchy.linkage(np.array(data_matrix), 'ward')
cluster_labels = scipy.cluster.hierarchy.cut_tree(Z, n_clusters=2).reshape(-1, ).tolist()
<END_TASK> |
15,558 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Basic FISSA usage
This notebook contains a step-by-step example of how to use the function-based high-level interface to the FISSA toolbox, fissa.run_fissa.
For more details about the methodology behind FISSA, please see our paper
Step1: We also need to import some plotting dependencies which we'll make use in this notebook to display the results.
Step2: Running FISSA
With the function-based interface to FISSA, everything is handled in a single function call to fissa.run_fissa. The function takes as its input is the raw signals, and returns the decontaminated signals.
The mandatory inputs to fissa.run_fissa are
Step3: The function-based interface is very straight forward, but note that you can only access the result which is returned by the function.
If you need to access the raw traces, ROI masks, or demixing matrix, you need to use the more flexible object-oriented (class based) interface using fissa.Experiment instead.
An example of this is given in our object-oriented example usage notebook.
Working with results
The output of fissa.run_fissa is structured as a 2-d array of 2-d arrays (it can't be a 4-d array because of trials generally don't have the same number of frames).
The results from the cell (ROI) numbered c and the trial (TIFF) numbered t are located at result[c, t][0,
Step4: Let's next plot the traces across all ROIs and trials.
Step5: Comparing ROI signal to neuropil region signals
It can be very instructive to compare the signal in the central ROI with the surrounding neuropil regions. These can be found for cell c and trial t in raw[c, t][i,
Step6: df/f<sub>0</sub>
The default output from fissa.run_fissa is in the same units as the raw input (candelas per unit area).
It is often desirable to calculate the intensity of a signal relative to the baseline value, df/f<sub>0</sub>, for the traces.
fissa.run_fissa will do this for you provide the argument return_deltaf=True, and the sampling frequency of your TIFF files with freq=sample_frequency.
The sampling frequency must be provided because the data is smoothed in order to determine the baseline value f<sub>0</sub>.
When return_deltaf=True, run_fissa will return the df/f<sub>0</sub> output instead of the source signal traces scaled at the recording intensity.
If you need to access both the standard FISSA output and the df/f<sub>0</sub> output at the same time, you need to use the more flexible fissa.Experiment FISSA interface instead, as described in this example.
Step7: Note that by default, f<sub>0</sub> is determined as the minimum across all trials (all TIFFs) to ensure that results are directly comparable between trials, but you can normalise each trial individually instead if you prefer by providing the parameter deltaf_across_trials=False.
Step8: Since FISSA is very good at removing contamination from the ROI signals, the minimum value on the decontaminated trace will typically be 0..
Consequently, we use the minimum value of the (smoothed) raw signal to provide the f<sub>0</sub> from the raw trace for both the raw and decontaminated df/f<sub>0</sub>.
We can plot the df/f<sub>0</sub> for every cell during every trial as follows.
Step9: For comparison purposes, we can also plot the df/f<sub>0</sub> for the raw data against the decontaminated signal.
Step10: Caching
After using FISSA to decontaminate the data collected in an experiment, you will probably want to save the output for later use, so you don't have to keep re-running FISSA on the data.
To facilitate this, an option to cache the outputs is built into FISSA.
If you provide fissa.run_fissa with an identifier to the experiment being analysed in the folder argument, it will cache results into the corresponding directory.
Later, if you call fissa.run_fissa again with the same folder argument, it will load the saved results from that cache folder instead of recomputing them.
Step11: Let's run FISSA on this experiment again, but this time save the results to the experiment's output directory.
Step12: A subsequent call to fissa.run_fissa which uses the same experiment folder argument will load the cached data instead of re-running the FISSA signal separation routine from scratch.
Step13: Exporting to MATLAB
The results can easily be exported to a MATLAB-compatible MAT-file as follows.
If we provide export_to_matfile=True to fissa.run_fissa, it will export the data to a matfile named "separated.mat" within the cache directory (the cache directory as provided with the folder argument).
Step14: Alternatively, we can export to a matfile with a custom file name by setting the export_to_matfile argument to the target path.
Step15: Loading the generated file (e.g. "output_folder/separated.mat") in MATLAB will provide you with all of FISSA's outputs.
These are structured in the same way as the raw and result variables returned by fissa.run_fissa.
With the python interface, the outputs are 2d numpy.ndarrays, each element of which is itself a 2d numpy.ndarray.
Meanwhile, when the output is loaded into MATLAB the data is structured as a 2d cell-array each element of which is a 2d matrix.
Additionally, note that whilst Python indexes from 0, MATLAB indexes from 1 instead.
As a consequence of this, the results seen in Python for a given roi and trial, result[roi, trial], correspond to the index S.result{roi + 1, trial + 1} in MATLAB.
Our first plot in this notebook can be replicated in MATLAB as follows
Step16: Analysis parameters
FISSA has several user-definable settings, which can be set as optional arguments to fissa.run_fissa.
Step17: We can plot the new results for our example trace from before. Although we doubled the number of neuropil regions around the cell, very little has changed for this example because there were not many sources of contamination.
However, there will be more of a difference if your data has more neuropil sources per unit area within the image.
Step18: Working with very large tiff files
By default, FISSA loads entire TIFF files into memory at once and then manipulates all ROIs within the TIFF.
This is more efficient, but can be problematic when working with very large TIFF files which are too big to be loaded into memory all at once.
If you run out of memory when running FISSA, you can try reducing the number of workers during the memory-intensive preparation step.
Step19: Alternatively, you can activate FISSA's low memory mode.
In this mode, it will load and process frames one at a time.
This will run a fair bit slower than the default mode, but has a much lower memory requirement. | <ASSISTANT_TASK:>
Python Code:
# Import the FISSA toolbox
import fissa
Explanation: Basic FISSA usage
This notebook contains a step-by-step example of how to use the function-based high-level interface to the FISSA toolbox, fissa.run_fissa.
For more details about the methodology behind FISSA, please see our paper:
Keemink, S. W., Lowe, S. C., Pakan, J. M. P., Dylda, E., van Rossum, M. C. W., and Rochefort, N. L. FISSA: A neuropil decontamination toolbox for calcium imaging signals, Scientific Reports, 8(1):3493, 2018. doi: 10.1038/s41598-018-21640-2.
See basic_usage_func.py (or basic_usage_func_windows.py for Windows users) for a short example script outside of a notebook interface.
Import packages
First, we need to import fissa.
End of explanation
# For plotting our results, import numpy and matplotlib
import matplotlib.pyplot as plt
import numpy as np
# Fetch the colormap object for Cynthia Brewer's Paired color scheme
colors = plt.get_cmap("Paired")
Explanation: We also need to import some plotting dependencies which we'll make use of in this notebook to display the results.
End of explanation
# Define path to imagery and to the ROI set
images_location = "exampleData/20150529"
rois_location = "exampleData/20150429.zip"
# Call FISSA using the function-based interface
result, raw = fissa.run_fissa(images_location, rois_location)
Explanation: Running FISSA
With the function-based interface to FISSA, everything is handled in a single function call to fissa.run_fissa. The function takes as its input the raw signals, and returns the decontaminated signals.
The mandatory inputs to fissa.run_fissa are:
the experiment images
the regions of interest (ROIs) to extract
Images can be given as a path to a folder containing tiff stacks:
python
images = "folder"
Each of these tiff-stacks in the folder (e.g. "folder/trial_001.tif") is a trial with many frames.
Although we refer to one trial as an image, it is actually a video recording.
Alternatively, the image data can be given as a list of paths to tiffs:
python
images = ["folder/trial_001.tif", "folder/trial_002.tif", "folder/trial_003.tif"]
or as a list of arrays which you have already loaded into memory:
python
images = [array1, array2, array3, ...]
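For instance, a minimal sketch of pre-loading the TIFFs yourself, using the tifffile package (just one of several ways to read a TIFF stack into a numpy array; the file names are only illustrative):
python
import tifffile
# Each element should be a (frames, height, width) array, one per trial
images = [
    tifffile.imread("folder/trial_001.tif"),
    tifffile.imread("folder/trial_002.tif"),
]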
For the regions of interest (ROIs) input, you can either provide a single set of ROIs, or a set of ROIs for every image.
If the ROIs were defined using ImageJ, use ImageJ's export function to save them in a zip.
Then, provide the ROI filename.
python
rois = "rois.zip" # for a single set of ROIs used across all images
The same set of ROIs will be used for every image in images.
Sometimes there is motion between trials causing the alignment of the ROIs to drift.
In such a situation, you may need to use a slightly different location of the ROIs for each trial.
This can be handled by providing FISSA with a list of ROI sets — one ROI set (i.e. one ImageJ zip file) per trial.
python
rois = ["rois1.zip", "rois2.zip", ...] # for a unique roiset for each image
Please note that the ROIs defined in each ROI set must correspond to the same physical regions across all trials, and that the order must be consistent.
That is to say, the 1st ROI listed in each ROI set must correspond to the same item appearing in each trial, etc.
In this notebook, we will demonstrate how to use FISSA with ImageJ ROI sets, saved as zip files.
However, you are not restricted to providing your ROIs to FISSA in this format.
FISSA will also accept ROIs which are arbitrarily defined by providing them as arrays (numpy.ndarray objects).
ROIs provided in this way can be defined either as boolean-valued masks indicating the presence of a ROI per-pixel in the image, or defined as a list of coordinates defining the boundary of the ROI.
For examples of such usage, see our Suite2p, CNMF, and SIMA example notebooks.
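A rough sketch of ROIs supplied as boolean masks is shown below; it is not used in this notebook, and the image shape and mask positions are purely hypothetical:
python
import numpy as np
roi1 = np.zeros((200, 150), dtype=bool)  # hypothetical image height x width
roi1[60:68, 80:88] = True                # pixels belonging to the first ROI
roi2 = np.zeros((200, 150), dtype=bool)
roi2[100:108, 40:48] = True              # pixels belonging to the second ROI
rois = [roi1, roi2]                      # one mask per ROI, listed in a consistent order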
As an example, we will run FISSA on a small test dataset.
The test dataset can be found and downloaded from the examples folder of the fissa repository, along with the source for this example notebook.
End of explanation
# Plot sample trace
# Select the ROI and trial to plot
roi = 2
trial = 1
# Create the figure
plt.figure(figsize=(12, 6))
plt.plot(
raw[roi, trial][0, :],
lw=2,
label="Raw",
color=colors((roi * 2) % colors.N),
)
plt.plot(
result[roi, trial][0, :],
lw=2,
label="Decontaminated",
color=colors((roi * 2 + 1) % colors.N),
)
plt.title("ROI {}, Trial {}".format(roi, trial), fontsize=15)
plt.xlabel("Time (frame number)", fontsize=15)
plt.ylabel("Signal intensity (candela per unit area)", fontsize=15)
plt.grid()
plt.legend()
plt.show()
Explanation: The function-based interface is very straightforward, but note that you can only access the result which is returned by the function.
If you need to access the raw traces, ROI masks, or demixing matrix, you need to use the more flexible object-oriented (class based) interface using fissa.Experiment instead.
An example of this is given in our object-oriented example usage notebook.
Working with results
The output of fissa.run_fissa is structured as a 2-d array of 2-d arrays (it can't be a 4-d array because trials generally don't have the same number of frames).
The results from the cell (ROI) numbered c and the trial (TIFF) numbered t are located at result[c, t][0, :].
The fourth and final dimension works through frames within the TIFF file (time).
The third dimension iterates over output signals.
The 0-th entry of this is the signal which most closely corresponds to the raw signal within the ROI, and is FISSA's best guess for the decontaminated cell source signal.
The other signals are the isolated signals from contaminants such as neuropil and neighbouring cells.
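For instance, a quick way to inspect this layout using the result and raw arrays returned above:
python
n_roi, n_trial = result.shape     # outer 2-d array: one entry per (ROI, trial) pair
print(result[0, 0].shape)         # inner 2-d array: (number of signals, number of frames)
print(result[0, 0][0, :5])        # first few frames of the decontaminated trace for ROI 0, trial 0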
Let's compare the raw signal to the separated signal for a single trial from an example ROI.
End of explanation
# Plot all ROIs and trials
# Get the number of ROIs and trials
n_roi = result.shape[0]
n_trial = result.shape[1]
# Find the maximum signal intensities for each ROI
roi_max_raw = [
np.max([np.max(raw[i_roi, i_trial][0]) for i_trial in range(n_trial)])
for i_roi in range(n_roi)
]
roi_max_result = [
np.max([np.max(result[i_roi, i_trial][0]) for i_trial in range(n_trial)])
for i_roi in range(n_roi)
]
roi_max = np.maximum(roi_max_raw, roi_max_result)
# Plot our figure using subplot panels
plt.figure(figsize=(16, 10))
for i_roi in range(n_roi):
for i_trial in range(n_trial):
# Make subplot axes
i_subplot = 1 + i_trial * n_roi + i_roi
plt.subplot(n_trial, n_roi, i_subplot)
# Plot the data
plt.plot(
raw[i_roi][i_trial][0, :],
label="Raw",
color=colors((i_roi * 2) % colors.N),
)
plt.plot(
result[i_roi][i_trial][0, :],
label="Decontaminated",
color=colors((i_roi * 2 + 1) % colors.N),
)
# Labels and boiler plate
plt.ylim([-0.05 * roi_max[i_roi], roi_max[i_roi] * 1.05])
if i_roi == 0:
plt.ylabel(
"Trial {}\n\nSignal intensity\n(candela per unit area)".format(
i_trial + 1
)
)
if i_trial == 0:
plt.title("ROI {}".format(i_roi))
plt.legend()
if i_trial == n_trial - 1:
plt.xlabel("Time (frame number)")
plt.show()
Explanation: Let's next plot the traces across all ROIs and trials.
End of explanation
# Get the number of neuropil/surrounding regions.
# The raw data has the raw ROI signal in raw[:, :][0] and raw surround
# signals in the rest of the 3rd dimension.
nRegions = raw[0, 0].shape[0] - 1
# Select the ROI and trial to plot
roi = 2
trial = 1
# Create the figure
plt.figure(figsize=(12, 12))
# Plot extracted traces for each neuropil subregion
plt.subplot(2, 1, 1)
# Plot trace of raw ROI signal
plt.plot(
raw[roi, trial][0, :],
lw=2,
label="Raw ROI signal",
color=colors((roi * 2) % colors.N),
)
# Plot traces from each neuropil region
for i_neuropil in range(1, nRegions + 1):
alpha = i_neuropil / nRegions
plt.plot(
raw[roi, trial][i_neuropil, :],
lw=2,
label="Neuropil region {}".format(i_neuropil),
color="k",
alpha=alpha,
)
plt.ylim([0, 125])
plt.grid()
plt.legend()
plt.ylabel("Signal intensity (candela per unit area)", fontsize=15)
plt.title("ROI {}, Trial {}, neuropil region traces".format(roi, trial), fontsize=15)
# Plot the ROI signal
plt.subplot(2, 1, 2)
# Plot trace of raw ROI signal
plt.plot(raw[roi, trial][0, :], lw=2, label="Raw", color=colors((roi * 2) % colors.N))
# Plot decontaminated signal matched to the ROI
plt.plot(
result[roi, trial][0, :],
lw=2,
label="Decontaminated",
color=colors((roi * 2 + 1) % colors.N),
)
plt.ylim([0, 125])
plt.grid()
plt.legend()
plt.xlabel("Time (frame number)", fontsize=15)
plt.ylabel("Signal intensity (candela per unit area)", fontsize=15)
plt.title("ROI {}, Trial {}, raw and decontaminated".format(roi, trial), fontsize=15)
plt.show()
Explanation: Comparing ROI signal to neuropil region signals
It can be very instructive to compare the signal in the central ROI with the surrounding neuropil regions. These can be found for cell c and trial t in raw[c, t][i, :], with i=0 being the cell, and i=1,2,3,... indicating the surrounding regions.
Below we compare directly the raw ROI trace, the decontaminated trace, and the surrounding neuropil region traces.
End of explanation
sample_frequency = 10 # Hz
deltaf_result, deltaf_raw = fissa.run_fissa(
images_location, rois_location, freq=sample_frequency, return_deltaf=True
)
Explanation: df/f<sub>0</sub>
The default output from fissa.run_fissa is in the same units as the raw input (candelas per unit area).
It is often desirable to calculate the intensity of a signal relative to the baseline value, df/f<sub>0</sub>, for the traces.
fissa.run_fissa will do this for you if you provide the argument return_deltaf=True, and the sampling frequency of your TIFF files with freq=sample_frequency.
The sampling frequency must be provided because the data is smoothed in order to determine the baseline value f<sub>0</sub>.
When return_deltaf=True, run_fissa will return the df/f<sub>0</sub> output instead of the source signal traces scaled at the recording intensity.
If you need to access both the standard FISSA output and the df/f<sub>0</sub> output at the same time, you need to use the more flexible fissa.Experiment FISSA interface instead, as described in this example.
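A hedged sketch of that object-oriented route (the method and attribute names below follow the object-oriented example notebook):
python
experiment = fissa.Experiment(images_location, rois_location)
experiment.separate()                          # run the decontamination
experiment.calc_deltaf(freq=sample_frequency)  # compute df/f0 without discarding the raw-scale output
# experiment.result and experiment.deltaf_result are then both available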
End of explanation
# Plot sample trace
# Select the ROI and trial to plot
roi = 2
trial = 1
# Create the figure
plt.figure(figsize=(12, 6))
plt.plot(
deltaf_raw[roi, trial][0, :],
lw=2,
label="Raw",
color=colors((roi * 2) % colors.N),
)
plt.plot(
deltaf_result[roi, trial][0, :],
lw=2,
label="Decontaminated",
color=colors((roi * 2 + 1) % colors.N),
)
plt.title("ROI {}, Trial {}".format(roi, trial), fontsize=15)
plt.xlabel("Time (frame number)", fontsize=15)
plt.ylabel(r"$\Delta f\,/\,f_0$", fontsize=15)
plt.grid()
plt.legend()
plt.show()
Explanation: Note that by default, f<sub>0</sub> is determined as the minimum across all trials (all TIFFs) to ensure that results are directly comparable between trials, but you can normalise each trial individually instead if you prefer by providing the parameter deltaf_across_trials=False.
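A sketch of the per-trial variant, reusing the same inputs as above:
python
deltaf_result_pertrial, deltaf_raw_pertrial = fissa.run_fissa(
    images_location, rois_location,
    freq=sample_frequency, return_deltaf=True,
    deltaf_across_trials=False,  # determine f0 separately for each trial
)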
End of explanation
# Get the number of ROIs and trials
n_roi = result.shape[0]
n_trial = result.shape[1]
# Find the maximum signal intensities for each ROI,
# so we can scale ylim the same across subplots
roi_max = [
np.max([np.max(result[i_roi, i_trial][0]) for i_trial in range(n_trial)])
for i_roi in range(n_roi)
]
# Plot our figure using subplot panels
plt.figure(figsize=(16, 10))
for i_roi in range(n_roi):
for i_trial in range(n_trial):
# Make subplot axes
i_subplot = 1 + i_trial * n_roi + i_roi
plt.subplot(n_trial, n_roi, i_subplot)
# Plot the data
plt.plot(
result[i_roi][i_trial][0, :],
color=colors((i_roi * 2 + 1) % colors.N),
)
# Labels and boiler plate
plt.ylim([-0.05 * roi_max[i_roi], roi_max[i_roi] * 1.05])
if i_roi == 0:
plt.ylabel("Trial {}\n\n".format(i_trial + 1) + r"$\Delta f\,/\,f_0$")
if i_trial == 0:
plt.title("ROI {}".format(i_roi))
if i_trial == n_trial - 1:
plt.xlabel("Time (frame number)")
plt.show()
Explanation: Since FISSA is very good at removing contamination from the ROI signals, the minimum value on the decontaminated trace will typically be 0.
Consequently, we use the minimum value of the (smoothed) raw signal to provide the f<sub>0</sub> from the raw trace for both the raw and decontaminated df/f<sub>0</sub>.
We can plot the df/f<sub>0</sub> for every cell during every trial as follows.
End of explanation
# Plot all ROIs and trials
# Get the number of ROIs and trials
n_roi = deltaf_result.shape[0]
n_trial = deltaf_result.shape[1]
# Find the maximum signal intensities for each ROI
roi_max_raw = [
np.max([np.max(deltaf_raw[i_roi, i_trial][0]) for i_trial in range(n_trial)])
for i_roi in range(n_roi)
]
roi_max_result = [
np.max([np.max(deltaf_result[i_roi, i_trial][0]) for i_trial in range(n_trial)])
for i_roi in range(n_roi)
]
roi_max = np.maximum(roi_max_raw, roi_max_result)
# Plot our figure using subplot panels
plt.figure(figsize=(16, 10))
for i_roi in range(n_roi):
for i_trial in range(n_trial):
# Make subplot axes
i_subplot = 1 + i_trial * n_roi + i_roi
plt.subplot(n_trial, n_roi, i_subplot)
# Plot the data
plt.plot(
deltaf_raw[i_roi][i_trial][0, :],
label="Raw",
color=colors((i_roi * 2) % colors.N),
)
plt.plot(
deltaf_result[i_roi][i_trial][0, :],
label="Decontaminated",
color=colors((i_roi * 2 + 1) % colors.N),
)
# Labels and boiler plate
plt.ylim([-0.05 * roi_max[i_roi], roi_max[i_roi] * 1.05])
if i_roi == 0:
plt.ylabel("Trial {}\n\n".format(i_trial + 1) + r"$\Delta f\,/\,f_0$")
if i_trial == 0:
plt.title("ROI {}".format(i_roi))
plt.legend()
if i_trial == n_trial - 1:
plt.xlabel("Time (frame number)")
plt.show()
Explanation: For comparison purposes, we can also plot the df/f<sub>0</sub> for the raw data against the decontaminated signal.
End of explanation
# Define the folder where FISSA's outputs will be cached, so they can be
# quickly reloaded in the future without having to recompute them.
#
# This argument is optional; if it is not provided, FISSA will not save its
# results for later use.
#
# If the output directory already exists, FISSA will load the contents of
# the cache instead of recomputing it.
#
# Note: you *must* use a different folder for each experiment, otherwise
# FISSA will load the existing data instead of computing results for the
# new experiment.
#
# In this example, we will use the current datetime as the name of the
# experiment, but you can name your experiments however you want to.
# If you want to take advantage of the caching of results, you should use
# a more descriptive name than this so you can identify the actual
# dataset that the FISSA results correspond to, and load them appropriately.
import datetime
output_folder = "fissa-example_{}".format(
datetime.datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
)
print(output_folder)
Explanation: Caching
After using FISSA to decontaminate the data collected in an experiment, you will probably want to save the output for later use, so you don't have to keep re-running FISSA on the data.
To facilitate this, an option to cache the outputs is built into FISSA.
If you provide fissa.run_fissa with an identifier to the experiment being analysed in the folder argument, it will cache results into the corresponding directory.
Later, if you call fissa.run_fissa again with the same folder argument, it will load the saved results from that cache folder instead of recomputing them.
End of explanation
# Run FISSA, saving to results to output_folder
result, raw = fissa.run_fissa(images_location, rois_location, folder=output_folder)
Explanation: Let's run FISSA on this experiment again, but this time save the results to the experiment's output directory.
End of explanation
# Run FISSA, loading results from cache in output_folder
result, raw = fissa.run_fissa(images_location, rois_location, folder=output_folder)
Explanation: A subsequent call to fissa.run_fissa which uses the same experiment folder argument will load the cached data instead of re-running the FISSA signal separation routine from scratch.
End of explanation
result, raw = fissa.run_fissa(
images_location, rois_location, folder=output_folder, export_to_matfile=True
)
Explanation: Exporting to MATLAB
The results can easily be exported to a MATLAB-compatible MAT-file as follows.
If we provide export_to_matfile=True to fissa.run_fissa, it will export the data to a matfile named "separated.mat" within the cache directory (the cache directory as provided with the folder argument).
End of explanation
result, raw = fissa.run_fissa(
images_location, rois_location, export_to_matfile="experiment_results.mat"
)
Explanation: Alternatively, we can export to a matfile with a custom file name by setting the export_to_matfile argument to the target path.
End of explanation
# Call FISSA with elevated verbosity
result = fissa.run_fissa(images_location, rois_location, verbosity=2)
Explanation: Loading the generated file (e.g. "output_folder/separated.mat") in MATLAB will provide you with all of FISSA's outputs.
These are structured in the same way as the raw and result variables returned by fissa.run_fissa.
With the python interface, the outputs are 2d numpy.ndarrays, each element of which is itself a 2d numpy.ndarray.
Meanwhile, when the output is loaded into MATLAB, the data is structured as a 2d cell-array, each element of which is a 2d matrix.
Additionally, note that whilst Python indexes from 0, MATLAB indexes from 1 instead.
As a consequence of this, the results seen in Python for a given roi and trial, result[roi, trial], correspond to the index S.result{roi + 1, trial + 1} in MATLAB.
Our first plot in this notebook can be replicated in MATLAB as follows:
octave
%% Plot example traces in MATLAB
% Load FISSA output data in MATLAB
% ... either from the automatic file name within a cache
% S = load('fissa-example/separated.mat')
% ... or from a custom output path
S = load('experiment_results.mat')
% Select the second trial
% (On Python, this would be trial = 1)
trial = 2;
% Plot the result traces for each ROI
figure;
hold on;
for i_roi = 1:size(S.result, 1);
plot(S.result{i_roi, trial}(1, :));
end
xlabel('Time (frame number)');
ylabel('Signal intensity (candela per unit area)');
grid on;
box on;
set(gca,'TickDir','out');
Customisation
Controlling verbosity
The level of verbosity of FISSA can be controlled with the verbosity parameter.
The default is verbosity=1.
If the verbosity parameter is higher, FISSA will print out more information while it is processing.
This can be helpful for debugging purposes.
The verbosity reaches its maximum at verbosity=6.
If verbosity=0, FISSA will run silently.
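For example, to run silently instead (same inputs as above):
python
result = fissa.run_fissa(images_location, rois_location, verbosity=0)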
End of explanation
# FISSA uses multiprocessing to speed up its processing.
# By default, it will spawn one worker per CPU core on your machine.
# However, if you have a lot of cores and not much memory, you many not
# be able to suport so many workers simultaneously.
# In particular, this can be problematic during the data preparation step
# in which TIFFs are loaded into memory.
# The default number of cores for the data preparation and separation steps
# can be changed as follows.
ncores_preparation = 4 # If None, uses all available cores
ncores_separation = None # if None, uses all available cores
# By default, FISSA uses 4 subregions for the neuropil region.
# If you have very dense data with a lot of different signals per unit area,
# you may wish to increase the number of regions.
n_regions = 8
# By default, each surrounding region has the same area as the central ROI.
# i.e. expansion = 1
# However, you may wish to increase or decrease this value.
expansion = 0.75
# The degree of signal sparsity can be controlled with the alpha parameter.
alpha = 0.02
# If you change the experiment parameters, you need to change the cache directory too.
# Otherwise FISSA will try to reload the results from the previous run instead of
# computing the new results. FISSA will throw an error if you try to load data which
# was generated with different analysis parameters to the current parameters.
output_folder2 = output_folder + "_alt"
# Run FISSA with these parameters
result, raw = fissa.run_fissa(
images_location,
rois_location,
output_folder2,
nRegions=n_regions,
expansion=expansion,
alpha=alpha,
ncores_preparation=ncores_preparation,
ncores_separation=ncores_separation,
)
Explanation: Analysis parameters
FISSA has several user-definable settings, which can be set as optional arguments to fissa.run_fissa.
End of explanation
n_roi = result.shape[0]
n_trial = result.shape[1]
i_roi = 3
plt.figure(figsize=(12, 6))
for i_trial in range(n_trial):
plt.plot(result[i_roi, i_trial][0, :], label="Trial {}".format(i_trial + 1))
plt.title("ROI {}".format(i_roi), fontsize=15)
plt.xlabel("Time (Frame number)", fontsize=15)
plt.ylabel("Signal intensity (candela per unit area)", fontsize=15)
plt.grid()
plt.legend()
plt.show()
Explanation: We can plot the new results for our example trace from before. Although we doubled the number of neuropil regions around the cell, very little has changed for this example because there were not many sources of contamination.
However, there will be more of a difference if your data has more neuropil sources per unit area within the image.
End of explanation
result = fissa.run_fissa(images_location, rois_location, ncores_preparation=1)
Explanation: Working with very large tiff files
By default, FISSA loads entire TIFF files into memory at once and then manipulates all ROIs within the TIFF.
This is more efficient, but can be problematic when working with very large TIFF files which are too big to be loaded into memory all at once.
If you run out of memory when running FISSA, you can try reducing the number of workers during the memory-intensive preparation step.
End of explanation
result, raw = fissa.run_fissa(images_location, rois_location, lowmemory_mode=True)
Explanation: Alternatively, you can activate FISSA's low memory mode.
In this mode, it will load and process frames one at a time.
This will run a fair bit slower than the default mode, but has a much lower memory requirement.
End of explanation
<END_TASK> |
15,559 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2019 The TensorFlow Authors.
Step1: 不均衡データの分類
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https
Step2: データの処理と調査
Kaggle Credit Card Fraud データセットをダウンロードする
Pandas は、構造化データの読み込みと処理を支援するユーティリティが多数含まれる Python ライブラリです。Pandas を使用し、URL から CSV を Pandas DataFrame にダウンロードします。
注意
Step3: クラスラベルの不均衡を調べる
データセットの不均衡を見てみましょう。
Step4: これは、陽性サンプルの割合が少ないことを示しています。
データをクリーニング、分割、正規化する
生データにはいくつかの問題があります。まず、TimeカラムとAmountカラムはむらがあり過ぎてそのままでは使用できません。Timeカラムは意味が明確ではないため削除し、Amountカラムのログを取って範囲を縮小します。
Step5: データセットをトレーニングセット、検証セット、テストセットに分割します。検証セットはモデルを適合させる間に使用され、損失とメトリクスを評価しますが、モデルはこのデータに適合しません。テストセットはトレーニング段階では全く使用されず、モデルがどの程度新しいデータを一般化したかを評価するために最後にだけ使用されます。これはトレーニングデータ不足による過学習が重大な懸念事項である不均衡データセットでは特に重要です。
Step6: sklearn の StandardScaler を使用して入力特徴を正規化します。これで平均は 0、標準偏差は 1 に設定されます。
注意
Step7: 警告
Step8: モデルとメトリクスを定義する
密に接続された非表示レイヤー、過学習を防ぐドロップアウトレイヤー、取引が不正である確率を返す出力シグモイドレイヤーを持つ単純なニューラルネットワークを作成する関数を定義します。
Step9: 有用なメトリクスを理解する
上記で定義したメトリクスのいくつかは、モデルで計算できるため、パフォーマンス評価の際に有用なことに着目してください。
偽陰性と偽陽性は誤って分類されたサンプルです。
真陰性と真陽性は正しく分類されたサンプルです。
正解率は正しく分類された例の割合です。
$\frac{\text{true samples}}{\text{total samples}}$
適合率は正しく分類された予測陽性の割合です。
$\frac{\text{true positives}}{\text{true positives + false positives}}$
再現率は正しく分類された実際の陽性の割合です。
$\frac{\text{true positives}}{\text{true positives + false negatives}}$
AUC は受信者動作特性曲線 (ROC-AUC) の曲線下の面積を指します。この指標は、分類器がランダムな正のサンプルをランダムな負のサンプルよりも高くランク付けする確率に等しくなります。
AUPRC は適合率-再現率曲線の曲線下の面積を指します。この指標は、さまざまな確率しきい値の適合率と再現率のペアを計算します。
注意
Step10: モデルをテスト実行します。
Step11: オプション
Step12: 設定する正しいバイアスは、以下から導き出すことができます。
$$ p_0 = pos/(pos + neg) = 1/(1+e^{-b_0}) $$ $$ b_0 = -log_e(1/p_0 - 1) $$ $$ b_0 = log_e(pos/neg)$$
Step13: それを初期バイアスとして設定すると、モデルははるかに合理的な初期推測ができるようになります。
これはpos/total = 0.0018に近い値になるはずです。
Step14: この初期化では、初期損失はおおよそ次のようになります。
$$-p_0log(p_0)-(1-p_0)log(1-p_0) = 0.01317$$
Step15: この初期の損失は、単純な初期化を行った場合の約 50 分の 1 です。
この方法だと、陽性の例がないことを学習するだけのためにモデルが最初の数エポックを費やす必要がありません。また、これによって、トレーニング中の損失のプロットが読みやすくなります。
初期の重みをチェックポイントする
さまざまなトレーニングの実行を比較しやすくするために、この初期モデルの重みをチェックポイントファイルに保持し、トレーニングの前に各モデルにロードします。
Step16: バイアス修正が有効であることを確認する
先に進む前に、慎重なバイアス初期化が実際に役立ったかどうかを素早く確認します。
この慎重な初期化を行った場合と行わなかった場合でモデルを 20 エポックトレーニングしてから損失を比較します。
Step17: 上の図を見れば一目瞭然ですが、検証損失に関しては、この問題ではこのように慎重に初期化することによって、明確なアドバンテージを得ることができます。
モデルをトレーニングする
Step18: トレーニング履歴を確認する
このセクションでは、トレーニングと検証のセットでモデルの精度と損失のプロットを作成します。これらは、過適合をチェックするのに役立ちます。詳細については、過適合と学習不足チュートリアルを参照してください。
さらに、上で作成した任意のメトリクスのプロットを作成することができます。 例として、下記には偽陰性が含まれています。
Step19: 注意
Step20: テストデータセットでモデルを評価し、上記で作成した行列の結果を表示します。
Step21: モデルがすべてを完璧に予測した場合は、これは対角行列になり、主な対角線から外れた値が不正確な予測を示してゼロになります。 この場合、行列は偽陽性が比較的少ないことを示し、これは誤ってフラグが立てられた正当な取引が比較的少ないことを意味します。 しかし、偽陽性の数が増えればコストがかかる可能性はありますが、偽陰性の数はさらに少なくした方が良いでしょう。偽陽性は顧客にカード利用履歴の確認を求めるメールを送信する可能性があるのに対し、偽陰性は不正な取引を成立させてしまう可能性があるため、このトレードオフはむしろ望ましいといえます。
ROC をプロットする
次に、ROC をプロットします。このプロットは、出力しきい値を調整するだけでモデルが到達できるパフォーマンス範囲が一目で分かるので有用です。
Step22: AUPRC をプロットする
AUPRC をプロットします。補間された適合率-再現率曲線の下の領域は、分類しきい値のさまざまな値に対して(再現率、適合率)点をプロットすることにより取得できます。計算方法によっては、PR AUC はモデルの平均適合率と同等になる場合があります。
Step23: 適合率は比較的高いように見えますが、再現率と ROC 曲線の下の曲線下面積 (AUC) は、期待するほど高いものではありません。適合率と再現率の両方を最大化しようとすると、分類器はしばしば課題に直面します。不均衡データセットを扱う場合は特にそうです。大切な問題のコンテキストでは異なるタイプのエラーにかかるコストを考慮することが重要です。 この例では、偽陰性(不正な取引が見逃されている)は金銭的コストを伴う可能性がある一方で、偽陽性(取引が不正であると誤ってフラグが立てられている)はユーザーの幸福度を低下させる可能性があります。
クラスの重み
クラスの重みを計算する
最終目的は不正な取引を特定することですが、処理する陽性サンプルがそれほど多くないので、利用可能な数少ない例の分類器に大きな重み付けをします。 これを行うには、パラメータを介して各クラスの重みを Keras に渡します。 これにより、モデルは十分に表現されていないクラスの例にも「より注意を払う」ようになります。
Step24: クラスの重みでモデルをトレーニングする
次に、クラスの重みでモデルを再トレーニングして評価し、それが予測にどのように影響するかを確認します。
注意
Step25: トレーニング履歴を確認する
Step26: メトリクスを評価する
Step27: ここでは、クラスの重みを使用すると偽陽性が多くなるため、正解率と適合率が低くなりますが、逆にモデルがより多くの真陽性を検出したため、再現率と AUC が高くなっていることが分かります。このモデルは正解率は低いものの、再現率が高くなるので、より多くの不正取引を特定します。もちろん、両タイプのエラーにはコストがかかります。(あまりにも多くの正当な取引を不正取引としてフラグを立ててユーザーに迷惑をかけたくはないはずです。)アプリケーションのこういった異なるタイプのエラー間のトレードオフは、慎重に検討してください
ROC をプロットする
Step28: AUPRC をプロットする
Step29: オーバーサンプリング
マイノリティクラスをオーバーサンプリングする
関連したアプローチとして、マイノリティクラスをオーバーサンプリングしてデータセットを再サンプルするという方法があります。
Step30: NumPy を使用する
陽性の例から適切な数のランダムインデックスを選択して、手動でデータセットのバランスをとることができます。
Step31: tf.dataを使用する
もしtf.dataを使用している場合、バランスの取れた例を作成する最も簡単な方法は、positiveとnegativeのデータセットから開始し、それらをマージすることです。その他の例については、tf.data ガイドをご覧ください。
Step32: 各データセットは(feature, label)のペアを提供します。
Step33: tf.data.Dataset.sample_from_datasets を使用し、この 2 つをマージします。
Step34: このデータセットを使用するには、エポックごとのステップ数が必要です。
この場合の「エポック」の定義はあまり明確ではありません。それぞれの陰性の例を 1 度見るのに必要なバッチ数だとしましょう。
Step35: オーバーサンプリングデータをトレーニングする
ここで、クラスの重みを使用する代わりに、再サンプルされたデータセットを使用してモデルをトレーニングし、それらの手法がどう比較されるかを確認してみましょう。
注意
Step36: トレーニングプロセスが勾配の更新ごとにデータセット全体を考慮する場合は、このオーバーサンプリングは基本的にクラスの重み付けと同じになります。
しかし、ここで行ったようにバッチ単位でモデルをトレーニングする場合、オーバーサンプリングされたデータはより滑らかな勾配信号を提供します。それぞれの陽性の例を大きな重みを持つ 1 つのバッチで表示する代わりに、毎回小さな重みを持つ多くの異なるバッチで表示します。
このような滑らかな勾配信号は、モデルのトレーニングを容易にします。
トレーニング履歴を確認する
トレーニングデータは検証データやテストデータとは全く異なる分散を持つため、ここでのメトリクスの分散は異なることに注意してください。
Step37: 再トレーニングする
バランスの取れたデータの方がトレーニングしやすいため、上記のトレーニング方法ではすぐに過学習してしまう可能性があります。
したがって、エポックを分割して、tf.keras.callbacks.EarlyStopping がトレーニングを停止するタイミングをより細かく制御できるようにします。
Step38: トレーニング履歴を再確認する
Step39: メトリクスを評価する
Step40: ROC をプロットする
Step41: AUPRC をプロットする | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 The TensorFlow Authors.
End of explanation
import tensorflow as tf
from tensorflow import keras
import os
import tempfile
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import sklearn
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
mpl.rcParams['figure.figsize'] = (12, 10)
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
Explanation: 不均衡データの分類
<table class="tfo-notebook-buttons" align="left">
<td><a target="_blank" href="https://www.tensorflow.org/tutorials/structured_data/imbalanced_data"><img src="https://www.tensorflow.org/images/tf_logo_32px.png">TensorFlow.org で表示</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/tutorials/structured_data/imbalanced_data.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png">Google Colab で実行</a></td>
<td><a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ja/tutorials/structured_data/imbalanced_data.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">GitHub でソースを表示</a></td>
<td><a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/tutorials/structured_data/imbalanced_data.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png">ノートブックをダウンロード</a></td>
</table>
このチュートリアルでは、1 つのクラスの例の数が他のクラスの例の数を大幅に上回る、非常に不均衡なデータセットを分類する方法を示します。Kaggle でホストされているクレジットカード不正検出データセットを使用します。目的は、合計 284,807 件のトランザクションからわずか 492 件の不正なトランザクションを検出することです。Keras を使用してモデルを定義し、クラスの重み付けを使用してモデルが不均衡なデータから学習できるようにします。
このチュートリアルには、次の完全なコードが含まれています。
Pandas を使用して CSV ファイルを読み込む。
トレーニングセット、検証セット、テストセットを作成する。
Keras を使用してモデルの定義してトレーニングする(クラスの重みの設定を含む)。
様々なメトリクス(適合率や再現率を含む)を使用してモデルを評価する。
不均衡データを扱うための一般的なテクニックを試す。
クラスの重み付け
オーバーサンプリング
Setup
End of explanation
file = tf.keras.utils
raw_df = pd.read_csv('https://storage.googleapis.com/download.tensorflow.org/data/creditcard.csv')
raw_df.head()
raw_df[['Time', 'V1', 'V2', 'V3', 'V4', 'V5', 'V26', 'V27', 'V28', 'Amount', 'Class']].describe()
Explanation: データの処理と調査
Kaggle Credit Card Fraud データセットをダウンロードする
Pandas は、構造化データの読み込みと処理を支援するユーティリティが多数含まれる Python ライブラリです。Pandas を使用し、URL から CSV を Pandas DataFrame にダウンロードします。
注意: このデータセットは、Worldline と ULB (Université Libre de Bruxelles) の機械学習グループによるビッグデータマイニングと不正検出に関する共同研究で収集および分析されたものです。関連トピックに関する現在および過去のプロジェクトの詳細は、こちらと DefeatFraud プロジェクトのページをご覧ください。
End of explanation
neg, pos = np.bincount(raw_df['Class'])
total = neg + pos
print('Examples:\n Total: {}\n Positive: {} ({:.2f}% of total)\n'.format(
total, pos, 100 * pos / total))
Explanation: クラスラベルの不均衡を調べる
データセットの不均衡を見てみましょう。
End of explanation
cleaned_df = raw_df.copy()
# You don't want the `Time` column.
cleaned_df.pop('Time')
# The `Amount` column covers a huge range. Convert to log-space.
eps = 0.001 # 0 => 0.1¢
cleaned_df['Log Ammount'] = np.log(cleaned_df.pop('Amount')+eps)
Explanation: これは、陽性サンプルの割合が少ないことを示しています。
データをクリーニング、分割、正規化する
生データにはいくつかの問題があります。まず、TimeカラムとAmountカラムはむらがあり過ぎてそのままでは使用できません。Timeカラムは意味が明確ではないため削除し、Amountカラムのログを取って範囲を縮小します。
End of explanation
# Use a utility from sklearn to split and shuffle your dataset.
train_df, test_df = train_test_split(cleaned_df, test_size=0.2)
train_df, val_df = train_test_split(train_df, test_size=0.2)
# Form np arrays of labels and features.
train_labels = np.array(train_df.pop('Class'))
bool_train_labels = train_labels != 0
val_labels = np.array(val_df.pop('Class'))
test_labels = np.array(test_df.pop('Class'))
train_features = np.array(train_df)
val_features = np.array(val_df)
test_features = np.array(test_df)
Explanation: データセットをトレーニングセット、検証セット、テストセットに分割します。検証セットはモデルを適合させる間に使用され、損失とメトリクスを評価しますが、モデルはこのデータに適合しません。テストセットはトレーニング段階では全く使用されず、モデルがどの程度新しいデータを一般化したかを評価するために最後にだけ使用されます。これはトレーニングデータ不足による過学習が重大な懸念事項である不均衡データセットでは特に重要です。
End of explanation
scaler = StandardScaler()
train_features = scaler.fit_transform(train_features)
val_features = scaler.transform(val_features)
test_features = scaler.transform(test_features)
train_features = np.clip(train_features, -5, 5)
val_features = np.clip(val_features, -5, 5)
test_features = np.clip(test_features, -5, 5)
print('Training labels shape:', train_labels.shape)
print('Validation labels shape:', val_labels.shape)
print('Test labels shape:', test_labels.shape)
print('Training features shape:', train_features.shape)
print('Validation features shape:', val_features.shape)
print('Test features shape:', test_features.shape)
Explanation: sklearn の StandardScaler を使用して入力特徴を正規化します。これで平均は 0、標準偏差は 1 に設定されます。
注意: StandardScalerはtrain_featuresを使用する場合にのみ適合し、モデルが検証セットやテストセットでピークを迎えることがないようにします。
End of explanation
pos_df = pd.DataFrame(train_features[ bool_train_labels], columns=train_df.columns)
neg_df = pd.DataFrame(train_features[~bool_train_labels], columns=train_df.columns)
sns.jointplot(x=pos_df['V5'], y=pos_df['V6'],
kind='hex', xlim=(-5,5), ylim=(-5,5))
plt.suptitle("Positive distribution")
sns.jointplot(x=neg_df['V5'], y=neg_df['V6'],
kind='hex', xlim=(-5,5), ylim=(-5,5))
_ = plt.suptitle("Negative distribution")
Explanation: 警告: モデルをデプロイする場合には、前処理の計算を保存することが非常に重要です。最も簡単なのは、それらをレイヤーとして実装し、エクスポート前にモデルに加える方法です。
データ分散を確認する
次に、いくつかの特徴における陽性の例と陰性の例の分散を比較します。 この時点で自問すべき点は、次のとおりです。
それらの分散には意味がありますか?
はい。入力を正規化したので、ほとんどが+/- 2の範囲内に集中しています。
分散間の差は見られますか?
はい。陽性の例には、はるかに高い極値が含まれています。
End of explanation
METRICS = [
keras.metrics.TruePositives(name='tp'),
keras.metrics.FalsePositives(name='fp'),
keras.metrics.TrueNegatives(name='tn'),
keras.metrics.FalseNegatives(name='fn'),
keras.metrics.BinaryAccuracy(name='accuracy'),
keras.metrics.Precision(name='precision'),
keras.metrics.Recall(name='recall'),
keras.metrics.AUC(name='auc'),
keras.metrics.AUC(name='prc', curve='PR'), # precision-recall curve
]
def make_model(metrics=METRICS, output_bias=None):
if output_bias is not None:
output_bias = tf.keras.initializers.Constant(output_bias)
model = keras.Sequential([
keras.layers.Dense(
16, activation='relu',
input_shape=(train_features.shape[-1],)),
keras.layers.Dropout(0.5),
keras.layers.Dense(1, activation='sigmoid',
bias_initializer=output_bias),
])
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=1e-3),
loss=keras.losses.BinaryCrossentropy(),
metrics=metrics)
return model
Explanation: モデルとメトリクスを定義する
密に接続された非表示レイヤー、過学習を防ぐドロップアウトレイヤー、取引が不正である確率を返す出力シグモイドレイヤーを持つ単純なニューラルネットワークを作成する関数を定義します。
End of explanation
EPOCHS = 100
BATCH_SIZE = 2048
early_stopping = tf.keras.callbacks.EarlyStopping(
monitor='val_prc',
verbose=1,
patience=10,
mode='max',
restore_best_weights=True)
model = make_model()
model.summary()
Explanation: 有用なメトリクスを理解する
上記で定義したメトリクスのいくつかは、モデルで計算できるため、パフォーマンス評価の際に有用なことに着目してください。
偽陰性と偽陽性は誤って分類されたサンプルです。
真陰性と真陽性は正しく分類されたサンプルです。
正解率は正しく分類された例の割合です。
$\frac{\text{true samples}}{\text{total samples}}$
適合率は正しく分類された予測陽性の割合です。
$\frac{\text{true positives}}{\text{true positives + false positives}}$
再現率は正しく分類された実際の陽性の割合です。
$\frac{\text{true positives}}{\text{true positives + false negatives}}$
AUC は受信者動作特性曲線 (ROC-AUC) の曲線下の面積を指します。この指標は、分類器がランダムな正のサンプルをランダムな負のサンプルよりも高くランク付けする確率に等しくなります。
AUPRC は適合率-再現率曲線の曲線下の面積を指します。この指標は、さまざまな確率しきい値の適合率と再現率のペアを計算します。
注意: 精度は、このタスクに役立つ指標ではありません。常に False を予測することで、このタスクの精度を 99.8% 以上にすることができるからです。
詳細は以下を参照してください。
真陽性、偽陽性、偽陰性、真陰性
正解度
適合率と再現率
ROC-AUC
適合率-再現率と ROC 曲線の関係
ベースラインモデル
モデルを構築する
次に、前に定義した関数を使用してモデルを作成し、トレーニングします。モデルはデフォルトよりも大きいバッチサイズ 2048 を使って適合されていることに注目してください。これは、各バッチに必ずいくつかの陽性サンプルが含まれるようにするために重要です。もし、バッチサイズが小さすぎると、学習できる不正取引が全くないという可能性があります。
注意: このモデルはクラスの不均衡をうまく処理できません。後ほどこのチュートリアル内で改善します。
End of explanation
model.predict(train_features[:10])
Explanation: モデルをテスト実行します。
End of explanation
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
Explanation: オプション: 正しい初期バイアスを設定する
これら初期の推測はあまり良いとは言えません。データセットは不均衡であることが分かっています。それを反映できるように、出力レイヤーのバイアスを設定します。(参照: ニューラルネットワークのトレーニングのレシピ: 「init well」)これは初期収束に有用です。
デフォルトのバイアス初期化では、損失はmath.log(2) = 0.69314程度になります。
End of explanation
initial_bias = np.log([pos/neg])
initial_bias
Explanation: 設定する正しいバイアスは、以下から導き出すことができます。
$$ p_0 = pos/(pos + neg) = 1/(1+e^{-b_0}) $$ $$ b_0 = -log_e(1/p_0 - 1) $$ $$ b_0 = log_e(pos/neg)$$
End of explanation
model = make_model(output_bias=initial_bias)
model.predict(train_features[:10])
Explanation: それを初期バイアスとして設定すると、モデルははるかに合理的な初期推測ができるようになります。
これはpos/total = 0.0018に近い値になるはずです。
End of explanation
results = model.evaluate(train_features, train_labels, batch_size=BATCH_SIZE, verbose=0)
print("Loss: {:0.4f}".format(results[0]))
Explanation: この初期化では、初期損失はおおよそ次のようになります。
$$-p_0log(p_0)-(1-p_0)log(1-p_0) = 0.01317$$
End of explanation
initial_weights = os.path.join(tempfile.mkdtemp(), 'initial_weights')
model.save_weights(initial_weights)
Explanation: この初期の損失は、単純な初期化を行った場合の約 50 分の 1 です。
この方法だと、陽性の例がないことを学習するだけのためにモデルが最初の数エポックを費やす必要がありません。また、これによって、トレーニング中の損失のプロットが読みやすくなります。
初期の重みをチェックポイントする
さまざまなトレーニングの実行を比較しやすくするために、この初期モデルの重みをチェックポイントファイルに保持し、トレーニングの前に各モデルにロードします。
End of explanation
model = make_model()
model.load_weights(initial_weights)
model.layers[-1].bias.assign([0.0])
zero_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
model = make_model()
model.load_weights(initial_weights)
careful_bias_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=20,
validation_data=(val_features, val_labels),
verbose=0)
def plot_loss(history, label, n):
# Use a log scale on y-axis to show the wide range of values.
plt.semilogy(history.epoch, history.history['loss'],
color=colors[n], label='Train ' + label)
plt.semilogy(history.epoch, history.history['val_loss'],
color=colors[n], label='Val ' + label,
linestyle="--")
plt.xlabel('Epoch')
plt.ylabel('Loss')
plot_loss(zero_bias_history, "Zero Bias", 0)
plot_loss(careful_bias_history, "Careful Bias", 1)
Explanation: バイアス修正が有効であることを確認する
先に進む前に、慎重なバイアス初期化が実際に役立ったかどうかを素早く確認します。
この慎重な初期化を行った場合と行わなかった場合でモデルを 20 エポックトレーニングしてから損失を比較します。
End of explanation
model = make_model()
model.load_weights(initial_weights)
baseline_history = model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks=[early_stopping],
validation_data=(val_features, val_labels))
Explanation: 上の図を見れば一目瞭然ですが、検証損失に関しては、この問題ではこのように慎重に初期化することによって、明確なアドバンテージを得ることができます。
モデルをトレーニングする
End of explanation
def plot_metrics(history):
metrics = ['loss', 'prc', 'precision', 'recall']
for n, metric in enumerate(metrics):
name = metric.replace("_"," ").capitalize()
plt.subplot(2,2,n+1)
plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train')
plt.plot(history.epoch, history.history['val_'+metric],
color=colors[0], linestyle="--", label='Val')
plt.xlabel('Epoch')
plt.ylabel(name)
if metric == 'loss':
plt.ylim([0, plt.ylim()[1]])
elif metric == 'auc':
plt.ylim([0.8,1])
else:
plt.ylim([0,1])
plt.legend();
plot_metrics(baseline_history)
Explanation: トレーニング履歴を確認する
このセクションでは、トレーニングと検証のセットでモデルの精度と損失のプロットを作成します。これらは、過適合をチェックするのに役立ちます。詳細については、過適合と学習不足チュートリアルを参照してください。
さらに、上で作成した任意のメトリクスのプロットを作成することができます。 例として、下記には偽陰性が含まれています。
End of explanation
train_predictions_baseline = model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_baseline = model.predict(test_features, batch_size=BATCH_SIZE)
def plot_cm(labels, predictions, p=0.5):
cm = confusion_matrix(labels, predictions > p)
plt.figure(figsize=(5,5))
sns.heatmap(cm, annot=True, fmt="d")
plt.title('Confusion matrix @{:.2f}'.format(p))
plt.ylabel('Actual label')
plt.xlabel('Predicted label')
print('Legitimate Transactions Detected (True Negatives): ', cm[0][0])
print('Legitimate Transactions Incorrectly Detected (False Positives): ', cm[0][1])
print('Fraudulent Transactions Missed (False Negatives): ', cm[1][0])
print('Fraudulent Transactions Detected (True Positives): ', cm[1][1])
print('Total Fraudulent Transactions: ', np.sum(cm[1]))
Explanation: 注意: 一般的に、検証曲線はトレーニング曲線よりも優れています。 これは主に、モデルを評価する際にドロップアウトレイヤーがアクティブでないということに起因します。
メトリクスを評価する
混同行列を使用して、実際のラベルと予測されたラベルを要約できます。ここで、X 軸は予測されたラベルであり、Y 軸は実際のラベルです。
End of explanation
baseline_results = model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(model.metrics_names, baseline_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_baseline)
Explanation: テストデータセットでモデルを評価し、上記で作成した行列の結果を表示します。
End of explanation
def plot_roc(name, labels, predictions, **kwargs):
fp, tp, _ = sklearn.metrics.roc_curve(labels, predictions)
plt.plot(100*fp, 100*tp, label=name, linewidth=2, **kwargs)
plt.xlabel('False positives [%]')
plt.ylabel('True positives [%]')
plt.xlim([-0.5,20])
plt.ylim([80,100.5])
plt.grid(True)
ax = plt.gca()
ax.set_aspect('equal')
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right');
Explanation: モデルがすべてを完璧に予測した場合は、これは対角行列になり、主な対角線から外れた値が不正確な予測を示してゼロになります。 この場合、行列は偽陽性が比較的少ないことを示し、これは誤ってフラグが立てられた正当な取引が比較的少ないことを意味します。 しかし、偽陽性の数が増えればコストがかかる可能性はありますが、偽陰性の数はさらに少なくした方が良いでしょう。偽陽性は顧客にカード利用履歴の確認を求めるメールを送信する可能性があるのに対し、偽陰性は不正な取引を成立させてしまう可能性があるため、このトレードオフはむしろ望ましいといえます。
ROC をプロットする
次に、ROC をプロットします。このプロットは、出力しきい値を調整するだけでモデルが到達できるパフォーマンス範囲が一目で分かるので有用です。
End of explanation
def plot_prc(name, labels, predictions, **kwargs):
precision, recall, _ = sklearn.metrics.precision_recall_curve(labels, predictions)
plt.plot(precision, recall, label=name, linewidth=2, **kwargs)
plt.xlabel('Recall')
plt.ylabel('Precision')
plt.grid(True)
ax = plt.gca()
ax.set_aspect('equal')
plot_prc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_prc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right');
Explanation: AUPRC をプロットする
AUPRC をプロットします。補間された適合率-再現率曲線の下の領域は、分類しきい値のさまざまな値に対して(再現率、適合率)点をプロットすることにより取得できます。計算方法によっては、PR AUC はモデルの平均適合率と同等になる場合があります。
End of explanation
# Scaling by total/2 helps keep the loss to a similar magnitude.
# The sum of the weights of all examples stays the same.
weight_for_0 = (1 / neg) * (total / 2.0)
weight_for_1 = (1 / pos) * (total / 2.0)
class_weight = {0: weight_for_0, 1: weight_for_1}
print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
Explanation: 適合率は比較的高いように見えますが、再現率と ROC 曲線の下の曲線下面積 (AUC) は、期待するほど高いものではありません。適合率と再現率の両方を最大化しようとすると、分類器はしばしば課題に直面します。不均衡データセットを扱う場合は特にそうです。大切な問題のコンテキストでは異なるタイプのエラーにかかるコストを考慮することが重要です。 この例では、偽陰性(不正な取引が見逃されている)は金銭的コストを伴う可能性がある一方で、偽陽性(取引が不正であると誤ってフラグが立てられている)はユーザーの幸福度を低下させる可能性があります。
クラスの重み
クラスの重みを計算する
最終目的は不正な取引を特定することですが、処理する陽性サンプルがそれほど多くないので、利用可能な数少ない例の分類器に大きな重み付けをします。 これを行うには、パラメータを介して各クラスの重みを Keras に渡します。 これにより、モデルは十分に表現されていないクラスの例にも「より注意を払う」ようになります。
End of explanation
weighted_model = make_model()
weighted_model.load_weights(initial_weights)
weighted_history = weighted_model.fit(
train_features,
train_labels,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
callbacks=[early_stopping],
validation_data=(val_features, val_labels),
# The class weights go here
class_weight=class_weight)
Explanation: クラスの重みでモデルをトレーニングする
次に、クラスの重みでモデルを再トレーニングして評価し、それが予測にどのように影響するかを確認します。
注意: class_weights を使用すると、損失の範囲が変更されます。オプティマイザにもよりますが、これはトレーニングの安定性に影響を与える可能性があります。tf.keras.optimizers.SGD のように、ステップサイズが勾配の大きさに依存するオプティマイザは失敗する可能性があります。ここで使用されているオプティマイザ tf.keras.optimizers.Adam は、スケーリングの変更による影響を受けません。また、重み付けのため、総損失は 2 つのモデル間で比較できないことに注意してください。
End of explanation
plot_metrics(weighted_history)
Explanation: トレーニング履歴を確認する
End of explanation
train_predictions_weighted = weighted_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_weighted = weighted_model.predict(test_features, batch_size=BATCH_SIZE)
weighted_results = weighted_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(weighted_model.metrics_names, weighted_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_weighted)
Explanation: メトリクスを評価する
End of explanation
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plt.legend(loc='lower right');
Explanation: ここでは、クラスの重みを使用すると偽陽性が多くなるため、正解率と適合率が低くなりますが、逆にモデルがより多くの真陽性を検出したため、再現率と AUC が高くなっていることが分かります。このモデルは正解率は低いものの、再現率が高くなるので、より多くの不正取引を特定します。もちろん、両タイプのエラーにはコストがかかります。(あまりにも多くの正当な取引を不正取引としてフラグを立ててユーザーに迷惑をかけたくはないはずです。)アプリケーションのこういった異なるタイプのエラー間のトレードオフは、慎重に検討してください
ROC をプロットする
End of explanation
plot_prc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_prc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_prc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_prc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plt.legend(loc='lower right');
Explanation: AUPRC をプロットする
End of explanation
pos_features = train_features[bool_train_labels]
neg_features = train_features[~bool_train_labels]
pos_labels = train_labels[bool_train_labels]
neg_labels = train_labels[~bool_train_labels]
Explanation: オーバーサンプリング
マイノリティクラスをオーバーサンプリングする
関連したアプローチとして、マイノリティクラスをオーバーサンプリングしてデータセットを再サンプルするという方法があります。
End of explanation
ids = np.arange(len(pos_features))
choices = np.random.choice(ids, len(neg_features))
res_pos_features = pos_features[choices]
res_pos_labels = pos_labels[choices]
res_pos_features.shape
resampled_features = np.concatenate([res_pos_features, neg_features], axis=0)
resampled_labels = np.concatenate([res_pos_labels, neg_labels], axis=0)
order = np.arange(len(resampled_labels))
np.random.shuffle(order)
resampled_features = resampled_features[order]
resampled_labels = resampled_labels[order]
resampled_features.shape
Explanation: NumPy を使用する
陽性の例から適切な数のランダムインデックスを選択して、手動でデータセットのバランスをとることができます。
End of explanation
BUFFER_SIZE = 100000
def make_ds(features, labels):
ds = tf.data.Dataset.from_tensor_slices((features, labels))#.cache()
ds = ds.shuffle(BUFFER_SIZE).repeat()
return ds
pos_ds = make_ds(pos_features, pos_labels)
neg_ds = make_ds(neg_features, neg_labels)
Explanation: tf.dataを使用する
もしtf.dataを使用している場合、バランスの取れた例を作成する最も簡単な方法は、positiveとnegativeのデータセットから開始し、それらをマージすることです。その他の例については、tf.data ガイドをご覧ください。
End of explanation
for features, label in pos_ds.take(1):
print("Features:\n", features.numpy())
print()
print("Label: ", label.numpy())
Explanation: 各データセットは(feature, label)のペアを提供します。
End of explanation
resampled_ds = tf.data.Dataset.sample_from_datasets([pos_ds, neg_ds], weights=[0.5, 0.5])
resampled_ds = resampled_ds.batch(BATCH_SIZE).prefetch(2)
for features, label in resampled_ds.take(1):
print(label.numpy().mean())
Explanation: tf.data.Dataset.sample_from_datasets を使用し、この 2 つをマージします。
End of explanation
resampled_steps_per_epoch = np.ceil(2.0*neg/BATCH_SIZE)
resampled_steps_per_epoch
Explanation: このデータセットを使用するには、エポックごとのステップ数が必要です。
この場合の「エポック」の定義はあまり明確ではありません。それぞれの陰性の例を 1 度見るのに必要なバッチ数だとしましょう。
End of explanation
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
val_ds = tf.data.Dataset.from_tensor_slices((val_features, val_labels)).cache()
val_ds = val_ds.batch(BATCH_SIZE).prefetch(2)
resampled_history = resampled_model.fit(
resampled_ds,
epochs=EPOCHS,
steps_per_epoch=resampled_steps_per_epoch,
callbacks=[early_stopping],
validation_data=val_ds)
Explanation: オーバーサンプリングデータをトレーニングする
ここで、クラスの重みを使用する代わりに、再サンプルされたデータセットを使用してモデルをトレーニングし、それらの手法がどう比較されるかを確認してみましょう。
注意: 陽性の例を複製することでデータのバランスをとっているため、データセットの総サイズは大きくなり、各エポックではより多くのトレーニングステップが実行されます。
End of explanation
plot_metrics(resampled_history)
Explanation: トレーニングプロセスが勾配の更新ごとにデータセット全体を考慮する場合は、このオーバーサンプリングは基本的にクラスの重み付けと同じになります。
しかし、ここで行ったようにバッチ単位でモデルをトレーニングする場合、オーバーサンプリングされたデータはより滑らかな勾配信号を提供します。それぞれの陽性の例を大きな重みを持つ 1 つのバッチで表示する代わりに、毎回小さな重みを持つ多くの異なるバッチで表示します。
このような滑らかな勾配信号は、モデルのトレーニングを容易にします。
トレーニング履歴を確認する
トレーニングデータは検証データやテストデータとは全く異なる分散を持つため、ここでのメトリクスの分散は異なることに注意してください。
End of explanation
resampled_model = make_model()
resampled_model.load_weights(initial_weights)
# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])
resampled_history = resampled_model.fit(
resampled_ds,
# These are not real epochs
steps_per_epoch=20,
epochs=10*EPOCHS,
callbacks=[early_stopping],
validation_data=(val_ds))
Explanation: 再トレーニングする
バランスの取れたデータの方がトレーニングしやすいため、上記のトレーニング方法ではすぐに過学習してしまう可能性があります。
したがって、エポックを分割して、tf.keras.callbacks.EarlyStopping がトレーニングを停止するタイミングをより細かく制御できるようにします。
End of explanation
plot_metrics(resampled_history)
Explanation: トレーニング履歴を再確認する
End of explanation
train_predictions_resampled = resampled_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_resampled = resampled_model.predict(test_features, batch_size=BATCH_SIZE)
resampled_results = resampled_model.evaluate(test_features, test_labels,
batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(resampled_model.metrics_names, resampled_results):
print(name, ': ', value)
print()
plot_cm(test_labels, test_predictions_resampled)
Explanation: メトリクスを評価する
End of explanation
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plot_roc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_roc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right');
Explanation: ROC をプロットする
End of explanation
plot_prc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_prc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plot_prc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1])
plot_prc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--')
plot_prc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2])
plot_prc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--')
plt.legend(loc='lower right');
Explanation: AUPRC をプロットする
End of explanation
<END_TASK> |
15,560 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Multiplying Numpy Arrays
Step2: LAB CHALLENGE | <ASSISTANT_TASK:>
Python Code:
import numpy as np
one_dimensional = np.array([1,1,1,2,3,3,3,3,3])
one_dimensional
one_dimensional.shape # not yet rows & columns
one_dimensional.reshape((9,-1)) # let numpy figure out how many columns
one_dimensional # still the same
one_dimensional.ndim
two_dimensional = one_dimensional.reshape(1,9) # recycle same name
two_dimensional.shape # is now 2D even if just the one row
two_dimensional.ndim
class M:
Symbolic representation of multiply, add
def __init__(self, s):
self.s = str(s)
def __mul__(self, other):
return M(self.s + " * " + other.s) # string
def __add__(self, other):
return M(self.s + " + " + other.s)
def __repr__(self):
return self.s
#Demo
one = M(1)
two = M(2)
print(one * two)
A,B,C = map(M, ['A','B','C']) # create three M type objects
m_array = np.array([A,B,C]) # put them in a numpy array
m_array.dtype # infers type (Object)
m_array = m_array.reshape((-1, len(m_array))) # make this 2 dimensional
m_array.shape # transpose works for > 1 dimension
m_array.T # stand it up (3,1) vs (1,3) shape
m_array.dot(m_array.T) # row dot column i.e. self * self.T
m_array.T[1,0] = M('Z') # transpose is not a copy
m_array # original has changes
m_array * m_array # dot versus element-wise
Explanation: Multiplying Numpy Arrays: dot vs __mul__
Data Science has everything to do with linear algebra.
When we want to do a weighted sum, we can put the weights in a row vector, and what they multiply in a column vector.
Assigning weights, usually iteratively, in response to back propagation, is at the heart of machine learning, from logistic regression to neural networks.
Lets go over the basics of creating row and column vectors, such that dot products become possible.
You will find np.dot(A, B) works the same as A.dot(B) when it comes to numpy arrays.
End of explanation
from pandas import Series
A = Series(np.arange(10))
Explanation: LAB CHALLENGE:
Create two arrays of compatiable dimensions and form their dot product.
numpy.random.randint is a good source of random numbers (for data).
End of explanation
<END_TASK> |
15,561 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Meshgrid & mgrid
x - jest najszybciej zmieniajacym się wskażnikiem (row-major) więc indeskowanie trzeba "odwrócic" względem kolejności argumentów w funkcji f
Step1: Porównanie z próbkowaniem "ręcznym"
Step2: Dobrze
Step3: Z jest row major więc można też napisać
Step4: mgrid
mgrid zachowuje się odwrotnie
X,Y = np.meshgrid(np.arange(0,nx),np.arange(0,ny))
Yn, Xn = np.mgrid[0
Step5: Zresztą sprawdzmy | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
nx = 11
ny = 5
x1,y1 = 1,2
X,Y = np.meshgrid(np.linspace(0,x1,nx),np.linspace(0,y1,ny))
X.shape
f = lambda X_,Y_:np.sin(X_**2+Y_**2)
Z = f(X,Y)
plt.contourf(X,Y,Z)
# To samo co:
# plt.contourf(X.T,Y.T,Z.T)
plt.imshow(Z,interpolation='nearest',origin='lower')
X
Y
Explanation: Meshgrid & mgrid
x - jest najszybciej zmieniajacym się wskażnikiem (row-major) więc indeskowanie trzeba "odwrócic" względem kolejności argumentów w funkcji f:
$$x,y \to j,i$$
End of explanation
i,j = 2,3
print ("dla x i y", X[i,j],Y[i,j],"jest", Z[i,j],f(X[i,j],Y[i,j]),\
"powinno byc rowne", f(x1/float(nx-1)*i,y1/float(ny-1)*j) )
Explanation: Porównanie z próbkowaniem "ręcznym":
End of explanation
i,j = 2,3
print ("dla x i y" ,X[j,i],Y[j,i],"jest", Z[j,i],f(X[j,i],Y[j,i]),\
"powinno byc rowne", f(x1/float(nx-1)*i,y1/float(ny-1)*j))
Explanation: Dobrze:
End of explanation
print(Z[j,i], Z.flatten()[j*nx+i])
Explanation: Z is row-major, so we can also write:
End of explanation
Yn, Xn = np.mgrid[0:ny,0:nx]
Xn.shape
Xn
Yn
Xn/(float(nx-1)*x1)
X1,Y1 = Xn*(x1/float(nx-1)),Yn*(y1/float(ny-1))
np.allclose(X, X1),np.allclose(Y, Y1),np.allclose(Y, Y1)
Explanation: mgrid
mgrid behaves the other way around:
X,Y = np.meshgrid(np.arange(0,nx),np.arange(0,ny))
Yn, Xn = np.mgrid[0:ny,0:nx]
End of explanation
Z.strides
np.meshgrid(np.arange(nx),np.arange(ny))
list(reversed(np.mgrid[0:ny,0:nx]))
np.meshgrid(np.arange(ny),np.arange(nx),indexing='ij')
np.mgrid[0:ny,0:nx]
Explanation: Let's verify this anyway:
End of explanation
<END_TASK> |
15,562 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Step1: PairGrid
PairGrid is a subplot grid for plotting pairwise relationships in a dataset.
Step2: pairplot
pairplot is a simpler version of PairGrid (you'll use it quite often)
Step3: Facet Grid
FacetGrid is the general way to create grids of plots based off of a feature
Step4: JointGrid
JointGrid is the general version for jointplot() type grids, for a quick example | <ASSISTANT_TASK:>
Python Code:
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
iris = sns.load_dataset('iris')
iris.head()
Explanation: <a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>
Grids
Grids are general types of plots that allow you to map plot types to rows and columns of a grid; this helps you create similar plots separated by features.
End of explanation
# Just the Grid
sns.PairGrid(iris)
# Then you map to the grid
g = sns.PairGrid(iris)
g.map(plt.scatter)
# Map to upper,lower, and diagonal
g = sns.PairGrid(iris)
g.map_diag(plt.hist)
g.map_upper(plt.scatter)
g.map_lower(sns.kdeplot)
Explanation: PairGrid
PairGrid is a subplot grid for plotting pairwise relationships in a dataset.
End of explanation
sns.pairplot(iris)
sns.pairplot(iris,hue='species',palette='rainbow')
Explanation: pairplot
pairplot is a simpler version of PairGrid (you'll use it quite often)
End of explanation
tips = sns.load_dataset('tips')
tips.head()
# Just the Grid
g = sns.FacetGrid(tips, col="time", row="smoker")
g = sns.FacetGrid(tips, col="time", row="smoker")
g = g.map(plt.hist, "total_bill")
g = sns.FacetGrid(tips, col="time", row="smoker",hue='sex')
# Notice how the arguments come after the plt.scatter call
g = g.map(plt.scatter, "total_bill", "tip").add_legend()
Explanation: Facet Grid
FacetGrid is the general way to create grids of plots based off of a feature:
End of explanation
g = sns.JointGrid(x="total_bill", y="tip", data=tips)
g = sns.JointGrid(x="total_bill", y="tip", data=tips)
g = g.plot(sns.regplot, sns.distplot)
Explanation: JointGrid
JointGrid is the general version for jointplot() type grids, for a quick example:
End of explanation
<END_TASK> |
15,563 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
My installation instructions
Step1: Import Policy, RL agent, ...
Step3: Define a Callback Function
Step4: Create and wrap the environment
Step5: Define and train the PPO agent
Step8: Plotting helpers
Stable Baselines has some built-in plotting helper, that you can find in stable_baselines.results_plotter. However, to show how to do it yourself, we are going to use custom plotting functions.
Step9: Make a GIF of a Trained Agent
C.f. https
Step10: Evaluate the trained agent | <ASSISTANT_TASK:>
Python Code:
import stable_baselines
stable_baselines.__version__
Explanation: My installation instructions: https://gitlab.com/-/snippets/2057703
Source: https://colab.research.google.com/github/Stable-Baselines-Team/rl-colab-notebooks/blob/master/monitor_training.ipynb
See also: https://stable-baselines.readthedocs.io/en/master/guide/examples.html#try-it-online-with-colab-notebooks
Stable Baselines, a Fork of OpenAI Baselines - Monitor Training and Plotting
Github Repo: https://github.com/hill-a/stable-baselines
Medium article: https://medium.com/@araffin/stable-baselines-a-fork-of-openai-baselines-df87c4b2fc82
RL Baselines Zoo is a collection of pre-trained Reinforcement Learning agents using Stable-Baselines.
It also provides basic scripts for training, evaluating agents, tuning hyperparameters and recording videos.
Documentation is available online: https://stable-baselines.readthedocs.io/
Install Dependencies and Stable Baselines Using Pip
List of full dependencies can be found in the README.
sudo apt-get update && sudo apt-get install cmake libopenmpi-dev zlib1g-dev
pip install stable-baselines[mpi]
C.f. https://stackoverflow.com/a/61318224
Google Colab
Uncomment the following cells if you use Google Colab
Import Baselines
End of explanation
import os
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import gym
from stable_baselines.common.policies import MlpPolicy
from stable_baselines.common.vec_env import DummyVecEnv
from stable_baselines import PPO2
from stable_baselines.bench import Monitor
from stable_baselines.common.callbacks import BaseCallback
from stable_baselines.results_plotter import load_results, ts2xy
from stable_baselines import results_plotter
sns.set_context("talk")
Explanation: Import Policy, RL agent, ...
End of explanation
class SaveOnBestTrainingRewardCallback(BaseCallback):
    """
    Callback for saving a model (the check is done every ``check_freq`` steps)
    based on the training reward (in practice, we recommend using ``EvalCallback``).

    :param check_freq: (int)
    :param log_dir: (str) Path to the folder where the model will be saved.
        It must contain the file created by the ``Monitor`` wrapper.
    :param verbose: (int)
    """
def __init__(self, check_freq: int, log_dir: str, verbose=1):
super(SaveOnBestTrainingRewardCallback, self).__init__(verbose)
self.check_freq = check_freq
self.log_dir = log_dir
self.save_path = os.path.join(log_dir, 'best_model')
self.best_mean_reward = -np.inf
def _init_callback(self) -> None:
# Create folder if needed
if self.save_path is not None:
os.makedirs(self.save_path, exist_ok=True)
def _on_step(self) -> bool:
if self.n_calls % self.check_freq == 0:
# Retrieve training reward
x, y = ts2xy(load_results(self.log_dir), 'timesteps')
if len(x) > 0:
# Mean training reward over the last 100 episodes
mean_reward = np.mean(y[-100:])
if self.verbose > 0:
print("Num timesteps: {}".format(self.num_timesteps))
print("Best mean reward: {:.2f} - Last mean reward per episode: {:.2f}".format(self.best_mean_reward, mean_reward))
# New best model, you could save the agent here
if mean_reward > self.best_mean_reward:
self.best_mean_reward = mean_reward
# Example for saving best model
if self.verbose > 0:
print("Saving new best model to {}".format(self.save_path))
self.model.save(self.save_path)
return True
Explanation: Define a Callback Function
End of explanation
# Create log dir
log_dir = "/tmp/gym/"
os.makedirs(log_dir, exist_ok=True)
# Create and wrap the environment
env = gym.make('CartPole-v1')
# Logs will be saved in log_dir/monitor.csv
env = Monitor(env, log_dir)
env = DummyVecEnv([lambda: env]) # PPO2 requires a vectorized environment to run
Explanation: Create and wrap the environment
End of explanation
# Create the callback: check every 1000 steps
callback = SaveOnBestTrainingRewardCallback(check_freq=1000, log_dir=log_dir)
model = PPO2(MlpPolicy, env, verbose=1)
model.learn(total_timesteps=100000, callback=callback)
Explanation: Define and train the PPO agent
End of explanation
# Helper from the library
results_plotter.plot_results([log_dir], 1e5, results_plotter.X_TIMESTEPS, "PPO Cartpole")
def moving_average(values, window):
    """
    Smooth values by doing a moving average
    :param values: (numpy array)
    :param window: (int)
    :return: (numpy array)
    """
weights = np.repeat(1.0, window) / window
return np.convolve(values, weights, 'valid')
def plot_results(log_folder, title='Learning Curve'):
    """
    plot the results
    :param log_folder: (str) the save location of the results to plot
    :param title: (str) the title of the task to plot
    """
x, y = ts2xy(load_results(log_folder), 'timesteps')
y = moving_average(y, window=50)
# Truncate x
x = x[len(x) - len(y):]
fig = plt.figure(title, figsize=(16,6))
plt.plot(x, y)
plt.xlabel('Number of Timesteps')
plt.ylabel('Rewards')
plt.title(title + " Smoothed")
plt.show()
plot_results(log_dir)
Explanation: Plotting helpers
Stable Baselines has some built-in plotting helper, that you can find in stable_baselines.results_plotter. However, to show how to do it yourself, we are going to use custom plotting functions.
End of explanation
import imageio
images = []
obs = model.env.reset()
img = model.env.render(mode='rgb_array')
for i in range(350):
images.append(img)
action, _ = model.predict(obs)
obs, _, _ ,_ = model.env.step(action)
img = model.env.render(mode='rgb_array')
imageio.mimsave('ppo_cartpole.gif', [np.array(img) for i, img in enumerate(images) if i%2 == 0], fps=29)
import IPython
from IPython.display import Image
Image(open('ppo_cartpole.gif','rb').read()) # https://stackoverflow.com/questions/61110188/how-to-display-a-gif-in-jupyter-notebook-using-google-colab
Explanation: Make a GIF of a Trained Agent
C.f. https://stable-baselines.readthedocs.io/en/master/guide/examples.html#bonus-make-a-gif-of-a-trained-agent
End of explanation
env._max_episode_steps = 1000
reward_list = []
NUM_EPISODES = 100
for episode_index in range(NUM_EPISODES):
reward_sum = 0
obs = env.reset()
done = False
while not done:
action, _states = model.predict(obs)
obs, reward, done, info = env.step(action)
reward_sum += reward
#env.render() # Cannot render on Google Colab
reward_list.append(reward_sum)
print("Mean reward:", sum(reward_list) / NUM_EPISODES)
env.close()
Explanation: Evaluate the trained agent
End of explanation
<END_TASK> |
15,564 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Using Indic NLP Library
https
Step2: Transliteration
Step3: Using Silpa
https
Step4: Using BengaliStemmer
https
Step5: Using BanglaStemmer
https
Step6: Using Avro
https | <ASSISTANT_TASK:>
Python Code:
from indicnlp.morph import unsupervised_morph
morph = unsupervised_morph.UnsupervisedMorphAnalyzer("bn")
text = u\
করা করেছিলাম করেছি করতে করেছিল হয়েছে হয়েছিল হয় হওয়ার হবে আবিষ্কৃত আবিষ্কার অভিষিক্ত অভিষেক অভিষেকের আমি আমার আমাদের তুমি তোমার তোমাদের বসা বসেছিল বসে বসি বসেছিলাম বস বসার\
word_token = text.split(" ")
word_morph = []
for i in word_token:
word_morph.append(morph.morph_analyze(i))
import pandas as pd
indic = pd.DataFrame({"1_Word": word_token, "2_Morpheme": word_morph})
indic
Explanation: Using Indic NLP Library
https://github.com/anoopkunchukuttan/indic_nlp_library
Morphological Analysis
End of explanation
from indicnlp.transliterate.unicode_transliterate import ItransTransliterator
bangla_text = "ami apni tumi tomar tomader amar apnar apnader akash"
text_trans = ItransTransliterator.from_itrans(bangla_text, "bn")
print repr(text_trans).decode("unicode_escape")
Explanation: Transliteration
End of explanation
from transliteration import getInstance
trans = getInstance()
text_trans = trans.transliterate(bangla_text, "bn_IN")
print repr(text_trans).decode("unicode_escape")
Explanation: Using Silpa
https://github.com/libindic/Silpa-Flask
Transliteration
End of explanation
import rbs
word_stem1 = []
for i in word_token:
word_stem1.append(rbs.stemWord(i, True))
bs1 = pd.DataFrame({"1_Word": word_token, "2_Stem": word_stem1})
bs1
Explanation: Using BengaliStemmer
https://github.com/gdebasis/BengaliStemmer
Stemming
End of explanation
import jnius_config
jnius_config.set_classpath(".", "path to class")
from jnius import autoclass
cls = autoclass("RuleFileParser")
stemmer = cls()
word_stem2 = []
for i in word_token:
word_stem2.append(stemmer.stemOfWord(i))
bs2 = pd.DataFrame({"1_Word": word_token, "2_Stem": word_stem2})
bs2
Explanation: Using BanglaStemmer
https://github.com/rafi-kamal/Bangla-Stemmer
Stemming
End of explanation
from pyavrophonetic import avro
trans_text = avro.parse(bangla_text)
print repr(trans_text).decode("unicode_escape")
Explanation: Using Avro
https://github.com/kaustavdm/pyAvroPhonetic
Transliteration
End of explanation
<END_TASK> |
15,565 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
<img src='static/uff-bw.svg' width='20%' align='left'/>
Multi-Objective Optimization with Estimation of Distribution Algorithms
Luis Martí/IC/UFF
http
Step1: How do we handle multiple -and conflicting- objectives?
It's "easy"
Step2: Planting a constant seed to always have the same results (and avoid surprises in class). -you should not do this in a real-world case!
Step3: To start, lets have a visual example of the Pareto dominance relationship in action.
In this notebook we will deal with two-objective problems in order to simplify visualization.
Therefore, we can create
Step5: An illustrative MOP
Step6: Preparing a DEAP toolbox with Dent.
Step7: Defining attributes, individuals and population.
Step8: Creating an example population distributed as a mesh.
Step9: Visualizing Dent
Step10: We also need a_given_individual.
Step11: Implementing the Pareto dominance relation between two individuals.
Step12: Note
Step13: Lets compute the set of individuals that are dominated by a_given_individual, the ones that dominate it (its dominators) and the remaining ones.
Step14: Having a_given_individual (blue dot) we can now plot those that are dominated by it (in green), those that dominate it (in red) and those that are uncomparable.
Step15: Obtaining the nondominated front.
Step16: So, is this the end?
Ok, now we know how to solve MOPs by sampling the search space.
MOPs, in the general case are NP-hard problems.
Brute force is never the solution in just a-little-more-complex cases.
An example, solving the TSP problem using brute force
Step17: Describing attributes, individuals and population and defining the selection, mating and mutation operators.
Step18: Let's also use the toolbox to store other configuration parameters of the algorithm. This will show itself usefull when performing massive experiments.
Step19: A compact NSGA-II implementation
Storing all the required information in the toolbox and using DEAP's algorithms.eaMuPlusLambda function allows us to create a very compact -albeit not a 100% exact copy of the original- implementation of NSGA-II.
Step20: Running the algorithm
We are now ready to run our NSGA-II.
Step21: We can now get the Pareto fronts in the results (res).
Step22: Resulting Pareto fronts
Step23: It is better to make an animated plot of the evolution as it takes place.
Animating the evolutionary process
We create a stats to store the individuals not only their objective function values.
Step24: Re-run the algorithm to get the data necessary for plotting.
Step25: The previous animation makes the notebook too big for online viewing. To circumvent this, it is better to save the animation as video and (manually) upload it to YouTube.
Step26: Here it is clearly visible how the algorithm "jumps" from one local-optimum to a better one as evolution takes place.
MOP benchmark problem toolkits
Each problem instance is meant to test the algorithms with regard with a given feature
Step27: DTLZ7 has many disconnected Pareto-optimal fronts.
<div align='center'><img src='http
Step28: How does our NSGA-II behaves when faced with different benchmark problems?
Step29: Running NSGA-II solving all problems. Now it takes longer.
Step30: Creating this animation takes more programming effort.
Step31: Saving the animation as video and uploading it to YouTube.
Step32: It is interesting how the algorithm deals with each problem
Step33: We add a experiment_name to toolbox that we will fill up later on.
Step34: We can now replicate this toolbox instance and then modify the mutation probabilities.
Step35: Now toolboxes is a list of copies of the same toolbox. One for each experiment configuration (mutation probability).
...but we still have to set the mutation probabilities in the elements of toolboxes.
Step36: Experiment design
As we are dealing with stochastic methods their results should be reported relying on an statistical analysis.
A given experiment (a toolbox instance in our case) should be repeated a sufficient amount of times.
In theory, the more runs the better, but how much in enough? In practice, we could say that about 30 runs is enough.
The non-dominated fronts produced by each experiment run should be compared to each other.
We have seen in class that a number of performance indicators, like the hypervolume, additive and multiplicative epsilon indicators, among others, have been proposed for that task.
We can use statistical visualizations like box plots or violin plots to make a visual assessment of the indicator values produced in each run.
We must apply a set of statistical hypothesis tests in order to reach an statistically valid judgment of the results of an algorithms.
Note
Step37: Running experiments in parallel
As we are now solving more demanding problems it would be nice to make our algorithms to run in parallel and profit from modern multi-core CPUs.
In DEAP it is very simple to parallelize an algorithm (if it has been properly programmed) by providing a parallel map() function throu the toolbox.
Local parallelization can be achieved using Python's multiprocessing or concurrent.futures modules.
Cluster parallelization can be achived using IPython Parallel or SCOOP, that seems to be recommended by the DEAP guys as it was part of it.
Note
Step38: A side-effect of using process-based parallelization
Process-based parallelization based on multiprocessing requires that the parameters passed to map() be pickleable.
The direct consequence is that lambda functions can not be directly used.
This is will certainly ruin the party to all lambda fans out there! -me included.
Hence we need to write some wrapper functions instead.
But, that wrapper function can take care of filtering out dominated individuals in the results.
Step39: All set! Run the experiments...
Step40: As you can see, even this relatively small experiment took lots of time!
As running the experiments takes so long, lets save the results so we can use them whenever we want.
Step41: In case you need it, this file is included in the github repository.
To load the results we would just have to
Step42: results is a dictionary, but a pandas DataFrame is a more handy container for the results.
Step43: A first glace at the results
Step44: The local Pareto-optimal fronts are clearly visible!
Calculating performance indicators
As already mentioned, we need to evaluate the quality of the solutions produced in every execution of the algorithm.
We will use the hypervolumne indicator for that.
We already filtered each population a leave only the non-dominated individuals.
Calculating the reference point
Step45: We can now compute the hypervolume of the Pareto-optimal fronts yielded by each algorithm run.
Step46: How can we interpret the indicators?
Option A
Step47: Option B
Step48: Option C
Step49: The Kruskal-Wallis H-test tests the null hypothesis that the population median of all of the groups are equal.
It is a non-parametric version of ANOVA.
The test works on 2 or more independent samples, which may have different sizes.
Note that rejecting the null hypothesis does not indicate which of the groups differs.
Post-hoc comparisons between groups are required to determine which groups are different.
Step50: We now can assert that the results are not the same but which ones are different or similar to the others the others?
In case that the null hypothesis of the Kruskal-Wallis is rejected the Conover–Inman procedure (Conover, 1999, pp. 288-290) can be applied in a pairwise manner in order to determine if the results of one algorithm were significantly better than those of the other.
Conover, W. J. (1999). Practical Nonparametric Statistics. John Wiley & Sons, New York, 3rd edition.
Note
Step51: We now know in what cases the difference is sufficient as to say that one result is better than the other.
Another alternative is the Friedman test.
Its null hypothesis that repeated measurements of the same individuals have the same distribution.
It is often used to test for consistency among measurements obtained in different ways.
For example, if two measurement techniques are used on the same set of individuals, the Friedman test can be used to determine if the two measurement techniques are consistent.
Step52: Mann–Whitney U test (also called the Mann–Whitney–Wilcoxon (MWW), Wilcoxon rank-sum test (WRS), or Wilcoxon–Mann–Whitney test) is a nonparametric test of the null hypothesis that two populations are the same against an alternative hypothesis, especially that a particular population tends to have larger values than the other.
It has greater efficiency than the $t$-test on non-normal distributions, such as a mixture of normal distributions, and it is nearly as efficient as the $t$-test on normal distributions.
Step53: The familywise error rate (FWER) is the probability of making one or more false discoveries, or type I errors, among all the hypotheses when performing multiple hypotheses tests.
Example
Step54: Let's apply the corrected alpha to raw_p_values. If we have a cell with a True value that means that those two results are the same. | <ASSISTANT_TASK:>
Python Code:
import time, array, random, copy, math
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
Explanation: <img src='static/uff-bw.svg' width='20%' align='left'/>
Multi-Objective Optimization with Estimation of Distribution Algorithms
Luis Martí/IC/UFF
http://lmarti.com; lmarti@ic.uff.br
Why do we need artificial intelligence?
What does intelligence imply:
* infer, deduce, learn, create and adapt;
* to be able to deal with NP-hard problems $\rightarrow$ search and optimization problems;
* handle uncertainty, contradiction and noise.
<br/><p/><br/>
<div align='center'>
AI is how computer science attempts to answer the question 'What are we?'
</div>
In this talk
Multi-objective optimization problems (MOPs).
Multi-objective evolutionary algorithms (MOEAs/EMOAs).
Many-objective problems and the need for better MOEAs.
Multi-objective estimation of distribution algorithms.
Experiment design and comparing results.
Salient issues and research directions.
About the slides
<img src='http://jupyter.org/assets/nav_logo.svg' width='38%'>
You may notice that I will be running some code inside the slides.
That is because the slides are programmed as a Jupyter (IPython) notebook.
If you are viewing this as a "plain" notebook, be warned that the slide version is the best way of viewing it.
You can get them from https://github.com/lmarti/scalable-moedas-talk.
You are free to try them and experiment on your own.
End of explanation
from deap import algorithms, base, benchmarks, tools, creator
Explanation: How do we handle multiple -and conflicting- objectives?
It's "easy": we do it all the time.
<br/>
<div align='center'><img src='http://imgs.xkcd.com/comics/fuck_grapefruit.png' width='65%' align='center'/>
taken from http://xkcd.com/388/</div>
Multi-objective optimization
Most -if not all- optimization problems involve more than one objective function to be optimized simultaneously.
Sometimes those other objectives are converted to constraints or fixed to default values, but they are still there.
Multi-objective optimization has been applied in many fields of science where optimal decisions need to be taken in the presence of trade-offs between two or more conflicting objectives.
A Multi-objective Optimization Problem (MOP)
$$
\begin{array}{rl}
\mathrm{minimize} & \mathbf{F}(\mathbf{x})=\langle f_1(\mathbf{x}),\ldots,f_M(\mathbf{x})\rangle\,,\\
\mathrm{subject}\ \mathrm{to} & c_1(\mathbf{x}),\ldots,c_C(\mathbf{x})\le 0\,,\\
& d_1(\mathbf{x}),\ldots,d_D(\mathbf{x})= 0\,,\\
& \text{with}\ \mathbf{x}\in\mathcal{D}\,,
\end{array}
$$
$\mathcal{D}$ is known as the decision set or search set.
functions $f_1(\mathbf{x}),\ldots,f_M(\mathbf{x})$ are the objective functions.
Image set, $\mathcal{O}$, result of the projection of $\mathcal{D}$ via $f_1(\mathbf{x}),\ldots,f_M(\mathbf{x})$ is called objective set ($\mathbf{F}:\mathcal{D}\rightarrow\mathcal{O}$).
$c_1(\mathbf{x}),\ldots,c_C(\mathbf{x})\le 0$ and $d_1(\mathbf{x}),\ldots,d_D(\mathbf{x})= 0$ express the constraints imposed on the values of $\mathbf{x}$.
Note 1: In case you are -still- wondering, a maximization problem can be posed as the minimization one: $\min\ -\mathbf{F}(\mathbf{x})$.
Note 2: If $M=1$ the problem reduces to a single-objective optimization problem.
Example: A two variables and two objectives MOP
<div align='center'><img src='static/mop-2d.jpg' height='56%' align='center'/></div>
MOP (optimal) solutions
Usually, there is not a unique solution that minimizes all objective functions simultaneously, but, instead, a set of equally good trade-off solutions.
Optimality can be defined in terms of the Pareto dominance relation:
* having $\mathbf{x},\mathbf{y}\in\mathcal{D}$, $\mathbf{x}$ is said to dominate $\mathbf{y}$ (expressed as $\mathbf{x}\preccurlyeq\mathbf{y}$) iff $\forall f_j$, $f_j(\mathbf{x})\leq f_j(\mathbf{y})$ and $\exists f_i$ such that $f_i(\mathbf{x})< f_i(\mathbf{y})$.
* Having the set $\mathcal{A}$. $\mathcal{A}^\ast$, the non-dominated subset of $\mathcal{A}$, is defined as
$$
\mathcal{A}^\ast=\left\{ \mathbf{x}\in\mathcal{A} \left|\not\exists\mathbf{y}\in\mathcal{A}:\mathbf{y}\preccurlyeq\mathbf{x}\right.\right\}.
$$
The Pareto-optimal set, $\mathcal{D}^{\ast}$, is the solution of the problem. It is the subset of non-dominated elements of $\mathcal{D}$. It is also known as the efficient set.
It consists of solutions that cannot be improved in any of the objectives without degrading at least one of the other objectives.
Its image in objective set is called the Pareto-optimal front, $\mathcal{O}^\ast$.
Evolutionary algorithms generally yield a set of non-dominated solutions, $\mathcal{P}^\ast$, that approximates $\mathcal{D}^{\ast}$.
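A quick worked example (added for clarity): in a two-objective minimization, a solution with objective vector $(1, 3)$ dominates one with $(2, 3)$, since it is no worse in both objectives and strictly better in the first; however, $(1, 3)$ and $(3, 1)$ are mutually non-dominated, because each is strictly better in one objective and strictly worse in the other.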
Visualizing the Pareto dominance relation
We will be using DEAP, a python module for evolutionary computing.
End of explanation
random.seed(a=42)
Explanation: Planting a constant seed to always have the same results (and avoid surprises in class). -you should not do this in a real-world case!
End of explanation
creator.create("FitnessMin", base.Fitness, weights=(-1.0,-1.0))
creator.create("Individual", array.array, typecode='d',
fitness=creator.FitnessMin)
Explanation: To start, let's have a visual example of the Pareto dominance relationship in action.
In this notebook we will deal with two-objective problems in order to simplify visualization.
Therefore, we can create:
End of explanation
def dent(individual, lbda = 0.85):
    """Implements the test problem Dent.

    Num. variables = 2; bounds in [-1.5, 1.5]; num. objectives = 2.
    @author Cesar Revelo
    """
d = lbda * math.exp(-(individual[0] - individual[1]) ** 2)
f1 = 0.5 * (math.sqrt(1 + (individual[0] + individual[1]) ** 2) + \
math.sqrt(1 + (individual[0] - individual[1]) ** 2) + \
individual[0] - individual[1]) + d
f2 = 0.5 * (math.sqrt(1 + (individual[0] + individual[1]) ** 2) + \
math.sqrt(1 + (individual[0] - individual[1]) ** 2) - \
individual[0] + individual[1]) + d
return f1, f2
Explanation: An illustrative MOP: Dent
$$
\begin{array}{rl}
\text{minimize} & f_1(\mathbf{x}),f_2(\mathbf{x}) \\
\text{such that} & f_1(\mathbf{x}) = \frac{1}{2}\left( \sqrt{1 + (x_1 + x_2)^2} + \sqrt{1 + (x_1 - x_2)^2} + x_1 - x_2\right) + d,\\
& f_2(\mathbf{x}) = \frac{1}{2}\left( \sqrt{1 + (x_1 + x_2)^2} + \sqrt{1 + (x_1 - x_2)^2} - x_1 + x_2\right) + d,\\
\text{with}& d = \lambda e^{-\left(x_1-x_2\right)^2}\ (\text{generally }\lambda=0.85) \text{ and }\\
& \mathbf{x}\in \left[-1.5,1.5\right]^2.
\end{array}
$$
Implementing the Dent problem
End of explanation
toolbox = base.Toolbox()
BOUND_LOW, BOUND_UP = -1.5, 1.5
NDIM = 2
toolbox.register("evaluate", dent)
Explanation: Preparing a DEAP toolbox with Dent.
End of explanation
def uniform(low, up, size=None):
try:
return [random.uniform(a, b) for a, b in zip(low, up)]
except TypeError:
return [random.uniform(a, b) for a, b in zip([low] * size, [up] * size)]
toolbox.register("attr_float", uniform, BOUND_LOW, BOUND_UP, NDIM)
toolbox.register("individual", tools.initIterate,
creator.Individual, toolbox.attr_float)
toolbox.register("population", tools.initRepeat, list,
toolbox.individual)
Explanation: Defining attributes, individuals and population.
End of explanation
num_samples = 50
limits = [np.arange(BOUND_LOW, BOUND_UP, (BOUND_UP - BOUND_LOW)/num_samples)] * NDIM
sample_x = np.meshgrid(*limits)
flat = []
for i in range(len(sample_x)):
x_i = sample_x[i]
flat.append(x_i.reshape(num_samples**NDIM))
example_pop = toolbox.population(n=num_samples**NDIM)
for i, ind in enumerate(example_pop):
for j in range(len(flat)):
ind[j] = flat[j][i]
fitnesses = toolbox.map(toolbox.evaluate, example_pop)
for ind, fit in zip(example_pop, fitnesses):
ind.fitness.values = fit
Explanation: Creating an example population distributed as a mesh.
End of explanation
plt.figure(figsize=(11,5))
plt.subplot(1,2,1)
for ind in example_pop: plt.plot(ind[0], ind[1], 'k.', ms=3)
plt.xlabel('$x_1$');plt.ylabel('$x_2$');plt.title('Decision space');
plt.subplot(1,2,2)
for ind in example_pop: plt.plot(ind.fitness.values[0], ind.fitness.values[1], 'k.', ms=3)
plt.xlabel('$f_1(\mathbf{x})$');plt.ylabel('$f_2(\mathbf{x})$');
plt.xlim((0.5,3.6));plt.ylim((0.5,3.6)); plt.title('Objective space');
Explanation: Visualizing Dent
End of explanation
a_given_individual = toolbox.population(n=1)[0]
a_given_individual[0] = 0.5
a_given_individual[1] = 0.5
a_given_individual.fitness.values = toolbox.evaluate(a_given_individual)
Explanation: We also need a_given_individual.
End of explanation
def pareto_dominance(ind1, ind2):
'Returns `True` if `ind1` dominates `ind2`.'
extrictly_better = False
for item1 in ind1.fitness.values:
for item2 in ind2.fitness.values:
if item1 > item2:
return False
if not extrictly_better and item1 < item2:
extrictly_better = True
return extrictly_better
Explanation: Implementing the Pareto dominance relation between two individuals.
End of explanation
def efficient_pareto_dominance(ind1, ind2):
return tools.emo.isDominated(ind1.fitness.values, ind2.fitness.values)
Explanation: Note: Bear in mind that DEAP implements a Pareto dominance relation that probably is more efficient than this implementation. The previous function would be something like:
End of explanation
dominated = [ind for ind in example_pop
if pareto_dominance(a_given_individual, ind)]
dominators = [ind for ind in example_pop
if pareto_dominance(ind, a_given_individual)]
others = [ind for ind in example_pop
if not ind in dominated and not ind in dominators]
def plot_dent():
'Plots the points in decision and objective spaces.'
plt.figure(figsize=(10,5))
plt.subplot(1,2,1)
for ind in dominators: plt.plot(ind[0], ind[1], 'r.')
for ind in dominated: plt.plot(ind[0], ind[1], 'g.')
for ind in others: plt.plot(ind[0], ind[1], 'k.', ms=3)
plt.plot(a_given_individual[0], a_given_individual[1], 'bo', ms=6);
plt.xlabel('$x_1$');plt.ylabel('$x_2$');
plt.title('Decision space');
plt.subplot(1,2,2)
for ind in dominators: plt.plot(ind.fitness.values[0], ind.fitness.values[1], 'r.', alpha=0.7)
for ind in dominated: plt.plot(ind.fitness.values[0], ind.fitness.values[1], 'g.', alpha=0.7)
for ind in others: plt.plot(ind.fitness.values[0], ind.fitness.values[1], 'k.', alpha=0.7, ms=3)
plt.plot(a_given_individual.fitness.values[0], a_given_individual.fitness.values[1], 'bo', ms=6);
plt.xlabel('$f_1(\mathbf{x})$');plt.ylabel('$f_2(\mathbf{x})$');
plt.xlim((0.5,3.6));plt.ylim((0.5,3.6));
plt.title('Objective space');
plt.tight_layout()
Explanation: Let's compute the set of individuals that are dominated by a_given_individual, the ones that dominate it (its dominators) and the remaining ones.
End of explanation
plot_dent()
Explanation: Having a_given_individual (blue dot) we can now plot those that are dominated by it (in green), those that dominate it (in red) and those that are incomparable.
End of explanation
non_dom = tools.sortNondominated(example_pop, k=len(example_pop),
first_front_only=True)[0]
plt.figure(figsize=(5,5))
for ind in example_pop:
plt.plot(ind.fitness.values[0], ind.fitness.values[1], 'k.', ms=3, alpha=0.5)
for ind in non_dom:
plt.plot(ind.fitness.values[0], ind.fitness.values[1], 'bo', alpha=0.74, ms=5)
Explanation: Obtaining the nondominated front.
End of explanation
toolbox = base.Toolbox()
BOUND_LOW, BOUND_UP = 0.0, 1.0
toolbox.register("evaluate", lambda ind: benchmarks.dtlz3(ind, 2))
Explanation: So, is this the end?
Ok, now we know how to solve MOPs by sampling the search space.
MOPs, in the general case are NP-hard problems.
Brute force is never the solution in just a-little-more-complex cases.
An example, solving the TSP problem using brute force:
<table>
<tr><th>$n$ cities</th><th>time</th>
<tr><td>10</td><td>3 secs</td></tr>
<tr><td>12</td><td>3 secs × 11 × 12 = 6.6 mins</td></tr>
<tr><td>14</td><td>6.6 mins × 13 × 14 = 20 hours</td></tr>
<tr><td>24</td><td>3 secs × 24! / 10! = <a href="https://www.google.com/search?q=3+seconds+*+24!+%2F+10!+in+years">16 billion years</a></td></tr></table>
Note: See my PhD EC course notebooks https://github.com/lmarti/evolutionary-computation-course on solving the TSP problem using EAs.
Preference-based alternatives
A Decision Maker can define a set of weights $w_1,\ldots,w_M$ for each function $f_1(),\ldots,f_M()$.
We can convert a MOP into a SOP:
$$
\begin{array}{rl}
\mathrm{minimize} & F(\mathbf{x})= w_1f_1(\mathbf{x})+\cdots + w_if_i(\mathbf{x}) +\cdots +w_Mf_M(\mathbf{x})\,,\\
\mathrm{subject}\ \mathrm{to} & c_1(\mathbf{x}),\ldots,c_C(\mathbf{x})\le 0\,,\\
& d_1(\mathbf{x}),\ldots,d_D(\mathbf{x})= 0\,,\\
& \text{with}\ \mathbf{x}\in\mathcal{D}\,,
\end{array}
$$
A single-objective optimizer $\implies$ only one solution not the complete PF.
Mathematical programming.
Requires (a lot) of a priori knowledge but is relatively simple.
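A minimal sketch of this scalarization (added, not in the original slides), reusing the dent() function defined above; the weights are illustrative placeholders a Decision Maker would have to provide:
```
weights = (0.5, 0.5)  # illustrative weights only

def weighted_dent(individual):
    'Collapses the two Dent objectives into a single scalar value.'
    f1, f2 = dent(individual)
    return weights[0] * f1 + weights[1] * f2

# Any single-objective optimizer can now minimize weighted_dent(), but each
# choice of weights yields only one point of the Pareto-optimal front.
```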
The Decision Maker
<div align='center'><img src='static/920267262856611.jpeg' width='56%'></div>
Lexicographical ordering of the objectives
<table><tr><td width='25%'>
<div align='center'><img align='center'src="http://upload.wikimedia.org/wikipedia/commons/f/fb/Animal_Farm_-_1st_edition.jpg" width="100%"></div>
</td><td width='75%'>
<h3> All objectives are important...</h3>
<h2>...but some objectives are more important than others.</h2>
</td></tr></table>
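A small sketch (added) of how a lexicographical comparison could look, assuming the objective vectors are already ordered by decreasing importance; the tolerance is an assumption of this sketch:
```
def lexicographic_better(f_a, f_b, tol=1e-9):
    'True if objective vector f_a is lexicographically better than f_b (minimization).'
    for a, b in zip(f_a, f_b):
        if abs(a - b) > tol:
            return a < b  # the first objective that differs decides
    return False          # vectors identical (up to tol) are not "better"
```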
Better idea: Use the Pareto dominance relation to guide the search
We can use the Pareto dominance relation to determine how good an individual is.
Ideas:
For a solution $\mathbf{x}$, how many individuals dominate $\mathbf{x}$?
... and how many individuals does $\mathbf{x}$ dominate?
This looks like the perfect task for an evolutionary algorithm.
Evolutionary Algorithms
<div align='center'><img src='static/moea.png' width='65%'></div>
Mating selection + Variation (Offspring generation) + Environmental selection $\implies$ global + local parallel search features.
Elements to take into account using evolutionary algorithms
Individual representation (binary, Gray, floating-point, etc.);
evaluation and fitness assignment;
mating selection, that establishes a partial order of individuals in the population using their fitness function value as reference and determines the degree at which individuals in the population will take part in the generation of new (offspring) individuals.
variation, that applies a range of evolution-inspired operators, like crossover, mutation, etc., to synthesize offspring individuals from the current (parent) population.
This process is supposed to prime the fittest individuals so they play a bigger role in the generation of the offspring.
environmental selection, that merges the parent and offspring individuals to produce the population that will be used in the next iteration. This process often involves the deletion of some individuals using a given criterion in order to keep the number of individuals below a certain threshold.
stopping criterion, that determines when the algorithm should be stopped, either because the optimum was reached or because the optimization process is not progressing.
Pseudocode of an evolutionary algorithm
```
def evolutionary_algorithm():
populations = [] # a list with all the populations
populations[0] = initialize_population(pop_size)
t = 0
while not stop_criterion(populations[t]):
fitnesses = evaluate(populations[t])
        offspring = mating_and_variation(populations[t],
fitnesses)
populations[t+1] = environmental_selection(
populations[t],
offspring)
t = t+1
```
The crossover operator
One point crossover
<img src='https://upload.wikimedia.org/wikipedia/commons/5/56/OnePointCrossover.svg' width='47%'>
Two-point crossover
<img src='https://upload.wikimedia.org/wikipedia/commons/c/cd/TwoPointCrossover.svg' width='47%'>
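A minimal sketch (added) of one-point crossover on two list-like parents of equal length; DEAP already ships ready-made crossover operators (the NSGA-II configured below uses tools.cxSimulatedBinaryBounded), so this is only for illustration:
```
import random

def one_point_crossover(parent_a, parent_b):
    'Cuts both parents at a random point and swaps the tails.'
    point = random.randint(1, len(parent_a) - 1)
    child_a = parent_a[:point] + parent_b[point:]
    child_b = parent_b[:point] + parent_a[point:]
    return child_a, child_b
```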
The Non-dominated Sorting Genetic Algorithm (NSGA-II)
NSGA-II algorithm is one of the pillars of the EMO field.
Deb, K., Pratap, A., Agarwal, S., Meyarivan, T., A fast and elitist multiobjective genetic algorithm: NSGA-II, IEEE Transactions on Evolutionary Computation, vol.6, no.2, pp.182,197, Apr 2002 doi: 10.1109/4235.996017.
Key element of NSGA-II
Fitness assignment relies on the Pareto dominance relation:
Rank individuals according to the dominance relations established between them.
Individuals with the same domination rank are then compared using a local crowding distance.
NSGA-II fitness assignment in detail
The first step consists in classifying the individuals in a series of categories $\mathcal{F}_1,\ldots,\mathcal{F}_L$.
Each of these categories stores individuals that are only dominated by the elements of the previous categories,
$$
\begin{array}{rl}
\forall \mathbf{x}\in\mathcal{F}_i: &\exists \mathbf{y}\in\mathcal{F}_{i-1} \text{ such that } \mathbf{y}\preccurlyeq\mathbf{x},\text{ and }\\
&\not\exists\mathbf{z}\in \mathcal{P}_t\setminus\left( \mathcal{F}_1\cup\ldots\cup\mathcal{F}_{i-1}
\right)\text{ that }\mathrm{z}\preccurlyeq\mathrm{x}\,;
\end{array}
$$
with $\mathcal{F}_1$ equal to $\mathcal{P}_t^\ast$, the set of non-dominated individuals of $\mathcal{P}_t$.
After all individuals are ranked a local crowding distance is assigned to them.
The use of this distance primes individuals more isolated with respect to others.
Crowding distance
For each category set $\mathcal{F}_l$, having $f_l=|\mathcal{F}_l|$,
for each individual $\mathrm{x}_i\in\mathcal{F}_l$, set $d_{i}=0$.
for each objective function $m=1,\ldots,M$,
$\mathbf{I}=\mathrm{sort}\left(\mathcal{F}_l,m\right)$ (generate index vector).
$d_{I_1}^{(l)}=d_{I_{f_l}}^{(l)}=\infty$. (key)
for $i=2,\ldots,f_l-1$,
Update distances as,
$$
d_i = d_i + \frac{f_m\left(\mathrm{x}_{I_{i+1}}\right)-f_m\left(\mathrm{x}_{I_{i-1}}\right)}{f_m\left(\mathrm{x}_{I_{f_l}}\right)-f_m\left(\mathrm{x}_{I_{1}}\right)}$$
Here the $\mathrm{sort}\left(\mathcal{F},m\right)$ function produces an ordered index vector $\mathbf{I}$ with respect to objective function $m$.
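The following is a rough sketch (added) of that computation for a list of objective vectors front; DEAP's tools.selNSGA2 already implements it internally, so this is only for illustration:
```
def crowding_distances(front):
    'Crowding distance of each objective vector in a non-dominated front.'
    size, num_objs = len(front), len(front[0])
    distances = [0.0] * size
    for m in range(num_objs):
        order = sorted(range(size), key=lambda idx: front[idx][m])
        distances[order[0]] = distances[order[-1]] = float('inf')  # boundary points
        f_min, f_max = front[order[0]][m], front[order[-1]][m]
        if f_max == f_min:
            continue
        for pos in range(1, size - 1):
            distances[order[pos]] += (front[order[pos + 1]][m] -
                                      front[order[pos - 1]][m]) / (f_max - f_min)
    return distances
```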
<div align='center'><img src='static/population.png' width='74%'/></div>
Sorting the population by rank and distance.
Having the individual ranks and their local distances they are sorted using the crowded comparison operator, stated as:
An individual $\mathrm{x}_i$ is better than $\mathrm{x}_j$ if:
$\mathrm{x}_i$ has a better rank: $\mathrm{x}_i\in\mathcal{F}_k$, $\mathrm{x}_j\in\mathcal{F}_l$ and $k<l$, or;
if $k=l$ and $d_i>d_j$.
Now we have the key element of the non-dominated sorting GA.
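As a tiny sketch (added), the crowded-comparison operator boils down to:
```
def crowded_better(rank_i, dist_i, rank_j, dist_j):
    'True if individual i beats individual j under the crowded comparison.'
    return rank_i < rank_j or (rank_i == rank_j and dist_i > dist_j)
```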
Implementing NSGA-II
We will deal with DTLZ3, which is a more difficult test problem.
DTLZ problems can be configured to have as many objectives as desired, but as we want to visualize results we will stick to two objectives.
The Pareto-optimal front of DTLZ3 lies in the first orthant of a unit (radius 1) hypersphere located at the coordinate origin ($\mathbf{0}$).
It has many local optima that run parallel to the global optima and render the optimization process more complicated.
<div align='center'><img src='http://www.cs.cinvestav.mx/~emoobook/apendix-e/galeria4/dtlz3a.jpg' width="65%" align='center'/></div>
from Coello Coello, Lamont and Van Veldhuizen (2007) Evolutionary Algorithms for Solving Multi-Objective Problems, Second Edition. Springer Appendix E.
New toolbox instance with the necessary components.
End of explanation
toolbox.register("attr_float", uniform, BOUND_LOW, BOUND_UP, NDIM)
toolbox.register("individual", tools.initIterate, creator.Individual, toolbox.attr_float)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("mate", tools.cxSimulatedBinaryBounded, low=BOUND_LOW, up=BOUND_UP, eta=20.0)
toolbox.register("mutate", tools.mutPolynomialBounded, low=BOUND_LOW, up=BOUND_UP, eta=20.0, indpb=1.0/NDIM)
toolbox.register("select", tools.selNSGA2)
Explanation: Describing attributes, individuals and population and defining the selection, mating and mutation operators.
End of explanation
toolbox.pop_size = 50
toolbox.max_gen = 500
toolbox.mut_prob = 0.2
Explanation: Let's also use the toolbox to store other configuration parameters of the algorithm. This will show itself useful when performing massive experiments.
End of explanation
def nsga_ii(toolbox, stats=None, verbose=False):
pop = toolbox.population(n=toolbox.pop_size)
pop = toolbox.select(pop, len(pop))
return algorithms.eaMuPlusLambda(pop, toolbox, mu=toolbox.pop_size,
lambda_=toolbox.pop_size,
cxpb=1-toolbox.mut_prob,
mutpb=toolbox.mut_prob,
stats=stats,
ngen=toolbox.max_gen,
verbose=verbose)
Explanation: A compact NSGA-II implementation
Storing all the required information in the toolbox and using DEAP's algorithms.eaMuPlusLambda function allows us to create a very compact -albeit not a 100% exact copy of the original- implementation of NSGA-II.
End of explanation
%time res, logbook = nsga_ii(toolbox)
Explanation: Running the algorithm
We are now ready to run our NSGA-II.
End of explanation
fronts = tools.emo.sortLogNondominated(res, len(res))
Explanation: We can now get the Pareto fronts in the results (res).
End of explanation
plot_colors = ('b','r', 'g', 'm', 'y', 'k', 'c')
fig, ax = plt.subplots(1, figsize=(4,4))
for i,inds in enumerate(fronts):
par = [toolbox.evaluate(ind) for ind in inds]
df = pd.DataFrame(par)
df.plot(ax=ax, kind='scatter', label='Front ' + str(i+1),
x=df.columns[0], y=df.columns[1],
color=plot_colors[i % len(plot_colors)])
plt.xlabel('$f_1(\mathbf{x})$');plt.ylabel('$f_2(\mathbf{x})$');
Explanation: Resulting Pareto fronts
End of explanation
stats = tools.Statistics()
stats.register("pop", copy.deepcopy)
toolbox.max_gen = 4000 # we need more generations!
Explanation: It is better to make an animated plot of the evolution as it takes place.
Animating the evolutionary process
We create a stats to store the individuals not only their objective function values.
End of explanation
%time res, logbook = nsga_ii(toolbox, stats=stats)
from JSAnimation import IPython_display
import matplotlib.colors as colors
from matplotlib import animation
def animate(frame_index, logbook):
'Updates all plots to match frame _i_ of the animation.'
ax.clear()
fronts = tools.emo.sortLogNondominated(logbook.select('pop')[frame_index],
len(logbook.select('pop')[frame_index]))
for i,inds in enumerate(fronts):
par = [toolbox.evaluate(ind) for ind in inds]
df = pd.DataFrame(par)
df.plot(ax=ax, kind='scatter', label='Front ' + str(i+1),
x=df.columns[0], y =df.columns[1], alpha=0.47,
color=plot_colors[i % len(plot_colors)])
ax.set_title('$t=$' + str(frame_index))
ax.set_xlabel('$f_1(\mathbf{x})$');ax.set_ylabel('$f_2(\mathbf{x})$')
return None
fig = plt.figure(figsize=(4,4))
ax = fig.gca()
anim = animation.FuncAnimation(fig, lambda i: animate(i, logbook),
frames=len(logbook), interval=60,
blit=True)
anim
Explanation: Re-run the algorithm to get the data necessary for plotting.
End of explanation
anim.save('nsgaii-dtlz3.mp4', fps=15, bitrate=-1, dpi=500)
from IPython.display import YouTubeVideo
YouTubeVideo('Cm7r4cJq59s')
Explanation: The previous animation makes the notebook too big for online viewing. To circumvent this, it is better to save the animation as video and (manually) upload it to YouTube.
End of explanation
def dtlz5(ind, n_objs):
from functools import reduce
g = lambda x: sum([(a - 0.5)**2 for a in x])
gval = g(ind[n_objs-1:])
theta = lambda x: math.pi / (4.0 * (1 + gval)) * (1 + 2 * gval * x)
fit = [(1 + gval) * math.cos(math.pi / 2.0 * ind[0]) *
reduce(lambda x,y: x*y, [math.cos(theta(a)) for a in ind[1:]])]
for m in reversed(range(1, n_objs)):
if m == 1:
fit.append((1 + gval) * math.sin(math.pi / 2.0 * ind[0]))
else:
fit.append((1 + gval) * math.cos(math.pi / 2.0 * ind[0]) *
reduce(lambda x,y: x*y, [math.cos(theta(a)) for a in ind[1:m-1]], 1) *
math.sin(theta(ind[m-1])))
return fit
def dtlz6(ind, n_objs):
from functools import reduce
gval = sum([a**0.1 for a in ind[n_objs-1:]])
theta = lambda x: math.pi / (4.0 * (1 + gval)) * (1 + 2 * gval * x)
fit = [(1 + gval) * math.cos(math.pi / 2.0 * ind[0]) *
reduce(lambda x,y: x*y, [math.cos(theta(a)) for a in ind[1:]])]
for m in reversed(range(1, n_objs)):
if m == 1:
fit.append((1 + gval) * math.sin(math.pi / 2.0 * ind[0]))
else:
fit.append((1 + gval) * math.cos(math.pi / 2.0 * ind[0]) *
reduce(lambda x,y: x*y, [math.cos(theta(a)) for a in ind[1:m-1]], 1) *
math.sin(theta(ind[m-1])))
return fit
Explanation: Here it is clearly visible how the algorithm "jumps" from one local-optimum to a better one as evolution takes place.
MOP benchmark problem toolkits
Each problem instance is meant to test the algorithms with regard to a given feature: local optima, convexity, discontinuity, bias, or a combination of them.
ZDT1-6: Two-objective problems with a fixed number of decision variables.
E. Zitzler, K. Deb, and L. Thiele. Comparison of Multiobjective Evolutionary Algorithms: Empirical Results. Evolutionary Computation, 8(2):173-195, 2000. (pdf)
DTLZ1-7: $m$-objective problems with $n$ variables.
K. Deb, L. Thiele, M. Laumanns and E. Zitzler. Scalable Multi-Objective Optimization Test Problems. CEC 2002, p. 825 - 830, IEEE Press, 2002. (pdf)
CEC'09: Two- and three-objective problems with very complex Pareto sets.
Zhang, Q., Zhou, A., Zhao, S., & Suganthan, P. N. (2009). Multiobjective optimization test instances for the CEC 2009 special session and competition. In 2009 IEEE Congress on Evolutionary Computation (pp. 1–30). (pdf)
WFG1-9: $m$-objective problems with $n$ variables, very complex.
Huband, S., Hingston, P., Barone, L., & While, L. (2006). A review of multiobjective test problems and a scalable test problem toolkit. IEEE Transactions on Evolutionary Computation, 10(5), 477–506. doi:10.1109/TEVC.2005.861417
DTLZ5 and DTLZ6 have an $m-1$-dimensional Pareto-optimal front.
* This means that in 3D the Pareto optimal front is a 2D curve.
<div align='center'><img src='http://www.cs.cinvestav.mx/~emoobook/apendix-e/galeria4/dtlz5a.jpg' width="38%" align='center'/></div>
In two dimensions the front is a point.
End of explanation
def dtlz7(ind, n_objs):
gval = 1 + 9.0 / len(ind[n_objs-1:]) * sum([a for a in ind[n_objs-1:]])
fit = [ind for ind in ind[:n_objs-1]]
fit.append((1 + gval) * (n_objs - sum([a / (1.0 + gval) * (1 + math.sin(3 * math.pi * a)) for a in ind[:n_objs-1]])))
return fit
Explanation: DTLZ7 has many disconnected Pareto-optimal fronts.
<div align='center'><img src='http://www.cs.cinvestav.mx/~emoobook/apendix-e/galeria4/dtlz7b.jpg' width="38%" align='center'/></div>
End of explanation
problem_instances = {'ZDT1': benchmarks.zdt1, 'ZDT2': benchmarks.zdt2,
'ZDT3': benchmarks.zdt3, 'ZDT4': benchmarks.zdt4,
'DTLZ1': lambda ind: benchmarks.dtlz1(ind,2),
'DTLZ2': lambda ind: benchmarks.dtlz2(ind,2),
'DTLZ3': lambda ind: benchmarks.dtlz3(ind,2),
'DTLZ4': lambda ind: benchmarks.dtlz4(ind,2, 100),
'DTLZ5': lambda ind: dtlz5(ind,2),
'DTLZ6': lambda ind: dtlz6(ind,2),
'DTLZ7': lambda ind: dtlz7(ind,2)}
toolbox.max_gen = 1000
stats = tools.Statistics(lambda ind: ind.fitness.values)
stats.register("obj_vals", np.copy)
def run_problem(toolbox, problem):
toolbox.register('evaluate', problem)
return nsga_ii(toolbox, stats=stats)
Explanation: How does our NSGA-II behave when faced with different benchmark problems?
End of explanation
%time results = {problem: run_problem(toolbox, problem_instances[problem]) \
for problem in problem_instances}
Explanation: Running NSGA-II solving all problems. Now it takes longer.
End of explanation
class MultiProblemAnimation:
def init(self, fig, results):
self.results = results
self.axs = [fig.add_subplot(3,4,i+1) for i in range(len(results))]
self.plots =[]
for i, problem in enumerate(sorted(results)):
(res, logbook) = self.results[problem]
pop = pd.DataFrame(data=logbook.select('obj_vals')[0])
plot = self.axs[i].plot(pop[0], pop[1], 'b.', alpha=0.47)[0]
self.plots.append(plot)
fig.tight_layout()
def animate(self, t):
'Updates all plots to match frame _i_ of the animation.'
for i, problem in enumerate(sorted(results)):
#self.axs[i].clear()
(res, logbook) = self.results[problem]
pop = pd.DataFrame(data=logbook.select('obj_vals')[t])
self.plots[i].set_data(pop[0], pop[1])
self.axs[i].set_title(problem + '; $t=' + str(t)+'$')
self.axs[i].set_xlim((0, max(1,pop.max()[0])))
self.axs[i].set_ylim((0, max(1,pop.max()[1])))
return self.axs
mpa = MultiProblemAnimation()
fig = plt.figure(figsize=(14,6))
anim = animation.FuncAnimation(fig, mpa.animate, init_func=mpa.init(fig,results),
frames=toolbox.max_gen, interval=60, blit=True)
anim
Explanation: Creating this animation takes more programming effort.
End of explanation
anim.save('nsgaii-benchmarks.mp4', fps=15, bitrate=-1, dpi=500)
YouTubeVideo('8t-aWcpDH0U')
Explanation: Saving the animation as video and uploading it to YouTube.
End of explanation
toolbox = base.Toolbox()
BOUND_LOW, BOUND_UP = 0.0, 1.0
NDIM = 30
# the explanation of this... a few lines below
def eval_helper(ind):
return benchmarks.dtlz3(ind, 2)
toolbox.register("evaluate", eval_helper)
def uniform(low, up, size=None):
try:
return [random.uniform(a, b) for a, b in zip(low, up)]
except TypeError:
return [random.uniform(a, b) for a, b in zip([low] * size, [up] * size)]
toolbox.register("attr_float", uniform, BOUND_LOW, BOUND_UP, NDIM)
toolbox.register("individual", tools.initIterate, creator.Individual, toolbox.attr_float)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
toolbox.register("mate", tools.cxSimulatedBinaryBounded, low=BOUND_LOW, up=BOUND_UP, eta=20.0)
toolbox.register("mutate", tools.mutPolynomialBounded, low=BOUND_LOW, up=BOUND_UP, eta=20.0, indpb=1.0/NDIM)
toolbox.register("select", tools.selNSGA2)
toolbox.pop_size = 200
toolbox.max_gen = 500
Explanation: It is interesting how the algorithm deals with each problem: clearly some problems are harder than others.
In some cases it "hits" the Pareto front and then slowly explores it.
Experiment design and reporting results
Watching an animation of an EMO algorithm solve a problem is certainly fun.
It also allows us to understand many particularities of the problem being solved.
But, as Carlos Coello would say, we are not in an art appreciation class.
We should follow the key concepts provided by the scientific method.
I urge you to study the experimental design topic in depth, as it is an essential knowledge.
<p>
<div class="alert alert-info" role="alert">
**Evolutionary algorithms are stochastic algorithms; therefore their results must be assessed by repeating experiments until you reach a statistically valid conclusion.**
</div>
</p>
We need to evaluate performance
Closeness to the Pareto-optimal front.
Diversity of solutions.
Coverage of the Pareto-optimal fronts.
<p>
<div class="alert alert-success" role="alert">
<span class="label label-success">Research hint!</span> The design, analysis and application of performance indicators is one of the main research topics in the EMO field.</div>
</p>
The hypervolume indicator
<table align='center' width="92%">
<tr>
<td width='50%'>
<img src='https://ls11-www.cs.uni-dortmund.de/_media/rudolph/hypervolume/hv.png' width='92%'>
</td>
<td width='50%'>
<img src='https://ls11-www.cs.uni-dortmund.de/_media/rudolph/hypervolume/hvemm3d.png' width='92%'>
</td>
</tr>
</table>
Note: Taken from Günter Rudolph's site on the hypervolume indicator.
Formalization of the hypervolume
For a set of solutions $\mathcal{A}$,
$$
I_\mathrm{hyp}\left(\mathcal{A}\right) = \mathrm{volume}\left(
\bigcup_{\forall \mathbf{a}\in\mathcal{A}}{\mathrm{hypercube}(\mathbf{a},\mathbf{r})}\right)\,.
$$
We need a reference point, $\mathbf{r}$.
Hypervolume is Pareto compliant (Fleischer, 2003): for sets $\mathcal{A}$ and $\mathcal{B}$, $\mathcal{A}\preccurlyeq\mathcal{B} \implies I_\mathrm{hyp}(A)>I_\mathrm{hyp}(B)$.
Calculating hypervolume is #P-hard, i.e. superpolynomial runtime unless P = NP (Bringmann and Friedrich, 2008).
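For the two-objective (minimization) case the computation reduces to summing rectangles; a rough sketch (added) is shown below, while the experiments later on use DEAP's deap.benchmarks.tools.hypervolume for the general case:
```
def hypervolume_2d(front, reference):
    'Hypervolume of a mutually non-dominated 2D front w.r.t. a reference point.'
    # assumes every point in `front` dominates `reference`
    points = sorted(front)           # ascending f1 implies descending f2
    hv, prev_f2 = 0.0, reference[1]
    for f1, f2 in points:
        hv += (reference[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv
```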
An illustrative simple/sample experiment
Let's make a relatively simple experiment:
Hypothesis: The mutation probability of NSGA-II matters when solving the DTLZ3 problem.
Procedure: We must perform an experiment testing different mutation probabilities while keeping the other parameters constant.
Notation
As usual we need to establish some notation:
Multi-objective problem (or just problem): A multi-objective optimization problem, as defined above.
MOEA: An evolutionary computation method used to solve multi-objective problems.
Experiment: a combination of problem and MOEA and a set of values of their parameters.
Experiment run: The result of running an experiment.
We will use toolbox instances to define experiments.
We start by creating a toolbox that will contain the configuration that will be shared across all experiments.
End of explanation
toolbox.experiment_name = "$P_\mathrm{mut}="
Explanation: We add an experiment_name to toolbox that we will fill up later on.
End of explanation
mut_probs = (0.05, 0.15, 0.3)
number_of_experiments = len(mut_probs)
toolboxes=list([copy.copy(toolbox) for _ in range(number_of_experiments)])
Explanation: We can now replicate this toolbox instance and then modify the mutation probabilities.
End of explanation
for i, toolbox in enumerate(toolboxes):
toolbox.mut_prob = mut_probs[i]
toolbox.experiment_name = toolbox.experiment_name + str(mut_probs[i]) +'$'
for toolbox in toolboxes:
print(toolbox.experiment_name, toolbox.mut_prob)
Explanation: Now toolboxes is a list of copies of the same toolbox. One for each experiment configuration (mutation probability).
...but we still have to set the mutation probabilities in the elements of toolboxes.
End of explanation
number_of_runs = 42
Explanation: Experiment design
As we are dealing with stochastic methods their results should be reported relying on an statistical analysis.
A given experiment (a toolbox instance in our case) should be repeated a sufficient amount of times.
In theory, the more runs the better, but how much is enough? In practice, we could say that about 30 runs is enough.
The non-dominated fronts produced by each experiment run should be compared to each other.
We have seen in class that a number of performance indicators, like the hypervolume, additive and multiplicative epsilon indicators, among others, have been proposed for that task.
We can use statistical visualizations like box plots or violin plots to make a visual assessment of the indicator values produced in each run.
We must apply a set of statistical hypothesis tests in order to reach a statistically valid judgment of the results of an algorithm.
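A tiny sketch (added) of the kind of hypothesis test we will rely on, using SciPy on made-up per-run indicator values (the numbers are placeholders, not real results):
```
from scipy import stats

hypervols_a = [0.91, 0.93, 0.92, 0.90, 0.94]  # made-up hypervolumes, configuration A
hypervols_b = [0.88, 0.89, 0.90, 0.87, 0.91]  # made-up hypervolumes, configuration B
statistic, p_value = stats.kruskal(hypervols_a, hypervols_b)
# a small p_value suggests the two configurations do not perform the same
```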
Note: I personally like the number 42 as it is the answer to The Ultimate Question of Life, the Universe, and Everything.
End of explanation
from IPython.html import widgets
from IPython.display import display
progress_bar = widgets.IntProgressWidget(description="Starting...",
max=len(toolboxes)*number_of_runs)
Explanation: Running experiments in parallel
As we are now solving more demanding problems it would be nice to make our algorithms to run in parallel and profit from modern multi-core CPUs.
In DEAP it is very simple to parallelize an algorithm (if it has been properly programmed) by providing a parallel map() function through the toolbox (a minimal sketch is shown after this list).
Local parallelization can be achieved using Python's multiprocessing or concurrent.futures modules.
Cluster parallelization can be achieved using IPython Parallel or SCOOP, which seems to be recommended by the DEAP developers, as it was once part of DEAP.
Note: You can have a very good summary about this issue in http://blog.liang2.tw/2014-handy-dist-computing/.
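A minimal sketch (added) of how a parallel map is plugged into a DEAP toolbox; note that the experiments below instead parallelize at the level of whole algorithm runs with multiprocessing.Pool.map:
```
from multiprocessing import Pool

pool = Pool()                      # one worker per available core
toolbox.register("map", pool.map)  # DEAP's algorithms will now map evaluations in parallel
```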
Progress feedback
Another issue with these long experiments has to do with being patient.
A little bit of feedback on the experiment execution would be cool.
We can use the integer progress bar from IPython widgets and report every time an experiment run is finished.
End of explanation
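As a minimal, hedged sketch (not the approach used below, which relies on multiprocessing.Pool), a parallel map from the standard library's concurrent.futures could serve the same purpose; the function and inputs here are placeholders, not names from this notebook.
from concurrent.futures import ProcessPoolExecutor

def square(x):          # placeholder workload, defined at module level so it pickles
    return x * x

with ProcessPoolExecutor() as executor:
    print(list(executor.map(square, range(5))))   # [0, 1, 4, 9, 16]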
def run_algo_wrapper(toolbox):
    result, _ = nsga_ii(toolbox)
    pareto_sets = tools.emo.sortLogNondominated(result, len(result))
    return pareto_sets[0]
Explanation: A side-effect of using process-based parallelization
Process-based parallelization based on multiprocessing requires that the parameters passed to map() be pickleable.
The direct consequence is that lambda functions cannot be used directly.
This will certainly ruin the party for all lambda fans out there! (me included)
Hence we need to write some wrapper functions instead.
But, that wrapper function can take care of filtering out dominated individuals in the results.
End of explanation
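A quick illustration of the pickling constraint (a sketch; the exact exception type and message vary across Python versions):
import pickle

def top_level(x):                    # module-level functions pickle fine
    return x + 1

pickle.dumps(top_level)
try:
    pickle.dumps(lambda x: x + 1)    # lambdas do not
except Exception as err:
    print(type(err).__name__, err)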
%%time
from multiprocessing import Pool
display(progress_bar)
results = {}
pool = Pool()
for toolbox in toolboxes:
results[toolbox.experiment_name] = pool.map(run_algo_wrapper, [toolbox] * number_of_runs)
progress_bar.value +=number_of_runs
progress_bar.description = "Finished %03d of %03d:" % (progress_bar.value, progress_bar.max)
Explanation: All set! Run the experiments...
End of explanation
import pickle
pickle.dump(results, open('nsga_ii_dtlz3-results.pickle', 'wb'))
Explanation: As you can see, even this relatively small experiment took lots of time!
As running the experiments takes so long, let's save the results so we can use them whenever we want.
End of explanation
loaded_results = pickle.load(open('nsga_ii_dtlz3-results.pickle', 'rb'))
results = loaded_results # <-- (un)comment when needed
Explanation: In case you need it, this file is included in the github repository.
To load the results we would just have to:
End of explanation
res = pd.DataFrame(results)
res.head()
Explanation: results is a dictionary, but a pandas DataFrame is a more handy container for the results.
End of explanation
a = res.applymap(lambda pop: [toolbox.evaluate(ind) for ind in pop])
plt.figure(figsize=(11,3))
for i, col in enumerate(a.columns):
plt.subplot(1, len(a.columns), i+1)
for pop in a[col]:
x = pd.DataFrame(data=pop)
plt.scatter(x[0], x[1], marker='.', alpha=0.5)
plt.title(col)
Explanation: A first glance at the results
End of explanation
def calculate_reference(results, epsilon=0.1):
alldata = np.concatenate(np.concatenate(results.values))
obj_vals = [toolbox.evaluate(ind) for ind in alldata]
return np.max(obj_vals, axis=0) + epsilon
reference = calculate_reference(res)
reference
Explanation: The local Pareto-optimal fronts are clearly visible!
Calculating performance indicators
As already mentioned, we need to evaluate the quality of the solutions produced in every execution of the algorithm.
We will use the hypervolume indicator for that.
We already filtered each population to leave only the non-dominated individuals.
Calculating the reference point: a point that is worse than any other individual in every objective.
End of explanation
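To build some intuition for the indicator itself, here is a minimal 2-D hypervolume sketch on toy points (illustrative only; the computation below uses DEAP's implementation):
def hypervolume_2d(front, ref):
    # Area dominated by a 2-D minimization front, bounded by the reference point.
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

hypervolume_2d([(1, 4), (2, 2), (4, 1)], ref=(5, 5))   # -> 11.0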
import deap.benchmarks.tools as bt
hypervols = res.applymap(lambda pop: bt.hypervolume(pop, reference))
hypervols.head()
Explanation: We can now compute the hypervolume of the Pareto-optimal fronts yielded by each algorithm run.
End of explanation
hypervols.describe()
Explanation: How can we interpret the indicators?
Option A: Tabular form
End of explanation
import seaborn
seaborn.set(style="whitegrid")
fig = plt.figure(figsize=(15,3))
plt.subplot(1, 2, 1, title=r'Violin plots of NSGA-II with $P_{\mathrm{mut}}$')
seaborn.violinplot(hypervols, alpha=0.74)
plt.ylabel('Hypervolume'); plt.xlabel('Mutation probabilities')
plt.subplot(1, 2, 2, title=r'Box plots of NSGA-II with $P_{\mathrm{mut}}$')
seaborn.boxplot(hypervols, alpha=0.74)
plt.ylabel('Hypervolume'); plt.xlabel('Mutation probabilities');
Explanation: Option B: Visualization
End of explanation
import itertools
import scipy.stats as stats
def compute_stat_matrix(data, stat_func, alpha=0.05):
'''A function that applies `stat_func` to all combinations of columns in `data`.
Returns a square matrix with the p-values'''
p_values = pd.DataFrame(columns=data.columns, index=data.columns)
for a,b in itertools.combinations(data.columns,2):
s,p = stat_func(data[a], data[b])
p_values[a].ix[b] = p
p_values[b].ix[a] = p
return p_values
Explanation: Option C: Statistical hypothesis test
Choosing the correct statistical test is essential to properly report the results.
Nonparametric statistics can lend a helping hand.
Parametric statistics could be a better choice in some cases.
Parametric statistics require that all data follow a known distribution (frequently a normal one).
Some tests -like the normality test- can be applied to verify that the data meet the parametric stats requirements.
In my experience, it is very unlikely that all your EMO results meet those characteristics.
We start by writing a function that helps us tabulate the results of applying a statistical hypothesis test.
End of explanation
stats.kruskal(*[hypervols[col] for col in hypervols.columns])
Explanation: The Kruskal-Wallis H-test tests the null hypothesis that the population medians of all of the groups are equal.
It is a non-parametric version of ANOVA.
The test works on 2 or more independent samples, which may have different sizes.
Note that rejecting the null hypothesis does not indicate which of the groups differs.
Post-hoc comparisons between groups are required to determine which groups are different.
End of explanation
import math

def conover_inman_procedure(data, alpha=0.05):
num_runs = len(data)
num_algos = len(data.columns)
N = num_runs*num_algos
_,p_value = stats.kruskal(*[data[col] for col in data.columns])
ranked = stats.rankdata(np.concatenate([data[col] for col in data.columns]))
ranksums = []
for i in range(num_algos):
ranksums.append(np.sum(ranked[num_runs*i:num_runs*(i+1)]))
S_sq = (np.sum(ranked**2) - N*((N+1)**2)/4)/(N-1)
right_side = stats.t.cdf(1-(alpha/2), N-num_algos) * \
math.sqrt((S_sq*((N-1-p_value)/(N-1)))*2/num_runs)
res = pd.DataFrame(columns=data.columns, index=data.columns)
for i,j in itertools.combinations(np.arange(num_algos),2):
res[res.columns[i]].ix[j] = abs(ranksums[i] - ranksums[j]/num_runs) > right_side
res[res.columns[j]].ix[i] = abs(ranksums[i] - ranksums[j]/num_runs) > right_side
return res
conover_inman_procedure(hypervols)
Explanation: We can now assert that the results are not all the same, but which ones are different from or similar to the others?
In case the null hypothesis of the Kruskal-Wallis test is rejected, the Conover–Inman procedure (Conover, 1999, pp. 288-290) can be applied in a pairwise manner in order to determine if the results of one algorithm were significantly better than those of the other.
Conover, W. J. (1999). Practical Nonparametric Statistics. John Wiley & Sons, New York, 3rd edition.
Note: If you want to get an extended summary of this method check out my PhD thesis.
End of explanation
hyp_transp = hypervols.transpose()
measurements = [list(hyp_transp[col]) for col in hyp_transp.columns]
stats.friedmanchisquare(*measurements)
Explanation: We now know in what cases the difference is sufficient as to say that one result is better than the other.
Another alternative is the Friedman test.
Its null hypothesis that repeated measurements of the same individuals have the same distribution.
It is often used to test for consistency among measurements obtained in different ways.
For example, if two measurement techniques are used on the same set of individuals, the Friedman test can be used to determine if the two measurement techniques are consistent.
End of explanation
raw_p_values=compute_stat_matrix(hypervols, stats.mannwhitneyu)
raw_p_values
Explanation: Mann–Whitney U test (also called the Mann–Whitney–Wilcoxon (MWW), Wilcoxon rank-sum test (WRS), or Wilcoxon–Mann–Whitney test) is a nonparametric test of the null hypothesis that two populations are the same against an alternative hypothesis, especially that a particular population tends to have larger values than the other.
It has greater efficiency than the $t$-test on non-normal distributions, such as a mixture of normal distributions, and it is nearly as efficient as the $t$-test on normal distributions.
End of explanation
from scipy.misc import comb
alpha=0.05
alpha_sid = 1 - (1-alpha)**(1/comb(len(hypervols.columns), 2))
alpha_sid
Explanation: The familywise error rate (FWER) is the probability of making one or more false discoveries, or type I errors, among all the hypotheses when performing multiple hypotheses tests.
Example: When performing a single test, there is an $\alpha$ chance of making a type I error. If we make $m$ tests, the probability of making at least one type I error is bounded by $m\alpha$. Therefore, if $\alpha=0.05$ is used and 5 pairwise comparisons are made, we can have up to a $5\times0.05 = 0.25$ chance of making at least one type I error.
FWER procedures (such as the Bonferroni correction) exert a more stringent control over false discovery compared to false discovery rate (FDR) controlling procedures.
FWER controlling procedures seek to reduce the probability of even one false discovery, as opposed to the expected proportion of false discoveries.
Thus, FDR procedures have greater power at the cost of increased rates of type I errors, i.e., rejecting the null hypothesis of no effect when it should be accepted.
One of these corrections is the Šidák correction as it is less conservative than the Bonferroni correction:
$$\alpha_{SID} = 1-(1-\alpha)^\frac{1}{m},$$
where $m$ is the number of tests.
In our case $m$ is the number of combinations of algorithm configurations taken two at a time.
There are other corrections that can be used.
End of explanation
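For a quick numeric comparison with the Bonferroni correction on the same three pairwise tests (a sketch):
m, alpha = 3, 0.05
print(alpha / m)                   # Bonferroni threshold: ~0.0167
print(1 - (1 - alpha)**(1 / m))    # Šidák threshold:      ~0.0170 (slightly less strict)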
raw_p_values.applymap(lambda value: value <= alpha_sid)
Explanation: Let's apply the corrected alpha to raw_p_values. A True cell means the p-value is below the corrected threshold, i.e., the difference between those two results is statistically significant.
End of explanation
<END_TASK> |
15,566 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Think Bayes
Second Edition
Copyright 2020 Allen B. Downey
License
Step1: Introduction
Three dimensions!
The Grizzly Bear Problem
In 1996 and 1997 Mowat and Strobeck deployed bear traps in locations in British Columbia and Alberta, in an effort to estimate the population of grizzly bears. They describe the experiment in "Estimating Population Size of Grizzly Bears Using Hair Capture, DNA Profiling, and Mark-Recapture Analysis"
The "trap" consists of a lure and several strands of barbed wire intended to capture samples of hair from bears that visit the lure. Using the hair samples, the researchers use DNA analysis to identify individual bears.
During the first session, on June 29, 1996, the researchers deployed traps at 76 sites. Returning 10 days later, they obtained 1043 hair samples and identified 23 different bears. During a second 10-day session they obtained 1191 samples from 19 different bears, where 4 of the 19 were from bears they had identified in the first batch.
To estimate the population of bears from this data, we need a model for the probability that each bear will be observed during each session. As a starting place, we'll make the simplest assumption, that every bear in the population has the same (unknown) probability of being sampled during each round.
With these assumptions we can compute the probability of the data for a range of possible populations.
As an example, let's suppose that the actual population of bears is 200.
After the first session, 23 of the 200 bears have been identified.
During the second session, if we choose 19 bears at random, what is the probability that 4 of them were previously identified?
I'll define
N
Step2: So that's the distribution of k given N, K, and n.
Now let's go the other way
Step3: So that's our prior.
To compute the likelihood of the data, we can use hypergeom with constants K and n, and a range of values of N.
Step4: We can compute the posterior in the usual way.
Step5: And here's what it looks like.
Step6: The most likely value is 109.
Step7: But the distribution is skewed to the right, so the posterior mean is substantially higher.
Step9: Two parameter model
Step10: Two parameters better than one?
Step11: The Lincoln index problem
A few years ago my occasional correspondent John D. Cook wrote an excellent
blog post about the Lincoln index, which is a way to estimate the
number of errors in a document (or program) by comparing results from
two independent testers.
http
Step13: Unknown probabilities
Step14: Chao et al
Step15: Spina bifida
Step16: Diabetes | <ASSISTANT_TASK:>
Python Code:
# If we're running on Colab, install empiricaldist
# https://pypi.org/project/empiricaldist/
import sys
IN_COLAB = 'google.colab' in sys.modules
if IN_COLAB:
!pip install empiricaldist
# Get utils.py and create directories
import os
if not os.path.exists('utils.py'):
!wget https://github.com/AllenDowney/ThinkBayes2/raw/master/code/soln/utils.py
if not os.path.exists('figs'):
!mkdir figs
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from empiricaldist import Pmf, Cdf
from utils import decorate, savefig
Explanation: Think Bayes
Second Edition
Copyright 2020 Allen B. Downey
License: Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)
End of explanation
from scipy.stats import hypergeom
N = 100
K = 23
n = 19
ks = np.arange(12)
ps = hypergeom(N, K, n).pmf(ks)
plt.bar(ks, ps)
decorate(xlabel='Number of bears observed twice',
ylabel='PMF',
title='Hypergeometric distribution of k (known population 200)')
Explanation: Introduction
Three dimensions!
The Grizzly Bear Problem
In 1996 and 1997 Mowat and Strobeck deployed bear traps in locations in British Columbia and Alberta, in an effort to estimate the population of grizzly bears. They describe the experiment in "Estimating Population Size of Grizzly Bears Using Hair Capture, DNA Profiling, and Mark-Recapture Analysis"
The "trap" consists of a lure and several strands of barbed wire intended to capture samples of hair from bears that visit the lure. Using the hair samples, the researchers use DNA analysis to identify individual bears.
During the first session, on June 29, 1996, the researchers deployed traps at 76 sites. Returning 10 days later, they obtained 1043 hair samples and identified 23 different bears. During a second 10-day session they obtained 1191 samples from 19 different bears, where 4 of the 19 were from bears they had identified in the first batch.
To estimate the population of bears from this data, we need a model for the probability that each bear will be observed during each session. As a starting place, we'll make the simplest assumption, that every bear in the population has the same (unknown) probability of being sampled during each round.
With these assumptions we can compute the probability of the data for a range of possible populations.
As an example, let's suppose that the actual population of bears is 200.
After the first session, 23 of the 200 bears have been identified.
During the second session, if we choose 19 bears at random, what is the probability that 4 of them were previously identified?
I'll define
N: actual (unknown) population size, 200.
K: number of bears identified in the first session, 23.
n: number of bears observed in the second session, 19 in the example.
k: the number of bears in the second session that had previously been identified, 4.
For given values of N, K, and n, the distribution of k is described by the hypergeometric distribution:
$PMF(k) = {K \choose k}{N-K \choose n-k}/{N \choose n}$
To understand why, consider:
The denominator, ${ N \choose n}$, is the number of subsets of $n$ we could choose from a population of $N$ bears.
The numerator is the number of subsets that contain $k$ bears from the previously identified $K$ and $n-k$ from the previously unseen $N-K$.
SciPy provides hypergeom, which we can use to compute this PMF.
End of explanation
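As a small sanity check (a sketch, not part of the original text), the same PMF value can be computed directly from the binomial coefficients:
from scipy.special import comb
from scipy.stats import hypergeom

N, K, n, k = 200, 23, 19, 4
manual = comb(K, k) * comb(N - K, n - k) / comb(N, n)
print(manual, hypergeom(N, K, n).pmf(k))   # the two values agree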
Ns = np.arange(50, 501)
prior_N = Pmf(1, Ns)
prior_N.index.name = 'N'
Explanation: So that's the distribution of k given N, K, and n.
Now let's go the other way: given K, n, and k, how can we estimate the total population, N?
As a starting place, let's suppose that, prior to this study, an expert in this domain would have estimated that the population is between 50 and 500, and equally likely to be any value in that range.
End of explanation
K = 23
n = 19
k = 4
likelihood = hypergeom(Ns, K, n).pmf(k)
Explanation: So that's our prior.
To compute the likelihood of the data, we can use hypergeom with constants K and n, and a range of values of N.
End of explanation
posterior_N = prior_N * likelihood
posterior_N.normalize()
Explanation: We can compute the posterior in the usual way.
End of explanation
posterior_N.plot()
decorate(xlabel='Population of bears (N)',
ylabel='PDF',
title='Posterior distribution of N')
Explanation: And here's what it looks like.
End of explanation
posterior_N.max_prob()
Explanation: The most likely value is 109.
End of explanation
posterior_N.mean()
posterior_N.credible_interval(0.9)
Explanation: But the distribution is skewed to the right, so the posterior mean is substantially higher.
End of explanation
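For comparison (a back-of-the-envelope sketch, not part of the Bayesian model), the classical Lincoln–Petersen estimator $\hat{N} = Kn/k$ lands close to the MAP value above:
K, n, k = 23, 19, 4
print(K * n / k)   # 109.25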
ps = np.linspace(0, 1, 101)
prior_p = Pmf(1, ps)
prior_p.index.name = 'p'
from utils import make_joint
joint_prior = make_joint(prior_N, prior_p)
N_mesh, p_mesh = np.meshgrid(Ns, ps)
N_mesh.shape
from scipy.stats import binom
like1 = binom.pmf(K, N_mesh, p_mesh)
like1.sum()
like2 = binom.pmf(k, K, p_mesh) * binom.pmf(n-k, N_mesh-K, p_mesh)
like2.sum()
from utils import normalize
joint_posterior = joint_prior * like1 * like2
normalize(joint_posterior)
def plot_contour(joint, **options):
    """Plot a joint distribution.

    joint: DataFrame representing a joint PMF
    """
    cs = plt.contour(joint.columns, joint.index, joint, **options)
    decorate(xlabel=joint.columns.name,
             ylabel=joint.index.name)
    return cs
plot_contour(joint_posterior)
decorate(title='Joint posterior distribution of N and p')
from utils import marginal
marginal_N = marginal(joint_posterior, 0)
posterior_N.plot(color='gray')
marginal_N.plot()
decorate(xlabel='Population of bears (N)',
ylabel='PDF',
title='Posterior marginal distribution of N')
marginal_N.mean(), marginal_N.credible_interval(0.9)
marginal_p = marginal(joint_posterior, 1)
marginal_p.plot()
decorate(xlabel='Probability of observing a bear',
ylabel='PDF',
title='Posterior marginal distribution of p')
from seaborn import JointGrid
def joint_plot(joint, **options):
x = joint.columns.name
x = 'x' if x is None else x
y = joint.index.name
y = 'y' if y is None else y
# make a JointGrid with minimal data
data = pd.DataFrame({x:[0], y:[0]})
g = JointGrid(x, y, data, **options)
# replace the contour plot
g.ax_joint.contour(joint.columns,
joint.index,
joint,
cmap='viridis')
# replace the marginals
marginal_x = marginal(joint, 0)
g.ax_marg_x.plot(marginal_x.qs, marginal_x.ps)
marginal_y = marginal(joint, 1)
g.ax_marg_y.plot(marginal_y.ps, marginal_y.qs)
joint_plot(joint_posterior)
Explanation: Two parameter model
End of explanation
mean = (23 + 19) / 2
N1 = 138
p = mean/N1
p
from scipy.stats import binom
binom(N1, p).std()
binom(N1, p).pmf([23, 19]).prod()
N2 = 173
p = mean/N2
p
binom(N2, p).std()
binom(N2, p).pmf([23, 19]).prod()
Explanation: Two parameters better than one?
End of explanation
n0 = 20
n1 = 15
k11 = 3
k10 = n0 - k11
k01 = n1 - k11
k10, k01
Ns = np.arange(32, 350)
prior_N = Pmf(1, Ns)
prior_N.index.name = 'N'
p0, p1 = 0.2, 0.15
like0 = binom.pmf(n0, Ns, p0)
like0.sum()
like1 = binom.pmf(k01, Ns-n0, p1) * binom.pmf(k11, n0, p1)
like1.sum()
likelihood = like0 * like1
likelihood.shape
prior_N.shape
posterior_N = prior_N * likelihood
posterior_N.normalize()
posterior_N.plot()
decorate(xlabel='n',
ylabel='PMF',
title='Posterior marginal distribution of n with known p1, p2')
posterior_N.mean()
Explanation: The Lincoln index problem
A few years ago my occasional correspondent John D. Cook wrote an excellent
blog post about the Lincoln index, which is a way to estimate the
number of errors in a document (or program) by comparing results from
two independent testers.
http://www.johndcook.com/blog/2010/07/13/lincoln-index/
Here's his presentation of the problem:
"Suppose you have a tester who finds 20 bugs in your program. You
want to estimate how many bugs are really in the program. You know
there are at least 20 bugs, and if you have supreme confidence in your
tester, you may suppose there are around 20 bugs. But maybe your
tester isn't very good. Maybe there are hundreds of bugs. How can you
have any idea how many bugs there are? There's no way to know with one
tester. But if you have two testers, you can get a good idea, even if
you don't know how skilled the testers are."
Suppose the first tester finds 20 bugs, the second finds 15, and they
find 3 in common; how can we estimate the number of bugs?
End of explanation
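The classical Lincoln index gives a handy point estimate for these numbers (a sketch; the variable names here are just placeholders):
bugs1, bugs2, common = 20, 15, 3
print(bugs1 * bugs2 / common)   # 100.0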
p0 = np.linspace(0, 1, 61)
prior_p0 = Pmf(1, p0)
prior_p0.index.name = 'p0'
p1 = np.linspace(0, 1, 51)
prior_p1 = Pmf(1, p1)
prior_p1.index.name = 'p1'
from utils import make_joint
joint = make_joint(prior_p0, prior_p1)
joint.shape
joint_pmf = Pmf(joint.transpose().stack())
joint_pmf.head()
joint_prior = make_joint(prior_N, joint_pmf)
joint_prior.shape
joint_prior.head()
likelihood = joint_prior.copy()
Ns = joint_prior.columns
for (p0, p1) in joint_prior.index:
like0 = binom.pmf(n0, Ns, p0)
like1 = binom.pmf(k01, Ns-n0, p1) * binom.pmf(k11, n0, p1)
likelihood.loc[p0, p1] = like0 * like1
likelihood.to_numpy().sum()
from utils import normalize
joint_posterior = joint_prior * likelihood
normalize(joint_posterior)
joint_posterior.shape
from utils import marginal
posterior_N = marginal(joint_posterior, 0)
posterior_N.plot()
posterior_pmf = marginal(joint_posterior, 1)
posterior_pmf.shape
posterior_joint_ps = posterior_pmf.unstack().transpose()
posterior_joint_ps.head()
def plot_contour(joint, **options):
    """Plot a joint distribution.

    joint: DataFrame representing a joint PMF
    """
    cs = plt.contour(joint.columns, joint.index, joint, **options)
    decorate(xlabel=joint.columns.name,
             ylabel=joint.index.name)
    return cs
plot_contour(posterior_joint_ps)
decorate(title='Posterior joint distribution for p1 and p2')
posterior_p1 = marginal(posterior_joint_ps, 0)
posterior_p2 = marginal(posterior_joint_ps, 1)
posterior_p1.plot(label='p1')
posterior_p2.plot(label='p2')
decorate(xlabel='Probability of finding a bug',
ylabel='PDF',
title='Posterior marginal distributions of p1 and p2')
posterior_p1.mean(), posterior_p1.credible_interval(0.9)
posterior_p2.mean(), posterior_p2.credible_interval(0.9)
joint_plot(posterior_joint_ps)
Explanation: Unknown probabilities
End of explanation
data = [-1000, 63, 55, 18, 69, 17, 21, 28]
index = pd.MultiIndex.from_product([[0, 1]]*3)
Kijk = pd.Series(data, index)
Kijk
Kijk.xs(1, level=0).sum()
Kijk.xs(1, level=1).sum()
Kijk.xs(1, level=2).sum()
n = pd.Series(1, range(3))
for level in n.index:
n[level] = Kijk.xs(1, level=level).sum()
n
Kixx = Kijk.sum(level=0)
Kixx
Kijx = Kijk.sum(level=[0,1])
Kijx
def num_observed(K):
return np.asarray(K)[1:].sum()
s0 = num_observed(Kixx)
s0
s1 = num_observed(Kijx)
s1
s2 = num_observed(Kijk)
s2
s = [s0, s1, s2]
s
k = pd.concat([Kixx, Kijx, Kijk])
k
k[1]
k[0,1]
k[0,0,1]
k[0]
N = 300
k0 = N - s[0]
k0
k00 = N - s[1]
k00
k000 = N - s[2]
k000
Ns = np.arange(s2, 500, 5)
prior_N = Pmf(1, Ns)
prior_N.index.name = 'N'
ps = np.linspace(0, 1, 101)
prior_p = Pmf(1, ps)
prior_p.index.name = 'p'
from utils import make_joint
joint_prior = make_joint(prior_N, prior_p)
def likelihood_round1(ps, n, k, s, joint_prior):
like = joint_prior.copy()
for N in joint_prior:
like[N] = binom.pmf(n[0], N, ps)
return like
like1 = likelihood_round1(ps, n, k, s, joint_prior)
def likelihood_round2(ps, n, k, s, joint_prior):
like = joint_prior.copy()
for N in joint_prior:
k0 = N - s[0]
like[N] = (binom.pmf(k[0,1], k0, ps) *
binom.pmf(k[1,1], k[1], ps))
return like
like2 = likelihood_round2(ps, n, k, s, joint_prior)
like2.to_numpy().sum()
joint_posterior2 = joint_prior * like1 * like2
normalize(joint_posterior2)
plot_contour(joint_posterior2)
decorate(title='Joint posterior of N and p')
marginal_N = marginal(joint_posterior2, 0)
marginal_N.plot()
decorate(xlabel='N',
ylabel='PDF',
title='Posterior marginal distribution of N')
marginal_N.mean(), marginal_N.credible_interval(0.9)
marginal_p = marginal(joint_posterior2, 1)
marginal_p.plot()
decorate(xlabel='p',
ylabel='PDF',
title='Posterior marginal distribution of p')
def likelihood_round3(ps, n, k, s, joint_prior):
like = joint_prior.copy()
for N in joint_prior:
k00 = N - s[1]
like[N] = (binom.pmf(k[0,0,1], k00, ps) *
binom.pmf(k[0,1,1], k[0,1], ps) *
binom.pmf(k[1,0,1], k[1,0], ps) *
binom.pmf(k[1,1,1], k[1,1], ps))
return like
like3 = likelihood_round3(ps, n, k, s, joint_prior)
like3.to_numpy().sum()
joint_posterior3 = joint_posterior2 * like3
normalize(joint_posterior3)
plot_contour(joint_posterior3)
decorate(title='Joint posterior of N and p')
marginal_N = marginal(joint_posterior3, 0)
marginal_N.plot()
decorate(xlabel='N',
ylabel='PDF',
title='Posterior marginal distribution of N')
marginal_N.mean(), marginal_N.credible_interval(0.9)
marginal_p = marginal(joint_posterior3, 1)
marginal_p.plot()
decorate(xlabel='p',
ylabel='PDF',
title='Posterior marginal distribution of p')
joint_plot(joint_posterior3)
Explanation: Chao et al
End of explanation
data_sb = [-100, 60, 49, 4, 247, 112, 142, 12]
num_observed(data_sb)
def make_stats(data, num_rounds):
index = pd.MultiIndex.from_product([[0, 1]]*num_rounds)
K = pd.Series(data, index)
n = pd.Series(0, range(num_rounds))
for level in n.index:
n[level] = K.xs(1, level=level).sum()
t = [K.sum(level=list(range(i+1)))
for i in range(num_rounds)]
s = [num_observed(Kx) for Kx in t]
k = pd.concat(t)
return n, k, s
n, k, s = make_stats(data_sb, 3)
n
k
s
Ns = np.arange(s[2], 1000, 5)
prior_N = Pmf(1, Ns)
prior_N.index.name = 'N'
prior_N.shape
probs0 = np.linspace(0.5, 1.0, 51)
prior_p0 = Pmf(1, probs0)
prior_p0.index.name = 'p0'
prior_p0.head()
probs1 = np.linspace(0.1, 0.5, 41)
prior_p1 = Pmf(1, probs1)
prior_p1.index.name = 'p1'
prior_p1.head()
probs2 = np.linspace(0.1, 0.4, 31)
prior_p2 = Pmf(1, probs2)
prior_p2.index.name = 'p2'
prior_p2.head()
def make_joint3(prior0, prior1, prior2):
joint2 = make_joint(prior0, prior1)
joint2_pmf = Pmf(joint2.transpose().stack())
joint3 = make_joint(prior2, joint2_pmf)
return joint3
joint_prior = make_joint3(prior_p0, prior_p1, prior_N)
joint_prior.head()
likelihood = joint_prior.copy()
Ns = joint_prior.columns
for (p0, p1) in joint_prior.index:
like0 = binom.pmf(k[1], Ns, p0)
k0 = Ns - s[0]
like1 = binom.pmf(k[0,1], k0, p1) * binom.pmf(k[1,1], k[1], p1)
likelihood.loc[p0, p1] = like0 * like1
likelihood.to_numpy().sum()
from utils import normalize
joint_posterior = joint_prior * likelihood
normalize(joint_posterior)
joint_posterior.shape
from utils import marginal
posterior_N = marginal(joint_posterior, 0)
posterior_N.plot()
decorate(xlabel='N',
ylabel='PDF',
title='Posterior for N after two rounds')
posterior_N.mean(), posterior_N.credible_interval(0.9)
posterior_pmf = marginal(joint_posterior, 1)
posterior_pmf.head()
posterior_joint_ps = posterior_pmf.unstack().transpose()
posterior_joint_ps.head()
plot_contour(posterior_joint_ps)
decorate(title='Posterior joint distribution for p1 and p2')
posterior_p1 = marginal(posterior_joint_ps, 0)
posterior_p2 = marginal(posterior_joint_ps, 1)
posterior_p1.plot(label='p1')
posterior_p2.plot(label='p2')
decorate(xlabel='Probability of finding a bug',
ylabel='PDF',
title='Posterior marginal distributions of p1 and p2')
joint3 = make_joint(prior_p2, posterior_pmf)
joint3.head()
joint3_pmf = Pmf(joint3.stack())
joint3_pmf.head()
prior4 = make_joint(posterior_N, joint3_pmf)
prior4.head()
prior4.shape
joint2 = make_joint(posterior_N, prior_p2)
joint2.head()
like2 = joint2.copy()
Ns = joint2.columns
for p2 in joint2.index:
k00 = Ns - s[1]
like = (binom.pmf(k[0,0,1], k00, p2) *
binom.pmf(k[0,1,1], k[0,1], p2) *
binom.pmf(k[1,0,1], k[1,0], p2) *
binom.pmf(k[1,1,1], k[1,1], p2))
like2.loc[p2] = like
like2.to_numpy().sum()
like2.head()
like4 = prior4.copy()
for (p0, p1, p2) in prior4.index:
like4.loc[p0, p1, p2] = like2.loc[p2]
like4.to_numpy().sum()
posterior4 = prior4 * like4
normalize(posterior4)
marginal_N = marginal(posterior4, 0)
marginal_N.plot()
decorate(xlabel='N',
ylabel='PDF',
title='Posterior for N after three rounds')
marginal_N.mean(), marginal_N.credible_interval(0.9)
posterior_p012 = marginal(posterior4, 1)
posterior_p012.unstack().head()
posterior_p2 = marginal(posterior_p012.unstack(), 0)
posterior_p2.plot()
posterior_p01 = marginal(posterior_p012.unstack(), 1)
joint_plot(posterior_p01.unstack().transpose())
Explanation: Spina bifida
End of explanation
data4 = [-10000, 10, 182, 8, 74, 7, 20, 14, 709, 12, 650, 46, 104, 18, 157, 58]
num_observed(data4)
n, k, s = make_stats(data4, 4)
n
k
s
Explanation: Diabetes
End of explanation
<END_TASK> |
15,567 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2018 The TensorFlow Authors.
Step1: Ragged Tensors
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: Overview
Your data comes in many shapes; your tensors should too.
Ragged tensors are the TensorFlow equivalent of nested variable-length
lists. They make it easy to store and process data with non-uniform shapes,
including
Step3: There are also a number of methods and operations that are
specific to ragged tensors, including factory methods, conversion methods,
and value-mapping operations.
For a list of supported ops, see the tf.ragged package
documentation.
As with normal tensors, you can use Python-style indexing to access specific
slices of a ragged tensor. For more information, see the section on
Indexing below.
Step4: And just like normal tensors, you can use Python arithmetic and comparison
operators to perform elementwise operations. For more information, see the section on
Overloaded Operators below.
Step5: If you need to perform an elementwise transformation to the values of a RaggedTensor, you can use tf.ragged.map_flat_values, which takes a function plus one or more arguments, and applies the function to transform the RaggedTensor's values.
Step6: Constructing a ragged tensor
The simplest way to construct a ragged tensor is using
tf.ragged.constant, which builds the
RaggedTensor corresponding to a given nested Python list
Step7: Ragged tensors can also be constructed by pairing flat values tensors with
row-partitioning tensors indicating how those values should be divided into
rows, using factory classmethods such as tf.RaggedTensor.from_value_rowids,
tf.RaggedTensor.from_row_lengths, and
tf.RaggedTensor.from_row_splits.
tf.RaggedTensor.from_value_rowids
If you know which row each value belongs in, then you can build a RaggedTensor using a value_rowids row-partitioning tensor
Step8: tf.RaggedTensor.from_row_lengths
If you know how long each row is, then you can use a row_lengths row-partitioning tensor
Step9: tf.RaggedTensor.from_row_splits
If you know the index where each row starts and ends, then you can use a row_splits row-partitioning tensor
Step10: See the tf.RaggedTensor class documentation for a full list of factory methods.
What you can store in a ragged tensor
As with normal Tensors, the values in a RaggedTensor must all have the same
type; and the values must all be at the same nesting depth (the rank of the
tensor)
Step11: Example use case
The following example demonstrates how RaggedTensors can be used to construct
and combine unigram and bigram embeddings for a batch of variable-length
queries, using special markers for the beginning and end of each sentence.
For more details on the ops used in this example, see the tf.ragged package documentation.
Step12: Ragged tensors
Step13: The method tf.RaggedTensor.bounding_shape can be used to find a tight
bounding shape for a given RaggedTensor
Step14: Ragged vs sparse tensors
A ragged tensor should not be thought of as a type of sparse tensor, but
rather as a dense tensor with an irregular shape.
As an illustrative example, consider how array operations such as concat,
stack, and tile are defined for ragged vs. sparse tensors. Concatenating
ragged tensors joins each row to form a single row with the combined length
Step15: But concatenating sparse tensors is equivalent to concatenating the corresponding dense tensors,
as illustrated by the following example (where Ø indicates missing values)
Step16: For another example of why this distinction is important, consider the
definition of “the mean value of each row” for an op such as tf.reduce_mean.
For a ragged tensor, the mean value for a row is the sum of the
row’s values divided by the row’s width.
But for a sparse tensor, the mean value for a row is the sum of the
row’s values divided by the sparse tensor’s overall width (which is
greater than or equal to the width of the longest row).
Overloaded operators
The RaggedTensor class overloads the standard Python arithmetic and comparison
operators, making it easy to perform basic elementwise math
Step17: Since the overloaded operators perform elementwise computations, the inputs to
all binary operations must have the same shape, or be broadcastable to the same
shape. In the simplest broadcasting case, a single scalar is combined
elementwise with each value in a ragged tensor
Step18: For a discussion of more advanced cases, see the section on
Broadcasting.
Ragged tensors overload the same set of operators as normal Tensors
Step19: Indexing a 3-D ragged tensor with 2 ragged dimensions
Step20: RaggedTensors supports multidimensional indexing and slicing, with one
restriction
Step21: Evaluating ragged tensors
Eager execution
In eager execution mode, ragged tensors are evaluated immediately. To access the
values they contain, you can
Step22: Use Python indexing. If the tensor piece you select contains no ragged
dimensions, then it will be returned as an EagerTensor. You can then use
the numpy() method to access the value directly.
Step23: Decompose the ragged tensor into its components, using the
tf.RaggedTensor.values
and
tf.RaggedTensor.row_splits
properties, or row-partitioning methods such as tf.RaggedTensor.row_lengths()
and tf.RaggedTensor.value_rowids().
Step24: Graph execution
In graph execution mode, ragged tensors can be evaluated using session.run(),
just like standard tensors.
Step25: The resulting value will be a
tf.ragged.RaggedTensorValue
instance. To access the values contained in a RaggedTensorValue, you can
Step26: Decompose the ragged tensor into its components, using the
tf.ragged.RaggedTensorValue.values
and
tf.ragged.RaggedTensorValue.row_splits
properties.
Step27: Broadcasting
Broadcasting is the process of making tensors with different shapes have
compatible shapes for elementwise operations. For more background on
broadcasting, see
Step28: Here are some examples of shapes that do not broadcast
Step29: RaggedTensor encoding
Ragged tensors are encoded using the RaggedTensor class. Internally, each
RaggedTensor consists of
Step30: Multiple ragged dimensions
A ragged tensor with multiple ragged dimensions is encoded by using a nested
RaggedTensor for the values tensor. Each nested RaggedTensor adds a single
ragged dimension.
Step31: The factory function tf.RaggedTensor.from_nested_row_splits may be used to construct a
RaggedTensor with multiple ragged dimensions directly, by providing a list of
row_splits tensors
Step32: Uniform Inner Dimensions
Ragged tensors with uniform inner dimensions are encoded by using a
multidimensional tf.Tensor for values.
Step33: Alternative row-partitioning schemes
The RaggedTensor class uses row_splits as the primary mechanism to store
information about how the values are partitioned into rows. However,
RaggedTensor also provides support for four alternative row-partitioning
schemes, which can be more convenient to use depending on how your data is
formatted. Internally, RaggedTensor uses these additional schemes to improve
efficiency in some contexts.
<dl>
<dt>Row lengths</dt>
<dd>`row_lengths` is a vector with shape `[nrows]`, which specifies the
length of each row.</dd>
<dt>Row starts</dt>
<dd>`row_starts` is a vector with shape `[nrows]`, which specifies the start
offset of each row. Equivalent to `row_splits[
Step34: The RaggedTensor class defines methods which can be used to construct
each of these row-partitioning tensors. | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
import math
import tensorflow.compat.v1 as tf
Explanation: Ragged Tensors
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/r1/guide/ragged_tensors.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/r1/guide/ragged_tensors.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
Note: This is an archived TF1 notebook. These are configured
to run in TF2's
compatibility mode
but will run in TF1 as well. To use TF1 in Colab, use the
%tensorflow_version 1.x
magic.
Setup
End of explanation
digits = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
words = tf.ragged.constant([["So", "long"], ["thanks", "for", "all", "the", "fish"]])
print(tf.add(digits, 3))
print(tf.reduce_mean(digits, axis=1))
print(tf.concat([digits, [[5, 3]]], axis=0))
print(tf.tile(digits, [1, 2]))
print(tf.strings.substr(words, 0, 2))
Explanation: Overview
Your data comes in many shapes; your tensors should too.
Ragged tensors are the TensorFlow equivalent of nested variable-length
lists. They make it easy to store and process data with non-uniform shapes,
including:
Variable-length features, such as the set of actors in a movie.
Batches of variable-length sequential inputs, such as sentences or video
clips.
Hierarchical inputs, such as text documents that are subdivided into
sections, paragraphs, sentences, and words.
Individual fields in structured inputs, such as protocol buffers.
What you can do with a ragged tensor
Ragged tensors are supported by more than a hundred TensorFlow operations,
including math operations (such as tf.add and tf.reduce_mean), array operations
(such as tf.concat and tf.tile), string manipulation ops (such as
tf.substr), and many others:
End of explanation
print(digits[0]) # First row
print(digits[:, :2]) # First two values in each row.
print(digits[:, -2:]) # Last two values in each row.
Explanation: There are also a number of methods and operations that are
specific to ragged tensors, including factory methods, conversion methods,
and value-mapping operations.
For a list of supported ops, see the tf.ragged package
documentation.
As with normal tensors, you can use Python-style indexing to access specific
slices of a ragged tensor. For more information, see the section on
Indexing below.
End of explanation
print(digits + 3)
print(digits + tf.ragged.constant([[1, 2, 3, 4], [], [5, 6, 7], [8], []]))
Explanation: And just like normal tensors, you can use Python arithmetic and comparison
operators to perform elementwise operations. For more information, see the section on
Overloaded Operators below.
End of explanation
times_two_plus_one = lambda x: x * 2 + 1
print(tf.ragged.map_flat_values(times_two_plus_one, digits))
Explanation: If you need to perform an elementwise transformation to the values of a RaggedTensor, you can use tf.ragged.map_flat_values, which takes a function plus one or more arguments, and applies the function to transform the RaggedTensor's values.
End of explanation
sentences = tf.ragged.constant([
["Let's", "build", "some", "ragged", "tensors", "!"],
["We", "can", "use", "tf.ragged.constant", "."]])
print(sentences)
paragraphs = tf.ragged.constant([
[['I', 'have', 'a', 'cat'], ['His', 'name', 'is', 'Mat']],
[['Do', 'you', 'want', 'to', 'come', 'visit'], ["I'm", 'free', 'tomorrow']],
])
print(paragraphs)
Explanation: Constructing a ragged tensor
The simplest way to construct a ragged tensor is using
tf.ragged.constant, which builds the
RaggedTensor corresponding to a given nested Python list:
End of explanation
print(tf.RaggedTensor.from_value_rowids(
values=[3, 1, 4, 1, 5, 9, 2, 6],
value_rowids=[0, 0, 0, 0, 2, 2, 2, 3]))
Explanation: Ragged tensors can also be constructed by pairing flat values tensors with
row-partitioning tensors indicating how those values should be divided into
rows, using factory classmethods such as tf.RaggedTensor.from_value_rowids,
tf.RaggedTensor.from_row_lengths, and
tf.RaggedTensor.from_row_splits.
tf.RaggedTensor.from_value_rowids
If you know which row each value belongs in, then you can build a RaggedTensor using a value_rowids row-partitioning tensor:
End of explanation
print(tf.RaggedTensor.from_row_lengths(
values=[3, 1, 4, 1, 5, 9, 2, 6],
row_lengths=[4, 0, 3, 1]))
Explanation: tf.RaggedTensor.from_row_lengths
If you know how long each row is, then you can use a row_lengths row-partitioning tensor:
End of explanation
print(tf.RaggedTensor.from_row_splits(
values=[3, 1, 4, 1, 5, 9, 2, 6],
row_splits=[0, 4, 4, 7, 8]))
Explanation: tf.RaggedTensor.from_row_splits
If you know the index where each row starts and ends, then you can use a row_splits row-partitioning tensor:
End of explanation
print(tf.ragged.constant([["Hi"], ["How", "are", "you"]])) # ok: type=string, rank=2
print(tf.ragged.constant([[[1, 2], [3]], [[4, 5]]])) # ok: type=int32, rank=3
try:
tf.ragged.constant([["one", "two"], [3, 4]]) # bad: multiple types
except ValueError as exception:
print(exception)
try:
tf.ragged.constant(["A", ["B", "C"]]) # bad: multiple nesting depths
except ValueError as exception:
print(exception)
Explanation: See the tf.RaggedTensor class documentation for a full list of factory methods.
What you can store in a ragged tensor
As with normal Tensors, the values in a RaggedTensor must all have the same
type; and the values must all be at the same nesting depth (the rank of the
tensor):
End of explanation
queries = tf.ragged.constant([['Who', 'is', 'Dan', 'Smith'],
['Pause'],
['Will', 'it', 'rain', 'later', 'today']])
# Create an embedding table.
num_buckets = 1024
embedding_size = 4
embedding_table = tf.Variable(
tf.truncated_normal([num_buckets, embedding_size],
stddev=1.0 / math.sqrt(embedding_size)))
# Look up the embedding for each word.
word_buckets = tf.strings.to_hash_bucket_fast(queries, num_buckets)
word_embeddings = tf.ragged.map_flat_values(
tf.nn.embedding_lookup, embedding_table, word_buckets) # ①
# Add markers to the beginning and end of each sentence.
marker = tf.fill([queries.nrows(), 1], '#')
padded = tf.concat([marker, queries, marker], axis=1) # ②
# Build word bigrams & look up embeddings.
bigrams = tf.string_join([padded[:, :-1], padded[:, 1:]], separator='+') # ③
bigram_buckets = tf.strings.to_hash_bucket_fast(bigrams, num_buckets)
bigram_embeddings = tf.ragged.map_flat_values(
tf.nn.embedding_lookup, embedding_table, bigram_buckets) # ④
# Find the average embedding for each sentence
all_embeddings = tf.concat([word_embeddings, bigram_embeddings], axis=1) # ⑤
avg_embedding = tf.reduce_mean(all_embeddings, axis=1) # ⑥
print(avg_embedding)
Explanation: Example use case
The following example demonstrates how RaggedTensors can be used to construct
and combine unigram and bigram embeddings for a batch of variable-length
queries, using special markers for the beginning and end of each sentence.
For more details on the ops used in this example, see the tf.ragged package documentation.
End of explanation
tf.ragged.constant([["Hi"], ["How", "are", "you"]]).shape
Explanation: Ragged tensors: definitions
Ragged and uniform dimensions
A ragged tensor is a tensor with one or more ragged dimensions,
which are dimensions whose slices may have different lengths. For example, the
inner (column) dimension of rt=[[3, 1, 4, 1], [], [5, 9, 2], [6], []] is
ragged, since the column slices (rt[0, :], ..., rt[4, :]) have different
lengths. Dimensions whose slices all have the same length are called uniform
dimensions.
The outermost dimension of a ragged tensor is always uniform, since it consists
of a single slice (and so there is no possibility for differing slice lengths).
In addition to the uniform outermost dimension, ragged tensors may also have
uniform inner dimensions. For example, we might store the word embeddings for
each word in a batch of sentences using a ragged tensor with shape
[num_sentences, (num_words), embedding_size], where the parentheses around
(num_words) indicate that the dimension is ragged.
Ragged tensors may have multiple ragged dimensions. For example, we could store
a batch of structured text documents using a tensor with shape [num_documents,
(num_paragraphs), (num_sentences), (num_words)] (where again parentheses are
used to indicate ragged dimensions).
Ragged tensor shape restrictions
The shape of a ragged tensor is currently restricted to have the following form:
A single uniform dimension
Followed by one or more ragged dimensions
Followed by zero or more uniform dimensions.
Note: These restrictions are a consequence of the current implementation, and we
may relax them in the future.
Rank and ragged rank
The total number of dimensions in a ragged tensor is called its rank, and
the number of ragged dimensions in a ragged tensor is called its ragged
rank. In graph execution mode (i.e., non-eager mode), a tensor's ragged rank
is fixed at creation time: it can't depend
on runtime values, and can't vary dynamically for different session runs.
A potentially ragged tensor is a value that might be
either a tf.Tensor or a tf.RaggedTensor. The
ragged rank of a tf.Tensor is defined to be zero.
RaggedTensor shapes
When describing the shape of a RaggedTensor, ragged dimensions are indicated by
enclosing them in parentheses. For example, as we saw above, the shape of a 3-D
RaggedTensor that stores word embeddings for each word in a batch of sentences
can be written as [num_sentences, (num_words), embedding_size].
The RaggedTensor.shape attribute returns a tf.TensorShape for a
ragged tensor, where ragged dimensions have size None:
End of explanation
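A small sketch (with made-up values) showing how the shape and ragged rank are reported for a tensor that also has a uniform inner dimension:
rt = tf.ragged.constant([[[1, 2], [3, 4]], [[5, 6]]], ragged_rank=1)
print(rt.shape)        # (2, None, 2) -- the ragged dimension reports None
print(rt.ragged_rank)  # 1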
print(tf.ragged.constant([["Hi"], ["How", "are", "you"]]).bounding_shape())
Explanation: The method tf.RaggedTensor.bounding_shape can be used to find a tight
bounding shape for a given RaggedTensor:
End of explanation
ragged_x = tf.ragged.constant([["John"], ["a", "big", "dog"], ["my", "cat"]])
ragged_y = tf.ragged.constant([["fell", "asleep"], ["barked"], ["is", "fuzzy"]])
print(tf.concat([ragged_x, ragged_y], axis=1))
Explanation: Ragged vs sparse tensors
A ragged tensor should not be thought of as a type of sparse tensor, but
rather as a dense tensor with an irregular shape.
As an illustrative example, consider how array operations such as concat,
stack, and tile are defined for ragged vs. sparse tensors. Concatenating
ragged tensors joins each row to form a single row with the combined length:
End of explanation
sparse_x = ragged_x.to_sparse()
sparse_y = ragged_y.to_sparse()
sparse_result = tf.sparse.concat(sp_inputs=[sparse_x, sparse_y], axis=1)
print(tf.sparse.to_dense(sparse_result, ''))
Explanation: But concatenating sparse tensors is equivalent to concatenating the corresponding dense tensors,
as illustrated by the following example (where Ø indicates missing values):
End of explanation
x = tf.ragged.constant([[1, 2], [3], [4, 5, 6]])
y = tf.ragged.constant([[1, 1], [2], [3, 3, 3]])
print(x + y)
Explanation: For another example of why this distinction is important, consider the
definition of “the mean value of each row” for an op such as tf.reduce_mean.
For a ragged tensor, the mean value for a row is the sum of the
row’s values divided by the row’s width.
But for a sparse tensor, the mean value for a row is the sum of the
row’s values divided by the sparse tensor’s overall width (which is
greater than or equal to the width of the longest row).
Overloaded operators
The RaggedTensor class overloads the standard Python arithmetic and comparison
operators, making it easy to perform basic elementwise math:
End of explanation
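A tiny sketch of that row-mean difference, comparing a ragged tensor with its zero-padded dense counterpart (toy values):
rt = tf.ragged.constant([[1., 2.], [3.]])
print(tf.reduce_mean(rt, axis=1))                              # [1.5, 3.0]
print(tf.reduce_mean(rt.to_tensor(default_value=0.), axis=1))  # [1.5, 1.5] -- padding counts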
x = tf.ragged.constant([[1, 2], [3], [4, 5, 6]])
print(x + 3)
Explanation: Since the overloaded operators perform elementwise computations, the inputs to
all binary operations must have the same shape, or be broadcastable to the same
shape. In the simplest broadcasting case, a single scalar is combined
elementwise with each value in a ragged tensor:
End of explanation
queries = tf.ragged.constant(
[['Who', 'is', 'George', 'Washington'],
['What', 'is', 'the', 'weather', 'tomorrow'],
['Goodnight']])
print(queries[1])
print(queries[1, 2]) # A single word
print(queries[1:]) # Everything but the first row
print(queries[:, :3]) # The first 3 words of each query
print(queries[:, -2:]) # The last 2 words of each query
Explanation: For a discussion of more advanced cases, see the section on
Broadcasting.
Ragged tensors overload the same set of operators as normal Tensors: the unary
operators -, ~, and abs(); and the binary operators +, -, *, /,
//, %, **, &, |, ^, <, <=, >, and >=. Note that, as with
standard Tensors, binary == is not overloaded; you can use
tf.equal() to check elementwise equality.
Indexing
Ragged tensors support Python-style indexing, including multidimensional
indexing and slicing. The following examples demonstrate ragged tensor indexing
with a 2-D and a 3-D ragged tensor.
Indexing a 2-D ragged tensor with 1 ragged dimension
End of explanation
rt = tf.ragged.constant([[[1, 2, 3], [4]],
[[5], [], [6]],
[[7]],
[[8, 9], [10]]])
print(rt[1]) # Second row (2-D RaggedTensor)
print(rt[3, 0]) # First element of fourth row (1-D Tensor)
print(rt[:, 1:3]) # Items 1-3 of each row (3-D RaggedTensor)
print(rt[:, -1:]) # Last item of each row (3-D RaggedTensor)
Explanation: Indexing a 3-D ragged tensor with 2 ragged dimensions
End of explanation
ragged_sentences = tf.ragged.constant([
['Hi'], ['Welcome', 'to', 'the', 'fair'], ['Have', 'fun']])
print(ragged_sentences.to_tensor(default_value=''))
print(ragged_sentences.to_sparse())
x = [[1, 3, -1, -1], [2, -1, -1, -1], [4, 5, 8, 9]]
print(tf.RaggedTensor.from_tensor(x, padding=-1))
st = tf.SparseTensor(indices=[[0, 0], [2, 0], [2, 1]],
values=['a', 'b', 'c'],
dense_shape=[3, 3])
print(tf.RaggedTensor.from_sparse(st))
Explanation: RaggedTensors supports multidimensional indexing and slicing, with one
restriction: indexing into a ragged dimension is not allowed. This case is
problematic because the indicated value may exist in some rows but not others.
In such cases, it's not obvious whether we should (1) raise an IndexError; (2)
use a default value; or (3) skip that value and return a tensor with fewer rows
than we started with. Following the
guiding principles of Python
("In the face
of ambiguity, refuse the temptation to guess" ), we currently disallow this
operation.
Tensor Type Conversion
The RaggedTensor class defines methods that can be used to convert
between RaggedTensors and tf.Tensors or tf.SparseTensors:
End of explanation
rt = tf.ragged.constant([[1, 2], [3, 4, 5], [6], [], [7]])
print(rt.to_list())
Explanation: Evaluating ragged tensors
Eager execution
In eager execution mode, ragged tensors are evaluated immediately. To access the
values they contain, you can:
Use the
tf.RaggedTensor.to_list()
method, which converts the ragged tensor to a Python list.
End of explanation
print(rt[1].numpy())
Explanation: Use Python indexing. If the tensor piece you select contains no ragged
dimensions, then it will be returned as an EagerTensor. You can then use
the numpy() method to access the value directly.
End of explanation
print(rt.values)
print(rt.row_splits)
Explanation: Decompose the ragged tensor into its components, using the
tf.RaggedTensor.values
and
tf.RaggedTensor.row_splits
properties, or row-partitioning methods such as tf.RaggedTensor.row_lengths()
and tf.RaggedTensor.value_rowids().
End of explanation
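The row-partitioning methods mentioned above can be inspected the same way (sketch):
print(rt.row_lengths())    # values per row: [2, 3, 1, 0, 1]
print(rt.value_rowids())   # row index of each value: [0, 0, 1, 1, 1, 2, 4]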
with tf.Session() as session:
rt = tf.ragged.constant([[1, 2], [3, 4, 5], [6], [], [7]])
rt_value = session.run(rt)
Explanation: Graph execution
In graph execution mode, ragged tensors can be evaluated using session.run(),
just like standard tensors.
End of explanation
print(rt_value.to_list())
Explanation: The resulting value will be a
tf.ragged.RaggedTensorValue
instance. To access the values contained in a RaggedTensorValue, you can:
Use the
tf.ragged.RaggedTensorValue.to_list()
method, which converts the RaggedTensorValue to a Python list.
End of explanation
print(rt_value.values)
print(rt_value.row_splits)
tf.enable_eager_execution() # Resume eager execution mode.
Explanation: Decompose the ragged tensor into its components, using the
tf.ragged.RaggedTensorValue.values
and
tf.ragged.RaggedTensorValue.row_splits
properties.
End of explanation
# x (2D ragged): 2 x (num_rows)
# y (scalar)
# result (2D ragged): 2 x (num_rows)
x = tf.ragged.constant([[1, 2], [3]])
y = 3
print(x + y)
# x (2d ragged): 3 x (num_rows)
# y (2d tensor): 3 x 1
# Result (2d ragged): 3 x (num_rows)
x = tf.ragged.constant(
[[10, 87, 12],
[19, 53],
[12, 32]])
y = [[1000], [2000], [3000]]
print(x + y)
# x (3d ragged): 2 x (r1) x 2
# y (2d ragged): 1 x 1
# Result (3d ragged): 2 x (r1) x 2
x = tf.ragged.constant(
[[[1, 2], [3, 4], [5, 6]],
[[7, 8]]],
ragged_rank=1)
y = tf.constant([[10]])
print(x + y)
# x (3d ragged): 2 x (r1) x (r2) x 1
# y (1d tensor): 3
# Result (3d ragged): 2 x (r1) x (r2) x 3
x = tf.ragged.constant(
[
[
[[1], [2]],
[],
[[3]],
[[4]],
],
[
[[5], [6]],
[[7]]
]
],
ragged_rank=2)
y = tf.constant([10, 20, 30])
print(x + y)
Explanation: Broadcasting
Broadcasting is the process of making tensors with different shapes have
compatible shapes for elementwise operations. For more background on
broadcasting, see:
Numpy: Broadcasting
tf.broadcast_dynamic_shape
tf.broadcast_to
The basic steps for broadcasting two inputs x and y to have compatible
shapes are:
If x and y do not have the same number of dimensions, then add outer
dimensions (with size 1) until they do.
For each dimension where x and y have different sizes:
If x or y have size 1 in dimension d, then repeat its values
across dimension d to match the other input's size.
Otherwise, raise an exception (x and y are not broadcast
compatible).
Where the size of a tensor in a uniform dimension is a single number (the size
of slices across that dimension); and the size of a tensor in a ragged dimension
is a list of slice lengths (for all slices across that dimension).
Broadcasting examples
End of explanation
# x (2d ragged): 3 x (r1)
# y (2d tensor): 3 x 4 # trailing dimensions do not match
x = tf.ragged.constant([[1, 2], [3, 4, 5, 6], [7]])
y = tf.constant([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
try:
x + y
except tf.errors.InvalidArgumentError as exception:
print(exception)
# x (2d ragged): 3 x (r1)
# y (2d ragged): 3 x (r2) # ragged dimensions do not match.
x = tf.ragged.constant([[1, 2, 3], [4], [5, 6]])
y = tf.ragged.constant([[10, 20], [30, 40], [50]])
try:
x + y
except tf.errors.InvalidArgumentError as exception:
print(exception)
# x (3d ragged): 3 x (r1) x 2
# y (3d ragged): 3 x (r1) x 3 # trailing dimensions do not match
x = tf.ragged.constant([[[1, 2], [3, 4], [5, 6]],
[[7, 8], [9, 10]]])
y = tf.ragged.constant([[[1, 2, 0], [3, 4, 0], [5, 6, 0]],
[[7, 8, 0], [9, 10, 0]]])
try:
x + y
except tf.errors.InvalidArgumentError as exception:
print(exception)
Explanation: Here are some examples of shapes that do not broadcast:
End of explanation
rt = tf.RaggedTensor.from_row_splits(
values=[3, 1, 4, 1, 5, 9, 2],
row_splits=[0, 4, 4, 6, 7])
print(rt)
Explanation: RaggedTensor encoding
Ragged tensors are encoded using the RaggedTensor class. Internally, each
RaggedTensor consists of:
A values tensor, which concatenates the variable-length rows into a
flattened list.
A row_splits vector, which indicates how those flattened values are
divided into rows. In particular, the values for row rt[i] are stored in
the slice rt.values[rt.row_splits[i]:rt.row_splits[i+1]].
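You can check that formula directly on the rt built above: row 0 spans values[0:4] and the empty row 1 spans values[4:4].
print(rt.values[rt.row_splits[0]:rt.row_splits[1]])  # [3 1 4 1]
print(rt.values[rt.row_splits[1]:rt.row_splits[2]])  # [] (the empty row)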
End of explanation
rt = tf.RaggedTensor.from_row_splits(
values=tf.RaggedTensor.from_row_splits(
values=[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
row_splits=[0, 3, 3, 5, 9, 10]),
row_splits=[0, 1, 1, 5])
print(rt)
print("Shape: {}".format(rt.shape))
print("Number of ragged dimensions: {}".format(rt.ragged_rank))
Explanation: Multiple ragged dimensions
A ragged tensor with multiple ragged dimensions is encoded by using a nested
RaggedTensor for the values tensor. Each nested RaggedTensor adds a single
ragged dimension.
End of explanation
rt = tf.RaggedTensor.from_nested_row_splits(
flat_values=[10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
nested_row_splits=([0, 1, 1, 5], [0, 3, 3, 5, 9, 10]))
print(rt)
Explanation: The factory function tf.RaggedTensor.from_nested_row_splits may be used to construct a
RaggedTensor with multiple ragged dimensions directly, by providing a list of
row_splits tensors:
End of explanation
rt = tf.RaggedTensor.from_row_splits(
values=[[1, 3], [0, 0], [1, 3], [5, 3], [3, 3], [1, 2]],
row_splits=[0, 3, 4, 6])
print(rt)
print("Shape: {}".format(rt.shape))
print("Number of ragged dimensions: {}".format(rt.ragged_rank))
Explanation: Uniform Inner Dimensions
Ragged tensors with uniform inner dimensions are encoded by using a
multidimensional tf.Tensor for values.
End of explanation
values = [3, 1, 4, 1, 5, 9, 2, 6]
print(tf.RaggedTensor.from_row_splits(values, row_splits=[0, 4, 4, 7, 8, 8]))
print(tf.RaggedTensor.from_row_lengths(values, row_lengths=[4, 0, 3, 1, 0]))
print(tf.RaggedTensor.from_row_starts(values, row_starts=[0, 4, 4, 7, 8]))
print(tf.RaggedTensor.from_row_limits(values, row_limits=[4, 4, 7, 8, 8]))
print(tf.RaggedTensor.from_value_rowids(
values, value_rowids=[0, 0, 0, 0, 2, 2, 2, 3], nrows=5))
Explanation: Alternative row-partitioning schemes
The RaggedTensor class uses row_splits as the primary mechanism to store
information about how the values are partitioned into rows. However,
RaggedTensor also provides support for four alternative row-partitioning
schemes, which can be more convenient to use depending on how your data is
formatted. Internally, RaggedTensor uses these additional schemes to improve
efficiency in some contexts.
<dl>
<dt>Row lengths</dt>
<dd>`row_lengths` is a vector with shape `[nrows]`, which specifies the
length of each row.</dd>
<dt>Row starts</dt>
<dd>`row_starts` is a vector with shape `[nrows]`, which specifies the start
offset of each row. Equivalent to `row_splits[:-1]`.</dd>
<dt>Row limits</dt>
<dd>`row_limits` is a vector with shape `[nrows]`, which specifies the stop
offset of each row. Equivalent to `row_splits[1:]`.</dd>
<dt>Row indices and number of rows</dt>
<dd>`value_rowids` is a vector with shape `[nvals]`, corresponding
one-to-one with values, which specifies each value's row index. In
particular, the row `rt[row]` consists of the values `rt.values[j]` where
`value_rowids[j]==row`. \
`nrows` is an integer that specifies the number of rows in the
`RaggedTensor`. In particular, `nrows` is used to indicate trailing empty
rows.</dd>
</dl>
For example, the following ragged tensors are equivalent:
End of explanation
rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
print(" values: {}".format(rt.values))
print(" row_splits: {}".format(rt.row_splits))
print(" row_lengths: {}".format(rt.row_lengths()))
print(" row_starts: {}".format(rt.row_starts()))
print(" row_limits: {}".format(rt.row_limits()))
print("value_rowids: {}".format(rt.value_rowids()))
Explanation: The RaggedTensor class defines methods which can be used to construct
each of these row-partitioning tensors.
End of explanation
<END_TASK> |
15,568 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Custom Estimator with Keras
Learning Objectives
- Learn how to create custom estimator using tf.keras
Introduction
Up until now we've been limited in our model architectures to premade estimators. But what if we want more control over the model? We can use the popular Keras API to create a custom model. Keras is a high-level API to build and train deep learning models. It is user-friendly, modular and makes writing custom building blocks of Tensorflow code much easier.
Once we've built a Keras model, we then convert it to an estimator using tf.keras.estimator.model_to_estimator(). This gives us access to all the flexibility of Keras for creating deep learning models, as well as the production readiness of the estimator framework!
Step1: Train and Evaluate input functions
For the most part, we can use the same train and evaluation input functions that we had in previous labs. Note the function create_feature_keras_input below. We will use this to create the first layer of the model. This function is called in turn during the train_input_fn and eval_input_fn as well.
Step2: Feature Engineering
We'll use the same engineered features that we had in previous labs.
Step3: Calculate the number of feature columns that will be input to our Keras model
Step4: Build Custom Keras Model
Now we can begin building our Keras model. Have a look at the guide here to see more explanation.
Step5: Serving input function
Once we've constructed our model in Keras, we next create the serving input function. This is also similar to what we have done in previous labs. Note that we use our create_feature_keras_input function again so that we perform our feature engineering during inference.
Step6: Train and Evaluate
To train our model, we can use train_and_evaluate as we have before. Note that we use tf.keras.estimator.model_to_estimator to create our estimator. It takes as arguments the compiled keras model, the OUTDIR, and optionally a tf.estimator.Runconfig. Have a look at the documentation for tf.keras.estimator.model_to_estimator to make sure you understand how arguments are used. | <ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
import numpy as np
import shutil
print(tf.__version__)
Explanation: Custom Estimator with Keras
Learning Objectives
- Learn how to create custom estimator using tf.keras
Introduction
Up until now we've been limited in our model architectures to premade estimators. But what if we want more control over the model? We can use the popular Keras API to create a custom model. Keras is a high-level API to build and train deep learning models. It is user-friendly, modular and makes writing custom building blocks of Tensorflow code much easier.
Once we've built a Keras model, we then convert it to an estimator using tf.keras.estimator.model_to_estimator(). This gives us access to all the flexibility of Keras for creating deep learning models, as well as the production readiness of the estimator framework!
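The conversion itself is a single call. A minimal sketch with a toy model (the real version, wired into train_and_evaluate, appears at the end of this notebook):
toy_model = tf.keras.Sequential([tf.keras.layers.Dense(units = 1, input_shape = (3,))])
toy_model.compile(optimizer = tf.train.GradientDescentOptimizer(0.01), loss = "mean_squared_error")
toy_estimator = tf.keras.estimator.model_to_estimator(keras_model = toy_model)  # model_dir defaults to a temp dir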
End of explanation
CSV_COLUMN_NAMES = ["fare_amount","dayofweek","hourofday","pickuplon","pickuplat","dropofflon","dropofflat"]
CSV_DEFAULTS = [[0.0],[1],[0],[-74.0], [40.0], [-74.0], [40.7]]
def read_dataset(csv_path):
def parse_row(row):
# Decode the CSV row into list of TF tensors
fields = tf.decode_csv(records = row, record_defaults = CSV_DEFAULTS)
# Pack the result into a dictionary
features = dict(zip(CSV_COLUMN_NAMES, fields))
# NEW: Add engineered features
features = add_engineered_features(features)
# Separate the label from the features
label = features.pop("fare_amount") # remove label from features and store
return features, label
# Create a dataset containing the text lines.
dataset = tf.data.Dataset.list_files(file_pattern = csv_path) # (i.e. data_file_*.csv)
dataset = dataset.flat_map(map_func = lambda filename: tf.data.TextLineDataset(filenames = filename).skip(count = 1))
# Parse each CSV row into correct (features,label) format for Estimator API
dataset = dataset.map(map_func = parse_row)
return dataset
def create_feature_keras_input(features, label):
features = tf.feature_column.input_layer(features = features, feature_columns = create_feature_columns())
return features, label
def train_input_fn(csv_path, batch_size = 128):
#1. Convert CSV into tf.data.Dataset with (features, label) format
dataset = read_dataset(csv_path)
#2. Shuffle, repeat, and batch the examples.
dataset = dataset.shuffle(buffer_size = 1000).repeat(count = None).batch(batch_size = batch_size)
#3. Create single feature tensor for input to Keras Model
dataset = dataset.map(map_func = create_feature_keras_input)
return dataset
def eval_input_fn(csv_path, batch_size = 128):
#1. Convert CSV into tf.data.Dataset with (features, label) format
dataset = read_dataset(csv_path)
#2.Batch the examples.
dataset = dataset.batch(batch_size = batch_size)
#3. Create single feature tensor for input to Keras Model
dataset = dataset.map(map_func = create_feature_keras_input)
return dataset
Explanation: Train and Evaluate input functions
For the most part, we can use the same train and evaluation input functions that we had in previous labs. Note the function create_feature_keras_input below. We will use this to create the first layer of the model. This function is called in turn during the train_input_fn and eval_input_fn as well.
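A quick way to sanity-check the pipeline is to pull a single batch through a session (a sketch — it assumes taxi-train.csv is present in the working directory, as in the training step later on):
# ds = train_input_fn("./taxi-train.csv", batch_size = 2)
# features, label = ds.make_one_shot_iterator().get_next()
# with tf.Session() as sess:
#     print(sess.run([features, label]))  # one batch of dense feature rows and fares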
End of explanation
def add_engineered_features(features):
features["dayofweek"] = features["dayofweek"] - 1 # subtract one since our days of week are 1-7 instead of 0-6
features["latdiff"] = features["pickuplat"] - features["dropofflat"] # East/West
features["londiff"] = features["pickuplon"] - features["dropofflon"] # North/South
features["euclidean_dist"] = tf.sqrt(x = features["latdiff"]**2 + features["londiff"]**2)
return features
def create_feature_columns():
# One hot encode dayofweek and hourofday
fc_dayofweek = tf.feature_column.categorical_column_with_identity(key = "dayofweek", num_buckets = 7)
fc_hourofday = tf.feature_column.categorical_column_with_identity(key = "hourofday", num_buckets = 24)
# Cross features to get combination of day and hour
fc_day_hr = tf.feature_column.crossed_column(keys = [fc_dayofweek, fc_hourofday], hash_bucket_size = 24 * 7)
# Bucketize latitudes and longitudes
NBUCKETS = 16
latbuckets = np.linspace(start = 38.0, stop = 42.0, num = NBUCKETS).tolist()
lonbuckets = np.linspace(start = -76.0, stop = -72.0, num = NBUCKETS).tolist()
fc_bucketized_plat = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "pickuplon"), boundaries = lonbuckets)
fc_bucketized_plon = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "pickuplat"), boundaries = latbuckets)
fc_bucketized_dlat = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "dropofflon"), boundaries = lonbuckets)
fc_bucketized_dlon = tf.feature_column.bucketized_column(source_column = tf.feature_column.numeric_column(key = "dropofflat"), boundaries = latbuckets)
feature_columns = [
#1. Engineered using tf.feature_column module
tf.feature_column.indicator_column(categorical_column = fc_day_hr), # 168 columns
fc_bucketized_plat, # 16 + 1 = 17 columns
fc_bucketized_plon, # 16 + 1 = 17 columns
fc_bucketized_dlat, # 16 + 1 = 17 columns
fc_bucketized_dlon, # 16 + 1 = 17 columns
#2. Engineered in input functions
tf.feature_column.numeric_column(key = "latdiff"), # 1 column
tf.feature_column.numeric_column(key = "londiff"), # 1 column
tf.feature_column.numeric_column(key = "euclidean_dist") # 1 column
]
return feature_columns
Explanation: Feature Engineering
We'll use the same engineered features that we had in previous labs.
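As a quick numeric check of the distance features defined above (the coordinates here are made up):
latdiff_example, londiff_example = 40.75 - 40.68, -73.98 - (-73.95)
print(latdiff_example, londiff_example, np.sqrt(latdiff_example**2 + londiff_example**2))  # ~0.07, ~-0.03, ~0.076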
End of explanation
num_feature_columns = 168 + (16 + 1) * 4 + 3
print("num_feature_columns = {}".format(num_feature_columns))
Explanation: Calculate the number of feature columns that will be input to our Keras model
End of explanation
def create_keras_model():
model = tf.keras.Sequential()
model.add(tf.keras.layers.InputLayer(input_shape = (num_feature_columns,), name = "dense_input"))
model.add(tf.keras.layers.Dense(units = 64, activation = "relu", name = "dense0"))
model.add(tf.keras.layers.Dense(units = 64, activation = "relu", name = "dense1"))
model.add(tf.keras.layers.Dense(units = 64, activation = "relu", name = "dense2"))
model.add(tf.keras.layers.Dense(units = 64, activation = "relu", name = "dense3"))
model.add(tf.keras.layers.Dense(units = 8, activation = "relu", name = "dense4"))
model.add(tf.keras.layers.Dense(units = 1, activation = None, name = "logits"))
def rmse(y_true, y_pred): # Root Mean Squared Error
return tf.sqrt(x = tf.reduce_mean(input_tensor = tf.square(x = y_pred - y_true)))
model.compile(
optimizer = tf.train.AdamOptimizer(),
loss = "mean_squared_error",
metrics = [rmse])
return model
Explanation: Build Custom Keras Model
Now we can begin building our Keras model. Have a look at the guide here to see more explanation.
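To inspect the resulting architecture, you can instantiate it and print a summary (this just assumes the cell above has been run):
model = create_keras_model()
model.summary()  # shows dense0..dense4 and logits with their parameter counts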
End of explanation
# Create serving input function
def serving_input_fn():
feature_placeholders = {
"dayofweek": tf.placeholder(dtype = tf.int32, shape = [None]),
"hourofday": tf.placeholder(dtype = tf.int32, shape = [None]),
"pickuplon": tf.placeholder(dtype = tf.float32, shape = [None]),
"pickuplat": tf.placeholder(dtype = tf.float32, shape = [None]),
"dropofflon": tf.placeholder(dtype = tf.float32, shape = [None]),
"dropofflat": tf.placeholder(dtype = tf.float32, shape = [None]),
}
features = {key: tensor for key, tensor in feature_placeholders.items()}
# Perform our feature engineering during inference as well
features, _ = create_feature_keras_input((add_engineered_features(features)), None)
return tf.estimator.export.ServingInputReceiver(features = {"dense_input": features}, receiver_tensors = feature_placeholders)
Explanation: Serving input function
Once we've constructed our model in Keras, we next create the serving input function. This is also similar to what we have done in previous labs. Note that we use our create_feature_keras_input function again so that we perform our feature engineering during inference.
End of explanation
def train_and_evaluate(output_dir):
tf.logging.set_verbosity(v = tf.logging.INFO) # so loss is printed during training
estimator = tf.keras.estimator.model_to_estimator(
keras_model = create_keras_model(),
model_dir = output_dir,
config = tf.estimator.RunConfig(
tf_random_seed = 1, # for reproducibility
save_checkpoints_steps = 100 # checkpoint every N steps
)
)
train_spec = tf.estimator.TrainSpec(
input_fn = lambda: train_input_fn(csv_path = "./taxi-train.csv"),
max_steps = 500)
exporter = tf.estimator.LatestExporter(name = 'exporter', serving_input_receiver_fn = serving_input_fn)
eval_spec = tf.estimator.EvalSpec(
input_fn = lambda: eval_input_fn(csv_path = "./taxi-valid.csv"),
steps = None,
start_delay_secs = 10, # wait at least N seconds before first evaluation (default 120)
throttle_secs = 10, # wait at least N seconds before each subsequent evaluation (default 600)
exporters = exporter) # export SavedModel once at the end of training
tf.estimator.train_and_evaluate(
estimator = estimator,
train_spec = train_spec,
eval_spec = eval_spec)
%%time
OUTDIR = "taxi_trained"
shutil.rmtree(path = OUTDIR, ignore_errors = True) # start fresh each time
tf.summary.FileWriterCache.clear() # ensure filewriter cache is clear for TensorBoard events file
train_and_evaluate(OUTDIR)
Explanation: Train and Evaluate
To train our model, we can use train_and_evaluate as we have before. Note that we use tf.keras.estimator.model_to_estimator to create our estimator. It takes as arguments the compiled keras model, the OUTDIR, and optionally a tf.estimator.RunConfig. Have a look at the documentation for tf.keras.estimator.model_to_estimator to make sure you understand how arguments are used.
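After training finishes, the LatestExporter typically leaves a timestamped SavedModel under the model directory (a sketch of where to look):
# import glob
# print(glob.glob("taxi_trained/export/exporter/*"))  # one folder per export, named by timestamp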
End of explanation
<END_TASK> |
15,569 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
NLTK Summary
Step2: Context-Free Grammars (CFG)
Noam Chomsky defined a hierarchy of languages and grammars that is commonly used in Linguistics and Computer Science to classify formal languages and grammars. When we want to model linguistic phenomena of natural languages, the most suitable type of grammar is the one known as Type 2, or Context-Free Grammars (CFG).
We will define a grammar simply as a set of rewriting or transformation rules. Without going into much detail about the restrictions that the rules of Type 2 grammars must satisfy, it is important to keep the following in mind
Step3: Notice how we have defined our grammar
Step4: With the grammar1 object already created, we create the parser with the nltk.ChartParser method.
Step5: Once our parser has been created, we can use it. We have the .parse method at our disposal to syntactically analyze any sentence specified as a string of words. Our grammar is quite limited, but we can use it to parse the sentence I shot an elephant in my pajamas. If we print the result of the method, we obtain the parse tree.
Step6: In case you hadn't noticed, the sentence I shot an elephant in my pajamas is ambiguous in English
Step7: Remember that to print the parse tree you have to iterate (with a for loop, for example) over the object returned by the parse() method and use the print function.
Step9: Next, I modify my grammar g1 slightly to include a new grammatical category PRO and add some new vocabulary. Compare both examples
Step10: IMPORTANT NOTE on errors and the behavior of parse()
When a parser recognizes all the vocabulary of an input sentence but is unable to parse it, the parse() method does not raise an error but returns an empty object. In that case, the sentence is ungrammatical according to our grammar.
Step11: However, when the parser does not recognize all the vocabulary (because we use a word that is not defined in the lexicon), the parse() method fails and shows a ValueError message like the following. Look only at the last line
Step13: Keep this in mind when debugging your code.
Grammars for Spanish
Having seen a first example of a CFG, let's switch languages and create a parser for simple sentences in Spanish. The procedure is the same: we define our grammar in Chomsky's format in a separate file or in a text string, parse it with the nltk.CFG.fromstring() method, and create a parser with the nltk.ChartParser() method
Step15: Let's test whether it can parse different sentences in Spanish. To make it more fun, we will store several sentences separated by a newline (symbolized by the metacharacter \n) in a list of strings called oraciones. We iterate over those sentences, print them, then split them into lists of words (with the .split() method) and print the result of parsing them with our parser.
Step18: Let's increase the coverage of our grammar so that it can recognize and parse coordinated sentences. To do this, we modify the rule defining the sentence, adding a recursive definition that defines a sentence as a sequence of a sentence (O) followed by a conjunction (Conj) and another sentence (O). Finally, we also add some new vocabulary
Step21: Remember that a grammar is not a program
Step24: Can we extend g4 so that it recognizes subordinate clauses introduced by verbs of speech or thought? I am referring to sentences like | <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
from __future__ import division
import nltk
Explanation: NLTK Summary: Syntactic Analysis
This summary corresponds to chapter 8 of the NLTK Book, Analyzing Sentence Structure. Reading that chapter is highly recommended.
In this summary we will review how to create grammars with NLTK and how to build tools that let us syntactically analyze simple sentences.
To begin, we need to import the nltk module, which gives us access to all the functionality:
End of explanation
g1 =
S -> NP VP
NP -> Det N | Det N PP | 'I'
VP -> V NP | VP PP
PP -> P NP
Det -> 'an' | 'my'
N -> 'elephant' | 'pajamas'
V -> 'shot'
P -> 'in'
Explanation: Context-Free Grammars (CFG)
Noam Chomsky defined a hierarchy of languages and grammars that is commonly used in Linguistics and Computer Science to classify formal languages and grammars. When we want to model linguistic phenomena of natural languages, the most suitable type of grammar is the one known as Type 2, or Context-Free Grammars (CFG).
We will define a grammar simply as a set of rewriting or transformation rules. Without going into much detail about the restrictions that the rules of Type 2 grammars must satisfy, it is important to keep the following in mind:
Formal grammars handle two types of alphabets.
Non-terminal symbols are the intermediate components we use in the rules. Every non-terminal symbol has to be defined as a sequence of other symbols. In our case, the non-terminals will be the syntactic categories.
Terminal symbols are the final components recognized by the grammar. In our case, the terminals will be the words of the sentences we want to parse.
All the rules of a formal grammar have the form Symbol1 -> Symbol2, Symbol3... SymbolN and are read as: Symbol1 is defined as/consists of/rewrites to the sequence formed by Symbol2, Symbol3, etc.
In context-free grammars, the part to the left of the arrow -> is always a single non-terminal symbol.
Generative Grammars in NLTK
To define our grammars in NLTK we can write them in a separate file or as a text string following the formalism of Chomsky's generative grammars. We are going to define a simple grammar capable of recognizing the famous Marx Brothers line I shot an elephant in my pajamas, and we will store it as a text string in the variable g1.
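As a tiny side illustration of the rule format (a sketch, independent of g1), NLTK turns each line into a Production object:
toy = nltk.CFG.fromstring("S -> NP VP\nNP -> 'I'\nVP -> 'sleep'")
for production in toy.productions():
    print(production)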
End of explanation
grammar1 = nltk.CFG.fromstring(g1)
Explanation: Notice how we have defined our grammar:
We have enclosed everything between triple double quotes. Remember that this Python syntax lets you create strings that contain line breaks and span more than one line.
For the non-terminals we use the usual conventions for syntactic structures and word categories, and we write them in upper case. The labels are self-explanatory, even though they are in English.
Terminals are written between single quotes.
When a non-terminal can be defined in more than one way, we mark the disjunction with the vertical bar |.
We have rules that are interpreted as follows: a sentence is defined as a noun phrase plus a verb phrase; a noun phrase is defined as a determiner and a noun, or a determiner, a noun and a prepositional phrase, or the word I, etc.
From our grammar stored as a text string, we need to create a parser that we can use afterwards. To do that, it is essential to parse the grammar first with the nltk.CFG.fromstring() method.
End of explanation
analyzer = nltk.ChartParser(grammar1)
Explanation: With the grammar1 object already created, we create the parser with the nltk.ChartParser method.
End of explanation
oracion = "I shot an elephant in my pajamas".split()
# store all the possible syntactic analyses in trees
trees = analyzer.parse(oracion)
for tree in trees:
print(tree)
Explanation: Once our parser has been created, we can use it. We have the .parse method at our disposal to syntactically analyze any sentence specified as a string of words. Our grammar is quite limited, but we can use it to parse the sentence I shot an elephant in my pajamas. If we print the result of the method, we obtain the parse tree.
End of explanation
print(analyzer.parse_one(oracion))
Explanation: In case you hadn't noticed, the sentence I shot an elephant in my pajamas is ambiguous in English: it is the classic example of PP attachment (knowing exactly which node a prepositional phrase modifies). There is a double interpretation of the prepositional phrase in my pajamas: at the moment of the shooting, who was wearing the pajamas? The elephant or me? Our grammar captures this ambiguity and is able to analyze the sentence in two different ways, as shown in the previous cell.
If we are only interested in generating one of the possible analyses, we can use the parse_one() method, as shown below.
End of explanation
print(analyzer.parse(oracion))
Explanation: Remember that to print the parse tree you have to iterate (with a for loop, for example) over the object returned by the parse() method and use the print function.
End of explanation
g1v2 =
S -> NP VP
NP -> Det N | Det N PP | PRO
VP -> V NP | VP PP
PP -> P NP
Det -> 'an' | 'my'
PRO -> 'I' | 'you'
N -> 'elephant' | 'pajamas'
V -> 'shot'
P -> 'in'
grammar1v2 = nltk.CFG.fromstring(g1v2)
analyzer1v2 = nltk.ChartParser(grammar1v2)
# iterate over the structure returned by parse()
for tree in analyzer1v2.parse(oracion):
print(tree)
print("\n", "-------------------------------", "\n")
for tree in analyzer1v2.parse("you shot my elephant".split()):
print(tree)
Explanation: Next, I modify my grammar g1 slightly to include a new grammatical category PRO and add some new vocabulary. Compare both examples:
End of explanation
for tree in analyzer.parse("shot an pajamas elephant my I".split()):
print("El análisis sintáctico es el siguiente")
print(tree)
Explanation: IMPORTANT NOTE on errors and the behavior of parse()
When a parser recognizes all the vocabulary of an input sentence but is unable to parse it, the parse() method does not raise an error but returns an empty object. In that case, the sentence is ungrammatical according to our grammar.
End of explanation
for tree in analyzer.parse("our time is running out".split()):
print("El análisis sintáctico es el siguiente")
print(tree)
Explanation: However, when the parser does not recognize all the vocabulary (because we use a word that is not defined in the lexicon), the parse() method fails and shows a ValueError message like the following. Look only at the last line:
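If you need your code to keep running despite unknown words, one option (a sketch) is to catch that exception:
try:
    trees = list(analyzer.parse("our time is running out".split()))
except ValueError as error:
    print("Some word is outside the grammar's lexicon:", error)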
End of explanation
g2 = u
O -> SN SV
SN -> Det N | Det N Adj | Det Adj N | NProp | SN SP
SV -> V | V SN | V SP | V SN SP
SP -> Prep SN
Det -> 'el' | 'la' | 'un' | 'una'
N -> 'niño' | 'niña' | 'manzana' | 'pera' | 'cuchillo'
NProp -> 'Juan' | 'Ana' | 'Perico'
Adj -> 'bonito' | 'pequeña' | 'verde'
V -> 'come' | 'salta' | 'pela' | 'persigue'
Prep -> 'de' | 'con' | 'desde' | 'a'
grammar2 = nltk.CFG.fromstring(g2)
analizador2 = nltk.ChartParser(grammar2)
Explanation: Keep this in mind when debugging your code.
Grammars for Spanish
Having seen a first example of a CFG, let's switch languages and create a parser for simple sentences in Spanish. The procedure is the same: we define our grammar in Chomsky's format in a separate file or in a text string, parse it with the nltk.CFG.fromstring() method, and create a parser with the nltk.ChartParser() method:
End of explanation
oraciones = uAna salta
la niña pela una manzana verde con el cuchillo
Juan come un cuchillo bonito desde el niño
un manzana bonito salta el cuchillo desde el niño verde
el cuchillo verde persigue a la pequeña manzana de Ana
el cuchillo verde persigue a Ana.split("\n")
for oracion in oraciones:
print(oracion)
for tree in analizador2.parse(oracion.split()):
print(tree, "\n")
Explanation: Let's test whether it can parse different sentences in Spanish. To make it more fun, we will store several sentences separated by a newline (symbolized by the metacharacter \n) in a list of strings called oraciones. We iterate over those sentences, print them, then split them into lists of words (with the .split() method) and print the result of parsing them with our parser.
End of explanation
g3 = u
O -> SN SV | O Conj O
SN -> Det N | Det N Adj | Det Adj N | NProp | SN SP
SV -> V | V SN | V SP | V SN SP
SP -> Prep SN
Det -> 'el' | 'la' | 'un' | 'una'
N -> 'niño' | 'niña' | 'manzana' | 'pera' | 'cuchillo'
NProp -> 'Juan' | 'Ana' | 'Perico'
Adj -> 'bonito' | 'pequeña' | 'verde'
V -> 'come' | 'salta' | 'pela' | 'persigue'
Prep -> 'de' | 'con' | 'desde' | 'a'
Conj -> 'y' | 'pero'
# Now notice how we create the parser in a single step
# compare it with the previous examples
analizador3 = nltk.ChartParser(nltk.CFG.fromstring(g3))
for tree in analizador3.parse(ula manzana salta y el niño come pero el cuchillo
verde persigue a la pequeña manzana de Ana.split()):
print(tree)
Explanation: Let's increase the coverage of our grammar so that it can recognize and parse coordinated sentences. To do this, we modify the rule defining the sentence, adding a recursive definition that defines a sentence as a sequence of a sentence (O) followed by a conjunction (Conj) and another sentence (O). Finally, we also add some new vocabulary: a couple of conjunctions.
End of explanation
# note: the sentences are simple, but they include impersonal sentences, copular verbs, and elided subjects
oraciones = umañana es viernes
hoy es jueves
tenéis sueño
hace frío
Pepe hace sueño.split("\n")
# write your grammar in this cell
g4 =
analyzer4 = nltk.ChartParser(nltk.CFG.fromstring(g4))
# how well does it work?
for oracion in oraciones:
print(oracion)
for tree in analyzer4.parse(oracion.split()):
print(tree, "\n")
Explanation: Remember that a grammar is not a program: it is simply a description that establishes which syntactic structures are well formed (grammatical sentences) and which are not (ungrammatical sentences). When a sentence is recognized by a grammar (and is therefore well formed), the parser can represent its structure as a tree.
NLTK provides access to different kinds of parsers (dependency trees, probabilistic grammars, etc.), although we have only used the simplest of them: nltk.ChartParser(). These parsers are indeed small programs that read a grammar and analyze the sentences we provide as input to the parse() method.
Another example
In class we improvised a bit and proposed the following example grammar. We will make it more complex incrementally. Let's start with a few example sentences.
End of explanation
oraciones = uPepe cree que mañana es viernes
María dice que Pepe cree que mañana es viernes.split("\n")
# write the extension of your grammar in this cell
g5 =
analyzer5 = nltk.ChartParser(nltk.CFG.fromstring(g5))
# how well does it work?
for oracion in oraciones:
print(oracion)
for tree in analyzer5.parse(oracion.split()):
print(tree, "\n")
Explanation: Can we extend g4 so that it recognizes subordinate clauses introduced by verbs of speech or thought? I am referring to sentences like: Pepe cree que mañana es viernes, María dice que Pepe cree que mañana es viernes, etc.
Expand your vocabulary by adding as many terminals as you need.
End of explanation
<END_TASK> |
15,570 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Lab 8 - Image Analysis and Padding
Part 1 - Pyraf Imexam Tutorial
You'll be creating a lot of image files in this lab that you will need to include with your submission. To make your life easy, create a directory and move your Jupyter notebook file into it before beginning. Save all images to this directory, which you will zip before you submit it to Moodle
Start up pyraf, which should already be installed on your machines, by opening a terminal and typing in the following two commands
source activate iraf27
pyraf
A pyraf instance will start and you will have a new prompt that looks like this
Step1: Numpy has a really handy built-in function for padding images called pad. The required inputs are the array to be padded, the size of the pad region (in each direction), and the nature of the padding (lots of interpolation options are available, but we just need a constant value). The default for a constant value is to propagate the values at the edges outward, but we want in this case to draw a clear distinction between where we have data and where we don't, so we can set the padded region values to some specific constant value. Often astronomers pad arrays with either zeros or NaNs, but NaNs can be problematic when executing the next step (shifting the images) and zero is too close to the actual data values in our sample image below to stand out, so I've chosen an arbitrary value below based solely on its ability to stand out relative to the rest of the colorbar so that I can tell where the padded regions is.
Step2: The next thing that we'd want to do with padded images is to shift the image to some common center so that it can be combined with other images.
Step3: One last thing we may wish to do before combining is to replace our arbitrarily set constant pad values with NaNs so that when we combine images, these won't be treated as equal to the real values in the other parts of the array. This replacement is pretty easy in python using the following handy syntax. Because the shifting of the array is done via interpolation, the padded region values will no longer all be exactly the same. For that reason, you will often need to tweak the value below which you choose to replace pixels with NaNs to coincide with the padded region. To wrap your brain around this a bit, try changing the value after the <= to 0.001 and/or 0.0005 below. Before doing so, you'll need to reexecute the cell above that contains newarr2 = interp.shift(newarr,(2.5,4.5), cval=-0.001) in order to refresh the array with a clean version.
note that if you want to combine images with nans in numpy you need to use the special functions nanmean, nanmedian, etc. rather than just mean, median, etc.
Step4: Exercise 4
Take the image that you used for exercises 1-3 (or another of your choice), read it in, pad it by 25 pixels in each direction with the padded region set to some constant value that stands out relative to the real-valued array, shift it by +10.3 pixels in x and -15.8 pixels in y, replace the padded values with NaNs and display below. | <ASSISTANT_TASK:>
Python Code:
# The standard fare:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
# Recall our use of this module to work with FITS files in Lab 4:
from astropy.io import fits
#A dummy image - just a Gaussian PSF with a standard deviation of 5 pixels
import astropy.convolution as conv
gauss = conv.Gaussian2DKernel(stddev=5)
gauss.shape
Explanation: Lab 8 - Image Analysis and Padding
Part 1 - Pyraf Imexam Tutorial
You'll be creating a lot of image files in this lab that you will need to include with your submission. To make your life easy, create a directory and move your Jupyter notebook file into it before beginning. Save all images to this directory, which you will zip before you submit it to Moodle
Start up pyraf, which should already be installed on your machines, by opening a terminal and typing in the following two commands
source activate iraf27
pyraf
A pyraf instance will start and you will have a new prompt that looks like this: -->
Now open DS9 normally.
Back in your pyraf window, type the following two commands.
display dev$pix 1
imexam
An image of a spiral galaxy (M51?) will appear in DS9 and there will also be a round blinking cursor in the window. If this is the case, great - you can move on to the next step! If this is not the case, please ask Kim or Kate for help.
Once you have a blinking cursor, with your mouse over the ds9 window type "q" on the keyboard to quit. You will not have your --> prompt back in the Pyraf window.
Back in your DS9 window, choose your favorite Messier object and pull the DSS image up by clicking Analysis --> Image Servers --> DSS (SAO) and then entering your object. Make sure you pull up an image that has a fair number of stars. Save the image to your lab folder by clicking File --> Save Image --> FITS. Close DS9 and then reopen it and load in the image with File --> Open.
Once your image is open in DS9, go back to your Pyraf terminal and type imexam again. You should once again have a round blinking cursor.
Now let's investigate some of imexam's functionality. A basic list of commands are below. For all of them, you should center the blinking cursor on a star in the image before you click the one-letter command (you may wish to zoom and pan around in the image in order to do this effectively). The output of the command will appear in your Pyraf window or in a separate graphical window that will pop up.
Exercise 1
For each of the commands below, execute it on several different stars in the image until you're absolutely sure that you know what it does. Once you're sure, for each command write:
(a) a description of what it does in your own words (more detailed than the short descriptions in the list below). If the function produces a plot, choose a representative one and take a screenshot with ctrl + shift + 4 and include it with your explanation
Note: The mac screenshot command will create a cursor that you have to drag to capture the part of the screen you want to save. Click in the upper left, hold down the mouse button, and drag to the lower right before releasing. Screenshots get saved by default to your desktop. Rename each one with something descriptive and place it in your lab folder. To include it in the markdown cell together with your explanation use the basic syntax ![caption](filename).
(b) an example of when you think this measurement might be useful
Basic Imexam commands:
a Aperture sum, with radius region_size
c Return column plot
e Return a contour plot in a region around the cursor
h Return a histogram in the region around the cursor
j 1D [Gaussian1D default] line fit
k 1D [Gaussian1D default] column fit
l Return line plot
m Square region stats, in [region_size],default is median
r Return the radial profile plot
s Surface plot in the region around the cursor
z prints grid of pixel values
This Markdown cell is for your explanations
Exercise 2
You may have noticed that the parameters for the things like the size of a region over which statistics are computed are set by default, but you can modify them. Generically, the command to do this is :epar + the letter of the command. For example :epar s allows you to modify the parameters of the surface plot function. Generally this only works for functions that pop up a graphics window with a plot.
Come up with one example of a (graphical) function above where you might want to modify the default parameters. Run the function with the default parameters on a certain star and then with your modified parameters. Include the otput of each and explain the advantages of modifying the parameters in words.
Your answers for exercise 2 go here
Exercise 3
For your purposes later in this class, it is also useful to know how to record the output of imexam as a text file. To do this, first quit imexam by typing q. At your regular Pyraf prompt, type epar imexam. This will open up a little gui with various parameters that you can set, much like when you used epar to modify the parameters for some of the functions within imexam. About a third of the way down, you'll see a prompt that says (logfile). If you enter a filename (e.g. log.txt) here, it will be associated with imexam. Enter log.txt as the logfile and click "Save and Quit" then restart imexam. Note that the output will not always be recorded to the text file by default. Rather, you need to toggle logging on and off with the command "w".
Toggle the log on, record the "a" output for 20 different stars in your image, then toggle the log off and check the output. Include the log with the files that you hand in for this lab.
Your answers for Exercise 3 go here
Part 2 - Padding and Shifting Images
Once you've found the centers of stars in an image, as you did in your homework last week, you need to be able to shift all of the images in your sequence to line up. Ideally, you do this without losing any of the information at the edges of your detector. For example, if I had a 1024x1024 image and I shifted it by -1 pixel in x and -2 pixels in y, I'd be shifting some pixels with real information off of the edge of the array and the rightmost column and the topmost two columns would now be blank. To avoid this loss of information, we typically "pad" images before we shift them to a common orientation. In our case for the Smith telescope, since we don't dither, we don't need to pad our images by very much and can add a small strip around each one. Let's try it.
End of explanation
newarr = np.pad(gauss,5,'constant', constant_values=-0.001)
newarr.shape
plt.imshow(newarr)
plt.colorbar()
Explanation: Numpy has a really handy built-in function for padding images called pad. The required inputs are the array to be padded, the size of the pad region (in each direction), and the nature of the padding (lots of interpolation options are available, but we just need a constant value). The default for a constant value is to propagate the values at the edges outward, but we want in this case to draw a clear distinction between where we have data and where we don't, so we can set the padded region values to some specific constant value. Often astronomers pad arrays with either zeros or NaNs, but NaNs can be problematic when executing the next step (shifting the images) and zero is too close to the actual data values in our sample image below to stand out, so I've chosen an arbitrary value below based solely on its ability to stand out relative to the rest of the colorbar so that I can tell where the padded region is.
End of explanation
import scipy.ndimage.interpolation as interp
newarr2 = interp.shift(newarr,(2.5,4.5), cval=-0.001)
plt.imshow(newarr2)
plt.colorbar()
Explanation: The next thing that we'd want to do with padded images is to shift the image to some common center so that it can be combined with other images.
End of explanation
newarr2[newarr2 <= -0.0001] = np.nan
plt.imshow(newarr2)
plt.colorbar()
Explanation: One last thing we may wish to do before combining is to replace our arbitrarily set constant pad values with NaNs so that when we combine images, these won't be treated as equal to the real values in the other parts of the array. This replacement is pretty easy in python using the following handy syntax. Because the shifting of the array is done via interpolation, the padded region values will no longer all be exactly the same. For that reason, you will often need to tweak the value below which you choose to replace pixels with NaNs to coincide with the padded region. To wrap your brain around this a bit, try changing the value after the <= to 0.001 and/or 0.0005 below. Before doing so, you'll need to reexecute the cell above that contains newarr2 = interp.shift(newarr,(2.5,4.5), cval=-0.001) in order to refresh the array with a clean version.
note that if you want to combine images with nans in numpy you need to use the special functions nanmean, nanmedian, etc. rather than just mean, median, etc.
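You can see the difference directly on the shifted array from above:
print(np.mean(newarr2))     # nan -- the padded NaNs poison the ordinary mean
print(np.nanmean(newarr2))  # ignores the NaN padding and averages only the real data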
End of explanation
#your code goes here
Explanation: Exercise 4
Take the image that you used for exercises 1-3 (or another of your choice), read it in, pad it by 25 pixels in each direction with the padded region set to some constant value that stands out relative to the real-valued array, shift it by +10.3 pixels in x and -15.8 pixels in y, replace the padded values with NaNs and display below.
End of explanation
<END_TASK> |
15,571 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Demonstration of MCE IRL code & environments
This is just tabular environments & vanilla MCE IRL.
Step1: IRL on a random MDP
Testing both linear reward models & MLP reward models.
Step2: Same thing, but on grid world
The true reward here is not linear in the reduced feature space (i.e. $(x,y)$ coordinates). Finding an appropriate linear reward is impossible (as I will demonstrate), but an MLP should Just Work(tm). | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
%load_ext autoreload
%autoreload 2
import copy
import numpy as np
import seaborn as sns
import pandas as pd
import matplotlib.pyplot as plt
import scipy
import torch as th
import imitation.algorithms.tabular_irl as tirl
import imitation.envs.examples.model_envs as menv
sns.set(context="notebook")
np.random.seed(42)
Explanation: Demonstration of MCE IRL code & environments
This is just tabular environments & vanilla MCE IRL.
End of explanation
mdp = menv.RandomMDP(
n_states=16,
n_actions=3,
branch_factor=2,
horizon=10,
random_obs=True,
obs_dim=5,
generator_seed=42,
)
V, Q, pi = tirl.mce_partition_fh(mdp)
Dt, D = tirl.mce_occupancy_measures(mdp, pi=pi)
demo_counts = D @ mdp.observation_matrix
(obs_dim,) = demo_counts.shape
rmodel = tirl.LinearRewardModel(obs_dim)
opt = th.optim.Adam(rmodel.parameters(), lr=0.1)
D_fake = tirl.mce_irl(mdp, opt, rmodel, D, linf_eps=1e-1)
rmodel = tirl.MLPRewardModel(obs_dim, [32, 32])
opt = th.optim.Adam(rmodel.parameters(), lr=0.1)
D_fake = tirl.mce_irl(mdp, opt, rmodel, D, linf_eps=1e-2)
Explanation: IRL on a random MDP
Testing both linear reward models & MLP reward models.
End of explanation
# Same experiments, but on grid world
mdp = menv.CliffWorld(width=7, height=4, horizon=8, use_xy_obs=True)
V, Q, pi = tirl.mce_partition_fh(mdp)
Dt, D = tirl.mce_occupancy_measures(mdp, pi=pi)
demo_counts = D @ mdp.observation_matrix
(obs_dim,) = demo_counts.shape
rmodel = tirl.LinearRewardModel(obs_dim)
opt = th.optim.Adam(rmodel.parameters(), lr=1.0)
D_fake = tirl.mce_irl(mdp, opt, rmodel, D, linf_eps=0.1)
mdp.draw_value_vec(D)
plt.title("Cliff World $p(s)$")
plt.xlabel("x-coord")
plt.ylabel("y-coord")
plt.show()
mdp.draw_value_vec(D_fake)
plt.title("Occupancy for linear reward function")
plt.show()
plt.subplot(1, 2, 1)
mdp.draw_value_vec(rmodel(th.as_tensor(mdp.observation_matrix)).detach().numpy())
plt.title("Inferred reward")
plt.subplot(1, 2, 2)
mdp.draw_value_vec(mdp.reward_matrix)
plt.title("True reward")
plt.show()
rmodel = tirl.MLPRewardModel(
obs_dim,
[
1024,
],
activation=th.nn.ReLU,
)
opt = th.optim.Adam(rmodel.parameters(), lr=1e-3)
D_fake_mlp = tirl.mce_irl(mdp, opt, rmodel, D, linf_eps=3e-2, print_interval=250)
mdp.draw_value_vec(D_fake_mlp)
plt.title("Occupancy for MLP reward function")
plt.show()
plt.subplot(1, 2, 1)
mdp.draw_value_vec(rmodel(th.as_tensor(mdp.observation_matrix)).detach().numpy())
plt.title("Inferred reward")
plt.subplot(1, 2, 2)
mdp.draw_value_vec(mdp.reward_matrix)
plt.title("True reward")
plt.show()
Explanation: Same thing, but on grid world
The true reward here is not linear in the reduced feature space (i.e. $(x,y)$ coordinates). Finding an appropriate linear reward is impossible (as I will demonstrate), but an MLP should Just Work(tm).
End of explanation
<END_TASK> |
15,572 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Building your Deep Neural Network
Step2: 2 - Outline of the Assignment
To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will
Step4: Expected output
Step6: Expected output
Step8: Expected output
Step10: Expected output
Step12: <table style="width
Step14: Expected Output
Step16: Expected Output
Step18: Expected output with sigmoid
Step20: Expected Output
<table style="width | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases_v3 import *
from dnn_utils_v2 import sigmoid, sigmoid_backward, relu, relu_backward
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
Explanation: Building your Deep Neural Network: Step by Step
Welcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want!
In this notebook, you will implement all the functions required to build a deep neural network.
In the next assignment, you will use these functions to build a deep neural network for image classification.
After this assignment you will be able to:
- Use non-linear units like ReLU to improve your model
- Build a deeper neural network (with more than 1 hidden layer)
- Implement an easy-to-use neural network class
Notation:
- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer.
- Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.
- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example.
- Example: $x^{(i)}$ is the $i^{th}$ training example.
- Lowerscript $i$ denotes the $i^{th}$ entry of a vector.
- Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations).
Let's get started!
1 - Packages
Let's first import all the packages that you will need during this assignment.
- numpy is the main package for scientific computing with Python.
- matplotlib is a library to plot graphs in Python.
- dnn_utils provides some necessary functions for this notebook.
- testCases provides some test cases to assess the correctness of your functions
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed.
End of explanation
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
parameters -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
np.random.seed(1)
### START CODE HERE ### (≈ 4 lines of code)
W1 = None
b1 = None
W2 = None
b2 = None
### END CODE HERE ###
assert(W1.shape == (n_h, n_x))
assert(b1.shape == (n_h, 1))
assert(W2.shape == (n_y, n_h))
assert(b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters = initialize_parameters(3,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: 2 - Outline of the Assignment
To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will:
Initialize the parameters for a two-layer network and for an $L$-layer neural network.
Implement the forward propagation module (shown in purple in the figure below).
Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).
We give you the ACTIVATION function (relu/sigmoid).
Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.
Stack the [LINEAR->RELU] forward function L-1 times (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.
Compute the loss.
Implement the backward propagation module (denoted in red in the figure below).
Complete the LINEAR part of a layer's backward propagation step.
We give you the gradient of the ACTIVATION function (relu_backward/sigmoid_backward)
Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function.
Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function
Finally update the parameters.
<img src="images/final outline.png" style="width:800px;height:500px;">
<caption><center> Figure 1</center></caption><br>
Note that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps.
3 - Initialization
You will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers.
3.1 - 2-layer Neural Network
Exercise: Create and initialize the parameters of the 2-layer neural network.
Instructions:
- The model's structure is: LINEAR -> RELU -> LINEAR -> SIGMOID.
- Use random initialization for the weight matrices. Use np.random.randn(shape)*0.01 with the correct shape.
- Use zero initialization for the biases. Use np.zeros(shape).
End of explanation
# GRADED FUNCTION: initialize_parameters_deep
def initialize_parameters_deep(layer_dims):
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl -- bias vector of shape (layer_dims[l], 1)
np.random.seed(3)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = None
parameters['b' + str(l)] = None
### END CODE HERE ###
assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l-1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
parameters = initialize_parameters_deep([5,4,3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
Explanation: Expected output:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td> [[ 0.01624345 -0.00611756 -0.00528172]
[-0.01072969 0.00865408 -0.02301539]] </td>
</tr>
<tr>
<td> **b1**</td>
<td>[[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[ 0.01744812 -0.00761207]]</td>
</tr>
<tr>
<td> **b2** </td>
<td> [[ 0.]] </td>
</tr>
</table>
3.2 - L-layer Neural Network
The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the initialize_parameters_deep, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then:
<table style="width:100%">
<tr>
<td> </td>
<td> **Shape of W** </td>
<td> **Shape of b** </td>
<td> **Activation** </td>
<td> **Shape of Activation** </td>
<tr>
<tr>
<td> **Layer 1** </td>
<td> $(n^{[1]},12288)$ </td>
<td> $(n^{[1]},1)$ </td>
<td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td>
<td> $(n^{[1]},209)$ </td>
<tr>
<tr>
<td> **Layer 2** </td>
<td> $(n^{[2]}, n^{[1]})$ </td>
<td> $(n^{[2]},1)$ </td>
<td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td>
<td> $(n^{[2]}, 209)$ </td>
<tr>
<tr>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$</td>
<td> $\vdots$ </td>
<tr>
<tr>
<td> **Layer L-1** </td>
<td> $(n^{[L-1]}, n^{[L-2]})$ </td>
<td> $(n^{[L-1]}, 1)$ </td>
<td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td>
<td> $(n^{[L-1]}, 209)$ </td>
<tr>
<tr>
<td> **Layer L** </td>
<td> $(n^{[L]}, n^{[L-1]})$ </td>
<td> $(n^{[L]}, 1)$ </td>
<td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td>
<td> $(n^{[L]}, 209)$ </td>
<tr>
</table>
Remember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if:
$$ W = \begin{bmatrix}
j & k & l\
m & n & o \
p & q & r
\end{bmatrix}\;\;\; X = \begin{bmatrix}
a & b & c\
d & e & f \
g & h & i
\end{bmatrix} \;\;\; b =\begin{bmatrix}
s \
t \
u
\end{bmatrix}\tag{2}$$
Then $WX + b$ will be:
$$ WX + b = \begin{bmatrix}
(ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\
(ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\
(pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u
\end{bmatrix}\tag{3} $$
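A quick way to convince yourself of this broadcasting behaviour is to try it on small arrays; the values below are made up purely for illustration:
python
import numpy as np
W = np.arange(1, 10).reshape(3, 3)      # stands in for the 3x3 matrix W above
X = np.arange(1, 10).reshape(3, 3)      # stands in for the 3x3 matrix X above
b = np.array([[100.], [200.], [300.]])  # column vector of shape (3, 1)
out = np.dot(W, X) + b                  # b is broadcast across the 3 columns
print(out.shape)                        # (3, 3)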
Exercise: Implement initialization for an L-layer Neural Network.
Instructions:
- The model's structure is [LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.
- Use random initialization for the weight matrices. Use np.random.randn(shape) * 0.01.
- Use zeros initialization for the biases. Use np.zeros(shape).
- We will store $n^{[l]}$, the number of units in different layers, in a variable layer_dims. For example, the layer_dims for the "Planar Data classification model" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. Thus means W1's shape was (4,2), b1 was (4,1), W2 was (1,4) and b2 was (1,1). Now you will generalize this to $L$ layers!
- Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network).
python
if L == 1:
parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01
parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))
End of explanation
# GRADED FUNCTION: linear_forward
def linear_forward(A, W, b):
Implement the linear part of a layer's forward propagation.
Arguments:
A -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
Returns:
Z -- the input of the activation function, also called pre-activation parameter
cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently
### START CODE HERE ### (≈ 1 line of code)
Z = None
### END CODE HERE ###
assert(Z.shape == (W.shape[0], A.shape[1]))
cache = (A, W, b)
return Z, cache
A, W, b = linear_forward_test_case()
Z, linear_cache = linear_forward(A, W, b)
print("Z = " + str(Z))
Explanation: Expected output:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]
[-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]
[-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]
[-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td>
</tr>
<tr>
<td>**b1** </td>
<td>[[ 0.]
[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2** </td>
<td>[[-0.01185047 -0.0020565 0.01486148 0.00236716]
[-0.01023785 -0.00712993 0.00625245 -0.00160513]
[-0.00768836 -0.00230031 0.00745056 0.01976111]]</td>
</tr>
<tr>
<td>**b2** </td>
<td>[[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
</table>
4 - Forward propagation module
4.1 - Linear Forward
Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:
LINEAR
LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid.
[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model)
The linear forward module (vectorized over all the examples) computes the following equations:
$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$
where $A^{[0]} = X$.
Exercise: Build the linear part of forward propagation.
Reminder:
The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find np.dot() useful. If your dimensions don't match, printing W.shape may help.
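As a shape sanity check, the whole linear step is one np.dot plus a broadcasted bias; a minimal sketch with made-up sizes (illustrative only, not the graded cell itself):
python
import numpy as np
A = np.random.randn(3, 2)   # (size of previous layer, number of examples)
W = np.random.randn(1, 3)   # (size of current layer, size of previous layer)
b = np.zeros((1, 1))        # (size of current layer, 1)
Z = np.dot(W, A) + b        # Z = WA + b, with b broadcast over the examples
print(Z.shape)              # (1, 2) = (size of current layer, number of examples)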
End of explanation
# GRADED FUNCTION: linear_activation_forward
def linear_activation_forward(A_prev, W, b, activation):
Implement the forward propagation for the LINEAR->ACTIVATION layer
Arguments:
A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
A -- the output of the activation function, also called the post-activation value
cache -- a python dictionary containing "linear_cache" and "activation_cache";
stored for computing the backward pass efficiently
if activation == "sigmoid":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = None
A, activation_cache = None
### END CODE HERE ###
elif activation == "relu":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = None
A, activation_cache = None
### END CODE HERE ###
assert (A.shape == (W.shape[0], A_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
A_prev, W, b = linear_activation_forward_test_case()
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid")
print("With sigmoid: A = " + str(A))
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu")
print("With ReLU: A = " + str(A))
Explanation: Expected output:
<table style="width:35%">
<tr>
<td> **Z** </td>
<td> [[ 3.26295337 -1.23429987]] </td>
</tr>
</table>
4.2 - Linear-Activation Forward
In this notebook, you will use two activation functions:
Sigmoid: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the sigmoid function. This function returns two items: the activation value "a" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call:
python
A, activation_cache = sigmoid(Z)
ReLU: The mathematical formula for ReLu is $A = RELU(Z) = max(0, Z)$. We have provided you with the relu function. This function returns two items: the activation value "A" and a "cache" that contains "Z" (it's what we will feed in to the corresponding backward function). To use it you could just call:
python
A, activation_cache = relu(Z)
For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.
Exercise: Implement the forward propagation of the LINEAR->ACTIVATION layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.
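The sigmoid and relu helpers are supplied by the course utilities, so you do not write them in this notebook; purely for reference, standalone sketches consistent with the formulas and the (value, cache) convention described above might look like this:
python
import numpy as np

def sigmoid_sketch(Z):
    A = 1 / (1 + np.exp(-Z))   # sigma(Z) = 1 / (1 + e^(-Z))
    return A, Z                # cache Z for the backward pass

def relu_sketch(Z):
    A = np.maximum(0, Z)       # RELU(Z) = max(0, Z), elementwise
    return A, Z                # cache Z for the backward pass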
End of explanation
# GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
Arguments:
X -- data, numpy array of shape (input size, number of examples)
parameters -- output of initialize_parameters_deep()
Returns:
AL -- last post-activation value
caches -- list of caches containing:
every cache of linear_relu_forward() (there are L-1 of them, indexed from 0 to L-2)
the cache of linear_sigmoid_forward() (there is one, indexed L-1)
caches = []
A = X
L = len(parameters) // 2 # number of layers in the neural network
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
for l in range(1, L):
A_prev = A
### START CODE HERE ### (≈ 2 lines of code)
A, cache = None
### END CODE HERE ###
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
### START CODE HERE ### (≈ 2 lines of code)
AL, cache = None
### END CODE HERE ###
assert(AL.shape == (1,X.shape[1]))
return AL, caches
X, parameters = L_model_forward_test_case_2hidden()
AL, caches = L_model_forward(X, parameters)
print("AL = " + str(AL))
print("Length of caches list = " + str(len(caches)))
Explanation: Expected output:
<table style="width:35%">
<tr>
<td> **With sigmoid: A ** </td>
<td > [[ 0.96890023 0.11013289]]</td>
</tr>
<tr>
<td> **With ReLU: A ** </td>
<td > [[ 3.43896131 0. ]]</td>
</tr>
</table>
Note: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers.
d) L-Layer Model
For even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (linear_activation_forward with RELU) $L-1$ times, then follows that with one linear_activation_forward with SIGMOID.
<img src="images/model_architecture_kiank.png" style="width:600px;height:300px;">
<caption><center> Figure 2 : [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model</center></caption><br>
Exercise: Implement the forward propagation of the above model.
Instruction: In the code below, the variable AL will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called Yhat, i.e., this is $\hat{Y}$.)
Tips:
- Use the functions you had previously written
- Use a for loop to replicate [LINEAR->RELU] (L-1) times
- Don't forget to keep track of the caches in the "caches" list. To add a new value c to a list, you can use list.append(c).
End of explanation
# GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
Implement the cost function defined by equation (7).
Arguments:
AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
Returns:
cost -- cross-entropy cost
m = Y.shape[1]
# Compute loss from aL and y.
### START CODE HERE ### (≈ 1 lines of code)
cost = None
### END CODE HERE ###
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
assert(cost.shape == ())
return cost
Y, AL = compute_cost_test_case()
print("cost = " + str(compute_cost(AL, Y)))
Explanation: <table style="width:50%">
<tr>
<td> **AL** </td>
<td > [[ 0.03921668 0.70498921 0.19734387 0.04728177]]</td>
</tr>
<tr>
<td> **Length of caches list ** </td>
<td > 3 </td>
</tr>
</table>
Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions.
5 - Cost function
Now you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning.
Exercise: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L] (i)}\right)) \tag{7}$$
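For reference, formula (7) is a one-liner in numpy once AL and Y have shape (1, m); the arrays below are made up, and this is a sketch rather than the exact line the grader expects:
python
import numpy as np
AL = np.array([[0.8, 0.9, 0.4]])  # illustrative predictions, shape (1, m)
Y = np.array([[1, 1, 0]])         # illustrative labels, shape (1, m)
m = Y.shape[1]
cost = -(1. / m) * np.sum(Y * np.log(AL) + (1 - Y) * np.log(1 - AL))
print(np.squeeze(cost))           # scalar cross-entropy cost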
End of explanation
# GRADED FUNCTION: linear_backward
def linear_backward(dZ, cache):
Implement the linear portion of backward propagation for a single layer (layer l)
Arguments:
dZ -- Gradient of the cost with respect to the linear output (of current layer l)
cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
A_prev, W, b = cache
m = A_prev.shape[1]
### START CODE HERE ### (≈ 3 lines of code)
dW = None
db = None
dA_prev = None
### END CODE HERE ###
assert (dA_prev.shape == A_prev.shape)
assert (dW.shape == W.shape)
assert (db.shape == b.shape)
return dA_prev, dW, db
# Set up some test inputs
dZ, linear_cache = linear_backward_test_case()
dA_prev, dW, db = linear_backward(dZ, linear_cache)
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
Explanation: Expected Output:
<table>
<tr>
<td>**cost** </td>
<td> 0.41493159961539694</td>
</tr>
</table>
6 - Backward propagation module
Just like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters.
Reminder:
<img src="images/backprop_kiank.png" style="width:650px;height:250px;">
<caption><center> Figure 3 : Forward and Backward propagation for LINEAR->RELU->LINEAR->SIGMOID <br> The purple blocks represent the forward propagation, and the red blocks represent the backward propagation. </center></caption>
<!--
For those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:
$$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$
In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.
Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$.
This is why we talk about **backpropagation**.
!-->
Now, similar to forward propagation, you are going to build the backward propagation in three steps:
- LINEAR backward
- LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model)
6.1 - Linear backward
For layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).
Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$.
<img src="images/linearback_kiank.png" style="width:250px;height:300px;">
<caption><center> Figure 4 </center></caption>
The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$ are computed using the input $dZ^{[l]}$. Here are the formulas you need:
$$ dW^{[l]} = \frac{\partial \mathcal{L} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$
$$ db^{[l]} = \frac{\partial \mathcal{L} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l] (i)}\tag{9}$$
$$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$
Exercise: Use the 3 formulas above to implement linear_backward().
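Formulas (8)-(10) map almost one-to-one onto numpy operations; the shapes below are made up for illustration and this is a sketch, not the graded cell:
python
import numpy as np
m = 2
A_prev = np.random.randn(3, m)                      # cached activations of layer l-1
W = np.random.randn(1, 3)                           # cached weights of layer l
dZ = np.random.randn(1, m)                          # gradient w.r.t. the linear output of layer l
dW = (1. / m) * np.dot(dZ, A_prev.T)                # formula (8)
db = (1. / m) * np.sum(dZ, axis=1, keepdims=True)   # formula (9)
dA_prev = np.dot(W.T, dZ)                           # formula (10)
print(dW.shape, db.shape, dA_prev.shape)            # (1, 3) (1, 1) (3, 2)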
End of explanation
# GRADED FUNCTION: linear_activation_backward
def linear_activation_backward(dA, cache, activation):
Implement the backward propagation for the LINEAR->ACTIVATION layer.
Arguments:
dA -- post-activation gradient for current layer l
cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
linear_cache, activation_cache = cache
if activation == "relu":
### START CODE HERE ### (≈ 2 lines of code)
dZ = None
dA_prev, dW, db = None
### END CODE HERE ###
elif activation == "sigmoid":
### START CODE HERE ### (≈ 2 lines of code)
dZ = None
dA_prev, dW, db = None
### END CODE HERE ###
return dA_prev, dW, db
AL, linear_activation_cache = linear_activation_backward_test_case()
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "sigmoid")
print ("sigmoid:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db) + "\n")
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "relu")
print ("relu:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
Explanation: Expected Output:
<table style="width:90%">
<tr>
<td> **dA_prev** </td>
<td > [[ 0.51822968 -0.19517421]
[-0.40506361 0.15255393]
[ 2.37496825 -0.89445391]] </td>
</tr>
<tr>
<td> **dW** </td>
<td > [[-0.10076895 1.40685096 1.64992505]] </td>
</tr>
<tr>
<td> **db** </td>
<td> [[ 0.50629448]] </td>
</tr>
</table>
6.2 - Linear-Activation backward
Next, you will create a function that merges the two helper functions: linear_backward and the backward step for the activation linear_activation_backward.
To help you implement linear_activation_backward, we provided two backward functions:
- sigmoid_backward: Implements the backward propagation for SIGMOID unit. You can call it as follows:
python
dZ = sigmoid_backward(dA, activation_cache)
relu_backward: Implements the backward propagation for RELU unit. You can call it as follows:
python
dZ = relu_backward(dA, activation_cache)
If $g(.)$ is the activation function,
sigmoid_backward and relu_backward compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$.
Exercise: Implement the backpropagation for the LINEAR->ACTIVATION layer.
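The two provided helpers apply formula (11) with the appropriate derivative g'. Standalone sketches of that logic (for reference only; the assignment uses the supplied sigmoid_backward and relu_backward):
python
import numpy as np

def relu_backward_sketch(dA, Z):
    dZ = np.array(dA, copy=True)
    dZ[Z <= 0] = 0               # g'(Z) = 1 where Z > 0, and 0 elsewhere
    return dZ

def sigmoid_backward_sketch(dA, Z):
    s = 1 / (1 + np.exp(-Z))
    return dA * s * (1 - s)      # g'(Z) = s * (1 - s) for the sigmoid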
End of explanation
# GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
caches -- list of caches containing:
every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
Returns:
grads -- A dictionary with the gradients
grads["dA" + str(l)] = ...
grads["dW" + str(l)] = ...
grads["db" + str(l)] = ...
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
# Initializing the backpropagation
### START CODE HERE ### (1 line of code)
dAL = None
### END CODE HERE ###
# Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "AL, Y, caches". Outputs: "grads["dAL"], grads["dWL"], grads["dbL"]
### START CODE HERE ### (approx. 2 lines)
current_cache = None
grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = None
### END CODE HERE ###
for l in reversed(range(L-1)):
# lth layer: (RELU -> LINEAR) gradients.
# Inputs: "grads["dA" + str(l + 2)], caches". Outputs: "grads["dA" + str(l + 1)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)]
### START CODE HERE ### (approx. 5 lines)
current_cache = None
dA_prev_temp, dW_temp, db_temp = None
grads["dA" + str(l + 1)] = None
grads["dW" + str(l + 1)] = None
grads["db" + str(l + 1)] = None
### END CODE HERE ###
return grads
AL, Y_assess, caches = L_model_backward_test_case()
grads = L_model_backward(AL, Y_assess, caches)
print_grads(grads)
Explanation: Expected output with sigmoid:
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td >[[ 0.11017994 0.01105339]
[ 0.09466817 0.00949723]
[-0.05743092 -0.00576154]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.10266786 0.09778551 -0.01968084]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.05729622]] </td>
</tr>
</table>
Expected output with relu:
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td > [[ 0.44090989 0. ]
[ 0.37883606 0. ]
[-0.2298228 0. ]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 0.44513824 0.37371418 -0.10478989]] </td>
</tr>
<tr>
<td > db </td>
<td > [[-0.20837892]] </td>
</tr>
</table>
6.3 - L-Model Backward
Now you will implement the backward function for the whole network. Recall that when you implemented the L_model_forward function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the L_model_backward function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass.
<img src="images/mn_backward.png" style="width:450px;height:300px;">
<caption><center> Figure 5 : Backward pass </center></caption>
Initializing backpropagation:
To backpropagate through this network, we know that the output is,
$A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute dAL $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$.
To do so, use this formula (derived using calculus which you don't need in-depth knowledge of):
python
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL
You can then use this post-activation gradient dAL to keep going backward. As seen in Figure 5, you can now feed in dAL into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a for loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula :
$$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$
For example, for $l=3$ this would store $dW^{[l]}$ in grads["dW3"].
Exercise: Implement backpropagation for the [LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID model.
End of explanation
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate):
Update parameters using gradient descent
Arguments:
parameters -- python dictionary containing your parameters
grads -- python dictionary containing your gradients, output of L_model_backward
Returns:
parameters -- python dictionary containing your updated parameters
parameters["W" + str(l)] = ...
parameters["b" + str(l)] = ...
L = len(parameters) // 2 # number of layers in the neural network
# Update rule for each parameter. Use a for loop.
### START CODE HERE ### (≈ 3 lines of code)
### END CODE HERE ###
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads, 0.1)
print ("W1 = "+ str(parameters["W1"]))
print ("b1 = "+ str(parameters["b1"]))
print ("W2 = "+ str(parameters["W2"]))
print ("b2 = "+ str(parameters["b2"]))
Explanation: Expected Output
<table style="width:60%">
<tr>
<td > dW1 </td>
<td > [[ 0.41010002 0.07807203 0.13798444 0.10502167]
[ 0. 0. 0. 0. ]
[ 0.05283652 0.01005865 0.01777766 0.0135308 ]] </td>
</tr>
<tr>
<td > db1 </td>
<td > [[-0.22007063]
[ 0. ]
[-0.02835349]] </td>
</tr>
<tr>
<td > dA1 </td>
<td > [[ 0.12913162 -0.44014127]
[-0.14175655 0.48317296]
[ 0.01663708 -0.05670698]] </td>
</tr>
</table>
6.4 - Update Parameters
In this section you will update the parameters of the model, using gradient descent:
$$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$
$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$
where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary.
Exercise: Implement update_parameters() to update your parameters using gradient descent.
Instructions:
Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.
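Equations (16)-(17) are an elementwise update applied to every layer in a loop over the parameters dictionary; a runnable sketch on a tiny, made-up one-layer dictionary (not necessarily the exact form the grader expects):
python
import numpy as np
parameters = {"W1": np.ones((2, 3)), "b1": np.zeros((2, 1))}          # illustrative
grads = {"dW1": 0.1 * np.ones((2, 3)), "db1": 0.1 * np.ones((2, 1))}  # illustrative
learning_rate = 0.01
L = len(parameters) // 2
for l in range(1, L + 1):
    parameters["W" + str(l)] = parameters["W" + str(l)] - learning_rate * grads["dW" + str(l)]
    parameters["b" + str(l)] = parameters["b" + str(l)] - learning_rate * grads["db" + str(l)]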
End of explanation
<END_TASK> |
15,573 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Notes for Think Stats by Allen B. Downey
Step1: Chapter 01
Glossary
anecdotal evidence - is an evidence based on personal experience rather than based on well-designed and scrupulous study.
cross-sectional study - is a study that colllects data about a population at a particular point in time.
longitudinal study - is a study that follow the same group repeatedly and collects the data over time.
Chapter 02
Mean - central tendency
$$ \overline{x} = \frac{1}{n} \sum_i x_i \ $$
Step2: Variance
$$ S^2 = \frac{1}{n} \sum_i (x_i - \overline{x})^2 $$
Step3: Standard Deviation
$$ \sigma = \sqrt{S^{2}} $$
Step4: Effect size - Cohen'd
Having groups G1 and G2, with number of elements given as N1 and N2, the effect size is given as
Step5: It is calculated with delta degree of freedom = 1!
Step6: Chapter 03
Probability Mass Function
Probability mass function maps each value to its probability.
Probability of a group always adds to one.
Step7: DataFrame Indexing
Step8: Chapter 04
Percentile Rank
Percentile rank is a metric that presents how big is the subset of the data that the values in the subset
are equal or below any given value.
Step9: Percentile
Percentile is the opposite operation to percentile rank - it maps a percentile rank to a value.
Step10: Cumulative Distribution Function
Cumulative Distribution Function is the function that maps from a value to its percentile rank.
Step11: It can be plotted using .hist()
Step12: Interquartile Range
Interquartile range is the difference between the 75th and 25th percentiles.
It is used as a measure of the spread of a distribution.
Chapter 05
Exponential Distribution
$$ f(x, \lambda ) = \left{\begin{matrix}
\lambda e^{- \lambda x} & x \geq 0
\
0 & x < 0
\end{matrix}\right. $$
Variable $ \lambda $ defines the shape of the distribution.
The exponential distribution is used analyzing a series of events and measure times between them.
If the events are equally likely to occur at any time, the distribution of inverarrival times tends to look like an exponential distribution.
Step13: Normal (Gaussian) Distribution
$$ f(x | \mu, \sigma ^{2} ) = \frac{1}{\sqrt{2\pi\sigma ^{2}}} e^{- \frac{(x - \mu )^{2}}{2 \sigma^{2}}} $$
The Gaussian distribution is described by two variables
Step14: Lognormal Distribution
$$ f(x | \mu, \sigma ^{2} ) = \frac{1}{\sigma x \sqrt{2\pi}} e^{- \frac{(ln(x) - \mu )^{2}}{2 \sigma^{2}}} $$
where
Step15: Pareto Distribution
The Pareto distribution originated from the economics as description of wealth ion the society.
If is often described using so called "Matthew principle"
Step16: Weibull Distribution
The Weibull Distribution is given as
Step17: Different scale below
Step18: Chapter 06
Moments
$k^{th}$ central moment is given as
Step19: Pearson's Correlation
The Pearson's correlation is computed by dividing the deviations by the standard deviations
Step20: Chapter 08
Mean Squared Error
Mean squared error is a way to measure a quality of an estimator.
It is important to mention that it is very sensitive to outliers and large values.
$$ MSE = \frac{1}{n}\sum (Y_{i}-\hat{Y_{i}})^{2} $$
Step21: Chapter 09
T-test
A t-test is an analysis framework used to determine the difference between two sample means from two normally distributed populations with unknown variances.
Chi-Squared Test
Definition
Step22: Exponentially-weighted moving average (EWMA)
Another approach is to calculate weighted average where the most recent values has the highest weight and the weights from previous values drop off exponentially.
The span parameter roughly corresponds to the window size of a moving average. It controls how fast the weights drop off, so it determines the number of points that make a non-negligible contribution to each average. | <ASSISTANT_TASK:>
Python Code:
from typing import List
import numpy as np
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt
import sklearn
% matplotlib inline
Explanation: Notes for Think Stats by Allen B. Downey
End of explanation
sample = [1, 3, 5, 6]
np.mean(sample)
pd.DataFrame(sample).mean()
Explanation: Chapter 01
Glossary
anecdotal evidence - is an evidence based on personal experience rather than based on well-designed and scrupulous study.
cross-sectional study - is a study that colllects data about a population at a particular point in time.
longitudinal study - is a study that follow the same group repeatedly and collects the data over time.
Chapter 02
Mean - central tendency
$$ \overline{x} = \frac{1}{n} \sum_i x_i \ $$
End of explanation
np.var(sample)
# Warning! Pandas variance by default is normalized by N-1!
# That can be changed by using ddof(delta degrees of freedom) = 0
pd.DataFrame(sample).var(ddof = 0)
Explanation: Variance
$$ S^2 = \frac{1}{n} \sum_i (x_i - \overline{x})^2 $$
End of explanation
np.std(sample)
# Warning! Pandas std is calculated with variance by N-1!
# That can be changed by using ddof(delta degrees of freedom) = 0
pd.DataFrame(sample).std(ddof = 0)
Explanation: Standard Deviation
$$ \sigma = \sqrt{S^{2}} $$
End of explanation
def effect_size(g1: pd.DataFrame, g2: pd.DataFrame) -> float:
diff = g1.mean() - g2.mean()
var_g1, var_g2 = g1.var(ddof=1), g2.var(ddof=1)
n1, n2 = len(g1), len(g2)
pooled_var = (var_g1 * (n1 - 1) + var_g2 * (n2 - 1)) / ((n1 - 1) + (n2 - 1))
cohen_d = diff / np.sqrt(pooled_var)
return cohen_d
Explanation: Effect size - Cohen'd
Having groups G1 and G2, with number of elements given as N1 and N2, the effect size is given as:
$$ Cohen'd = \frac{\overline{G1} - \overline{G2}}{\sqrt{(\sigma (G1) \cdot (N1-1) + \sigma (G2) \cdot (N2-1)) / ((N1-1) + (N2-1))}} $$
End of explanation
effect_size(pd.DataFrame([1, 2, 3, 4]), pd.DataFrame([3, 3, 1, 2]))
Explanation: It is calculated with delta degree of freedom = 1!
End of explanation
s = pd.Series([1, 2, 3, 4, 2])
def pmf(series: pd.Series) -> pd.Series:
return series.value_counts().sort_index() / series.count()
pmf(s)
Explanation: Chapter 03
Probability Mass Function
Probability mass function maps each value to its probability.
Probability of a group always adds to one.
End of explanation
array = np.random.randn(4, 2)
array
df = pd.DataFrame(array)
df
columns = ['A', 'B']
df = pd.DataFrame(data=array,
columns=columns)
df
index = ['a', 'b', 'c', 'd']
df = pd.DataFrame(data=array,
columns=columns,
index=index)
df
df['A']
df.loc['a']
df.iloc[0]
indices = ['a', 'c']
df.loc[indices]
df['a':'c']
df[0:2]
df[:2]
df['A'].loc['a']
Explanation: DataFrame Indexing
End of explanation
# Data
a = [1, 2, 3, 3, 4, 5]
stats.percentileofscore(a=a, score=2.5)
stats.percentileofscore(a=a, score=2)
stats.percentileofscore(a=a, score=3)
Explanation: Chapter 04
Percentile Rank
Percentile rank is a metric that presents how big is the subset of the data that the values in the subset
are equal or below any given value.
End of explanation
np.percentile(a=a, q=50)
np.percentile(a=a, q=70)
# Pandas uses quantiles with different interpolation methods.
pd.DataFrame(data=a).quantile(q=0.5)
Explanation: Percentile
Percentile is the opposite operation to percentile rank - it maps a percentile rank to a value.
End of explanation
series = pd.Series(np.random.randn(25))
series.head()
cdf = series.value_counts().sort_index().cumsum()
cdf.plot()
Explanation: Cumulative Distribution Function
Cumulative Distribution Function is the function that maps from a value to its percentile rank.
End of explanation
series.hist(cumulative=True)
Explanation: It can be plotted using .hist()
End of explanation
r1 = np.random.exponential(scale=1.0, size=100000)
r2 = np.random.exponential(scale=0.5, size=100000)
plt.hist(r1, bins = 200)
plt.xlim((0, 10))
plt.show()
plt.hist(r2, bins = 200)
plt.xlim((0, 10))
plt.show()
Explanation: Interquartile Range
Interquartile range is the difference between the 75th and 25th percentiles.
It is used as a measure of the spread of a distribution.
Chapter 05
Exponential Distribution
$$ f(x, \lambda ) = \left{\begin{matrix}
\lambda e^{- \lambda x} & x \geq 0
\
0 & x < 0
\end{matrix}\right. $$
Variable $ \lambda $ defines the shape of the distribution.
The exponential distribution is used for analyzing a series of events and measuring the times between them.
If the events are equally likely to occur at any time, the distribution of interarrival times tends to look like an exponential distribution.
End of explanation
g1 = np.random.normal(loc=0.0, scale=1.0, size=100000)
g2 = np.random.normal(loc=0.0, scale=3.0, size=100000)
plt.hist(g1, bins = 200)
plt.xlim((-10, 10))
plt.show()
plt.hist(g2, bins = 200)
plt.xlim((-10, 10))
plt.show()
Explanation: Normal (Gaussian) Distribution
$$ f(x | \mu, \sigma ^{2} ) = \frac{1}{\sqrt{2\pi\sigma ^{2}}} e^{- \frac{(x - \mu )^{2}}{2 \sigma^{2}}} $$
The Gaussian distribution is described by two variables:
- mean $\mu$
- standard deviation $\sigma$
If $\mu=0$ and $\sigma=1$, the distribution is called standard normal distribution.
The Gaussian distribution approximates a lot of natural pheonomena.
It describes the variability in the data, where the forces behind them is additive.
Physical processes are expected to be the sum of many independant processes and often the have distributions nearly the normal distribution.
End of explanation
l1 = np.random.lognormal(mean=0.0, sigma=0.5, size=100000)
l2= np.random.lognormal(mean=0.0, sigma=1, size=100000)
plt.hist(l1, bins = 1000)
plt.xlim((0, 10))
plt.ylim((0, 4500))
plt.show()
plt.hist(l2, bins = 1000)
plt.xlim((0, 10))
plt.ylim((0, 4500))
plt.show()
Explanation: Lognormal Distribution
$$ f(x | \mu, \sigma ^{2} ) = \frac{1}{\sigma x \sqrt{2\pi}} e^{- \frac{(ln(x) - \mu )^{2}}{2 \sigma^{2}}} $$
where:
- $\mu$ is mean of the corresponding Gaussian distribution
- $\sigma$ is standard deviation of the corresponding Gaussian distribution
The lognormal distribution is similar to the Gaussian distribution.
The difference is that it is assumed that the processes behind the outcome are multiplicative, instead of additive as in the Gaussian distribution.
End of explanation
p1 = (np.random.pareto(1, size=1000) + 1) * 1
p2 = (np.random.pareto(2, size=1000) + 1) * 1
plt.hist(p1, bins=100)
plt.ylim((0, 1000))
plt.show()
plt.hist(p2, bins = 100)
plt.ylim((0, 1000))
plt.show()
Explanation: Pareto Distribution
The Pareto distribution originated in economics as a description of the distribution of wealth in a society.
It is often described using the so-called "Matthew principle": "the rich get richer, the poor get poorer".
The probability density for the Pareto distribution is given as:
$$p(x) = \frac{am^a}{x^{a+1}}$$
where:
- a is the shape
- m is the scale
Pareto distribution given like that can be obtained:
python
p = (np.random.pareto(a, size=1000) + 1) * m
End of explanation
w1 = np.random.weibull(a=0.8, size=1000000)
w2 = np.random.weibull(a=1, size=1000000)
w3 = np.random.weibull(a=5, size=1000000)
w4 = np.random.weibull(a=10, size=1000000)
plt.hist(w1, bins = 200)
plt.xlim((-1, 15))
plt.ylim((0, 200000))
plt.show()
plt.hist(w2, bins = 200)
plt.xlim((-1, 15))
plt.ylim((0, 200000))
plt.show()
Explanation: Weibull Distribution
The Weibull Distribution is given as:
$$ f(x ; \lambda, a) = \left{\begin{matrix}
\frac{a}{\lambda}(\frac{x}{\lambda})^{a-1}e^{-(\frac{x}{\lambda})^{a}}
& x \geq 0\
0 & x < 0
\end{matrix}\right. $$
where:
- a is the shape
- $\lambda$ is the scale
If the quantity X is a "time-to-failure", the Weibull distribution gives a distribution for which the failure rate is proportional to a power of time.
The shape parameter, a, is that power plus one, and so this parameter can be interpreted directly as follows:
1) a < 1 - indicates that the failure rate decreases over time (Lindy effect). This happens if there is significant "infant mortality", or defective items failing early and the failure rate decreasing over time as the defective items are weeded out of the population. In the context of the diffusion of innovations, this means negative word of mouth: the hazard function is a monotonically decreasing function of the proportion of adopters.
2) a = 1 - indicates that the failure rate is constant over time. This might suggest random external events are causing mortality, or failure. The Weibull distribution reduces to an exponential distribution.
3) a > 1 - indicates that the failure rate increases with time. This happens if there is an "aging" process, or parts that are more likely to fail as time goes on. In the context of the diffusion of innovations, this means positive word of mouth: the hazard function is a monotonically increasing function of the proportion of adopters.
End of explanation
plt.hist(w3, bins = 200)
plt.xlim((-1, 5))
plt.ylim((0, 25000))
plt.show()
plt.hist(w4, bins = 200)
plt.xlim((-1, 5))
plt.ylim((0, 25000))
plt.show()
Explanation: Different scale below:
End of explanation
Z = np.array([[0, 2], [1, 1], [2, 0]]).T
Z
np.cov(Z, ddof=1)
Explanation: Chapter 06
Moments
$k^{th}$ central moment is given as:
$$ m_{k} = \frac{1}{n}\sum_{i}^{ }(x_{i} - \bar{x})^{k} $$
The second central moment, when $k = 2 $, is the variance.
If the measured value is in f.e. $cm$, the first moment is also in $cm$, but the second is in $cm^{2}$, the third in $cm^{3}$, the forth $cm^{4}$, and so on.
Skewness
Skewness is a property that describes the shape of a distribution.
If the distribution is focused around its central tendency, it is unskewed.
If the distribution extends farther to the left of the central tendency (a longer left tail), it is described as "left skewed".
If the distribution extends farther to the right of the central tendency (a longer right tail), it is called "right skewed".
Pearson's median skewness coefficient
Pearson's median skewness coefficient is a measure of skewness based on the difference between the sample mean and median:
$$ g_{p}=3 \frac{(\bar x - \tilde{x})}{\sigma} $$
where:
- $ \tilde{x} $ is the median
- $ \sigma $ is the standard deviation
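The coefficient is a direct transcription of the formula above; a small sketch on made-up, right-skewed data:
python
import pandas as pd
x = pd.Series([1, 2, 2, 3, 3, 3, 4, 10])            # illustrative, right-skewed sample
g_p = 3 * (x.mean() - x.median()) / x.std(ddof=0)   # ddof=0 matches the population sigma in the formula
print(g_p)                                          # positive => skewed to the right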
Robustness of a statistic
A statistic is robust if the outliers have relatively small impact on the value of the statistic.
Chapter 07
Correlation
A correlation is a statistic intended to quantify the strength of the relationship between two variables.
Few challenges present themselves during such analysis:
- usually the variables have different units
- usually the variables come from different distributions
There are two common approaches trying to solve the challenges:
- Transforming each value to a standard score (example: Pearson product-moment correlation coefficient)
- Transforming each value to a rank (example: Spearman rank correlation coefficient)
Covariance
Covariance is a measurement of the tendency of two variables to vary together.
It is given as:
$$ Cov(X, Y) = \frac{1}{n-1}\sum (x_{i} - \bar x)(y_{i} - \bar y) $$
where:
- X and Y are two series of the same lengths
End of explanation
stats.spearmanr([1, 2, 3, 4, 5], [5, 6, 7, 8, 7])
Explanation: Pearson's Correlation
The Pearson's correlation is computed by dividing the deviations by the standard deviations:
$$p = \frac{Cov(X, Y)}{\sigma_{X} \sigma_{Y}}$$
Pearson's correlation is always between -1 and +1.
If the value $p$ is positive, the correlated values change in a similar manner: when one is high, the other one tends to be high as well, and when one is low, the other one tends to be low.
If the value $p$ is negative, the values change in opposite directions: when one is high, the other one tends to be low.
The magnitude of the correlation, $p$, describes the strength of the correlation, where 1 is a perfect positive correlation.
Pearson's correlation works well only when the relationship between the variables is roughly linear.
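scipy exposes Pearson's coefficient directly; a quick sketch on the same toy data as the stats.spearmanr example shown above:
python
from scipy import stats
r, p_value = stats.pearsonr([1, 2, 3, 4, 5], [5, 6, 7, 8, 7])
print(r, p_value)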
Spearman's Rank
Spearman's rank correlation is more robust than Pearson's correlation. It mitigates the effect of outliers and skewed distributions.
If the relationship is nonlinear, Pearson's correlation tends to underestimate the strength of the relationship.
End of explanation
from sklearn.metrics import mean_squared_error
y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]
mean_squared_error(y_true, y_pred)
Explanation: Chapter 08
Mean Squared Error
Mean squared error is a way to measure a quality of an estimator.
It is important to mention that it is very sensitive to outliers and large values.
$$ MSE = \frac{1}{n}\sum (Y_{i}-\hat{Y_{i}})^{2} $$
End of explanation
trend = pd.Series([1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1])
noise = pd.Series(np.random.random(11))
season = trend + noise
season.name = 'season'
two_day_window = season.rolling(window=2).mean()
two_day_window.name = 'rolling mean'
two_day_window
plt.figure(figsize=(7, 7))
plt.plot(season)
plt.plot(two_day_window)
plt.legend()
Explanation: Chapter 09
T-test
A t-test is an analysis framework used to determine the difference between two sample means from two normally distributed populations with unknown variances.
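A minimal sketch of such a test with scipy on made-up samples (pass equal_var=False for the Welch variant if the variances are not assumed equal):
python
import numpy as np
from scipy import stats
np.random.seed(0)
a = np.random.normal(loc=0.0, scale=1.0, size=100)  # illustrative sample 1
b = np.random.normal(loc=0.3, scale=1.0, size=100)  # illustrative sample 2
t_stat, p_value = stats.ttest_ind(a, b)             # two-sample t-test
print(t_stat, p_value)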
Chi-Squared Test
Definition:
https://www.chegg.com/homework-help/definitions/chi-square-test-14
Example of chi-squared test methodology.
https://www.spss-tutorials.com/chi-square-independence-test/
Holm–Bonferroni method
Holm-Bonferroni method is used to counteract the problem of multiple comparisons.
Errors
In hypothesis testing, there are two types of error one can make:
- false positive - assuming that something is significant, when, in reality, it is not.
- false negative - assuming that something is not significant when it is.
Chapter 10
Least Squares Fit
Least square fit is given as:
$$ y = ax + b $$
where:
- a - slope
- b - intercept
It is a good approach to estimate an unknown value or correlation between values if the relation is linear.
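A least squares line can be fitted in one call; a sketch on made-up data:
python
import numpy as np
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # illustrative data
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
a, b = np.polyfit(x, y, 1)                # least squares fit of y = a*x + b
print(a, b)                               # slope and intercept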
Coefficient of determination
Another way to measure goodness of fit is the coefficient of determination, known as $R^2$ and called R-squared:
There is a relationship between the Pearson's coefficient of correlation:
$$ R^2 = p^2 $$
Thus, if Pearson's correlation is 0.5 or -0.5, then the R-squared is 0.25.
Chapter 11
Logistic Regression
Linear regression can be generalized to handle various kind of dependent variables.
Types of variables
Endogenous variables are dependent variables, they are kind of variables one would like to predict.
Exogenous variables are explanatory variables, which are variables used to predict or explain dependent variables.
Chapter 12
Vocabulary
trend = a smooth function that captures persistent changes
seasonality = periodic variation (possibly daily, weekly, monthly, yearly cycles)
noise = random variations around a longterm trend
Moving average
One simple way to measure seasonality is a moving average.
It is computed by calculating the mean over a certain window and then moving the window forward, usually by the smallest period.
End of explanation
ewma2 = season.ewm(span=2).mean()
ewma2.name = 'ewma2'
ewma3 = season.ewm(span=3).mean()
ewma3.name = 'ewma3'
ewma5 = season.ewm(span=5).mean()
ewma5.name = 'ewma5'
plt.figure(figsize=(7, 7))
plt.plot(season)
plt.plot(ewma2)
plt.plot(ewma3)
plt.plot(ewma5)
plt.legend()
Explanation: Exponentially-weighted moving average (EWMA)
Another approach is to calculate weighted average where the most recent values has the highest weight and the weights from previous values drop off exponentially.
The span parameter roughly corresponds to the window size of a moving average. It controls how fast the weights drop off, so it determines the number of points that make a non-negligible contribution to each average.
End of explanation
<END_TASK> |
15,574 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
MNIST Preprocessing
Joeri Hermans (Technical Student, IT-DB-SAS, CERN)
Departement of Knowledge Engineering
Maastricht University, The Netherlands
Step1: Preparation
To get started, we first load all the required imports. Please make sure you installed dist-keras, and seaborn. Furthermore, we assume that you have access to an installation which provides Apache Spark.
Before you start this notebook, place the MNIST dataset (which is provided in a zip in examples/data within this repository) on HDFS. Or in the case HDFS is not available, place it on the local filesystem. But make sure the path to the file is identical for all computing nodes.
Step2: In the following cell, adapt the parameters to fit your personal requirements.
Step3: As shown in the output of the cell above, we see that every pixel is associated with a seperate column. In order to ensure compatibility with Apache Spark, we vectorize the columns, and add the resulting vectors as a seperate column. However, in order to achieve this, we first need a list of the required columns. This is shown in the cell below.
Step4: Once we have a list of columns names, we can pass this to Spark's VectorAssembler. This VectorAssembler will take a list of features, vectorize them, and place them in a column defined in outputCol.
Step5: Once we have the inputs for our Neural Network (features column) after applying the VectorAssembler, we should also define the outputs. Since we are dealing with a classification task, the output of our Neural Network should be a one-hot encoded vector with 10 elements. For this, we provide a OneHotTransformer which accomplish this exact task.
Step6: MNIST
MNIST is a dataset of handwritten digits. Every image is a 28 by 28 pixel grayscale image. This means that every pixel has a value between 0 and 255. Some examples of instances within this dataset are shown in the cells below.
Normalization
In this Section, we will normalize the feature vectors between the 0 and 1 range.
Step7: Convolutions
In order to make the dense vectors compatible with convolution operations in Keras, we add another column which contains the matrix form of these images. We provide a utility class (MatrixTransformer), which helps you with this.
Step8: Dense Transformation
At the moment, dist-keras does not support SparseVectors due to the numpy dependency. As a result, we have to convert the SparseVector to a DenseVector. We added a simple utility transformer which does this for you.
Step9: Artificial Enlargement
We want to make the dataset 100 times larger to simulate larger datasets, and to evaluate optimizer performance.
Step10: Writing to HDFS
In order to prevent constant preprocessing, and ensure optimizer performance, we write the data to HDFS in a Parquet format. | <ASSISTANT_TASK:>
Python Code:
!(date +%d\ %B\ %G)
Explanation: MNIST Preprocessing
Joeri Hermans (Technical Student, IT-DB-SAS, CERN)
Departement of Knowledge Engineering
Maastricht University, The Netherlands
End of explanation
%matplotlib inline
import numpy as np
import seaborn as sns
import time
from pyspark import SparkContext
from pyspark import SparkConf
from matplotlib import pyplot as plt
from pyspark.ml.feature import StandardScaler
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.feature import OneHotEncoder
from pyspark.ml.feature import MinMaxScaler
from pyspark.ml.feature import StringIndexer
from distkeras.transformers import *
from distkeras.utils import *
Explanation: Preparation
To get started, we first load all the required imports. Please make sure you installed dist-keras, and seaborn. Furthermore, we assume that you have access to an installation which provides Apache Spark.
Before you start this notebook, place the MNIST dataset (which is provided in a zip in examples/data within this repository) on HDFS. Or in the case HDFS is not available, place it on the local filesystem. But make sure the path to the file is identical for all computing nodes.
End of explanation
# Modify these variables according to your needs.
application_name = "MNIST Preprocessing"
using_spark_2 = False
local = False
path_train = "data/mnist_train.csv"
path_test = "data/mnist_test.csv"
if local:
# Tell master to use local resources.
master = "local[*]"
num_processes = 3
num_executors = 1
else:
# Tell master to use YARN.
master = "yarn-client"
num_executors = 20
num_processes = 1
# This variable is derived from the number of cores and executors, and will be used to assign the number of model trainers.
num_workers = num_executors * num_processes
print("Number of desired executors: " + `num_executors`)
print("Number of desired processes / executor: " + `num_processes`)
print("Total number of workers: " + `num_workers`)
import os
# Use the DataBricks CSV reader, this has some nice functionality regarding invalid values.
os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages com.databricks:spark-csv_2.10:1.4.0 pyspark-shell'
conf = SparkConf()
conf.set("spark.app.name", application_name)
conf.set("spark.master", master)
conf.set("spark.executor.cores", `num_processes`)
conf.set("spark.executor.instances", `num_executors`)
conf.set("spark.executor.memory", "20g")
conf.set("spark.yarn.executor.memoryOverhead", "2")
conf.set("spark.locality.wait", "0")
conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer");
# Check if the user is running Spark 2.0 +
if using_spark_2:
sc = SparkSession.builder.config(conf=conf) \
.appName(application_name) \
.getOrCreate()
else:
# Create the Spark context.
sc = SparkContext(conf=conf)
# Add the missing imports
from pyspark import SQLContext
sqlContext = SQLContext(sc)
# Record time of starting point.
time_start = time.time()
# Check if we are using Spark 2.0
if using_spark_2:
reader = sc
else:
reader = sqlContext
# Read the training set.
raw_dataset_train = reader.read.format('com.databricks.spark.csv') \
.options(header='true', inferSchema='true') \
.load(path_train)
# Read the test set.
raw_dataset_test = reader.read.format('com.databricks.spark.csv') \
.options(header='true', inferSchema='true') \
.load(path_test)
# Repartition the datasets.
raw_dataset_train = raw_dataset_train.repartition(num_workers)
raw_dataset_test = raw_dataset_test.repartition(num_workers)
Explanation: In the following cell, adapt the parameters to fit your personal requirements.
End of explanation
# First, we would like to extract the desired features from the raw dataset.
# We do this by constructing a list with all desired columns.
features = raw_dataset_train.columns
features.remove('label')
Explanation: As shown in the output of the cell above, we see that every pixel is associated with a seperate column. In order to ensure compatibility with Apache Spark, we vectorize the columns, and add the resulting vectors as a seperate column. However, in order to achieve this, we first need a list of the required columns. This is shown in the cell below.
End of explanation
# Next, we use Spark's VectorAssembler to "assemble" (create) a vector of all desired features.
# http://spark.apache.org/docs/latest/ml-features.html#vectorassembler
vector_assembler = VectorAssembler(inputCols=features, outputCol="features")
# This transformer will take all columns specified in features, and create an additional column "features" which will contain all the desired features aggregated into a single vector.
training_set = vector_assembler.transform(raw_dataset_train)
test_set = vector_assembler.transform(raw_dataset_test)
Explanation: Once we have a list of columns names, we can pass this to Spark's VectorAssembler. This VectorAssembler will take a list of features, vectorize them, and place them in a column defined in outputCol.
End of explanation
# Define the number of output classes.
nb_classes = 10
encoder = OneHotTransformer(nb_classes, input_col="label", output_col="label_encoded")
training_set = encoder.transform(training_set)
test_set = encoder.transform(test_set)
Explanation: Once we have the inputs for our Neural Network (features column) after applying the VectorAssembler, we should also define the outputs. Since we are dealing with a classification task, the output of our Neural Network should be a one-hot encoded vector with 10 elements. For this, we provide a OneHotTransformer which accomplish this exact task.
End of explanation
# Clear the datasets in the case you ran this cell before.
training_set = training_set.select("features", "label", "label_encoded")
test_set = test_set.select("features", "label", "label_encoded")
# Allocate a MinMaxTransformer using Distributed Keras.
# o_min -> original_minimum
# n_min -> new_minimum
transformer = MinMaxTransformer(n_min=0.0, n_max=1.0, \
o_min=0.0, o_max=250.0, \
input_col="features", \
output_col="features_normalized")
# Transform the datasets.
training_set = transformer.transform(training_set)
test_set = transformer.transform(test_set)
Explanation: MNIST
MNIST is a dataset of handwritten digits. Every image is a 28 by 28 pixel grayscale image. This means that every pixel has a value between 0 and 255. Some examples of instances within this dataset are shown in the cells below.
Normalization
In this Section, we will normalize the feature vectors between the 0 and 1 range.
End of explanation
reshape_transformer = ReshapeTransformer("features_normalized", "matrix", (28, 28, 1))
training_set = reshape_transformer.transform(training_set)
test_set = reshape_transformer.transform(test_set)
Explanation: Convolutions
In order to make the dense vectors compatible with convolution operations in Keras, we add another column which contains the matrix form of these images. We provide a utility class (MatrixTransformer), which helps you with this.
End of explanation
dense_transformer = DenseTransformer(input_col="features_normalized", output_col="features_normalized_dense")
training_set = dense_transformer.transform(training_set)
test_set = dense_transformer.transform(test_set)
Explanation: Dense Transformation
At the moment, dist-keras does not support SparseVectors due to the numpy dependency. As a result, we have to convert the SparseVector to a DenseVector. We added a simple utility transformer which does this for you.
End of explanation
df = training_set
expansion = 10
for i in range(0, expansion):
df = df.unionAll(training_set)
training_set = df
training_set.cache()
Explanation: Artificial Enlargement
We want to make the dataset 100 times larger to simulate larger datasets, and to evaluate optimizer performance.
End of explanation
training_set.write.parquet("data/mnist_train.parquet")
test_set.write.parquet("data/mnist_test.parquet")
# Record end of transformation.
time_end = time.time()
dt = time_end - time_start
print("Took " + str(dt) + " seconds.")
!hdfs dfs -rm -r data/mnist_test.parquet
!hdfs dfs -rm -r data/mnist_train.parquet
Explanation: Writing to HDFS
In order to prevent constant preprocessing, and ensure optimizer performance, we write the data to HDFS in a Parquet format.
End of explanation
<END_TASK> |
15,575 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Mission trend for upper limit on aspect solution RMS error
Using the method described in the sim_asol_centroids notebook, this plots the mission trend of the upper limit on 1-axis aspect solution RMS error. This shows that nearly all observations are comfortably below the SE31 image reconstruction budget. However there
is a detectable trend upward that correlates with increase ACA CCD temperature, and
some outliers are approaching or have exceeded the limit.
Obsid 14462 was noted independently because of large yaw gyro biases. Obsid 14557
was highlighted in V&V for extremely large centroid residuals. The root cause
was not fully understood, but perhaps this was the
result of a very unlucky configuration of CCD hot pixels.
<img src="asol_rms_err_upper_limit.png">
Step1: Figure 1 - Using the method described in the sim_asol_centroids notebook, this plot shows the mission trend of the upper limit on 1-axis aspect solution RMS error. | <ASSISTANT_TASK:>
Python Code:
from mica.vv import get_rms_data
from astropy.table import Table
import matplotlib.pyplot as plt
from Ska.Matplotlib import plot_cxctime
from astropy.time import Time
%matplotlib inline
# Get the mica table of V&V data
dat = Table(get_rms_data())
# Filter table to select only useful and final data
datm = dat['obsid isdefault tstart used slot type dy_rms dz_rms'.split()]
ok = (dat['type'] == 'GUIDE') & (dat['used'] == 1) & (dat['isdefault'] == 1)
datm = datm[ok]
datm.sort('obsid')
datm
# Group by obsid
datmg = datm['obsid tstart dy_rms dz_rms'.split()].group_by('obsid')
datmg.groups[1]
# Select the minimum within each obsid
datmg_min = datmg.groups.aggregate(np.min)
# Make the plot
dyz_rms = np.maximum(datmg_min['dy_rms'], datmg_min['dz_rms'])
for sym, alpha in (('.b', 0.8), (',r', 0.5), (',y', 0.15)):
plot_cxctime(datmg_min['tstart'], dyz_rms, sym, alpha=alpha)
plt.grid()
x0, x1 = plt.xlim()
plt.hlines(0.177, x0, x1, linestyles='--', colors='r')
plt.text(x0, 0.177, ' SE31 image reconstruction budget', va='bottom', ha='left')
plt.text(Time('2013:240').plot_date, 0.21, 'Obsid 14557', ha='right')
plt.text(Time('2012:180').plot_date, 0.255, 'Obsid 14462', ha='right')
plt.ylabel('Error upper limit (arcsec)');
plt.title('Aspect solution 1-axis RMS error upper limit vs. time');
plt.savefig('asol_rms_err_upper_limit.png');
Explanation: Mission trend for upper limit on aspect solution RMS error
Using the method described in the sim_asol_centroids notebook, this plots the mission trend of the upper limit on 1-axis aspect solution RMS error. This shows that nearly all observations are comfortably below the SE31 image reconstruction budget. However there
is a detectable upward trend that correlates with increasing ACA CCD temperature, and
some outliers are approaching or have exceeded the limit.
Obsid 14462 was noted independently because of large yaw gyro biases. Obsid 14557
was highlighted in V&V for extremely large centroid residuals. The root cause
was not fully understood, but perhaps this was the
result of a very unlucky configuration of CCD hot pixels.
<img src="asol_rms_err_upper_limit.png">
End of explanation
# Show outliers
ok = dyz_rms > 0.2
datmg_min[ok]
Explanation: Figure 1 - Using the method described in the sim_asol_centroids notebook, this plot shows the mission trend of the upper limit on 1-axis aspect solution RMS error.
End of explanation
<END_TASK> |
15,576 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Linear Regression Tutorial
Author
Step1: Now let's read the first set of data and make a simple scatter plot.
Step2: Luckily for us, we do not need to implement linear regression, since scikit-learn already has a very efficient implementation. The straight line can be seen in the plot below, showing how linear regression attempts to draw a straight line that will best minimize the residual sum of squares between the observed responses in the dataset, and the responses predicted by the linear approximation.
The coefficients, the residual sum of squares and the variance score are also calculated.
Note
Step3: Residuals
From the notes, we learnt that we use ordinary linear regression when y is dependent on x, since the algorithm minimizes the vertical residuals (y_observed - y_predicted). The figure below outlines this using a different method for linear regression (using polyfit with a degree-1 polynomial).
Step4: Now let us plot the residual (y - y predicted) vs x.
Step5: As seen in the histogram, the residual error should be (somewhat) normally distributed and centered around zero. This post explains why.
If the residuals are not randomly distributed around zero, consider applying a transform to the data or applying non-linear regression. In addition to looking at the residuals, one could use the statsmodels library to take a statistical approach to ordinary least squares regression.
Step6: The important parts of the summary are the
Step7: If we want to be even fancier, we can use the seaborn library to plot linear regression with marginal distributions, which also states the pearsonr and p-value on the plot. Using the statsmodels approach is more rigorous, but sns provides quick visualizations.
Step8: Usually we calculate the (vertical) residual, or the difference between the observed and predicted y. This is because "the use of the least squares method to calculate the best-fitting line through a two-dimensional scatter plot typically requires the user to assume that one of the variables depends on the other. (We calculate the difference in the y) However, in many cases the relationship between the two variables is more complex, and it is not valid to say that one variable is independent and the other is dependent. When analysing such data researchers should consider plotting the three regression lines that can be calculated for any two-dimensional scatter plot."
Regression using Horizontal Residual
If X is dependent on y, then the regression line can be made based on horizontal residuals as shown below.
Step9: Total Least Squares Regression
Finally, a line of best fit can be made using Total least squares regression, a least squares data modeling technique in which observational errors on both dependent and independent variables are taken into account. This is done by minimizing the errors perpendicular to the line, rather than just vertically. It is more complicated to implement than standard linear regression, but there is Fortran code called ODRPACK that has this efficiently implemented and is wrapped in the scipy.odr Python module (which can be used out of the box). The details of odr are in the Scipy documentation and in even more detail in the ODRPACK guide.
In the code below (inspired from here) we use an initial guess for the parameters and make a fit using total least squares regression.
Step10: Plotting all three regression lines gives a fuller picture of the data, and comparing their slopes provides a simple graphical assessment of the correlation coefficient. Plotting the orthogonal regression line (red) provides additional information because it makes no assumptions about the dependence or independence of the variables; as such, it appears to more accurately describe the trend in the data compared to either of the ordinary least squares regression lines. | <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from math import log
from sklearn import linear_model
#comment below if not using ipython notebook
%matplotlib inline
Explanation: Linear Regression Tutorial
Author: Andrew Andrade (andrew@andrewandrade.ca)
This is part one of a series of tutorials related to regression used in data science. The corresponding notes can be found here.
In this tutorial, we will first learn to fit a simple line using Least Squares Linear Regression (LSLR), plot residuals and their distribution, take a statistical approach to linear regression, fit with horizontal residuals, and end with total least squares linear regression.
Part 1: Fitting a line using LSLR
First let us import the necessary libraries and read the data file. You can follow along by downloading the dataset from here: TODO.
End of explanation
#read csv
anscombe_i = pd.read_csv('../datasets/anscombe_i.csv')
plt.scatter(anscombe_i.x, anscombe_i.y, color='black')
plt.ylabel("Y")
plt.xlabel("X")
Explanation: Now let's read the first set of data and make a simple scatter plot.
End of explanation
regr_i = linear_model.LinearRegression()
#We need to reshape the data to be a matrix
# with only one column
X = anscombe_i.x.reshape((len(anscombe_i.x), 1))
y = anscombe_i.y.reshape((len(anscombe_i.y), 1))
#Fit a line
regr_i.fit(X,y)
# The coefficients
print('Coefficients: \n', regr_i.coef_)
# The mean square error
print("Residual sum of squares: %.2f"
% np.mean((regr_i.predict(X) - y) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr_i.score(X, y))
plt.plot(X,regr_i.predict(X), color='green',
linewidth=3)
plt.scatter(anscombe_i.x, anscombe_i.y, color='black')
plt.ylabel("Y")
plt.xlabel("X")
Explanation: Luckily for us, we do not need to implement linear regression, since scikit-learn already has a very efficient implementation. The straight line can be seen in the plot below, showing how linear regression attempts to draw a straight line that will best minimize the residual sum of squares between the observed responses in the dataset, and the responses predicted by the linear approximation.
The coefficients, the residual sum of squares and the variance score are also calculated.
Note: from reading the documentation this method computes the least squares solution using a singular value decomposition of X. If X is a matrix of size (n, p) this method has a cost of O($n p^2$), assuming that $n \geq p$. A more efficient alternative (for large number of features) is to use Stochastic Gradient Descent or another method outlined in the linear models documentation
If you do not know what big-O notation is, please read the background information from the notes (or take an algorithms course).
End of explanation
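# Optional illustration of the SGD alternative mentioned above (a sketch; the exact
# import path and parameters depend on your scikit-learn version). In practice the
# feature(s) should be standardized before using stochastic gradient descent.
from sklearn.linear_model import SGDRegressor
from sklearn.preprocessing import StandardScaler
X_scaled = StandardScaler().fit_transform(X)
sgd = SGDRegressor()
sgd.fit(X_scaled, np.ravel(y))
print('SGD coefficients:', sgd.coef_, 'intercept:', sgd.intercept_)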
from pylab import *
# determine the line-fit
k,d = polyfit(anscombe_i.x,y,1)
yfit = k*anscombe_i.x+d
# plot the data
figure(1)
scatter(anscombe_i.x,y, color='black')
plot(anscombe_i.x, yfit, 'green')
#plot line from point to regression line
for ii in range(len(X)):
plot([anscombe_i.x[ii], anscombe_i.x[ii]], [yfit[ii], y[ii]], 'k')
xlabel('X')
ylabel('Y')
Explanation: Residuals
From the notes, we learnt that we use ordinary linear regression when y is dependent on x, since the algorithm minimizes the vertical residuals (y_observed - y_predicted). The figure below outlines this using a different method for linear regression (using polyfit with a degree-1 polynomial).
End of explanation
import pylab as P
figure(1)
scatter(anscombe_i.x,y, color='black')
plot(anscombe_i.x, yfit, 'green')
#plot line from point to regression line
for ii in range(len(X)):
plot([anscombe_i.x[ii], anscombe_i.x[ii]], [yfit[ii], y[ii]], 'k')
xlabel('X')
ylabel('Y')
residual_error= anscombe_i.y - yfit
error_mean = np.mean(residual_error)
error_sigma = np.std(residual_error)
plt.figure(2)
plt.scatter(anscombe_i.x,residual_error,label='residual error')
plt.xlabel("X")
plt.ylabel("residual error")
plt.figure(3)
n, bins, patches = plt.hist(residual_error, 10, normed=1, facecolor='blue', alpha=0.75)
y_pdf = P.normpdf( bins, error_mean, error_sigma)
l = P.plot(bins, y_pdf, 'k--', linewidth=1.5)
plt.xlabel("residual error in y")
plt.title("Residual Distribution")
Explanation: Now let us plot the residual (y - y predicted) vs x.
End of explanation
# load statsmodels as alias ``sm``
import statsmodels.api as sm
y = anscombe_i.y
X = anscombe_i.x
# Adds a constant term to the predictor
# y = mx +b
X = sm.add_constant(X)
#fit ordinary least squares
est = sm.OLS(y, X)
est = est.fit()
est.summary()
Explanation: As seen in the histogram, the residual error should be (somewhat) normally distributed and centered around zero. This post explains why.
If the residuals are not randomly distributed around zero, consider applying a transform to the data or applying non-linear regression. In addition to looking at the residuals, one could use the statsmodels library to take a statistical approach to ordinary least squares regression.
End of explanation
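# Optional check of the normality assumption (a sketch using the residual_error
# computed earlier): a Q-Q plot and a Shapiro-Wilk test from scipy.stats.
from scipy import stats
plt.figure()
stats.probplot(residual_error, dist="norm", plot=plt)
plt.title("Q-Q plot of residuals")
w_stat, p_value = stats.shapiro(residual_error)
print("Shapiro-Wilk W = %.3f, p-value = %.3f" % (w_stat, p_value))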
plt.scatter(anscombe_i.x, anscombe_i.y, color='black')
X_prime = np.linspace(min(anscombe_i.x), max(anscombe_i.x), 100)[:, np.newaxis]
# add constant as we did before
X_prime = sm.add_constant(X_prime)
y_hat = est.predict(X_prime)
# Add the regression line (provides same as above)
plt.plot(X_prime[:, 1], y_hat, 'r')
Explanation: The important parts of the summary are the:
R-squared (or coefficient of determination), which is the statistical measure of how well the regression line approximates the real data points.
Adj. R-squared (adjusted based on the number of observations and the degrees-of-freedom of the residuals)
P > |t| which is the P-value that the null-hypothesis that the coefficient = 0 is true. If it is less than the confidence level, often 0.05, it indicates that there is a statistically significant relationship between the term and the response.
[95.0% Conf. Interval] The lower and upper values. See here for more details
If these measures do not make sense to you, consider learning or revising statistics. http://onlinestatbook.com or http://stattrek.com/tutorials/ap-statistics-tutorial.aspx are great free resources which outline all the necessary background to be a great statistician and data scientist. Both http://onlinestatbook.com/2/regression/inferential.html, and http://stattrek.com/regression/slope-confidence-interval.aspx?Tutorial=AP provide the specifics of confidence intervals for linear regression.
We can now plot the fitted line to the data and observe the same results as the previous two methods for linear regression.
End of explanation
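# The same quantities can be pulled out programmatically from the fitted results
# object (a sketch using the `est` fitted above).
print("R-squared: %.3f" % est.rsquared)
print("Adjusted R-squared: %.3f" % est.rsquared_adj)
print("p-values:")
print(est.pvalues)
print("95 percent confidence intervals:")
print(est.conf_int())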
import seaborn as sns
#this just makes the plots pretty (in my opinion)
sns.set(style="darkgrid", color_codes=True)
g = sns.jointplot("x", "y", data=anscombe_i, kind="reg",
xlim=(0, 20), ylim=(0, 12), color="r", size=7)
Explanation: If we want to be even fancier, we can use the seaborn library to plot linear regression with marginal distributions, which also states the pearsonr and p-value on the plot. Using the statsmodels approach is more rigorous, but sns provides quick visualizations.
End of explanation
X = anscombe_i.x.reshape((len(anscombe_i.x), 1))
y = anscombe_i.y.reshape((len(anscombe_i.y), 1))
k,d = polyfit(anscombe_i.y,anscombe_i.x,1)
xfit = k*y+d
figure(2)
# plot the data
scatter(anscombe_i.x,y, color='black')
plot(xfit, y, 'blue')
for ii in range(len(y)):
plot([xfit[ii], anscombe_i.x[ii]], [y[ii], y[ii]], 'k')
xlabel('X')
ylabel('Y')
Explanation: Usually we calculate the (vertical) residual, or the difference between the observed and predicted y. This is because "the use of the least squares method to calculate the best-fitting line through a two-dimensional scatter plot typically requires the user to assume that one of the variables depends on the other. (We calculate the difference in the y) However, in many cases the relationship between the two variables is more complex, and it is not valid to say that one variable is independent and the other is dependent. When analysing such data researchers should consider plotting the three regression lines that can be calculated for any two-dimensional scatter plot."
Regression using Horizontal Residual
If X is dependent on y, then the regression line can be made based on horizontal residuals as shown below.
End of explanation
from scipy.odr import Model, Data, ODR
from scipy.stats import linregress
import numpy as np
def orthoregress(x, y):
# get initial guess by first running linear regression
linregression = linregress(x, y)
model = Model(fit_function)
data = Data(x, y)
od = ODR(data, model, beta0=linregression[0:2])
out = od.run()
return list(out.beta)
def fit_function(p, x):
#return y = m x + b
return (p[0] * x) + p[1]
m, b = orthoregress(anscombe_i.x, anscombe_i.y)
# determine the line-fit
y_ortho_fit = m*anscombe_i.x+b
# plot the data
scatter(anscombe_i.x,anscombe_i.y, color = 'black')
plot(anscombe_i.x, y_ortho_fit, 'r')
xlabel('X')
ylabel('Y')
Explanation: Total Least Squares Regression
Finally, a line of best fit can be made using Total least squares regression, a least squares data modeling technique in which observational errors on both dependent and independent variables are taken into account. This is done by minimizing the errors perpendicular to the line, rather than just vertically. It is more complicated to implement than standard linear regression, but there is Fortran code called ODRPACK that has this efficiently implemented and is wrapped in the scipy.odr Python module (which can be used out of the box). The details of odr are in the Scipy documentation and in even more detail in the ODRPACK guide.
In the code below (inspired from here) we use an initial guess for the parameters and make a fit using total least squares regression.
End of explanation
scatter(anscombe_i.x,anscombe_i.y,color = 'black')
plot(xfit, anscombe_i.y, 'b', label= "horizontal residuals")
plot(anscombe_i.x, yfit, 'g', label= "vertical residuals")
plot(anscombe_i.x, y_ortho_fit, 'r', label = "perpendicular residuals" )
plt.legend(bbox_to_anchor=(0., 1.02, 1., .102), loc=3,
ncol=2, mode="expand", borderaxespad=0.)
Explanation: Plotting all three regression lines gives a fuller picture of the data, and comparing their slopes provides a simple graphical assessment of the correlation coefficient. Plotting the orthogonal regression line (red) provides additional information because it makes no assumptions about the dependence or independence of the variables; as such, it appears to more accurately describe the trend in the data compared to either of the ordinary least squares regression lines.
End of explanation
<END_TASK> |
15,577 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Convolutional Neural Networks
Step1: Run the next cell to load the "SIGNS" dataset you are going to use.
Step2: As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5.
<img src="images/SIGNS.png" style="width
Step3: In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it.
To get started, let's examine the shapes of your data.
Step5: 1.1 - Create placeholders
TensorFlow requires that you create placeholders for the input data that will be fed into the model when running the session.
Exercise
Step7: Expected Output
<table>
<tr>
<td>
X = Tensor("Placeholder
Step9: Expected Output
Step11: Expected Output
Step13: Expected Output
Step14: Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code!
Step15: Expected output | <ASSISTANT_TASK:>
Python Code:
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from scipy import ndimage
import tensorflow as tf
from tensorflow.python.framework import ops
from cnn_utils import *
%matplotlib inline
np.random.seed(1)
Explanation: Convolutional Neural Networks: Application
Welcome to Course 4's second assignment! In this notebook, you will:
Implement helper functions that you will use when implementing a TensorFlow model
Implement a fully functioning ConvNet using TensorFlow
After this assignment you will be able to:
Build and train a ConvNet in TensorFlow for a classification problem
We assume here that you are already familiar with TensorFlow. If you are not, please refer to the TensorFlow Tutorial of the third week of Course 2 ("Improving deep neural networks").
1.0 - TensorFlow model
In the previous assignment, you built helper functions using numpy to understand the mechanics behind convolutional neural networks. Most practical applications of deep learning today are built using programming frameworks, which have many built-in functions you can simply call.
As usual, we will start by loading in the packages.
End of explanation
# Loading the data (signs)
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
Explanation: Run the next cell to load the "SIGNS" dataset you are going to use.
End of explanation
# Example of a picture
index = 6
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
Explanation: As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5.
<img src="images/SIGNS.png" style="width:800px;height:300px;">
The next cell will show you an example of a labelled image in the dataset. Feel free to change the value of index below and re-run to see different examples.
End of explanation
X_train = X_train_orig/255.
X_test = X_test_orig/255.
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
conv_layers = {}
Explanation: In Course 2, you had built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it.
To get started, let's examine the shapes of your data.
End of explanation
# GRADED FUNCTION: create_placeholders
def create_placeholders(n_H0, n_W0, n_C0, n_y):
Creates the placeholders for the tensorflow session.
Arguments:
n_H0 -- scalar, height of an input image
n_W0 -- scalar, width of an input image
n_C0 -- scalar, number of channels of the input
n_y -- scalar, number of classes
Returns:
X -- placeholder for the data input, of shape [None, n_H0, n_W0, n_C0] and dtype "float"
Y -- placeholder for the input labels, of shape [None, n_y] and dtype "float"
### START CODE HERE ### (≈2 lines)
X = tf.placeholder(tf.float32, shape=[None, n_H0, n_W0, n_C0])
Y = tf.placeholder(tf.float32, shape=[None, n_y])
### END CODE HERE ###
return X, Y
X, Y = create_placeholders(64, 64, 3, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))
Explanation: 1.1 - Create placeholders
TensorFlow requires that you create placeholders for the input data that will be fed into the model when running the session.
Exercise: Implement the function below to create placeholders for the input image X and the output Y. You should not define the number of training examples for the moment. To do so, you could use "None" as the batch size, it will give you the flexibility to choose it later. Hence X should be of dimension [None, n_H0, n_W0, n_C0] and Y should be of dimension [None, n_y]. Hint.
End of explanation
# GRADED FUNCTION: initialize_parameters
def initialize_parameters():
Initializes weight parameters to build a neural network with tensorflow. The shapes are:
W1 : [4, 4, 3, 8]
W2 : [2, 2, 8, 16]
Returns:
parameters -- a dictionary of tensors containing W1, W2
tf.set_random_seed(1) # so that your "random" numbers match ours
### START CODE HERE ### (approx. 2 lines of code)
W1 = tf.get_variable("W1", [4,4,3,8], initializer = tf.contrib.layers.xavier_initializer(seed = 0))
W2 = tf.get_variable("W2", [2,2,8,16], initializer = tf.contrib.layers.xavier_initializer(seed = 0))
### END CODE HERE ###
parameters = {"W1": W1,
"W2": W2}
return parameters
tf.reset_default_graph()
with tf.Session() as sess_test:
parameters = initialize_parameters()
init = tf.global_variables_initializer()
sess_test.run(init)
print("W1 = " + str(parameters["W1"].eval()[1,1,1]))
print("W2 = " + str(parameters["W2"].eval()[1,1,1]))
Explanation: Expected Output
<table>
<tr>
<td>
X = Tensor("Placeholder:0", shape=(?, 64, 64, 3), dtype=float32)
</td>
</tr>
<tr>
<td>
Y = Tensor("Placeholder_1:0", shape=(?, 6), dtype=float32)
</td>
</tr>
</table>
1.2 - Initialize parameters
You will initialize weights/filters $W1$ and $W2$ using tf.contrib.layers.xavier_initializer(seed = 0). You don't need to worry about bias variables as you will soon see that TensorFlow functions take care of the bias. Note also that you will only initialize the weights/filters for the conv2d functions. TensorFlow initializes the layers for the fully connected part automatically. We will talk more about that later in this assignment.
Exercise: Implement initialize_parameters(). The dimensions for each group of filters are provided below. Reminder - to initialize a parameter $W$ of shape [1,2,3,4] in Tensorflow, use:
python
W = tf.get_variable("W", [1,2,3,4], initializer = ...)
More Info.
End of explanation
# GRADED FUNCTION: forward_propagation
def forward_propagation(X, parameters):
Implements the forward propagation for the model:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X -- input dataset placeholder, of shape (input size, number of examples)
parameters -- python dictionary containing your parameters "W1", "W2"
the shapes are given in initialize_parameters
Returns:
Z3 -- the output of the last LINEAR unit
# Retrieve the parameters from the dictionary "parameters"
W1 = parameters['W1']
W2 = parameters['W2']
### START CODE HERE ###
# CONV2D: stride of 1, padding 'SAME'
Z1 = tf.nn.conv2d(X,W1, strides = [1,1,1,1], padding = 'SAME')
# RELU
A1 = tf.nn.relu(Z1)
# MAXPOOL: window 8x8, stride 8, padding 'SAME'
P1 = tf.nn.max_pool(A1, ksize = [1,8,8,1], strides = [1,8,8,1], padding = 'SAME')
# CONV2D: filters W2, stride 1, padding 'SAME'
Z2 = tf.nn.conv2d(P1,W2, strides = [1,1,1,1], padding = 'SAME')
# RELU
A2 = tf.nn.relu(Z2)
# MAXPOOL: window 4x4, stride 4, padding 'SAME'
P2 = tf.nn.max_pool(A2, ksize = [1,4,4,1], strides = [1,4,4,1], padding = 'SAME')
# FLATTEN
F = tf.contrib.layers.flatten(P2)
# FULLY-CONNECTED without non-linear activation function (not not call softmax).
# 6 neurons in output layer. Hint: one of the arguments should be "activation_fn=None"
Z3 = tf.contrib.layers.fully_connected(F, num_outputs=6,activation_fn=None,weights_regularizer = tf.contrib.layers.l2_regularizer(0.01))
### END CODE HERE ###
return Z3
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(Z3, {X: np.random.randn(2,64,64,3), Y: np.random.randn(2,6)})
print("Z3 = " + str(a))
Explanation: Expected Output:
<table>
<tr>
<td>
W1 =
</td>
<td>
[ 0.00131723 0.14176141 -0.04434952 0.09197326 0.14984085 -0.03514394 <br>
-0.06847463 0.05245192]
</td>
</tr>
<tr>
<td>
W2 =
</td>
<td>
[-0.08566415 0.17750949 0.11974221 0.16773748 -0.0830943 -0.08058 <br>
-0.00577033 -0.14643836 0.24162132 -0.05857408 -0.19055021 0.1345228 <br>
-0.22779644 -0.1601823 -0.16117483 -0.10286498]
</td>
</tr>
</table>
1.2 - Forward propagation
In TensorFlow, there are built-in functions that carry out the convolution steps for you.
tf.nn.conv2d(X,W1, strides = [1,s,s,1], padding = 'SAME'): given an input $X$ and a group of filters $W1$, this function convolves $W1$'s filters on X. The third input ([1,f,f,1]) represents the strides for each dimension of the input (m, n_H_prev, n_W_prev, n_C_prev). You can read the full documentation here
tf.nn.max_pool(A, ksize = [1,f,f,1], strides = [1,s,s,1], padding = 'SAME'): given an input A, this function uses a window of size (f, f) and strides of size (s, s) to carry out max pooling over each window. You can read the full documentation here
tf.nn.relu(Z1): computes the elementwise ReLU of Z1 (which can be any shape). You can read the full documentation here.
tf.contrib.layers.flatten(P): given an input P, this function flattens each example into a 1D vector while maintaining the batch-size. It returns a flattened tensor with shape [batch_size, k]. You can read the full documentation here.
tf.contrib.layers.fully_connected(F, num_outputs): given a the flattened input F, it returns the output computed using a fully connected layer. You can read the full documentation here.
In the last function above (tf.contrib.layers.fully_connected), the fully connected layer automatically initializes weights in the graph and keeps on training them as you train the model. Hence, you did not need to initialize those weights when initializing the parameters.
Exercise:
Implement the forward_propagation function below to build the following model: CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED. You should use the functions above.
In detail, we will use the following parameters for all the steps:
- Conv2D: stride 1, padding is "SAME"
- ReLU
- Max pool: Use an 8 by 8 filter size and an 8 by 8 stride, padding is "SAME"
- Conv2D: stride 1, padding is "SAME"
- ReLU
- Max pool: Use a 4 by 4 filter size and a 4 by 4 stride, padding is "SAME"
- Flatten the previous output.
- FULLYCONNECTED (FC) layer: Apply a fully connected layer without a non-linear activation function. Do not call the softmax here. This will result in 6 neurons in the output layer, which then get passed later to a softmax. In TensorFlow, the softmax and cost function are lumped together into a single function, which you'll call in a different function when computing the cost.
End of explanation
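# Quick sanity check on the architecture above (hand-computed shapes for 64x64x3 inputs):
# with 'SAME' padding, pooling with window f and stride f divides each spatial dimension by f.
# CONV2D + RELU: 64 x 64 x 8 -> MAXPOOL (8,8): 8 x 8 x 8
# CONV2D + RELU: 8 x 8 x 16 -> MAXPOOL (4,4): 2 x 2 x 16
flattened_size = 2 * 2 * 16
print("Each example is flattened to a vector of length " + str(flattened_size))  # 64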
# GRADED FUNCTION: compute_cost
def compute_cost(Z3, Y):
Computes the cost
Arguments:
Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
Y -- "true" labels vector placeholder, same shape as Z3
Returns:
cost - Tensor of the cost function
### START CODE HERE ### (1 line of code)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = Z3, labels = Y))+tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
### END CODE HERE ###
return cost
tf.reset_default_graph()
with tf.Session() as sess:
np.random.seed(1)
X, Y = create_placeholders(64, 64, 3, 6)
parameters = initialize_parameters()
Z3 = forward_propagation(X, parameters)
cost = compute_cost(Z3, Y)
init = tf.global_variables_initializer()
sess.run(init)
a = sess.run(cost, {X: np.random.randn(4,64,64,3), Y: np.random.randn(4,6)})
print("cost = " + str(a))
Explanation: Expected Output:
<table>
<td>
Z3 =
</td>
<td>
[[-0.44670227 -1.57208765 -1.53049231 -2.31013036 -1.29104376 0.46852064] <br>
[-0.17601591 -1.57972014 -1.4737016 -2.61672091 -1.00810647 0.5747785 ]]
</td>
</table>
1.3 - Compute cost
Implement the compute cost function below. You might find these two functions helpful:
tf.nn.softmax_cross_entropy_with_logits(logits = Z3, labels = Y): computes the softmax entropy loss. This function both computes the softmax activation function as well as the resulting loss. You can check the full documentation here.
tf.reduce_mean: computes the mean of elements across dimensions of a tensor. Use this to sum the losses over all the examples to get the overall cost. You can check the full documentation here.
Exercise: Compute the cost below using the function above.
End of explanation
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.009,
num_epochs = 100, minibatch_size = 64, print_cost = True):
Implements a three-layer ConvNet in Tensorflow:
CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> FULLYCONNECTED
Arguments:
X_train -- training set, of shape (None, 64, 64, 3)
Y_train -- test set, of shape (None, n_y = 6)
X_test -- training set, of shape (None, 64, 64, 3)
Y_test -- test set, of shape (None, n_y = 6)
learning_rate -- learning rate of the optimization
num_epochs -- number of epochs of the optimization loop
minibatch_size -- size of a minibatch
print_cost -- True to print the cost every 100 epochs
Returns:
train_accuracy -- real number, accuracy on the train set (X_train)
test_accuracy -- real number, testing accuracy on the test set (X_test)
parameters -- parameters learnt by the model. They can then be used to predict.
ops.reset_default_graph() # to be able to rerun the model without overwriting tf variables
tf.set_random_seed(1) # to keep results consistent (tensorflow seed)
seed = 3 # to keep results consistent (numpy seed)
(m, n_H0, n_W0, n_C0) = X_train.shape
n_y = Y_train.shape[1]
costs = [] # To keep track of the cost
# Create Placeholders of the correct shape
### START CODE HERE ### (1 line)
X, Y = create_placeholders(n_H0, n_W0, n_C0, n_y)
### END CODE HERE ###
# Initialize parameters
### START CODE HERE ### (1 line)
parameters = initialize_parameters()
### END CODE HERE ###
# Forward propagation: Build the forward propagation in the tensorflow graph
### START CODE HERE ### (1 line)
Z3 = forward_propagation(X, parameters)
### END CODE HERE ###
# Cost function: Add cost function to tensorflow graph
### START CODE HERE ### (1 line)
cost = compute_cost(Z3, Y)
### END CODE HERE ###
# Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer that minimizes the cost.
### START CODE HERE ### (1 line)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
### END CODE HERE ###
# Initialize all the variables globally
init = tf.global_variables_initializer()
# Start the session to compute the tensorflow graph
with tf.Session() as sess:
# Run the initialization
sess.run(init)
# Do the training loop
for epoch in range(num_epochs):
minibatch_cost = 0.
num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
seed = seed + 1
minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)
for minibatch in minibatches:
# Select a minibatch
(minibatch_X, minibatch_Y) = minibatch
# IMPORTANT: The line that runs the graph on a minibatch.
# Run the session to execute the optimizer and the cost; the feed_dict should contain a minibatch for (X,Y).
### START CODE HERE ### (1 line)
_ , temp_cost = sess.run([optimizer,cost],feed_dict={X:minibatch_X, Y:minibatch_Y})
### END CODE HERE ###
minibatch_cost += temp_cost / num_minibatches
# Print the cost every 5 epochs
if print_cost == True and epoch % 5 == 0:
print ("Cost after epoch %i: %f" % (epoch, minibatch_cost))
if print_cost == True and epoch % 1 == 0:
costs.append(minibatch_cost)
# plot the cost
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per tens)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
# Calculate the correct predictions
predict_op = tf.argmax(Z3, 1)
correct_prediction = tf.equal(predict_op, tf.argmax(Y, 1))
# Calculate accuracy on the test set
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
print(accuracy)
train_accuracy = accuracy.eval({X: X_train, Y: Y_train})
test_accuracy = accuracy.eval({X: X_test, Y: Y_test})
print("Train Accuracy:", train_accuracy)
print("Test Accuracy:", test_accuracy)
return train_accuracy, test_accuracy, parameters
Explanation: Expected Output:
<table>
<td>
cost =
</td>
<td>
2.91034
</td>
</table>
1.4 Model
Finally you will merge the helper functions you implemented above to build a model. You will train it on the SIGNS dataset.
You have implemented random_mini_batches() in the Optimization programming assignment of course 2. Remember that this function returns a list of mini-batches.
Exercise: Complete the function below.
The model below should:
create placeholders
initialize parameters
forward propagate
compute the cost
create an optimizer
Finally you will create a session and run a for loop for num_epochs, get the mini-batches, and then for each mini-batch you will optimize the function. Hint for initializing the variables
End of explanation
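# For reference, random_mini_batches (imported from cnn_utils above) shuffles the examples
# and slices them into batches. A minimal sketch of the idea (not the course's exact code):
def sketch_random_mini_batches(X, Y, mini_batch_size=64, seed=0):
    np.random.seed(seed)
    m = X.shape[0]
    permutation = list(np.random.permutation(m))
    shuffled_X, shuffled_Y = X[permutation], Y[permutation]
    mini_batches = []
    for k in range(0, m, mini_batch_size):
        mini_batches.append((shuffled_X[k:k + mini_batch_size],
                             shuffled_Y[k:k + mini_batch_size]))
    return mini_batches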
_, _, parameters = model(X_train, Y_train, X_test, Y_test)
Explanation: Run the following cell to train your model for 100 epochs. Check if your cost after epoch 0 and 5 matches our output. If not, stop the cell and go back to your code!
End of explanation
fname = "images/thumbs_up.jpg"
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64))
plt.imshow(my_image)
Explanation: Expected output: although it may not match perfectly, your expected output should be close to ours and your cost value should decrease.
<table>
<tr>
<td>
**Cost after epoch 0 =**
</td>
<td>
1.917929
</td>
</tr>
<tr>
<td>
**Cost after epoch 5 =**
</td>
<td>
1.506757
</td>
</tr>
<tr>
<td>
**Train Accuracy =**
</td>
<td>
0.940741
</td>
</tr>
<tr>
<td>
**Test Accuracy =**
</td>
<td>
0.783333
</td>
</tr>
</table>
Congratulations! You have finished the assignment and built a model that recognizes SIGN language with almost 80% accuracy on the test set. If you wish, feel free to play around with this dataset further. You can actually improve its accuracy by spending more time tuning the hyperparameters, or using regularization (as this model clearly has a high variance).
Once again, here's a thumbs up for your work!
End of explanation
<END_TASK> |
15,578 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Is there a relationship between ads and sales?
Simple Linear Regression is an approach for predicting a quantitative response using a single feature (or "predictor" or "input variable"). Generally speaking, coefficients are estimated using the least squares criterion, which means we find the line (mathematically) which minimizes the sum of squared residuals (or "sum of squared errors").
Step1: Plotting the Least Squares Line
Step2: null hypothesis
Step3: The most common way to evaluate the overall fit of a linear model is by the R-squared value. R-squared is the proportion of variance explained, meaning the proportion of variance in the observed data that is explained by the model, or the reduction in error over the null model.
Step4: Multiple Linear Regression
Step5: Interpretation
Step6: Interpretation
Step7: What if one of our predictors was categorical, rather than numeric?
Step8: How do we interpret the IsLarge coefficient? For a given amount of TV/Radio/Newspaper ad spending, being a large market is associated with an average increase in sales of 57.42 widgets (compared to small market which is called the baseline level).
Now what if we had categorical predictors with more than two categories? Say Area
Step9: Again, we have to represent Area numerically, but we cannot simply encode it as 0=rural, 1=suburban, 2=urban because it would imply an ordered relationship between suburban and urban. Instead, another dummy
Step10: In general, if you have a categorical feature with k levels, you create k-1 dummy variables. Because the other dummies capture all the information about the feature. The "left out" will be the baseline.
Let's include the new dummy variables in the model
Step11: How do we interpret, again? Holding all other variables fixed, being a suburban area is associated with an average decrease in sales of 106.56 widgets (compared to the baseline, which is rural). Being an urban area is associated with an average increase in sales of 268 widgets.
All of the above is limited by the fact that it can only make good predictions if there is a linear relationship between the features and the response | <ASSISTANT_TASK:>
Python Code:
# standard import if you're using "formula notation"
import statsmodels.formula.api as smf
lm = smf.ols(formula='Sales ~ TV', data=data).fit()
lm.params
# let's make a prediction if TV advertising spending were $50,000
# Statsmodels formula interface expects a datarames
X_new = pd.DataFrame({'TV':[50]})
X_new
lm.predict(X_new)
Explanation: Is there a relationship between ads and sales?
Simple Linear Regression is an approach for predicting a quantitative response using a single feature (or "predictor" or "input variable"). Generally speaking, coefficients are estimated using the least squares criterion, which means we find the line (mathematically) which minimizes the sum of squared residuals (or "sum of squared errors").
End of explanation
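# To make the least squares criterion concrete (a sketch using the fitted lm above),
# compute the residual sum of squares that the fitted line minimizes.
rss = (lm.resid ** 2).sum()
print("Residual sum of squares:", rss)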
# create a dataframe with the minimum and maximum values of TV
X_new = pd.DataFrame({'TV':[data.TV.min(), data.TV.max()]})
X_new
preds = lm.predict(X_new)
preds
# first plot the observed data, then plot the least squares line
data.plot(kind='scatter', x='TV', y='Sales')
plt.plot(X_new, preds, c='red', linewidth=2)
# confidence intervals
lm.conf_int()
Explanation: Plotting the Least Squares Line
End of explanation
lm.pvalues
Explanation: null hypothesis: there is no relationship between TV ads and sales
alternative hypothesis: there is a relationship between TV ads and sales.
Typically we reject the null (and thus believe the alternative) if the 95% confidence interval does not include zero. The p-value is the probability that the coefficient is actually zero:
End of explanation
lm.rsquared
Explanation: The most common way to evaluate the overall fit of a linear model is by the R-squared value. R-squared is the proportion of variance explained, meaning the proportion of variance in the observed data that is explained by the model, or the reduction in error over the null model.
End of explanation
# create a fitted model with all three features
lm = smf.ols(formula='Sales ~ TV + Radio + Newspaper', data=data).fit()
lm.params
Explanation: Multiple Linear Regression
End of explanation
lm.summary()
Explanation: Interpretation: For a given amount of Radio and Newspaper ad spending, an increase of $1000 in TV ad spending is associated with an increase in Sales of 45.765 widgets.
End of explanation
# redo above examples with scikit-learn
feature_cols = ['TV', 'Radio', 'Newspaper']
X = data[feature_cols]
y = data.Sales
# usual scikit-learn pattern; import, instantiate, fit
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
lm.fit(X,y)
lm.intercept_
lm.coef_
# pair the feature names with the coefficients
zip(feature_cols, lm.coef_)
lm.predict([100, 25, 25])
list(zip(feature_cols, lm.coef_))
# calculate the R-squared
lm.score(X, y)
Explanation: Interpretation: TV and Radio have significant p-values whereas Newspaper does not. Thus we reject the null hypothesis for TV and Radio, and fail to reject the null hypothesis for Newspaper. TV and Radio are both positively associated with Sales. The model has a slightly higher R-squared (0.897) than the previous model, which means that this model provides a better fit to the data than a model that only includes TV.
Rule of thumb: only keep predictors in the model if they have small p-values; check if R-squared value goes up as you add new predictors. But keep in mind that R-squared is susceptible to overfitting, and thus there is no guarantee that high value is better.
R-squared will always increase as you add more features to the model, even if they are unrelated to the response. Thus, selecting the model with the highest R-squared is not a reliable approach for choosing the best linear model.
There is an alternative to R-squared called adjusted R-squared that penalizes model complexity (to control for overfitting), but it generally under-penalizes complexity.
A better approach to feature selection is cross-validation. Cross-validation can be applied to any model, not only linear. Example with scikit-learn:
End of explanation
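# One possible sketch of that cross-validation check, reusing X and y from above.
# (In older scikit-learn releases cross_val_score lives in sklearn.cross_validation
# rather than sklearn.model_selection.)
from sklearn.model_selection import cross_val_score
scores = cross_val_score(LinearRegression(), X, y, cv=10, scoring='r2')
print("Mean 10-fold cross-validated R-squared:", scores.mean())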
# set a seed for reproducibility
np.random.seed(12345)
nums = np.random.rand(len(data))
mask_large = nums > 0.5 # random categorical data small/large
# initially set Size to small, then change roughly half to be large
data['Size'] = 'small'
data.loc[mask_large,'Size'] = 'large' # apply mask
data.head()
# for scikit-learn, we need to represent all data numerically;
data['IsLarge'] = data.Size.map({'small':0, 'large':1})
data.head()
# redo multiple linear regression and include IsLarge predictor
feature_cols = ['TV', 'Radio', 'Newspaper', 'IsLarge']
X = data[feature_cols]
y = data.Sales
# instantiate, fit
lm = LinearRegression()
lm.fit(X,y)
list(zip(feature_cols, lm.coef_))
Explanation: What if one of our predictors was categorical, rather than numeric?
End of explanation
# for reproducibility
np.random.seed(123456)
# assign roughly one third of observations in each category
nums = np.random.rand(len(data))
mask_suburban = (nums > 0.33) & (nums < 0.66)
mask_urban = (nums > 0.66)
data['Area'] = 'rural'
data.loc[mask_suburban, 'Area'] = 'suburban'
data.loc[mask_urban, 'Area'] = 'urban'
data.head()
Explanation: How do we interpret the IsLarge coefficient? For a given amount of TV/Radio/Newspaper ad spending, being a large market is associated with an average increase in sales of 57.42 widgets (compared to small market which is called the baseline level).
Now what if we had categorical predictors with more than two categories? Say Area: rural, suburban, urban?
End of explanation
# create three dummy variables using get_dummies, then exclude the first dummy column
area_dummies = pd.get_dummies(data.Area, prefix='Area').iloc[:,1:]
area_dummies.head()
data = pd.concat([data, area_dummies], axis=1)
data.head()
Explanation: Again, we have to represent Area numerically, but we cannot simply encode it as 0=rural, 1=suburban, 2=urban because it would imply an ordered relationship between suburban and urban. Instead, we create another set of dummy variables:
End of explanation
feature_cols = feature_cols + ['Area_suburban', 'Area_urban']
feature_cols
X = data[feature_cols]
y = data.Sales
lm = LinearRegression()
lm.fit(X,y)
list(zip(feature_cols, lm.coef_))
Explanation: In general, if you have a categorical feature with k levels, you create k-1 dummy variables, because those dummies capture all the information about the feature; the "left out" level serves as the baseline.
Let's include the new dummy variables in the model
End of explanation
lm.predict([100,46,45, 1, 1, 0])
Explanation: How do we interpret, again? Holding all other variables fixed, being a suburban area is associated with an average decrease in sales of 106.56 widgets (compared to the baseline, which is rural). Being an urban area is associated with an average increase in sales of 268 widgets.
All of the above is limited by the fact that it can only make good predictions if there is a linear relationship between the features and the response.
End of explanation
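# If the relationship is not linear, one common workaround (a sketch reusing X and y
# from above) is to add polynomial and interaction terms and refit the linear model.
from sklearn.preprocessing import PolynomialFeatures
poly = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly.fit_transform(X)
lm_poly = LinearRegression().fit(X_poly, y)
print("R-squared with quadratic features:", lm_poly.score(X_poly, y))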
<END_TASK> |
15,579 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
In this Tutorial we will explore how to work with columnar data in HoloViews. Columnar data has a fixed list of column headings, with values stored in an arbitrarily long list of rows. Spreadsheets, relational databases, CSV files, and many other typical data sources fit naturally into this format. HoloViews defines an extensible system of interfaces to load, manipulate, and visualize this kind of data, as well as allowing conversion of any of the non-columnar data types into columnar data for analysis or data interchange.
By default HoloViews will use one of three storage formats for columnar data
Step1: Simple Dataset
Usually when working with data we have one or more independent variables, taking the form of categories, labels, discrete sample coordinates, or bins. These variables are what we refer to as key dimensions (or kdims for short) in HoloViews. The observer or dependent variables, on the other hand, are referred to as value dimensions (vdims), and are ordinarily measured or calculated given the independent variables. The simplest useful form of a Dataset object is therefore a column 'x' and a column 'y' corresponding to the key dimensions and value dimensions respectively. An obvious visual representation of this data is a Table
Step2: However, this data has many more meaningful visual representations, and therefore the first important concept is that Dataset objects are interchangeable as long as their dimensionality allows it, meaning that you can easily create the different objects from the same data (and cast between the objects once created)
Step3: Each of these three plots uses the same data, but represents a different assumption about the semantic meaning of that data -- the Scatter plot is appropriate if that data consists of independent samples, the Curve plot is appropriate for samples chosen from an underlying smooth function, and the Bars plot is appropriate for independent categories of data. Since all these plots have the same dimensionality, they can easily be converted to each other, but there is normally only one of these representations that is semantically appropriate for the underlying data. For this particular data, the semantically appropriate choice is Curve, since the y values are samples from the continuous function exp.
As a guide to which Elements can be converted to each other, those of the same dimensionality here should be interchangeable, because of the underlying similarity of their columnar representation
Step4: Literals
In addition to the main storage formats, Dataset Elements support construction from three Python literal formats
Step5: For these inputs, the data will need to be copied to a new data structure, having one of the three storage formats above. By default Dataset will try to construct a simple array, falling back to either pandas dataframes (if available) or the dictionary-based format if the data is not purely numeric. Additionally, the interfaces will try to maintain the provided data's type, so numpy arrays and pandas DataFrames will therefore always be parsed by the array and dataframe interfaces first respectively.
Step6: Dataset will attempt to parse the supplied data, falling back to each consecutive interface if the previous could not interpret the data. The default list of fallbacks and simultaneously the list of allowed datatypes is
Step7: To select a particular storage format explicitly, supply one or more allowed datatypes
Step8: Sharing Data
Since the formats with labelled columns do not require any specific order, each Element can effectively become a view into a single set of data. By specifying different key and value dimensions, many Elements can show different values, while sharing the same underlying data source.
Step9: We can quickly confirm that the data is actually shared
Step10: For columnar data, this approach is much more efficient than creating copies of the data for each Element, and allows for some advanced features like linked brushing in the Bokeh backend.
Converting to raw data
Column types make it easy to export the data to the three basic formats
Step11: Pandas DataFrame
Step12: Dataset dictionary
Step13: Creating tabular data from Elements using the .table and .dframe methods
If you have data in some other HoloViews element and would like to use the columnar data features, you can easily tabularize any of the core Element types into a Table Element, using the .table() method. Similarly, the .dframe() method will convert an Element into a pandas DataFrame. These methods are very useful if you want to then transform the data into a different Element type, or to perform different types of analysis.
Tabularizing simple Elements
For a simple example, we can create a Curve of an exponential function and convert it to a Table with the .table method, with the same result as creating the Table directly from the data as done earlier on this Tutorial
Step14: Similarly, we can get a pandas dataframe of the Curve using curve.dframe(). Here we wrap that call as raw HTML to allow automated testing of this notebook, but just calling curve.dframe() would give the same result visually
Step15: Although 2D image-like objects are not inherently well suited to a flat columnar representation, serializing them by converting to tabular data is a good way to reveal the differences between Image and Raster elements. Rasters are a very simple type of element, using array-like integer indexing of rows and columns from their top-left corner as in computer graphics applications. Conversely, Image elements are a higher-level abstraction that provides a general-purpose continuous Cartesian coordinate system, with x and y increasing to the right and upwards as in mathematical applications, and each point interpreted as a sample representing the pixel in which it is located (and thus centered within that pixel). Given the same data, the .table() representation will show how the data is being interpreted (and accessed) differently in the two cases (as explained in detail in the Continuous Coordinates Tutorial)
Step16: Tabularizing space containers
Even deeply nested objects can be deconstructed in this way, serializing them to make it easier to get your raw data out of a collection of specialized Element types. Let's say we want to make multiple observations of a noisy signal. We can collect the data into a HoloMap to visualize it and then call .table() to get a columnar object where we can perform operations or transform it to other Element types. Deconstructing nested data in this way only works if the data is homogenous. In practical terms, the requirement is that your data structure contains Elements (of any types) in these Container types
Step17: Now we can serialize this data just as before, where this time we get a four-column (4D) table. The key dimensions of both the HoloMap and the Images, as well as the z-values of each Image, are all merged into a single table. We can visualize the samples we have collected by converting it to a Scatter3D object.
Step18: Here the z dimension is shown by color, as in the original images, and the other three dimensions determine where the datapoint is shown in 3D. This way of deconstructing will work for any data structure that satisfies the conditions described above, no matter how nested. If we vary the amount of noise while continuing to performing multiple observations, we can create an NdLayout of HoloMaps, one for each level of noise, and animated by the observation number.
Step19: And again, we can easily convert the object to a Table
Step20: Applying operations to the data
Sorting by columns
Once data is in columnar form, it is simple to apply a variety of operations. For instance, Dataset can be sorted by their dimensions using the .sort() method. By default, this method will sort by the key dimensions, but any other dimension(s) can be supplied to specify sorting along any other dimensions
Step21: Working with categorical or grouped data
Data is often grouped in various ways, and the Dataset interface provides various means to easily compare between groups and apply statistical aggregates. We'll start by generating some synthetic data with two groups along the x-axis and 4 groups along the y axis.
Step22: Since there are repeat observations of the same x- and y-values, we have to reduce the data before we display it or else use a datatype that supports plotting distributions in this way. The BoxWhisker type allows doing exactly that
Step23: Aggregating/Reducing dimensions
Most types require the data to be non-duplicated before being displayed. For this purpose, HoloViews makes it easy to aggregate and reduce the data. These two operations are simple inverses of each other--aggregate computes a statistic for each group in the supplied dimensions, while reduce combines all the groups except the supplied dimensions. Supplying only a function and no dimensions will simply aggregate or reduce all available key dimensions.
Step24: (A) aggregates over both the x and y dimension, computing the mean for each x/y group, while (B) reduces the x dimension leaving just the mean for each group along y.
Collapsing multiple Dataset Elements
When multiple observations are broken out into a HoloMap they can easily be combined using the collapse method. Here we create a number of Curves with increasingly larger y-values. By collapsing them with a function and a spreadfn we can compute the mean curve with a confidence interval. We then simply cast the collapsed Curve to a Spread and Curve Element to visualize them.
Step25: Working with complex data
In the last section we only scratched the surface of what the Dataset interface can do. It really comes into its own when working with high-dimensional datasets. As an illustration, we'll load a dataset of some macro-economic indicators for OECD countries from 1964-1990, cached on the HoloViews website.
Step26: We'll also take this opportunity to set default options for all the following plots.
Step27: Loading the data
As we saw above, we can supply a dataframe to any Dataset type. When dealing with so many dimensions it would be cumbersome to supply all the dimensions explicitly, but luckily Dataset can easily infer the dimensions from the dataframe itself. We simply supply the kdims, and it will infer that all other numeric dimensions should be treated as value dimensions (vdims).
Step28: To get an overview of the data we'll quickly sort it and then view the data for one year.
Step29: Most of the examples above focus on converting a Table to simple Element types, but HoloViews also provides powerful container objects to explore high-dimensional data, such as HoloMap, NdOverlay, NdLayout, and GridSpace. HoloMaps work as a useful interchange format from which you can conveniently convert to the other container types using its .overlay(), .layout(), and .grid() methods. This way we can easily create an overlay of GDP Growth curves by year for each country. Here Year is a key dimension and GDP Growth a value dimension. We are then left with the Country dimension, which we can overlay using the .overlay() method.
Step30: Now that we've extracted the gdp_curves, we can apply some operations to them. As in the simpler example above we will collapse the HoloMap of Curves using a number of functions to visualize the distribution of GDP Growth rates over time. First we find the mean curve with np.std as the spreadfn and cast the result to a Spread type, then we compute the min, mean and max curve in the same way and put them all inside an Overlay.
Step31: Many HoloViews Element types support multiple kdims, including HeatMap, Points, Scatter, Scatter3D, and Bars. Bars in particular allows you to lay out your data in groups, categories and stacks. By supplying the index of that dimension as a plotting option you can choose to lay out your data as groups of bars, categories in each group, and stacks. Here we choose to lay out the trade surplus of each country with groups for each year, no categories, and stacked by country. Finally, we choose to color the Bars for each item in the stack.
Step32: This plot contains a lot of data, and so it's probably a good idea to focus on specific aspects of it, telling a simpler story about them. For instance, using the .select method we can then customize the palettes (e.g. to use consistent colors per country across multiple analyses).
Palettes can be customized by selecting only a subrange of the underlying cmap to draw the colors from. The Palette draws samples from the colormap using the supplied sample_fn, which by default just draws linear samples but may be overridden with any function that draws samples in the supplied ranges. By slicing the Set1 colormap we draw colors only from the upper half of the palette and then reverse it.
Step33: Many HoloViews Elements support multiple key and value dimensions. A HeatMap is indexed by two kdims, so we can visualize each of the economic indicators by year and country in a Layout. Layouts are useful for heterogeneous data you want to lay out next to each other.
Before we display the Layout let's apply some styling; we'll suppress the value labels applied to a HeatMap by default and substitute it for a colorbar. Additionally we up the number of xticks that are drawn and rotate them by 90 degrees to avoid overlapping. Flipping the y-axis ensures that the countries appear in alphabetical order. Finally we reduce some of the margins of the Layout and increase the size.
Step34: Another way of combining heterogeneous data dimensions is to map them to a multi-dimensional plot type. Scatter Elements, for example, support multiple vdims, which may be mapped onto the color and size of the drawn points in addition to the y-axis position.
As for the Curves above we supply 'Year' as the sole key dimension and rely on the Table to automatically convert the Country to a map dimension, which we'll overlay. However this time we select both GDP Growth and Unemployment, to be plotted as points. To get a sensible chart, we adjust the scaling_factor for the points to get a reasonable distribution in sizes and apply a categorical Palette so we can distinguish each country.
Step35: In this way we can plot any dimension against any other dimension, very easily allowing us to iterate through different ways of revealing relationships in the dataset.
Step36: This view, for example, immediately highlights the high unemployment rates of the 1980s.
Since all HoloViews Elements are composable, we can generate complex figures just by applying the * operator. We'll simply reuse the GDP curves we generated earlier, combine them with the scatter points (which indicate the unemployment rate by size) and annotate the data with some descriptions of what happened economically in these years.
Step37: Since we didn't map the country to some other container type, we get a widget allowing us to view the plot separately for each country, reducing the forest of curves we encountered before to manageable chunks.
While looking at the plots individually like this allows us to study trends for each country, we may want to lay out a subset of the countries side by side, e.g. for non-interactive publications. We can easily achieve this by selecting the countries we want to view and then applying the .layout method. We'll also want to restore the square aspect ratio so the plots compose nicely.
Step38: Finally, let's combine some plots for each country into a Layout, giving us a quick overview of each economic indicator for each country | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import holoviews as hv
from IPython.display import HTML
hv.notebook_extension()
Explanation: In this Tutorial we will explore how to work with columnar data in HoloViews. Columnar data has a fixed list of column headings, with values stored in an arbitrarily long list of rows. Spreadsheets, relational databases, CSV files, and many other typical data sources fit naturally into this format. HoloViews defines an extensible system of interfaces to load, manipulate, and visualize this kind of data, as well as allowing conversion of any of the non-columnar data types into columnar data for analysis or data interchange.
By default HoloViews will use one of three storage formats for columnar data:
A pure Python dictionary containing each column.
A purely NumPy-based format for numeric data.
Pandas DataFrames
End of explanation
xs = range(10)
ys = np.exp(xs)
table = hv.Table((xs, ys), kdims=['x'], vdims=['y'])
table
Explanation: Simple Dataset
Usually when working with data we have one or more independent variables, taking the form of categories, labels, discrete sample coordinates, or bins. These variables are what we refer to as key dimensions (or kdims for short) in HoloViews. The observer or dependent variables, on the other hand, are referred to as value dimensions (vdims), and are ordinarily measured or calculated given the independent variables. The simplest useful form of a Dataset object is therefore a column 'x' and a column 'y' corresponding to the key dimensions and value dimensions respectively. An obvious visual representation of this data is a Table:
End of explanation
hv.Scatter(table) + hv.Curve(table) + hv.Bars(table)
Explanation: However, this data has many more meaningful visual representations, and therefore the first important concept is that Dataset objects are interchangeable as long as their dimensionality allows it, meaning that you can easily create the different objects from the same data (and cast between the objects once created):
End of explanation
print(repr(hv.Scatter({'x': xs, 'y': ys}) +
hv.Scatter(np.column_stack([xs, ys])) +
hv.Scatter(pd.DataFrame({'x': xs, 'y': ys}))))
Explanation: Each of these three plots uses the same data, but represents a different assumption about the semantic meaning of that data -- the Scatter plot is appropriate if that data consists of independent samples, the Curve plot is appropriate for samples chosen from an underlying smooth function, and the Bars plot is appropriate for independent categories of data. Since all these plots have the same dimensionality, they can easily be converted to each other, but there is normally only one of these representations that is semantically appropriate for the underlying data. For this particular data, the semantically appropriate choice is Curve, since the y values are samples from the continuous function exp.
As a guide to which Elements can be converted to each other, those of the same dimensionality here should be interchangeable, because of the underlying similarity of their columnar representation:
0D: BoxWhisker, Spikes, Distribution*,
1D: Scatter, Curve, ErrorBars, Spread, Bars, BoxWhisker, Regression*
2D: Points, HeatMap, Bars, BoxWhisker, Bivariate*
3D: Scatter3D, Trisurface, VectorField, BoxWhisker, Bars
* - requires Seaborn
This categorization is based only on the kdims, which define the space in which the data has been sampled or defined. An Element can also have any number of value dimensions (vdims), which may be mapped onto various attributes of a plot such as the color, size, and orientation of the plotted items. For a reference of how to use these various Element types, see the Elements Tutorial.
Data types and Constructors
As discussed above, Dataset provide an extensible interface to store and operate on data in different formats. All interfaces support a number of standard constructors.
Storage formats
Dataset types can be constructed using one of three supported formats, (a) a dictionary of columns, (b) an NxD array with N rows and D columns, or (c) pandas dataframes:
End of explanation
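# Extra illustration (a sketch, not part of the original tutorial): because Elements of
# the same dimensionality share the same columnar structure, one type can be cast
# directly to another, e.g. turning a Scatter into a Curve.
hv.Curve(hv.Scatter((xs, ys)))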
print(repr(hv.Scatter(ys) + hv.Scatter((xs, ys)) + hv.Scatter(zip(xs, ys))))
Explanation: Literals
In addition to the main storage formats, Dataset Elements support construction from three Python literal formats: (a) An iterator of y-values, (b) a tuple of columns, and (c) an iterator of row tuples.
End of explanation
df = pd.DataFrame({'x': xs, 'y': ys, 'z': ys*2})
print(type(hv.Scatter(df).data))
Explanation: For these inputs, the data will need to be copied to a new data structure, having one of the three storage formats above. By default Dataset will try to construct a simple array, falling back to either pandas dataframes (if available) or the dictionary-based format if the data is not purely numeric. Additionally, the interfaces will try to maintain the provided data's type, so numpy arrays and pandas DataFrames will therefore always be parsed by the array and dataframe interfaces first respectively.
End of explanation
hv.Dataset.datatype
Explanation: Dataset will attempt to parse the supplied data, falling back to each consecutive interface if the previous could not interpret the data. The default list of fallbacks and simultaneously the list of allowed datatypes is:
End of explanation
print(type(hv.Scatter((xs, ys), datatype=['array']).data))
print(type(hv.Scatter((xs, ys), datatype=['dictionary']).data))
print(type(hv.Scatter((xs, ys), datatype=['dataframe']).data))
Explanation: To select a particular storage format explicitly, supply one or more allowed datatypes:
End of explanation
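# Extra illustration (a sketch, not part of the original tutorial): several formats can
# be supplied in order of preference; here the dictionary interface is tried first.
print(type(hv.Scatter((xs, ys), datatype=['dictionary', 'array']).data))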
overlay = hv.Scatter(df, kdims='x', vdims='y') * hv.Scatter(df, kdims='x', vdims='z')
overlay
Explanation: Sharing Data
Since the formats with labelled columns do not require any specific order, each Element can effectively become a view into a single set of data. By specifying different key and value dimensions, many Elements can show different values, while sharing the same underlying data source.
End of explanation
overlay.Scatter.I.data is overlay.Scatter.II.data
Explanation: We can quickly confirm that the data is actually shared:
End of explanation
table.array()
Explanation: For columnar data, this approach is much more efficient than creating copies of the data for each Element, and allows for some advanced features like linked brushing in the Bokeh backend.
Converting to raw data
Column types make it easy to export the data to the three basic formats: arrays, dataframes, and a dictionary of columns.
Array
End of explanation
HTML(table.dframe().head().to_html())
Explanation: Pandas DataFrame
End of explanation
table.columns()
Explanation: Dataset dictionary
End of explanation
xs = np.arange(10)
curve = hv.Curve(zip(xs, np.exp(xs)))
curve * hv.Scatter(zip(xs, curve)) + curve.table()
Explanation: Creating tabular data from Elements using the .table and .dframe methods
If you have data in some other HoloViews element and would like to use the columnar data features, you can easily tabularize any of the core Element types into a Table Element, using the .table() method. Similarly, the .dframe() method will convert an Element into a pandas DataFrame. These methods are very useful if you want to then transform the data into a different Element type, or to perform different types of analysis.
Tabularizing simple Elements
For a simple example, we can create a Curve of an exponential function and convert it to a Table with the .table method, with the same result as creating the Table directly from the data as done earlier on this Tutorial:
End of explanation
HTML(curve.dframe().to_html())
Explanation: Similarly, we can get a pandas dataframe of the Curve using curve.dframe(). Here we wrap that call as raw HTML to allow automated testing of this notebook, but just calling curve.dframe() would give the same result visually:
End of explanation
%%opts Points (s=200) [size_index=None]
extents = (-1.6,-2.7,2.0,3)
np.random.seed(42)
mat = np.random.rand(3, 3)
img = hv.Image(mat, bounds=extents)
raster = hv.Raster(mat)
img * hv.Points(img) + img.table() + \
raster * hv.Points(raster) + raster.table()
Explanation: Although 2D image-like objects are not inherently well suited to a flat columnar representation, serializing them by converting to tabular data is a good way to reveal the differences between Image and Raster elements. Rasters are a very simple type of element, using array-like integer indexing of rows and columns from their top-left corner as in computer graphics applications. Conversely, Image elements are a higher-level abstraction that provides a general-purpose continuous Cartesian coordinate system, with x and y increasing to the right and upwards as in mathematical applications, and each point interpreted as a sample representing the pixel in which it is located (and thus centered within that pixel). Given the same data, the .table() representation will show how the data is being interpreted (and accessed) differently in the two cases (as explained in detail in the Continuous Coordinates Tutorial):
End of explanation
obs_hmap = hv.HoloMap({i: hv.Image(np.random.randn(10, 10), bounds=(0,0,3,3))
for i in range(3)}, key_dimensions=['Observation'])
obs_hmap
Explanation: Tabularizing space containers
Even deeply nested objects can be deconstructed in this way, serializing them to make it easier to get your raw data out of a collection of specialized Element types. Let's say we want to make multiple observations of a noisy signal. We can collect the data into a HoloMap to visualize it and then call .table() to get a columnar object where we can perform operations or transform it to other Element types. Deconstructing nested data in this way only works if the data is homogenous. In practical terms, the requirement is that your data structure contains Elements (of any types) in these Container types: NdLayout, GridSpace, HoloMap, and NdOverlay, with all dimensions consistent throughout (so that they can all fit into the same set of columns).
Let's now go back to the Image example. We will now collect a number of observations of some noisy data into a HoloMap and display it:
End of explanation
%%opts Layout [fig_size=150] Scatter3D [color_index=3 size_index=None] (cmap='hot' edgecolor='k' s=50)
obs_hmap.table().to.scatter3d() + obs_hmap.table()
Explanation: Now we can serialize this data just as before, where this time we get a four-column (4D) table. The key dimensions of both the HoloMap and the Images, as well as the z-values of each Image, are all merged into a single table. We can visualize the samples we have collected by converting it to a Scatter3D object.
End of explanation
from itertools import product
extents = (0,0,3,3)
error_hmap = hv.HoloMap({(i, j): hv.Image(j*np.random.randn(3, 3), bounds=extents)
for i, j in product(range(3), np.linspace(0, 1, 3))},
key_dimensions=['Observation', 'noise'])
noise_layout = error_hmap.layout('noise')
noise_layout
Explanation: Here the z dimension is shown by color, as in the original images, and the other three dimensions determine where the datapoint is shown in 3D. This way of deconstructing will work for any data structure that satisfies the conditions described above, no matter how nested. If we vary the amount of noise while continuing to performing multiple observations, we can create an NdLayout of HoloMaps, one for each level of noise, and animated by the observation number.
End of explanation
%%opts Table [fig_size=150]
noise_layout.table()
Explanation: And again, we can easily convert the object to a Table:
End of explanation
bars = hv.Bars((['C', 'A', 'B', 'D'], [2, 7, 3, 4]))
bars + bars.sort() + bars.sort(['y'])
Explanation: Applying operations to the data
Sorting by columns
Once data is in columnar form, it is simple to apply a variety of operations. For instance, a Dataset can be sorted by its dimensions using the .sort() method. By default, this method will sort by the key dimensions, but any other dimension(s) can be supplied to specify sorting along any other dimensions:
End of explanation
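# Extra illustration (a sketch, not part of the original tutorial): multiple dimensions
# can be supplied at once, sorting primarily by 'y' and breaking any ties by 'x'.
bars.sort(['y', 'x'])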
n = np.arange(1000)
xs = np.repeat(range(2), 500)
ys = n%4
zs = np.random.randn(1000)
table = hv.Table((xs, ys, zs), kdims=['x', 'y'], vdims=['z'])
table
Explanation: Working with categorical or grouped data
Data is often grouped in various ways, and the Dataset interface provides various means to easily compare between groups and apply statistical aggregates. We'll start by generating some synthetic data with two groups along the x-axis and 4 groups along the y axis.
End of explanation
%%opts BoxWhisker [aspect=2 fig_size=200 bgcolor='w']
hv.BoxWhisker(table)
Explanation: Since there are repeat observations of the same x- and y-values, we have to reduce the data before we display it or else use a datatype that supports plotting distributions in this way. The BoxWhisker type allows doing exactly that:
End of explanation
%%opts Bars [show_legend=False] {+axiswise}
hv.Bars(table).aggregate(function=np.mean) + hv.Bars(table).reduce(x=np.mean)
Explanation: Aggregating/Reducing dimensions
Most types require the data to be non-duplicated before being displayed. For this purpose, HoloViews makes it easy to aggregate and reduce the data. These two operations are simple inverses of each other--aggregate computes a statistic for each group in the supplied dimensions, while reduce combines all the groups except the supplied dimensions. Supplying only a function and no dimensions will simply aggregate or reduce all available key dimensions.
End of explanation
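# Extra illustration (a sketch, not part of the original tutorial): aggregating over a
# single key dimension groups by 'x' only, leaving one mean z value per x group.
hv.Bars(table).aggregate(['x'], function=np.mean)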
hmap = hv.HoloMap({i: hv.Curve(np.arange(10)*i) for i in range(10)})
collapsed = hmap.collapse(function=np.mean, spreadfn=np.std)
hv.Spread(collapsed) * hv.Curve(collapsed) + collapsed.table()
Explanation: (A) aggregates over both the x and y dimension, computing the mean for each x/y group, while (B) reduces the x dimension leaving just the mean for each group along y.
Collapsing multiple Dataset Elements
When multiple observations are broken out into a HoloMap they can easily be combined using the collapse method. Here we create a number of Curves with increasingly larger y-values. By collapsing them with a function and a spreadfn we can compute the mean curve with a confidence interval. We then simply cast the collapsed Curve to a Spread and Curve Element to visualize them.
End of explanation
macro_df = pd.read_csv('http://assets.holoviews.org/macro.csv', '\t')
dimensions = {'unem': 'Unemployment',
'capmob': 'Capital Mobility',
'gdp': 'GDP Growth',
'trade': 'Trade',
'year': 'Year',
'country': 'Country'}
macro_df = macro_df.rename(columns=dimensions)
Explanation: Working with complex data
In the last section we only scratched the surface of what the Dataset interface can do. Where it really comes into its own is when working with high-dimensional datasets. As an illustration, we'll load a dataset of some macro-economic indicators for OECD countries from 1964-1990, cached on the HoloViews website.
End of explanation
%output dpi=100
options = hv.Store.options()
opts = hv.Options('plot', aspect=2, fig_size=250, show_frame=False, show_grid=True, legend_position='right')
options.NdOverlay = opts
options.Overlay = opts
Explanation: We'll also take this opportunity to set default options for all the following plots.
End of explanation
macro = hv.Table(macro_df, kdims=['Year', 'Country'])
Explanation: Loading the data
As we saw above, we can supply a dataframe to any Dataset type. When dealing with so many dimensions it would be cumbersome to supply all the dimensions explicitly, but luckily Dataset can easily infer the dimensions from the dataframe itself. We simply supply the kdims, and it will infer that all other numeric dimensions should be treated as value dimensions (vdims).
End of explanation
%%opts Table [aspect=1.5 fig_size=300]
macro = macro.sort()
macro[1988]
Explanation: To get an overview of the data we'll quickly sort it and then view the data for one year.
End of explanation
%%opts Curve (color=Palette('Set3'))
gdp_curves = macro.to.curve('Year', 'GDP Growth')
gdp_curves.overlay('Country')
Explanation: Most of the examples above focus on converting a Table to simple Element types, but HoloViews also provides powerful container objects to explore high-dimensional data, such as HoloMap, NdOverlay, NdLayout, and GridSpace. HoloMaps work as a useful interchange format from which you can conveniently convert to the other container types using its .overlay(), .layout(), and .grid() methods. This way we can easily create an overlay of GDP Growth curves by year for each country. Here Year is a key dimension and GDP Growth a value dimension. We are then left with the Country dimension, which we can overlay using the .overlay() method.
End of explanation
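# Extra illustration (a sketch, mirroring the .select/.layout pattern used later in this
# tutorial): the same HoloMap can be faceted with .layout() or .grid() instead of overlaid.
gdp_curves.select(Country={'United States', 'Canada'}).layout('Country')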
%%opts Overlay [bgcolor='w' legend_position='top_right'] Curve (color='k' linewidth=1) Spread (facecolor='gray' alpha=0.2)
hv.Spread(gdp_curves.collapse('Country', np.mean, np.std), label='std') *\
hv.Overlay([gdp_curves.collapse('Country', fn).relabel(name)(style=dict(linestyle=ls))
for name, fn, ls in [('max', np.max, '--'), ('mean', np.mean, '-'), ('min', np.min, '--')]])
Explanation: Now that we've extracted the gdp_curves, we can apply some operations to them. As in the simpler example above we will collapse the HoloMap of Curves using a number of functions to visualize the distribution of GDP Growth rates over time. First we find the mean curve with np.std as the spreadfn and cast the result to a Spread type, then we compute the min, mean and max curve in the same way and put them all inside an Overlay.
End of explanation
%opts Bars [bgcolor='w' aspect=3 figure_size=450 show_frame=False]
%%opts Bars [category_index=2 stack_index=0 group_index=1 legend_position='top' legend_cols=7 color_by=['stack']] (color=Palette('Dark2'))
macro.to.bars(['Country', 'Year'], 'Trade', [])
Explanation: Many HoloViews Element types support multiple kdims, including HeatMap, Points, Scatter, Scatter3D, and Bars. Bars in particular allows you to lay out your data in groups, categories and stacks. By supplying the index of that dimension as a plotting option you can choose to lay out your data as groups of bars, categories in each group, and stacks. Here we choose to lay out the trade surplus of each country with groups for each year, no categories, and stacked by country. Finally, we choose to color the Bars for each item in the stack.
End of explanation
%%opts Bars [padding=0.02 color_by=['group']] (alpha=0.6, color=Palette('Set1', reverse=True)[0.:.2])
countries = {'Belgium', 'Netherlands', 'Sweden', 'Norway'}
macro.to.bars(['Country', 'Year'], 'Unemployment').select(Year=(1978, 1985), Country=countries)
Explanation: This plot contains a lot of data, and so it's probably a good idea to focus on specific aspects of it, telling a simpler story about them. For instance, using the .select method we can then customize the palettes (e.g. to use consistent colors per country across multiple analyses).
Palettes can be customized by selecting only a subrange of the underlying cmap to draw the colors from. The Palette draws samples from the colormap using the supplied sample_fn, which by default just draws linear samples but may be overridden with any function that draws samples in the supplied ranges. By slicing the Set1 colormap we draw colors only from the upper half of the palette and then reverse it.
End of explanation
%opts HeatMap [show_values=False xticks=40 xrotation=90 aspect=1.2 invert_yaxis=True colorbar=True]
%opts Layout [figure_size=120 aspect_weight=0.5 hspace=0.8 vspace=0]
hv.Layout([macro.to.heatmap(['Year', 'Country'], value)
for value in macro.data.columns[2:]]).cols(2)
Explanation: Many HoloViews Elements support multiple key and value dimensions. A HeatMap is indexed by two kdims, so we can visualize each of the economic indicators by year and country in a Layout. Layouts are useful for heterogeneous data you want to lay out next to each other.
Before we display the Layout let's apply some styling; we'll suppress the value labels applied to a HeatMap by default and substitute it for a colorbar. Additionally we up the number of xticks that are drawn and rotate them by 90 degrees to avoid overlapping. Flipping the y-axis ensures that the countries appear in alphabetical order. Finally we reduce some of the margins of the Layout and increase the size.
End of explanation
%%opts Scatter [scaling_method='width' scaling_factor=2] (color=Palette('Set3') edgecolors='k')
gdp_unem_scatter = macro.to.scatter('Year', ['GDP Growth', 'Unemployment'])
gdp_unem_scatter.overlay('Country')
Explanation: Another way of combining heterogeneous data dimensions is to map them to a multi-dimensional plot type. Scatter Elements, for example, support multiple vdims, which may be mapped onto the color and size of the drawn points in addition to the y-axis position.
As for the Curves above we supply 'Year' as the sole key dimension and rely on the Table to automatically convert the Country to a map dimension, which we'll overlay. However this time we select both GDP Growth and Unemployment, to be plotted as points. To get a sensible chart, we adjust the scaling_factor for the points to get a reasonable distribution in sizes and apply a categorical Palette so we can distinguish each country.
End of explanation
%%opts NdOverlay [legend_cols=2] Scatter [size_index=1] (color=Palette('Blues'))
macro.to.scatter('GDP Growth', 'Unemployment', ['Year']).overlay()
Explanation: In this way we can plot any dimension against any other dimension, very easily allowing us to iterate through different ways of revealing relationships in the dataset.
End of explanation
%%opts Curve (color='k') Scatter [color_index=2 size_index=2 scaling_factor=1.4] (cmap='Blues' edgecolors='k')
macro_overlay = gdp_curves * gdp_unem_scatter
annotations = hv.Arrow(1973, 8, 'Oil Crisis', 'v') * hv.Arrow(1975, 6, 'Stagflation', 'v') *\
hv.Arrow(1979, 8, 'Energy Crisis', 'v') * hv.Arrow(1981.9, 5, 'Early Eighties\n Recession', 'v')
macro_overlay * annotations
Explanation: This view, for example, immediately highlights the high unemployment rates of the 1980s.
Since all HoloViews Elements are composable, we can generate complex figures just by applying the * operator. We'll simply reuse the GDP curves we generated earlier, combine them with the scatter points (which indicate the unemployment rate by size) and annotate the data with some descriptions of what happened economically in these years.
End of explanation
%opts Overlay [aspect=1]
%%opts NdLayout [figure_size=100] Scatter [color_index=2] (cmap='Reds')
countries = {'United States', 'Canada', 'United Kingdom'}
(gdp_curves * gdp_unem_scatter).select(Country=countries).layout('Country')
Explanation: Since we didn't map the country to some other container type, we get a widget allowing us to view the plot separately for each country, reducing the forest of curves we encountered before to manageable chunks.
While looking at the plots individually like this allows us to study trends for each country, we may want to lay out a subset of the countries side by side, e.g. for non-interactive publications. We can easily achieve this by selecting the countries we want to view and then applying the .layout method. We'll also want to restore the square aspect ratio so the plots compose nicely.
End of explanation
%%opts Layout [fig_size=100] Scatter [color_index=2] (cmap='Reds')
(macro_overlay.relabel('GDP Growth', depth=1) +\
macro.to.curve('Year', 'Unemployment', ['Country'], group='Unemployment',) +\
macro.to.curve('Year', 'Trade', ['Country'], group='Trade') +\
macro.to.scatter('GDP Growth', 'Unemployment', ['Country'])).cols(2)
Explanation: Finally, let's combine some plots for each country into a Layout, giving us a quick overview of each economic indicator for each country:
End of explanation
<END_TASK> |
15,580 | <SYSTEM_TASK:>
Given the following text problem statement, write Python code to implement the functionality described below in problem statement
<END_TASK>
<USER_TASK:>
Problem:
I have a logistic regression model using Pytorch, where my input is high-dimensional and my output must be a scalar - 0, 1 or 2. | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import torch
softmax_output = load_data()
y = torch.argmax(softmax_output, dim=1).view(-1, 1)
<END_TASK> |
15,581 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
Step3: Explore the Data
Play around with view_sentence_range to view different parts of the data.
Step6: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing
Step8: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
Step10: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
Step12: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
Step15: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below
Step18: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concat the GO ID to the begining of each batch.
Step21: Encoding
Implement encoding_layer() to create a Encoder RNN layer
Step24: Decoding - Training
Create a training decoding layer
Step27: Decoding - Inference
Create inference decoder
Step30: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note
Step33: Build the Neural Network
Apply the functions you implemented above to
Step34: Neural Network Training
Hyperparameters
Tune the following parameters
Step36: Build the Graph
Build the graph using the neural network you implemented.
Step40: Batch and pad the source and target sequences
Step43: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forms to see if anyone is having the same problem.
Step45: Save Parameters
Save the batch_size and save_path parameters for inference.
Step47: Checkpoint
Step50: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
Step52: Translate
This will translate translate_sentence from English to French. | <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
Explanation: Language Translation
In this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.
Get the Data
Since translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.
End of explanation
view_sentence_range = (10, 20)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
def to_id(text, vocab, add_eos=False):
ids = []
for sentence in text.split('\n'):
sw_ids = [] # sentence words ids
for word in sentence.split():
sw_ids.append(vocab[word])
if add_eos:
sw_ids.append(vocab['<EOS>'])
ids.append(sw_ids)
return ids
source_id_text = to_id(source_text, source_vocab_to_int)
target_id_text = to_id(target_text, target_vocab_to_int, True)
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
Explanation: Implement Preprocessing Function
Text to Word Ids
As you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of target_text. This will help the neural network predict when the sentence should end.
You can get the <EOS> word id by doing:
python
target_vocab_to_int['<EOS>']
You can get other word ids using source_vocab_to_int and target_vocab_to_int.
End of explanation
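# Illustrative note (not part of the original project): every id sequence produced for
# the target text ends with the <EOS> id, e.g. a French sentence "il a vu ." would map
# to [id('il'), id('a'), id('vu'), id('.'), target_vocab_to_int['<EOS>']].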
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
import problem_unittests as tests
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
from tensorflow.python.layers.core import Dense
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.1'), 'Please use TensorFlow version 1.1 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Explanation: Check the Version of TensorFlow and Access to GPU
This will check to make sure you have the correct version of TensorFlow and access to a GPU
End of explanation
def model_inputs():
Create TF Placeholders for input, targets, learning rate, and lengths of source and target sequences.
:return: Tuple (input, targets, learning rate, keep probability, target sequence length,
max target sequence length, source sequence length)
inputs = tf.placeholder(tf.int32, (None, None), name='input')
targets = tf.placeholder(tf.int32, (None, None), name='targets')
lr = tf.placeholder(tf.float32, name='learning_rate')
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
tsl = tf.placeholder(tf.int32, (None,), name='target_sequence_length')
mtl = tf.reduce_max(tsl, name='max_target_len')
ssl = tf.placeholder(tf.int32, (None,), name='source_sequence_length')
return inputs, targets, lr, keep_prob, tsl, mtl, ssl
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
Explanation: Build the Neural Network
You'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:
- model_inputs
- process_decoder_input
- encoding_layer
- decoding_layer_train
- decoding_layer_infer
- decoding_layer
- seq2seq_model
Input
Implement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
Input text placeholder named "input" using the TF Placeholder name parameter with rank 2.
Targets placeholder with rank 2.
Learning rate placeholder with rank 0.
Keep probability placeholder named "keep_prob" using the TF Placeholder name parameter with rank 0.
Target sequence length placeholder named "target_sequence_length" with rank 1
Max target sequence length tensor named "max_target_len" getting its value from applying tf.reduce_max on the target_sequence_length placeholder. Rank 0.
Source sequence length placeholder named "source_sequence_length" with rank 1
Return the placeholders in the following the tuple (input, targets, learning rate, keep probability, target sequence length, max target sequence length, source sequence length)
End of explanation
def process_decoder_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for encoding
:param target_data: Target Placehoder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
go_id = target_vocab_to_int['<GO>']
# Ref: udacity/deep-learning.git:seq2seq/sequence_to_sequence_implementation.ipynb
target_data = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])
return tf.concat([tf.fill([batch_size, 1], go_id), target_data], 1)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_encoding_input(process_decoder_input)
Explanation: Process Decoder Input
Implement process_decoder_input by removing the last word id from each batch in target_data and concatenating the GO ID to the beginning of each batch.
End of explanation
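# Worked mini-example (illustrative, not part of the original project): with a <GO> id
# of 1, a target batch [[4, 5, 6], [7, 8, 9]] becomes [[1, 4, 5], [1, 7, 8]] --
# the last id of every row is sliced off and the <GO> id is prepended.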
from imp import reload
reload(tests)
# RNN cell
def make_cell(rnn_size, seed=42):
initializer = tf.random_uniform_initializer(-0.1, 0.1, seed=seed)
cell = tf.contrib.rnn.LSTMCell(rnn_size, initializer=initializer)
return cell
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob,
source_sequence_length, source_vocab_size,
encoding_embedding_size):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:param source_sequence_length: a list of the lengths of each sequence in the batch
:param source_vocab_size: vocabulary size of source data
:param encoding_embedding_size: embedding size of source data
:return: tuple (RNN output, RNN state)
inputs = tf.contrib.layers.embed_sequence(rnn_inputs,
source_vocab_size,
encoding_embedding_size)
cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
return tf.nn.dynamic_rnn(cell, inputs,
sequence_length=source_sequence_length,
dtype=tf.float32)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
Explanation: Encoding
Implement encoding_layer() to create a Encoder RNN layer:
* Embed the encoder input using tf.contrib.layers.embed_sequence
* Construct a stacked tf.contrib.rnn.LSTMCell wrapped in a tf.contrib.rnn.DropoutWrapper
* Pass cell and embedded input to tf.nn.dynamic_rnn()
End of explanation
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input,
target_sequence_length, max_summary_length,
output_layer, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_summary_length: The length of the longest sequence in the batch
:param output_layer: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing training logits and sample_id
s2s = tf.contrib.seq2seq
# Apply dropout
drop_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, keep_prob)
# Create the decoder
helper = s2s.TrainingHelper(dec_embed_input, target_sequence_length)
decoder = s2s.BasicDecoder(drop_cell, helper, encoder_state, output_layer)
# Perform dynamic decoding
return s2s.dynamic_decode(decoder, impute_finished=True,
maximum_iterations=max_summary_length)[0]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
Explanation: Decoding - Training
Create a training decoding layer:
* Create a tf.contrib.seq2seq.TrainingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id,
end_of_sequence_id, max_target_sequence_length,
vocab_size, output_layer, batch_size, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param max_target_sequence_length: Maximum length of target sequences
:param vocab_size: Size of decoder/target vocabulary
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_layer: Function to apply the output layer
:param batch_size: Batch size
:param keep_prob: Dropout keep probability
:return: BasicDecoderOutput containing inference logits and sample_id
s2s = tf.contrib.seq2seq
# vocab_size is not in use?
start_tokens = tf.tile(tf.constant([start_of_sequence_id], dtype=tf.int32),
[batch_size], name='start_tokens')
helper = s2s.GreedyEmbeddingHelper(dec_embeddings, start_tokens,
end_of_sequence_id)
drop_cell = tf.contrib.rnn.DropoutWrapper(dec_cell, keep_prob)
decoder = s2s.BasicDecoder(drop_cell, helper, encoder_state, output_layer)
return s2s.dynamic_decode(decoder, impute_finished=True,
maximum_iterations=max_target_sequence_length)[0]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
Explanation: Decoding - Inference
Create inference decoder:
* Create a tf.contrib.seq2seq.GreedyEmbeddingHelper
* Create a tf.contrib.seq2seq.BasicDecoder
* Obtain the decoder outputs from tf.contrib.seq2seq.dynamic_decode
End of explanation
def decoding_layer(dec_input, encoder_state,
target_sequence_length, max_target_sequence_length,
rnn_size,
num_layers, target_vocab_to_int, target_vocab_size,
batch_size, keep_prob, decoding_embedding_size):
Create decoding layer
:param dec_input: Decoder input
:param encoder_state: Encoder state
:param target_sequence_length: The lengths of each sequence in the target batch
:param max_target_sequence_length: Maximum length of target sequences
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param target_vocab_size: Size of target vocabulary
:param batch_size: The size of the batch
:param keep_prob: Dropout keep probability
:param decoding_embedding_size: Decoding embedding size
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
dec_cell = tf.contrib.rnn.MultiRNNCell([make_cell(rnn_size) for _ in range(num_layers)])
out_kernel_initializer = tf.truncated_normal_initializer(mean = 0.0, stddev=0.1)
output_layer = Dense(target_vocab_size, kernel_initializer=out_kernel_initializer)
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
with tf.variable_scope("decoding") as decoding_scope:
train_logits = decoding_layer_train(encoder_state,
dec_cell,
dec_embed_input,
target_sequence_length,
max_target_sequence_length,
output_layer,
keep_prob)
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
start_of_sequence_id = target_vocab_to_int['<GO>']
end_of_sequence_id = target_vocab_to_int['<EOS>']
inference_logits = decoding_layer_infer(encoder_state,
dec_cell,
dec_embeddings,
start_of_sequence_id,
end_of_sequence_id,
max_target_sequence_length,
target_vocab_size,
output_layer,
batch_size,
keep_prob)
return train_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
Explanation: Build the Decoding Layer
Implement decoding_layer() to create a Decoder RNN layer.
Embed the target sequences
Construct the decoder LSTM cell (just like you constructed the encoder cell above)
Create an output layer to map the outputs of the decoder to the elements of our vocabulary
Use your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, target_sequence_length, max_target_sequence_length, output_layer, keep_prob) function to get the training logits.
Use your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, max_target_sequence_length, vocab_size, output_layer, batch_size, keep_prob) function to get the inference logits.
Note: You'll need to use tf.variable_scope to share variables between training and inference.
End of explanation
def seq2seq_model(input_data, target_data, keep_prob, batch_size,
source_sequence_length, target_sequence_length,
max_target_sentence_length,
source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size,
rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param source_sequence_length: Sequence Lengths of source sequences in the batch
:param target_sequence_length: Sequence Lengths of target sequences in the batch
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
:param enc_embedding_size: Decoder embedding size
:param dec_embedding_size: Encoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training BasicDecoderOutput, Inference BasicDecoderOutput)
_, encoder_state = encoding_layer(input_data, rnn_size,
num_layers, keep_prob,
source_sequence_length,
source_vocab_size,
enc_embedding_size)
dec_input = process_decoder_input(target_data,
target_vocab_to_int,
batch_size)
return decoding_layer(dec_input, encoder_state,
target_sequence_length,
max_target_sentence_length,
rnn_size,
num_layers,
target_vocab_to_int,
target_vocab_size,
batch_size,
keep_prob,
dec_embedding_size)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
Explanation: Build the Neural Network
Apply the functions you implemented above to:
Encode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob, source_sequence_length, source_vocab_size, encoding_embedding_size).
Process target data using your process_decoder_input(target_data, target_vocab_to_int, batch_size) function.
Decode the encoded input using your decoding_layer(dec_input, enc_state, target_sequence_length, max_target_sentence_length, rnn_size, num_layers, target_vocab_to_int, target_vocab_size, batch_size, keep_prob, dec_embedding_size) function.
End of explanation
# Number of Epochs
epochs = 4
# Batch Size
batch_size = 512
# RNN Size
rnn_size = 512
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 50
decoding_embedding_size = 50
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.5
display_step = 80
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set num_layers to the number of layers.
Set encoding_embedding_size to the size of the embedding for the encoder.
Set decoding_embedding_size to the size of the embedding for the decoder.
Set learning_rate to the learning rate.
Set keep_probability to the Dropout keep probability
Set display_step to state how many steps between each debug output statement
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_target_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob, target_sequence_length, max_target_sequence_length, source_sequence_length = model_inputs()
#sequence_length = tf.placeholder_with_default(max_target_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(tf.reverse(input_data, [-1]),
targets,
keep_prob,
batch_size,
source_sequence_length,
target_sequence_length,
max_target_sequence_length,
len(source_vocab_to_int),
len(target_vocab_to_int),
encoding_embedding_size,
decoding_embedding_size,
rnn_size,
num_layers,
target_vocab_to_int)
training_logits = tf.identity(train_logits.rnn_output, name='logits')
inference_logits = tf.identity(inference_logits.sample_id, name='predictions')
masks = tf.sequence_mask(target_sequence_length, max_target_sequence_length, dtype=tf.float32, name='masks')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
training_logits,
targets,
masks)
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
def pad_sentence_batch(sentence_batch, pad_int):
Pad sentences with <PAD> so that each sentence of a batch has the same length
max_sentence = max([len(sentence) for sentence in sentence_batch])
return [sentence + [pad_int] * (max_sentence - len(sentence)) for sentence in sentence_batch]
def get_batches(sources, targets, batch_size, source_pad_int, target_pad_int):
Batch targets, sources, and the lengths of their sentences together
for batch_i in range(0, len(sources)//batch_size):
start_i = batch_i * batch_size
# Slice the right amount for the batch
sources_batch = sources[start_i:start_i + batch_size]
targets_batch = targets[start_i:start_i + batch_size]
# Pad
pad_sources_batch = np.array(pad_sentence_batch(sources_batch, source_pad_int))
pad_targets_batch = np.array(pad_sentence_batch(targets_batch, target_pad_int))
# Need the lengths for the _lengths parameters
pad_targets_lengths = []
for target in pad_targets_batch:
pad_targets_lengths.append(len(target))
pad_source_lengths = []
for source in pad_sources_batch:
pad_source_lengths.append(len(source))
yield pad_sources_batch, pad_targets_batch, pad_source_lengths, pad_targets_lengths
Explanation: Batch and pad the source and target sequences
End of explanation
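# Illustrative note (not part of the original project): with pad_int = 0,
# pad_sentence_batch([[1, 2, 3], [4, 5]], 0) returns [[1, 2, 3], [4, 5, 0]].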
DON'T MODIFY ANYTHING IN THIS CELL
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1])],
'constant')
return np.mean(np.equal(target, logits))
# Split data to training and validation sets
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = source_int_text[:batch_size]
valid_target = target_int_text[:batch_size]
(valid_sources_batch, valid_targets_batch, valid_sources_lengths, valid_targets_lengths ) = next(get_batches(valid_source,
valid_target,
batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>']))
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch, sources_lengths, targets_lengths) in enumerate(
get_batches(train_source, train_target, batch_size,
source_vocab_to_int['<PAD>'],
target_vocab_to_int['<PAD>'])):
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
target_sequence_length: targets_lengths,
source_sequence_length: sources_lengths,
keep_prob: keep_probability})
if batch_i % display_step == 0 and batch_i > 0:
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch,
source_sequence_length: sources_lengths,
target_sequence_length: targets_lengths,
keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_sources_batch,
source_sequence_length: valid_sources_lengths,
target_sequence_length: valid_targets_lengths,
keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(valid_targets_batch, batch_valid_logits)
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.4f}, Validation Accuracy: {:>6.4f}, Loss: {:>6.4f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
Explanation: Save Parameters
Save the batch_size and save_path parameters for inference.
End of explanation
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
Explanation: Checkpoint
End of explanation
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
return [
vocab_to_int[x] if x in vocab_to_int else vocab_to_int['<UNK>']
for x in sentence.lower().split()
]
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
Explanation: Sentence to Sequence
To feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.
Convert the sentence to lowercase
Convert words into ids using vocab_to_int
Convert words not in the vocabulary, to the <UNK> word id.
End of explanation
translate_sentence = 'he saw a old yellow truck .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('predictions:0')
target_sequence_length = loaded_graph.get_tensor_by_name('target_sequence_length:0')
source_sequence_length = loaded_graph.get_tensor_by_name('source_sequence_length:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence]*batch_size,
target_sequence_length: [len(translate_sentence)*2]*batch_size,
source_sequence_length: [len(translate_sentence)]*batch_size,
keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in translate_logits]))
print(' French Words: {}'.format(" ".join([target_int_to_vocab[i] for i in translate_logits])))
Explanation: Translate
This will translate translate_sentence from English to French.
End of explanation
<END_TASK> |
15,582 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Maelstrom example
Simon van Heeringen s.vanheeringen@science.ru.nl
These examples will reproduce the results from my poster at the Ninth Annual RECOMB/ISCB Conference on Regulatory & Systems Genomics meeting (RSG with DREAM 2016).
Maelstrom is an ensemble algorithm that was developed with the goal of making differential regulatory motif analysis easy.
It takes as input a set of regions (for instance from ChIP-seq, ATAC-seq or DNaseI experiments) and measurements of two or more experiments (for instance log2-transformed, normalized read counts).
Alternatively, you can provide labels, for instance from clustering.
This means your input data should look something like this
Step1: Get the data!
We'll start by downloading the processed ATAC-seq and RNA-seq in hematopoietic cells, from the data of
Corces et al. 2016 and Lara-Astiaso et al. 2014.
Most of these files can be found on NCBI GEO
Step2: Read human ATAC-seq table
Maelstrom expects a location as the first column, in chrom
Step3: Extract relevant data
We'll select a few specific cell types on the basis of their numerical IDs.
All read counts will be log2-transformed and if there are replicates we'll take the mean.
Step4: Read mouse ATAC-seq table
Step5: Inspect the data
Let's have a look at the data.
From the boxplots below you'll see that the signal varies a lot between experiments.
As a quick normalization we will scale the data.
Step6: Let's do the same for the mouse data.
Step8: Select cell type specific peaks
Here, we select around 10,000 peaks that are relatively specific for at least one cell type.
Step9: Run maelstrom
Running maelstrom will actually take a while,
as the methods are not yet optimized for speed in any way.
The results from my runs are included in this repository.
If you want to run maelstrom, remove the comments and run the cell.
You can run maelstrom from the command line in a very similar manner
Step10: Correlation of motifs with TF gene expression
Step11: Conservation of motifs between human and mouse
Here we will take the top 10% motifs identified by each method in both mouse and human and calculate the overlap (Jaccard index; intersection divided by union).
Python Code:
import os
import urllib
import hashlib
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy.stats import pearsonr,spearmanr
from sklearn.preprocessing import scale
from gimmemotifs.maelstrom import run_maelstrom
%matplotlib inline
# Ignore warnings (just for clarity of tutorial)
import warnings
warnings.filterwarnings('ignore')
from gimmemotifs.config import MotifConfig
cfg = MotifConfig()
motif_dir = cfg.get_motif_dir()
base_dir = motif_dir.replace("motif_databases", "")
ncpus = int(cfg.get_default_params()["ncpus"])
if ncpus <= 2:
config_file = os.path.join(base_dir, "gimmemotifs.cfg")
print "GimmeMotifs currently uses only 2 cores"
print "If possible, I recommend to change the ncpus paramater in {}".format(config_file)
Explanation: Maelstrom example
Simon van Heeringen s.vanheeringen@science.ru.nl
These examples will reproduce the results from my poster at the Ninth Annual RECOMB/ISCB Conference on Regulatory & Systems Genomics meeting (RSG with DREAM 2016).
Maelstrom is an ensemble algorithm that was developed with the goal of making differential regulatory motif analysis easy.
It takes as input a set of regions (for instance from ChIP-seq, ATAC-seq or DNaseI experiments) and measurements of two or more experiments (for instance log2-transformed, normalized read counts).
Alternatively, you can provide labels, for instance from clustering.
This means your input data should look something like this:
loc NK Monocytes T-cells B-cells
chr12:93507547-93507747 3.11846121722 2.52277241968 1.93320358405 0.197177179733
chr7:38236460-38236660 1.0980120443 0.502311376556 0.200701906431 0.190757068752
chr10:21357147-21357347 0.528935300354 -0.0669540487727 -1.04367733597 -0.34370315226
chr6:115521512-115521712 0.406247786632 -0.37661318381 -0.480209252108 -0.667499767004
chr2:97359808-97360008 1.50162092566 0.905358101064 0.719059595262 0.0313480230265
chr16:16684549-16684749 0.233838577502 -0.362675820232 -0.837804056065 -0.746483496024
chrX:138964544-138964744 0.330000689312 -0.29126319574 -0.686082532015 -0.777470189034
chr2:186923973-186924173 0.430448401897 -0.258029531121 -1.16410548462 -0.723913541425
chrX:113834470-113834670 0.560122313347 -0.0366707259833 -0.686082532015 -0.692926848415
Or like this:
loc cluster
chr15:49258903-49259103 NK
chr10:72370313-72370513 NK
chr4:40579259-40579459 Monocytes
chr10:82225678-82225878 T-cells
chr5:134237941-134238141 B-cells
chr5:58858731-58858931 B-cells
chr20:24941608-24941808 NK
chr5:124203116-124203316 NK
chr17:40094476-40094676 Erythroblast
chr17:28659327-28659527 T-cells
Both these formats are tab-separated.
Maelstrom will run a variety of algorithms to predict discriminative motifs, and integrate these results using rank aggregation.
End of explanation
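To make the expected input concrete, the short sketch below builds a tiny table in the first (continuous score) format with pandas and writes it tab-separated; the region coordinates and values are made up purely for illustration, and the real input files are generated later in this notebook:
import pandas as pd

toy = pd.DataFrame(
    {
        "NK": [3.12, 1.10],
        "Monocytes": [2.52, 0.50],
        "T-cells": [1.93, 0.20],
        "B-cells": [0.20, 0.19],
    },
    index=["chr12:93507547-93507747", "chr7:38236460-38236660"],
)
toy.index.name = "loc"
# maelstrom expects a tab-separated file with the region as the first column
toy.to_csv("toy_maelstrom_input.txt", sep="\t")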
fnames = {
"GSE74912_ATACseq_All_Counts.txt.gz":
("https://www.ncbi.nlm.nih.gov/geo/download/?acc=GSE74912&format=file&file=GSE74912_ATACseq_All_Counts.txt.gz",
"8bb507507f17809eb5ea658646263e77"),
"GSE60101_1256271tableS2.txt.gz":
("https://www.ncbi.nlm.nih.gov/geo/download/?acc=GSE60101&format=file&file=GSE60101_1256271tableS2.txt.gz",
"88ea3f015fd5e196f39db737770f291d"),
"GSE74246_RNAseq_All_Counts.txt.gz":
("https://www.ncbi.nlm.nih.gov/geo/download/?acc=GSE74246&format=file&file=GSE74246_RNAseq_All_Counts.txt.gz",
"7cc54350efbd6253192bc29354c4ee33"),
"GSE59992_counts_table.txt.gz":
("https://ndownloader.figshare.com/files/6802764",
"207437745cb2a1fa6f5986587df0e170"),
}
for fname, (url, chksum) in fnames.items():
md5 = ""
if os.path.exists(fname):
md5 = hashlib.md5(open(fname).read()).hexdigest()
# Download file if it doesn't exist, or if is corrupt
if md5 != chksum:
print "Downloading {}".format(fname)
urllib.urlretrieve (url, fname)
Explanation: Get the data!
We'll start by downloading the processed ATAC-seq and RNA-seq in hematopoietic cells, from the data of
Corces et al. 2016 and Lara-Astiaso et al. 2014.
Most of these files can be found on NCBI GEO:
Human ATAC-seq GSE74912
Human RNA-seq GSE74246
Mouse RNA-seq GSE60101
The GEO submission for the mouse ATAC-seq does not contain a counts table, so we'll download one I have created from figshare: https://figshare.com/articles/GSE59992_counts_table_txt_gz/4159920.
End of explanation
# Read the file
df = pd.read_table("GSE74912_ATACseq_All_Counts.txt.gz")
# Create regions of 200 bp and combine chromosome, start and end into one column (chrom:start-end)
middle = ((df["Start"] + df["End"]) / 2).astype(int)
df["Start"] = middle - 100
df["End"] = middle + 100
df["loc"] = df["Chr"] + ":" + df["Start"].astype(str) + "-" + df["End"].astype(str)
df = df.set_index("loc")
Explanation: Read human ATAC-seq table
Maelstrom expects a location as the first column, in chrom:start-end format.
So we'll create this.
Additionally, we'll update the size of the regions to 200bp instead of the 500bp used in this file.
For motif analysis, 200bp should be fine and the analysis will run faster.
End of explanation
# Maps experiment codes used in GSE74912 to cell types
exp_map = {
1: "HSC",
2: "MPP",
3: "LMPP",
4: "CMP",
5: "GMP",
6: "MEP",
7: "Mono",
9: "CD4",
10: "CD8",
11: "Nkcell",
13: "Bcell",
14: "CLP",
15: "Ery",
}
hg_columns = []
df_avg = pd.DataFrame(index=df.index)
for code,exp in exp_map.items():
# Get all columns that match the experiment code
cols = df.columns[df.columns.str.contains(r'({}|-{}[AB])'.format(exp,code))]
cols = [c for c in cols if not c.startswith("SU")]
# Take the mean of the log2-transformed read count
df_avg[exp] = np.log2(df[cols] + 1).mean(1)
hg_columns.append(exp)
df_avg = df_avg[hg_columns]
Explanation: Extract relevant data
We'll select a few specific cell types on the basis of their numerical IDs.
All read counts will be log2-transformed and if there are replicates we'll take the mean.
End of explanation
# Read mouse data
df_mm = pd.read_table("GSE59992_counts_table.txt.gz")
df_mm["loc"] = df_mm["chrom"] + ":" + df_mm["start"].astype(str) + "-" + df_mm["end"].astype(str)
df_mm = df_mm.set_index("loc")
df_mm = df_mm[["Lsk", "CMP", "GMP", "MEP", "Monocytes", "EryA", "CD4", "CD8", "B-cells", "NK"]]
df_mm.columns = ["MPP", "CMP", "GMP", "MEP", "Monocytes", "Erythroblast", "CD4", "CD8", "Bcell", "Nkcell"]
df_mm = np.log2(df_mm + 1)
Explanation: Read mouse ATAC-seq table
End of explanation
sns.boxplot(df_avg);
plt.ylabel("log2 read count");
# Normalize
df_avg = df_avg.apply(scale, 0)
sns.boxplot(df_avg);
plt.ylabel("scaled log2 read count");
Explanation: Inspect the data
Let's have a look at the data.
From the boxplots below you'll see that the signal varies a lot between experiments.
As a quick normalization we will scale the data.
End of explanation
sns.boxplot(df_mm);
plt.ylabel("log2 read count");
# Normalize
df_mm = df_mm.apply(scale, 0)
sns.boxplot(df_mm);
plt.ylabel("scaled log2 read count");
Explanation: Let's do the same for the mouse data.
End of explanation
def select_peaks(df, n=10000):
Select around `n` peaks in total; for each cell type the highest.
# How many peaks per cell type
    npeaks = n // len(df.columns)
selection = pd.DataFrame()
for x in df.columns:
# All other columns
others = [c for c in df.columns if not c == x]
# Difference between column of interest and the highest of the other
high = df[x] - df[others].max(1)
# Select the top `npeaks` peaks
idx = high.sort_values().tail(npeaks).index
selection = pd.concat((selection, df.loc[idx]))
return selection
hg_selection = select_peaks(df_avg)
mm_selection = select_peaks(df_mm)
hg_selection.to_csv("hg19.most_variable.10k.txt", sep="\t")
mm_selection.to_csv("mm10.most_variable.10k.txt", sep="\t")
for name,df in [("human", hg_selection), ("mouse", mm_selection)]:
fig = plt.figure();
d = df.corr()
cm = sns.clustermap(d, cmap="RdBu_r", vmin=-1, vmax=1)
plt.setp(cm.ax_heatmap.yaxis.get_majorticklabels(), rotation=0);
plt.title("Correlation matrix of {} hematopoietic ATAC-seq data".format(name));
plt.savefig("correlation.matrix.{}.svg".format(name))
Explanation: Select cell type specific peaks
Here, we select around 10,000 peaks that are relatively specific for at least one cell type.
End of explanation
#run_maelstrom("hg19.most_variable.10k.txt", "hg19", "hg19.maelstrom.out")
#run_maelstrom("mm10.most_variable.10k.txt", "mm10", "mm10.maelstrom.out")
for species in ["hg19", "mm10"]:
df_result = pd.read_table("{}.most_variable.out/final.out.csv".format(species),
index_col=0, comment="#")
m2f = pd.read_table(os.path.join(motif_dir, "gimme.vertebrate.v3.1.motif2factors.txt"),
index_col=0)
# Truncate for lines with many factors
m2f["factors"] = m2f["factors"].str.replace(r"(\b(LOC|AL|AC|BX|CR)[0-9\.]+|\w+_XENTR),?", "").str[:40]
m2f = m2f.dropna()
f = list(df_result.max(1).sort_values()[-50:].index)
df_vis = df_result.loc[f].join(m2f).set_index("factors").dropna()
# Plot heatmap
cm = sns.clustermap(df_vis, cmap="viridis", figsize=(10,20))
# Rotate labels
plt.setp(cm.ax_heatmap.yaxis.get_majorticklabels(), rotation=0);
plt.title("Maelstrom output {}".format(species))
plt.savefig("heatmap_motif_activity.{}.svg".format(species))
Explanation: Run maelstrom
Running maelstrom will actually take a while,
as the methods are not yet optimized for speed in any way.
The results from my runs are included in this repository.
If you want to run maelstrom, remove the comments and run the cell.
You can run maelstrom from the command line in a very similar manner:
$ gimme maelstrom hg19.most_variable.10k.txt hg19 test.maelstrom.out
The directories hg19.most_variable.out and mm10.most_variable.out contain the results that the cells below read from.
End of explanation
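The same analysis can also be launched with shell commands from within the notebook; the lines below are a sketch (left commented out, like the Python calls above, because the runs take a long time) and assume the output should land in the directories read by the following cells:
# !gimme maelstrom hg19.most_variable.10k.txt hg19 hg19.most_variable.out
# !gimme maelstrom mm10.most_variable.10k.txt mm10 mm10.most_variable.out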
def load_mouse_expression(logtransform=True):
df_result = pd.read_table("mm10.most_variable.out/final.out.csv", index_col=0, comment="#")
df_exp = pd.read_table("GSE60101_1256271tableS2.txt.gz",skiprows=1)
df_exp["NAME"] = df_exp["NAME"].str.upper()
df_exp = df_exp.set_index("NAME")
tr = {
"B":"Bcell",
"CD4":"CD4",
"CD8":"CD8",
"NK":"Nkcell",
"EryA":"Erythroblast",
"CMP":"CMP",
"GMP":"GMP",
"Granulocyte":"Granulocyte",
"MEP":"MEP",
"Mono":"Monocytes",
"MPP":"MPP",
}
# Only use cell types for which we have ATAC-seq
df_exp = df_exp[tr.keys()]
df_exp.columns = [tr[col] for col in df_exp.columns]
df_exp = df_exp[df_result.columns]
if logtransform:
df_exp = np.log2(df_exp + 1)
return df_exp
def load_human_expression(logtransform=True):
df_result = pd.read_table("hg19.most_variable.out/final.out.csv", index_col=0, comment="#")
df_rnaseq = pd.read_table("GSE74246_RNAseq_All_Counts.txt.gz", index_col=0)
df_exp = pd.DataFrame(index=df_rnaseq.index)
for code,exp in exp_map.items():
# Get all columns that match the experiment code
cols = df_rnaseq.columns[df_rnaseq.columns.str.lower().str.contains(r'({}|-{}[AB])'.format(exp.lower(),code))]
cols = [c for c in cols if not c.startswith("SU")]
# Take the mean of the log2-transformed read count
if logtransform:
df_exp[exp] = np.log2(df_rnaseq[cols] + 1).mean(1)
else:
df_exp[exp] = df_rnaseq[cols].mean(1)
df_exp = df_exp[df_result.columns]
return df_exp
def calc_correlation(df_exp, m2f, genome="hg19", min_exp=0):
fnames = [
"final.out.csv",
"activity.classic.count.out.txt",
"activity.lightning.score.out.txt",
"activity.mwu.score.out.txt",
"activity.rf.score.out.txt",
"activity.mara.count.out.txt",
"activity.lasso.score.out.txt"
]
df_map = m2f.copy()
s = df_map['factors'].str.split(',').apply(pd.Series, 1).stack()
s.index = s.index.droplevel(-1) # to line up with df's index
s.name = 'factor' # needs a name to join
del df_map['factors']
df_map = df_map.join(s)
e = df_map.join(df_exp, on="factor")
e = e[np.any(e.iloc[:,1:] >= min_exp,1)]
df_corr = e[["factor"]].copy()
df_map = df_corr.copy()
for fname in fnames:
df_test = pd.read_table("{}.most_variable.out/{}".format(genome, fname), index_col=0, comment="#")
l = len(df_test.columns)
m = df_map.join(df_test)
        df_combined = pd.concat((e.iloc[:, 1:], m.iloc[:, 1:]), 1)
name = fname.replace("activity.", "").replace(".score.out.txt", "").replace(".count.out.txt", "")
name = name.replace(".out.csv", "")
df_corr[name] = df_combined.apply(lambda row: pearsonr(row[:l], row[l:])[0], 1)
df_corr = df_corr.dropna()
return df_corr
# Only take TFs that have an expression level at least equal to this value
# in at least one cell type
EXPRESSION_CUTOFF = 4
df_exp = {}
df_corr = {}
df_exp["mm10"] = load_mouse_expression()
df_exp["hg19"] = load_human_expression()
for species in df_exp.keys():
df_corr[species] = calc_correlation(df_exp[species], m2f, genome=species, min_exp=EXPRESSION_CUTOFF)
df_corr[species] = df_corr[species].drop("factor", 1)
df_corr[species] = df_corr[species].groupby(df_corr[species].index).max()
dfs = {}
# Combine all individual correlation dataframes
for species in df_exp.keys():
dfs[species] = pd.DataFrame((df_corr[species] >= 0.8).sum() )
dfs[species] = dfs[species].reset_index()
dfs[species].columns = ["method", "Number of motifs"]
dfs[species]["species"] = species
df_sum = pd.concat(dfs.values())
sns.factorplot(y="Number of motifs", x="method", hue="species", data=df_sum, kind="bar",
order=["final", "mwu", "classic", "mara", "lasso", "rf", "lightning"])
plt.savefig("expression_correlation.svg")
Explanation: Correlation of motifs with TF gene expression
End of explanation
def get_overlap(hg_result, mm_result, cutoff_hg, cutoff_mm, norm=True):
df_overlap = pd.DataFrame(index=mm_result.columns)
for c1 in hg_result.columns:
a = []
for c2 in mm_result.columns:
x = set(hg_result[c1][hg_result[c1] >= cutoff_hg].index)
y = set(mm_result[c2][mm_result[c2] >= cutoff_mm].index)
if len(x | y) > 0:
if norm:
a.append(len(x & y) / float(len(x | y)))
else:
a.append(len(x & y))
else:
a.append(0)
df_overlap[c1] = a
return df_overlap
fnames = {
"Ensemble":"final.out.csv",
"Hypergeometric":"activity.classic.count.out.txt",
"CD Multiclass":"activity.lightning.score.out.txt",
"Mann-Whitney U":"activity.mwu.score.out.txt",
"Random Forest":"activity.rf.score.out.txt",
"Mara": "activity.mara.count.out.txt",
"Lasso": "activity.lasso.score.out.txt",
}
mm_columns = ["CD4", "CD8", "Nkcell", "Bcell", "Monocytes"]
hs_columns = ["CD4", "CD8", "Nkcell", "Bcell", "Mono"]
for name in fnames:
# Load the data
mm_result = pd.read_table("mm10.most_variable.out/" + fnames[name], index_col=0, comment="#")
hg_result = pd.read_table("hg19.most_variable.out/" + fnames[name], index_col=0, comment="#")
# Get the columns in the same order
mm_result = mm_result[mm_columns]
hg_result = hg_result[hs_columns]
# Get the 10% most "significant" motifs
cutoff_mm = np.percentile(mm_result, 90)
cutoff_hg = np.percentile(hg_result, 90)
for norm in [True, False]:
df_overlap = get_overlap(hg_result, mm_result, cutoff_hg, cutoff_mm, norm=norm)
fig = plt.figure()
if norm:
sns.heatmap(df_overlap, cmap="Reds", square=True, vmin=0, vmax=0.4)
else:
sns.heatmap(df_overlap, cmap="Reds", square=True, vmin=0, vmax=30)
plt.title("{} {}/{} (norm={})".format(name, cutoff_hg,cutoff_mm, norm))
plt.savefig("{}{}.svg".format(name, {True:".norm",False:""}[norm]))
Explanation: Conservation of motifs between human and mouse
Here we will take the top 10% motifs identified by each method in both mouse and human and calculate the overlap (Jaccard index; intersection divided by union).
End of explanation
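As a small worked example of the Jaccard index used above (the motif names are made up):
x = {"motif_CTCF", "motif_SP1", "motif_SPI1"}
y = {"motif_SP1", "motif_SPI1", "motif_GATA1"}
# intersection has 2 motifs, union has 4, so the Jaccard index is 0.5
print(len(x & y) / float(len(x | y)))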
<END_TASK> |
15,583 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
In this exercise we will read in a year and print it as a Roman numeral.
Step1: The idea is to keep reducing the year by the largest possible Roman numeral value; however, we realized we had trouble with the "9"s, so we added 900, 400, 90, 40, 9 and 4.
Step2: We saw that the if chain is very long; we can use a list or, better, "tuples".
Python Code:
# we assume a real year is entered, so no input validation is added
año = int(input("Enter your year: "))
añooriginal = año
Explanation: In this exercise we will read in a year and print it as a Roman numeral.
End of explanation
resultado = ""
while año != 0:
if año >= 1000:
veces = año // 1000
resultado += "M" * veces
año %= 1000
elif año >= 900:
año -= 900
resultado += "CM"
elif año >= 500:
año -= 500
resultado += "D"
elif año >= 400:
año -= 400
resultado += "CD"
elif año >= 100:
veces = año // 100
resultado += "C" * veces
año %= 100
elif año >= 90:
año -= 90
resultado += "XC"
elif año >= 50:
año -= 50
resultado += "L"
elif año >= 40:
año -= 40
resultado += "XL"
    elif año >= 10:
        veces = año // 10
        resultado += "X" * veces
        año %= 10
elif año >= 9:
año -= 9
resultado += "IX"
elif año >= 5:
año -= 5
resultado += "V"
elif año >= 4:
año -= 4
resultado += "IV"
else:
resultado += "I" * año
año = 0
print(resultado)
Explanation: The idea is to keep reducing the year by the largest possible Roman numeral value; however, we realized we had trouble with the "9"s, so we added 900, 400, 90, 40, 9 and 4.
End of explanation
valores = (1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1)
letras = ("M", "CM", "D", "CD", "C", "XC", "L", "XL", "X", "IX", "V", "IV", "I")
# we notice that the tuple cannot be modified, unlike a list; trying to do so raises a TypeError
try:
    valores[3] = 123
except TypeError as error:
    print(error)
año = añooriginal
res = ""
while año != 0:
for i in range(len(valores)):
if valores[i] <= año:
res += letras[i]
año -= valores[i]
break
print(res)
Explanation: We saw that the if chain is very long; we can use a list or, better, "tuples".
End of explanation
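The table-driven version can also be wrapped in a reusable function; this is just one possible refactoring sketch, reusing the valores and letras tuples defined above:
def to_roman(año):
    # walk the value table greedily, appending letters until the year is consumed
    resultado = ""
    for valor, letra in zip(valores, letras):
        while año >= valor:
            resultado += letra
            año -= valor
    return resultado

print(to_roman(añooriginal))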
<END_TASK> |
15,584 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Vertex client library
Step1: Install the latest GA version of google-cloud-storage library as well.
Step2: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
Step3: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note
Step4: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas
Step5: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
Step6: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps
Step7: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
Step8: Vertex constants
Setup up the following constants for Vertex
Step9: AutoML constants
Set constants unique to AutoML datasets and training
Step10: Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for prediction.
Set the variable DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify
Step11: Container (Docker) image
For AutoML batch prediction, the container image for the serving binary is pre-determined by the Vertex prediction service. More specifically, the service will pick the appropriate container for the model depending on the hardware accelerator you selected.
Machine Type
Next, set the machine type to use for prediction.
Set the variable DEPLOY_COMPUTE to configure the compute resources for the VM you will use for prediction.
machine type
n1-standard
Step12: Tutorial
Now you are ready to start creating your own AutoML tabular regression model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Dataset Service for Dataset resources.
Model Service for Model resources.
Pipeline Service for training.
Endpoint Service for deployment.
Prediction Service for serving.
Step13: Dataset
Now that your clients are ready, your first step is to create a Dataset resource instance. This step differs from Vision, Video and Language. For those products, after the Dataset resource is created, one then separately imports the data, using the import_data method.
For tabular, importing of the data is deferred until the training pipeline starts training the model. What do we do different? Well, first you won't be calling the import_data method. Instead, when you create the dataset instance you specify the Cloud Storage location of the CSV file or BigQuery location of the data table, which contains your tabular data as part of the Dataset resource's metadata.
Cloud Storage
metadata = {"input_config"
Step14: Quick peek at your data
You will use a version of the NOAA historical weather data dataset that is stored in a public BigQuery table.
Start by doing a quick peek at the data: print the first few rows of the BigQuery table with the bq command-line tool.
Step15: Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
Create Dataset resource instance
Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following
Step16: Now save the unique dataset identifier for the Dataset resource instance you created.
Step17: Train the model
Now train an AutoML tabular regression model using your Vertex Dataset resource. To train the model, do the following steps
Step18: Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.
The minimal fields you need to specify are
Step19: Now save the unique identifier of the training pipeline you created.
Step20: Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the job client service's get_training_pipeline method, with the following parameter
Step21: Deployment
Training the above model may take upwards of 30 minutes.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.
Step22: Model information
Now that your model is trained, you can get some information on your model.
Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
List evaluations for all slices
Use this helper function list_model_evaluations, which takes the following parameter
Step23: Deploy the Model resource
Now deploy the trained Vertex Model resource you created with AutoML. This requires two steps
Step24: Now get the unique identifier for the Endpoint resource you created.
Step25: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests
Step26: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters
Step27: Make an online prediction request
Now do an online prediction to your deployed model.
Make test item
You will use synthetic data as a test data item. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
Step28: Make a prediction
Now you have a test item. Use this helper function predict_item, which takes the following parameters
Step29: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resource. Use this helper function undeploy_model, which takes the following parameters
Step30: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial | <ASSISTANT_TASK:>
Python Code:
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
Explanation: Vertex client library: AutoML tabular regression model for online prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_tabular_regression_online_bq.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_tabular_regression_online_bq.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex client library for Python to create tabular regression models and do online prediction using Google Cloud's AutoML.
Dataset
The dataset used for this tutorial is the GSOD dataset from BigQuery public datasets. The version of the dataset you will use contains only the fields year, month and day, which are used to predict the value of mean daily temperature (mean_temp).
Objective
In this tutorial, you create an AutoML tabular regression model and deploy for online prediction from a Python script using the Vertex client library. You can alternatively create and deploy models using the gcloud command-line tool or online using the Google Cloud Console.
The steps performed include:
Create a Vertex Dataset resource.
Train the model.
View the model evaluation.
Deploy the Model resource to a serving Endpoint resource.
Make a prediction.
Undeploy the Model.
Costs
This tutorial uses billable components of Google Cloud (GCP):
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Installation
Install the latest version of Vertex client library.
End of explanation
! pip3 install -U google-cloud-storage $USER_FLAG
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
Explanation: Restart the kernel
Once you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.
End of explanation
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
Explanation: Before you begin
GPU runtime
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the Vertex APIs and Compute Engine APIs.
The Google Cloud SDK is already installed in Google Cloud Notebook.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
End of explanation
REGION = "us-central1" # @param {type: "string"}
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation
End of explanation
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
End of explanation
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
Import Vertex client library
Import the Vertex client library into our Python environment.
End of explanation
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
Explanation: Vertex constants
Setup up the following constants for Vertex:
API_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.
PARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
End of explanation
# Tabular Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/tables_1.0.0.yaml"
# Tabular Labeling type
LABEL_SCHEMA = (
"gs://google-cloud-aiplatform/schema/dataset/ioformat/table_io_format_1.0.0.yaml"
)
# Tabular Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_tables_1.0.0.yaml"
Explanation: AutoML constants
Set constants unique to AutoML datasets and training:
Dataset Schemas: Tells the Dataset resource service which type of dataset it is.
Data Labeling (Annotations) Schemas: Tells the Dataset resource service how the data is labeled (annotated).
Dataset Training Schemas: Tells the Pipeline resource service the task (e.g., classification) to train the model for.
End of explanation
if os.getenv("IS_TESTING_DEPOLY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPOLY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
Explanation: Hardware Accelerators
Set the hardware accelerators (e.g., GPU), if any, for prediction.
Set the variable DEPLOY_GPU/DEPLOY_NGPU to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Tesla K80 GPUs allocated to each VM, you would specify:
(aip.AcceleratorType.NVIDIA_TESLA_K80, 4)
For GPU, available accelerators include:
- aip.AcceleratorType.NVIDIA_TESLA_K80
- aip.AcceleratorType.NVIDIA_TESLA_P100
- aip.AcceleratorType.NVIDIA_TESLA_P4
- aip.AcceleratorType.NVIDIA_TESLA_T4
- aip.AcceleratorType.NVIDIA_TESLA_V100
Otherwise specify (None, None) to use a container image to run on a CPU.
End of explanation
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
Explanation: Container (Docker) image
For AutoML batch prediction, the container image for the serving binary is pre-determined by the Vertex prediction service. More specifically, the service will pick the appropriate container for the model depending on the hardware accelerator you selected.
Machine Type
Next, set the machine type to use for prediction.
Set the variable DEPLOY_COMPUTE to configure the compute resources for the VM you will use for prediction.
machine type
n1-standard: 3.75GB of memory per vCPU.
n1-highmem: 6.5GB of memory per vCPU
n1-highcpu: 0.9 GB of memory per vCPU
vCPUs: number of [2, 4, 8, 16, 32, 64, 96 ]
Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs
End of explanation
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
Explanation: Tutorial
Now you are ready to start creating your own AutoML tabular regression model.
Set up clients
The Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.
You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.
Dataset Service for Dataset resources.
Model Service for Model resources.
Pipeline Service for training.
Endpoint Service for deployment.
Prediction Service for serving.
End of explanation
IMPORT_FILE = "bq://bigquery-public-data.samples.gsod"
Explanation: Dataset
Now that your clients are ready, your first step is to create a Dataset resource instance. This step differs from Vision, Video and Language. For those products, after the Dataset resource is created, one then separately imports the data, using the import_data method.
For tabular, importing of the data is deferred until the training pipeline starts training the model. What do we do different? Well, first you won't be calling the import_data method. Instead, when you create the dataset instance you specify the Cloud Storage location of the CSV file or BigQuery location of the data table, which contains your tabular data as part of the Dataset resource's metadata.
Cloud Storage
metadata = {"input_config": {"gcs_source": {"uri": [gcs_uri]}}}
The format for a Cloud Storage path is:
gs://[bucket_name]/[folder(s)/[file]
BigQuery
metadata = {"input_config": {"bigquery_source": {"uri": [gcs_uri]}}}
The format for a BigQuery path is:
bq://[collection].[dataset].[table]
Note that the uri field is a list, whereby you can input multiple CSV files or BigQuery tables when your data is split across files.
Data preparation
The Vertex Dataset resource for tabular has a couple of requirements for your tabular data.
Must be in a CSV file or a BigQuery table.
Location of BigQuery training data.
Now set the variable IMPORT_FILE to the location of the data table in BigQuery.
End of explanation
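To make the two metadata variants described above concrete, here is a small sketch of how each would be built as a plain Python dict before being parsed into the Dataset metadata; the Cloud Storage path is a made-up placeholder, and this tutorial uses the BigQuery form with IMPORT_FILE:
# Cloud Storage variant (hypothetical CSV path, shown only for illustration)
gcs_metadata = {"input_config": {"gcs_source": {"uri": ["gs://your-bucket/gsod.csv"]}}}

# BigQuery variant, as used in this tutorial
bq_metadata = {"input_config": {"bigquery_source": {"uri": [IMPORT_FILE]}}}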
!bq head -n 10 $IMPORT_FILE
Explanation: Quick peek at your data
You will use a version of the NOAA historical weather data dataset that is stored in a public BigQuery table.
Start by doing a quick peek at the data: print the first few rows of the BigQuery table with the bq command-line tool.
End of explanation
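If you also want to know how many rows the table has (the bq head command above only previews rows), one option is the BigQuery Python client; this is an optional sketch and assumes the google-cloud-bigquery library is installed and the notebook is authenticated:
from google.cloud import bigquery

bq_client = bigquery.Client(project=PROJECT_ID)
gsod_table = bq_client.get_table("bigquery-public-data.samples.gsod")
print("Number of rows:", gsod_table.num_rows)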
TIMEOUT = 90
def create_dataset(name, schema, src_uri=None, labels=None, timeout=TIMEOUT):
start_time = time.time()
try:
if src_uri.startswith("gs://"):
metadata = {"input_config": {"gcs_source": {"uri": [src_uri]}}}
elif src_uri.startswith("bq://"):
metadata = {"input_config": {"bigquery_source": {"uri": [src_uri]}}}
dataset = aip.Dataset(
display_name=name,
metadata_schema_uri=schema,
labels=labels,
metadata=json_format.ParseDict(metadata, Value()),
)
operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
print("Long running operation:", operation.operation.name)
result = operation.result(timeout=TIMEOUT)
print("time:", time.time() - start_time)
print("response")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" metadata_schema_uri:", result.metadata_schema_uri)
print(" metadata:", dict(result.metadata))
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
print(" etag:", result.etag)
print(" labels:", dict(result.labels))
return result
except Exception as e:
print("exception:", e)
return None
result = create_dataset("gsod-" + TIMESTAMP, DATA_SCHEMA, src_uri=IMPORT_FILE)
Explanation: Dataset
Now that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.
Create Dataset resource instance
Use the helper function create_dataset to create the instance of a Dataset resource. This function does the following:
Uses the dataset client service.
Creates a Vertex Dataset resource (aip.Dataset), with the following parameters:
display_name: The human-readable name you choose to give it.
metadata_schema_uri: The schema for the dataset type.
metadata: The Cloud Storage or BigQuery location of the tabular data.
Calls the client dataset service method create_dataset, with the following parameters:
parent: The Vertex location root path for your Dataset, Model and Endpoint resources.
dataset: The Vertex dataset object instance you created.
The method returns an operation object.
An operation object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.
You can use the operation object to get status on the operation (e.g., create Dataset resource) or to cancel the operation, by invoking an operation method:
| Method | Description |
| ----------- | ----------- |
| result() | Waits for the operation to complete and returns a result object in JSON format. |
| running() | Returns True/False on whether the operation is still running. |
| done() | Returns True/False on whether the operation is completed. |
| canceled() | Returns True/False on whether the operation was canceled. |
| cancel() | Cancels the operation (this may take up to 30 seconds). |
End of explanation
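As a small illustration of the operation methods listed in the table above, you could also poll an operation yourself instead of blocking on result(); this sketch takes the object returned by a call such as clients["dataset"].create_dataset(...):
def wait_for_operation(operation, poll_seconds=10):
    # Poll the long-running operation until it reports completion, then return its result.
    while not operation.done():
        print("Operation still running...")
        time.sleep(poll_seconds)
    return operation.result()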
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
Explanation: Now save the unique dataset identifier for the Dataset resource instance you created.
End of explanation
def create_pipeline(pipeline_name, model_name, dataset, schema, task):
dataset_id = dataset.split("/")[-1]
input_config = {
"dataset_id": dataset_id,
"fraction_split": {
"training_fraction": 0.8,
"validation_fraction": 0.1,
"test_fraction": 0.1,
},
}
training_pipeline = {
"display_name": pipeline_name,
"training_task_definition": schema,
"training_task_inputs": task,
"input_data_config": input_config,
"model_to_upload": {"display_name": model_name},
}
try:
pipeline = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
print(pipeline)
except Exception as e:
print("exception:", e)
return None
return pipeline
Explanation: Train the model
Now train an AutoML tabular regression model using your Vertex Dataset resource. To train the model, do the following steps:
Create a Vertex training pipeline for the Dataset resource.
Execute the pipeline to start the training.
Create a training pipeline
You may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:
Being reusable for subsequent training jobs.
Can be containerized and ran as a batch job.
Can be distributed.
All the steps are associated with the same pipeline job for tracking progress.
Use this helper function create_pipeline, which takes the following parameters:
pipeline_name: A human readable name for the pipeline job.
model_name: A human readable name for the model.
dataset: The Vertex fully qualified dataset identifier.
schema: The dataset labeling (annotation) training schema.
task: A dictionary describing the requirements for the training job.
The helper function calls the Pipeline client service's method create_pipeline, which takes the following parameters:
parent: The Vertex location root path for your Dataset, Model and Endpoint resources.
training_pipeline: the full specification for the pipeline training job.
Let's look now deeper into the minimal requirements for constructing a training_pipeline specification:
display_name: A human readable name for the pipeline job.
training_task_definition: The dataset labeling (annotation) training schema.
training_task_inputs: A dictionary describing the requirements for the training job.
model_to_upload: A human readable name for the model.
input_data_config: The dataset specification.
dataset_id: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier.
fraction_split: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.
End of explanation
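For example, if you wanted a different split than the 80/10/10 hard-coded in the helper above, the input_data_config could be built like this (a sketch; the fractions are arbitrary but must sum to 1.0):
custom_input_config = {
    "dataset_id": dataset_short_id,
    "fraction_split": {
        "training_fraction": 0.7,
        "validation_fraction": 0.2,
        "test_fraction": 0.1,
    },
}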
TRANSFORMATIONS = [
{"auto": {"column_name": "year"}},
{"auto": {"column_name": "month"}},
{"auto": {"column_name": "day"}},
]
label_column = "mean_temp"
PIPE_NAME = "gsod_pipe-" + TIMESTAMP
MODEL_NAME = "gsod_model-" + TIMESTAMP
task = Value(
struct_value=Struct(
fields={
"target_column": Value(string_value=label_column),
"prediction_type": Value(string_value="regression"),
"train_budget_milli_node_hours": Value(number_value=1000),
"disable_early_stopping": Value(bool_value=False),
"transformations": json_format.ParseDict(TRANSFORMATIONS, Value()),
}
)
)
response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
Explanation: Construct the task requirements
Next, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.
The minimal fields you need to specify are:
prediction_type: Whether we are doing "classification" or "regression".
target_column: The CSV heading column name for the column we want to predict (i.e., the label).
train_budget_milli_node_hours: The maximum time to budget (billed) for training the model, where 1000 = 1 hour.
disable_early_stopping: Whether True/False to let AutoML use its judgement to stop training early or train for the entire budget.
transformations: Specifies the feature engineering for each feature column.
For transformations, the list must have an entry for each column. The outer key field indicates the type of feature engineering for the corresponding column. In this tutorial, you set it to "auto" to tell AutoML to automatically determine it.
Finally, create the pipeline by calling the helper function create_pipeline, which returns an instance of a training pipeline object.
End of explanation
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id)
Explanation: Now save the unique identifier of the training pipeline you created.
End of explanation
def get_training_pipeline(name, silent=False):
response = clients["pipeline"].get_training_pipeline(name=name)
if silent:
return response
print("pipeline")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" state:", response.state)
print(" training_task_definition:", response.training_task_definition)
print(" training_task_inputs:", dict(response.training_task_inputs))
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", dict(response.labels))
return response
response = get_training_pipeline(pipeline_id)
Explanation: Get information on a training pipeline
Now get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the job client service's get_training_pipeline method, with the following parameter:
name: The Vertex fully qualified pipeline identifier.
When the model is done training, the pipeline state will be PIPELINE_STATE_SUCCEEDED.
End of explanation
while True:
response = get_training_pipeline(pipeline_id, True)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_id = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
raise Exception("Training Job Failed")
else:
model_to_deploy = response.model_to_upload
model_to_deploy_id = model_to_deploy.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(60)
print("model to deploy:", model_to_deploy_id)
Explanation: Deployment
Training the above model may take upwards of 30 minutes.
Once your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.
End of explanation
def list_model_evaluations(name):
response = clients["model"].list_model_evaluations(parent=name)
for evaluation in response:
print("model_evaluation")
print(" name:", evaluation.name)
print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
metrics = json_format.MessageToDict(evaluation._pb.metrics)
for metric in metrics.keys():
print(metric)
print("rootMeanSquaredError", metrics["rootMeanSquaredError"])
print("meanAbsoluteError", metrics["meanAbsoluteError"])
return evaluation.name
last_evaluation = list_model_evaluations(model_to_deploy_id)
Explanation: Model information
Now that your model is trained, you can get some information on your model.
Evaluate the Model resource
Now find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.
List evaluations for all slices
Use this helper function list_model_evaluations, which takes the following parameter:
name: The Vertex fully qualified model identifier for the Model resource.
This helper function uses the model client service's list_model_evaluations method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric.
For each evaluation (you probably only have one), print all the key names for each metric in the evaluation, and for a small set (rootMeanSquaredError and meanAbsoluteError) print the result.
End of explanation
ENDPOINT_NAME = "gsod_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
endpoint = {"display_name": display_name}
response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
print("Long running operation:", response.operation.name)
result = response.result(timeout=300)
print("result")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" description:", result.description)
print(" labels:", result.labels)
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
return result
result = create_endpoint(ENDPOINT_NAME)
Explanation: Deploy the Model resource
Now deploy the trained Vertex Model resource you created with AutoML. This requires two steps:
Create an Endpoint resource for deploying the Model resource to.
Deploy the Model resource to the Endpoint resource.
Create an Endpoint resource
Use this helper function create_endpoint to create an endpoint to deploy the model to for serving predictions, with the following parameter:
display_name: A human readable name for the Endpoint resource.
The helper function uses the endpoint client service's create_endpoint method, which takes the following parameter:
display_name: A human readable name for the Endpoint resource.
Creating an Endpoint resource returns a long running operation, since it may take a few moments to provision the Endpoint resource for serving. You call response.result(), which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Vertex fully qualified identifier for the Endpoint resource: response.name.
End of explanation
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
Explanation: Now get the unique identifier for the Endpoint resource you created.
End of explanation
MIN_NODES = 1
MAX_NODES = 1
Explanation: Compute instance scaling
You have several choices on scaling the compute instances for handling your online prediction requests:
Single Instance: The online prediction requests are processed on a single compute instance.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.
Manual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified.
Set the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them.
Auto Scaling: The online prediction requests are split across a scaleable number of compute instances.
Set the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (MAX_NODES) number of compute instances to provision, depending on load conditions.
The minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.
End of explanation
DEPLOYED_NAME = "gsod_deployed-" + TIMESTAMP
def deploy_model(
model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
deployed_model = {
"model": model,
"display_name": deployed_model_display_name,
"dedicated_resources": {
"min_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
"machine_spec": machine_spec,
},
"disable_container_logging": False,
}
response = clients["endpoint"].deploy_model(
endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
)
print("Long running operation:", response.operation.name)
result = response.result()
print("result")
deployed_model = result.deployed_model
print(" deployed_model")
print(" id:", deployed_model.id)
print(" model:", deployed_model.model)
print(" display_name:", deployed_model.display_name)
print(" create_time:", deployed_model.create_time)
return deployed_model.id
deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
Explanation: Deploy Model resource to the Endpoint resource
Use this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters:
model: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.
deploy_model_display_name: A human readable name for the deployed model.
endpoint: The Vertex fully qualified endpoint identifier to deploy the model to.
The helper function calls the Endpoint client service's method deploy_model, which takes the following parameters:
endpoint: The Vertex fully qualified Endpoint resource identifier to deploy the Model resource to.
deployed_model: The requirements specification for deploying the model.
traffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.
If only one model, then specify as { "0": 100 }, where "0" refers to this model being uploaded and 100 means 100% of the traffic.
If there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { "0": percent, model_id: percent, ... }, where model_id is the model id of an existing model to the deployed endpoint. The percents must add up to 100.
Let's now dive deeper into the deployed_model parameter. This parameter is specified as a Python dictionary with the minimum required fields:
model: The Vertex fully qualified model identifier of the (upload) model to deploy.
display_name: A human readable name for the deployed model.
disable_container_logging: This disables logging of container events, such as execution failures (default is container logging is enabled). Container logging is typically enabled when debugging the deployment and then disabled when deployed for production.
dedicated_resources: This refers to how many compute instances (replicas) that are scaled for serving prediction requests.
machine_spec: The compute instance to provision. Use the variable you set earlier DEPLOY_GPU != None to use a GPU; otherwise only a CPU is allocated.
min_replica_count: The number of compute instances to initially provision, which you set earlier as the variable MIN_NODES.
max_replica_count: The maximum number of compute instances to scale to, which you set earlier as the variable MAX_NODES.
Traffic Split
Let's now dive deeper into the traffic_split parameter. This parameter is specified as a Python dictionary. This might at first be a tad confusing. Let me explain: you can deploy more than one instance of your model to an endpoint, and then set how much of the traffic (as a percent) goes to each instance.
Why would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got a better model evaluation on v2, but you don't know for certain that it is really better until you deploy it to production. So in the case of traffic split, you might want to deploy v2 to the same endpoint as v1, but have it only get, say, 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.
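As a small illustration (the second key below is a hypothetical deployed-model id, not one created in this notebook), giving the new model 10% of the traffic would look like:
python
# "0" refers to the model being deployed in this request; the other key is a
# hypothetical id of a model already deployed on the endpoint.
traffic_split = {"0": 10, "3141592653589": 90}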
Response
The method returns a long running operation response. We will wait synchronously for the operation to complete by calling response.result(), which will block until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
End of explanation
INSTANCE = {"year": "1932", "month": "11", "day": "6"}
Explanation: Make a online prediction request
Now do an online prediction with your deployed model.
Make test item
You will use synthetic data as a test data item. Don't be concerned that we are using synthetic data -- we just want to demonstrate how to make a prediction.
End of explanation
def predict_item(data, endpoint, parameters_dict):
parameters = json_format.ParseDict(parameters_dict, Value())
# The format of each instance should conform to the deployed model's prediction input schema.
instances_list = [data]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
response = clients["prediction"].predict(
endpoint=endpoint, instances=instances, parameters=parameters
)
print("response")
print(" deployed_model_id:", response.deployed_model_id)
predictions = response.predictions
print("predictions")
for prediction in predictions:
print(" prediction:", dict(prediction))
predict_item(INSTANCE, endpoint_id, None)
Explanation: Make a prediction
Now you have a test item. Use this helper function predict_item, which takes the following parameters:
filename: The Cloud Storage path to the test item.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
parameters_dict: Additional filtering parameters for serving prediction results.
This function calls the prediction client service's predict method with the following parameters:
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.
instances: A list of instances (data items) to predict.
parameters: Additional filtering parameters for serving prediction results. Note, tabular models do not support additional parameters.
Request
The format of each instance is, where values must be specified as a string:
{ 'feature_1': 'value_1', 'feature_2': 'value_2', ... }
Since the predict() method can take multiple items (instances), you send your single test item as a list of one test item. As a final step, you package the instances list into Google's protobuf format -- which is what we pass to the predict() method.
Response
The response object returns a list, where each element in the list corresponds to one instance in the request. You will see in the output for each prediction -- in this case there is just one:
value: The predicted value.
End of explanation
def undeploy_model(deployed_model_id, endpoint):
response = clients["endpoint"].undeploy_model(
endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
)
print(response)
undeploy_model(deployed_model_id, endpoint_id)
Explanation: Undeploy the Model resource
Now undeploy your Model resource from the serving Endpoint resource. Use this helper function undeploy_model, which takes the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed to.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model is deployed to.
This function calls the endpoint client service's method undeploy_model, with the following parameters:
deployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.
endpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource is deployed.
traffic_split: How to split traffic among the remaining deployed models on the Endpoint resource.
Since this is the only deployed model on the Endpoint resource, you simply can leave traffic_split empty by setting it to {}.
End of explanation
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
Explanation: Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation
<END_TASK> |
15,585 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Introduction
This is going to be a very basic examples of Linear Regression. Basically, we have generated data of total amount of meals and tips.
We would like to use this historical data to predict the tip for any given amount of bill.
The data is going to be perfect because I just want to show how easy it is to do Linear Regression.
Best example so far I have found to calculate Linear Regression.
http
Step1: Generate Data
We are going to generate a 1000 samples of random number between \$0 - \$100. And let's say that each meal the customer tips 10% of the amount
Step2: It's easier if we select the correct X and Y axis. Usually, The Y axis would be the value we want to predict and X would be the feed data.
Step3: As we can see from the graph that there's a strong correlation between amount of tip and meal. Now we want to calculate the regression line. We need the slope and intercept to feed in the formula. Y = MX + C
Step4: Let's say if the customer spent $70 how much the customer will tip
Step5: Large dataset
Now let's have a look at large dataset. Let's see how our Linear Regression performs. I'm going to create 100 million datasets.
Step6: Gradient Decent
Now, I'm going to use Gradient Decent to find the fitted line. It's been known that Gradient Decent is better for large dataset. Let's see how well it performs. I'm going to use the code example from
https
Step7: Let's see after 1000 interations how close are we. Pretty close I think
Step8: But you really don't need to write that on your own as Scikit provides that for you already. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import numpy as np
from scipy import stats
import collections
import time
from sklearn.linear_model import SGDRegressor
Explanation: Introduction
This is going to be a very basic example of Linear Regression. Basically, we have generated data of total meal amounts and tips.
We would like to use this historical data to predict the tip for any given amount of bill.
The data is going to be perfect because I just want to show how easy it is to do Linear Regression.
Best example so far I have found to calculate Linear Regression.
http://onlinestatbook.com/2/regression/intro.html
http://spin.atomicobject.com/2014/06/24/gradient-descent-linear-regression/
https://github.com/mattnedrich/GradientDescentExample
Importing
End of explanation
total_bills = np.random.randint(100, size=1000)
tips = total_bills * 0.10
Explanation: Generate Data
We are going to generate 1000 samples of random numbers between \$0 - \$100. And let's say that for each meal the customer tips 10% of the amount
End of explanation
x = pd.Series(tips, name='tips')
y = pd.Series(total_bills, name='total_bills')
df = pd.concat([x, y], axis=1)
df.plot(kind='scatter', x='total_bills', y='tips');
Explanation: It's easier if we select the correct X and Y axis. Usually, The Y axis would be the value we want to predict and X would be the feed data.
End of explanation
slope, intercept, r_value, p_value, std_err = stats.linregress(x=total_bills, y=tips)
print("slope is %f and intercept is %s" % (slope,intercept))
Explanation: As we can see from the graph that there's a strong correlation between amount of tip and meal. Now we want to calculate the regression line. We need the slope and intercept to feed in the formula. Y = MX + C
End of explanation
predicted_tips = (slope * 70) + intercept
print('The customer will leave the tip of $%f' % predicted_tips)
Explanation: Let's say if the customer spent $70 how much the customer will tip
End of explanation
large_total_bills = np.random.randint(10000, size=100000000)
large_tips = large_total_bills * 0.10
now = time.time()
slope, intercept, r_value, p_value, std_err = stats.linregress(x=large_total_bills, y=large_tips)
predicted_tips = (slope * 700) + intercept
later = time.time()
difference = int(later - now)
print('The customer will leave the tip of $%f' % predicted_tips)
print('The time spent is %f seconds' % difference)
Explanation: Large dataset
Now let's have a look at a large dataset. Let's see how our Linear Regression performs. I'm going to create 100 million data points.
End of explanation
def compute_error_for_line_given_points (b, m, points):
totalError = 0
for i in range(0, len(points)):
totalError += (points[i].y - (m * points[i].x + b)) ** 2
return totalError / float(len(points))
def step_gradient(b_current, m_current, points, learningRate):
b_gradient = 0
m_gradient = 0
N = float(len(points))
for i in range(0, len(points)):
b_gradient += -(2/N) * (points[i].y - ((m_current*points[i].x) + b_current))
m_gradient += -(2/N) * points[i].x * (points[i].y - ((m_current * points[i].x) + b_current))
new_b = b_current - (learningRate * b_gradient)
new_m = m_current - (learningRate * m_gradient)
return [new_b, new_m]
def gradient_descent_runner(points, starting_b, starting_m, learning_rate, num_iterations):
b = starting_b
m = starting_m
for i in range(num_iterations):
b, m = step_gradient(b, m, points, learning_rate)
return [b, m]
class point:
def __init__(self,x,y):
self.x=x
self.y=y
x = np.random.randint(100, size=1000)
y = x * 0.10
np.column_stack((x,y))
points = []
collections.namedtuple('Point', ['x', 'y'])
for i in range(len(x)):
points.append(point(x[i],y[i]))
learning_rate = 0.0001
initial_b = 0 # initial y-intercept guess
initial_m = 0 # initial slope guess
num_iterations = 1000
print("Starting gradient descent at b = {0}, m = {1}, error = {2}".format(initial_b, initial_m, compute_error_for_line_given_points(initial_b, initial_m, points)))
print("Running...")
[b, m] = gradient_descent_runner(points, initial_b, initial_m, learning_rate, num_iterations)
print("After {0} iterations b = {1}, m = {2}, error = {3}".format(num_iterations, b, m, compute_error_for_line_given_points(b, m, points)))
Explanation: Gradient Descent
Now, I'm going to use Gradient Descent to find the fitted line. It's known that Gradient Descent scales better to large datasets. Let's see how well it performs. I'm going to use the code example from
https://github.com/mattnedrich/GradientDescentExample
End of explanation
gradient_predicted_tips = (m * 70) + b
gradient_predicted_tips
Explanation: Let's see how close we are after 1000 iterations. Pretty close, I think.
End of explanation
x = np.random.randint(100, size=100000000)
y = x * 0.10
x = x[:,None]
now = time.time()
clf = SGDRegressor()
clf.fit(x, y)
later = time.time()
difference = int(later - now)
print("Time spent for SGDRegressor is %d seconds" % difference)
print("slope is %f and intercept is %s" % (clf.coef_, clf.intercept_[0]))
clf.predict([[70]]) # How much tip for a $70 bill (predict expects a 2D array of samples)
Explanation: But you really don't need to write that on your own as Scikit provides that for you already.
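As a further sketch for comparison (using the same x and y arrays as above), the closed-form least-squares fit is available too:
python
# Sketch: ordinary least squares with scikit-learn, for comparison with SGDRegressor.
from sklearn.linear_model import LinearRegression
lr = LinearRegression()
lr.fit(x, y)                     # x is already shaped (n_samples, 1) above
print(lr.coef_, lr.intercept_)   # should come out close to 0.10 and 0.0
print(lr.predict([[70]]))        # predicted tip for a $70 bill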
End of explanation
<END_TASK> |
15,586 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Evaluation results
Overall association statistics
The tables in this section show the standard association metrics between human scores and different types of machine scores. These results are computed on the evaluation set. The scores for each model have been truncated to values indicated in truncation range. When indicated, scaled scores are computed by re-scaling the predicted scores using mean and standard deviation of human scores as observed on the training data and mean and standard deviation of machine scores as predicted for the training set.
Step1: Descriptive holistic score statistics
The table shows distributional properties of human and system scores. SMD values lower then -0.15 or higher than 0.15 are <span class="highlight_color">highlighted</span>.
Step2: Association statistics
The table shows the standard association metrics between human scores and machine scores. Note that some evaluations (*_trim_round) are based on rounded scores computed by first truncating and then rounding the predicted score. | <ASSISTANT_TASK:>
Python Code:
def read_evals(model_list, file_format_summarize):
has_missing_trims = False
evals = []
for (model_id, model_name, config, csvdir, file_format) in model_list:
csv_file = os.path.join(csvdir, '{}_eval_short.{}'.format(model_id, file_format))
if os.path.exists(csv_file):
df_eval = DataReader.read_from_file(csv_file, index_col=0)
df_eval.index = [model_name]
# figure out whether the score was scaled
df_eval['system score type'] = 'scale' if config.get('use_scaled_predictions') == True or config.get('scale_with') is not None else 'raw'
# we want to display the truncation range, but this is slightly complicated
# we first check to see if the post-processing params file exists; if it does,
# we grab the trim_min and trim_max values from that file (which still could be None!)
trim_min, trim_max = None, None
postproc_file = os.path.join(csvdir, '{}_postprocessing_params.{}'.format(model_id, file_format))
if os.path.exists(postproc_file):
df_postproc = DataReader.read_from_file(postproc_file)
trim_min = df_postproc['trim_min'].values[0]
trim_max = df_postproc['trim_max'].values[0]
# if the trim_min or trim_max is still None, we then grab whatever is in the config
trim_min = config.get('trim_min') if trim_min is None else trim_min
trim_max = config.get('trim_max') if trim_max is None else trim_max
# finally, we calculate the max and min scores; if we couldn't get any trim values,
# then we default these to `?` and the set `has_missing_trims=True`
if trim_min is None:
min_score, has_missing_trims = '?', True
else:
min_score = float(trim_min) - config.get('trim_tolerance', 0.4998)
if trim_max is None:
max_score, has_missing_trims = '?', True
else:
max_score = float(trim_max) + config.get('trim_tolerance', 0.4998)
df_eval['truncation range'] = "[{}, {}]".format(min_score, max_score)
# rename the columns to remove reference to scale/raw scores
new_column_names = [col.split('.')[0] if not 'round' in col
else '{} (rounded)'.format(col.split('.')[0])
for col in df_eval.columns ]
df_eval.columns = new_column_names
evals.append(df_eval)
if len(evals) > 0:
df_evals = pd.concat(evals, sort=True)
else:
df_evals = pd.DataFrame()
return df_evals, has_missing_trims
df_eval, has_missing_trims = read_evals(model_list, file_format_summarize)
if has_missing_trims:
display(Markdown('**Note:** The minimum and/or maximum scores after truncation could not be '
'be computed in some cases. This is because `trim_min` and/or `trim_max` '
'could not be found in either the configuration file or the postprocessing '
'parameters file. Scores that could not be computed are shown as `?`.'))
if not df_eval.empty:
writer = DataWriter(summary_id)
writer.write_experiment_output(output_dir,
{'eval_short': df_eval},
index=True,
file_format=file_format_summarize)
Explanation: Evaluation results
Overall association statistics
The tables in this section show the standard association metrics between human scores and different types of machine scores. These results are computed on the evaluation set. The scores for each model have been truncated to values indicated in truncation range. When indicated, scaled scores are computed by re-scaling the predicted scores using mean and standard deviation of human scores as observed on the training data and mean and standard deviation of machine scores as predicted for the training set.
End of explanation
pd.options.display.width=10
formatter = partial(color_highlighter, low=-0.15, high=0.15)
if not df_eval.empty:
display(HTML(df_eval[['N', 'system score type', "truncation range", 'h_mean', 'h_sd',
'sys_mean', 'sys_sd', 'SMD']].to_html(index=True,
classes=['sortable'],
escape=False,
formatters={'SMD': formatter},
float_format=int_or_float_format_func)))
else:
display(Markdown("No information available for any of the models"))
Explanation: Descriptive holistic score statistics
The table shows distributional properties of human and system scores. SMD values lower than -0.15 or higher than 0.15 are <span class="highlight_color">highlighted</span>.
End of explanation
if not df_eval.empty:
wtkappa_col = 'wtkappa' if 'wtkappa' in df_eval else 'wtkappa (rounded)'
display(HTML(df_eval[['N',
'system score type',
'corr', 'R2', 'RMSE',
wtkappa_col,
'kappa (rounded)',
'exact_agr (rounded)',
'adj_agr (rounded)']].to_html(index=True,
classes=['sortable'],
escape=False,
float_format=int_or_float_format_func)))
else:
display(Markdown("No information available for any of the models"))
Explanation: Association statistics
The table shows the standard association metrics between human scores and machine scores. Note that some evaluations (*_trim_round) are based on rounded scores computed by first truncating and then rounding the predicted score.
End of explanation
<END_TASK> |
15,587 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
TensorFlow 'raw'
Step1: Get the MNIST data
Put it into useful subsets, and show some of it as a sanity check
Step2: Create the Network
Step3: Set up the Loss Function
So that we can perform Gradient Descent to improve the networks' parameters during training
Step4: Set up the Training Function
Step5: Set up the Initializer
NB
Step6: Batching of Training
For efficiency, we operate on data in batches, so that (for instance) a GPU can operate on multiple examples simultaneously
Step7: Test function to check final accuracy
Step8: Finally, the Training...
For each epoch, we call the training function N_BATCHES times, accumulating an estimate of the training loss and accuracy.
Then we evaluate the accuracy on the validation set.
TODO
Step9: Visualising the Weight Matrix
We can retrieve the value of the trained weight matrix from the output layer.
It can be interpreted as a collection of images, one per class.
Step10: Plot the weight images.
We should expect to recognize similarities to the target images | <ASSISTANT_TASK:>
Python Code:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
%matplotlib inline
import gzip
import pickle
# Seed for reproducibility
np.random.seed(42)
Explanation: TensorFlow 'raw' :: MNIST MLP
This is a quick illustration of a single-layer network training on the MNIST data.
( Credit for the original workbook : Aymeric Damien :: https://github.com/aymericdamien/TensorFlow-Examples )
End of explanation
# Download the MNIST digits dataset (only if not present locally)
import os
import urllib.request
mnist_data = './data/MNIST'
mnist_pkl_gz = mnist_data+'/mnist.pkl.gz'
if not os.path.isfile(mnist_pkl_gz):
if not os.path.exists(mnist_data):
os.makedirs(mnist_data)
print("Downloading MNIST data file")
urllib.request.urlretrieve(
'http://deeplearning.net/data/mnist/mnist.pkl.gz',
mnist_pkl_gz)
print("MNIST data file available locally")
# Load training and test splits as numpy arrays
train, val, test = pickle.load(gzip.open(mnist_pkl_gz), encoding='iso-8859-1')
X_train, y_train = train
X_val, y_val = val
X_test, y_test = test
# The original 28x28 pixel images are flattened into 784 dimensional feature vectors
X_train.shape
# Plot the first few examples
plt.figure(figsize=(12,3))
for i in range(10):
plt.subplot(1, 10, i+1)
plt.imshow(X_train[i].reshape((28, 28)), cmap='gray', interpolation='nearest')
plt.axis('off')
Explanation: Get the MNIST data
Put it into useful subsets, and show some of it as a sanity check
End of explanation
# Network Parameters
n_input = 784 # MNIST data input (img shape: 28*28)
n_hidden_1 = 256 # 1st layer number of features
n_hidden_2 = 256 # 2nd layer number of features
n_classes = 10 # MNIST total classes (0-9 digits)
# tf Graph input
x = tf.placeholder("float32", [None, n_input], name='x_input')
#y = tf.placeholder("int32", [None, n_classes], name='y_target') # originally, a one-hot label
y = tf.placeholder("int32", [ None, ], name='y_target') # This is the label index instead
# Create model
def multilayer_perceptron(x, layer):
# Hidden layer with RELU activation
layer_1 = tf.add(tf.matmul(x, layer['h1']['weights']), layer['h1']['bias'])
layer_1 = tf.nn.relu(layer_1)
# Hidden layer with RELU activation
layer_2 = tf.add(tf.matmul(layer_1, layer['h2']['weights']), layer['h2']['bias'])
layer_2 = tf.nn.relu(layer_2)
# Output layer with linear activation
out_layer = tf.matmul(layer_2, layer['out']['weights']) + layer['out']['bias']
return out_layer
# Store layers weight & bias
layers=dict(
h1 = {
'weights':tf.Variable(tf.random_normal([n_input, n_hidden_1])),
'bias' :tf.Variable(tf.random_normal([n_hidden_1])),
},
h2 = {
'weights':tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
'bias' :tf.Variable(tf.random_normal([n_hidden_2])),
},
out = {
'weights':tf.Variable(tf.random_normal([n_hidden_2, n_classes])),
'bias' :tf.Variable(tf.random_normal([n_classes])),
},
)
# Construct model
logits = multilayer_perceptron(x, layers)
#pred = tf.argmax(logits, axis=1) # preferred spelling in newer TF versions
pred = tf.arg_max(logits, 1) # tf.arg_max is the older, deprecated alias of tf.argmax
#pred = tf.reshape( tf.arg_max(logits, 1), [-1])
Explanation: Create the Network
End of explanation
# Define optimizer for the labels (expressed as a onehot encoding)
labels = tf.one_hot(indices=y, depth=10)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
Explanation: Set up the Loss Function
So that we can perform Gradient Descent to improve the networks' parameters during training
End of explanation
# Parameters for the training phase
learning_rate = 0.001
TRAINING_EPOCHS = 10
BATCH_SIZE = 100
display_step = 1
# Define optimizer
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
#optimizer = tf.train.RMSPropOptimizer(learning_rate=learning_rate).minimize(cost)
Explanation: Set up the Training Function
End of explanation
# Define an 'op' that initializes the variables
init = tf.global_variables_initializer()
Explanation: Set up the Initializer
NB: Do this after creating all the variables (including those inplicitly create in, say AdamOptimizer) since this figures out all the variables that need initializing at the point it is defined, apparently.
End of explanation
# We'll choose a batch size, and calculate the number of batches in an "epoch"
# (approximately one pass through the data).
N_BATCHES = len(X_train) // BATCH_SIZE
#N_VAL_BATCHES = len(X_val) // BATCH_SIZE
# For training, we want to sample examples at random in small batches
def batch_gen(X_, y_, N):
while True:
idx = np.random.choice(len(y_), N)
yield X_[idx], y_[idx]
# Minibatch generator(s) for the training and validation sets
train_batches = batch_gen(X_train, y_train, BATCH_SIZE)
#val_batches = batch_gen(X_val, y_val, BATCH_SIZE)
# Try sampling from the batch generator.
# Plot an image and corresponding label from the training batcher to verify they match.
X_batch, y_batch = next(train_batches)
plt.imshow(X_batch[0].reshape((28, 28)), cmap='gray', interpolation='nearest')
print(y_batch[0])
# Plot an image and corresponding label from the validation set to verify they match.
X_batch, y_batch = X_val, y_val
plt.imshow(X_batch[0].reshape((28, 28)), cmap='gray', interpolation='nearest')
print(y_batch[0])
X_batch.shape, y_batch.shape,
Explanation: Batching of Training
For efficiency, we operate on data in batches, so that (for instance) a GPU can operate on multiple examples simultaneously
End of explanation
#correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1)) # with one-hots
correct_prediction = tf.equal(pred, tf.cast(y, tf.int64)) # with indices
# Calculate accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float32"))
Explanation: Test function to check final accuracy
End of explanation
# Launch the graph
with tf.Session() as sess:
sess.run(init) # Running this 'op' initialises the network weights
# Training cycle
for epoch in range(TRAINING_EPOCHS):
avg_cost = 0.
# Loop over all batches
for _ in range(N_BATCHES):
batch_x, batch_y = next(train_batches)
#print(batch_x.shape, batch_y.shape)
# Run optimization op (backprop) and cost op (to get loss value)
_, c = sess.run([optimizer, cost], feed_dict={x:batch_x, y:batch_y})
# Compute average loss
avg_cost += c / N_BATCHES
# Display logs per epoch step
if epoch % display_step == 0:
print("Epoch:", '%04d' % (epoch+1), "cost=","{:.2f}".format(avg_cost))
print("Optimization Finished!")
# Test model
accuracy_, y_, pred_ = sess.run([accuracy, y, pred ], feed_dict={x:X_val[0:10], y:y_val[0:10] })
print("Validation Accuracy: %.2f%% for first 10 examples" % ( 100. * accuracy_, ))
#print(y_)
#print(pred_)
print("Validation Accuracy: %.2f%%" % ( 100. * accuracy.eval({ x: X_val, y: y_val, }),))
print("DONE")
# This proves that when the sessions are done, the network is thrown away...
#with tf.Session() as sess:
# accuracy_, y_, pred_ = sess.run([accuracy, y, pred ], feed_dict={x:X_test[0:100], y:y_test[0:100] })
# print("Test Accuracy: %.2f%% for first 100 examples" % ( 100. * accuracy_, ))
#print("DONE")
Explanation: Finally, the Training...
For each epoch, we call the training function N_BATCHES times, accumulating an estimate of the training loss and accuracy.
Then we evaluate the accuracy on the validation set.
TODO : Print out the ratio of loss in the validation set vs the training set to help recognize overfitting.
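A minimal sketch of that TODO (it assumes the `cost` op, `sess`, and the validation arrays defined above) is to evaluate the cost op on the validation set at the end of each epoch:
python
# Sketch: add inside the epoch loop, after avg_cost has been accumulated.
# A ratio that keeps growing above 1 is a hint that the model is starting to overfit.
val_cost = sess.run(cost, feed_dict={x: X_val, y: y_val})
print("val/train loss ratio: %.3f" % (val_cost / avg_cost))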
End of explanation
#weights = l_out.W.get_value()
#print(weights.shape)
#with tf.Session() as sess:
# accuracy_, y_, pred_ = sess.run([accuracy, y, pred ], feed_dict={x:X_val[0:10], y:y_val[0:10] })
Explanation: Visualising the Weight Matrix
We can retrieve the value of the trained weight matrix from the output layer.
It can be interpreted as a collection of images, one per class.
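The commented lines above are left over from a Lasagne version of this notebook; with the TensorFlow graph used here, a sketch is to evaluate the weight variables while the training session is still open. Note that in this two-hidden-layer MLP the output weights have shape (256, 10), so they cannot be reshaped to 28x28 directly -- the first hidden layer's weights (784, 256) can be, one column per hidden unit.
python
# Sketch: run inside the `with tf.Session() as sess:` block above, after training.
w_h1 = sess.run(layers['h1']['weights'])    # shape (784, 256)
w_out = sess.run(layers['out']['weights'])  # shape (256, 10)
print(w_h1.shape, w_out.shape)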
End of explanation
#plt.figure(figsize=(12,3))
#for i in range(10):
# plt.subplot(1, 10, i+1)
# plt.imshow(weights[:,i].reshape((28, 28)), cmap='gray', interpolation='nearest')
# plt.axis('off')
Explanation: Plot the weight images.
We should expect to recognize similarities to the target images:
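A sketch of the plotting step, assuming `w_h1` was fetched as in the note above (it shows the first ten hidden units rather than one image per class):
python
# Sketch: visualize the incoming weights of the first ten hidden units as 28x28 images.
plt.figure(figsize=(12, 3))
for i in range(10):
    plt.subplot(1, 10, i + 1)
    plt.imshow(w_h1[:, i].reshape((28, 28)), cmap='gray', interpolation='nearest')
    plt.axis('off')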
End of explanation
<END_TASK> |
15,588 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
First BERT Experiments
In this notebook we do some first experiments with BERT
Step1: Data
We use the same data as for all our previous experiments. Here we load the training, development and test data for a particular prompt.
Step3: Next, we build the label vocabulary, which maps every label in the training data to an index.
Step4: Model
We load the pretrained model and put it on a GPU if one is available. We also put the model in "training" mode, so that we can correctly update its internal parameters on the basis of our data sets.
Step7: Preprocessing
We preprocess the data by turning every example to an InputFeatures item. This item has all the attributes we need for finetuning BERT
Step8: Next, we initialize data loaders for each of our data sets. These data loaders present the data for training (for example, by grouping them into batches).
Step9: Evaluation
Our evaluation method takes a pretrained model and a dataloader. It has the model predict the labels for the items in the data loader, and returns the loss, the correct labels, and the predicted labels.
Step10: Training
Let's prepare the training. We set the training parameters and choose an optimizer and learning rate scheduler.
Step11: Now we do the actual training. In each epoch, we present the model with all training data and compute the loss on the training set and the development set. We save the model whenever the development loss improves. We end training when we haven't seen an improvement of the development loss for a specific number of epochs (the patience).
Optionally, we use gradient accumulation to accumulate the gradient for several training steps. This is useful when we want to use a larger batch size than our current GPU allows us to do.
Step12: Results
We load the pretrained model, set it to evaluation mode and compute its performance on the training, development and test set. We print out an evaluation report for the test set.
Note that different runs will give slightly different results. | <ASSISTANT_TASK:>
Python Code:
import torch
from pytorch_transformers.tokenization_bert import BertTokenizer
from pytorch_transformers.modeling_bert import BertForSequenceClassification
BERT_MODEL = 'bert-base-uncased'
BATCH_SIZE = 16 if "base" in BERT_MODEL else 2
GRADIENT_ACCUMULATION_STEPS = 1 if "base" in BERT_MODEL else 8
tokenizer = BertTokenizer.from_pretrained(BERT_MODEL)
Explanation: First BERT Experiments
In this notebook we do some first experiments with BERT: we finetune a BERT model+classifier on each of our datasets separately and compute the accuracy of the resulting classifier on the test data.
For these experiments we use the pytorch_transformers package. It contains a variety of neural network architectures for transfer learning and pretrained models, including BERT and XLNET.
Two different BERT models are relevant for our experiments:
BERT-base-uncased: a relatively small BERT model that should already give reasonable results,
BERT-large-uncased: a larger model for real state-of-the-art results.
End of explanation
import ndjson
import glob
prefix = "junkfood_but"
train_file = f"../data/interim/{prefix}_train_withprompt_diverse200.ndjson"
synth_files = glob.glob(f"../data/interim/{prefix}_train_withprompt_*.ndjson")
dev_file = f"../data/interim/{prefix}_dev_withprompt.ndjson"
test_file = f"../data/interim/{prefix}_test_withprompt.ndjson"
with open(train_file) as i:
train_data = ndjson.load(i)
synth_data = []
for f in synth_files:
if "allsynth" in f:
continue
with open(f) as i:
synth_data += ndjson.load(i)
with open(dev_file) as i:
dev_data = ndjson.load(i)
with open(test_file) as i:
test_data = ndjson.load(i)
Explanation: Data
We use the same data as for all our previous experiments. Here we load the training, development and test data for a particular prompt.
End of explanation
label2idx = {}
idx2label = {}
target_names = []
for item in train_data:
if item["label"] not in label2idx:
target_names.append(item["label"])
idx = len(label2idx)
label2idx[item["label"]] = idx
idx2label[idx] = item["label"]
print(label2idx)
print(idx2label)
import random
def sample(train_data, synth_data, label2idx, number):
Sample a fixed number of items from every label from
the training data and test data.
new_train_data = []
for label in label2idx:
data_for_label = [i for i in train_data if i["label"] == label]
# If there is more training data than the required number,
# take a random sample of n examples from the training data.
if len(data_for_label) >= number:
random.shuffle(data_for_label)
new_train_data += data_for_label[:number]
# If there is less training data than the required number,
# combine training data with synthetic data.
elif len(data_for_label) < number:
# Automatically add all training data
new_train_data += data_for_label
# Compute the required number of additional data
rest = number-len(data_for_label)
# Collect the synthetic data for the label
synth_data_for_label = [i for i in synth_data if i["label"] == label]
# If there is more synthetic data than required,
# take a random sample from the synthetic data.
if len(synth_data_for_label) > rest:
random.shuffle(synth_data_for_label)
new_train_data += synth_data_for_label[:rest]
# If there is less synthetic data than required,
# sample with replacement from this data until we have
# the required number.
else:
new_train_data += random.choices(synth_data_for_label, k=rest)
return new_train_data
def random_sample(train_data, train_size):
random.shuffle(train_data)
train_data = train_data[:train_size]
return train_data
#train_data = train_data + synth_data
#train_data = sample(train_data, synth_data, label2idx, 200)
#train_data = random_sample(train_data, 200)
print("Train data size:", len(train_data))
Explanation: Next, we build the label vocabulary, which maps every label in the training data to an index.
End of explanation
model = BertForSequenceClassification.from_pretrained(BERT_MODEL, num_labels=len(label2idx))
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.train()
Explanation: Model
We load the pretrained model and put it on a GPU if one is available. We also put the model in "training" mode, so that we can correctly update its internal parameters on the basis of our data sets.
End of explanation
import logging
import numpy as np
logging.basicConfig(format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s',
datefmt = '%m/%d/%Y %H:%M:%S',
level = logging.INFO)
logger = logging.getLogger(__name__)
MAX_SEQ_LENGTH=100
class InputFeatures(object):
A single set of features of data.
def __init__(self, input_ids, input_mask, segment_ids, label_id):
self.input_ids = input_ids
self.input_mask = input_mask
self.segment_ids = segment_ids
self.label_id = label_id
def convert_examples_to_features(examples, label2idx, max_seq_length, tokenizer, verbose=0):
Loads a data file into a list of `InputBatch`s.
features = []
for (ex_index, ex) in enumerate(examples):
# TODO: should deal better with sentences > max tok length
input_ids = tokenizer.encode("[CLS] " + ex["text"] + " [SEP]")
segment_ids = [0] * len(input_ids)
# The mask has 1 for real tokens and 0 for padding tokens. Only real
# tokens are attended to.
input_mask = [1] * len(input_ids)
# Zero-pad up to the sequence length.
padding = [0] * (max_seq_length - len(input_ids))
input_ids += padding
input_mask += padding
segment_ids += padding
assert len(input_ids) == max_seq_length
assert len(input_mask) == max_seq_length
assert len(segment_ids) == max_seq_length
label_id = label2idx[ex["label"]]
if verbose and ex_index == 0:
logger.info("*** Example ***")
logger.info("text: %s" % ex["text"])
logger.info("input_ids: %s" % " ".join([str(x) for x in input_ids]))
logger.info("input_mask: %s" % " ".join([str(x) for x in input_mask]))
logger.info("segment_ids: %s" % " ".join([str(x) for x in segment_ids]))
logger.info("label:" + str(ex["label"]) + " id: " + str(label_id))
features.append(
InputFeatures(input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids,
label_id=label_id))
return features
train_features = convert_examples_to_features(train_data, label2idx, MAX_SEQ_LENGTH, tokenizer, verbose=0)
dev_features = convert_examples_to_features(dev_data, label2idx, MAX_SEQ_LENGTH, tokenizer)
test_features = convert_examples_to_features(test_data, label2idx, MAX_SEQ_LENGTH, tokenizer, verbose=1)
Explanation: Preprocessing
We preprocess the data by turning every example to an InputFeatures item. This item has all the attributes we need for finetuning BERT:
input ids: the ids of the tokens in the text
input mask: tells BERT what part of the input it should not look at (such as padding tokens)
segment ids: tells BERT what segment every token belongs to. BERT can take two different segments as input
label id: the id of this item's label
End of explanation
import torch
from torch.utils.data import TensorDataset, DataLoader, RandomSampler
def get_data_loader(features, max_seq_length, batch_size, shuffle=True):
all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
all_input_mask = torch.tensor([f.input_mask for f in features], dtype=torch.long)
all_segment_ids = torch.tensor([f.segment_ids for f in features], dtype=torch.long)
all_label_ids = torch.tensor([f.label_id for f in features], dtype=torch.long)
data = TensorDataset(all_input_ids, all_input_mask, all_segment_ids, all_label_ids)
dataloader = DataLoader(data, shuffle=shuffle, batch_size=batch_size)
return dataloader
train_dataloader = get_data_loader(train_features, MAX_SEQ_LENGTH, BATCH_SIZE)
dev_dataloader = get_data_loader(dev_features, MAX_SEQ_LENGTH, BATCH_SIZE)
test_dataloader = get_data_loader(test_features, MAX_SEQ_LENGTH, BATCH_SIZE, shuffle=False)
Explanation: Next, we initialize data loaders for each of our data sets. These data loaders present the data for training (for example, by grouping them into batches).
End of explanation
def evaluate(model, dataloader, verbose=False):
eval_loss = 0
nb_eval_steps = 0
predicted_labels, correct_labels = [], []
for step, batch in enumerate(tqdm(dataloader, desc="Evaluation iteration")):
batch = tuple(t.to(device) for t in batch)
input_ids, input_mask, segment_ids, label_ids = batch
with torch.no_grad():
tmp_eval_loss, logits = model(input_ids, segment_ids, input_mask, label_ids)
outputs = np.argmax(logits.to('cpu'), axis=1)
label_ids = label_ids.to('cpu').numpy()
predicted_labels += list(outputs)
correct_labels += list(label_ids)
eval_loss += tmp_eval_loss.mean().item()
nb_eval_steps += 1
eval_loss = eval_loss / nb_eval_steps
correct_labels = np.array(correct_labels)
predicted_labels = np.array(predicted_labels)
return eval_loss, correct_labels, predicted_labels
Explanation: Evaluation
Our evaluation method takes a pretrained model and a dataloader. It has the model predict the labels for the items in the data loader, and returns the loss, the correct labels, and the predicted labels.
End of explanation
from pytorch_transformers.optimization import AdamW, WarmupLinearSchedule
NUM_TRAIN_EPOCHS = 20
LEARNING_RATE = 1e-5
WARMUP_PROPORTION = 0.1
def warmup_linear(x, warmup=0.002):
if x < warmup:
return x/warmup
return 1.0 - x
num_train_steps = int(len(train_data) / BATCH_SIZE / GRADIENT_ACCUMULATION_STEPS * NUM_TRAIN_EPOCHS)
param_optimizer = list(model.named_parameters())
no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
optimizer_grouped_parameters = [
{'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},
{'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}
]
optimizer = AdamW(optimizer_grouped_parameters, lr=LEARNING_RATE, correct_bias=False)
scheduler = WarmupLinearSchedule(optimizer, warmup_steps=100, t_total=num_train_steps)
Explanation: Training
Let's prepare the training. We set the training parameters and choose an optimizer and learning rate scheduler.
End of explanation
import os
from tqdm import trange
from tqdm import tqdm_notebook as tqdm
from sklearn.metrics import classification_report, precision_recall_fscore_support
OUTPUT_DIR = "/tmp/"
MODEL_FILE_NAME = "pytorch_model.bin"
PATIENCE = 5
global_step = 0
model.train()
loss_history = []
best_epoch = 0
for epoch in trange(int(NUM_TRAIN_EPOCHS), desc="Epoch"):
tr_loss = 0
nb_tr_examples, nb_tr_steps = 0, 0
for step, batch in enumerate(tqdm(train_dataloader, desc="Training iteration")):
batch = tuple(t.to(device) for t in batch)
input_ids, input_mask, segment_ids, label_ids = batch
outputs = model(input_ids, segment_ids, input_mask, label_ids)
loss = outputs[0]
if GRADIENT_ACCUMULATION_STEPS > 1:
loss = loss / GRADIENT_ACCUMULATION_STEPS
loss.backward()
tr_loss += loss.item()
nb_tr_examples += input_ids.size(0)
nb_tr_steps += 1
if (step + 1) % GRADIENT_ACCUMULATION_STEPS == 0:
lr_this_step = LEARNING_RATE * warmup_linear(global_step/num_train_steps, WARMUP_PROPORTION)
for param_group in optimizer.param_groups:
param_group['lr'] = lr_this_step
optimizer.step()
optimizer.zero_grad()
global_step += 1
dev_loss, _, _ = evaluate(model, dev_dataloader)
print("Loss history:", loss_history)
print("Dev loss:", dev_loss)
if len(loss_history) == 0 or dev_loss < min(loss_history):
model_to_save = model.module if hasattr(model, 'module') else model
output_model_file = os.path.join(OUTPUT_DIR, MODEL_FILE_NAME)
torch.save(model_to_save.state_dict(), output_model_file)
best_epoch = epoch
if epoch-best_epoch >= PATIENCE:
print("No improvement on development set. Finish training.")
break
loss_history.append(dev_loss)
Explanation: Now we do the actual training. In each epoch, we present the model with all training data and compute the loss on the training set and the development set. We save the model whenever the development loss improves. We end training when we haven't seen an improvement of the development loss for a specific number of epochs (the patience).
Optionally, we use gradient accumulation to accumulate the gradient for several training steps. This is useful when we want to use a larger batch size than our current GPU allows us to do.
End of explanation
print("Loading model from", output_model_file)
device="cpu"
model_state_dict = torch.load(output_model_file, map_location=lambda storage, loc: storage)
model = BertForSequenceClassification.from_pretrained(BERT_MODEL, state_dict=model_state_dict, num_labels=len(label2idx))
model.to(device)
model.eval()
#_, train_correct, train_predicted = evaluate(model, train_dataloader)
#_, dev_correct, dev_predicted = evaluate(model, dev_dataloader)
_, test_correct, test_predicted = evaluate(model, test_dataloader, verbose=True)
#print("Training performance:", precision_recall_fscore_support(train_correct, train_predicted, average="micro"))
#print("Development performance:", precision_recall_fscore_support(dev_correct, dev_predicted, average="micro"))
print("Test performance:", precision_recall_fscore_support(test_correct, test_predicted, average="micro"))
print(classification_report(test_correct, test_predicted, target_names=target_names))
c = 0
for item, predicted, correct in zip(test_data, test_predicted, test_correct):
assert item["label"] == idx2label[correct]
c += (item["label"] == idx2label[predicted])
print("{}#{}#{}".format(item["text"], idx2label[correct], idx2label[predicted]))
print(c)
print(c/len(test_data))
Explanation: Results
We load the pretrained model, set it to evaluation mode and compute its performance on the training, development and test set. We print out an evaluation report for the test set.
Note that different runs will give slightly different results.
End of explanation
<END_TASK> |
15,589 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Intro
This notebook explores words embeddings.
Includes playing around with Word2Vec using Gensim and exploration of GloVe pretained embeddings.
Resources
https
Step1: Word2Vec
Step2: Train Word2Vec
Step3: Test Word2Vec Model
Step4: GloVe | <ASSISTANT_TASK:>
Python Code:
sentences = ["A brown fox jumped on the lazy dog",
"A brown fox jumped on the brown duck",
"A brown fox jumped on the lazy elephant",
"An elephant is eating green grass near the alpaca",
"A green alpaca tried to jump over an elephant",
"May you rest in a deep and dreamless slumber"]
# dummy tokenization
tokenized_sentences = [sent.strip().split() for sent in sentences]
# word to index
counter = collections.Counter(itertools.chain(*tokenized_sentences))
vocab = counter.most_common()
index_to_word = [x[0] for x in vocab]
word_to_index = dict([(w,i) for i,w in enumerate(index_to_word)])
Explanation: Intro
This notebook explores word embeddings.
Includes playing around with Word2Vec using Gensim and exploration of GloVe pretrained embeddings.
Resources
https://rare-technologies.com/word2vec-tutorial/
https://radimrehurek.com/gensim/models/word2vec.html
https://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html
https://machinelearningmastery.com/use-word-embedding-layers-deep-learning-keras/
Data
End of explanation
import gensim, logging
Explanation: Word2Vec
End of explanation
# parameters
size = 200 # size of NN layers, corresponding to word vector dimensionality
min_count = 1 # minimum word count in order to consider such word
workers = 4 # number of threads to run in parallel (only effect if you have Cython installed)
window = 10 # Context window size
downsampling = 1e-3 # Downsample setting for frequent words
print("Training model...")
model = gensim.models.Word2Vec([s.split() for s in sentences],
workers=workers,
size=size, min_count = min_count,
window = window, sample = downsampling)
# If you don't plan to train the model any further, calling
# init_sims will make the model much more memory-efficient.
model.init_sims(replace=True)
# save model
model_name = "w2v_{}_size{}_mincount{}_window{}".format(corpus_name, size, min_count, window)
model.save(model_name)
Explanation: Train Word2Vec
End of explanation
# load model
model = gensim.models.Word2Vec.load(model_name)
# NOTE: these are the standard gensim examples; they assume the words are in the
# training vocabulary, so with the tiny toy corpus above they would raise a KeyError.
model.most_similar(positive=['woman', 'king'], negative=['man'], topn=1)
model.doesnt_match("breakfast cereal dinner lunch".split())
model.similarity('woman', 'man')
# sentence to tensor
model[['brown', 'fox']].shape
import nltk
import itertools
import collections
nltk.FreqDist(itertools.chain(*[s.split() for s in sentences]))
collections.Counter(itertools.chain(*[s.split() for s in sentences]))
Explanation: Test Word2Vec Model
End of explanation
# load GloVe embeddings
EMBEDDING_DIM = 100
embeddings = {}
with open(join(data_folder, "glove", "glove.6B.100d.txt")) as glove:
for line in glove:
values = line.strip().split()
word = values[0]
vector = np.asarray(values[1:], dtype='float32')
embeddings[word] = vector
embeddings['objected']
for i, (k, v) in enumerate(embeddings.items()):
if k=='objected':
print(i)
# create embedding matrix
embeddings_matrix = np.zeros((len(word_to_index)+1, EMBEDDING_DIM))
for word, i in word_to_index.items():
if word in embeddings:
embeddings_matrix[i] = embeddings[word]
Explanation: GloVe
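The embeddings_matrix built above is the piece a downstream model would consume. As a sketch (following the Keras blog post in the resources; Keras is not imported elsewhere in this notebook), it plugs into a frozen Embedding layer like this:
python
# Sketch: pretrained GloVe vectors as fixed weights in a Keras Embedding layer.
from keras.layers import Embedding
embedding_layer = Embedding(input_dim=len(word_to_index) + 1,
                            output_dim=EMBEDDING_DIM,
                            weights=[embeddings_matrix],
                            trainable=False)  # keep the pretrained vectors frozen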
End of explanation
<END_TASK> |
15,590 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Fast GP implementations
The data file needed for this tutorial can be downloaded as follows
Step5: Benchmarking our implementation
Let's do some timing tests and compare them to what we get with two handy GP packages
Step6: <div style="background-color | <ASSISTANT_TASK:>
Python Code:
!wget https://raw.githubusercontent.com/rodluger/tutorials/master/gps/data/sample_transit.txt
!mv *.txt data/
Explanation: Fast GP implementations
The data file needed for this tutorial can be downloaded as follows:
End of explanation
import numpy as np
from scipy.linalg import cho_factor
def ExpSquaredKernel(t1, t2=None, A=1.0, l=1.0):
Return the ``N x M`` exponential squared
covariance matrix between time vectors `t1`
and `t2`. The kernel has amplitude `A` and
lengthscale `l`.
if t2 is None:
t2 = t1
T2, T1 = np.meshgrid(t2, t1)
return A ** 2 * np.exp(-0.5 * (T1 - T2) ** 2 / l ** 2)
def ln_gp_likelihood(t, y, sigma=0, A=1.0, l=1.0):
Return the log of the GP likelihood of the
data `y(t)` given uncertainty `sigma` and
an Exponential Squared Kernel with amplitude `A`
and length scale `sigma`.
# The covariance and its determinant
npts = len(t)
kernel = ExpSquaredKernel
K = kernel(t, A=A, l=l) + sigma ** 2 * np.eye(npts)
# The marginal log likelihood
log_like = -0.5 * np.dot(y.T, np.linalg.solve(K, y))
log_like -= 0.5 * np.linalg.slogdet(K)[1]
log_like -= 0.5 * npts * np.log(2 * np.pi)
return log_like
def draw_from_gaussian(mu, S, ndraws=1, eps=1e-12):
Generate samples from a multivariate gaussian
specified by covariance ``S`` and mean ``mu``.
(We derived these equations in Day 1, Notebook 01, Exercise 7.)
npts = S.shape[0]
L, _ = cho_factor(S + eps * np.eye(npts), lower=True)
L = np.tril(L)
u = np.random.randn(npts, ndraws)
x = np.dot(L, u) + mu[:, None]
return x.T
def compute_gp(t_train, y_train, t_test, sigma=0, A=1.0, l=1.0):
Compute the mean vector and covariance matrix of a GP
at times `t_test` given training points `y_train(t_train)`.
The training points have uncertainty `sigma` and the
kernel is assumed to be an Exponential Squared Kernel
with amplitude `A` and lengthscale `l`.
# Compute the required matrices
kernel = ExpSquaredKernel
    Stt = kernel(t_train, A=A, l=l)
    Stt += sigma ** 2 * np.eye(Stt.shape[0])
    Spp = kernel(t_test, A=A, l=l)
    Spt = kernel(t_test, t_train, A=A, l=l)
# Compute the mean and covariance of the GP
mu = np.dot(Spt, np.linalg.solve(Stt, y_train))
S = Spp - np.dot(Spt, np.linalg.solve(Stt, Spt.T))
return mu, S
Explanation: Benchmarking our implementation
Let's do some timing tests and compare them to what we get with two handy GP packages: george and celerite. We'll learn how to use both along the way.
Below is the code we wrote in the Day 1 tutorials to sample from and compute the likelihood of a GP.
End of explanation
import matplotlib.pyplot as plt
t, y, yerr = np.loadtxt("data/sample_transit.txt", unpack=True)
plt.errorbar(t, y, yerr=yerr, fmt=".k", capsize=0)
plt.xlabel("time")
plt.ylabel("relative flux");
Explanation: <div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise 1a</h1>
</div>
Let's time how long our custom implementation of a GP takes for a rather long dataset. Create a time array of 10,000 points between 0 and 10 and time how long it takes to sample the prior of the GP for the default kernel parameters (unit amplitude and timescale). Add a bit of noise to the sample and then time how long it takes to evaluate the log likelihood for the dataset. Make sure to store the value of the log likelihood for later.
<div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise 1b</h1>
</div>
Let's time how long it takes to do the same operations using the george package (pip install george).
The kernel we'll use is
python
kernel = amp ** 2 * george.kernels.ExpSquaredKernel(tau ** 2)
where amp = 1 and tau = 1 in this case.
To instantiate a GP using george, simply run
python
gp = george.GP(kernel)
The george package pre-computes a lot of matrices that are re-used in different operations, so before anything else, ask it to compute the GP model for your timeseries:
python
gp.compute(t, sigma)
Note that we've only given it the time array and the uncertainties, so as long as those remain the same, you don't have to re-compute anything. This will save you a lot of time in the long run!
Finally, the log likelihood is given by gp.log_likelihood(y) and a sample can be drawn by calling gp.sample().
How do the speeds compare? Did you get the same value of the likelihood (assuming you computed it for the same sample in both cases)?
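Putting the george calls described above together, a minimal sketch (not a full timing solution) looks like:
python
# Sketch of the george workflow described above, using the time array `t`,
# uncertainty `sigma`, and noisy sample `y` from Exercise 1a.
import george
kernel = 1.0 ** 2 * george.kernels.ExpSquaredKernel(1.0 ** 2)
gp = george.GP(kernel)
gp.compute(t, sigma)           # pre-computes the matrices for this time array
sample = gp.sample()           # one draw from the GP prior
print(gp.log_likelihood(y))    # log likelihood of the noisy sample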
<div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise 1c</h1>
</div>
george offers a fancy GP solver called the HODLR solver, which makes some approximations that dramatically speed up the matrix algebra. Instantiate the GP object again by passing the keyword solver=george.HODLRSolver and re-compute the log likelihood. How long did that take?
(I wasn't able to draw samples using the HODLR solver; unfortunately this may not be implemented.)
<div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise 2</h1>
</div>
The george package is super useful for GP modeling, and I recommend you read over the docs and examples. It implements several different kernels that come in handy in different situations, and it has support for multi-dimensional GPs. But if all you care about are GPs in one dimension (in this case, we're only doing GPs in the time domain, so we're good), then celerite is what it's all about:
bash
pip install celerite
Check out the docs here, as well as several tutorials. There is also a paper that discusses the math behind celerite. The basic idea is that for certain families of kernels, there exist extremely efficient methods of factorizing the covariance matrices. Whereas GP fitting typically scales with the number of datapoints $N$ as $N^3$, celerite is able to do everything in order $N$ (!!!) This is a huge advantage, especially for datasets with tens or hundreds of thousands of data points. Using george or any homebuilt GP model for datasets larger than about 10,000 points is simply intractable, but with celerite you can do it in a breeze.
Repeat the timing tests, but this time using celerite. Note that the Exponential Squared Kernel is not available in celerite, because it doesn't have the special form needed to make its factorization fast. Instead, use the Matern 3/2 kernel, which is qualitatively similar, and which can be approximated quite well in terms of the celerite basis functions:
python
kernel = celerite.terms.Matern32Term(np.log(1), np.log(1))
Note that celerite accepts the log of the amplitude and the log of the timescale. Other than this, you should be able to compute the likelihood and draw a sample with the same syntax as george.
How much faster did it run?
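A bare-bones sketch of the celerite version (same t, y and sigma as before) might be:
python
import numpy as np
import celerite

gp = celerite.GP(celerite.terms.Matern32Term(np.log(1.0), np.log(1.0)))
gp.compute(t, sigma)
print("log like =", gp.log_likelihood(y))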
<div style="background-color: #D6EAF8; border-left: 15px solid #2E86C1;">
<h1 style="line-height:2.5em; margin-left:1em;">Exercise 3</h1>
</div>
Let's use celerite for a real application: fitting an exoplanet transit model in the presence of correlated noise.
Below is a (fictitious) light curve for a star with a transiting planet. There is a transit visible to the eye at $t = 0$, which (say) is when you'd expect the planet to transit if its orbit were perfectly periodic. However, a recent paper claims that the planet shows transit timing variations, which are indicative of a second, perturbing planet in the system, and that a transit at $t = 0$ can be ruled out at 3 $\sigma$. Your task is to verify this claim.
Assume you have no prior information on the planet other than the transit occurs in the observation window, the depth of the transit is somewhere in the range $(0, 1)$, and the transit duration is somewhere between $0.1$ and $1$ day. You don't know the exact process generating the noise, but you are certain that there's correlated noise in the dataset, so you'll have to pick a reasonable kernel and estimate its hyperparameters.
Fit the transit with a simple inverted Gaussian with three free parameters:
python
def transit_shape(depth, t0, dur):
return -depth * np.exp(-0.5 * (t - t0) ** 2 / (0.2 * dur) ** 2)
Read the celerite docs to figure out how to solve this problem efficiently.
HINT: I borrowed heavily from this tutorial, so you might want to take a look at it...
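One common way to frame this (a sketch, not the only option) is to treat the transit as the mean model and let the GP absorb the correlated noise, so the likelihood of a parameter vector is the GP likelihood of the residuals; the parameter bundling below is just one choice:
python
import celerite

def log_likelihood(params):
    depth, t0, dur, log_sigma, log_rho = params
    gp = celerite.GP(celerite.terms.Matern32Term(log_sigma, log_rho))
    gp.compute(t, yerr)
    return gp.log_likelihood(y - transit_shape(depth, t0, dur))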
End of explanation
<END_TASK> |
15,591 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
display the first three animals
Step1: Sort the animals to see the 3 longest animals
Step2: What are the counts of the different values of the "animal" column? a.k.a. how many cats and how many dogs
Step3: Only select the dogs
Step4: Display all of the animals that are greater than 40 cm
Step5: 'length' is the animal's length in cm. Create a new column called inches that is the length in inches
Step6: Save the cats to a separate variable called "cats." Save the dogs to a separate variable called "dogs."
Step7: Display all of the animals that are cats and above 12 inches long. First do it using the "cats" variable, then do it using your normal dataframe.
Step8: What's the mean length of a cat?
Step9: What's the mean length of a dog?
Step10: Use groupby to accomplish both of the above tasks at once.
Step11: Make a histogram of the length of dogs. I apologize that it is so boring
Step12: Change your graphing style to be something else (anything else!)
Step13: Make a horizontal bar graph of the length of the animals, with their name as the label (look at the billionaires notebook I put on Slack!)
Step14: Make a sorted horizontal bar graph of the cats, with the larger cats on top | <ASSISTANT_TASK:>
Python Code:
columns_to_show = ['animal','name','length']
df[columns_to_show][0:3]
Explanation: display the first three animals
End of explanation
df.sort_values(by='length', ascending=False).head(3) #sort_values, not sort, and sort by length column
Explanation: Sort the animals to see the 3 longest animals
End of explanation
df['animal'].value_counts() #value_counts()
Explanation: What are the counts of the different values of the "animal" column? a.k.a. how many cats and how many dogs
End of explanation
is_dog = df['animal']=='dog'
df[df['animal'] == 'dog'] #df['animal]==dog will only give a list of true or false, if you want the rows back, you have to put a df[] to put the is_dog into a data frame
Explanation: Only select the dogs
End of explanation
df[df['length'] > 40]
Explanation: Display all of the animals that are greater than 40 cm
End of explanation
df['inches']=df['length']/2.54
df
Explanation: 'length' is the animal's length in cm. Create a new column called inches that is the length in inches
End of explanation
dogs=df['animal'] == 'dog'
cats=df['animal'] == 'cat'
Explanation: Save the cats to a separate variable called "cats." Save the dogs to a separate variable called "dogs."
End of explanation
df[(df['animal'] =='cat') & (df['inches']>12)] #panda only recognizes the conditions in parentaces
Explanation: Display all of the animals that are cats and above 12 inches long. First do it using the "cats" variable, then do it using your normal dataframe.
End of explanation
df[cats].describe()
#the mean length of a cat is 37.33333
Explanation: What's the mean length of a cat?
End of explanation
df[dogs].describe()
# the mean length of a dog is 50inches
Explanation: What's the mean length of a dog?
End of explanation
df.groupby([cats,dogs]).describe()
Explanation: Use groupby to accomplish both of the above tasks at once.
End of explanation
new_dog_data = df[df['animal'] == 'dog']
new_dog_data['length'].hist()
Explanation: Make a histogram of the length of dogs. I apologize that it is so boring
End of explanation
plt.style.use('ggplot')
new_dog_data['length'].hist()
Explanation: Change your graphing style to be something else (anything else!)
End of explanation
df.plot(kind='barh', x='name',y='length') #length='false' remove the length #barh: a hornizontal bar chat
Explanation: Make a horizontal bar graph of the length of the animals, with their name as the label (look at the billionaires notebook I put on Slack!)
End of explanation
df[df['animal']== 'cat'].sort_values(by='length').plot(kind='barh', x='name', y='length')
Explanation: Make a sorted horizontal bar graph of the cats, with the larger cats on top
End of explanation
<END_TASK> |
15,592 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Why Objects?
Provide modularity and reuse through hierarchical structures
Object oriented programming is a different way of thinking.
Programming With Objects
Step1: Initial concepts
An object is a container of data (attributes) and code (methods)
A class is a template for creating objects
Reuse is provided by
Step2: Attributes
Step3: Attributes are data associated with an object (instance) or class. Object attributes (and methods) are specified by using "self". Instance attributes and methods are accessed using the dot "." operator.
Step4: EXERCISE
Step7: A class diagram provides a more compact representation of a class. There are three sections.
- Class name
- Attributes
- Methods
Instance methods
- functions associated with the objects constructed for a class
- provide a way to transform data in objects
- use instance attributes (references to variables beginning with "self.")
Step9: EXERCISE
Step10: Exercise
Step11: Subclasses can have their own methods.
Exercise
Step12: The diamond arrow is a "has-a" relationship. For example, the Controller has-a ATMInput. This means that a Controller object has an instance variable for an ATMInput object.
Interaction Diagram for the ATM System
An interaction diagram specifies how components interact to achieve a use case.
Interactions are from one object to another object, indicating that the first object calls a method in the second object.
Rules for drawing lines in an interaction diagram
Step13: Look at Objects/ATMDiagrams.pdf for a solution.
What Else in Design?
Other diagrams | <ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image(filename='Classes_vs_Objects.png')
Explanation: Why Objects?
Provide modularity and reuse through hierarchical structures
Object oriented programming is a different way of thinking.
Programming With Objects
End of explanation
# Defining a Car class
class Car(object):
pass
Explanation: Initial concepts
An object is a container of data (attributes) and code (methods)
A class is a template for creating objects
Reuse is provided by:
reusing the same class to create many objects
"inheriting" data and code from other classes
End of explanation
from IPython.display import Image
Image(filename='ClassAttributes.png')
Explanation: Attributes
End of explanation
class Car(object):
    # The following method is called when an object of the class
# is created or "constructed". The variables "self.x" refers
# to the variable "x" in a created object.
def __init__(self, color, car_type, speed):
self.color = color
self.car_type = car_type
self.speed = speed
# Creating an object for a class with arguments in the __init__ method
car = Car("Blue", "HatchBack", 100)
car.color
# Creating an object for a class with arguments in the __init__ method
joe_car = Car("Blue", "Sedan", 100)
dave_car = Car("Red", "Sports", 150)
print ("Type of joe_car is %s. Type of dave_car is %s"% (type(joe_car), type(dave_car)))
# Accessed instance attributes
joe_car = Car("Blue", "Sedan", 100)
print ("Type of joe_car has (color, type, speed)=%s." % str((joe_car.color, joe_car.car_type, joe_car.speed)))
Explanation: Attributes are data associated with an object (instance) or class. Object attributes (and methods) are specified by using "self". Instance attributes and methods are accessed using the dot "." operator.
End of explanation
from IPython.display import Image
Image(filename='InstanceMethods.png')
#Class diagram
from IPython.display import Image
Image(filename='SingleClassDiagram.png', width=200, height=200)
Explanation: EXERCISE: Change the constructor for Car to include the attribute "doors".
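A possible sketch of that change (the default of 4 doors is just illustrative):
python
class Car(object):
    def __init__(self, color, car_type, speed, doors=4):
        self.color = color
        self.car_type = car_type
        self.speed = speed
        self.doors = doors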
Instance Methods
End of explanation
class Car(object):
def __init__(self, color, car_type, speed):
        """
        :param str color:
        :param str car_type:
        :param int speed:
        """
self.color = color
self.car_type = car_type
self.speed = speed
def start(self):
print ("%s %s started!" % (self.color, self.car_type))
def stop(self):
pass
def turn(self, direction):
        """:param str direction: left or right"""
pass
car = Car("Blue", "Sedan", 100)
car.start()
Explanation: A class diagram provides a more compact representation of a class. There are three sections.
- Class name
- Attributes
- Methods
Instance methods
- functions associated with the objects constructed for a class
- provide a way to transform data in objects
- use instance attributes (references to variables beginning with "self.")
End of explanation
from IPython.display import Image
Image(filename='SimpleClassHierarchy.png', width=400, height=400)
# Code for inheritance
class Sedan(Car):
# Sedan inherits from car
def __init__(self, color, speed):
        """
        :param str color:
        :param int speed:
        """
super().__init__(color, "Sedan", speed)
def play_cd(self):
print ("Playing cd in %s sedan" % self.color)
sedan = Sedan("Yellow", 1e6)
sedan.play_cd()
sedan.car_type
joe_car = Sedan("Blue", 100)
print ("Type of joe_car has (color, type, speed)=%s." % str((joe_car.color, joe_car.car_type, joe_car.speed)))
Explanation: EXERCISE: Implement the stop and turn methods. Run the methods.
Inheritance
Inheritance is a common way that classes reuse data and code from other classes. A child class or derived class gets attributes and methods from its parent class.
Programmatically:
- Specify inheritance in the class statement
- Constructor for derived class (class that inherits) have access to the constructor of its parent.
Inheritance is represented in diagrams as an arror from the child class to its parent class.
End of explanation
from IPython.display import Image
Image(filename='ClassInheritance.png', width=400, height=400)
Explanation: Exercise: Implement SportsCar and create dave_car from SportsCar. Print attributes of dave_car.
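One way the exercise could be sketched, mirroring the Sedan class above (the "Sports" type string is an assumption):
python
class SportsCar(Car):
    # SportsCar inherits from Car
    def __init__(self, color, speed):
        super().__init__(color, "Sports", speed)

dave_car = SportsCar("Red", 150)
print("dave_car has (color, type, speed)=%s" % str((dave_car.color, dave_car.car_type, dave_car.speed)))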
End of explanation
from IPython.display import Image
Image(filename='ATMClassDiagram.png', width=400, height=400)
Explanation: Subclasses can have their own methods.
Exercise: Add the play_cd() to Sedan and play_bluetooth() method to SportsCar. Construct a test to run these methods.
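A minimal sketch of that exercise (method bodies are illustrative):
python
class SportsCar(Car):
    def __init__(self, color, speed):
        super().__init__(color, "Sports", speed)
    def play_bluetooth(self):
        print ("Playing bluetooth in %s sports car" % self.color)

SportsCar("Red", 150).play_bluetooth()
Sedan("Blue", 100).play_cd()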
What Else?
Class attributes
Class methods
Object Oriented Design
A design methodology must specify:
- Components: What they do and how to build them
- Interactions: How the components interact to implement use cases
Object oriented design:
- Components are specified by class diagrams.
- Interactions are specified by interaction diagrams.
Class diagram for the ATM system
End of explanation
from IPython.display import Image
Image(filename='ATMAuthentication.png', width=800, height=800)
Explanation: The diamond arrow is a "has-a" relationship. For example, the Controller has-a ATMInput. This means that a Controller object has an instance variable for an ATMInput object.
Interaction Diagram for the ATM System
An interaction diagram specifies how components interact to achieve a use case.
Interactions are from one object to another object, indicating that the first object calls a method in the second object.
Rules for drawing lines in an interaction diagram:
- The calling object must know about the called object.
- The called object must have the method invoked by the calling object.
End of explanation
from IPython.display import Image
Image(filename='SciSheetsCoreClasses.png', width=300, height=30)
Explanation: Look at Objects/ATMDiagrams.pdf for a solution.
What Else in Design?
Other diagrams: state diagrams, package diagrams, ...
Object oriented design patterns
Complex Example of Class Hierarchy
End of explanation
<END_TASK> |
15,593 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Window functions
Stingray now has a bunch of window functions that can be used for various applications in signal processing.
Windows available include
Step1: create_window function in stingray.utils takes two parameters.
N
Step2: Parzen Window
Step3: Hamming Window
Step4: Hanning Window
Step5: Traingular Window
Step6: Welch Window
Step7: Blackmann's Window
Step8: Flat Top Window | <ASSISTANT_TASK:>
Python Code:
from stingray.utils import create_window
from scipy.fftpack import fft, fftshift, fftfreq
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
Explanation: Window functions
Stingray now has a bunch of window functions that can be used for various applications in signal processing.
Windows available include:
1. Uniform or Rectangular Window
2. Parzen window
3. Hamming window
4. Hanning Window
5. Triangular window
6. Welch Window
7. Blackmann Window
8. Flat-top Window
All windows are available in the stingray.utils package and can be used by calling the create_window function. Below are some examples demonstrating different window functions.
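As a quick illustration of why you might want one (a sketch; the light curve below is made up), a window is usually multiplied into an evenly sampled data segment before an FFT to reduce spectral leakage:
python
counts = np.random.poisson(100, size=256)               # fake evenly-sampled light curve
win = create_window(len(counts), window_type='hanning')
windowed_spectrum = fft(counts * win)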
End of explanation
N = 100
window = create_window(N)
plt.plot(window)
plt.title("Uniform window")
plt.ylabel("Amplitude")
plt.xlabel("Sample Number (n)")
nfft = 2048
A = fft(window, nfft) / (len(window)/2.0)
freq = fftfreq(nfft)
response = 20 * np.log10(np.abs(fftshift(A/(abs(A).max()))))
plt.plot(freq, response)
plt.title("Frequency response of the Uniform window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
Explanation: create_window function in stingray.utils takes two parameters.
N : Number of data points in the window
window_type : Type of window to create. Default is uniform.
Uniform Window
End of explanation
N = 100
window = create_window(N, window_type='parzen')
plt.plot(window)
plt.title("Parzen window")
plt.ylabel("Amplitude")
plt.xlabel("Sample Number (n)")
nfft = 2048
A = fft(window,nfft ) / (len(window)/2.0)
freq = fftfreq(nfft)
response = 20 * np.log10(np.abs(fftshift(A/(abs(A).max()))))
plt.plot(freq, response)
plt.title("Frequency response of the Parzen window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
Explanation: Parzen Window
End of explanation
N = 50
window = create_window(N, window_type='hamming')
plt.plot(window)
plt.title("Hamming window")
plt.ylabel("Amplitude")
plt.xlabel("Sample Number (n)")
nfft = 2048
A = fft(window,nfft ) / (len(window)/2.0)
freq = fftfreq(nfft)
response = 20 * np.log10(np.abs(fftshift(A/(abs(A).max()))))
plt.plot(freq, response)
plt.title("Frequency response of the Hamming window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
Explanation: Hamming Window
End of explanation
N = 50
window = create_window(N, window_type='hanning')
plt.plot(window)
plt.title("Hanning window")
plt.ylabel("Amplitude")
plt.xlabel("Sample Number (n)")
nfft = 2048
A = fft(window,nfft ) / (len(window)/2.0)
freq = fftfreq(nfft)
response = 20 * np.log10(np.abs(fftshift(A/(abs(A).max()))))
plt.plot(freq, response)
plt.title("Frequency response of the Hanning window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
Explanation: Hanning Window
End of explanation
N = 50
window = create_window(N, window_type='triangular')
plt.plot(window)
plt.title("Traingualr window")
plt.ylabel("Amplitude")
plt.xlabel("Sample Number (n)")
nfft = 2048
A = fft(window,nfft ) / (len(window)/2.0)
freq = fftfreq(nfft)
response = 20 * np.log10(np.abs(fftshift(A/(abs(A).max()))))
plt.plot(freq, response)
plt.title("Frequency response of the Triangular window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
Explanation: Triangular Window
End of explanation
N = 50
window = create_window(N, window_type='welch')
plt.plot(window)
plt.title("Welch window")
plt.ylabel("Amplitude")
plt.xlabel("Sample Number (n)")
nfft = 2048
A = fft(window,nfft ) / (len(window)/2.0)
freq = fftfreq(nfft)
response = 20 * np.log10(np.abs(fftshift(A/(abs(A).max()))))
plt.plot(freq, response)
plt.title("Frequency response of the Welch window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
Explanation: Welch Window
End of explanation
N = 50
window = create_window(N, window_type='blackmann')
plt.plot(window)
plt.title("Blackmann window")
plt.ylabel("Amplitude")
plt.xlabel("Sample Number (n)")
nfft = 2048
A = fft(window,nfft ) / (len(window)/2.0)
freq = fftfreq(nfft)
response = 20 * np.log10(np.abs(fftshift(A/(abs(A).max()))))
plt.plot(freq, response)
plt.title("Frequency response of the Blackmann window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
Explanation: Blackmann's Window
End of explanation
N = 50
window = create_window(N, window_type='flat-top')
plt.plot(window)
plt.title("Flat-top window")
plt.ylabel("Amplitude")
plt.xlabel("Sample Number (n)")
nfft = 2048
A = fft(window,nfft ) / (len(window)/2.0)
freq = fftfreq(nfft)
response = 20 * np.log10(np.abs(fftshift(A/(abs(A).max()))))
plt.plot(freq, response)
plt.title("Frequency response of the Flat-top window")
plt.ylabel("Magnitude [dB]")
plt.xlabel("Normalized frequency [cycles per sample]")
Explanation: Flat Top Window
End of explanation
<END_TASK> |
15,594 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Compute power and phase lock in label of the source space
Compute time-frequency maps of power and phase lock in the source space.
The inverse method is linear based on dSPM inverse operator.
The example also shows the difference in the time-frequency maps
when they are computed with and without subtracting the evoked response
from each epoch. The former results in induced activity only while the
latter also includes evoked (stimulus-locked) activity.
Step1: Set parameters | <ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>
#
# License: BSD-3-Clause
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne import io
from mne.datasets import sample
from mne.minimum_norm import read_inverse_operator, source_induced_power
print(__doc__)
Explanation: Compute power and phase lock in label of the source space
Compute time-frequency maps of power and phase lock in the source space.
The inverse method is linear based on dSPM inverse operator.
The example also shows the difference in the time-frequency maps
when they are computed with and without subtracting the evoked response
from each epoch. The former results in induced activity only while the
latter also includes evoked (stimulus-locked) activity.
End of explanation
data_path = sample.data_path()
meg_path = data_path / 'MEG' / 'sample'
raw_fname = meg_path / 'sample_audvis_raw.fif'
fname_inv = meg_path / 'sample_audvis-meg-oct-6-meg-inv.fif'
label_name = 'Aud-rh'
fname_label = meg_path / 'labels' / f'{label_name}.label'
tmin, tmax, event_id = -0.2, 0.5, 2
# Setup for reading the raw data
raw = io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
inverse_operator = read_inverse_operator(fname_inv)
include = []
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# Picks MEG channels
picks = mne.pick_types(raw.info, meg=True, eeg=False, eog=True,
stim=False, include=include, exclude='bads')
reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6)
# Load epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=reject,
preload=True)
# Compute a source estimate per frequency band including and excluding the
# evoked response
freqs = np.arange(7, 30, 2) # define frequencies of interest
label = mne.read_label(fname_label)
n_cycles = freqs / 3. # different number of cycle per frequency
# subtract the evoked response in order to exclude evoked activity
epochs_induced = epochs.copy().subtract_evoked()
plt.close('all')
for ii, (this_epochs, title) in enumerate(zip([epochs, epochs_induced],
['evoked + induced',
'induced only'])):
# compute the source space power and the inter-trial coherence
power, itc = source_induced_power(
this_epochs, inverse_operator, freqs, label, baseline=(-0.1, 0),
baseline_mode='percent', n_cycles=n_cycles, n_jobs=1)
power = np.mean(power, axis=0) # average over sources
itc = np.mean(itc, axis=0) # average over sources
times = epochs.times
##########################################################################
# View time-frequency plots
plt.subplots_adjust(0.1, 0.08, 0.96, 0.94, 0.2, 0.43)
plt.subplot(2, 2, 2 * ii + 1)
plt.imshow(20 * power,
extent=[times[0], times[-1], freqs[0], freqs[-1]],
aspect='auto', origin='lower', vmin=0., vmax=30., cmap='RdBu_r')
plt.xlabel('Time (s)')
plt.ylabel('Frequency (Hz)')
plt.title('Power (%s)' % title)
plt.colorbar()
plt.subplot(2, 2, 2 * ii + 2)
plt.imshow(itc,
extent=[times[0], times[-1], freqs[0], freqs[-1]],
aspect='auto', origin='lower', vmin=0, vmax=0.7,
cmap='RdBu_r')
plt.xlabel('Time (s)')
plt.ylabel('Frequency (Hz)')
plt.title('ITC (%s)' % title)
plt.colorbar()
plt.show()
Explanation: Set parameters
End of explanation
<END_TASK> |
15,595 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Fit X in the gmm model for 1, 2, ... 10 components. Hint
Step1: Calculate the AIC and BIC for each of these 10 models, and find the best model.
Step2: Plot the AIC and BIC
Step3: Define your PDF by evenly distributing 1000 points in some range. Look up what the eval method of the model instance does, and evaluate on your 1000 data points x. You should be able to extract a pdf, and the individual responsibilities for each of the components.
Step4: Plot x as a histogram, and the PDF values over your x_i values. | <ASSISTANT_TASK:>
Python Code:
gmms = [GMM(i).fit(X) for i in range(1, 11)]  # 1 through 10 components
Explanation: Fit X in the gmm model for 1, 2, ... 10 components. Hint: You should create 10 instances of a GMM model, e.g. GMM(?).fit(X) would be one instance of a GMM model with ? components.
End of explanation
aics = [g.aic(X) for g in gmms]
bics = [g.bic(X) for g in gmms]
Explanation: Calculate the AIC and BIC for each of these 10 models, and find the best model.
End of explanation
plt.plot(range(1, len(aics) + 1), aics, label='AIC')
plt.plot(range(1, len(bics) + 1), bics, label='BIC')
plt.xlabel('Number of components')
plt.legend()
Explanation: Plot the AIC and BIC
End of explanation
# Data x_i
x = np.linspace(-6,6,1000)
pdf = gmms[2].score_samples(x.reshape(-1,1))
Explanation: Define your PDF by evenly distributing 1000 points in some range. Look up what the eval method of the model instance does, and evaluate on your 1000 data points x. You should be able to extract a pdf, and the individual responsibilities for each of the components.
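With the older sklearn GMM API assumed here, score_samples returns both the log-density and the per-component responsibilities, so a sketch of pulling out the individual components is:
python
logprob, responsibilities = gmms[2].score_samples(x.reshape(-1, 1))
pdf = np.exp(logprob)
pdf_individual = responsibilities * pdf[:, np.newaxis]   # one column per component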
End of explanation
plt.plot(np.linspace(-6,6,1000),np.exp(pdf[0]))
plt.hist(X,bins='auto',normed=True)
Explanation: Plot x as a histogram, and the PDF values over your x_i values.
End of explanation
<END_TASK> |
15,596 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Copyright 2019 DeepMind Technologies Limited.
Step1: Environments
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https
Step2: The code below defines a dummy RL environment for use in the examples below.
Step3: Creating a Server and Client
Step4: For details on customizing the sampler, remover, and rate limiter, see below.
Example 1
Step5: The animation illustrates the state of the server at each step in the
above code block. Although each item is being set to have the same
priority value of 1.5, items do not need to have the same priority values.
In real world scenarios, items would have differing and
dynamically-calculated priority values.
<img src="https
Step6: Example 2
Step7: Inserting Complete Episodes
Step8: Sampling Complete Episodes in TensorFlow
Step9: Example 3
Step10: Inserting Sequences of Varying Length into Multiple Priority Tables
Step11: This diagram shows the state of the server after executing the above cell.
<img src="https
Step12: Creating a Server with a MaxHeap Sampler and a MinHeap Remover
Setting max_times_sampled=1 causes each item to be removed after it is
sampled once. The end result is a priority table that essentially functions
as a max priority queue.
Step13: Creating a Server with One Queue and One Circular Buffer
Behavior of canonical data structures such as
circular buffer or a max
priority queue can
be implemented in Reverb by modifying the sampler and remover
or by using the PriorityTable queue initializer.
Step14: Example 5 | <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
Explanation: Copyright 2019 DeepMind Technologies Limited.
End of explanation
!pip install dm-tree
!pip install dm-reverb[tensorflow]
import reverb
import tensorflow as tf
Explanation: Environments
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/deepmind/reverb/blob/master/examples/demo.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/deepmind/reverb/blob/master/examples/demo.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
</table>
Introduction
This colab is a demonstration of how to use Reverb through examples.
Setup
Installs the stable build of Reverb (dm-reverb) and TensorFlow (tf) to match.
End of explanation
OBSERVATION_SPEC = tf.TensorSpec([10, 10], tf.uint8)
ACTION_SPEC = tf.TensorSpec([2], tf.float32)
def agent_step(unused_timestep) -> tf.Tensor:
return tf.cast(tf.random.uniform(ACTION_SPEC.shape) > .5,
ACTION_SPEC.dtype)
def environment_step(unused_action) -> tf.Tensor:
return tf.cast(tf.random.uniform(OBSERVATION_SPEC.shape, maxval=256),
OBSERVATION_SPEC.dtype)
Explanation: The code below defines a dummy RL environment for use in the examples below.
End of explanation
# Initialize the reverb server.
simple_server = reverb.Server(
tables=[
reverb.Table(
name='my_table',
sampler=reverb.selectors.Prioritized(priority_exponent=0.8),
remover=reverb.selectors.Fifo(),
max_size=int(1e6),
# Sets Rate Limiter to a low number for the examples.
# Read the Rate Limiters section for usage info.
rate_limiter=reverb.rate_limiters.MinSize(2),
# The signature is optional but it is good practice to set it as it
# enables data validation and easier dataset construction. Note that
# we prefix all shapes with a 3 as the trajectories we'll be writing
# consist of 3 timesteps.
signature={
'actions':
tf.TensorSpec([3, *ACTION_SPEC.shape], ACTION_SPEC.dtype),
'observations':
tf.TensorSpec([3, *OBSERVATION_SPEC.shape],
OBSERVATION_SPEC.dtype),
},
)
],
# Sets the port to None to make the server pick one automatically.
# This can be omitted as it's the default.
port=None)
# Initializes the reverb client on the same port as the server.
client = reverb.Client(f'localhost:{simple_server.port}')
Explanation: Creating a Server and Client
End of explanation
# Dynamically adds trajectories of length 3 to 'my_table' using a client writer.
with client.trajectory_writer(num_keep_alive_refs=3) as writer:
timestep = environment_step(None)
for step in range(4):
action = agent_step(timestep)
writer.append({'action': action, 'observation': timestep})
timestep = environment_step(action)
if step >= 2:
# In this example, the item consists of the 3 most recent timesteps that
# were added to the writer and has a priority of 1.5.
writer.create_item(
table='my_table',
priority=1.5,
trajectory={
'actions': writer.history['action'][-3:],
'observations': writer.history['observation'][-3:],
}
)
Explanation: For details on customizing the sampler, remover, and rate limiter, see below.
Example 1: Overlapping Trajectories
Inserting Overlapping Trajectories
End of explanation
# Dataset samples sequences of length 3 and streams the timesteps one by one.
# This allows streaming large sequences that do not necessarily fit in memory.
dataset = reverb.TrajectoryDataset.from_table_signature(
server_address=f'localhost:{simple_server.port}',
table='my_table',
max_in_flight_samples_per_worker=10)
# Batches 2 sequences together.
# Shapes of items is now [2, 3, 10, 10].
batched_dataset = dataset.batch(2)
for sample in batched_dataset.take(1):
# Results in the following format.
print(sample.info.key) # ([2], uint64)
print(sample.info.probability) # ([2], float64)
print(sample.data['observations']) # ([2, 3, 10, 10], uint8)
print(sample.data['actions']) # ([2, 3, 2], float32)
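# The sampled keys can also be used to update priorities later on (for example with
# freshly computed TD errors). A sketch -- the new priority values here are made up:
client.mutate_priorities(
    table='my_table',
    updates={int(key): 1.0 for key in sample.info.key})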
Explanation: The animation illustrates the state of the server at each step in the
above code block. Although each item is being set to have the same
priority value of 1.5, items do not need to have the same priority values.
In real world scenarios, items would have differing and
dynamically-calculated priority values.
<img src="https://raw.githubusercontent.com/deepmind/reverb/master/docs/animations/diagram1.svg" />
Sampling Overlapping Trajectories in TensorFlow
End of explanation
EPISODE_LENGTH = 150
complete_episode_server = reverb.Server(tables=[
reverb.Table(
name='my_table',
sampler=reverb.selectors.Prioritized(priority_exponent=0.8),
remover=reverb.selectors.Fifo(),
max_size=int(1e6),
# Sets Rate Limiter to a low number for the examples.
# Read the Rate Limiters section for usage info.
rate_limiter=reverb.rate_limiters.MinSize(2),
# The signature is optional but it is good practice to set it as it
# enables data validation and easier dataset construction. Note that
# the number of observations is larger than the number of actions.
# The extra observation is the terminal state where no action is
# taken.
signature={
'actions':
tf.TensorSpec([EPISODE_LENGTH, *ACTION_SPEC.shape],
ACTION_SPEC.dtype),
'observations':
tf.TensorSpec([EPISODE_LENGTH + 1, *OBSERVATION_SPEC.shape],
OBSERVATION_SPEC.dtype),
},
),
])
# Initializes the reverb client on the same port.
client = reverb.Client(f'localhost:{complete_episode_server.port}')
Explanation: Example 2: Complete Episodes
Create a new server for this example to keep the elements of the priority table consistent.
End of explanation
# Writes whole episodes of varying length to a Reverb server.
NUM_EPISODES = 10
# We know that episodes are at most 150 steps so we set the writer buffer size
# to 151 (to capture the final observation).
with client.trajectory_writer(num_keep_alive_refs=151) as writer:
for _ in range(NUM_EPISODES):
timestep = environment_step(None)
for _ in range(EPISODE_LENGTH):
action = agent_step(timestep)
writer.append({'action': action, 'observation': timestep})
timestep = environment_step(action)
# The astute reader will recognize that the final timestep has not been
# appended to the writer. We'll go ahead and add it WITHOUT an action. The
# writer will automatically fill in the gap with `None` for the action
# column.
writer.append({'observation': timestep})
# Now that the entire episode has been added to the writer buffer we can an
# item with a trajectory that spans the entire episode. Note that the final
# action must not be included as it is None and the trajectory would be
# rejected if we tried to include it.
writer.create_item(
table='my_table',
priority=1.5,
trajectory={
'actions': writer.history['action'][:-1],
'observations': writer.history['observation'][:],
})
# This call blocks until all the items (in this case only one) have been
# sent to the server, inserted into respective tables and confirmations
# received by the writer.
writer.end_episode(timeout_ms=1000)
# Ending the episode also clears the history property which is why we are
# able to use `[:]` in when defining the trajectory above.
assert len(writer.history['action']) == 0
assert len(writer.history['observation']) == 0
Explanation: Inserting Complete Episodes
End of explanation
# Each sample is an entire episode.
# Adjusts the expected shapes to account for the whole episode length.
dataset = reverb.TrajectoryDataset.from_table_signature(
server_address=f'localhost:{complete_episode_server.port}',
table='my_table',
max_in_flight_samples_per_worker=10,
rate_limiter_timeout_ms=10)
# Batches 128 episodes together.
# Each item is an episode of the format (observations, actions) as above.
# Shape of items are now ([128, 151, 10, 10], [128, 150, 2]).
dataset = dataset.batch(128)
# Sample has type reverb.ReplaySample.
for sample in dataset.take(1):
# Results in the following format.
print(sample.info.key) # ([128], uint64)
print(sample.info.probability) # ([128], float64)
print(sample.data['observations']) # ([128, 151, 10, 10], uint8)
print(sample.data['actions']) # ([128, 150, 2], float32)
Explanation: Sampling Complete Episodes in TensorFlow
End of explanation
multitable_server = reverb.Server(
tables=[
reverb.Table(
name='my_table_a',
sampler=reverb.selectors.Prioritized(priority_exponent=0.8),
remover=reverb.selectors.Fifo(),
max_size=int(1e6),
# Sets Rate Limiter to a low number for the examples.
# Read the Rate Limiters section for usage info.
rate_limiter=reverb.rate_limiters.MinSize(1)),
reverb.Table(
name='my_table_b',
sampler=reverb.selectors.Prioritized(priority_exponent=0.8),
remover=reverb.selectors.Fifo(),
max_size=int(1e6),
# Sets Rate Limiter to a low number for the examples.
# Read the Rate Limiters section for usage info.
rate_limiter=reverb.rate_limiters.MinSize(1)),
])
client = reverb.Client('localhost:{}'.format(multitable_server.port))
Explanation: Example 3: Multiple Priority Tables
Create a server that maintains multiple priority tables.
End of explanation
with client.trajectory_writer(num_keep_alive_refs=3) as writer:
timestep = environment_step(None)
for step in range(4):
writer.append({'timestep': timestep})
action = agent_step(timestep)
timestep = environment_step(action)
if step >= 1:
writer.create_item(
table='my_table_b',
priority=4-step,
trajectory=writer.history['timestep'][-2:])
if step >= 2:
writer.create_item(
table='my_table_a',
priority=4-step,
trajectory=writer.history['timestep'][-3:])
Explanation: Inserting Sequences of Varying Length into Multiple Priority Tables
End of explanation
reverb.Server(tables=[
reverb.Table(
name='my_table',
sampler=reverb.selectors.Prioritized(priority_exponent=0.8),
remover=reverb.selectors.Fifo(),
max_size=int(1e6),
rate_limiter=reverb.rate_limiters.MinSize(100)),
])
Explanation: This diagram shows the state of the server after executing the above cell.
<img src="https://raw.githubusercontent.com/deepmind/reverb/master/docs/animations/diagram2.svg" />
Example 4: Samplers and Removers
Creating a Server with a Prioritized Sampler and a FIFO Remover
End of explanation
max_size = 1000
reverb.Server(tables=[
reverb.Table(
name='my_priority_queue',
sampler=reverb.selectors.MaxHeap(),
remover=reverb.selectors.MinHeap(),
max_size=max_size,
rate_limiter=reverb.rate_limiters.MinSize(int(0.95 * max_size)),
max_times_sampled=1,
)
])
Explanation: Creating a Server with a MaxHeap Sampler and a MinHeap Remover
Setting max_times_sampled=1 causes each item to be removed after it is
sampled once. The end result is a priority table that essentially functions
as a max priority queue.
End of explanation
reverb.Server(
tables=[
reverb.Table.queue(name='my_queue', max_size=10000),
reverb.Table(
name='my_circular_buffer',
sampler=reverb.selectors.Fifo(),
remover=reverb.selectors.Fifo(),
max_size=10000,
max_times_sampled=1,
rate_limiter=reverb.rate_limiters.MinSize(1)),
])
Explanation: Creating a Server with One Queue and One Circular Buffer
Behavior of canonical data structures such as
circular buffer or a max
priority queue can
be implemented in Reverb by modifying the sampler and remover
or by using the PriorityTable queue initializer.
End of explanation
reverb.Server(
tables=[
reverb.Table(
name='my_table',
sampler=reverb.selectors.Prioritized(priority_exponent=0.8),
remover=reverb.selectors.Fifo(),
max_size=int(1e6),
rate_limiter=reverb.rate_limiters.SampleToInsertRatio(
samples_per_insert=3.0, min_size_to_sample=3,
error_buffer=3.0)),
])
Explanation: Example 5: Rate Limiters
Creating a Server with a SampleToInsertRatio Rate Limiter
End of explanation
<END_TASK> |
15,597 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Example of the mixing_sp() function
Author
Step1: Problem setting
We will setup a simple problem in which we mix two Gaussian peaks in different ratios. The code below is going to create those peaks, and to plot them for reference.
Step2: We now create 4 intermediate $obs$ signals, with $F1$ = 20%,40%,60% and 80% of ref1.
Step3: Now we can use rp.mixing_sp() to retrieve $F1$.
We suppose here that we have some knowledge of $ref1$ and $ref2$. | <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import rampy as rp
Explanation: Example of the mixing_sp() function
Author: Charles Le Losq
This function allows one to mix two endmembers spectra, $ref1$ and $ref2$, to an observed one $obs$:
$obs = ref1 * F1 + ref2 * (1-F1)$ .
The calculation is done with performing least absolute regression, which presents advantages compared to least squares to fit problems with outliers as well as non-Gaussian character (see wikipedia for instance).
End of explanation
x = np.arange(0,100,1.0) # a dummy x axis
ref1 = 50.0*np.exp(-1/2*((x-40)/20)**2) + np.random.randn(len(x)) # a gaussian with added noise
ref2 = 70.0*np.exp(-1/2*((x-60)/15)**2) + np.random.randn(len(x)) # a gaussian with added noise
plt.figure()
plt.plot(x,ref1,label="ref1")
plt.plot(x,ref2,label="ref2")
plt.xlabel("X")
plt.ylabel("Y")
plt.legend()
Explanation: Problem setting
We will setup a simple problem in which we mix two Gaussian peaks in different ratios. The code below is going to create those peaks, and to plot them for reference.
End of explanation
F1_true = np.array([0.80,0.60,0.40,0.20])
obs = np.dot(ref1.reshape(-1,1),F1_true.reshape(1,-1)) + np.dot(ref2.reshape(-1,1),(1-F1_true.reshape(1,-1)))
plt.figure()
plt.plot(x,obs)
plt.xlabel("X")
plt.ylabel("Y")
plt.title("Observed signals")
Explanation: We now create 4 intermediate $obs$ signals, with $F1$ = 20%,40%,60% and 80% of ref1.
End of explanation
F1_meas = rp.mixing_sp(obs,ref1,ref2)
plt.figure()
plt.plot(F1_true,F1_meas,'ro',label="Measurements")
plt.plot([0,1],[0,1],'k-',label="1:1 line")
plt.xlabel("True $F1$ value")
plt.ylabel("Determined $F1$ value")
plt.legend()
Explanation: Now we can use rp.mixing_sp() to retrieve $F1$.
We suppose here that we have some knowledge of $ref1$ and $ref2$.
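A quick numeric sanity check of the recovered fractions (a sketch):
python
print("true F1:", F1_true)
print("fitted F1:", np.round(F1_meas, 3))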
End of explanation
<END_TASK> |
15,598 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
0. Preparing Data
Before digging into the parser notebook, the version of the CWE xml file within this notebook is v3.0, which can be downloaded from this link. Here we loaded CWE v3.0 xml file. Therefore, if there is any new version of XML raw file, please make change for the following code. If the order of weakness table is changed, please change the code for function <b>extract_target_field_elements</b> in section 2.1.
Step1: 1. Introduction
The purpose of this notebook is to build the fields parser and extract the contents from various fields in the CWE 3.0 XML file so that the field content can be directly analyzed and stored into database. Guided by CWE Introduction notebook, this notebook will focus on the detail structure under Weakness table and how parser functions work within the weakness table.
To preserve the semantic information and not lose details during the transformation from the representation on website and XML file to the output file, we build a 3-step pipeline to modularize the parser functions for various fields in different semantic format. The 3-step pipeline contains the following steps
Step2: 2.2 XML Node Field Parser
Once the node is provided by the former function, its XML structure is consistent for that field, and cwe version, but it is inconsistent for different versions. For example, a table represented in XML is different than a paragraph in XML. However, even for the same expected structure such as a table, the XML may have different tags used within it.
The associated parser function then is tested to cover all possible cases of this field in the XML, and also interpreted against its .html format to understand its purpose. We again refer to the introductory notebook Sections 4 and 5 which convey respectively the potential usage of the field for PERCEIVE and the overall format of the structure when compared to others.
The purpose is then documented as part of the functions documentation (sub-sections of this Section), conveying the rationale behind what was deemed potentially useful in the content to be kept, as well as how certain tags were mapped to a data structure while being removed.
The parser function outputs one of the known data structures (i.e. memory representation) that is shared among the different fields of what is deemed relevant. For example, while 5 field nodes may have their relevant information in different ways, they may all be at the end of the day tables, named lists, or single blocks of texts. Sharing a common representation in this stage decouples the 3rd step of the pipeline from understanding the many ways the same information is stored in the XML.
Because different CWE versions may organize a .html table, bullet list etc in different ways even for the same field, this organization also decouples the previous and following section functions from being modified on every new version if necessary.
The following fields have parser functions as of today
Step3: Through function <b> parse_potential_mitigations</b>, the above Potential_Mitigations node for cwe_1022 will be parsed into the following data format in memory.
<b> How content represent in memory (cwe-1022)</b>
2.2.2 Parse Common_Consequences
Common_Consequences field has a nested structure under the field element.
To understand the nesting structure, here we use the Common_Consequences field in cwe-103 as example. Under Common_Consequences element, there are two field entries named by 'Consequence', which represent two different consequences associated with the weakness. Under each consequence element, three entry elements constitute one weakness consequence, including scope, impact, and note, which have the contents that our parser is intended to to extract.
To preserve the table format of Common_Consequences, we use the dictionary to pair the CWE id and the content we parse from Common_Consequences field. Since there are multiple consequences for one weakness, a list of dictionaries will be used to store the content, where the number of dictionaries is equal to the number of consequences. Since one consequence may have multiple impacts and scopes but only one note, we use tuple to store the content of impact and scope, while directly store the content of note. In summary, the data structure in memory can be represented as the following format
Step4: Through function <b> parse_common_consequences</b>, the above Common_Consequences node for cwe_103 will be parsed into the following data format in memory.
<b> How content represent in memory (cwe-103)</b>
3. Export Data Structure
At the point this notebook is being created, it is still an open end question on how will tables, bullet lists and other potential structures in CWE will be used for topic modeling. For example, should rows in a table be concatenated into a paragraph and used as a document? What if the same word is repeated as to indicate a lifecycle?
In order to keep this notebook future-proof, this section abstracts how we will handle each field representation (e.g. table, bullet list, etc) from the memory data structure. It also keeps it flexible for multi-purpose
Step5: 4. Main Execution
After developing the 3 steps parsing pipeline, this section will combine these 3 steps and produce the output file for different fields. As introduced in Section 2, although the parsing procedure keeps same for all fields, each field will have own parsing function, while the same format of fields may share a same exporting function. As a result, the main execution varies for each field.
4.1 Main execution for Potential_Mitigations
The main execution will combine the above 3 steps parsing pipeline for Potential_Mitigations. After developing function <b>export_data</b>, the following code should produce the output file that contains the parsed content of Potential_Mitigations for all CWE_id.
Step6: 4.2 Main execution for Common_Consequences
The main execution will combine the above 3 steps parsing pipeline for Common_Consequences. After developing function <b>export_data</b>, the following code should produce the output file that contains the parsed content of Potential_Mitigations for all CWE_id. | <ASSISTANT_TASK:>
Python Code:
import lxml.etree

cwe_xml_file='cwec_v3.0.xml'
Explanation: 0. Preparing Data
Before digging into the parser notebook, the version of the CWE xml file within this notebook is v3.0, which can be downloaded from this link. Here we loaded CWE v3.0 xml file. Therefore, if there is any new version of XML raw file, please make change for the following code. If the order of weakness table is changed, please change the code for function <b>extract_target_field_elements</b> in section 2.1.
End of explanation
def extract_target_field_elements(target_field, cwe_xml_file):
'''
This function responsibility is to abstract how nodes are found given their field name and should be used together with the histogram.
Args:
- target_field: the arg defines which nodes are found given by the field that we are aiming to target
- cwe_xml_file: the CWE xml file that this function will work and extract the target field nodes
Outcome:
- a list of nodes that have the pre-defined target field as the element tag
'''
# read xml file and store as the root element in memory
tree = lxml.etree.parse(cwe_xml_file)
root = tree.getroot()
# Remove namespaces from XML.
for elem in root.getiterator():
if not hasattr(elem.tag, 'find'): continue # (1)
i = elem.tag.find('}') # Counts the number of characters up to the '}' at the end of the XML namespace within the XML tag
if i >= 0:
elem.tag = elem.tag[i+1:] # Starts the tag a character after the '}'
# define the path of target field. Here we select all element nodes that the tag is the target field
target_field_path='Weakness/./'+target_field
# extract weakness table in the XML // if the order of weakness table is changed, please make change for the following code
weakness_table = root[0]
# generate all elements with the target field name
target_field_nodes=weakness_table.findall(target_field_path)
return target_field_nodes
Explanation: 1. Introduction
The purpose of this notebook is to build the fields parser and extract the contents from various fields in the CWE 3.0 XML file so that the field content can be directly analyzed and stored into database. Guided by CWE Introduction notebook, this notebook will focus on the detail structure under Weakness table and how parser functions work within the weakness table.
To preserve the semantic information and not lose details during the transformation from the representation on website and XML file to the output file, we build a 3-step pipeline to modularize the parser functions for various fields in different semantic format. The 3-step pipeline contains the following steps: searching XML Field node location, XML field node parser, and exporting the data structure to the output file based on the semantic format in Section 4 of CWE Introduction Notebook. More details will be explained in Section 2.
2. Parser Architecture
The overall parser architecture is constituted by the following three procedures: 1) extracting the nodes with the target field tag, 2) parsing the target field node to the representation in memory, and 3) exporting the data structure to the output file.
Section 2.1 explains the way to search XML field nodes with the target field tag. No matter which field is being parsed, the first step is to use XPath to locate all XML field nodes with the field tag we intend to parse. The function in section 2.1 has been tested for all fields and thus can locate XML nodes with any given field naming. However, the function is inconsistent across versions, since the order of the weakness table might be different; this happened between v2.9 and v3.0.
Section 2.2 explains the way to parse and extract the content of the target field into the representation in memory. Since different fields have various nested structures in xml raw file and the content we will parse varies field by field, the worst situation is that there will be one parser function for each different field. However, from Section 4 in CWE Introduction Notebook, certain fields may share a same format on website, such as table or bullet list, the ideal situation is that we would have only 4 or 5 functions to represent the data in memory.
Section 3 addresses the way to export the data representation from Section 2.2. A set of functions in Section 3 should be equal to the number of data structures in Section 2.2.
2.1 XML Field Node Location
This function searches the tree for the specified field node provided (e.g. Potential_Mitigations) as input and returns the associated XML node of the field. The string containing the field name can be found in the Introductory Notebook's histogram on Section 4 . As it can be observed in that histogram, only certain fields are worthwhile parsing due to their occurrence frequency.
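For example, a call along these lines (a sketch) grabs every Potential_Mitigations node in the file:
python
potential_mitigations_nodes = extract_target_field_elements('Potential_Mitigations', cwe_xml_file)
print(len(potential_mitigations_nodes))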
End of explanation
def parse_potential_mitigations(potential_mitigations_node):
'''
The parser function concern is abstracting how the Potential_Mitigations field is stored in XML,
and provide it in a common and simpler data structure
Args:
- potential_mitigations_node: the node that has Potential_Mitigations tag, such as the above image
Outcomes:
- A dictionary that pairs cwe_id as key and the mitigation list as value.
In the dictionary, the mitigation list will be a list of dictionaries that each dictionary pairs tag and the corresponding content for each mitigation.
More details can be found in the following example for cwe-1022
'''
# extract cwe_id from the attribute of potential_mitigations element's parent node
cwe_id=potential_mitigations_node.getparent().attrib.get('ID')
cwe_id='CWE_'+cwe_id
    # the mitigation list, in which each element represents an individual mitigation
mitigation_list=[]
target_field=potential_mitigations_node.tag
# for each mitigation node under the potential_mitigations node
for mitigation in list(potential_mitigations_node):
# the dictionary that contain the information for each mitigation element
mitigation_dict=dict()
# traverse all mitigation_element nodes under each mitigation node
for mitigation_element in list(mitigation):
# generate tag and content of each mitigation_element
mitigation_element_tag=mitigation_element.tag.lower()
mitigation_element_content=mitigation_element.text
            ## in case there are nested elements under mitigation_element that store the content under the same tag
# check whether there is an element under mitigation_element
if mitigation_element_content.isspace():
entry_element_content=''
# iterate all child elements below mitigation_element,
for mitigation_element_child in mitigation_element.iter():
# extract the content
mitigation_element_child_content=mitigation_element_child.text
                    # if there is no content under the element, or if this is a nested element that contains one more element, move on to the next
if mitigation_element_child_content.isspace():
continue
# if not, merge the content
else:
mitigation_element_content+=mitigation_element_child_content
# store the tag and content for each mitigation element to the dictionary
mitigation_dict[mitigation_element_tag]=mitigation_element_content.strip()
# add each mitigation element dictionary to mitigation_list
mitigation_list.append(mitigation_dict)
# pair the cwe_id with the mitigation contents
potential_mitigations_dict=dict()
potential_mitigations_dict[cwe_id]=mitigation_list
return potential_mitigations_dict
Explanation: 2.2 XML Node Field Parser
Once the node is provided by the former function, its XML structure is consistent for that field and CWE version, but it may differ across versions. For example, a table represented in XML is different from a paragraph in XML. Moreover, even for the same expected structure, such as a table, the XML may use different tags within it.
The associated parser function is therefore tested to cover all possible cases of this field in the XML, and the field is also interpreted against its .html rendering to understand its purpose. We again refer to Sections 4 and 5 of the introductory notebook, which convey, respectively, the potential usage of the field for PERCEIVE and the overall format of the structure when compared to others.
The purpose is then documented as part of each function's documentation (the sub-sections of this Section), conveying the rationale behind which content was deemed potentially useful to keep, as well as how certain tags were mapped to a data structure while being removed.
The parser function outputs one of the known data structures (i.e. memory representations) shared among the different fields for what is deemed relevant. For example, while 5 field nodes may store their relevant information in different ways, they may all be, at the end of the day, tables, named lists, or single blocks of text. Sharing a common representation at this stage decouples the 3rd step of the pipeline from having to understand the many ways the same information is stored in the XML.
Because different CWE versions may organize an .html table, bullet list, etc. in different ways even for the same field, this organization also decouples the functions of the previous and following sections from having to be modified on every new version.
The following fields have parser functions as of today:
|Field Name| Function Name|
|:---:|:----:|
|Potential_Mitigations|parse_potential_mitigations|
|Common_Consequences|parse_common_consequences|
2.2.1 Parse Potential_Mitigations
The Potential_Mitigations field has a nested structure under the field element. To understand the nesting structure, we use the following image for cwe-1022 as an example. Under the Potential_Mitigations element, there are two mitigation entries named 'Mitigation', which represent ways to mitigate the weakness in the development cycle. Under each mitigation node, there are multiple sub-entries that constitute one mitigation (phase and description in the cwe-1022 example), which hold the content our parser is intended to extract.
To preserve the named-list format of Potential_Mitigations, we use a dictionary to pair the CWE id with the content parsed from the Potential_Mitigations field. Since there can be multiple methods to mitigate the weakness, a list of dictionaries is used to store the content, where the number of dictionaries equals the number of mitigation methods. Each dictionary then pairs a tag with its corresponding content. In summary, the data structure in memory can be represented in the following format: {CWE_id: [{tag1: content1, tag2: content2, ...}, {tag1: content3, tag2: content4, ...}, ...]}. More details can be found in the example of cwe-1022.
There are two special cases when parsing the Potential_Mitigations field:
1) Various sub-entries:
Some Mitigation nodes may contain additional sub-entries beyond phase and description, such as strategy, effectiveness, and effectiveness_notes. These entries can be found in cwe-1004 and cwe-106. In this case, the parser stores the tag and content in the same way as phase and description.
2) HTML tags under the Description node:
In some cases, the content under the Description node is stored in multiple html elements, such as p, li, div, and ul. These html tags are used to separate the sentences of a paragraph. For example, there are two html <p> elements under the description of the second mitigation node in the following images. By comparing this to how the content is presented on the website, we conclude that the <p> tag is not worth keeping. Therefore, in this case, the parser concatenates the description content under the same mitigation node and removes the <p> tags.
Since the number of elements varies depending on the CWE id, here is the cardinality of these tags:
|Tag|Cardinality|
|:---:|:---:|
|Phase|1|
|Description|1|
|Strategy|0 or 1|
|Effectiveness|0 or 1|
|Effectiveness_Notes|0 or 1|
<b>How the content is represented on the website (cwe-1022)</b>
<b>How the content is represented in the XML file (cwe-1022)</b>
End of explanation
def parse_common_consequences(common_consequences_node):
'''
The parser function concern is abstracting how the Common_Consequences field is stored in XML,
and provide it in a common and simpler data structure
Args:
- common_consequences_node: the node that has Common_Consequences tag, such as the above image
Outcomes:
- A dictionary that pairs cwe_id as key and the consequence list as value.
        In the dictionary, the consequence list is a list of dictionaries, each of which pairs a tag with the corresponding content for one consequence.
More details can be found in the following example for cwe-103.
'''
# extract cwe_id from the attribute of common_consequences element's parent node
cwe_id=common_consequences_node.getparent().attrib.get('ID')
cwe_id='CWE_'+cwe_id
    # the consequence list, in which each element represents an individual consequence entry
consequence_list=[]
target_field=common_consequences_node.tag
# for each consequence node under the common_consequence node
for consequence in list(common_consequences_node):
# the dictionary that contain the information for each consequence element
consequence_dict=dict()
# traverse all consequence_element nodes under each consequence node
for consequence_element in list(consequence):
# generate tag and content of each consequence_element
consequence_element_tag=consequence_element.tag.lower()
consequence_element_content=consequence_element.text.strip()
# parse the note content directly as the value
if consequence_element_tag=='note':
consequence_dict[consequence_element_tag]=consequence_element_content
# for scope and impact, parse the content for scope and impact as tuple
else:
# if the tag is already in the dictionary, add the content to the existing tuple
if consequence_element_tag in consequence_dict:
consequence_dict[consequence_element_tag]+=(consequence_element_content,)
# if not, create a tuple to contain the content
else:
consequence_dict[consequence_element_tag]=(consequence_element_content,)
        # add each consequence element dictionary to consequence_list
consequence_list.append(consequence_dict)
# pair the cwe_id with the consequence contents
common_consequences_dict=dict()
common_consequences_dict[cwe_id]=consequence_list
return common_consequences_dict
Explanation: Through function <b> parse_potential_mitigations</b>, the above Potential_Mitigations node for cwe_1022 will be parsed into the following data format in memory.
<b>How the content is represented in memory (cwe-1022)</b>
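As a hedged illustration of the general shape of this structure (the strings below are placeholders, not the actual CWE-1022 text):
```python
# Hypothetical illustration only: placeholder values, not the real CWE-1022 content.
parsed_cwe_1022 = {
    'CWE_1022': [
        {'phase': '<phase of the first mitigation>',
         'description': '<description of the first mitigation>'},
        {'phase': '<phase of the second mitigation>',
         'description': '<description of the second mitigation>'},
    ]
}
```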
2.2.2 Parse Common_Consequences
The Common_Consequences field has a nested structure under the field element.
To understand the nesting structure, we use the Common_Consequences field of cwe-103 as an example. Under the Common_Consequences element, there are two field entries named 'Consequence', which represent two different consequences associated with the weakness. Under each consequence element, three entry elements constitute one weakness consequence: scope, impact, and note. These hold the content our parser is intended to extract.
To preserve the table format of Common_Consequences, we use a dictionary to pair the CWE id with the content parsed from the Common_Consequences field. Since there can be multiple consequences for one weakness, a list of dictionaries is used to store the content, where the number of dictionaries equals the number of consequences. Since one consequence may have multiple impacts and scopes but only one note, tuples are used to store the impact and scope content, while the note content is stored directly. In summary, the data structure in memory can be represented in the following format: {CWE_id: [{'Scope': scope tuple, 'Impact': impact tuple, 'Note': Text}, {'Scope': scope tuple, 'Impact': impact tuple, 'Note': Text}, ...]}. More details can be found in the example of cwe-103.
Since the number of elements varies depending on the field, here is the cardinality of these tags:
|Tag|Cardinality|
|:---:|:---:|
|Scope|1 or more|
|Impact|1 or more|
|Note|0 or 1|
<b>How the content is represented on the website (cwe-103)</b>
<b>How the content is represented in the XML file (cwe-103)</b>
End of explanation
def export_data(parsed_field_dict):
    '''Placeholder: exporting the parsed in-memory representation will be implemented in the future.'''
    pass
Explanation: Through function <b> parse_common_consequences</b>, the above Common_Consequences node for cwe_103 will be parsed into the following data format in memory.
<b>How the content is represented in memory (cwe-103)</b>
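Again as a hedged illustration of the general shape of this structure (placeholder strings, not the actual CWE-103 text):
```python
# Hypothetical illustration only: placeholder values, not the real CWE-103 content.
parsed_cwe_103 = {
    'CWE_103': [
        {'scope': ('<scope 1>', '<scope 2>'),
         'impact': ('<impact 1>',),
         'note': '<note for the first consequence>'},
        {'scope': ('<scope>',),
         'impact': ('<impact>',),
         'note': '<note for the second consequence>'},
    ]
}
```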
3. Export Data Structure
At the point this notebook is being created, it is still an open question how tables, bullet lists, and other structures in CWE will be used for topic modeling. For example, should the rows of a table be concatenated into a paragraph and used as a document? What if the same word is repeated so as to indicate a lifecycle?
In order to keep this notebook future-proof, this section abstracts how each field representation (e.g. table, bullet list, etc.) is handled from the in-memory data structure. It also keeps the pipeline flexible for multiple purposes: a table may be parsed for topic-modeling content, but also for extracting graph relationships (e.g. the Related Attack Pattern and Related Weaknesses fields contain hyperlinks to other CWE entries, which could be reshaped as a graph).
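Purely as an illustration of one possible direction (an assumption, not a settled design), the sketch below concatenates each entry of the shared {cwe_id: [{tag: content, ...}]} representation into a single text document per CWE id; the output directory, file naming, and joining strategy are all placeholders:
```python
import os

def export_data_sketch(parsed_field_dict, output_dir='output'):
    # Assumption: parsed_field_dict follows the shared representation
    # {cwe_id: [{tag: content, ...}, ...]} produced by the parser functions.
    os.makedirs(output_dir, exist_ok=True)
    for cwe_id, entries in parsed_field_dict.items():
        lines = []
        for entry in entries:
            for tag, content in entry.items():
                # tuples (e.g. scope/impact) are joined before writing
                if isinstance(content, tuple):
                    content = ', '.join(content)
                lines.append('{}: {}'.format(tag, content))
        # one plain-text document per CWE id, e.g. output/CWE_1022.txt
        with open(os.path.join(output_dir, cwe_id + '.txt'), 'w') as handle:
            handle.write('\n'.join(lines))
```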
End of explanation
if __name__ == "__main__":
    # extract the nodes whose tag is Potential_Mitigations from cwe_xml_file
potential_mitigations_nodes=extract_target_field_elements('Potential_Mitigations',cwe_xml_file)
# read each Potential_Mitigation node
for potential_mitigations_node in potential_mitigations_nodes:
# parse the content for each potential_mitigation node
potential_mitigations_info=parse_potential_mitigations(potential_mitigations_node)
# export the parsed content TO-DO
export_data(potential_mitigations_info)
Explanation: 4. Main Execution
After developing the 3-step parsing pipeline, this section combines the steps and produces the output file for the different fields. As introduced in Section 2, although the parsing procedure stays the same for all fields, each field has its own parsing function, while fields of the same format may share an exporting function. As a result, the main execution varies for each field.
4.1 Main execution for Potential_Mitigations
The main execution combines the above 3-step parsing pipeline for Potential_Mitigations. Once the function <b>export_data</b> is developed, the following code should produce the output file that contains the parsed content of Potential_Mitigations for all CWE ids.
End of explanation
if __name__ == "__main__":
# extract the nodes, whose tag is Common_Consequences, from cwe_xml_file
common_consequences_nodes=extract_target_field_elements('Common_Consequences',cwe_xml_file)
# read each Common_Consequences node
for common_consequences_node in common_consequences_nodes:
# parse the content for each common_consequence node
common_consequence_info=parse_common_consequences(common_consequences_node)
# export the parsed content TO-DO
export_data(common_consequence_info)
Explanation: 4.2 Main execution for Common_Consequences
The main execution combines the above 3-step parsing pipeline for Common_Consequences. Once the function <b>export_data</b> is developed, the following code should produce the output file that contains the parsed content of Common_Consequences for all CWE ids.
End of explanation
<END_TASK> |
15,599 | <SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Make data
Step1: Display the same with Python (matplotlib)
Step2: Apply moveout with incorrect velocity
Step3: Slope estimation
Step4: Non-physical flattening by predictive painting
Step5: Velocity estimation by time warping
Predictive painting produces $t_0(t,x)$. Time warping converts it into $t(t_0,x)$.
Step6: We now want to fit
$t^2(t_0,x)-t_0^2 \approx \Delta S\,x^2$,
where $\Delta S = \frac{1}{v^2} - \frac{1}{v_0^2}$.
The least-squares fit is
$\Delta S = \displaystyle \frac{\int x^2\left[t^2(t_0,x)-t_0^2\right]\,dx}{\int x^4\,dx}$.
The velocity estimate is
$v = \displaystyle \frac{v_0}{\sqrt{\Delta S\,v_0^2 + 1}}$.
Step7: Last step - physical flattening | <ASSISTANT_TASK:>
Python Code:
%%file data.scons
Flow('trace',None,'spike n1=2001 d1=0.001 k1=1001 | ricker1 frequency=30')
Flow('gather','trace','spray axis=2 n=49 d=25 o=0 label=Offset unit=m | nmostretch inv=y half=n v0=2000')
Result('gather','window f1=888 n1=392 | grey title=Gather')
from m8r import view
view('gather')
Explanation: Make data
End of explanation
import m8r
gather = m8r.File('gather.rsf')
%matplotlib inline
import matplotlib.pylab as plt
import numpy as np
plt.imshow(np.transpose(gather[:,888:1280]),aspect='auto')
Explanation: Display the same with Python (matplotlib)
End of explanation
%%file nmo.scons
Flow('nmo','gather','nmostretch half=n v0=1800')
Result('nmo','window f1=888 n1=200 | grey title=NMO')
view('nmo')
Explanation: Apply moveout with incorrect velocity
End of explanation
%%file slope.scons
Flow('slope','nmo','dip rect1=100 rect2=5 order=2')
Result('slope','grey color=linearlfb mean=y scalebar=y title=Slope')
view('slope')
Explanation: Slope estimation
End of explanation
%%file flat.scons
Flow('paint','slope','pwpaint order=2')
Result('paint','window f1=888 n1=200 | contour title=Painting')
Flow('flat','nmo paint','iwarp warp=${SOURCES[1]}')
Result('flat','window f1=888 n1=200 | grey title=Flattening')
view('paint')
view('flat')
Explanation: Non-physical flattening by predictive painting
End of explanation
%%file twarp.scons
Flow('twarp','paint','math output=x1 | iwarp warp=$SOURCE')
Result('twarp','window j1=20 | transp | graph yreverse=y min2=0.888 max2=1.088 pad=n title="Time Warping" ')
view('twarp')
Explanation: Velocity estimation by time warping
Predictive painting produces $t_0(t,x)$. Time warping converts it into $t(t_0,x)$.
End of explanation
%%file lsfit.scons
Flow('num','twarp','math output="(input*input-x1*x1)*x2^2" | stack norm=n')
Flow('den','twarp','math output="x2^4" | stack norm=n')
Flow('vel','num den','div ${SOURCES[1]} | math output="1800/sqrt(1800*1800*input+1)" ')
Result('vel',
'''
window f1=888 n1=200 |
graph yreverse=y transp=y title="Estimated Velocity" label2=Velocity unit2=m/s grid2=y pad=n min2=1950 max2=2050
''')
view('vel')
Explanation: We now want to fit
$t^2(t_0,x)-t_0^2 \approx \Delta S\,x^2$,
where $\Delta S = \frac{1}{v^2} - \frac{1}{v_0^2}$.
The least-squares fit is
$\Delta S = \displaystyle \frac{\int x^2\left[t^2(t_0,x)-t_0^2\right]\,dx}{\int x^4\,dx}$.
The velocity estimate is
$v = \displaystyle \frac{v_0}{\sqrt{\Delta S\,v_0^2 + 1}}$.
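As a quick sanity check of this fit (an illustrative NumPy sketch; the velocities match the ones used above and the offsets match the 49 traces spaced 25 m apart):
```python
import numpy as np

v_true, v0, t0 = 2000.0, 1800.0, 1.0   # true velocity, trial NMO velocity, zero-offset time (s)
x = np.linspace(0.0, 1200.0, 49)       # offsets in meters (49 traces, 25 m apart)

# exact hyperbolic moveout with the true velocity, then NMO with the trial velocity v0
t_hyper = np.sqrt(t0**2 + (x / v_true)**2)
t_after_nmo = np.sqrt(t_hyper**2 - (x / v0)**2)

# least-squares estimate of Delta S and the recovered velocity
dS = np.sum(x**2 * (t_after_nmo**2 - t0**2)) / np.sum(x**4)
v_est = v0 / np.sqrt(dS * v0**2 + 1.0)
print(v_est)                           # close to v_true = 2000 m/s
```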
End of explanation
%%file nmo2.scons
Flow('nmo2','gather vel','nmo half=n velocity=${SOURCES[1]}')
Result('nmo2','window f1=888 n1=200 | grey title="Physical Flattening" ')
view('nmo2')
Explanation: Last step - physical flattening
End of explanation
<END_TASK> |