text_prompt | code_prompt
---|---
stringlengths 168 to 30.3k | stringlengths 67 to 124k
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now let's try applying a weighting in the frequency domain that doesn't change the TD variance.
| <ASSISTANT_TASK:>
Python Code:
invasd = sqrt(2./srate)/sqrt(shaped_psd1)
invasd[0] = 0.
whitener = sig.hann(seglen*srate)*roll(irfft(invasd), seglen*srate/2)
wtest = TimeSeries(sig.convolve(shaped_noise2, whitener, mode='same')[8*srate:-8*srate],
sample_rate=srate)
p1 = wtest.psd(seglen).plot()
mean(wtest.psd(seglen)) / (2./srate)
mean(wtest**2)
filt1 = sig.firwin2(seglen*srate, [0.,0.25,0.5,0.75,1.], [0.,0.5,1.,0.5,0.], window='hann')
filt1 /= sqrt(sum(filt1**2))
weight_test1 = TimeSeries(sig.convolve(noise, filt1, mode='same')[8*srate:-8*srate],
sample_rate=srate)
var(weight_test1)
filt2 = gen_filter.copy()
filt2 /= sqrt(sum(filt2**2))
weight_test2 = TimeSeries(sig.convolve(noise, filt2, mode='same')[8*srate:-8*srate],
sample_rate=srate)
var(weight_test2)
filt12 = sig.hann(seglen*srate)*roll(irfft(abs(rfft(filt1)*rfft(filt2))), seglen*srate/2)
weight_test12 = TimeSeries(sig.convolve(noise, filt12, mode='same')[8*srate:-8*srate],
sample_rate=srate)
plt.plot(freqs, abs(rfft(filt1)*rfft(filt2)))
plt.plot(filt12)
sum(filt1**2)
sum(abs(rfft(filt1))**2)/(seglen*srate/2.)
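# Hedged consistency check (not in the original notebook; it reuses only names defined
# above): by Parseval's theorem the two energies computed in the previous two cells
# should agree for a long filter, since the one-sided rfft spectrum carries roughly
# half of the two-sided energy. The ratio below should therefore be close to 1.
print(sum(filt1**2) / (sum(abs(rfft(filt1))**2)/(seglen*srate/2.)))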
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pandas
Step2: <hr>
Step3: <hr>
Step4: Keyboard Shortcuts
Step5: IPython ships with several useful built-in magic commands; one of them, %time, lets you measure how long your code takes to run.
Step6: %timeit (%% for the multi-line cell form) reports an averaged timing, running our code repeatedly in a loop.
Step7: Jupyter also lets you run system shell commands
Step8: Run the file we just created
| <ASSISTANT_TASK:>
Python Code:
from IPython.display import IFrame
IFrame('http://jupyter.org/', width='100%', height=350)
from IPython.display import IFrame
IFrame('http://i.giphy.com/12Y8uwQsnpizCw.gif', width='100%', height=350)
from IPython.display import IFrame
IFrame('http://pandas.pydata.org/', width='100%', height=350)
from IPython.display import IFrame
IFrame('http://scikit-learn.org/', width='100%', height=350)
print('hello')
list(range(10))
list?
%time _ = [i for i in range(1000)]
%%timeit
_ = list(range(200))
! echo "print('wasup!')" > my.py
! ls
%run my.py
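# Small hedged extra (not in the original notebook): IPython can also capture shell
# output into a Python variable, which is convenient when scripting around notebooks.
files = !ls
print(files)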
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We will start by node label prediction. Download this network. It contains protein communications in Baker’s yeast. Each node (protein) has a special binary attribute ICSC (intracellular signaling cascade).
Step2: It might not be clear from the picture above but the level of homogeneity is quite high. For each node we are able to compute the average value of label
Step3: Iterative Classification Method
Step4: Label Propagation
Step5: Link Prediction - Scoring functions
Step6: Shortest Path Length
Step7: Number of common neighbours
Step8: Jaccard Score
Step9: Adamic/Adar Score
Step10: Preferential Attachment score
Step11: Katz Score
Step12: Let's compare how the scores behave for pairs of nodes with and without an edge in between
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
import scipy as sp
import networkx as nx
%matplotlib inline
g = nx.read_gml('./data/ppi.CC.gml.txt')
cc = list(nx.connected_components(g))
g = nx.subgraph(g,cc[0])
g = nx.relabel.convert_node_labels_to_integers(g)
labels = np.array(nx.get_node_attributes(g, 'ICSC').values(), dtype=float)
nx.draw_spring(g, node_color = labels)
nnICSC = np.asarray(map(lambda(v): np.mean(labels[g.neighbors(v)]), g.nodes_iter()))
nnICSC
plt.figure(figsize=(10,5))
plt.hist(nnICSC[np.where(labels == 1)], bins=6,)
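# Hedged add-on (not part of the original assignment): networkx can quantify the label
# homogeneity ("homophily") directly via the attribute assortativity coefficient;
# values close to +1 mean neighbouring proteins tend to share the same ICSC label.
print(nx.attribute_assortativity_coefficient(g, 'ICSC'))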
lablNN = labels[:]
idx = np.random.randint(0,len(lablNN), size=40)
lablNN[idx] = np.nan
# Your code here
## Get the adjacency matrix
A = nx.adjacency_matrix( g )
## Find the unclassified nodes
unlabelled = np.isnan( lablNN )
## Slice the adjacency matrix
# B = A[unlabelled].tocsc()[:,~unlabelled].tocsr()
B = A.tocsc()[:,~unlabelled].tocsr()
## Compute the mean label of the labelled neighbours of each unlabelled node.
new_labels = B.dot( lablNN[~unlabelled] ) / B.sum( axis = 1 ).getA1( )
## Update the labels
lablNN[unlabelled] = new_labels[unlabelled]
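# Hedged sketch (not in the original notebook) of how the single update above can be
# iterated, which is the usual way the Iterative Classification Method is run; the cap
# of 10 sweeps, the 0.5 decision threshold and the helper names are our assumptions.
for _ in range(10):
    prev = lablNN.copy()
    hard = (lablNN >= 0.5).astype(float)              # current hard label guesses
    neigh_mean = A.dot(hard) / A.sum(axis=1).getA1()  # mean neighbour label per node
    lablNN[unlabelled] = neigh_mean[unlabelled]       # only re-estimate unlabelled nodes
    if np.allclose(prev, lablNN):
        break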
# It is better to initialize like that
fixedLabels = labels[:]+1
curLabels = labels[:]+1
# And indicate labeled nodes instead of unlabeled
idxLabeled = np.zeros((g.number_of_nodes(),), dtype=bool)
idxLabeled[np.random.randint(0,len(labels), size=90)] = True
curLabels[~idxLabeled] = 0
A = nx.adj_matrix( g )
D = sp.sparse.diags( 1.0 / A.sum( axis = 1 ).getA1( ), offsets = 0 )
P = D.dot( A )
def LabelPropagation(G, idxLabeled, curLabels, fixedLabels, iters = 1000):
A = nx.adj_matrix( g )
D = sp.sparse.diags( 1.0 / A.sum( axis = 1 ).getA1( ), offsets = 0 )
P = D.dot( A )
# Your code here
return np.round(resultLabels)
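# One possible (hedged) reference implementation of the stub above: the clamping of the
# fixed labels and the fixed iteration count follow the standard label-propagation
# recipe but are not taken from the original notebook; the function name is ours.
def LabelPropagationRef(G, idxLabeled, curLabels, fixedLabels, iters=1000):
    A = nx.adj_matrix(G)
    D = sp.sparse.diags(1.0 / A.sum(axis=1).getA1(), offsets=0)
    P = D.dot(A)                                   # row-normalised transition matrix
    resultLabels = curLabels.copy()
    for _ in range(iters):
        resultLabels = P.dot(resultLabels)                  # propagate labels to neighbours
        resultLabels[idxLabeled] = fixedLabels[idxLabeled]  # clamp the known labels
    return np.round(resultLabels)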
g = nx.read_gml('./data/fblog.gml.txt')
vNum = g.number_of_nodes()
def matrixPlot(A):
plt.figure(1, figsize=(6, 6))
plt.imshow(A,
interpolation="none"
)
# Your code here
spath = nx.floyd_warshall_numpy( g )
matrixPlot( spath )
# Your code here
A = nx.adjacency_matrix( g )
common_neighbour = A.dot( A ).todense()
matrixPlot( common_neighbour )
# Your code here
jaccard_score = np.asarray( [ ( len( np.intersect1d( g[v].keys(), g[u].keys() ) ) + 0.0 ) / len( np.union1d( g[v].keys(), g[u].keys() ) )
for v in g.nodes_iter( ) for u in g.nodes_iter( ) ], dtype = np.float ).reshape( 2*[g.number_of_nodes()] )
matrixPlot( jaccard_score )
# Your code here
adar_score = np.asarray( [ np.sum( [ 1.0 / np.log( len( g[w].keys() ) ) for w in np.intersect1d( g[v].keys(), g[u].keys() ) ] )
for v in g.nodes_iter( ) for u in g.nodes_iter( ) ], dtype = np.float ).reshape( 2*[g.number_of_nodes()] )
matrixPlot( adar_score )
# Your code here
pref_score = np.asarray( [ len( g[v].keys() ) * len( g[u].keys() )
for v in g.nodes_iter( ) for u in g.nodes_iter( ) ], dtype = np.float ).reshape( 2*[g.number_of_nodes()] )
matrixPlot( pref_score )
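# Hedged aside (not in the original notebook): networkx also ships several of these
# scores as pairwise generators, which can be used to spot-check the dense matrices
# above; availability depends on the networkx version installed.
some_nodes = list(g.nodes())[:5]
some_pairs = [(u, v) for u in some_nodes for v in some_nodes if u != v]
print(list(nx.jaccard_coefficient(g, some_pairs))[:3])
print(list(nx.preferential_attachment(g, some_pairs))[:3])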
# Your code here
A = nx.adjacency_matrix( g ).tocsc()
beta = 0.5
I = sp.sparse.eye(*A.shape)
katzScore = ( sp.sparse.linalg.inv( I - beta * A ) - I ).todense()
matrixPlot( katzScore )
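# Hedged alternative (not in the original notebook): instead of the explicit inverse,
# the Katz score is often approximated by truncating its power series
# sum_{k>=1} beta^k A^k; five terms is an arbitrary choice here, and the series only
# converges when beta is smaller than 1/lambda_max of the adjacency matrix.
term = beta * A
katzApprox = term.copy()
for _ in range(4):
    term = beta * term.dot(A)
    katzApprox = katzApprox + term
matrixPlot(katzApprox.todense())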
A = np.asarray(nx.adj_matrix(g).todense())
xyTriu = np.vstack(np.triu_indices_from(A, k=1)).T
wEdge = [katzScore[xy[0],xy[1]] for xy in xyTriu if A[xy[0],xy[1]]]
woEdge = [katzScore[xy[0],xy[1]] for xy in xyTriu if not A[xy[0],xy[1]]]  # 'not' rather than bitwise '~', which is truthy for both 0 and 1 entries
data = [wEdge, woEdge]
fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(10,5))
axes.violinplot(data, showmeans=True)
axes.set_xticklabels(['', 'With Edges', '', 'W/o Edges'])
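# Hedged extension (not part of the original notebook; it assumes scikit-learn is
# available): a single-number summary of how well a score separates edges from
# non-edges is the ROC AUC computed over all node pairs.
from sklearn.metrics import roc_auc_score
pair_labels = [1 if A[xy[0], xy[1]] else 0 for xy in xyTriu]
pair_scores = [katzScore[xy[0], xy[1]] for xy in xyTriu]
print(roc_auc_score(pair_labels, pair_scores))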
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: Dataset Parameters
Step3: Here we see that there are three wavelengths Parameters, with the wavelengths@binary being filled with the input array (since we didn't override the components or manually pass a dictionary). Because of this, the flux_densities and sigmas are only visible for the binary component as well. (If we were to fill either of the two other arrays, the corresponding Parameters would become visible as well). We can see, however, that there is an entry for flux_densities and sigmas for each of the times we passed.
Step4: For information on the passband-dependent parameters, see the section on the lc dataset (these are used only to compute fluxes when rv_method=='flux-weighted')
Step5: flux_densities
Step6: sigmas
Step7: profile_func
Step8: profile_rest
Step9: profile_sv
Step10: Synthetics
Step11: The model for a line profile dataset will expose flux-densities at each time and for each component where the corresponding wavelengths Parameter was not empty. Here since we used the default and exposed line-profiles for the entire system, we have a single entry per-time.
Step12: Plotting
Step13: Mesh Fields
| <ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.1,<2.2"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
b.add_dataset('lp', times=[0,1,2], wavelengths=np.linspace(549, 551, 101))
print b.filter(kind='lp')
print b.filter(kind='lp_dep')
print b.filter('wavelengths')
print b.get_parameter('wavelengths', component='binary')
print b.filter('flux_densities')
print b.get_parameter('flux_densities', time=0)
print b.filter('sigmas')
print b.get_parameter('sigmas', time=0)
print b.get_parameter('profile_func')
print b.get_parameter('profile_rest')
print b.get_parameter('profile_sv')
b.run_compute(irrad_method='none')
b['lp@model'].twigs
print b.filter('flux_densities', context='model')
print b.get_parameter('flux_densities', context='model', time=0)
afig, mplfig = b.filter(dataset='lp01', context='model', time=0).plot(show=True)
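# Hedged extra (not in the original tutorial): since the model exposes one line profile
# per requested time, the same filter-and-plot call can simply be looped over the
# requested times to compare the profiles.
for t in [0, 1, 2]:
    afig, mplfig = b.filter(dataset='lp01', context='model', time=t).plot(show=True)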
b.add_dataset('mesh', times=[0], dataset='mesh01')
print b['columns'].choices
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: PT3S
Step2: Install PT3S to site-packages
Step3: Logging
Step5: about from PT3S ... import ... and pip install -e .
Step6: Tests (if required)
Step7: Single tests
Step8: pure mock-up run
Step9: ### Single test based on this pure mock-up run
Step10: ### further examples of single tests
Step11: Example
Step12: Example
Step13: Load and use the model and results
Step14: Model
Step15: Results
Step16: Non-Vector
Step17: Vector
Step18: Vector
Step19: "Synchronize" model and result
Step20: "Augment" model views with results
Step21: Vector results
Step22: Pipe vector results
Step23: Sections (hydraulic profiles)
Step24: Store in H5
Step25: Rm plot examples
Step26: Example 1
Step27: Example 2
Step28: Example 3
Step29: NetworkX examples
Step30: The Laplacian matrix is defined as L = D - A (degree matrix minus adjacency matrix)
Step31: Clean Up
Step32: Control-engineering signal model
Step33: Dashboard examples
Step34: TimeCurve Data
Step35: Selected time range
Step36: Fwes W
Step37: Pump Q
Step38: Pump Ny
Step39: Pump Nw
Step40: Pump Nyw-Pairs
Step41: Leaks
Step42: DH
Step43: For plots that work with a single time or with aggregates over a single time range
Step44: Colors for longitudinal sections
Step45: Colors for time curves
Step46: Defining the colors
Step47: Plot
Step48: Final steps
Step49: Develop (use local PT3S under Development)
| <ASSISTANT_TASK:>
Python Code:
import doctest
>>> from platform import python_version
>>> print(python_version())
3.8.8
doctest.testmod()
### ggf. Rechte erforderlich:
### entweder in PowerShell: Start-Process powershell -Verb runAs
### oder RechteMausTaste WindowsSymbol: Windows PowerShell (Administrator)
### dann (ohne ! in PowerShell auf pip-Verz.):
#!pip uninstall --yes PT3S
#!pip install PT3S --no-cache-dir
import logging
import os
logger = logging.getLogger('PT3S')
logFileName= r"PT3S.log"
loglevel = logging.DEBUG
logging.basicConfig(filename=logFileName
,filemode='w'
,level=loglevel
,format="%(asctime)s ; %(name)-60s ; %(levelname)-7s ; %(message)s")
fileHandler = logging.FileHandler(logFileName)
logger.addHandler(fileHandler)
consoleHandler = logging.StreamHandler()
consoleHandler.setFormatter(logging.Formatter("%(levelname)-7s ; %(message)s"))
consoleHandler.setLevel(logging.INFO)
logger.addHandler(consoleHandler)
#%pwd
# works only if pip install -e .is NOT active:
###from PT3S import Mx,Xm,Rm
# or if subdir is PT3S
#%cd -q ..
# ...
#%cd -q PT3S
# if pip install -e .IS active
# the local modules can be imported with:
#from PT3S
import Mx, Xm, Rm
# if %pwd is local devdir
import doctest
>>> import pandas as pd
>>> pd.__version__
'1.2.4'
doctest.testmod()
%run test.py -l -q -m 1 -t both \
-w OneLPipe -w LocalHeatingNetwork -w GPipe -w GPipes -w TinyWDN
# mit SIR 3S Daten (Modelle + Ergebnisse) aus Version 11
%run test.py -l -q -m 1 -t both \
-w OneLPipe -w LocalHeatingNetwork -w GPipe -w GPipes -w TinyWDN \
--testDir testdata11
# mit SIR 3S Daten (Modelle + Ergebnisse) aus Version 10
%run test.py -l -q -m 1 -t both \
-w OneLPipe -w LocalHeatingNetwork -w GPipe -w GPipes -w TinyWDN \
--testDir testdata10
# mit SIR 3S Daten (Modelle + Ergebnisse) aus Version 09
%run test.py -l -q -m 1 -t both \
-w OneLPipe -w LocalHeatingNetwork -w GPipe -w GPipes -w TinyWDN \
--testDir testdata09 --dotResolution NONE
%run test.py -l -q -m 0 \
-s Xm\. \
-x Xm\.vKNOTexpEBES \
-x Xm\.vROHRexpEBES \
-x Xm\._vRUES -x Xm\._vRSLW -x Xm\._vRSTN -x Xm\._vRXXX -x Xm\._vREdges \
-x Xm\.MxAdd \
-t both -y yes -z no \
-w OneLPipe -w LocalHeatingNetwork -w GPipe -w GPipes -w TinyWDN
%run test.py -l -q -m 0 \
-s Mx\. \
-x Mx\.FromH5 \
-x Mx\.ToH5 \
-t both -y yes -z no \
-w OneLPipe -w LocalHeatingNetwork -w GPipe -w GPipes -w TinyWDN
# test_reinerMockUpLauf
%run test.py -l -v -m 0 -t before \
-u yes \
-y yes -z yes \
-w DHNetwork -w LocalHeatingNetwork -w GPipes
%run test.py -l -q -m 0 \
-s Xm\._vRUES -s Xm\._vRSLW -s Xm\._vRSTN -s Xm\._vRXXX -s Xm\._vREdges \
-y no -z no \
-w DHNetwork -w LocalHeatingNetwork -w GPipes
%run test.py -l -q -m 0 \
-s Xm\.MxAdd \
-y no -z no \
-w DHNetwork -w LocalHeatingNetwork -w GPipes
%run test.py -l -q -m 0 \
-s Xm\.vKNOTexpEBES \
-y no -z no \
-w DHNetwork
%run test.py -l -q -m 0 \
-s Xm\.vROHRexpEBES \
-y no -z no \
-w DHNetwork
%run Mx.py -l -q -m 0 -s getMicrosecondsFromRefTime
%run Xm.py -l -q -m 0 -s Xm\.constructShortestPathFromNodeList -t both -y yes -z no -w GPipes -w LocalHeatingNetwork
%run Rm.py -l -q -m 0 \
-s pltMakeCategoricalColors \
-s pltMakeCategoricalCmap \
-s Rm\. \
-y no -z no \
-w DHNetwork
import pandas as pd
import numpy as np
import scipy
import networkx as nx
path='.'
xmlFile=os.path.join(path,'testdata\LocalHeatingNetwork.XML')
xm=Xm.Xm(xmlFile=xmlFile)
vVBEL=xm.dataFrames['vVBEL']
vVBEL.filter(items=['BESCHREIBUNG','NAME_i','NAME_k','LAYR','L','D']).sort_index(level=1)
vVBEL.dtypes
[viewOrTable for viewOrTable in sorted(xm.dataFrames.keys())]
# Weg A um an Ergebnisse zu kommen:
# Modell(-Views) um MX-Ergebnisse ergaenzen; MX-Ergebnissatz des Modells wird implizit gelesen und returned
mx=xm.MxSync()
# Weg B um an Ergebnisse zu kommen:
# einen MX-Ergebnissatz voellig unabhaengig von einem Modell lesen
# z.B. den MX-Ergebnissatz des Modells ...
(wDir,modelDir,modelName,mx1File)=xm.getWDirModelDirModelName()
# ... lesen
mx=Mx.Mx(mx1File=mx1File)
mx.df.filter(items=['ALLG~~~4639827058859487185~SNAPSHOTTYPE','KNOT~V-L~~5736262931552588702~PH'])
mx.df.filter(regex='^KNOT').filter(regex='PH$').plot()
timesReq=mx.df.index.tolist()
mxVecsFileData=mx.getMxsVecsFileData(timesReq)
for vecsFileResult in mxVecsFileData:
print(vecsFileResult.index)
vecsFileResult.filter(regex='^ROHR').filter(regex='^(?!.*VEC)')
vecsFileResult.filter(regex='^KNOT')
mx.dfVecAggs
xm.MxSync(mx=mx)
vROHR=xm.dataFrames['vROHR']
vROHR[['L','mx2NofPts','mx2Idx']]
xm.MxAdd(mx=mx)
vKNOT=xm.dataFrames['vKNOT']
vKNOT.dtypes
vVBEL=xm.dataFrames['vVBEL']
vVBEL.filter(items=['NAME_i','NAME_k','Z_i','KNOT~*~*~*~PH_i','Q']).sort_values(['Q','KNOT~*~*~*~PH_i'], ascending=[1,0])
vVBEL.dtypes
vROHR=xm.dataFrames['vROHR']
vROHR.dtypes
vFWVB=xm.dataFrames['vFWVB']
vFWVB.dtypes
vROHRVecResults=xm.dataFrames['vROHRVecResults']
vROHRVecResults[['pk','mx2Idx','IptIdx','ROHR~*~*~*~SVEC','ROHR~*~*~*~ZVEC','ROHR~*~*~*~MVEC']]
vAGSN=xm.dataFrames['vAGSN']
vAGSN.dtypes
dfOneVL=vAGSN[(vAGSN['LFDNR']=='1') & (vAGSN['Layer']==1)]
dfOneVL[['OBJTYPE','x','P','Q']]
dfOneRL=vAGSN[(vAGSN['LFDNR']=='1') & (vAGSN['Layer']==2)]
dfOneRL[['OBJTYPE','x','P','Q']]
plt.plot(dfOneVL['x'],dfOneVL['Q']
,dfOneRL['x'],dfOneRL['Q'])
plt.plot(dfOneVL['x'],dfOneVL['P']
,dfOneRL['x'],dfOneRL['P'])
plt.ylim((2.5, 5.5))
# SIR 3S MIN-Ergebnis lesen
xm.MxAdd(mx=mx,aggReq='TMIN')
vAGSN=xm.dataFrames['vAGSN']
dfOneVLTMIN=vAGSN[(vAGSN['LFDNR']=='1') & (vAGSN['Layer']==1)]
plt.plot(dfOneVL['x'],dfOneVL['P']
,dfOneRL['x'],dfOneRL['P']
,dfOneVL['x'],dfOneVLTMIN['P']
)
plt.ylim((2.5, 5.5))
xm.ToH5()
mx.ToH5()
# next read will be faster because H5 is read instead of XML / MX if H5 is newer than XML / MX
xm=Xm.Xm(xmlFile=xmlFile)
mx=Mx.Mx(mx1File=mx1File)
#xm?
#mx?
dir(Rm)
import Rm
dir(Rm)
rm=Rm.Rm(xm=xm,mx=mx)
plt.close('all')
fig=plt.figure(dpi=2*72,linewidth=1.)
# 3Classes und FixedLimits sind standardmaessig Falsch; RefPerc ist standardmaessig Wahr
pFWVB=rm.pltNetDHUS(
pltTitle='pltNetDHUS: Bsp1: Prozentdarstellung - keine Klassen'
,timeDeltaToT= pd.to_timedelta('30 seconds')
,pFIGNrcv=['WBLZ~WärmeblnzGes~\S*~\S+~WES'
,'WBLZ~WärmeblnzGes~\S*~\S+~WVB'
,'KNOT~PKON-Knoten~\S*~\S+~QM'
]
,pFIGNrcvTxt=['Erzeugung'
,'Verbrauch'
,'Kontrolle DH'
]
,pVICsDf=pd.DataFrame({'Kundenname': ['VIC1'],'Knotenname': ['V-K007']})
,CBShrink=1. # default: 0.3; ist hier wg. der ausgepraegten Querformat-Modellausdehnung zu klein
,CBLabelPad=-20 # default: -50; dito zu gross
)
plt.show()
plt.close('all')
fig=plt.figure(dpi=2*72,linewidth=1.)
# 3Classes und FixedLimits sind standardmaessig Falsch; RefPerc ist standardmaessig Wahr
pFWVB=rm.pltNetDHUS(
pltTitle='pltNetDHUS: Bsp2: Prozentdarstellung - Klassen'
,timeDeltaToT= pd.to_timedelta('30 seconds')
,pFIGNrcv=['WBLZ~WärmeblnzGes~\S*~\S+~WES'
,'WBLZ~WärmeblnzGes~\S*~\S+~WVB'
,'KNOT~PKON-Knoten~\S*~\S+~QM'
]
,pFIGNrcvTxt=['Erzeugung'
,'Verbrauch'
,'Kontrolle DH'
]
,CBShrink=1. # default: 0.3; ist hier wg. der ausgepraegten Querformat-Modellausdehnung zu klein
,CBLabelPad=-20 # default: -50; dito zu gross
,pFWVBMeasure3Classes=True
,pFWVBMeasureCBFixedLimitHigh=0.80
,pFWVBMeasureCBFixedLimitLow=0.66
)
plt.show()
plt.close('all')
fig=plt.figure(dpi=2*72,linewidth=1.)
# 3Classes und FixedLimits sind standardmaessig Falsch; RefPerc ist standardmaessig Wahr
pFWVB=rm.pltNetDHUS(
pltTitle='pltNetDHUS: Bsp3: keine Prozentdarstellung - Klassen'
,pFIGNrcv=['WBLZ~WärmeblnzGes~\S*~\S+~WES'
,'WBLZ~WärmeblnzGes~\S*~\S+~WVB'
,'KNOT~PKON-Knoten~\S*~\S+~QM'
]
,pFIGNrcvTxt=['Erzeugung'
,'Verbrauch'
,'Kontrolle DH'
]
,CBShrink=1. # default: 0.3; ist hier wg. der ausgepraegten Querformat-Modellausdehnung zu klein
,CBLabelPad=-20 # default: -50; dito zu gross
,pFWVBMeasure3Classes=True
,pFWVBMeasureInRefPerc=False
,pFWVBMeasure='FWVB~*~*~*~W'
,pFWVBMeasureCBFixedLimitHigh=200.
,pFWVBMeasureCBFixedLimitLow=130.
)
plt.show()
G=nx.from_pandas_edgelist(xm.dataFrames['vVBEL'], source='NAME_i', target='NAME_k', edge_attr=True,create_using=nx.MultiGraph())
for e, datadict in G.edges.items():
print(e)
print(datadict)
for n, nbrsdict in G.adj.items():
print("!{0:s}".format(n))
for nox, mgdct in nbrsdict.items():
print(" {0:s}".format(nox))
for mg,edct in mgdct.items():
print(" {0:d}: {1:s}".format(mg,str(edct)))
print(nx.dijkstra_path(G, 'V-L', 'R-L'))
max([d for n,d in nx.degree(G)])
spmtx=nx.adjacency_matrix(G) # Return type: SciPy sparse matrix
plt.spy(spmtx)
spmtx=nx.laplacian_matrix(G)
plt.spy(spmtx)
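# Hedged check (not in the original notebook) of the definition quoted in the step
# description, L = D - A: the Laplacian plus the adjacency matrix should equal the
# diagonal matrix of (weighted) node degrees. The helper names are ours.
L_chk = nx.laplacian_matrix(G)
A_chk = nx.adjacency_matrix(G)
deg = np.asarray(A_chk.sum(axis=1)).ravel()
print(np.allclose((L_chk + A_chk).todense(), np.diag(deg)))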
nl=[n for n in G.nodes()]
A=nx.to_scipy_sparse_matrix(G)
nlo=scipy.sparse.csgraph.reverse_cuthill_mckee(A)
optnl=[nl[idx] for idx in nlo]
spmtx=nx.laplacian_matrix(G,nodelist=optnl)
plt.spy(spmtx)
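# Hedged add-on (not in the original notebook): the effect of the reverse Cuthill-McKee
# reordering can also be quantified as the matrix bandwidth before and after.
def bandwidth(M):
    i, j = M.nonzero()
    return int(abs(i - j).max())
print(bandwidth(nx.laplacian_matrix(G)), bandwidth(nx.laplacian_matrix(G, nodelist=optnl)))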
xm.delFiles()
mx.delFiles()
xmlFile=os.path.join(path,'testdata\DHNetwork.XML')
xm=Xm.Xm(xmlFile=xmlFile)
mx=xm.MxAdd()
vRSTN=xm.dataFrames['vRSTN']
rstnDiePGRPStellen=vRSTN[[
'CONT'
#,'CONT_PARENT'
,'KA'
,'BESCHREIBUNG'
,'ITYP_OBJTYPE'
,'ITYP_OBJATTR'
,'Chk'
,'ik_Chk'
# ,'OBJTYPE'
,'NAME_i'
,'NAME_k'
,'CONT_i'
# ,'TABL_Chk'
# ,'TABL'
# ,'KNOT'
# ,'RART'
# ,'RART_TYP'
,'RARTPG'
# ,'RCPL'
# ,'RCPL_KNOT1'
# ,'RCPL_KNOT2'
# ,'NAME_i_PUMP'
# ,'NAME_k_PUMP'
]].sort_values(by=['ITYP_OBJTYPE','ITYP_OBJATTR','CONT','KA'])[vRSTN['ITYP_OBJTYPE']=='PGRP']
rstnDiePGRPStellen
rstnDiePGRPStellen[rstnDiePGRPStellen['NAME_i']=='R-A-SS']
# Verbindungslinien
vREdges=xm.dataFrames['vREdges']
# Signalweg (mit Knotentyp_INFO)
G=nx.from_pandas_edgelist(vREdges, source='KnExt_Ki', target='KnExt_Kk', edge_attr=True,create_using=nx.DiGraph())
nx.shortest_path(G,'Leck_1_Ein_RSLW','KA-0008_RSTN')
# Signalweg (ohne Knotentyp)
G=nx.from_pandas_edgelist(vREdges, source='Kn_Ki', target='Kn_Kk', edge_attr=True,create_using=nx.DiGraph())
nx.shortest_path(G,'Leck_1_Ein','KA-0008')
G=nx.from_pandas_edgelist(vREdges, source='Kn_Ki', target='Kn_Kk', edge_attr=True,create_using=nx.Graph())
nl=[n for n in G.nodes()]
A=nx.to_scipy_sparse_matrix(G)
nlo=scipy.sparse.csgraph.reverse_cuthill_mckee(A)
optnl=[nl[idx] for idx in nlo]
spmtx=nx.laplacian_matrix(G,nodelist=optnl)
plt.spy(spmtx)
import pandas as pd
import matplotlib
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import matplotlib.dates as mdates
from matplotlib import colors
from matplotlib.colorbar import make_axes
from mpl_toolkits.axes_grid1 import make_axes_locatable
import numpy as np
import scipy
import networkx as nx
import re
from itertools import chain
import math
import sys
path='.'
xmlFile=os.path.join(path,'testdata\DHNetwork.XML')
xm=Xm.Xm(xmlFile=xmlFile)
mx=xm.MxAdd()
xm.ToH5()
mx.ToH5()
tStartSz=mx.df.index[0]
tEndeSz=mx.df.index[-1]
tStartSz
tEndeSz
timeSpanSz=tEndeSz-tStartSz
timeSpanSz
tStart=tStartSz
tEnde=tEndeSz
tcData=mx.df.loc[tStart:tEnde,:]
tcData.index[0]
tcData.index[-1]
mx1DfFwesW=tcData.filter(regex='^FWES').filter(regex='W$')
mx1DfFwesW=mx1DfFwesW.reindex(sorted(mx1DfFwesW.columns),axis=1)
mx1DfFwesW.head()
plt.close()
size_DINA6quer=(5.8,4.1)
plt.rc('figure',figsize=size_DINA6quer)
fig=plt.figure()
mx1DfFwesW.plot()
timeSumMaxMx1DfFwesW=mx1DfFwesW.sum(axis=1).idxmax()
timeSumMaxMx1DfFwesW
timeSumMinMx1DfFwesW=mx1DfFwesW.sum(axis=1).idxmin()
timeSumMinMx1DfFwesW
mx1DfPumpQ=tcData.filter(regex='^PUMP').filter(regex='QM$')
mx1DfPumpQ=mx1DfPumpQ.reindex(sorted(mx1DfPumpQ.columns),axis=1)
mx1DfPumpQ.head()
plt.close()
size_DINA6quer=(5.8,4.1)
plt.rc('figure',figsize=size_DINA6quer)
fig=plt.figure()
mx1DfPumpQ.plot()
mx1DfPumpNy=tcData.filter(regex='^PUMP').filter(regex='N$')
mx1DfPumpNy=mx1DfPumpNy.reindex(sorted(mx1DfPumpNy.columns),axis=1)
mx1DfPumpNy.head()
mx1DfPumpWCols=[col for col in mx.df.columns.tolist()
if
re.match(Mx.reSir3sIDcompiled,col).group('NAME1') in ['wNA','wNB','wNC']
and
re.match(Mx.reSir3sIDcompiled,col).group('ATTRTYPE') in ['XA']
and
re.match(Mx.reSir3sIDcompiled,col).group('OBJTYPE') in ['RSLW']
]
mx1DfPumpNw=tcData.filter(items=mx1DfPumpWCols)
mx1DfPumpNw=mx1DfPumpNw.reindex(sorted(mx1DfPumpNw.columns),axis=1)
mx1DfPumpNw.head()
mx1DfPumpNywPairs=pd.concat([mx1DfPumpNy,mx1DfPumpNw],axis=1)
mx1DfPumpNywPairs=mx1DfPumpNywPairs.filter(items=list(chain.from_iterable(
[x for x in zip(mx1DfPumpNy.columns.tolist(),mx1DfPumpNw.columns.tolist())]
)
)
)
mx1DfPumpNywPairs.head()
plt.close()
size_DINA6quer=(5.8,4.1)
plt.rc('figure',figsize=size_DINA6quer)
fig=plt.figure()
mx1DfPumpNywPairs.plot()
mx1DfPipeLIO=tcData.filter(regex='^ROHR').filter(regex='LECKEINAUS$')
mx1DfPipeLIO=mx1DfPipeLIO.reindex(sorted(mx1DfPipeLIO.columns),axis=1)
mx1DfPipeLIO.head()
mx1DfPipeLQ=tcData.filter(regex='^ROHR').filter(regex='LECKMENGE$')
mx1DfPipeLQ=mx1DfPipeLQ.reindex(sorted(mx1DfPipeLQ.columns),axis=1)
mx1DfPipeLQ.head()
mx1DfPipeLPairs=pd.concat([mx1DfPipeLIO,mx1DfPipeLQ],axis=1)
mx1DfPipeLPairs=mx1DfPipeLPairs.filter(items=list(chain.from_iterable(
[x for x in zip(mx1DfPipeLIO.columns.tolist(),mx1DfPipeLQ.columns.tolist())]
)
)
)
mx1DfPipeLPairs.describe()
# effektive Leckmengen als ~LEAK = ~LECKEINAUS * ~LECKMENGE
colList=mx1DfPipeLPairs.columns.tolist()
for idx in range(0,len(colList),2):
col=colList[idx]
mo=re.match(Mx.reSir3sIDcompiled,col)
colNew=mo.group('OBJTYPE')
colNew=colNew+Mx.reSir3sIDSep+str(mo.group('NAME1'))
colNew=colNew+Mx.reSir3sIDSep+mo.group('NAME2')
colNew=colNew+Mx.reSir3sIDSep+mo.group('OBJTYPE_PK')
colNew=colNew+Mx.reSir3sIDSep+'LEAK'
mx1DfPipeLPairs[colNew]=mx1DfPipeLPairs.apply(lambda row: row[idx] * row[idx+1] , axis=1)
mx1DfPipeLeaks=mx1DfPipeLPairs.filter(regex='LEAK$')
mx1DfPipeLeaks.describe()
s=mx1DfPipeLeaks.max()
s=s[s>0]
s.index.tolist()
mx1DfPipeLeaks=mx1DfPipeLeaks.filter(items=s.index.tolist())
mx1DfPipeLeaks.head()
plt.close()
size_DINA6quer=(5.8,4.1)
plt.rc('figure',figsize=size_DINA6quer)
fig=plt.figure()
mx1DfPipeLeaks.plot()
colDct={}
for col in tcData.columns.tolist():
mo=re.match(Mx.reSir3sIDcompiled,col)
colNew=mo.group('OBJTYPE')
colNew=colNew+Mx.reSir3sIDSep+str(mo.group('NAME1'))
colNew=colNew+Mx.reSir3sIDSep+mo.group('NAME2')
#colNew=colNew+Mx.reSir3sIDSep+mo.group('OBJTYPE_PK')
colNew=colNew+Mx.reSir3sIDSep+mo.group('ATTRTYPE')
colDct[col]=colNew
df=tcData.rename(columns=colDct)
mx1DfDH=pd.concat([df['RSLW~wDH_RD_A~~XA']
,df['RMES~yDH_pRL_A~~XA']
,df['RSLW~wDH_MD_A~~XA']
,df['RADD~yDH_pMD_A~~XA']
]
, axis=1)
mx1DfDH.head()
plt.close()
size_DINA6quer=(5.8,4.1)
plt.rc('figure',figsize=size_DINA6quer)
fig=plt.figure()
mx1DfDH.plot()
mx1DfDHQ=pd.concat([df['RMES~QDHGes~~XA']
,df['ALLG~~~LINEPACKRATE']
]
, axis=1)
mx1DfDHQ.head()
mx1DfDHV=pd.concat([df['ALLG~~~LINEPACKGEOM']
]
, axis=1)
mx1DfDHV=mx1DfDHV-mx1DfDHV['ALLG~~~LINEPACKGEOM'][0]
mx1DfDHV.describe()
mx1DfDHV.shape
def mxAdd(xm,mx,timeReq=None,aggReq=None,timeReq2nd=None):
xm.MxAdd(mx=mx,timeReq=timeReq,aggReq=aggReq,timeReq2nd=timeReq2nd)
vAGSN=xm.dataFrames['vAGSN']
vKNOT=xm.dataFrames['vKNOT']
vROHR=xm.dataFrames['vROHR']
vFWVB=xm.dataFrames['vFWVB']
vVBEL=xm.dataFrames['vVBEL']
vAGSN=xm.dataFrames['vAGSN']
vAGSN['PH']=vAGSN.apply(lambda row: row.P*math.pow(10.,5.)/(row.RHO*9.81),axis=1)
vAGSN['PH']=vAGSN['PH']+vAGSN['Z'].astype('float64')
zBzg=30.
vAGSN['bBzg']=vAGSN.apply(lambda row: row.RHO*9.81/math.pow(10.,5.),axis=1)
vAGSN['zBzg']= (vAGSN['Z'].astype('float64')-zBzg)*vAGSN['bBzg']
vAGSN['zBzg0']= (vAGSN['Z'].astype('float64')-0 )*vAGSN['bBzg']
vAGSN['zBzgMin']= (vAGSN['Z'].astype('float64')-vAGSN['Z'].astype('float64').min())*vAGSN['bBzg']
vAGSN['bBzg']=vAGSN['P']+vAGSN['zBzg']
hpSL=vAGSN[(vAGSN['LFDNR']=='1') & (vAGSN['Layer']==1)]
hpRL=vAGSN[(vAGSN['LFDNR']=='1') & (vAGSN['Layer']==2)]
vROHR=vROHR[(vROHR.apply(lambda x: True if x.CONT_ID == '1001' else False,axis=1))]
return vAGSN,vKNOT,vROHR,vFWVB,vVBEL,hpSL,hpRL
vAGSN_TStart,vKNOT_TStart,vROHR_TStart,vFWVB_TStart,vVBEL_TStart,hpSL_Start,hpRL_Start=mxAdd(xm,mx,timeReq=mx.df.index[0])
vAGSN_TEnde,vKNOT_TEnde,vROHR_TEnde,vFWVB_TEnde,vVBEL_TEnde,hpSL_Ende,hpRL_Ende=mxAdd(xm,mx,timeReq=mx.df.index[-1])
vAGSN_SMin,vKNOT_SMin,vROHR_SMin,vFWVB_SMin,vVBEL_SMin,hpSL_SMin,hpRL_SMin=mxAdd(xm,mx,aggReq='TMIN')
vAGSN_SMax,vKNOT_SMax,vROHR_SMax,vFWVB_SMax,vVBEL_SMax,hpSL_SMax,hpRL_SMax=mxAdd(xm,mx,aggReq='TMAX')
# besserer Weg mehrere Zeiten auf 1x zu erhalten:
xm.MxAdd(mx=mx
,aggReq=['TIME','TMIN','TMAX','TIME'] # Start, Min, Max, Ende # z.B. P P_1 P_2 P_3
,timeReq=3*[mx.df.index[0]]+[mx.df.index[-1]]
,timeReq2nd=4*[mx.df.index[-1]]
,viewList=['vAGSN','vKNOT','vROHR','vFWVB','vVBEL']
,ForceNoH5Update=True)
vAGSN=xm.dataFrames['vAGSN']
vAGSN.filter(items=['P','P_1','P_2','P_3'],axis=1).head()
# bar
hpCSL='red'
hpCRL='blue'
# bBzg
hpCSL2='lightcoral'
hpCRL2='cornflowerblue'
hpZ='black'
# Q
hpCSL3='salmon'
hpCRL3='lightsteelblue'
# bar Min/Max
hpCSLMax='mediumvioletred'
hpCSLMin='palevioletred'
hpCRLMax='darkcyan'
hpCRLMin='aqua'
# Linienattribute für Paare von Linien
lwThick=mpl.rcParams['lines.linewidth']*2 # zuerst gezeichnete Linie (hell)
lsThin='--' # danach gezeichnete Linie (dunkel)
# fuer A,B,C: Auswahl aus konstruierten Tönen
ntcCat=10
ntcCatSub=3
tcCm=Rm.pltMakeCategoricalCmap(catagoryColors=[idx for idx in range(ntcCat)],nOfSubCatsReq=ntcCatSub)
catA=0 # blau
catB=1 # orange
catC=2 # grün
catFromIdx={}
catFromIdx[0]=catA
catFromIdx[1]=catB
catFromIdx[2]=catC
# DH: RD,MD,VD:
tcC_SL=plt.get_cmap("tab20b").colors[3*4+2]
tcC_RL=plt.get_cmap("tab20b").colors[0*4+2]
tcC_ML=plt.get_cmap("tab20b").colors[4*4+2]
tcC_SLl=plt.get_cmap("tab20b").colors[3*4+3]
tcC_RLl=plt.get_cmap("tab20b").colors[0*4+3]
tcC_MLl=plt.get_cmap("tab20b").colors[4*4+3]
# DH: RD,MD-Paare: Sequenzen
tcC_XL=[tcC_RLl,tcC_RL,tcC_MLl,tcC_ML]
tcLs_XL=[mpl.rcParams['lines.linestyle'],lsThin,mpl.rcParams['lines.linestyle'],lsThin]
tcLw_XL=[lwThick,mpl.rcParams['lines.linewidth'],lwThick,mpl.rcParams['lines.linewidth']]
# DH: Q,Linepackrate:
tcC_QDH=plt.get_cmap("tab20b").colors[2*4+0]
tcC_QLPRate=plt.get_cmap("tab20b").colors[2*4+3]
# DH: Q,Linepackrate-Paar: Sequenz
tcC_DH=[tcC_QDH,tcC_QLPRate]
tcLs_DH=[mpl.rcParams['lines.linestyle'],lsThin]
tcLw_DH=[lwThick,mpl.rcParams['lines.linewidth']]
# DH: V
tcC_VDH=plt.get_cmap("tab20b").colors[1*4+2]
# 2 Zeiten auswählen
time1=timeSumMaxMx1DfFwesW
time2=timeSumMinMx1DfFwesW
# Ergebnisse für die 2 Zeiten holen
vAGSN_T1,vKNOT_T1,vROHR_T1,vFWVB_T1,vVBEL_T1,hpSL_T1,hpRL_T1=mxAdd(xm,mx,timeReq=time1)
vAGSN_T2,vKNOT_T2,vROHR_T2,vFWVB_T2,vVBEL_T2,hpSL_T2,hpRL_T2=mxAdd(xm,mx,timeReq=time2)
# Ref/Cmp zuordnen
timeRef=time1
timeCmp=time2
hpSLRef=hpSL_T1
hpRLRef=hpRL_T1
hpSLCmp=hpSL_T2
hpRLCmp=hpRL_T2
vROHRRef=vROHR_T1
vROHRCmp=vROHR_T2
vROHR_NFD=pd.merge(vROHRRef
,hpRLRef[hpRLRef.IptIdx=='S']
,how='left'
,left_on='pk'
,right_on='OBJID'
,suffixes=('','_AGSN')).filter(items=vROHRRef.columns.tolist()+['OBJID'])
plt.close()
size_DINA3quer=(16.5, 11.7)
plt.rc('figure',figsize=size_DINA3quer)
plt.rc('figure',dpi=72)
plt.rc('savefig',dpi=72*2)
fig=plt.figure()
# .............................................................
# Paramtrierung
# .............................................................
# links: 1 NFD und 1 NDD
# .............................................................
# links nimmt die ganz linke Hälfte des Dashboards ein
leftTileRight=0.5
# rechts: 3 ZKs
# .............................................................
# rechts nimmt die ganz linke Hälfte des Dashboards ein
rightTileStart=1-leftTileRight
# allerdings wird Platz für die y-Achsen benötigt
rightTileYAxisXSpace=0.125
rightTileStart=rightTileStart+rightTileYAxisXSpace
rightTileH_pad=0.5
# x-Achse
#majLocator=mdates.MinuteLocator(interval=5)
majLocator=mdates.MinuteLocator(byminute=[0,5,10,15,20,25,30,35,40,45,50,55])
majFormatter=mdates.DateFormatter('%d.%m.%y: %H:%M')
# .............................................................
# links: 1 NFD und 1 NDD
# .............................................................
gs1 = gridspec.GridSpec(2, 1)
axNfd = fig.add_subplot(gs1[0])
axHp = fig.add_subplot(gs1[1])
gs1.tight_layout(fig, rect=[0, 0, leftTileRight, 1])
#rect : if rect is given, it is interpreted as a rectangle
#(left, bottom, right, top) in the normalized figure coordinate that the whole subplots area (including labels) will fit into
# .............................................................
# NFD
# .............................................................
Rm.Rm.pltNetPipes(vROHR_NFD
,query="CONT_ID == '1001'"
,fmask=lambda row: True if row.KVR_i=='2' and row.KVR_k=='2' else False
,pAx=axNfd
,pAttributeFunc=lambda row: math.fabs(row['ROHR~*~*~*~QMAV'])
,pAttributeColorMapMin=0.
,pAttributeColorMapMax=1600.
,CBLabel='Q [t/h]'
,sort_values_by=['pAttributeFunc']
,sort_values_ascending=True
,pAttributeColorMapFmask=lambda row: True if not pd.isnull(row.OBJID) else False
,pAttributeColorMap2ndFmask=lambda row: True if pd.isnull(row.OBJID) else False
,pAttributeColorMap2ndUsageStart=1./4. # nicht zu weiß
,pAttributeColorMap2ndUsageEnd=1./2. # nicht zu schwarz
)
# .............................................................
# HP
# .............................................................
# (negativer) Abstand der 2. y-Achse von der Zeichenfläche
yTwinedAxesPosDeltaHP=-0.100
axHp.set_ylabel('p [bar]')
axHp.set_ylim(0,16)
axHp.set_yticks(np.arange(0, 16.1, 1))
axHp.plot(hpSLRef['x'],hpSLRef['bBzg'],color=hpCSL2)
axHp.plot(hpRLRef['x'],hpRLRef['bBzg'],color=hpCRL2)
axHp.plot(hpSLRef['x'],hpSLRef['zBzgMin'],color=hpZ,ls='--')
axHp.plot(hpSLRef['x'],hpSLRef['P'],color=hpCSL)
axHp.plot(hpRLRef['x'],hpRLRef['P'],color=hpCRL)
hpSLRef = hpSLRef.apply(pd.to_numeric, errors='ignore')
hpSLCmp = hpSLCmp.apply(pd.to_numeric, errors='ignore')
axHp.fill_between(hpSLRef['x'], hpSLRef['P'], hpSLCmp['P'], color='grey', alpha=0.5)
hpRLRef = hpRLRef.apply(pd.to_numeric, errors='ignore')
hpRLCmp = hpRLCmp.apply(pd.to_numeric, errors='ignore')
axHp.fill_between(hpRLRef['x'], hpRLRef['P'], hpRLCmp['P'], color='grey', alpha=0.5)
axHp.plot(hpSL_SMax['x'],hpSL_SMax['P'],color=hpCSLMax,ls='--')
axHp.plot(hpSL_SMin['x'],hpSL_SMin['P'],color=hpCSLMin,ls='--')
axHp.plot(hpRL_SMax['x'],hpRL_SMax['P'],color=hpCRLMax,ls='--')
axHp.plot(hpRL_SMin['x'],hpRL_SMin['P'],color=hpCRLMin,ls='--')
# x-Achse
ax=axHp
axHp.set_xlim(0,23000)
axHp.set_xticks(np.arange(0, 23000.1, 1000))
plt.setp(ax.xaxis.get_majorticklabels(),rotation='vertical',ha='center')
ax.grid()
# 2. y-Achse
axHp_2nd = axHp.twinx()
axHp_2nd.spines["left"].set_position(("axes", yTwinedAxesPosDeltaHP))
Rm.pltMakePatchSpinesInvisible(axHp_2nd)
axHp_2nd.spines['left'].set_visible(True)
axHp_2nd.yaxis.set_label_position('left')
axHp_2nd.yaxis.set_ticks_position('left')
axHp_2nd.set_ylabel('Q [t/h]')
axHp_2nd.set_ylim(-1600,1600)
axHp_2nd.set_yticks(np.arange(-1600, 1600.1, 200))
axHp_2nd.plot(hpRLRef['x'],hpRLRef['Q'],color=hpCRL3,ls='-')
axHp_2nd.fill_between(hpRLRef['x'], hpRLRef['Q'], hpRLCmp['Q'], color='mediumslateblue', alpha=0.5)
# .............................................................
# rechts: 3 ZKs
# .............................................................
gs2 = gridspec.GridSpec(3, 1)
axTcUp = fig.add_subplot(gs2[0])
axTcMi = fig.add_subplot(gs2[1])
axTcBo = fig.add_subplot(gs2[2])
gs2.tight_layout(fig, rect=[rightTileStart, 0, 1, 1], h_pad=rightTileH_pad)
#pad : float
#padding between the figure edge and the edges of subplots, as a fraction of the font-size
#h_pad, w_pad : float
#padding (height/width) between edges of adjacent subplots
# (negativer) Abstand der 2. y-Achse von der Zeichenfläche
yTwinedAxesPosDelta=-0.175
# .............................................................
# oberste ZK
# .............................................................
axTcUp.stackplot(mx1DfFwesW.index.values #x
,np.row_stack([mx1DfFwesW[col].values for col in mx1DfFwesW.columns.tolist()]) #y
,colors=[tcCm.colors[catA*ntcCatSub+0],tcCm.colors[catB*ntcCatSub+0],tcCm.colors[catC*ntcCatSub+0]]
,labels=['A','B','C']
)
axTcUp.set_ylabel('W [MW]')
axTcUp.set_ylim(200000,600000)
axTcUp.set_yticks(np.arange(200000, 600001, 100000))
axTcUp.set_yticklabels(["{0:2.0f}".format(x) for x in np.arange(20, 60.1,10)])
# x-Achse
ax=axTcUp
ax.xaxis.set_major_locator(majLocator)
ax.xaxis.set_major_formatter(majFormatter)
plt.setp(ax.xaxis.get_majorticklabels(),rotation='vertical',ha='center')
ax.grid()
# wg. der x-Achse (nur in der obersten ZK) muss der eigentliche Zeichenbereich verkleinert werden
pos1 = ax.get_position()
pos2 = [pos1.x0, pos1.y0+0.10, pos1.width, pos1.height * 0.65]
ax.set_position(pos2)
# .............................................................
# mittlere ZK
# .............................................................
# Bsp. Plot mit pltTC
yAxes,yLines,vLines,yLinesLegendLabels=Rm.Rm.pltTC(pd.concat([mx1DfPumpNywPairs,mx1DfPumpQ],axis=1)
,tcLines={
'RSLW~wNA~~XA'
:{'label':'RSLW~wNA~~XA'
,'forceYType':'N'
,'color':tcCm.colors[catFromIdx[0]*ntcCatSub+ntcCatSub-1]
,'linestyle':'-'
,'linewidth':lwThick}
,'PUMP~R-A-SS~R-A-DS~N'
:{'label':'PUMP~R-A-SS~R-A-DS~N','color':tcCm.colors[catFromIdx[0]*ntcCatSub+0]
,'linestyle':lsThin}
,'RSLW~wNB~~XA'
:{'label':'RSLW~wNB~~XA'
,'forceYType':'N'
,'color':tcCm.colors[catFromIdx[1]*ntcCatSub+ntcCatSub-1]
,'linestyle':'-'
,'linewidth':lwThick}
,'PUMP~R-B-SS~R-B-DS~N'
:{'label':'PUMP~R-B-SS~R-B-DS~N','color':tcCm.colors[catFromIdx[1]*ntcCatSub+0]
,'linestyle':lsThin}
,'RSLW~wNC~~XA'
:{'label':'RSLW~wNC~~XA'
,'forceYType':'N'
,'color':tcCm.colors[catFromIdx[2]*ntcCatSub+ntcCatSub-1]
,'linestyle':'-'
,'linewidth':lwThick}
,'PUMP~R-C-SS~R-C-DS~N'
:{'label':'PUMP~R-C-SS~R-C-DS~N','color':tcCm.colors[catFromIdx[2]*ntcCatSub+0]
,'linestyle':lsThin}
,'PUMP~R-A-SS~R-A-DS~QM'
:{'label':'PUMP~R-A-SS~R-A-DS~QM','color':tcCm.colors[catFromIdx[0]*ntcCatSub+0]
,'linestyle':lsThin}
,'PUMP~R-B-SS~R-B-DS~QM'
:{'label':'PUMP~R-B-SS~R-B-DS~QM','color':tcCm.colors[catFromIdx[1]*ntcCatSub+0]
,'linestyle':lsThin}
,'PUMP~R-C-SS~R-C-DS~QM'
:{'label':'PUMP~R-C-SS~R-C-DS~QM','color':tcCm.colors[catFromIdx[2]*ntcCatSub+0]
,'linestyle':lsThin}
}
,pAx=axTcMi
,majLocator=majLocator
,majFormatter=majFormatter
,xTicksLabelsOff=True
,yTwinedAxesPosDeltaHPStart=0
,yTwinedAxesPosDeltaHP=-0.175
,lOff=True
)
yAxes['N'].set_ylabel('N [rpm]')
yAxes['N'].set_ylim(1250,1650)
yAxes['N'].set_yticks(np.arange(1250, 1651, 50))
yAxes['QM'].set_ylabel('Q [t/h]')
yAxes['QM'].set_ylim(0,4000)
yAxes['QM'].set_yticks(np.arange(0, 4001, 500))
# .............................................................
# untere ZK
# .............................................................
axTcBo.set_ylabel('p [bar]')
axTcBo.set_ylim(4,8)
axTcBo.set_yticks(np.arange(4, 8.1, .4))
for idx,col in enumerate(mx1DfDH.columns.tolist()):
line,=axTcBo.plot(mx1DfDH.index.values,mx1DfDH[col]
,color= tcC_XL[idx]
,ls=tcLs_XL[idx]
,lw=tcLw_XL[idx]
)
ax=axTcBo
ax.xaxis.set_major_locator(majLocator)
ax.xaxis.set_major_formatter(majFormatter)
ax.set_xticklabels([])
ax.grid()
# x-Achsenbeschriftung ausschalten
for tic in ax.xaxis.get_major_ticks():
tic.tick1On = tic.tick2On = False
# 2. y-Achse
axTcBo_2nd = axTcBo.twinx()
axTcBo_2nd.spines["left"].set_position(("axes", yTwinedAxesPosDelta))
Rm.pltMakePatchSpinesInvisible(axTcBo_2nd)
axTcBo_2nd.spines['left'].set_visible(True)
axTcBo_2nd.yaxis.set_label_position('left')
axTcBo_2nd.yaxis.set_ticks_position('left')
axTcBo_2nd.set_ylabel('Q [t/h]')
axTcBo_2nd.set_ylim(-100,100)
axTcBo_2nd.set_yticks(np.arange(-100, 101, 20))
for idx,col in enumerate(mx1DfDHQ.columns.tolist()):
line,=axTcBo_2nd.plot(mx1DfDHQ.index.values,mx1DfDHQ[col]
,color= tcC_DH[idx]
,ls=tcLs_DH[idx]
,lw=tcLw_DH[idx]
)
# x-Achsenbeschriftung ausschalten
ax=axTcBo_2nd
ax.xaxis.set_major_locator(majLocator)
ax.xaxis.set_major_formatter(majFormatter)
plt.setp(ax.xaxis.get_majorticklabels(),rotation='vertical',ha='center')
ax.set_xticklabels([])
ax.grid()
for tic in ax.xaxis.get_major_ticks():
tic.tick1On = tic.tick2On = False
# 3. y-Achse
axTcBo_3rd = axTcBo.twinx()
axTcBo_3rd.spines["left"].set_position(("axes", yTwinedAxesPosDelta*2))
Rm.pltMakePatchSpinesInvisible(axTcBo_3rd)
axTcBo_3rd.spines['left'].set_visible(True)
axTcBo_3rd.yaxis.set_label_position('left')
axTcBo_3rd.yaxis.set_ticks_position('left')
axTcBo_3rd.set_ylabel('dV [(N)m3]')
axTcBo_3rd.set_ylim(-1,1)
axTcBo_3rd.set_yticks(np.arange(-1, 1.1, .2))
line,=axTcBo_3rd.plot(mx1DfDHV.index.values,mx1DfDHV['ALLG~~~LINEPACKGEOM']
,color= tcC_VDH
,ls='-.'
)
# x-Achsenbeschriftung ausschalten
ax=axTcBo_3rd
ax.xaxis.set_major_locator(majLocator)
ax.xaxis.set_major_formatter(majFormatter)
plt.setp(ax.xaxis.get_majorticklabels(),rotation='vertical',ha='center')
ax.set_xticklabels([])
ax.grid()
for tic in ax.xaxis.get_major_ticks():
tic.tick1On = tic.tick2On = False
# Zeitcursor für 2 Zeiten in allen ZKs darstellen
axLst=[axTcUp,axTcMi,axTcBo]
for ax in axLst:
vLinePlotted=ax.axvline(x=timeRef, ymin=0, ymax=1
,label='Zeit 1'
,color='dimgrey'
# ,linestyle=linestyle
# ,linewidth=linewidth
)
vLinePlotted=ax.axvline(x=timeCmp, ymin=0, ymax=1
,label='Zeit 2'
,color='dimgrey'
,linestyle='--'
# ,linewidth=linewidth
)
plt.savefig('Dashboard.pdf',format='pdf',bbox_inches='tight')#,pad_inches=2)
# wieder 1. Zeit
mx=xm.MxAdd()
vROHR=xm.dataFrames['vROHR']
vROHR.info()
vKNOT=xm.dataFrames['vKNOT']
vKNOTexp=xm.vKNOTexpEBES()
import re
qsCols=[col for col in vKNOTexp.columns.tolist() if re.search('^qs_',col) != None]
qsCols
qsInfCols=[col for col in vKNOTexp.columns.tolist() if re.search('^qs[a-zA-Z0-9]+',col) != None]
qsInfCols
vROHRexp=xm.vROHRexpEBES(vKNOTexp)
vROHRexp.shape
vROHR.shape
vROHRexp['QAbs']=vROHRexp.apply(lambda row: math.fabs(row['ROHR~*~*~*~QMAV']),axis=1)
vROHRexp=vROHRexp[vROHRexp['KVR']=='1']
grpObj=vROHRexp.groupby(by=['qsigRank'],as_index=False)
d={col:'min' for col in ['qsigStr','qs_1_A','qs_2_B','qs_3_C']}
d.update({'qsigFWVB~*~*~*~W':'min'})
d.update({'qsigRank_sumL':'min'})
df=grpObj.agg(d).sort_values(by=['qsigRank'],ascending=True)
df
df[df['qsigStr']=='100']['qsigFWVB~*~*~*~W'].iloc[0]
plt.close()
size_DINA3quer=(16.5, 11.7)
dpiSize=72
fig=plt.figure(figsize=size_DINA3quer,dpi=dpiSize)
gs = gridspec.GridSpec(1, 1)
# ---
#
axNfd = fig.add_subplot(gs[0])
Rm.Rm.pltNetPipes(vROHRexp
,query="CONT_ID == '1001'"
,fmask=lambda row: True if row.KVR_i=='1' and row.KVR_k=='1' else False
,pAx=axNfd
,pAttribute='qsA'
,pAttributeColorMapFmask=lambda row: True if row.qsA >0 else False
,pAttributeColorMap2ndFmask=lambda row: True if row.qsA <=0 else False # da A 0 ist wird nur 1 Farbwert verwendet;
# ... keine Unterscheidung zwischen von A nicht versorgten und gar nicht versorgten
,pAttrLineSize='QAbs'
,sort_values_by=['QAbs']
,sort_values_ascending=True
,pAttrLineSizeFactor=3*1./(vROHRexp['QAbs'].std()*2.)
,pAttributeColorMap2ndUsageStart=1./3
,CBBinTicks=21
,CBLabel='Versorgung durch A in %'
)
txt=axNfd.set_title('EB von EG A (nicht durch EG A oder gar nicht von EGn versorgte: grau)')
gs.tight_layout(fig)
plt.show()
# dasselbe mit einer diskreten CM die eine explizite Aussage trifft was magenta und was cyan sein soll
cmap = matplotlib.colors.ListedColormap(np.vstack(
(
Rm.pltMakeCategoricalColors(list(matplotlib.colors.to_rgb('cyan')),nOfSubColorsReq=5,reversedOrder=False),
Rm.pltMakeCategoricalColors(list(matplotlib.colors.to_rgb('magenta')),nOfSubColorsReq=15,reversedOrder=True)
)
))
plt.close()
size_DINA3quer=(16.5, 11.7)
dpiSize=72
fig=plt.figure(figsize=size_DINA3quer,dpi=dpiSize)
gs = gridspec.GridSpec(1, 1)
# ---
#
axNfd = fig.add_subplot(gs[0])
Rm.Rm.pltNetPipes(vROHRexp
,query="CONT_ID == '1001'"
,fmask=lambda row: True if row.KVR_i=='1' and row.KVR_k=='1' else False
,pAx=axNfd
,pAttribute='qsA'
,pAttributeColorMap=cmap
,pAttributeColorMapFmask=lambda row: True if row.qsA >0 else False
,pAttributeColorMap2ndFmask=lambda row: True if row.qsA <=0 else False # da A 0 ist wird nur 1 Farbwert verwendet;
# ... keine Unterscheidung zwischen von A nicht versorgten und gar nicht versorgten
,pAttrLineSize='QAbs'
,sort_values_by=['QAbs']
,sort_values_ascending=True
,pAttrLineSizeFactor=3*1./(vROHRexp['QAbs'].std()*2.)
,pAttributeColorMap2ndUsageStart=1./3
,CBBinTicks=21
,CBLabel='Versorgung durch A in %'
)
txt=axNfd.set_title('EB von EG A (nicht durch EG A oder gar nicht von EGn versorgte: grau)')
gs.tight_layout(fig)
plt.show()
# diskrete CM für die QSIGs vorbelegen
# ------------------------------------
# Anz. EGn
anzEG=len(qsCols)
anzQSIG_moeglich=int(math.pow(2,anzEG))
# kont. CM ist Ausgangsbasis
cMap=plt.cm.get_cmap('jet') # kont.: hasattr(cMap,'from_list')
# Farben
randOffset=.3
cmQSIG=cMap.from_list('cmQSIG'
, cMap(np.linspace(0+randOffset, 1-randOffset, anzQSIG_moeglich))
, anzQSIG_moeglich)
plt.close()
size_DINA6quer=(5.8,4.1)
fig, ax = plt.subplots(figsize=size_DINA6quer)
fig.subplots_adjust(bottom=0.5)
norm=matplotlib.colors.Normalize(vmin=0, vmax=anzQSIG_moeglich)
cb=matplotlib.colorbar.ColorbarBase(ax, cmap=cmQSIG,norm=norm,orientation='horizontal')
cb.set_label('so viele Farben wie mögliche QSIGen')
plt.show()
# einzelne Farben darin ersetzen
def f(cMap,idxCol,newCol):
colors=cMap(np.arange(cMap.N,dtype=int)) # alle Farben
newCol=list(matplotlib.colors.to_rgb(newCol))
newCol.extend([1.])
colors[idxCol]=newCol
return matplotlib.colors.ListedColormap(colors)
# Farbe der 0-Einspeisergruppe festlegen
ZColor='lightgray'
cmQSIG=f(cmQSIG,int('000',2),ZColor)
# Farbe der 1-Einspeisergruppe festlegen
ZColor='darkgray'
cmQSIG=f(cmQSIG,int('111',2),ZColor)
# Farbe der A-Einspeisergruppe festlegen -1.
AColColor='magenta'#'mediumorchid'
cmQSIG=f(cmQSIG,int('100',2),AColColor)
# 2.
NAColor='orange'
cmQSIG=f(cmQSIG,int('010',2),NAColor)
# 3.
cmQSIG=f(cmQSIG,int('001',2),'r')
plt.close()
size_DINA6quer=(5.8,4.1)
fig, ax = plt.subplots(figsize=size_DINA6quer)
fig.subplots_adjust(bottom=0.5)
norm=matplotlib.colors.Normalize(vmin=0, vmax=anzQSIG_moeglich)
cb=matplotlib.colorbar.ColorbarBase(ax, cmap=cmQSIG,norm=norm,orientation='horizontal')
cb.set_label('so viele Farben wie mögliche QSIGen')
plt.show()
vROHRexp['qsigInt']=vROHRexp.apply(lambda row: int(row.qsigStr,2) ,axis=1)
vROHRexp['qsigInt'].unique() # der Wertebereich ist weder zwingend lückenlos,
#noch hat er zwingend soviele Ausprägungen wie die CM Farben hat
import numpy as np
plt.close()
size_DINA3quer=(16.5, 11.7)
dpiSize=72
fig=plt.figure(figsize=size_DINA3quer,dpi=dpiSize)
gs = gridspec.GridSpec(1, 1)
# ---
#
axNfd = fig.add_subplot(gs[0])
Rm.Rm.pltNetPipes(vROHRexp
,query="CONT_ID == '1001'"
,fmask=lambda row: True if row.KVR_i=='1' and row.KVR_k=='1' else False
,pAx=axNfd
,pAttributeColorMap=cmQSIG
,pAttribute='qsigInt'
,CBBinBounds=[idx for idx in range(anzQSIG_moeglich+1)]
,pAttrLineSize='QAbs'
,sort_values_by=['QAbs']
,sort_values_ascending=True
,pAttrLineSizeFactor=3*1./(vROHRexp['QAbs'].std()*2.)
# es werden so viele Ticks generiert wie Bounds Einträge hat - 1 mehr als Anzahl Farben
# wir wollen weniger Ticks (so viele wie Anzahl Farben) und diese in der Mitte
,CBTicks=[idx+.5 for idx in range(anzQSIG_moeglich)]
,CBTickLabels=["{0:b}".format(idx).zfill(anzEG) for idx in range(anzQSIG_moeglich)]
,CBLabel='QSIG'
)
txt=axNfd.set_title('Quellsignatur ABC (alle Signaturen in Farbskala)')
gs.tight_layout(fig)
plt.show()
# nur die Farben holen, welche vorkommen
# alle Farben holen, welche die cmap hat
ccolors = plt.get_cmap(cmQSIG)(np.arange(plt.get_cmap(cmQSIG).N,dtype=int))
# die gewuenschten Farben extrahieren
QSIGBzLst=sorted([int(idx,2) for idx in vROHRexp['qsigStr'].unique()])
ccolors=[ccolors[idx] for idx in QSIGBzLst]
cmQSIGBz = matplotlib.colors.ListedColormap(ccolors)
QSIGBzLst
plt.close()
size_DINA6quer=(5.8,4.1)
fig, ax = plt.subplots(figsize=size_DINA6quer)
fig.subplots_adjust(bottom=0.5)
norm=matplotlib.colors.Normalize(vmin=0, vmax=len(QSIGBzLst))
cb=matplotlib.colorbar.ColorbarBase(ax, cmap=cmQSIGBz,norm=norm,orientation='horizontal')
cb.set_label('so viele Farben wie vorkommende QSIGen')
plt.show()
QSIGBzLstIdx=[idx for idx in range(len(QSIGBzLst))]
QSIGBzLstIdx
vROHRexp['qsigIntBzIdx']=vROHRexp.apply(lambda row: QSIGBzLst.index(row.qsigInt) ,axis=1)
vROHRexp['qsigIntBzIdx'].unique()
# der Wertebereich ist jetzt lückenlos und hat soviele voneinander verschiedene Ausprägungen wie die CM Farben hat
plt.close()
size_DINA3quer=(16.5, 11.7)
dpiSize=72
fig=plt.figure(figsize=size_DINA3quer,dpi=dpiSize)
gs = gridspec.GridSpec(1, 1)
ticksLabels=["{0:b}".format(QSIGBzLst[idx]).zfill(anzEG) for idx in range(plt.get_cmap(cmQSIGBz).N)]
# ---
#
axNfd = fig.add_subplot(gs[0])
Rm.Rm.pltNetPipes(vROHRexp
,query="CONT_ID == '1001'"
,fmask=lambda row: True if row.KVR_i=='1' and row.KVR_k=='1' else False
,pAx=axNfd
,pAttributeColorMap=cmQSIGBz
,pAttribute='qsigIntBzIdx'
,CBBinBounds=[idx for idx in range(len(vROHRexp['qsigIntBzIdx'].unique())+1)]
,pAttrLineSize='QAbs'
,sort_values_by=['QAbs']
,sort_values_ascending=True
,pAttrLineSizeFactor=3*1./(vROHRexp['QAbs'].std()*2.)
# es werden so viele Ticks generiert wie Bounds Einträge hat - 1 mehr als Anzahl Farben
# wir wollen weniger Ticks (so viele wie Anzahl Farben) und diese in der Mitte
,CBTicks=[idx+.5 for idx in range(len(vROHRexp['qsigIntBzIdx'].unique()))]
,CBTickLabels=["dummy" for idx in range(len(vROHRexp['qsigIntBzIdx'].unique()))]
,CBLabel='QSIG'
)
txt=axNfd.set_title('Quellsignatur ABC (nur vorkommende Signaturen in Farbskala)')
ax = plt.gca()
#print(ax.get_yticks())
#print(ax.get_yticklabels())
newTickLabels=len(vROHRexp['qsigIntBzIdx'].unique())*[""]
df=vROHRexp[['qsigIntBzIdx','qsigStr','qsigRank_L','qsigRank_sumL']].drop_duplicates().reset_index()
for index, row in df.iterrows():
idx2=row['qsigIntBzIdx']
qsigStr=row['qsigStr']
newTickLabels[idx2]=qsigStr
ax.set_yticklabels(newTickLabels,rotation=90,va='center',ha='left')
gs.tight_layout(fig)
plt.show()
!python setup.py clean sdist
!twine upload -u PT3S -p PythonTools3S dist/*
### ggf. Rechte erforderlich:
### entweder in PowerShell: Start-Process powershell -Verb runAs
### oder RechteMausTaste WindowsSymbol: Windows PowerShell (Administrator)
### dann (ohne ! in PowerShell auf pip-Verz.):
!pip install -e .
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The dataset we just generated looks like this
Step2: Next up, let's split the dataset into a training and test set. The training set will be used to develop and tune our models. The test set will be completely left alone until the very end, at which point you'll run your finished models on it. Having a test set will allow you to get a good estimate of how well your models would perform out in the wild on unseen data.
Step3: We are going to try to predict the classes with a k Nearest Neighbor (kNN) classifier. Chapter 2 of the Introduction to Statistical Learning book provides a great intro to the theory behind kNN. We are huge fans of the ISLR book, so definitely check it out if you have the time. You could also have a look at this previous post that teaches you how to implement the algorithm from scratch in Python.
Step4: Actually, it looks like these models perform approximately as well on the test data. Here are the decision boundaries we learned on the training set, applied to the test set. See if you can figure out where the two models are making their mistakes.
Step5: Looking better! Let's check out the decision boundary for the k=50 model.
Step6: Most real-life datasets will be full of noise, so overfitting is always a danger. However, what exactly constitutes "overfitting" will differ tremendously between datasets, and you have to decide what course of action is optimal. The best guidance is your model's performance on the test set and on cross-validation. If a model that "should" be overfitting is actually generalizing well, then there is no need to worry.
Step7: We then run the code like this
Step8: Overfitting means modelling the noise
| <ASSISTANT_TASK:>
Python Code:
# Creating the dataset
# e.g. make_moons generates crescent-shaped data
# Check out make_classification, which generates ~linearly-separable data
from sklearn.datasets import make_moons
X, y = make_moons(
n_samples=500, # the number of observations
random_state=1,
noise=0.3 #0.3
)
# Take a peek
print(X[:10,])
print(y[:10])
import matplotlib.pyplot as plt
%matplotlib inline
# Plot the first feature against the other, color by class
plt.scatter(X[y == 1, 0], X[y == 1, 1], color="#EE3D34", marker="x")
plt.scatter(X[y == 0, 0], X[y == 0, 1], color="#4458A7", marker="o")
from sklearn.cross_validation import train_test_split
# Split into training and test sets
XTrain, XTest, yTrain, yTest = train_test_split(X, y, random_state=1)
from sklearn.neighbors import KNeighborsClassifier
from sklearn import metrics
knn99 = KNeighborsClassifier(n_neighbors = 99)
knn99.fit(XTrain, yTrain)
yPredK99 = knn99.predict(XTest)
print "Overall Error of k=99 Model:", 1 - round(metrics.accuracy_score(yTest, yPredK99), 2)
knn1 = KNeighborsClassifier(n_neighbors = 1)
knn1.fit(XTrain, yTrain)
yPredK1 = knn1.predict(XTest)
print "Overall Error of k=1 Model:", 1 - round(metrics.accuracy_score(yTest, yPredK1), 2)
knn50 = KNeighborsClassifier(n_neighbors = 50)
knn50.fit(XTrain, yTrain)
yPredK50 = knn50.predict(XTest)
print "Overall Error of k=50 Model:", 1 - round(metrics.accuracy_score(yTest, yPredK50), 2)
# Let's tone down the noise in the dataset
X_no_noise, y_no_noise = make_moons(
n_samples=500, # the number of observations
random_state=1,
noise=0.1
)
# Split into training and test sets
XTrain_no_noise, XTest_no_noise, yTrain_no_noise, yTest_no_noise = train_test_split(X_no_noise, y_no_noise, random_state=1)
model = KNeighborsClassifier()
datasets = {"Training Set": [XTrain, yTrain]}
explore_fitting_boundaries(model=model, n_neighbors=[1], datasets={"Training Set": [XTrain, yTrain]}, width=5)
explore_fitting_boundaries(model=model, n_neighbors=[1], datasets={"Training Set": [XTrain_no_noise, yTrain_no_noise]}, width=5)
import numpy as np
def detect_plot_dimension(X, h=0.02, b=0.05):
'''
Function to detect the plot dimensions (considering Training and Test data)
Parameters: X - the dataset
h - the granularity of the visual display of the boundary
b - the space buffer at the extreme values
'''
x_min, x_max = X[:, 0].min() - b, X[:, 0].max() + b
y_min, y_max = X[:, 1].min() - b, X[:, 1].max() + b
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
dimension = xx, yy
return dimension
def detect_decision_boundary(dimension, model):
'''
Function to detect the decision boundaries
Parameters: dimension - the plot dimension as calculated by detect_plot_dimension()
model - the model used for the prediction
'''
xx, yy = dimension # unpack the dimensions
boundary = model.predict(np.c_[xx.ravel(), yy.ravel()])
boundary = boundary.reshape(xx.shape) # Put the result into a color plot
return boundary
def plot_decision_boundary(panel, dimension, boundary, colors=['#DADDED', '#FBD8D8']):
'''
Function to plot decision boundary
Parameters: panel - the plotting element to modify
dimension - the plot dimension as calculated by detect_plot_dimension()
boundary - boundary tuple as calculated by detect_decision_boundary()
colors - the colors to use for the boundary surface
'''
xx, yy = dimension # unpack the dimensions
panel.contourf(xx, yy, boundary, cmap=ListedColormap(colors), alpha=1)
panel.contour(xx, yy, boundary, colors="g", alpha=1, linewidths=0.5) # the decision boundary in green
def plot_dataset(panel, X, y, colors=["#EE3D34", "#4458A7"], markers=["x", "o"]):
'''
Function to plot the observations of of a dataset with their class assigment
Parameters: panel - the plotting element to modify
X - the predictor dataset
y - the classification
colors - the color of the two classes
markers - the markers to represent the two classes
'''
panel.scatter(X[y == 1, 0], X[y == 1, 1], color=colors[0], marker=markers[0])#, s=6)
panel.scatter(X[y == 0, 0], X[y == 0, 1], color=colors[1], marker=markers[1])#, s=6)
def calculate_prediction_error(model, X, y):
'''
Function to estimate the prediction error
Parameters: model - the model used for the prediction
X - the predictor dataset
y - the classification
'''
yPred = model.predict(X)
score = 1 - round(metrics.accuracy_score(y, yPred), 2)
return score
def plot_prediction_error(panel, dimension, score, b=.3):
'''
Function to plot the prediction error in the bottom right of the plot
Parameters: panel - the plotting element to modify
dimension - the plot dimension as calculated by detect_plot_dimension()
score - the prediciton error as calculated by calcualte_prediciton_error()
b - the offset of the text from the bottom right
'''
xx, yy = dimension # unpack the dimensions
panel.text(xx.max() - b, yy.min() + b, ('%.2f' % score).lstrip('0'), size=15, horizontalalignment='right')
def explore_fitting_boundaries(model, n_neighbors, datasets, width): #direction="vertical"):
'''
Function to produce a grid of k number of neighbours (x-axis) against different datasets
tested (y-axis)
Note: Only supplying one dataset works
Parameters: model - the model to be tested (assumes KNeighborsClassifier())
n_neighbors - a list of the the number k to be tested
datasets - a dictionary of lists with each a training and a test set
figsize - the dimension of the plot
# direction - should the comparable plots be arranged verically or horizontally
'''
# determine the height of the plot given the aspect ration of each panel should be equal
height = float(width)/len(n_neighbors) * len(datasets.keys())
nrows = len(datasets.keys())
ncols = len(n_neighbors)
# set up the plot
figure, axes = plt.subplots(
nrows,
ncols,
figsize=(width, height),
sharex=True,
sharey=True
)
dimension = detect_plot_dimension(X, h=0.02) # the dimension each subplot based on the data
i = 0
for n in n_neighbors:
model.n_neighbors = n
model.fit(datasets["Training Set"][0], datasets["Training Set"][1])
boundary = detect_decision_boundary(dimension, model)
j = 0
for d in datasets.keys():
try:
panel = axes[j, i]
except (TypeError, IndexError):
if (nrows * ncols) == 1:
panel = axes
elif nrows == 1: # if we only have one dataset
panel = axes[i]
elif ncols == 1: # if we only try one number of neighbors
panel = axes[j]
plot_decision_boundary(panel, dimension, boundary) # plot the decision boundary
plot_dataset(panel, X=datasets[d][0], y=datasets[d][1]) # plot the observations
score = calculate_prediction_error(model, X=datasets[d][0], y=datasets[d][1])
plot_prediction_error(panel, dimension, score, b=0.2) # plot the score
# make compacted layout
panel.set_frame_on(False)
panel.set_xticks([])
panel.set_yticks([])
# format the axis labels
if i == 0:
panel.set_ylabel(d)
if j == 0:
panel.set_title('k={}'.format(n))
j += 1
i += 1
plt.subplots_adjust(hspace=0, wspace=0) # make compacted layout
from matplotlib.colors import ListedColormap
model = KNeighborsClassifier()
n_neighbors = [200, 100, 20, 5, 1]
datasets = {
"Training Set": [XTrain, yTrain],
"Test Set": [XTest, yTest]
}
width = 15
# # explore_fitting_boundaries(model, n_neighbors, datasets, width)
explore_fitting_boundaries(model=model, n_neighbors=n_neighbors, datasets=datasets, width=width)
from sklearn.cross_validation import train_test_split, cross_val_score
knn = KNeighborsClassifier()
# the range of number of neighbours we want to test
n_neighbors = np.arange(1, 141, 2)
# here we store the results of each model
train_scores = list()
test_scores = list()
cv_scores = list()
for n in n_neighbors:
knn.n_neighbors = n
knn.fit(XTrain, yTrain)
train_scores.append(1 - metrics.accuracy_score(yTrain, knn.predict(XTrain))) # this will over-estimate the accuracy
test_scores.append(1 - metrics.accuracy_score(yTest, knn.predict(XTest)))
cv_scores.append(1 - cross_val_score(knn, XTrain, yTrain, cv = 5).mean()) # we take the mean of the CV scores
print('The best choice for Train would be {}, or the Test {} and with CV {}'.format(
n_neighbors[train_scores == min(train_scores)],
n_neighbors[test_scores == min(test_scores)],
n_neighbors[cv_scores == min(cv_scores)]
)
)
plt.plot(n_neighbors, train_scores, c = "grey", label = "Training Set")
plt.plot(n_neighbors, test_scores, c = "orange", label = "Test Set")
plt.plot(n_neighbors, cv_scores, c = "green", label = "CV")
plt.xlabel('Number of K nearest neighbors')
plt.ylabel('Classification Error')
plt.gca().invert_xaxis()
plt.legend(loc = "lower left")
plt.show()
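# Aside (added sketch, not part of the original notebook): the same sweep over k can be
# done more compactly with scikit-learn's GridSearchCV; the import path below depends on
# the installed scikit-learn version, so both are tried.
try:
    from sklearn.model_selection import GridSearchCV
except ImportError:
    from sklearn.grid_search import GridSearchCV
knn_grid = GridSearchCV(KNeighborsClassifier(), {'n_neighbors': list(n_neighbors)}, cv=5)
knn_grid.fit(XTrain, yTrain)
print(knn_grid.best_params_)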
[Here](http://scott.fortmann-roe.com/docs/BiasVariance.html) is a nice introductory article on the topic that goes into more depth.
It would generalize well to new data. On the other hand, a model that is too rigid will not be able to learn particular patterns in the data and will gloss over them. Such a model would be considered to be underfitting.
All algorithms can be considered to have a certain degree of flexibility, and this is not specific to kNN. The principle of finding the sweet spot of flexibility, one that describes the pattern of the data well but still generalises to new data, therefore applies to all learning algorithms. A lot of work has been done on tuning the flexibility of models; regularisation, for instance, plays with exactly this consideration.
Model complexity and model flexibility are terms often used to describe how adaptable a model is to the patterns in the training data.
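As a brief aside (not from the original notebook), this trade-off is often summarised by the bias-variance decomposition of the expected test error under squared loss, $E\big[(y - \hat{f}(x))^2\big] = \mathrm{Bias}\big[\hat{f}(x)\big]^2 + \mathrm{Var}\big[\hat{f}(x)\big] + \sigma^2$, where $\sigma^2$ is the irreducible noise: very flexible models (small k in kNN) tend to have low bias and high variance, while rigid models (large k) have high bias and low variance.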
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: And some more specialized dependencies
Step2: Configuration for this figure.
Step3: Open a chest located on a remote globus endpoint and load a remote json configuration file.
Step4: We want to plot the spike depth, which is the 'H' field in the chest.
Step5: Plot the bubble height, the 'H' keys, vs. time.
Step6: What if we want to dig deeper?
Step7: Now let's plot it.
Step8: That doesn't look great. Let's zoom in. The data is cached, so this is fast!
Step9: Now where's that 99% threshold?
Step10: Let's re-do the analysis for a 95% threshold to make sure nothing changes.
Step11: That didn't change much. What about 90%?
Step12: That introduces an offset, but the spike velocity (slope) looks the same for late time.
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10.0, 8.0)
import matplotlib.pyplot as plt
import numpy as np
from scipy.interpolate import interp1d, InterpolatedUnivariateSpline
from scipy.optimize import bisect
import json
from functools import partial
class Foo: pass
from chest import Chest
from slict import CachedSlict
from glopen import glopen, glopen_many
config = Foo()
config.name = "HighAspect/HA_conductivity_1.0E-5/HA_conductivity_1.0E-5"
#config.arch_end = "maxhutch#alpha-admin/~/pub/"
config.arch_end = "alcf#dtn_mira/projects/alpha-nek/experiments"
c = Chest(path = "{:s}-results".format(config.name),
open = partial(glopen, endpoint=config.arch_end),
open_many = partial(glopen_many, endpoint=config.arch_end),
available_memory=1e12)
sc = CachedSlict(c)
with glopen(
"{:s}.json".format(config.name), mode='r',
endpoint = config.arch_end,
) as f:
params = json.load(f)
c.prefetch(sc[:,'H'].full_keys())
c.prefetch(sc[:,'h'].full_keys())
c.prefetch(sc[:,'flux_proj_z'].full_keys())
c.prefetch(sc[:,'frame'].full_keys())
c.prefetch(sc[:,'t_yz'].full_keys())
c.prefetch(sc[:,'w_yz'].full_keys())
spl = InterpolatedUnivariateSpline(sc[:,'H'].keys(), sc[:,'h'].values(), k=3)
dHdt = spl.derivative()
Ts = np.linspace(sc[:,'H'].keys()[0], sc[:,'H'].keys()[-1], 1000)
V = -dHdt(Ts)
theory0 = np.sqrt(params["atwood"] * params["g"] * 2 * params["extent_mesh"][0] / np.pi)
theory1 = np.sqrt(
params["atwood"] * params["g"] * params["extent_mesh"][0] / np.pi
+ (2.*np.pi*params["viscosity"] / params["extent_mesh"][0])**2
) - (2.*np.pi*params["viscosity"] / params["extent_mesh"][0])
theory2 = params["atwood"] * params["g"] * params["extent_mesh"][0]**2 / (
113.816*params["viscosity"])
theory3 = 0.008440 * params["atwood"] * params["g"] * params["extent_mesh"][0]**2 / params["viscosity"]
flux_midplane = np.array([sc[t,'flux_proj_z'][1892] for t in sc[:,'flux_proj_z'].keys()])
flux_total = np.array([np.sum(sc[t,'flux_proj_z']) for t in sc[:,'flux_proj_z'].keys()])
fig, axs = plt.subplots(2,1, sharex=True)
axs[0].plot(Ts, spl(Ts)/128);
axs[0].plot(sc[:,'H'].keys(), -np.array(sc[:,'H'].values()), 'yo');
axs[0].set_ylabel('Depth (m)')
#axs[1].plot(sc[:,'frame'].values(), -V/128);
#axs[1].plot([Ts[0],Ts[-1]], [theory0, theory0], 'k--');
#axs[1].plot([Ts[0],Ts[-1]], [theory1, theory1], 'k-.');
#axs[1].plot([Ts[0],Ts[-1]], [theory2, theory2], 'k--');
#axs[1].plot([Ts[0],Ts[-1]], [theory3, theory3], 'k-.');
#axs[1].plot(sc[:,'frame'].values(), 16.*flux_midplane)
axs[1].plot(sc[:,'frame'].values(), sc[:,'Dissipated'].values())
#axs[1].plot(sc[:,'frame'].values(), flux_total)
axs[1].set_ylabel('Velocity (m/s)')
axs[1].set_xlabel('Time (s)');
#plt.xlim([20,40])
plt.savefig('Figure1.eps')
diss = np.array([np.sum(sc[T,'d_yz']) for T in sc[:,'d_yz'].keys()]);
plt.plot(diss)
flux = []
vflux = []
for T in sc[:,'w_yz'].keys():
tmp = sc[T,'w_yz'] * sc[T,'t_yz']
tmp[sc[T,'t_yz'] < 0] = 0
tmp[tmp < 0] = 0.
flux.append(np.sum(tmp) / sc[T,'h'])
tmp = sc[T,'w_yz'].copy()
tmp[tmp < 0] = 0.
vflux.append(np.sum(tmp) / sc[T,'h'])
plt.plot(flux)
plt.plot(vflux)
c.prefetch(sc[:,'t_proj_z'].full_keys())
Ts = sc[:,'t_proj_z'].keys()
Zs = np.linspace(params["root_mesh"][2], params["extent_mesh"][2], sc[Ts[0], 't_proj_z'].shape[0])
for t in Ts:
plt.plot(Zs,sc[t,'t_proj_z'], 'k-')
plt.xlim(xmin=params["root_mesh"][2],xmax=params["extent_mesh"][2]);
plt.ylim(ymin=-1.1, ymax=1.1)
plt.xlabel("Height (m)");
plt.ylabel(r"$\Delta \rho / \bar{\rho}$");
plt.title("Density profiles");
Ts = sc[:,'t_proj_z'].keys()
Zs = np.linspace(params["root_mesh"][2], params["extent_mesh"][2], sc[Ts[0], 't_proj_z'].shape[0])
for t in Ts:
plt.plot(Zs,sc[t,'t_proj_z'], 'k-')
plt.xlim(xmin=-1.5,xmax=1.5);
plt.ylim(ymin=-1.1, ymax=1.1);
plt.xlabel("Height (m)");
plt.ylabel(r"$\Delta \rho / \bar{\rho}$");
plt.title("Density profiles");
Ts = sc[:,'t_proj_z'].keys()
Zs = np.linspace(params["root_mesh"][2], params["extent_mesh"][2], sc[Ts[0], 't_proj_z'].shape[0])
for t in Ts:
plt.plot(Zs,sc[t,'t_proj_z'], 'k-')
plt.plot([-1.5, 0.], np.array([.98, .98]));
plt.xlim(xmin=-1.5,xmax=0);
plt.xlabel("Height (m)");
plt.ylabel(r"$\Delta \rho / \bar{\rho}$");
plt.title("Density profiles");
plt.ylim(ymin=0.9,ymax=1.1);
Ts = sc[:,'t_proj_z'].keys()
Zs = np.linspace(params["root_mesh"][2], params["extent_mesh"][2], sc[Ts[0], 't_proj_z'].shape[0])
my_H = []
for t in Ts:
f = interp1d(Zs, sc[t, 't_proj_z']-(.90))
my_H.append(bisect(f,params["root_mesh"][2], 0))
my_spl = InterpolatedUnivariateSpline(sc[:,'H'].keys(), my_H, k=4)
my_dHdt = my_spl.derivative()
Ts = np.linspace(sc[:,'H'].keys()[0], sc[:,'H'].keys()[-1], 1000)
my_V = -my_dHdt(Ts)
theory = np.sqrt(
params["atwood"] * params["g"] * params["extent_mesh"][0] / np.pi
+ (2.*np.pi*params["viscosity"] / params["extent_mesh"][0])**2
) - (2.*np.pi*params["viscosity"] / params["extent_mesh"][0])
fig, axs = plt.subplots(2,1, sharex=True)
axs[0].plot(Ts, -spl(Ts));
axs[0].plot(Ts, -my_spl(Ts), 'k--');
axs[0].plot(sc[:,'H'].keys(), -np.array(my_H), 'yo');
axs[0].set_ylabel('Depth (m)')
axs[1].plot(Ts, V);
axs[1].plot(Ts, my_V, 'k');
axs[1].plot([Ts[0],Ts[-1]], [theory, theory], 'k--');
axs[1].set_ylabel('Velocity (m/s)')
axs[1].set_xlabel('Time (s)');
Ts = sc[:,'t_proj_z'].keys()
Zs = np.linspace(params["root_mesh"][2], params["extent_mesh"][2], sc[Ts[0], 't_proj_z'].shape[0])
my_H = []
for t in Ts:
f = interp1d(Zs, sc[t, 't_proj_z']-(.80))
my_H.append(bisect(f,params["root_mesh"][2], 0))
my_spl = InterpolatedUnivariateSpline(sc[:,'H'].keys(), my_H, k=4)
my_dHdt = my_spl.derivative()
Ts = np.linspace(sc[:,'H'].keys()[0], sc[:,'H'].keys()[-1], 1000)
my_V = -my_dHdt(Ts)
theory = np.sqrt(
params["atwood"] * params["g"] * params["extent_mesh"][0] / np.pi
+ (2.*np.pi*params["viscosity"] / params["extent_mesh"][0])**2
) - (2.*np.pi*params["viscosity"] / params["extent_mesh"][0])
fig, axs = plt.subplots(2,1, sharex=True)
axs[0].plot(Ts, -spl(Ts));
axs[0].plot(Ts, -my_spl(Ts), 'k--');
axs[0].plot(sc[:,'H'].keys(), -np.array(my_H), 'yo');
axs[0].set_ylabel('Depth (m)')
axs[1].plot(Ts, V);
axs[1].plot(Ts, my_V, 'k');
axs[1].plot([Ts[0],Ts[-1]], [theory, theory], 'k--');
axs[1].set_ylabel('Velocity (m/s)')
axs[1].set_xlabel('Time (s)');
%install_ext http://raw.github.com/jrjohansson/version_information/master/version_information.py
%load_ext version_information
%version_information numpy, matplotlib, slict, chest, glopen, globussh
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you want to use the CoNLL-03 corpus, you need to download it and unpack it in your Flair data and model folder. This folder should be in your home-directory and it is named .flair. Once you have downloaded the corpus, unpack it into a folder .flair/datasets/conll_03. If you do not want to use the CoNLL-03 corpus, but rather the free W-NUT 17 corpus, you can use the Flair command
Step2: Declare the tag type to be predicted
Step3: Create the tag-dictionary for the tag-type
Step4: Load the embeddings
Step5: Load and initialize the sequence tagger
Step6: Load and initialize the trainer
Step7: If you have a GPU (otherwise maybe tweak the batch size, etc.), run the training with 150 epochs
Step8: Plot the training curves and results
Step9: Use the model via the predict method
| <ASSISTANT_TASK:>
Python Code:
from flair.data import Corpus
from flair.datasets import WNUT_17
from flair.embeddings import TokenEmbeddings, WordEmbeddings, StackedEmbeddings
from typing import List
corpus: Corpus = WNUT_17().downsample(0.1)
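# Added aside: if the CoNLL-03 files have been unpacked into ~/.flair/datasets/conll_03 as
# described above, that corpus could be loaded instead of W-NUT 17 (this assumes your
# installed flair version ships the CONLL_03 loader):
#   from flair.datasets import CONLL_03
#   corpus = CONLL_03()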
print(corpus)
tag_type = 'ner'
tag_dictionary = corpus.make_tag_dictionary(tag_type=tag_type)
print(tag_dictionary)
embedding_types: List[TokenEmbeddings] = [
WordEmbeddings('glove'),
# comment in this line to use character embeddings
# CharacterEmbeddings(),
# comment in these lines to use flair embeddings
# FlairEmbeddings('news-forward'),
# FlairEmbeddings('news-backward'),
]
embeddings: StackedEmbeddings = StackedEmbeddings(embeddings=embedding_types)
from flair.models import SequenceTagger
tagger: SequenceTagger = SequenceTagger(hidden_size=256,
embeddings=embeddings,
tag_dictionary=tag_dictionary,
tag_type=tag_type,
use_crf=True)
from flair.trainers import ModelTrainer
trainer: ModelTrainer = ModelTrainer(tagger, corpus)
trainer.train('resources/taggers/example-ner',
learning_rate=0.1,
mini_batch_size=32,
max_epochs=150)
from flair.visual.training_curves import Plotter
plotter = Plotter()
plotter.plot_training_curves('resources/taggers/example-ner/loss.tsv')
plotter.plot_weights('resources/taggers/example-ner/weights.txt')
from flair.data import Sentence
model = SequenceTagger.load('resources/taggers/example-ner/final-model.pt')
sentence = Sentence('John lives in the Empire State Building .')
model.predict(sentence)
print(sentence.to_tagged_string())
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 5. Check conda installs
Step2: 6. Check pip installs
Step3: 7. Download data
| <ASSISTANT_TASK:>
Python Code:
!python -V
# Should be 3.5
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import ricker
import pandas as pd
import requests
import numba
import ipyparallel as ipp
import obspy
import geopandas as gpd # Not a catastrophe if missing.
import folium # Not a catastrophe if missing.
import lasio
import welly
import bruges as b
import os
import requests
files = [
'2D_Land_vibro_data_2ms.tgz',
'3D_gathers_pstm_nmo_X1001.sgy',
'Penobscot_0-1000ms.sgy.gz',
'Penobscot_NumPy.npy.gz',
]
url = "https://s3.amazonaws.com/agilegeo/"
localpath = '' # For CWD.
for file in files:
print(file)
r = requests.get(url+file, stream=True)
chunk_size = 1024 * 1024 # 1MB
with open(os.path.join(localpath, file), 'wb') as fd:
for chunk in r.iter_content(chunk_size):
if chunk:
fd.write(chunk)
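# Optional check (added aside, not in the original notebook): confirm each downloaded
# file exists and is non-empty before moving on.
for file in files:
    print(file, os.path.getsize(os.path.join(localpath, file)), 'bytes')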
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd
# Build a small frame of timezone-aware timestamp strings
df = pd.DataFrame({'datetime': ['2015-12-01 00:00:00-06:00', '2015-12-02 00:01:00-06:00', '2015-12-03 00:00:00-06:00']})
# Parse the strings into datetime64 values
df['datetime'] = pd.to_datetime(df['datetime'])
# Drop the timezone information, keeping the local wall-clock time
df['datetime'] = df['datetime'].dt.tz_localize(None)
# Sort chronologically in place
df.sort_values(by='datetime', inplace=True)
# Render back to strings such as '01-Dec-2015 00:00:00'
df['datetime'] = df['datetime'].dt.strftime('%d-%b-%Y %T')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Lorenz system
Step3: Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
Step5: Write a function plot_lorentz that
Step6: Use interact to explore your plot_lorenz function with
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import odeint
from IPython.html.widgets import interact, fixed
def lorentz_derivs(yvec, t, sigma, rho, beta):
x = yvec[0]
y = yvec[1]
z = yvec[2]
dx = sigma*(y-x)
dy = x*(rho-z)-y
dz = x*y - beta*z
return np.array([dx, dy, dz])
assert np.allclose(lorentz_derivs((1,1,1),0, 1.0, 1.0, 2.0),[0.0,-1.0,-1.0])
def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
Solve the Lorenz system for a single initial condition.
Parameters
----------
ic : array, list, tuple
Initial conditions [x,y,z].
max_time: float
The max time to use. Integrate with 250 points per time unit.
sigma, rho, beta: float
Parameters of the differential equation.
Returns
-------
soln : np.ndarray
The array of the solution. Each row will be the solution vector at that time.
t : np.ndarray
The array of time points used.
    # Build the time grid (250 points per unit time) and integrate with the Lorenz parameters
    t = np.linspace(0, max_time, int(250*max_time))
    soln = odeint(lorentz_derivs, ic, t, args=(sigma, rho, beta), atol=1e-9, rtol=1e-8)
    return soln, t
assert True # leave this to grade solve_lorenz
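# Added sanity check (hedged aside, not part of the original exercise): solve one
# trajectory and confirm the expected shapes of the returned arrays.
soln, t = solve_lorentz((1.0, 1.0, 1.0), max_time=4.0)
print(soln.shape, t.shape)  # expected: (1000, 3) (1000,)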
# Hint for plot_lorentz below: you can generate N distinct colors from a colormap like this
N = 5
colors = plt.cm.hot(np.linspace(0,1,N))
for i in range(N):
# To use these colors with plt.plot, pass them as the color argument
print(colors[i])
def plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
    '''
    Plot [x(t),z(t)] for the Lorenz system.

    Parameters
    ----------
    N : int
        Number of initial conditions and trajectories to plot.
    max_time: float
        Maximum time to use.
    sigma, rho, beta: float
        Parameters of the differential equation.
    '''
# YOUR CODE HERE
raise NotImplementedError()
plot_lorentz()
assert True # leave this to grade the plot_lorenz function
# YOUR CODE HERE
raise NotImplementedError()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic marker
Step2: Circle Marker
Step3: Icon Marker
Step4: RGB(A) to HEX colors
| <ASSISTANT_TASK:>
Python Code:
import folium
carte = folium.Map(location=[45.5236, -122.6750], zoom_start=12)
marker = folium.Marker([45.5, -122.7], popup='Un marker')
marker.add_to(carte)
carte
carte = folium.Map(location=[45.5236, -122.6750], zoom_start=12)
circle = folium.CircleMarker(
[45.5, -122.7],
radius=1000,
popup='Un cercle',
color="#e74c3c", # rouge
fill_color="#27ae60", # vert
fill_opacity=0.9
)
circle.add_to(carte)
carte
carte = folium.Map(location=[45.5236, -122.6750], zoom_start=12)
# add first marker with bootstrap icon
icone1 = folium.Icon(icon="asterisk", icon_color="#9b59b6", color="lightblue")
marker1 = folium.Marker([45.5, -122.7], popup='Un icone', icon=icone1)
marker1.add_to(carte)
# add second marker with font-awesome icon
icone1 = folium.Icon(icon="globe", icon_color="#e67e22", color="lightgreen", prefix="fa")
marker1 = folium.Marker([45.5, -122.6], popup='Un icone', icon=icone1)
marker1.add_to(carte)
carte
import matplotlib
import matplotlib.pyplot as plt
color = plt.cm.winter(22)
color
matplotlib.colors.rgb2hex(color)
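# Added aside (not in the original notebook): the hex string produced by rgb2hex can be
# fed straight back into folium, tying the colormap step to the marker styling above.
hex_color = matplotlib.colors.rgb2hex(color)
folium.Marker([45.5236, -122.6750], popup='colormap-coloured icon',
              icon=folium.Icon(icon_color=hex_color)).add_to(carte)
carte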
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Next we choose a model and hyperparameters. Here we'll use a k-neighbors classifier with n_neighbors=1.
Step2: Then we train the model, and use it to predict labels for data we already know
Step3: Finally, we compute the fraction of correctly labeled points
Step4: We see an accuracy score of 1.0, which indicates that 100% of points were correctly labeled by our model!
Step5: We see here a more reasonable result
Step6: What comes out are two accuracy scores, which we could combine (by, say, taking the mean) to get a better measure of the global model performance.
Step7: Repeating the validation across different subsets of the data gives us an even better idea of the performance of the algorithm.
Step8: Because we have 150 samples, the leave one out cross-validation yields scores for 150 trials, and the score indicates either successful (1.0) or unsuccessful (0.0) prediction.
Step9: Other cross-validation schemes can be used similarly.
Step10: Now let's create some data to which we will fit our model
Step11: We can now visualize our data, along with polynomial fits of several degrees
Step12: The knob controlling model complexity in this case is the degree of the polynomial, which can be any non-negative integer.
Step13: This shows precisely the qualitative behavior we expect
Step14: Notice that finding this optimal model did not actually require us to compute the training score, but examining the relationship between the training score and validation score can give us useful insight into the performance of the model.
Step15: We will duplicate the preceding code to plot the validation curve for this larger dataset; for reference let's over-plot the previous results as well
Step16: The solid lines show the new results, while the fainter dashed lines show the results of the previous smaller dataset.
Step17: This is a valuable diagnostic, because it gives us a visual depiction of how our model responds to increasing training data.
Step18: Notice that like a normal estimator, this has not yet been applied to any data.
Step19: Now that this is fit, we can ask for the best parameters as follows
Step20: Finally, if we wish, we can use the best model and show the fit to our data using code from before
| <ASSISTANT_TASK:>
Python Code:
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data
y = iris.target
from sklearn.neighbors import KNeighborsClassifier
model = KNeighborsClassifier(n_neighbors=1)
model.fit(X, y)
y_model = model.predict(X)
from sklearn.metrics import accuracy_score
accuracy_score(y, y_model)
from sklearn.model_selection import train_test_split
# split the data with 50% in each set
X1, X2, y1, y2 = train_test_split(X, y, random_state=0,
train_size=0.5)
# fit the model on one set of data
model.fit(X1, y1)
# evaluate the model on the second set of data
y2_model = model.predict(X2)
accuracy_score(y2, y2_model)
y2_model = model.fit(X1, y1).predict(X2)
y1_model = model.fit(X2, y2).predict(X1)
accuracy_score(y1, y1_model), accuracy_score(y2, y2_model)
from sklearn.model_selection import cross_val_score
cross_val_score(model, X, y, cv=5)
from sklearn.model_selection import LeaveOneOut
scores = cross_val_score(model, X, y, cv=LeaveOneOut())
scores
scores.mean()
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
def PolynomialRegression(degree=2, **kwargs):
return make_pipeline(PolynomialFeatures(degree),
LinearRegression(**kwargs))
import numpy as np
def make_data(N, err=1.0, rseed=1):
# randomly sample the data
rng = np.random.RandomState(rseed)
X = rng.rand(N, 1) ** 2
y = 10 - 1. / (X.ravel() + 0.1)
if err > 0:
y += err * rng.randn(N)
return X, y
X, y = make_data(40)
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn; seaborn.set() # plot formatting
X_test = np.linspace(-0.1, 1.1, 500)[:, None]
plt.scatter(X.ravel(), y, color='black')
axis = plt.axis()
for degree in [1, 3, 5]:
y_test = PolynomialRegression(degree).fit(X, y).predict(X_test)
plt.plot(X_test.ravel(), y_test, label='degree={0}'.format(degree))
plt.xlim(-0.1, 1.0)
plt.ylim(-2, 12)
plt.legend(loc='best');
from sklearn.model_selection import validation_curve
degree = np.arange(0, 21)
train_score, val_score = validation_curve(PolynomialRegression(), X, y,
'polynomialfeatures__degree', degree, cv=7)
plt.plot(degree, np.median(train_score, 1), color='blue', label='training score')
plt.plot(degree, np.median(val_score, 1), color='red', label='validation score')
plt.legend(loc='best')
plt.ylim(0, 1)
plt.xlabel('degree')
plt.ylabel('score');
plt.scatter(X.ravel(), y)
lim = plt.axis()
y_test = PolynomialRegression(3).fit(X, y).predict(X_test)
plt.plot(X_test.ravel(), y_test);
plt.axis(lim);
X2, y2 = make_data(200)
plt.scatter(X2.ravel(), y2);
degree = np.arange(21)
train_score2, val_score2 = validation_curve(PolynomialRegression(), X2, y2,
'polynomialfeatures__degree', degree, cv=7)
plt.plot(degree, np.median(train_score2, 1), color='blue', label='training score')
plt.plot(degree, np.median(val_score2, 1), color='red', label='validation score')
plt.plot(degree, np.median(train_score, 1), color='blue', alpha=0.3, linestyle='dashed')
plt.plot(degree, np.median(val_score, 1), color='red', alpha=0.3, linestyle='dashed')
plt.legend(loc='lower center')
plt.ylim(0, 1)
plt.xlabel('degree')
plt.ylabel('score');
from sklearn.model_selection import learning_curve
fig, ax = plt.subplots(1, 2, figsize=(16, 6))
fig.subplots_adjust(left=0.0625, right=0.95, wspace=0.1)
for i, degree in enumerate([2, 9]):
N, train_lc, val_lc = learning_curve(PolynomialRegression(degree),
X, y, cv=7,
train_sizes=np.linspace(0.3, 1, 25))
ax[i].plot(N, np.mean(train_lc, 1), color='blue', label='training score')
ax[i].plot(N, np.mean(val_lc, 1), color='red', label='validation score')
ax[i].hlines(np.mean([train_lc[-1], val_lc[-1]]), N[0], N[-1],
color='gray', linestyle='dashed')
ax[i].set_ylim(0, 1)
ax[i].set_xlim(N[0], N[-1])
ax[i].set_xlabel('training size')
ax[i].set_ylabel('score')
ax[i].set_title('degree = {0}'.format(degree), size=14)
ax[i].legend(loc='best')
from sklearn.model_selection import GridSearchCV
param_grid = {'polynomialfeatures__degree': np.arange(21),
'linearregression__fit_intercept': [True, False],
'linearregression__normalize': [True, False]}
grid = GridSearchCV(PolynomialRegression(), param_grid, cv=7)
grid.fit(X, y);
grid.best_params_
model = grid.best_estimator_
plt.scatter(X.ravel(), y)
lim = plt.axis()
y_test = model.fit(X, y).predict(X_test)
plt.plot(X_test.ravel(), y_test);
plt.axis(lim);
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Contents
Step2: 2. Different ways of learning from data
Step3: In the plot we can easily see that the blue points are concentrated on the top-left corner, green ones in bottom left and red ones in top right.
Step4: So, in this case we got a classification accuracy of 60 %.
Step5: Why Probabilistic Graphical Models
Step6: In this case the parameters of the network would be $P(L)$, $P(W)$ and $P(T | L, W)$. So, we will need to store 5 values for $L$, 3 values for $W$ and 45 values for $P(T | L, W)$. So, a total of 45 + 5 + 3 = 53 values to completely parameterize the network which is actually more than 45 values which we need for $P (T, L, W)$. But in the cases of bigger networks graphical models help in saving space. We can take the example of the student network shown below
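As a hedged illustration of the saving for larger models (an added aside, assuming binary variables): a fully connected joint over $n$ binary variables needs $2^n - 1$ independent values, whereas a chain-structured network $X_1 \rightarrow X_2 \rightarrow \dots \rightarrow X_n$ only needs $1 + 2(n-1)$ values, e.g. roughly $10^6$ versus $39$ for $n = 20$.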
| <ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
%run ../scripts/1/discretize.py
data
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
# Adding a little bit of noise so that it's easier to visualize
data_with_noise = data.iloc[:, :2] + np.random.normal(loc=0, scale=0.1, size=(150, 2))
plt.scatter(data_with_noise.length, data_with_noise.width, c=[ "bgr"[k] for k in data.iloc[:,2] ], s=200, alpha=0.3)
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data[['length', 'width']].values, data.type.values, test_size=0.2)
classifier = DecisionTreeClassifier(max_depth=4)
classifier.fit(X_train, y_train)
classifier.predict(X_test)
classifier.score(X_test, y_test)
X_train, X_test = data[:120], data[120:]
X_train
# Computing the joint probability distribution over the training data
joint_prob = X_train.groupby(['length', 'width', 'type']).size() / 120
joint_prob
# Predicting values
# Selecting just the feature variables.
X_test_features = X_test.iloc[:, :2].values
X_test_actual_results = X_test.iloc[:, 2].values
predicted_values = []
for i in X_test_features:
predicted_values.append(joint_prob[i[0], i[1]].idxmax())
predicted_values = np.array(predicted_values)
predicted_values
# Comparing results with the actual data.
predicted_values == X_test_actual_results
score = (predicted_values == X_test_actual_results).sum() / 30
print(score)
Image(filename='../images/1/Iris_BN.png')
Image(filename='../images/1/student.png')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <center><img src="https
Step2: A grouping pattern, avoiding quadratic time
Step3: the bad way, quadratic time
Step4: there is a better approach avoiding quadratic time, toward defaultdict
Step5: However there is an even better solution, as pointed out in the example subsection of the previous link
Step6: The Bunch pattern
Step7: However, inheriting from dict is discouraged by Alex
Step8: It is interesting to observe that this idiom has been merged within the standard library, starting from Python 3.3, as with the name of SimpleNamespace
Step9: If you need point to be iterable use the structured object namedtuple instead.
Step10: Citing Hetland, pag 11
Step11: Hidden squares
Step12: Hetland's words, pag. 35
Step13: Hetland, pag.36
Step14: try to do that with more populated lists...
Step15: maybe some optimization is performed because s is a string object.
Step16: a better approach using constant append to the top
Step17: maybe a little better since it doesn't loop with for explicitly.
Step18: The following is a naive implementation, recursive but in $\mathcal{O}(n^{2})$, where $n$ is the permutation length.
Step19: Hetland, pag. 78
Step20: So, a, c, and f can take part in the permutation. The others will have to sit in nonfavorite seats.
Step21: Counting Sort
| <ASSISTANT_TASK:>
Python Code:
__AUTHORS__ = {'am': ("Andrea Marino",
"andrea.marino@unifi.it",),
'mn': ("Massimo Nocentini",
"massimo.nocentini@unifi.it",
"https://github.com/massimo-nocentini/",)}
__KEYWORDS__ = ['Python', 'Jupyter', 'gotchas', 'keynote',]
from collections import defaultdict, Counter
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (8, 8)
girls = ['alice', 'allie', 'bernice', 'brenda', 'clarice', 'cilly']
boys = ['chris', 'christopher', 'arald', 'arnold', 'bob']
[(b, g) for b in boys for g in girls if b[0] == g[0]]
letterGirls = {}
for girl in girls:
letterGirls.setdefault(girl[0], []).append(girl)
[(b, g) for b in boys for g in letterGirls[b[0]]]
>>> s = [('yellow', 1), ('blue', 2), ('yellow', 3), ('blue', 4), ('red', 1)]
>>> d = defaultdict(list)
>>> for k, v in s:
... d[k].append(v)
...
>>> list(d.items())
[('blue', [2, 4]), ('red', [1]), ('yellow', [1, 3])]
class Bunch(dict):
def __init__(self, *args, **kwds):
super(Bunch, self).__init__(*args, **kwds)
self.__dict__ = self
>>> T = Bunch
>>> t = T(left=T(left="a", right="b"), right=T(left="c"))
>>> t.left
>>> t.left.right
>>> t['left']['right']
>>> "left" in t.right
"right" in t.right
class Bunch(object):
def __init__(self, **kwds):
self.__dict__.update(kwds)
from types import SimpleNamespace
x, y = 32, 64
point = SimpleNamespace(datum=y, squared=y*y, coord=x)
point
point.datum, point.squared, point.coord
[i for i in point]
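# Added sketch tied to the remark above: unlike SimpleNamespace, a namedtuple is iterable.
from collections import namedtuple
Point = namedtuple('Point', ['datum', 'squared', 'coord'])
p = Point(datum=y, squared=y*y, coord=x)
list(p)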
def fast_countdown(count):
nums = []
for i in range(count):
nums.append(i)
nums.reverse()
return nums
def slow_countdown(count):
nums = []
for i in range(count):
nums.insert(0, i)
return nums
def printer(lst, chunk=10):
print("{}...{}".format(" ".join(map(str, lst[:chunk])),
" ".join(map(str, lst[-chunk:]))))
%timeit nums = fast_countdown(10**5)
%timeit nums = slow_countdown(10**5)
from collections import deque
def enhanced_slow_countdown(count):
nums = deque()
for i in range(count):
nums.appendleft(i)
return nums
%timeit nums = enhanced_slow_countdown(10**5)
from random import randrange
max_value = 10000
checks = 1000
L = [randrange(max_value) for i in range(checks)]
%timeit [randrange(max_value) in L for _ in range(checks)]
S = set(L) # convert the list to a set object.
%timeit [randrange(max_value) in S for _ in range(checks)]
lists = [[1, 2], [3, 4, 5], [6]]
sum(lists, [])
res = []
for lst in lists:
res.extend(lst)
res
def string_producer(length):
return ''.join([chr(randrange(ord('a'), ord('z'))) for _ in range(length)])
%%timeit
s = ""
for chunk in string_producer(10**5):
s += chunk
%%timeit
chunks = []
for chunk in string_producer(10**5):
chunks.append(chunk)
s = ''.join(chunks)
%timeit s = ''.join(string_producer(10**5))
def perm_isomorphism(M, domain):
iso = dict(enumerate(domain))
return [iso[M[i]] for i in range(len(M))]
def fix_perm(M, fix):
return [M[i] if i in fix else i for i in range(len(M))]
def naive_max_perm(M, A=None):
'''
Fix a permutation such that it is one-to-one and maximal, recursively.
consumes:
M - a permutation as a list of integers
A - a set of positions allowed to move
produces:
a set `fix` such that makes M maximal, ensuring to be one-to-one
'''
if A is None: A = set(range(len(M))) # init to handle first invocation, all elems can move
if len(A) == 1: return A # recursion base, unary perm can move, trivial
B = set(M[i] for i in A) # b in B iff b is desired by someone
C = A - B # c in C iff c isn't desired, so discard it
return naive_max_perm(M, A - C) if C else A # recur with desired position only
I = range(8) # the identity permutation
letters = "abcdefgh"
perm_isomorphism(I, letters)
M = [2, 2, 0, 5, 3, 5, 7, 4]
perm_isomorphism(M, letters)
fix = naive_max_perm(M)
max_M = fix_perm(M, fix)
perm_isomorphism(max_M, letters)
naive_max_perm(M)
def max_perm(M):
n = len(M) # How many elements?
A = set(range(n)) # A = {0, 1, ... , n-1}
count = Counter(M) # desired positions by frequencies
Q = deque([i for i in A if not count[i]]) # useless elements
while Q: # While useless elts. left...
i = Q.pop() # get one of them
A.remove(i) # remove it from the maximal permutation
j = M[i] # get its desired position
count[j] -= 1 # and release it for someone else
if not count[j]: # if such position isn't desired anymore
Q.appendleft(j) # enqueue such position in order to discard it
return A
fix = max_perm(M)
max_M = fix_perm(M, fix)
perm_isomorphism(max_M, letters)
def counting_sort(A, key=None, sort_boundary=None):
'''
Sorts the given collection A in linear time, assuming their elements are hashable.
This implementation implements a vanilla counting sort, working in linear time respect
iterable length and spacing between objects. It works best if elements are evenly, namely
*uniformly* distributed in the domain; on contrast, if they are sparse and concentrated
near accumulation points, traversing distances between them is time consuming.
If `sort_boundary` is instantiated to a float within [0,1], then the domain is ordered
using a classic loglinear algorithm before building the result.
'''
if key is None: key = lambda x: x
B, C = [], defaultdict(list)
for x in A:
C[key(x)].append(x)
domain = sorted(C) if sort_boundary and len(C) <= len(A)*sort_boundary \
else range(min(C), max(C)+1)
for k in domain:
B.extend(C[k])
return B
A = [randrange(50) for i in range(2*10**3)]
assert sorted(A) == counting_sort(A)
n, bins, patches = plt.hist(A, 10, facecolor='green', alpha=0.5)
plt.xlabel('elements'); plt.ylabel('frequencies'); plt.grid(True)
plt.show()
%timeit counting_sort(A)
%timeit counting_sort(A, sort_boundary=1)
B = ([randrange(50) for i in range(10**3)] +
[10**4 + randrange(50) for i in range(10**3)])
n, bins, patches = plt.hist(B, 100, facecolor='green', alpha=0.5)
plt.xlabel('elements'); plt.ylabel('frequencies'); plt.grid(True)
plt.show()
assert sorted(B) == counting_sort(B)
%timeit counting_sort(B)
%timeit counting_sort(B, sort_boundary=1/8)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Register a model
| <ASSISTANT_TASK:>
Python Code:
# restart your notebook if prompted on Colab
try:
import verta
except ImportError:
!pip install verta
import os
# Ensure credentials are set up, if not, use below
# os.environ['VERTA_EMAIL'] =
# os.environ['VERTA_DEV_KEY'] =
# os.environ['VERTA_HOST'] =
from verta import Client
client = Client(os.environ['VERTA_HOST'])
registered_model = client.get_or_create_registered_model(
name="census", labels=["research-purpose", "team-a"])
from verta.registry import VertaModelBase
class MyModel(VertaModelBase):
def __init__(self, artifacts):
self.weights = json.load(open(artifacts["weights"]))
def predict(self, input):
res = []
for row in input:
res.append(row[0] * self.weights[0] + row[1] * self.weights[1])
return res
from verta.environment import Python
model_version = registered_model.create_standard_model(
model_cls=MyModel,
artifacts = {"weights" : [5, 6]},
environment=Python(requirements=["json"]),
name="v0",
labels=["prototype"],
)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Pruning in Keras example
Step2: Train a model for MNIST without pruning
Step3: Evaluate the baseline test accuracy and save the model for later use.
Step4: Fine-tune the pre-trained model with pruning
Step5: Train and evaluate the model against the baseline
Step6: For this example, there is minimal loss in test accuracy after pruning, compared to the baseline.
Step7: The logs show the progression of sparsity on a per-layer basis.
Step8: For non-Colab users, you can see the results of a previous run of this code block on TensorBoard.dev.
Step9: Then, create a compressible model for TFLite.
Step10: Define a helper function to actually compress the models via gzip and measure the zipped size.
Step11: Compare and see that the models are 3x smaller from pruning.
Step12: Combine pruning and quantization to create a 10x smaller model
Step13: Check persistence of accuracy from TF to TFLite
Step14: Evaluate the pruned and quantized model and check that the accuracy from TensorFlow persists to the TFLite backend.
| <ASSISTANT_TASK:>
Python Code:
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
! pip install -q tensorflow-model-optimization
import tempfile
import os
import tensorflow as tf
import numpy as np
from tensorflow import keras
%load_ext tensorboard
# Load MNIST dataset
mnist = keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Normalize the input image so that each pixel value is between 0 and 1.
train_images = train_images / 255.0
test_images = test_images / 255.0
# Define the model architecture.
model = keras.Sequential([
keras.layers.InputLayer(input_shape=(28, 28)),
keras.layers.Reshape(target_shape=(28, 28, 1)),
keras.layers.Conv2D(filters=12, kernel_size=(3, 3), activation='relu'),
keras.layers.MaxPooling2D(pool_size=(2, 2)),
keras.layers.Flatten(),
keras.layers.Dense(10)
])
# Train the digit classification model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(
train_images,
train_labels,
epochs=4,
validation_split=0.1,
)
_, baseline_model_accuracy = model.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
_, keras_file = tempfile.mkstemp('.h5')
tf.keras.models.save_model(model, keras_file, include_optimizer=False)
print('Saved baseline model to:', keras_file)
import tensorflow_model_optimization as tfmot
prune_low_magnitude = tfmot.sparsity.keras.prune_low_magnitude
# Compute end step to finish pruning after 2 epochs.
batch_size = 128
epochs = 2
validation_split = 0.1 # 10% of training set will be used for validation set.
num_images = train_images.shape[0] * (1 - validation_split)
end_step = np.ceil(num_images / batch_size).astype(np.int32) * epochs
# Define model for pruning.
pruning_params = {
'pruning_schedule': tfmot.sparsity.keras.PolynomialDecay(initial_sparsity=0.50,
final_sparsity=0.80,
begin_step=0,
end_step=end_step)
}
model_for_pruning = prune_low_magnitude(model, **pruning_params)
# `prune_low_magnitude` requires a recompile.
model_for_pruning.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model_for_pruning.summary()
logdir = tempfile.mkdtemp()
callbacks = [
tfmot.sparsity.keras.UpdatePruningStep(),
tfmot.sparsity.keras.PruningSummaries(log_dir=logdir),
]
model_for_pruning.fit(train_images, train_labels,
batch_size=batch_size, epochs=epochs, validation_split=validation_split,
callbacks=callbacks)
_, model_for_pruning_accuracy = model_for_pruning.evaluate(
test_images, test_labels, verbose=0)
print('Baseline test accuracy:', baseline_model_accuracy)
print('Pruned test accuracy:', model_for_pruning_accuracy)
#docs_infra: no_execute
%tensorboard --logdir={logdir}
model_for_export = tfmot.sparsity.keras.strip_pruning(model_for_pruning)
_, pruned_keras_file = tempfile.mkstemp('.h5')
tf.keras.models.save_model(model_for_export, pruned_keras_file, include_optimizer=False)
print('Saved pruned Keras model to:', pruned_keras_file)
converter = tf.lite.TFLiteConverter.from_keras_model(model_for_export)
pruned_tflite_model = converter.convert()
_, pruned_tflite_file = tempfile.mkstemp('.tflite')
with open(pruned_tflite_file, 'wb') as f:
f.write(pruned_tflite_model)
print('Saved pruned TFLite model to:', pruned_tflite_file)
def get_gzipped_model_size(file):
# Returns size of gzipped model, in bytes.
import os
import zipfile
_, zipped_file = tempfile.mkstemp('.zip')
with zipfile.ZipFile(zipped_file, 'w', compression=zipfile.ZIP_DEFLATED) as f:
f.write(file)
return os.path.getsize(zipped_file)
print("Size of gzipped baseline Keras model: %.2f bytes" % (get_gzipped_model_size(keras_file)))
print("Size of gzipped pruned Keras model: %.2f bytes" % (get_gzipped_model_size(pruned_keras_file)))
print("Size of gzipped pruned TFlite model: %.2f bytes" % (get_gzipped_model_size(pruned_tflite_file)))
converter = tf.lite.TFLiteConverter.from_keras_model(model_for_export)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_and_pruned_tflite_model = converter.convert()
_, quantized_and_pruned_tflite_file = tempfile.mkstemp('.tflite')
with open(quantized_and_pruned_tflite_file, 'wb') as f:
f.write(quantized_and_pruned_tflite_model)
print('Saved quantized and pruned TFLite model to:', quantized_and_pruned_tflite_file)
print("Size of gzipped baseline Keras model: %.2f bytes" % (get_gzipped_model_size(keras_file)))
print("Size of gzipped pruned and quantized TFlite model: %.2f bytes" % (get_gzipped_model_size(quantized_and_pruned_tflite_file)))
import numpy as np
def evaluate_model(interpreter):
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]
  # Run predictions on every image in the "test" dataset.
prediction_digits = []
for i, test_image in enumerate(test_images):
if i % 1000 == 0:
print('Evaluated on {n} results so far.'.format(n=i))
# Pre-processing: add batch dimension and convert to float32 to match with
# the model's input data format.
test_image = np.expand_dims(test_image, axis=0).astype(np.float32)
interpreter.set_tensor(input_index, test_image)
# Run inference.
interpreter.invoke()
# Post-processing: remove batch dimension and find the digit with highest
# probability.
output = interpreter.tensor(output_index)
digit = np.argmax(output()[0])
prediction_digits.append(digit)
print('\n')
# Compare prediction results with ground truth labels to calculate accuracy.
prediction_digits = np.array(prediction_digits)
accuracy = (prediction_digits == test_labels).mean()
return accuracy
interpreter = tf.lite.Interpreter(model_content=quantized_and_pruned_tflite_model)
interpreter.allocate_tensors()
test_accuracy = evaluate_model(interpreter)
print('Pruned and quantized TFLite test_accuracy:', test_accuracy)
print('Pruned TF test accuracy:', model_for_pruning_accuracy)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First we need to define materials that will be used in the problem
Step2: With our three materials, we can now create a Materials object that can be exported to an actual XML file.
Step3: Now let's move on to the geometry. This problem will be a square array of fuel pins and control rod guide tubes for which we can use OpenMC's lattice/universe feature. The basic universe will have three regions for the fuel, the clad, and the surrounding coolant. The first step is to create the bounding surfaces for fuel and clad, as well as the outer bounding surfaces of the problem.
Step4: With the surfaces defined, we can now construct a fuel pin cell from cells that are defined by intersections of half-spaces created by the surfaces.
Step5: Likewise, we can construct a control rod guide tube with the same surfaces.
Step6: Using the pin cell universe, we can construct a 17x17 rectangular lattice with a 1.26 cm pitch.
Step7: Next, we create a NumPy array of fuel pin and guide tube universes for the lattice.
Step8: OpenMC requires that there is a "root" universe. Let us create a root cell that is filled by the pin cell universe and then assign it to the root universe.
Step9: We now must create a geometry that is assigned a root universe and export it to XML.
Step10: With the geometry and materials finished, we now just need to define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 2500 particles.
Step11: Let us also create a plot to verify that our fuel assembly geometry was created successfully.
Step12: As we can see from the plot, we have a nice array of fuel and guide tube pin cells with fuel, cladding, and water!
Step13: Next, we will instantiate an openmc.mgxs.Library for the energy and delayed groups with our the fuel assembly geometry.
Step14: Now, we can run OpenMC to generate the cross sections.
Step15: Tally Data Processing
Step16: The statepoint is now ready to be analyzed by the Library. We simply have to load the tallies from the statepoint into the Library and our MGXS objects will compute the cross sections for us under-the-hood.
Step17: Using Tally Arithmetic to Compute the Delayed Neutron Precursor Concentrations
Step18: Another useful feature of the Python API is the ability to extract the surface currents for the interfaces and surfaces of a mesh. We can inspect the currents for the mesh by getting the pandas dataframe.
Step19: Cross Section Visualizations
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import math
import matplotlib.pyplot as plt
import numpy as np
import openmc
import openmc.mgxs
# 1.6 enriched fuel
fuel = openmc.Material(name='1.6% Fuel')
fuel.set_density('g/cm3', 10.31341)
fuel.add_nuclide('U235', 3.7503e-4)
fuel.add_nuclide('U238', 2.2625e-2)
fuel.add_nuclide('O16', 4.6007e-2)
# borated water
water = openmc.Material(name='Borated Water')
water.set_density('g/cm3', 0.740582)
water.add_nuclide('H1', 4.9457e-2)
water.add_nuclide('O16', 2.4732e-2)
water.add_nuclide('B10', 8.0042e-6)
# zircaloy
zircaloy = openmc.Material(name='Zircaloy')
zircaloy.set_density('g/cm3', 6.55)
zircaloy.add_nuclide('Zr90', 7.2758e-3)
# Create a materials collection and export to XML
materials = openmc.Materials((fuel, water, zircaloy))
materials.export_to_xml()
# Create cylinders for the fuel and clad
fuel_outer_radius = openmc.ZCylinder(r=0.39218)
clad_outer_radius = openmc.ZCylinder(r=0.45720)
# Create boundary planes to surround the geometry
min_x = openmc.XPlane(x0=-10.71, boundary_type='reflective')
max_x = openmc.XPlane(x0=+10.71, boundary_type='reflective')
min_y = openmc.YPlane(y0=-10.71, boundary_type='reflective')
max_y = openmc.YPlane(y0=+10.71, boundary_type='reflective')
min_z = openmc.ZPlane(z0=-10., boundary_type='reflective')
max_z = openmc.ZPlane(z0=+10., boundary_type='reflective')
# Create a Universe to encapsulate a fuel pin
fuel_pin_universe = openmc.Universe(name='1.6% Fuel Pin')
# Create fuel Cell
fuel_cell = openmc.Cell(name='1.6% Fuel')
fuel_cell.fill = fuel
fuel_cell.region = -fuel_outer_radius
fuel_pin_universe.add_cell(fuel_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='1.6% Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
fuel_pin_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='1.6% Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
fuel_pin_universe.add_cell(moderator_cell)
# Create a Universe to encapsulate a control rod guide tube
guide_tube_universe = openmc.Universe(name='Guide Tube')
# Create guide tube Cell
guide_tube_cell = openmc.Cell(name='Guide Tube Water')
guide_tube_cell.fill = water
guide_tube_cell.region = -fuel_outer_radius
guide_tube_universe.add_cell(guide_tube_cell)
# Create a clad Cell
clad_cell = openmc.Cell(name='Guide Clad')
clad_cell.fill = zircaloy
clad_cell.region = +fuel_outer_radius & -clad_outer_radius
guide_tube_universe.add_cell(clad_cell)
# Create a moderator Cell
moderator_cell = openmc.Cell(name='Guide Tube Moderator')
moderator_cell.fill = water
moderator_cell.region = +clad_outer_radius
guide_tube_universe.add_cell(moderator_cell)
# Create fuel assembly Lattice
assembly = openmc.RectLattice(name='1.6% Fuel Assembly')
assembly.pitch = (1.26, 1.26)
assembly.lower_left = [-1.26 * 17. / 2.0] * 2
# Create array indices for guide tube locations in lattice
template_x = np.array([5, 8, 11, 3, 13, 2, 5, 8, 11, 14, 2, 5, 8,
11, 14, 2, 5, 8, 11, 14, 3, 13, 5, 8, 11])
template_y = np.array([2, 2, 2, 3, 3, 5, 5, 5, 5, 5, 8, 8, 8, 8,
8, 11, 11, 11, 11, 11, 13, 13, 14, 14, 14])
# Create universes array with the fuel pin and guide tube universes
universes = np.tile(fuel_pin_universe, (17,17))
universes[template_x, template_y] = guide_tube_universe
# Store the array of universes in the lattice
assembly.universes = universes
# Create root Cell
root_cell = openmc.Cell(name='root cell', fill=assembly)
# Add boundary planes
root_cell.region = +min_x & -max_x & +min_y & -max_y & +min_z & -max_z
# Create root Universe
root_universe = openmc.Universe(universe_id=0, name='root universe')
root_universe.add_cell(root_cell)
# Create Geometry and export to XML
geometry = openmc.Geometry(root_universe)
geometry.export_to_xml()
# OpenMC simulation parameters
batches = 50
inactive = 10
particles = 2500
# Instantiate a Settings object
settings = openmc.Settings()
settings.batches = batches
settings.inactive = inactive
settings.particles = particles
settings.output = {'tallies': False}
# Create an initial uniform spatial source distribution over fissionable zones
bounds = [-10.71, -10.71, -10, 10.71, 10.71, 10.]
uniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)
settings.source = openmc.Source(space=uniform_dist)
# Export to "settings.xml"
settings.export_to_xml()
# Plot our geometry
plot = openmc.Plot.from_geometry(geometry)
plot.pixels = (250, 250)
plot.color_by = 'material'
openmc.plot_inline(plot)
# Instantiate a 20-group EnergyGroups object
energy_groups = openmc.mgxs.EnergyGroups()
energy_groups.group_edges = np.logspace(-3, 7.3, 21)
# Instantiate a 1-group EnergyGroups object
one_group = openmc.mgxs.EnergyGroups()
one_group.group_edges = np.array([energy_groups.group_edges[0], energy_groups.group_edges[-1]])
# Instantiate a tally mesh
mesh = openmc.RegularMesh(mesh_id=1)
mesh.dimension = [17, 17, 1]
mesh.lower_left = [-10.71, -10.71, -10000.]
mesh.width = [1.26, 1.26, 20000.]
# Initialize an 20-energy-group and 6-delayed-group MGXS Library
mgxs_lib = openmc.mgxs.Library(geometry)
mgxs_lib.energy_groups = energy_groups
mgxs_lib.num_delayed_groups = 6
# Specify multi-group cross section types to compute
mgxs_lib.mgxs_types = ['total', 'transport', 'nu-scatter matrix', 'kappa-fission', 'inverse-velocity', 'chi-prompt',
'prompt-nu-fission', 'chi-delayed', 'delayed-nu-fission', 'beta']
# Specify a "mesh" domain type for the cross section tally filters
mgxs_lib.domain_type = 'mesh'
# Specify the mesh domain over which to compute multi-group cross sections
mgxs_lib.domains = [mesh]
# Construct all tallies needed for the multi-group cross section library
mgxs_lib.build_library()
# Create a "tallies.xml" file for the MGXS Library
tallies_file = openmc.Tallies()
mgxs_lib.add_to_tallies_file(tallies_file, merge=True)
# Instantiate a current tally
mesh_filter = openmc.MeshSurfaceFilter(mesh)
current_tally = openmc.Tally(name='current tally')
current_tally.scores = ['current']
current_tally.filters = [mesh_filter]
# Add current tally to the tallies file
tallies_file.append(current_tally)
# Export to "tallies.xml"
tallies_file.export_to_xml()
# Run OpenMC
openmc.run()
# Load the last statepoint file
sp = openmc.StatePoint('statepoint.50.h5')
# Initialize MGXS Library with OpenMC statepoint data
mgxs_lib.load_from_statepoint(sp)
# Extrack the current tally separately
current_tally = sp.get_tally(name='current tally')
# Set the time constants for the delayed precursors (in seconds^-1)
precursor_halflife = np.array([55.6, 24.5, 16.3, 2.37, 0.424, 0.195])
precursor_lambda = math.log(2.0) / precursor_halflife
beta = mgxs_lib.get_mgxs(mesh, 'beta')
# Create a tally object with only the delayed group filter for the time constants
beta_filters = [f for f in beta.xs_tally.filters if type(f) is not openmc.DelayedGroupFilter]
lambda_tally = beta.xs_tally.summation(nuclides=beta.xs_tally.nuclides)
for f in beta_filters:
lambda_tally = lambda_tally.summation(filter_type=type(f), remove_filter=True) * 0. + 1.
# Set the mean of the lambda tally and reshape to account for nuclides and scores
lambda_tally._mean = precursor_lambda
lambda_tally._mean.shape = lambda_tally.std_dev.shape
# Set a total nuclide and lambda score
lambda_tally.nuclides = [openmc.Nuclide(name='total')]
lambda_tally.scores = ['lambda']
delayed_nu_fission = mgxs_lib.get_mgxs(mesh, 'delayed-nu-fission')
# Use tally arithmetic to compute the precursor concentrations
precursor_conc = beta.xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) * \
delayed_nu_fission.xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) / lambda_tally
# The difference is a derived tally which can generate Pandas DataFrames for inspection
precursor_conc.get_pandas_dataframe().head(10)
current_tally.get_pandas_dataframe().head(10)
# Extract the energy-condensed delayed neutron fraction tally
beta_by_group = beta.get_condensed_xs(one_group).xs_tally.summation(filter_type='energy', remove_filter=True)
beta_by_group.mean.shape = (17, 17, 6)
beta_by_group.mean[beta_by_group.mean == 0] = np.nan
# Plot the betas
plt.figure(figsize=(18,9))
fig = plt.subplot(231)
plt.imshow(beta_by_group.mean[:,:,0], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 1')
fig = plt.subplot(232)
plt.imshow(beta_by_group.mean[:,:,1], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 2')
fig = plt.subplot(233)
plt.imshow(beta_by_group.mean[:,:,2], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 3')
fig = plt.subplot(234)
plt.imshow(beta_by_group.mean[:,:,3], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 4')
fig = plt.subplot(235)
plt.imshow(beta_by_group.mean[:,:,4], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 5')
fig = plt.subplot(236)
plt.imshow(beta_by_group.mean[:,:,5], interpolation='none', cmap='jet')
plt.colorbar()
plt.title('Beta - delayed group 6')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The following function runs a random model with a random independent variable y and four random covariates, using both the statsmodels and scikit-learn packages. The user can compare output from the two tools.
Step2: The two models produce the same results.
| <ASSISTANT_TASK:>
Python Code:
from data_cleaning_utils import import_data
dat = import_data('../Data/Test/pool82014-10-02cleaned_Subset.csv')
from regression import compare_OLS
compare_OLS(dat)
%matplotlib inline
from regression import user_model
user_model(data=dat)
%matplotlib inline
import pandas as pd
from regression import plot_pairs
plot_pairs(data=dat[['XCO2Dpp', 'XCH4Dpp', 'TempC', 'ChlAugL', 'TurbFNU',
'fDOMQSU', 'ODOmgL', 'pH', 'CH4uM', 'CO2uM']], minCorr=0.1, maxCorr=0.95)
dat.columns
dat.shape[1]
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Then, read the (sample) input tables for blocking purposes.
Step2: Combining Multiple Blockers
| <ASSISTANT_TASK:>
Python Code:
# Import py_entitymatching package
import py_entitymatching as em
import os
import pandas as pd
# Get the datasets directory
datasets_dir = em.get_install_path() + os.sep + 'datasets'
# Get the paths of the input tables
path_A = datasets_dir + os.sep + 'person_table_A.csv'
path_B = datasets_dir + os.sep + 'person_table_B.csv'
# Read the CSV files and set 'ID' as the key attribute
A = em.read_csv_metadata(path_A, key='ID')
B = em.read_csv_metadata(path_B, key='ID')
#Blocking plan :
#A, B --overlap blocking--> candset --attr-equiv-block--> candset |
# |
#A, B ------------rule-based-blocking--------------------> candset |----union--->candset
# |
#A, B -----------black-box-blocking----------------------> candset |
# Overlap blocking over input tables
ob = em.OverlapBlocker()
# block using name
C = ob.block_tables(A, B, 'name', 'name', word_level=True, overlap_size=1,
l_output_attrs=['name', 'birth_year'],
r_output_attrs=['name', 'birth_year'],
show_progress=False)
C
# Attribute equivalence blocking: block C using birth_year
ab = em.AttrEquivalenceBlocker()
D = ab.block_candset(C, 'birth_year', 'birth_year', show_progress=False)
D
# Rule-based blocking over input tables
# first get features that can be used
feature_table = em.get_features_for_blocking(A, B, validate_inferred_attr_types=False)
# Create rule-based blocker
rb = em.RuleBasedBlocker()
# Add rule : block tuples if name_name_lev(ltuple, rtuple) < 0.4
rb.add_rule(['name_name_lev_sim(ltuple, rtuple) < 0.4'], feature_table)
E = rb.block_tables(A, B, l_output_attrs=['name'], r_output_attrs=['name'], show_progress=False)
# Apply black box blocker
# Create black box blocker
bb = em.BlackBoxBlocker()
# define a black box function.
# The blocker function should drop tuple pairs whose last name do not match
# The function has to do the following steps
# 1) Get name attributes from each of the tuples
# 2) Split name attribute to get last name
# 3) if last names donot match return True
def my_function(x, y):
# x, y will be of type pandas series
# get name attribute
x_name = x['name']
y_name = y['name']
# get last names
x_name = x_name.split(' ')[1]
y_name = y_name.split(' ')[1]
# check if last names match
if x_name != y_name:
return True
else:
return False
bb.set_black_box_function(my_function)
F = bb.block_tables(A, B, l_output_attrs=['name'], r_output_attrs=['name'], show_progress=False)
F
# Combine all the blocker outputs
G = em.combine_blocker_outputs_via_union([D, E, F])
G
em.show_properties(G)
<END_TASK> |
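One quick sanity check after a union like this is to compare how many candidate pairs each individual blocker produced and how many survive in the combined set. The snippet below only calls len() on the candidate sets created in the task above (they are pandas DataFrames), so no additional py_entitymatching API is assumed:
# Candidate-set sizes per blocker and after the union
for name, cand in [('overlap + attr-equiv', D), ('rule-based', E), ('black-box', F), ('union', G)]:
    print('{:<22s}{:d} pairs'.format(name, len(cand)))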
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we set up all necessary simulation parameters
Step2: Next we set up the system. As in part I, the orientation of the dipole moments is set directly on the particles, whereas the magnitude of the moments is taken into account when determining the prefactor of the dipolar P3M (for more details see part I).
Step3: Now we equilibrate for a while
Step4: As we need the magnetization of our system, we use <tt>espressomd.observables.MagneticDipoleMoment</tt> to calculate the total dipole moment of the system which is the magnetization times the volume of the system.
Step5: Now we set the desired number of loops for the sampling
Step6: and sample the first and second moment of the magnetization or total dipole moment, by averaging over all total dipole moments occurring during the simulation
Step7: For the estimator of the initial susceptibility $\chi_\mathrm{init}$ we need the magnitude of one single dipole moment
Step8: Now we can calculate $\chi_\mathrm{init}$ from our simulation data
Step9: and print the result
Step10: Compared with the value $\chi = 0.822 \pm 0.017$ of Ref. <a href='#[1]'>[1]</a> (see table 1), our result should be very similar.
Step11: and second with the more advanced estimate (see Ref. <a href='#[1]'>[1]</a>, eq. (6)), which is accurate to cubic order in $\chi_\mathrm{L}$ and reads
Step12: Both of them should be smaller than our result, but the second one should be closer to ours. The deviation of the theoretical results from our simulation result can be explained by the fact that the Langevin model does not incorporate any interactions between the particles, and the more advanced (mean-field-type) estimate of Ref. <a href='#[1]'>[1]</a> does not take cluster formation into account but assumes a homogeneous distribution of the particles. For higher values of the volume fraction $\phi$ and the dipolar interaction parameter $\lambda$ the deviations will increase, as the cluster formation becomes more pronounced.
Step13: Now for each value in this list we sample the total dipole moment / magnetization of the system for a while. Keep in mind that only the current orientation of the dipole moments, i.e. their unit vectors, is stored in the particle list, not their magnitude. Thus we have to use $H\cdot \mu$ as the external magnetic field, where $\mu$ is the magnitude of a single magnetic dipole moment.
Step14: Now we define the Langevin function and the modified mean-field-approximation of the Langevin parameter of Ref. <a href='#[2]'>[2]</a>
Step15: We also want to plot the linear approximation at $\alpha = 0$ to see for which values of $\alpha$ this approximation holds. We use the initial susceptibility calculated in the first chapter of this part as the gradient. As we want the gradient of $M^*$ with respect to $\alpha$ which fulfills the relation
| <ASSISTANT_TASK:>
Python Code:
import espressomd
import espressomd.magnetostatics
espressomd.assert_features('DIPOLES', 'LENNARD_JONES')
import numpy as np
lj_sigma = 1
lj_epsilon = 1
lj_cut = 2**(1. / 6.) * lj_sigma
# magnetic field constant
mu_0 = 1.
# Particles
N = 1000
# Volume fraction
# phi = rho * 4. / 3. * np.pi * ( lj_sigma / 2 )**3.
phi = 0.0262
# Dipolar interaction parameter lambda = mu_0 m^2 /(4 pi sigma^3 kT)
dip_lambda = 3.
# Temperature
kT = 1.0
# Friction coefficient
gamma = 1.0
# Time step
dt = 0.02
# box size 3d
box_size = (N * np.pi * 4. / 3. * (lj_sigma / 2.)**3. / phi)**(1. / 3.)
system = espressomd.System(box_l=(box_size, box_size, box_size))
system.time_step = dt
# Lennard-Jones interaction
system.non_bonded_inter[0, 0].lennard_jones.set_params(epsilon=lj_epsilon, sigma=lj_sigma, cutoff=lj_cut, shift="auto")
# Random dipole moments
np.random.seed(seed=1)
dip_phi = 2 * np.pi * np.random.random((N, 1))
dip_cos_theta = 2 * np.random.random((N, 1)) - 1
dip_sin_theta = np.sin(np.arccos(dip_cos_theta))
dip = np.hstack((
dip_sin_theta * np.sin(dip_phi),
dip_sin_theta * np.cos(dip_phi),
dip_cos_theta))
# Random positions in system volume
pos = box_size * np.random.random((N, 3))
# Add particles
particles = system.part.add(pos=pos, rotation=N * [(True, True, True)], dip=dip)
# Remove overlap between particles by means of the steepest descent method
system.integrator.set_steepest_descent(
f_max=0, gamma=0.1, max_displacement=0.05)
while system.analysis.energy()["total"] > 5 * kT * N:
system.integrator.run(20)
# Switch to velocity Verlet integrator
system.integrator.set_vv()
system.thermostat.set_langevin(kT=kT, gamma=gamma, seed=1)
# tune verlet list skin
system.cell_system.skin = 0.8
# Setup dipolar P3M
accuracy = 5E-4
system.actors.add(espressomd.magnetostatics.DipolarP3M(accuracy=accuracy, prefactor=dip_lambda * lj_sigma**3 * kT))
print("Equilibration...")
equil_rounds = 10
equil_steps = 1000
for i in range(equil_rounds):
system.integrator.run(equil_steps)
print("progress: {:3.0f}%, dipolar energy: {:9.2f}".format(
(i + 1) * 100. / equil_rounds, system.analysis.energy()["dipolar"]), end="\r")
print("\nEquilibration done")
import espressomd.observables
dipm_tot_calc = espressomd.observables.MagneticDipoleMoment(ids=particles.id)
# Sampling
loops = 2000
print('Sampling ...')
# initialize array for hold the sampled dipole moments
dipms = np.full((loops, 3), np.nan)
# sample dipole moment
for i in range(loops):
system.integrator.run(10)
dipms[i, :] = dipm_tot_calc.calculate()
# print progress only every 10th cycle
if (i + 1) % 10 == 0:
print("progress: {:3.0f}%".format((i + 1) * 100. / loops), end="\r")
print("\nSampling done")
# calculate average first and second moment of total dipole moment
dipm_tot = np.mean(dipms, axis=0)
dipm_tot_2 = np.mean(dipms**2, axis=0)
# dipole moment
dipm = np.sqrt(dip_lambda * 4 * np.pi * lj_sigma**3. * kT / mu_0)
print("dipm = {}".format(dipm))
# susceptibility in 3d system
chi = mu_0 / (system.volume() * 3. * kT) * (np.sum(dipm_tot_2 * dipm**2.) - np.sum(np.square(dipm_tot * dipm)))
print('chi = %.4f' % chi)
chi_L = 8. * dip_lambda * phi
print('chi_L = %.4f' % chi_L)
chi_I = chi_L * (1 + chi_L / 3. + chi_L**2. / 144.)
print('chi_I = %.4f' % chi_I)
alphas = np.array([0, 0.25, 0.5, 1, 2, 3, 4, 8])
# remove all constraints
system.constraints.clear()
# array for magnetizations in field direction
magnetizations = np.full_like(alphas, np.nan)
# use result for alpha=0 from previous chapter
magnetizations[0] = np.average(dipm_tot)
# number of loops for sampling
loops_m = 500
for ndx, alpha in enumerate(alphas):
if alpha == 0:
continue
print("Sample for alpha = {}".format(alpha))
H_dipm = (alpha * kT)
H_field = [H_dipm, 0, 0]
print("Set magnetic field constraint...")
H_constraint = espressomd.constraints.HomogeneousMagneticField(H=H_field)
system.constraints.add(H_constraint)
print("done\n")
# Equilibration
print("Equilibration...")
for i in range(equil_rounds):
system.integrator.run(equil_steps)
print("progress: {:3.0f}%, dipolar energy: {:9.2f}".format(
(i + 1) * 100. / equil_rounds, system.analysis.energy()["dipolar"]), end="\r")
print("\nEquilibration done\n")
# Sampling
print("Sampling...")
magn_temp = np.full(loops_m, np.nan)
for i in range(loops_m):
system.integrator.run(20)
magn_temp[i] = dipm_tot_calc.calculate()[0]
print("progress: {:3.0f}%".format((i + 1) * 100. / loops_m), end="\r")
print("\n")
# save average magnetization
magnetizations[ndx] = np.mean(magn_temp)
print("Sampling for alpha = {} done \n".format(alpha))
print("magnetizations = {}".format(magnetizations))
print("total progress: {:5.1f}%\n".format(ndx * 100. / (len(alphas) - 1)))
# remove constraint
system.constraints.clear()
print("Magnetization curve sampling done")
# Langevin function
def L(y):
return np.cosh(y) / np.sinh(y) - 1 / y
# second order mean-field-model from Ref. [2]
def alpha_mean_field(alpha, dip_lambda, phi):
chi = 8. * dip_lambda * phi
return alpha + chi * L(alpha) + chi**2. / 16. * L(alpha) * (1. / alpha**2. - 1. / np.sinh(alpha)**2.)
import matplotlib.pyplot as plt
y = np.arange(0.01, 10, 0.1)
initial_susceptibility = system.volume() * kT * chi / (N * mu_0 * dipm**2)
plt.figure(figsize=(10, 10))
plt.ylim(0, 1.)
plt.xlabel(r'$\alpha$', fontsize=20)
plt.ylabel(r'$M^*$', fontsize=20)
plt.plot(y, L(y), label='Langevin function')
plt.plot(y, L(alpha_mean_field(y, dip_lambda, phi)),
label='modified mean-field-theory')
plt.plot(alphas, magnetizations / N, 'o', label='simulation results')
plt.plot(y, initial_susceptibility * y,
label=r'linear approximation at $\alpha = 0$')
plt.legend(fontsize=20)
plt.show()
<END_TASK> |
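For reference, the susceptibility cell in the task above implements the standard fluctuation estimator, and the two theoretical values it is compared against are the Langevin and the modified mean-field expressions; written out exactly as coded (with $\mu_0 = 1$ in reduced units):
$$\chi_\mathrm{init} = \frac{\mu_0}{3 V k_\mathrm{B} T}\left(\langle \boldsymbol{M}^2 \rangle - \langle \boldsymbol{M} \rangle^2\right), \qquad \chi_\mathrm{L} = 8 \lambda \phi, \qquad \chi_\mathrm{I} = \chi_\mathrm{L}\left(1 + \frac{\chi_\mathrm{L}}{3} + \frac{\chi_\mathrm{L}^2}{144}\right)$$
where $\boldsymbol{M}$ is the total dipole moment of the box, $V$ its volume, $\lambda$ the dipolar interaction parameter and $\phi$ the volume fraction.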
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Split data into training and validation sets
Step2: Redefining the problem
Step3: Scaling
Step4: Parameters
Step5: Basic RNN
Step6: Execution phase
| <ASSISTANT_TASK:>
Python Code:
# Import libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
# Import data, format dates, sort data ascending by date
data = pd.read_csv(
'data/AirPassengers.csv',
sep=',',
header=0,
names=['date','no_passengers'],
usecols=['date','no_passengers'])
data['date'] = pd.to_datetime(data['date'])
data = data.sort_values(by=['date'], ascending=True)
# Store date and no_passengers columns in numpy style arrays
dates = data['date'].as_matrix()
nos_passengers = data['no_passengers'].as_matrix()
# Plot monthly airline passengers
plt.figure(figsize=(16,8))
plt.plot(dates, nos_passengers)
plt.title('Monthly airline passengers', fontsize=18)
plt.xlabel('Date', fontsize=18)
plt.ylabel('No. of passengers [thousands]', fontsize=18)
plt.xticks(fontsize=16)
plt.yticks(fontsize=16)
plt.show()
# Number of steps (months) into the future to predict
n_pred_steps = 12
# Split data into training and validation sets
dates_train = dates[0:len(dates)-n_pred_steps]
nos_passengers_train = nos_passengers[0:len(nos_passengers)-n_pred_steps]
dates_valid = dates[len(dates)-n_pred_steps:len(dates)]
nos_passengers_valid = nos_passengers[len(nos_passengers)-n_pred_steps:len(nos_passengers)]
# Compute percent change from previous month to current month
previous_train = nos_passengers_train[0:len(nos_passengers_train)-1]
current_train = nos_passengers_train[1:len(nos_passengers_train)]
changes_train = ((current_train - previous_train) / previous_train) * 100
# Assume no percent change from previous month at first month
changes_train = np.insert(changes_train, 0, 0)
# Plot monthly airline passengers % change
plt.figure(figsize=(16,8))
plt.plot(dates_train, changes_train)
plt.title('Monthly airline passengers % change', fontsize=18)
plt.xlabel('Date', fontsize=18)
plt.ylabel('% Change', fontsize=18)
plt.xticks(fontsize=16)
plt.yticks(fontsize=16)
plt.show()
# Normalize percent changes
changes_train_min = min(changes_train)
changes_train_max = max(changes_train)
changes_train_normalized = (changes_train - changes_train_min) / (changes_train_max - changes_train_min)
# Plot monthly airline passengers normalized % change
plt.figure(figsize=(16,8))
plt.plot(dates_train, changes_train_normalized)
plt.title('Monthly airline passengers normalized % change', fontsize=18)
plt.xlabel('Date', fontsize=18)
plt.ylabel('Normalized % Change', fontsize=18)
plt.xticks(fontsize=16)
plt.yticks(fontsize=16)
plt.show()
# Initialize graph parameters
n_inputs = 1 # no. of nodes in input layer
n_hidden = 100 # no. of nodes in hidden layer
n_outputs = 1 # no. of nodes in output layer
# Initialize batch parameters
batch_size = 50 # no. of training examples to consider at each training iteration
n_steps = 12 # no. of steps in each input sequence
# Initialize optimizer parameters
n_iterations = 10000 # no. of training iterations to perform
learning_rate = 0.001 # learning rate for training
# Reset the default graph
tf.reset_default_graph()
# Define placeholder nodes for batches of training inputs and outputs, respectively, to occupy
X = tf.placeholder(tf.float32, [None, n_steps, n_inputs])
y = tf.placeholder(tf.float32, [None, n_steps, n_outputs])
# Define RNN cells and a dynamic RNN, which performs fully dynamic unrolling of inputs
cell = tf.contrib.rnn.BasicRNNCell(num_units=n_hidden, activation=tf.nn.relu)
rnn_outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
# Stack all the outputs by reshaping from [batch_size, n_steps, n_hidden] to [batch_size * n_steps, n_hidden]
stacked_rnn_outputs = tf.reshape(rnn_outputs, [-1, n_hidden])
# Project output by applying a single fully connected layer
stacked_outputs = tf.layers.dense(stacked_rnn_outputs, n_outputs)
# Unstack projection by reshaping from [batch_size * n_steps, n_outputs] to [batch_size, n_steps, n_outputs]
outputs = tf.reshape(stacked_outputs, [-1, n_steps, n_outputs])
# Define MSE loss function, an Adam optimizer, and the training operation.
loss = tf.reduce_mean(tf.square(outputs - y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
training_op = optimizer.minimize(loss)
# Create node in graph that will initialize all variables when run
init = tf.global_variables_initializer()
# Create a Saver node for saving and restoring model.
saver = tf.train.Saver()
def next_batch(input_sequence, batch_size, n_steps):
i_first = 0
i_last = len(input_sequence)
i_starts = np.random.randint(i_first, high=i_last-n_steps, size=(batch_size, 1))
i_sequences = i_starts + np.arange(0, n_steps + 1)
flat_i_sequences = np.ravel(i_sequences[:,:])
flat_sequences = input_sequence[flat_i_sequences]
sequences = flat_sequences.reshape(batch_size,-1)
return sequences[:, :-1].reshape(-1, n_steps, 1), sequences[:, 1:].reshape(-1, n_steps, 1)
with tf.Session() as sess:
init.run()
for iteration in range(n_iterations):
X_batch, y_batch = next_batch(changes_train_normalized, batch_size, n_steps)
sess.run(training_op, feed_dict={X: X_batch, y: y_batch})
if iteration % 1000 == 0:
mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
print(iteration, "\tMSE:", mse)
saver.save(sess, "./AirPassengersModel")
changes_train_normalized_working = changes_train_normalized
changes_pred_normalized = np.array([])
with tf.Session() as sess:
saver.restore(sess, "./AirPassengersModel")
for pred in range(n_pred_steps):
flat_X_new = changes_train_normalized_working[len(changes_train_normalized_working)-n_steps:len(changes_train_normalized_working)]
X_new = flat_X_new.reshape(1,-1).reshape(-1, n_steps, 1)
y_pred = sess.run(outputs, feed_dict={X: X_new})
changes_pred_normalized = np.append(changes_pred_normalized, [y_pred[0, n_steps-1, 0]], axis=0)
changes_train_normalized_working = np.append(changes_train_normalized, changes_pred_normalized, axis=0)
changes_pred = changes_pred_normalized * (changes_train_max - changes_train_min) + changes_train_min
nos_passengers_pred = np.array([nos_passengers_train[len(nos_passengers_train)-1]])
for pred in range(n_pred_steps):
nos_passengers_pred = np.append(nos_passengers_pred, [nos_passengers_pred[pred] + nos_passengers_pred[pred] * (changes_pred[pred] / 100)], axis=0)
nos_passengers_pred = np.delete(nos_passengers_pred, 0)
# Plot monthly airline passengers (validation vs prediction)
plt.figure(figsize=(16,8))
plt.plot(dates_valid, nos_passengers_valid, '-', label="actual")
plt.plot(dates_valid, nos_passengers_pred, '--', label="predicted")
plt.title('Monthly airline passengers (actual vs predicted)', fontsize=18)
plt.xlabel('Month', fontsize=18)
plt.ylabel('No. of passengers [thousands]', fontsize=18)
plt.xticks(dates_valid, ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'], rotation='horizontal', fontsize=16)
plt.yticks(fontsize=16)
plt.legend(loc="upper left", fontsize=16)
plt.show()
percent_error = np.absolute((nos_passengers_pred - nos_passengers_valid) / nos_passengers_valid) * 100
np.round(100-np.mean(percent_error))
<END_TASK> |
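Besides the mean percent error computed in the last cell, an RMSE in the original units is often easier to interpret. A minimal addition using only the arrays already defined in the task above:
import numpy as np

# Root-mean-square error of the 12 predicted months, in thousands of passengers
rmse = np.sqrt(np.mean((nos_passengers_pred - nos_passengers_valid) ** 2))
print('RMSE: {:.1f} thousand passengers'.format(rmse))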
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What versions are we running?
Step2: Local Functions
Step3: Generate Data
Step4: View means of the various combinations (poisson mean values)
Step5: Briefly Describe Dataset
Step6: Observe
Step7: 1. Manual method, create design matrices and manually specify model
Step8: Create Model
Step9: Sample Model
Step10: View Diagnostics
Step11: Observe
Step12: Observe
Step13: Sample Model
Step14: View Traces
Step15: Transform coeffs
Step16: Observe
Step17: ... of 9.45 with a range [25%, 75%] of [4.17, 24.18], we see this is pretty close to the overall mean of
| <ASSISTANT_TASK:>
Python Code:
## Interactive magics
%matplotlib inline
%qtconsole --colors=linux
import sys
import warnings
warnings.filterwarnings('ignore')
import regex as re
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import patsy as pt
from scipy import optimize
# pymc3 libraries
import pymc3 as pm
import theano as thno
import theano.tensor as T
# import pystan
# get watermark
%install_ext https://raw.githubusercontent.com/rasbt/watermark/master/watermark.py
sns.set(style="darkgrid", palette="muted")
pd.set_option('display.mpl_style', 'default')
plt.rcParams['figure.figsize'] = 14, 6
np.random.seed(0)
%load_ext watermark
%watermark -dmvgp numpy,scipy,pandas,matplotlib,pymc3,theano
print('Recursion limit {}'.format(sys.getrecursionlimit()))
def strip_derived_rvs(rvs):
'''Convenience fn: remove PyMC3-generated RVs from a list'''
ret_rvs = []
for rv in rvs:
if not (re.search('_log',rv.name) or re.search('_interval',rv.name)):
ret_rvs.append(rv)
return ret_rvs
def plot_traces_pymc(trcs, varnames=None):
''' Convenience fn: plot traces with overlaid means and values '''
nrows = len(trcs.varnames)
if varnames is not None:
nrows = len(varnames)
ax = pm.traceplot(trcs, varnames=varnames, figsize=(12,nrows*1.4),
lines={k: v['mean'] for k, v in
pm.df_summary(trcs,varnames=varnames).iterrows()})
for i, mn in enumerate(pm.df_summary(trcs, varnames=varnames)['mean']):
ax[i,0].annotate('{:.2f}'.format(mn), xy=(mn,0), xycoords='data',
xytext=(5,10), textcoords='offset points', rotation=90,
va='bottom', fontsize='large', color='#AA0022')
# decide poisson theta values
theta_noalcohol_meds = 1 # no alcohol, took an antihist
theta_alcohol_meds = 3 # alcohol, took an antihist
theta_noalcohol_nomeds = 6 # no alcohol, no antihist
theta_alcohol_nomeds = 36 # alcohol, no antihist
# create samples
q = 1000
df = pd.DataFrame({
'nsneeze': np.concatenate((np.random.poisson(theta_noalcohol_meds, q),
np.random.poisson(theta_alcohol_meds, q),
np.random.poisson(theta_noalcohol_nomeds, q),
np.random.poisson(theta_alcohol_nomeds, q))),
'alcohol': np.concatenate((np.repeat(False, q),
np.repeat(True, q),
np.repeat(False, q),
np.repeat(True, q))),
'nomeds': np.concatenate((np.repeat(False, q),
np.repeat(False, q),
np.repeat(True, q),
np.repeat(True, q)))})
df.tail()
df.groupby(['alcohol','nomeds']).mean().unstack()
g = sns.factorplot(x='nsneeze', row='nomeds', col='alcohol', data=df,
kind='count', size=4, aspect=1.5)
fml = 'nsneeze ~ alcohol + antihist + alcohol:antihist' # full patsy formulation
fml = 'nsneeze ~ alcohol * nomeds' # lazy, alternative patsy formulation
(mx_en, mx_ex) = pt.dmatrices(fml, df, return_type='dataframe', NA_action='raise')
pd.concat((mx_ex.head(3),mx_ex.tail(3)))
with pm.Model() as mdl_fish:
# define priors, weakly informative Normal
b0 = pm.Normal('b0_intercept', mu=0, sd=10)
b1 = pm.Normal('b1_alcohol[T.True]', mu=0, sd=10)
b2 = pm.Normal('b2_nomeds[T.True]', mu=0, sd=10)
b3 = pm.Normal('b3_alcohol[T.True]:nomeds[T.True]', mu=0, sd=10)
# define linear model and exp link function
theta = (b0 +
b1 * mx_ex['alcohol[T.True]'] +
b2 * mx_ex['nomeds[T.True]'] +
b3 * mx_ex['alcohol[T.True]:nomeds[T.True]'])
## Define Poisson likelihood
y = pm.Poisson('y', mu=np.exp(theta), observed=mx_en['nsneeze'].values)
with mdl_fish:
start_MAP = pm.find_MAP(fmin=optimize.fmin_powell)
trc_fish = pm.sample(2000, start=start_MAP, njobs=2, step=pm.NUTS())
rvs_fish = [rv.name for rv in strip_derived_rvs(mdl_fish.unobserved_RVs)]
plot_traces_pymc(trc_fish[-1000:], varnames=rvs_fish)
np.exp(pm.df_summary(trc_fish[-1000:], varnames=rvs_fish)[['mean','hpd_2.5','hpd_97.5']])
with pm.Model() as mdl_fish_alt:
pm.glm.glm(fml, df, family=pm.glm.families.Poisson())
with mdl_fish_alt:
start_MAP_alt = pm.find_MAP(fmin=optimize.fmin_powell)
trc_fish_alt = pm.sample(4000, start=start_MAP_alt, njobs=1, step=pm.NUTS())
rvs_fish_alt = [rv.name for rv in strip_derived_rvs(mdl_fish_alt.unobserved_RVs)]
plot_traces_pymc(trc_fish_alt[-1000:], varnames=rvs_fish_alt)
np.exp(pm.df_summary(trc_fish_alt[-1000:], varnames=rvs_fish_alt)[['mean','hpd_2.5','hpd_97.5']])
np.percentile(trc_fish_alt[-1000:]['mu'], [25,50,75])
df['nsneeze'].mean()
<END_TASK> |
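Written out, the model fitted in both the manual and the GLM formulations above is an ordinary Poisson regression with a log link and weakly informative $\mathcal{N}(0, 10^2)$ priors on each coefficient (this restates the code rather than adding to it):
$$\theta_i = \exp\!\left(\beta_0 + \beta_1\,\mathrm{alcohol}_i + \beta_2\,\mathrm{nomeds}_i + \beta_3\,\mathrm{alcohol}_i \cdot \mathrm{nomeds}_i\right), \qquad \mathrm{nsneeze}_i \sim \mathrm{Poisson}(\theta_i)$$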
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Time-averaged map over the whole experiment
Step2: Time-averaged map of $\chi$
| <ASSISTANT_TASK:>
Python Code:
filename = 'resultat.nc'
import numpy as np
import matplotlib.pyplot as plt
from pylab import *
import cartopy.crs as ccrs
from netCDF4 import Dataset
from scipy.special import logit, expit
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
data = Dataset(filename)
longitude=data.variables['longitude'][:]
latitude=data.variables['latitude'][:]
altitude=data.variables['altitude'][:]
Time = data.variables['Time'][:]
Ls = data.variables['Ls'][:]
dafirst = Time[0]
daint = Time[1] - dafirst
dalast = dafirst + (len(Time)-1)*daint
year = 0.
add = np.linspace(dafirst,dalast,num=len(Time)) ; add[0] = 0.
for iii in range(1,len(Ls)):
if Ls[iii] - Ls[iii-1] < 0: year = year+1.
add[iii] = year*360.
Ls_true = add + Ls
# User parameters --------------------------------------------------
earthtopo = False # overlay the present-day coastlines
varname = 'tsurf'
vmin = 220
vmax = 350
# Code ------------------------------------------------------------
dataplt = data.variables[varname][:,:,:]
fig = plt.figure(figsize=(12,8))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_global()
if (earthtopo): ax.coastlines(resolution="110m",linewidth=1)
gl = ax.gridlines(linestyle='--',color='black',
draw_labels=True)
gl.xlabels_top = False
gl.ylabels_right = False
clevs = np.linspace(vmin,vmax,29)
plt.contourf(longitude, latitude, np.mean(dataplt[:,:,:],axis=0),
clevs, transform=ccrs.PlateCarree(),cmap="jet")
plt.title(r"Température de surface moyenne", size=14)
cb = plt.colorbar(ax=ax, orientation="vertical", pad=0.02, aspect=16, shrink=0.8)
cb.set_label(r'K',size=12,rotation=0,labelpad=15)
cb.ax.tick_params(labelsize=10)
plt.show()
def psatw(temp):
# METHOD GOFF GRATCH (HygroLP) - OVER WATER
# -----------------------------------------
log10ew = -7.90298*(373.16/temp-1) \
+ 5.02808 * np.log10(373.16/temp) \
- 1.3816e-7 * (10**(11.344 * (1-temp/373.16))-1) \
+ 8.1328e-3 * (10**(-3.49149 *(373.16/temp-1))-1) \
+ np.log10(1013.246)
return 100 * (10**(log10ew))
def psati(temp):
# METHOD GOFF GRATCH (HygroLP) - OVER ICE
# ---------------------------------------
log10ei = -9.09718*(273.16/temp-1) \
- 3.56654*np.log10(273.16/temp) \
+ 0.876793*(1-temp/273.16) \
+ np.log10(6.1071)
return 100 * (10**(log10ei))
def leafratio(temp,pvap,alti):
# all inputs in ISU
vpd = (psatw(temp) - pvap)/1e3 # in kPa
xhi=(expit(0.0545*(temp-273.15-25.)-0.5*np.log(vpd)-0.0815*alti*1e-3+1.189))
return xhi
# User parameters --------------------------------------------------
earthtopo = False # overlay the present-day coastlines
vmin = 0.
vmax = 1.
# Code ------------------------------------------------------------
psnc = data.variables['ps'][:,:,:]
tempnc = data.variables['temp'][:,:,:,:]
qvapnc = data.variables['h2o_vap'][:,:,:,:]
phisnc = data.variables['phisinit'][:,:]
pvap = qvapnc[:,0,:,:]*psnc[:,:,:]/0.622 # in Pa
alti = phisnc/9.81 # in metres
# Build the land mask separately so that the true altitude (needed by leafratio) is preserved
mask = np.ones_like(alti)
mask[alti <= 0] = np.nan
dataplt = np.zeros([len(Time),len(longitude),len(latitude)])
for itime in range(len(Time)):
for ilon in range(len(longitude)):
for ilat in range(len(latitude)):
dataplt[itime,ilon,ilat] = leafratio(tempnc[itime,0,ilon,ilat],
pvap[itime,ilon,ilat],
alti[ilon,ilat])
fig = plt.figure(figsize=(12,8))
ax = plt.axes(projection=ccrs.PlateCarree())
ax.set_global()
if (earthtopo): ax.coastlines(resolution="110m",linewidth=1)
gl = ax.gridlines(linestyle='--',color='black',
draw_labels=True)
gl.xlabels_top = False
gl.ylabels_right = False
clevs = np.linspace(vmin,vmax,21)
plt.contourf(longitude, latitude, np.mean(dataplt[:,:,:],axis=0)*mask,
clevs, transform=ccrs.PlateCarree(),cmap="jet")
plt.title(r"Leaf to ambient $CO_2$ partial pressures", size=14)
cb = plt.colorbar(ax=ax, orientation="vertical", pad=0.02, aspect=16, shrink=0.8)
cb.set_label(r'NU',size=12,rotation=0,labelpad=15)
cb.ax.tick_params(labelsize=10)
plt.show()
<END_TASK> |
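Written out, the empirical relation implemented by leafratio above is a logistic (expit) function of temperature, vapour-pressure deficit and altitude (this simply restates the code):
$$\chi = \mathrm{expit}\!\left(0.0545\,(T - 25) - 0.5\,\ln D - 0.0815\, z + 1.189\right) = \frac{1}{1 + e^{-\left(0.0545\,(T - 25) - 0.5\,\ln D - 0.0815\, z + 1.189\right)}}$$
where $T$ is the surface air temperature in °C, $D = p_\mathrm{sat,w}(T) - p_\mathrm{vap}$ is the vapour-pressure deficit in kPa and $z$ is the altitude in km.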
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: We first define a function to prepare the data in the format expected by keras (theano). The function also reduces the size of the images from 100x100 to 32x32.
Step2: We then load the training set and the test set and prepare them with the function prep_datas.
Step3: Image before/after compression
Step4: Lenet neural network
Step5: We build the neural network and fit it on the training set
Step6: We now compare with the real world images (with the deshear method)
Step7: and compare them against the labels provided by Peter
| <ASSISTANT_TASK:>
Python Code:
import os
import numpy as np
import tools as im
from matplotlib import pyplot as plt
from skimage.transform import resize
%matplotlib inline
path=os.getcwd()+'/' # finds the path of the folder in which the notebook is
path_train=path+'images/train/'
path_test=path+'images/test/'
path_real=path+'images/real_world/'
def prep_datas(xset,xlabels):
X=list(xset)
for i in range(len(X)):
X[i]=resize(X[i],(32,32,1)) #reduce the size of the image from 100X100 to 32X32. Also flattens the color levels
    X=np.reshape(X,(len(X),1,32,32)) # reshape the list to the form required by keras (theano), i.e. (1,32,32) per sample
X=np.array(X) #transforms it into an array
Y = np.eye(2, dtype='uint8')[xlabels] # generates vectors, here of two elements as required by keras (number of classes)
return X,Y
training_set, training_labels = im.load_images(path_train)
test_set, test_labels = im.load_images(path_test)
X_train,Y_train=prep_datas(training_set,training_labels)
X_test,Y_test=prep_datas(test_set,test_labels)
i=11
plt.subplot(1,2,1)
plt.imshow(training_set[i],cmap='gray')
plt.subplot(1,2,2)
plt.imshow(X_train[i][0],cmap='gray')
# import the necessary packages
from keras.models import Sequential
from keras.layers.convolutional import Convolution2D
from keras.layers.convolutional import MaxPooling2D
from keras.layers.core import Activation
from keras.layers.core import Flatten
from keras.layers.core import Dense
from keras.optimizers import SGD
# this code comes from http://www.pyimagesearch.com/2016/08/01/lenet-convolutional-neural-network-in-python/
class LeNet:
@staticmethod
def build(width, height, depth, classes, weightsPath=None):
# initialize the model
model = Sequential()
# first set of CONV => RELU => POOL
model.add(Convolution2D(20, 5, 5, border_mode="same",input_shape=(depth, height, width)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
# second set of CONV => RELU => POOL
model.add(Convolution2D(50, 5, 5, border_mode="same"))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
# set of FC => RELU layers
model.add(Flatten())
model.add(Dense(500))
model.add(Activation("relu"))
# softmax classifier
model.add(Dense(classes))
model.add(Activation("softmax"))
# return the constructed network architecture
return model
model = LeNet.build(width=32, height=32, depth=1, classes=2)
opt = SGD(lr=0.01)#Sochastic gradient descent with learning rate 0.01
model.compile(loss="categorical_crossentropy", optimizer=opt,metrics=["accuracy"])
model.fit(X_train, Y_train, batch_size=10, nb_epoch=300,verbose=1)
y_pred = model.predict_classes(X_test)
print(y_pred)
print(test_labels)
real_world_set=[]
for i in np.arange(1,73):
filename=path+'images/real_world/'+str(i)+'.png'
real_world_set.append(im.deshear(filename))
fake_label=np.ones(len(real_world_set),dtype='int32')
X_real,Y_real=prep_datas(real_world_set,fake_label)
y_pred = model.predict_classes(X_real)
f=open(path+'images/real_world/labels.txt',"r")
lines=f.readlines()
result=[]
for x in lines:
result.append((x.split(' ')[1]).replace('\n',''))
f.close()
result=np.array([int(x) for x in result])
result[result>1]=1
plt.plot(y_pred,'o')
plt.plot(2*result,'o')
plt.ylim(-0.5,2.5);
<END_TASK> |
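The last plot compares the predictions and Peter's labels visually; a numeric summary is easy to add on top of the arrays already built in the task above (scikit-learn is an extra dependency here, used only for convenience):
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

# y_pred comes from model.predict_classes on the real-world images,
# result holds the reference labels read from labels.txt (both are 0/1 arrays)
y_pred_flat = np.ravel(y_pred)
print('accuracy:', accuracy_score(result, y_pred_flat))
print(confusion_matrix(result, y_pred_flat))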
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: MIDAS ADL
Step2: Figure 1
Step3: Mixing Frequencies
Step4: The arguments here are as follows
Step5: You can also call forecast directly. This will use the optimization results returned from estimate to produce a forecast for every date in the index of the forecast inputs (here xf and ylf)
Step6: Comparison against univariate ARIMA model
Step7: The midas_adl function
Step8: You can also change the polynomial used to weight the MIDAS coefficients. The default is 'beta', but you can also specify exponential Almon weighting ('expalmon') or beta with non-zero last term ('betann')
Step9: Rolling and Recursive Forecasting
Step10: The recursive method is similar except that the start date does not change, so the range over which the fitting happens increases for each new forecast.
Step11: Nowcasting
Step12: Not surprisingly the RMSE drops considerably.
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import datetime
import numpy as np
import pandas as pd
from midas.mix import mix_freq
from midas.adl import estimate, forecast, midas_adl, rmse
gdp = pd.read_csv('../tests/data/gdp.csv', parse_dates=['DATE'], index_col='DATE')
pay = pd.read_csv('../tests/data/pay.csv', parse_dates=['DATE'], index_col='DATE')
gdp.tail()
pay.tail()
gdp_yoy = ((1. + (np.log(gdp.GDP) - np.log(gdp.GDP.shift(3)))) ** 4) - 1.
emp_yoy = ((1. + (np.log(pay.PAY) - np.log(pay.PAY.shift(1)))) ** 12) - 1.
df = pd.concat([gdp_yoy, emp_yoy], axis=1)
df.columns = ['gdp_yoy', 'emp_yoy']
df[['gdp_yoy','emp_yoy']].loc['1980-1-1':].plot(figsize=(15,4), style=['o','-'])
gdp['gdp_growth'] = (np.log(gdp.GDP) - np.log(gdp.GDP.shift(1))) * 100.
pay['emp_growth'] = (np.log(pay.PAY) - np.log(pay.PAY.shift(1))) * 100.
y, yl, x, yf, ylf, xf = mix_freq(gdp.gdp_growth, pay.emp_growth, "3m", 1, 3,
start_date=datetime.datetime(1985,1,1),
end_date=datetime.datetime(2009,1,1))
x.head()
res = estimate(y, yl, x, poly='beta')
res.x
fc = forecast(xf, ylf, res, poly='beta')
forecast_df = fc.join(yf)
forecast_df['gap'] = forecast_df.yfh - forecast_df.gdp_growth
forecast_df
gdp.join(fc)[['gdp_growth','yfh']].loc['2005-01-01':].plot(style=['-o','-+'], figsize=(12, 4))
import statsmodels.tsa.api as sm
m = sm.AR(gdp['1975-01-01':'2011-01-01'].gdp_growth,)
r = m.fit(maxlag=1)
r.params
fc_ar = r.predict(start='2005-01-01')
fc_ar.name = 'xx'
df_p = gdp.join(fc)[['gdp_growth','yfh']]
df_p.join(fc_ar)[['gdp_growth','yfh','xx']].loc['2005-01-01':].plot(style=['-o','-+'], figsize=(12, 4))
rmse_fc, fc = midas_adl(gdp.gdp_growth, pay.emp_growth,
start_date=datetime.datetime(1985,1,1),
end_date=datetime.datetime(2009,1,1),
xlag="3m",
ylag=1,
horizon=3)
rmse_fc
rmse_fc, fc = midas_adl(gdp.gdp_growth, pay.emp_growth,
start_date=datetime.datetime(1985,1,1),
end_date=datetime.datetime(2009,1,1),
xlag="3m",
ylag=1,
horizon=3,
poly='expalmon')
rmse_fc
results = {h: midas_adl(gdp.gdp_growth, pay.emp_growth,
start_date=datetime.datetime(1985,10,1),
end_date=datetime.datetime(2009,1,1),
xlag="3m",
ylag=1,
horizon=3,
forecast_horizon=h,
poly='beta',
method='rolling') for h in (1, 2, 5)}
results[1][0]
results = {h: midas_adl(gdp.gdp_growth, pay.emp_growth,
start_date=datetime.datetime(1985,10,1),
end_date=datetime.datetime(2009,1,1),
xlag="3m",
ylag=1,
horizon=3,
forecast_horizon=h,
poly='beta',
method='recursive') for h in (1, 2, 5)}
results[1][0]
rmse_fc, fc = midas_adl(gdp.gdp_growth, pay.emp_growth,
start_date=datetime.datetime(1985,1,1),
end_date=datetime.datetime(2009,1,1),
xlag="3m",
ylag=1,
horizon=1)
rmse_fc
cpi = pd.read_csv('CPIAUCSL.csv', parse_dates=['DATE'], index_col='DATE')
ffr = pd.read_csv('DFF_2_Vintages_Starting_2009_09_28.txt', sep='\t', parse_dates=['observation_date'],
index_col='observation_date')
cpi.head()
ffr.head(10)
cpi_yoy = ((1. + (np.log(cpi.CPIAUCSL) - np.log(cpi.CPIAUCSL.shift(1)))) ** 12) - 1.
cpi_yoy.head()
df = pd.concat([cpi_yoy, ffr.DFF_20090928 / 100.], axis=1)
df.columns = ['cpi_growth', 'dff']
df.loc['1980-1-1':'2010-1-1'].plot(figsize=(15,4), style=['-+','-.'])
cpi_growth = (np.log(cpi.CPIAUCSL) - np.log(cpi.CPIAUCSL.shift(1))) * 100.
y, yl, x, yf, ylf, xf = mix_freq(cpi_growth, ffr.DFF_20090928, "1m", 1, 1,
start_date=datetime.datetime(1975,10,1),
end_date=datetime.datetime(1991,1,1))
x.head()
res = estimate(y, yl, x)
fc = forecast(xf, ylf, res)
fc.join(yf).head()
pd.concat([cpi_growth, fc],axis=1).loc['2008-01-01':'2010-01-01'].plot(style=['-o','-+'], figsize=(12, 4))
results = {h: midas_adl(cpi_growth, ffr.DFF_20090928,
start_date=datetime.datetime(1975,7,1),
end_date=datetime.datetime(1990,11,1),
xlag="1m",
ylag=1,
horizon=1,
forecast_horizon=h,
method='rolling') for h in (1, 2, 5)}
(results[1][0], results[2][0], results[5][0])
results[1][1].plot(figsize=(12,4))
results = {h: midas_adl(cpi_growth, ffr.DFF_20090928,
start_date=datetime.datetime(1975,10,1),
end_date=datetime.datetime(1991,1,1),
xlag="1m",
ylag=1,
horizon=1,
forecast_horizon=h,
method='recursive') for h in (1, 2, 5)}
results[1][0]
results[1][1].plot()
<END_TASK> |
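When several forecast horizons are run as above, it can be handy to tabulate the RMSEs in one frame. The snippet below only indexes the results dictionary from the recursive CPI experiment, whose values are (rmse, forecast) tuples:
import pandas as pd

# One row per forecast horizon, with the corresponding RMSE
pd.DataFrame({'rmse': {h: res[0] for h, res in results.items()}})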
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The modified SVHN dataset can be found at
Step2: Below is a skeleton of the SVHN data iterator for you to fill out, with notes to help along the way. The goal is an object that returns, with each call, a tuple of (X, Y) for the input and the target bounding boxes.
Step3: Check your implementation! Below we grab an iteration and print out the output of the dataset. Importantly
Step4: If all goes well, you are ready to try training on this network! First, let's reset the dataset to zero (since you drew one example from above). We also add a test set for evaluation.
Step5: Model architecture
Step6: Below we plot the cost data over time to help you visualize the training progress. This is similar to using the nvis command line tool to generate plots.
Step7: To understand how the network performed, we sample images and plot the network's predicted bounding box against the ground truth bounding box. We evaluate this on the test_set, which was not used to train the network.
Step8: Our ground truth box T and the model prediction y are both arrays of size (4, batch_size). We can plot an image below. Feel free to modify i to check performance on various test images. Red boxes are the model's guess, and blue boxes are the ground truth boxes.
| <ASSISTANT_TASK:>
Python Code:
from neon.backends import gen_backend
be = gen_backend(batch_size=128, backend='gpu')
# set the debug level to 10 (the minimum)
# to see all the output
import logging
main_logger = logging.getLogger('neon')
main_logger.setLevel(10)
import cPickle
fileName = 'data/svhn_64.p'
print("Loading {}...".format(fileName))
with open(fileName) as f:
svhn = cPickle.load(f)
# import some useful packages
from neon.data import NervanaDataIterator
import numpy as np
import cPickle
import os
class SVHN(NervanaDataIterator):
def __init__(self, X, Y, lshape):
# Load the numpy data into some variables. We divide the image by 255 to normalize the values
# between 0 and 1.
self.X = X / 255.
self.Y = Y
self.shape = lshape # shape of the input data (e.g. for images, (C, H, W))
# 1. assign some required and useful attributes
self.start = 0 # start at zero
self.ndata = ... # number of images in X (hint: use X.shape)
self.nfeatures = ... # number of features in X (hint: use X.shape)
# number of minibatches per epoch
# to calculate this, use the batchsize, which is stored in self.be.bsz
self.nbatches = ...
# 2. allocate memory on the GPU for a minibatch's worth of data.
# (e.g. use `self.be` to access the backend.). See the backend documentation.
# to get the minibatch size, use self.be.bsz
# hint: X should have shape (# features, mini-batch size)
# hint: use some of the attributes previously defined above
self.dev_X = ...
self.dev_Y = ...
def reset(self):
self.start = 0
def __iter__(self):
# 3. loop through minibatches in the dataset
for index in range(self.start, self.ndata, self.be.bsz):
# 3a. grab the right slice from the numpy arrays
inputs = ...
targets = ...
# The arrays X and Y data are in shape (batch_size, num_features),
# but the iterator needs to return data with shape (num_features, batch_size).
# here we transpose the data, and then store it as a contiguous array.
# numpy arrays need to be contiguous before being loaded onto the GPU.
inputs = np.ascontiguousarray(inputs.T)
targets = np.ascontiguousarray(targets.T)
# here we test your implementation
# your slice has to have the same shape as the GPU tensors you allocated
assert inputs.shape == self.dev_X.shape, \
"inputs has shape {}, but dev_X is {}".format(inputs.shape, self.dev_X.shape)
assert targets.shape == self.dev_Y.shape, \
"targets has shape {}, but dev_Y is {}".format(targets.shape, self.dev_Y.shape)
# 3b. transfer from numpy arrays to device
# - use the GPU memory buffers allocated previously,
# and call the myTensorBuffer.set() function.
self.dev_X ...
self.dev_Y ...
# 3c. yield a tuple of the device tensors.
# the first should of shape (num_features, batch_size)
# the second should of shape (4, batch_size)
yield (..., ...)
# setup datasets
train_set = SVHN(X=svhn['X_train'], Y=svhn['y_train'], lshape=(3, 64, 64))
# grab one iteration from the train_set
iterator = train_set.__iter__()
(X, Y) = iterator.next()
print X # this should be shape (12288, 128)
print Y # this should be shape (4, 128)
assert X.is_contiguous
assert Y.is_contiguous
train_set.reset()
# generate test set
test_set = SVHN(X=svhn['X_test'], Y=svhn['y_test'], lshape=(3, 64, 64))
from neon.callbacks.callbacks import Callbacks
from neon.initializers import Gaussian
from neon.layers import GeneralizedCost, Affine, Conv, Pooling, Linear, Dropout
from neon.models import Model
from neon.optimizers import GradientDescentMomentum, RMSProp
from neon.transforms import Rectlin, Logistic, CrossEntropyMulti, Misclassification, SumSquared
# set up weight initializer
...
# set up model layers
layers = []
layers.append(....)
# the last layer should be a linear layer with nout=4, for the 4 coordinates of the bounding box.
layers.append(Linear(nout=4, init=Gaussian(loc=0.0, scale=0.01)))
# use SumSquared cost
cost = GeneralizedCost(costfunc=SumSquared())
# setup optimizer
optimizer = RMSProp()
# initialize model object
mlp = Model(layers=layers)
# configure callbacks
callbacks = Callbacks(mlp, eval_set=test_set, eval_freq=1)
# run fit
mlp.fit(train_set, optimizer=optimizer, num_epochs=10, cost=cost, callbacks=callbacks)
from neon.visualizations.figure import cost_fig, hist_fig, deconv_summary_page
from neon.visualizations.data import h5_cost_data, h5_hist_data, h5_deconv_data
from bokeh.plotting import output_notebook, show
cost_data = h5_cost_data('data.h5', False)
output_notebook()
show(cost_fig(cost_data, 300, 600, epoch_axis=False))
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
# get a minibatch's worth of
# inputs (X) and targets (T)
iterator = test_set.__iter__()
(X, T) = iterator.next()
# fprop the input to get the model output
y = mlp.fprop(X)
# transfer from device to numpy arrays
y = y.get()
T = T.get()
plt.figure(2)
imgs_to_plot = [0, 1, 2, 3]
for i in imgs_to_plot:
plt.subplot(2, 2, i+1)
title = "test {}".format(i)
plt.imshow(X.get()[:, i].reshape(3, 64, 64).transpose(1, 2, 0))
ax = plt.gca()
ax.add_patch(plt.Rectangle((y[0,i], y[1,i]), y[2,i], y[3,i], fill=False, edgecolor="red")) # model guess
ax.add_patch(plt.Rectangle((T[0,i], T[1,i]), T[2,i], T[3,i], fill=False, edgecolor="blue")) # ground truth
plt.title(title)
plt.axis('off')
i=0
print "Target box had coordinates: {}".format(T[:,i])
print "Model prediction has coordinates: {}".format(y[:, i])
<END_TASK> |
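The iterator skeleton above is deliberately left with ... blanks as an exercise. For reference, one possible completion is sketched below. It keeps the attribute names and slicing hints from the skeleton; the be.iobuf call is my assumption for allocating a (features, batch_size) device buffer and may need adjusting for your neon version.
from neon.data import NervanaDataIterator
import numpy as np

class SVHNFilled(NervanaDataIterator):
    # One possible completion of the SVHN iterator skeleton (illustrative sketch)
    def __init__(self, X, Y, lshape):
        self.X = X / 255.
        self.Y = Y
        self.shape = lshape
        # 1. bookkeeping attributes
        self.start = 0
        self.ndata = self.X.shape[0]                # number of images
        self.nfeatures = self.X.shape[1]            # 3 * 64 * 64 = 12288
        self.nbatches = self.ndata // self.be.bsz   # full minibatches per epoch
        # 2. device buffers: (num_features, batch_size) and (4, batch_size)
        self.dev_X = self.be.iobuf(self.nfeatures)  # assumed backend call
        self.dev_Y = self.be.iobuf(self.Y.shape[1])

    def reset(self):
        self.start = 0

    def __iter__(self):
        for index in range(self.start, self.ndata, self.be.bsz):
            # 3a. slice a minibatch and transpose to (features, batch)
            inputs = np.ascontiguousarray(self.X[index:index + self.be.bsz].T)
            targets = np.ascontiguousarray(self.Y[index:index + self.be.bsz].T)
            # 3b. host -> device copy
            self.dev_X.set(inputs)
            self.dev_Y.set(targets)
            # 3c. hand back the device tensors
            yield (self.dev_X, self.dev_Y)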
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Problème
Step2: Idée de la solution
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from jyquickhelper import add_notebook_menu
add_notebook_menu()
from IPython.display import Image
Image("http://www.xavierdupre.fr/app/code_beatrix/helpsphinx/_images/biodiversite_tri2.png")
from pyquickhelper.helpgen import NbImage
NbImage("data/hexa.png")
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: There are a few frequency bands that are already part of the possum class
Step2: Polarization Spectra
Step3: Faraday Rotation
Step4: Generating Parameters for a Large Number of Spectra
Step5: and output the parameters by printing the appropriate variables. Note that the complex source parameters are stored as tuples.
Step6: Generating a Large Number of Spectra
Step7: The real, imaginary, and absolute values of the Faraday spectrum are shown in the plot below
Step8: This concludes our discussion of the data generation process. In the next section, we will look at creating a 1D convolutional neural network (CNN) using the inception class, as well as how to utilize some of the analysis and plot functions that come with it.
Step9: We can create a simple inception layer using the _inception function, whose call is of the form
Step10: For the training dataset, the following variables are created
Step11: If we want more layers, we can just call the inception function more times. Note, however, that the function _inception is only a basic block and doesn't implement batch normalization, and consequently is only able to produce workable models that contain up to 2 inception layers. The function _inception2 implements batch normalization along with an activation function, so this function should be used when constructing deeper networks.
Step12: We can then add a couple fully connected (Dense) layers using the function _dense,
Step13: To complete our model, we can compile it using the function _compile, whose function call is
Step14: In the plots module, there is a function
Step15: <img src="tmp/i1layer.png" width="500" />
Step16: <img src="tmp/i5layer.png" width="500" />
Step17: In this case the model was subjected to Early Stopping by monitoring the validation loss with a patience of 25 epochs. The lowest validation loss occurred at epoch = 51 (index starting at 0), although there is a slight downward trend, suggesting we may be able to get slightly better results by running for a larger number of epochs. However, running more epochs was found to produce only a marginal improvement at best.
Step18: To evaluate its performance, we can load in the test set using the _loadTest function and passing in the appropriate directory, which in this case is "data/test"
Step19: We can then use our trained model to predict the type of faraday spectrum for each instance using a 50% threshold for being classified as a complex source
Step20: and check the confusion matrix, which has already been loaded from sklearn
Step21: So most of the sources that were predicted to be complex aren't complex due to the large class imbalance. Another way to evaluate the model performace is to plot an ROC curve, which is conveniently included in the plots module as the function _plotROC. The area under the curve (auc) is computed and stored in the variable self.auc_, so can can check this value as well.
Step22: We can separate the complex and simple sources and store their parameters in a dataframe by calling the functions _getComplexParams and _getSimpleParams, which create the pandas dataframes self.dfComplex_ and self.dfSimple_ respectively. These can be created at the same time using the function
Step23: We can do some simple comparisons to see how the various parameters (chi, depth, flux, sig) are related to the probability outputs for the complex sources using the _plotParamProb function, whose function call is as follows
Step24: The model has a steep drop once the fluxes get below 0.2, but is doing quite well otherwise. Let's check how the model performs on the (absolute) difference in the two Faraday components.
| <ASSISTANT_TASK:>
Python Code:
from possum import *
spec = possum()
spec._createASKAP12()
print('Min Frequency (Hz): {:e}'.format(spec.nu_.min()))
print('Max Frequency (Hz): {:e}'.format(spec.nu_.max()))
spec = possum()
spec._createFrequency(600, 800, 100, store=True)
# ==========================================================
# Convert frequency to MHz for plotting purposes
# ==========================================================
nu = spec.nu_ / 1e6
plt.figure(1, figsize=(10,10))
plt.rc('text', usetex=True)
plt.rc('font', family='serif', size=18)
# ==========================================================
# Plot the polarization spectrum of the simple source
# ==========================================================
flux1 = 1
depth1 = 30
chi1 = 0
sig = 0.1
spec._createNspec(flux1, depth1, chi1, sig)
plt.subplot(2,1,1)
plt.errorbar(nu, spec.polarization_.real, yerr=sig, label=r'$Q$', color='dodgerblue')
plt.errorbar(nu, spec.polarization_.imag, yerr=sig, label=r'$U$', color='darkorange')
plt.xlabel(r'$\nu \: \left(\mathrm{MHz}\right)$', fontsize=20)
plt.ylabel(r'$P_\mathrm{\nu} \: \left[\rm Jy \, / \, beam\right]$', fontsize=20)
plt.title(r'$\rm Simple \, Source$', fontsize=25)
plt.tight_layout()
plt.legend(loc='best', fontsize=15)
# ==========================================================
# Plot the polarization spectrum of the complex source
# ==========================================================
flux2 = [1, 0.5]
depth2 = [30, -10]
chi2 = [0, 0.5]
sig = 0.1
spec._createNspec(flux2, depth2, chi2, sig)
plt.subplot(2,1,2)
plt.errorbar(nu, spec.polarization_.real, yerr=sig, label=r'$Q$', color='dodgerblue')
plt.errorbar(nu, spec.polarization_.imag, yerr=sig, label=r'$U$', color='darkorange')
plt.xlabel(r'$\nu \: \left(\mathrm{MHz}\right)$', fontsize=20)
plt.ylabel(r'$P_\mathrm{\nu} \: \left[\rm Jy \, / \, beam\right]$', fontsize=20)
plt.title(r'$\rm Complex \, Source$', fontsize=25)
plt.tight_layout()
plt.legend(loc='best', fontsize=15)
plt.show()
spec._createFaradaySpectrum(-100, 100)
plt.figure(1, figsize=(8, 6))
# ==================================================
# Plot the real, imag, and abs value
# ==================================================
plt.plot(spec.phi_, spec.faraday_.real, label=r'$\rm real$', color='dodgerblue')
plt.plot(spec.phi_, spec.faraday_.imag, label=r'$\rm imag$', color='darkorange')
plt.plot(spec.phi_, np.abs(spec.faraday_), label=r'$\rm abs$', color='black', linestyle='dashed')
# ==================================================
# Add the two faraday depths to the plot
# ==================================================
plt.vlines(x=depth2, ymin=-1, ymax=1, linestyle='dashed', color='forestgreen', alpha=0.5)
# ==================================================
# Add labels and display
# ==================================================
plt.xlabel(r'$\phi \: \left[\rm rad \, m^{2}\right]$', fontsize=20)
plt.ylabel(r'$F\left(\phi\right) \: \left[\mathrm{Jy \, / \, beam}\right]$', fontsize=20)
plt.legend(loc='best', fontsize=15)
plt.tight_layout()
plt.show()
spec = possum()
spec._generateParams(5, pcomplex=0.5)
print('Label: {:}'.format(spec.label_))
print('\nChi:')
for chi in spec.chi_:
print(chi)
print('\nDepth:')
for depth in spec.depth_:
print(depth)
print('\nFlux:')
for flux in spec.flux_:
print(flux)
print('\nSig: {:}'.format(spec.sig_))
spec = possum()
spec._simulateNspec(N=6, pcomplex=1.0, width=250)
print("Shape of X_: {:}".format(spec.X_.shape))
print("Shape of S_: {:}".format(spec.S_.shape))
plt.figure(figsize=(12,15))
for s, _ in enumerate(spec.label_):
plt.subplot(3,2,s+1)
plt.plot(spec.phi_, spec.X_[s,:,0], color='dodgerblue', label=r'$\rm real$')
plt.plot(spec.phi_, spec.X_[s,:,1], color='darkorange', label=r'$\rm imag$')
plt.plot(spec.phi_, np.abs(spec.X_[s,:,0] + 1j*spec.X_[s,:,1]), 'k--', label=r'$\rm abs$')
plt.vlines(x=spec.depth_[s], ymin=-1.2, ymax=1.2, linestyle='dashed', color='forestgreen', alpha=0.5)
plt.xlim(-100, 100)
plt.ylim(-1, 1)
plt.xlabel(r'$\phi \: \left[\rm rad \, m^{2}\right]$', fontsize=20)
plt.ylabel(r'$F\left(\phi\right) \: \left[\mathrm{Jy \, / \, beam}\right]$', fontsize=20)
plt.legend(loc='best', fontsize=14)
plt.suptitle(r'$\rm Complex \, Sources$', fontsize=25)
plt.tight_layout()
plt.subplots_adjust(top=0.93)
plt.show()
from inception import *
cnn = inception()
cnn._loadTrain('data/train')
cnn._loadValid('data/valid')
cnn._inception(convl=[3,5,23])
cnn._flatten()
cnn._dense(512, act='elu', drop=0.5, nlayers=2)
cnn._compile()
cnn._plotCNN('tmp/i1layer.png')
from inception import *
cnn = inception()
cnn._loadTrain('data/train/')
cnn._inception2(convl=[3,5,7], pool=[3,7])
cnn._inception2(convl=[3,5,9], pool=[3,7])
cnn._inception2(convl=[3,5,13], pool=[3,7])
cnn._inception2(convl=[3,5,17], pool=[3,7])
cnn._inception2(convl=[3,5,22], pool=[3,7])
cnn._convl()
cnn._flatten()
cnn._dense(512, act='elu', drop=0.5, nlayers=3)
cnn._compile(optimizer='rmsprop')
cnn._plotCNN(to_file='tmp/i5layer.png')
from inception import *
cnn = inception()
cnn._loadLog('models/log/i2layer_final.log')
cnn._plotAcc()
cnn._plotLoss()
cnn._loadModel('models/model/i2layer_final.h5')
cnn._loadTest('data/test/')
cnn._test(prob=0.50)
confusion_matrix(cnn.testLabel_, cnn.testPred_)
cnn._plotROC()
print(cnn.auc_)
cnn._getComplexParams()
cnn._getSimpleParams()
cnn._getParams()
print("Complex: \n{:}".format(cnn.dfComplex_.head()))
print("\nSimple: \n{:}".format(cnn.dfSimple_.head()))
print("\nCombined: \n{:}".format(cnn.dfParams_.head()))
cnn._plotParamProb('flux', kind='hex', gridsize=15)
cnn._plotParamProb('depth', kind='hex', gridsize=20)
plt.figure(1)
plt.scatter(cnn.dfSimple_['sig'], cnn.dfSimple_['prob'], color='dodgerblue', alpha=0.05, s=10)
plt.scatter(cnn.dfComplex_['sig'], cnn.dfComplex_['prob'], color='orange', alpha=0.05, s=20)
plt.xlabel(r'$\sigma$', fontsize=20)
plt.ylabel(r'$p_\mathrm{complex}$', fontsize=20)
plt.show()
<END_TASK> |
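For orientation, the spectra generated by the possum class in the task above follow the usual rotation-measure conventions; the relations below are the standard textbook definitions rather than a transcription of the possum source, so treat the exact normalisation as an assumption:
$$P(\lambda^2) = \sum_j f_j\, e^{\,2i\,(\chi_j + \phi_j \lambda^2)}, \qquad F(\phi) \propto \int_{-\infty}^{\infty} P(\lambda^2)\, e^{-2i\phi\lambda^2}\,\mathrm{d}\lambda^2$$
where $f_j$, $\chi_j$ and $\phi_j$ are the flux, polarisation angle and Faraday depth of component $j$; a single component gives a single peak in $|F(\phi)|$, while a two-component (complex) source produces the double-peaked structure seen in the plots.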
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Computing the trajectories and plotting the result
Step3: Let's call the function once to view the solutions. For this set of parameters, we see the trajectories swirling around two points, called attractors.
Step4: Using IPython's interactive function, we can explore how the trajectories behave as we change the various parameters.
Step5: The object returned by interactive is a Widget object and it has attributes that contain the current result and arguments
Step6: After interacting with the system, we can take the result and perform further computations. In this case, we compute the average positions in \(x\), \(y\) and \(z\).
Step7: Creating histograms of the average positions (across different trajectories) show that on average the trajectories swirl about the attractors.
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
from ipywidgets import interact, interactive
from IPython.display import clear_output, display, HTML
import numpy as np
from scipy import integrate
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from matplotlib.colors import cnames
from matplotlib import animation
def solve_lorenz(N=10, angle=0.0, max_time=4.0, sigma=10.0, beta=8./3, rho=28.0):
fig = plt.figure()
ax = fig.add_axes([0, 0, 1, 1], projection='3d')
ax.axis('off')
# prepare the axes limits
ax.set_xlim((-25, 25))
ax.set_ylim((-35, 35))
ax.set_zlim((5, 55))
def lorenz_deriv(x_y_z, t0, sigma=sigma, beta=beta, rho=rho):
        """Compute the time-derivative of a Lorenz system."""
x, y, z = x_y_z
return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]
# Choose random starting points, uniformly distributed from -15 to 15
np.random.seed(1)
x0 = -15 + 30 * np.random.random((N, 3))
# Solve for the trajectories
t = np.linspace(0, max_time, int(250*max_time))
x_t = np.asarray([integrate.odeint(lorenz_deriv, x0i, t)
for x0i in x0])
# choose a different color for each trajectory
colors = plt.cm.viridis(np.linspace(0, 1, N))
for i in range(N):
x, y, z = x_t[i,:,:].T
lines = ax.plot(x, y, z, '-', c=colors[i])
plt.setp(lines, linewidth=2)
ax.view_init(30, angle)
plt.show()
return t, x_t
t, x_t = solve_lorenz(angle=0, N=10)
w = interactive(solve_lorenz, angle=(0.,360.), max_time=(0.1, 4.0),
N=(0,50), sigma=(0.0,50.0), rho=(0.0,50.0))
display(w)
t, x_t = w.result
w.kwargs
xyz_avg = x_t.mean(axis=1)
xyz_avg.shape
plt.hist(xyz_avg[:,0])
plt.title('Average $x(t)$');
plt.hist(xyz_avg[:,1])
plt.title('Average $y(t)$');
<END_TASK> |
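The two points the trajectories swirl around are the non-trivial fixed points of the Lorenz system, $C_\pm = (\pm\sqrt{\beta(\rho-1)},\, \pm\sqrt{\beta(\rho-1)},\, \rho-1)$ for $\rho > 1$ (a standard result, not computed in the notebook). A quick check with the default parameters used above:
import numpy as np

sigma, beta, rho = 10.0, 8.0 / 3.0, 28.0
r = np.sqrt(beta * (rho - 1.0))
c_plus = (r, r, rho - 1.0)
c_minus = (-r, -r, rho - 1.0)
print(c_plus, c_minus)  # the z averages in the histograms cluster near rho - 1 = 27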
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Now we can plot the results. The Probe records the vector coming out of the answer variable, so in order to interpret that we can do the dot-product of that vector with the BLUE and CIRCLE vectors in the Vocabulary.
Step2: It works! When the query changes from CIRCLE to BLUE at t=5, we get the desired change in output.
Step3: Timesteps
Step4: Of course, even if we do specify that we want a one-time-step delay on each connection, the compiler would still be free to do this sort of optimization; it would just have to add in some delays to make sure everything works out the way the user expects.
Step5: Now that we have one way of controlling the flow of time in our network, we can use this to start implementing algorithms. For example, let's turn our combined variable into a memory. That is, rather than having the variable go back to zero if the input is removed, let's have it remember its value. An easy way to do this is just to connect it back to itself. But, of course, when we do this we need to specify that there should be a one-time-step delay here (i.e. the input should be the output on the previous time-step). (If you try to make a recurrent loop like this all with synapse=None, the compiler will give you an error since that's impossible).
Step6: Overall, I find this ability to control the flow of time to be extremely important when building up VSA-style algorithms. However, it is unclear to me what the best syntax for this sort of thing might be. The syntax shown above (i.e. setting the synapse parameter) is rather opaque and confusing to new readers. However, one other option that makes things a bit more clear would be to use a context manager, so you could do something like this
Step7: Another possibility that has been talked about, but not implemented yet, is to try something like this
Step8: Non-Linear Mapping Between VSAs
Step9: More advanced control flow
| <ASSISTANT_TASK:>
Python Code:
import nengo_spa as spa
import nengo
model = spa.Network()
with model:
# configure Nengo to just directly conpute things, rather than trying to implement the
# network with neurons
model.config[nengo.Ensemble].neuron_type = nengo.Direct()
model.config[nengo.Connection].synapse = None
# This defines the VSA to use, its dimensionality, and maintains the map from symbols to vectors
# If `strict` is True, then you have to predefine all symbols; otherwise it'll randomly generate
# new ones as you use them. The default VSA to use is HRR.
vocab = spa.Vocabulary(64, strict=False)
# Define the variables, and indicate they're all using the same VSA and vocabulary
color = spa.State(vocab)
shape = spa.State(vocab)
combined = spa.State(vocab)
query = spa.State(vocab)
answer = spa.State(vocab)
# Connect it up
color * shape >> combined
combined * ~query >> answer
# Define the inputs
# Note that it should be possible to infer what the vocabulary should be based on where we're
# connecting it to, but Nengo doesn't do that at the moment, so we have to specify that explicitly
# Also note that you can specify constants either as strings or via `spa.sym.X`
spa.Transcode(spa.sym.BLUE, output_vocab=vocab) >> color
spa.Transcode('CIRCLE', output_vocab=vocab) >> shape
spa.Transcode(lambda t: spa.sym.CIRCLE if t<=5 else spa.sym.BLUE, output_vocab=vocab) >> query
p_answer = nengo.Probe(answer.output)
sim = nengo.Simulator(model, dt=1, optimize=False)
sim.run(10)
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
plt.figure(figsize=(6,1.5), dpi=200)
plt.plot(sim.trange(), np.dot(sim.data[p_answer], vocab.vectors.T))
plt.legend(vocab.keys())
plt.xlabel('time')
plt.ylabel('similarity')
plt.show()
import nengo_gui.ipython
nengo_gui.ipython.InlineGUI(model)
import nengo_spa as spa
import nengo
model = spa.Network()
with model:
model.config[nengo.Ensemble].neuron_type = nengo.Direct()
model.config[nengo.Connection].synapse = None
vocab = spa.Vocabulary(64, strict=False)
color = spa.State(vocab)
shape = spa.State(vocab)
query = spa.State(vocab)
answer = spa.State(vocab)
(color*shape)*~query >> answer
spa.Transcode(spa.sym.BLUE, output_vocab=vocab) >> color
spa.Transcode('CIRCLE', output_vocab=vocab) >> shape
spa.Transcode(lambda t: spa.sym.CIRCLE if t<=5 else spa.sym.BLUE, output_vocab=vocab) >> query
p_answer = nengo.Probe(answer.output)
sim = nengo.Simulator(model, dt=1, optimize=False)
sim.run(10)
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
plt.figure(figsize=(6,1.5), dpi=200)
plt.plot(sim.trange(), np.dot(sim.data[p_answer], vocab.vectors.T))
plt.legend(vocab.keys())
plt.xlabel('time')
plt.ylabel('similarity')
plt.show()
import nengo_spa as spa
import nengo
model = spa.Network()
with model:
model.config[nengo.Ensemble].neuron_type = nengo.Direct()
model.config[nengo.Connection].synapse = 0 # this actually means one timestep!
vocab = spa.Vocabulary(64, strict=False)
color = spa.State(vocab)
shape = spa.State(vocab)
query = spa.State(vocab)
answer = spa.State(vocab)
(color*shape)*~query >> answer
spa.Transcode(spa.sym.BLUE, output_vocab=vocab) >> color
spa.Transcode('CIRCLE', output_vocab=vocab) >> shape
spa.Transcode(lambda t: spa.sym.CIRCLE if t<=5 else spa.sym.BLUE, output_vocab=vocab) >> query
p_answer = nengo.Probe(answer.output)
sim = nengo.Simulator(model, dt=1, optimize=False)
sim.run(10)
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
plt.figure(figsize=(6,1.5), dpi=200)
plt.plot(sim.trange(), np.dot(sim.data[p_answer], vocab.vectors.T))
plt.legend(vocab.keys())
plt.xlabel('time')
plt.ylabel('similarity')
plt.show()
import nengo_spa as spa
import nengo
model = spa.Network()
with model:
model.config[nengo.Ensemble].neuron_type = nengo.Direct()
model.config[nengo.Connection].synapse = None
vocab = spa.Vocabulary(64, strict=False)
color = spa.State(vocab)
shape = spa.State(vocab)
memory = spa.State(vocab)
query = spa.State(vocab)
answer = spa.State(vocab)
color * shape >> memory
memory * ~query >> answer
# make sure the connection from memory back to itself has a one-time-step delay
model.config[nengo.Connection].synapse = 0
memory >> memory
model.config[nengo.Connection].synapse = None
# present two different inputs on the first two timesteps
spa.Transcode(lambda t: 'BLUE' if t<=1 else ('RED' if t<=2 else '0'), output_vocab=vocab) >> color
spa.Transcode(lambda t: 'CIRCLE' if t<=1 else ('SQUARE' if t<=2 else '0'), output_vocab=vocab) >> shape
# present 4 queries, starting at t=5
def query_func(t):
if 5<t<=6: return 'BLUE'
if 6<t<=7: return 'CIRCLE'
if 7<t<=8: return 'RED'
if 8<t<=9: return 'SQUARE'
return '0'
spa.Transcode(query_func, output_vocab=vocab) >> query
p_answer = nengo.Probe(answer.output)
p_query = nengo.Probe(query.output)
sim = nengo.Simulator(model, dt=1, optimize=False)
sim.run(10)
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
plt.figure(figsize=(6,3), dpi=200)
plt.subplot(2, 1, 1)
plt.plot(sim.trange(), np.dot(sim.data[p_query], vocab.vectors.T))
plt.legend(vocab.keys(), loc='upper left')
plt.ylabel('query')
plt.subplot(2, 1, 2)
plt.plot(sim.trange(), np.dot(sim.data[p_answer], vocab.vectors.T))
plt.legend(vocab.keys(), loc='upper left')
plt.xlabel('time')
plt.ylabel('answer')
plt.show()
import nengo_spa as spa
import nengo
# define the parameters to use in different contexts
no_delay = nengo.Config(nengo.Connection)
no_delay[nengo.Connection].synapse = None
single_step_delay = nengo.Config(nengo.Connection)
single_step_delay[nengo.Connection].synapse = 0
model = spa.Network()
with model:
model.config[nengo.Ensemble].neuron_type = nengo.Direct()
vocab = spa.Vocabulary(64, strict=False)
color = spa.State(vocab)
shape = spa.State(vocab)
memory = spa.State(vocab)
query = spa.State(vocab)
answer = spa.State(vocab)
with no_delay:
color * shape >> memory
memory * ~query >> answer
with single_step_delay:
memory >> memory
with no_delay:
spa.Transcode(lambda t: 'BLUE' if t<=1 else ('RED' if t<=2 else '0'), output_vocab=vocab) >> color
spa.Transcode(lambda t: 'CIRCLE' if t<=1 else ('SQUARE' if t<=2 else '0'), output_vocab=vocab) >> shape
def query_func(t):
if 5<t<=6: return 'BLUE'
if 6<t<=7: return 'CIRCLE'
if 7<t<=8: return 'RED'
if 8<t<=9: return 'SQUARE'
return '0'
with no_delay:
spa.Transcode(query_func, output_vocab=vocab) >> query
p_answer = nengo.Probe(answer.output)
p_query = nengo.Probe(query.output)
sim = nengo.Simulator(model, dt=1, optimize=False)
sim.run(10)
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
plt.figure(figsize=(6,3), dpi=200)
plt.subplot(2, 1, 1)
plt.plot(sim.trange(), np.dot(sim.data[p_query], vocab.vectors.T))
plt.legend(vocab.keys(), loc='upper left')
plt.ylabel('query')
plt.subplot(2, 1, 2)
plt.plot(sim.trange(), np.dot(sim.data[p_answer], vocab.vectors.T))
plt.legend(vocab.keys(), loc='upper left')
plt.xlabel('time')
plt.ylabel('answer')
plt.show()
import nengo_spa as spa
import nengo
model = spa.Network()
with model:
model.config[nengo.Ensemble].neuron_type = nengo.Direct()
model.config[nengo.Connection].synapse = None
vocab1 = spa.Vocabulary(64)
vocab1.populate('DOG;CAT;HAT;CAR') # pre-populate the vocabulary so that we have generated vectors
vocab2 = spa.Vocabulary(512)
vocab2.populate('DOG;CAT;MOUSE;ELEPHANT') # pre-populate the vocabulary so that we have generated vectors
a = spa.State(vocab1) # use different vocabularies for the two components
b = spa.State(vocab2)
def input_func(t):
if 1<t<=2: return '0.7*DOG'
if 2<t<=3: return 'CAT'
if 3<t<=4: return 'HAT'
if 4<t<=5: return 'CAR'
if 5<t<=6: return '0.7*DOG+0.7*CAT'
return '0'
spa.Transcode(input_func, output_vocab=vocab1) >> a
# translate from one to the other
a.translate(vocab2) >> b
p_in = nengo.Probe(a.output)
p_out = nengo.Probe(b.output)
sim = nengo.Simulator(model, dt=1, optimize=False)
sim.run(10)
plt.figure(figsize=(6,3), dpi=200)
plt.subplot(2, 1, 1)
plt.plot(sim.trange(), np.dot(sim.data[p_in], vocab1.vectors.T))
plt.legend(vocab1.keys(), loc='upper right')
plt.ylabel('a')
plt.subplot(2, 1, 2)
plt.plot(sim.trange(), np.dot(sim.data[p_out], vocab2.vectors.T))
plt.legend(vocab2.keys(), loc='upper right')
plt.xlabel('time')
plt.ylabel('b')
plt.show()
import nengo_spa as spa
import nengo
model = spa.Network()
with model:
model.config[nengo.Ensemble].neuron_type = nengo.Direct()
model.config[nengo.Connection].synapse = None
vocab1 = spa.Vocabulary(64)
vocab1.populate('DOG;CAT;HAT;CAR') # pre-populate the vocabulary so that we have generated vectors
vocab2 = spa.Vocabulary(512)
vocab2.populate('DOG;CAT;MOUSE;ELEPHANT') # pre-populate the vocabulary so that we have generated vectors
a = spa.State(vocab1) # use different vocabularies for the two components
b = spa.State(vocab2)
#
am = spa.ThresholdingAssocMem(input_vocab=vocab1, output_vocab=vocab2, threshold=0.3, mapping=['DOG','CAT'], function=lambda x: x > 0.3)
def input_func(t):
if 1<t<=2: return '0.7*DOG'
if 2<t<=3: return 'CAT'
if 3<t<=4: return 'HAT'
if 4<t<=5: return 'CAR'
if 5<t<=6: return '0.7*DOG+0.7*CAT'
return '0'
spa.Transcode(input_func, output_vocab=vocab1) >> a
# translate from one to the other
a >> am
am >> b
p_in = nengo.Probe(a.output)
p_out = nengo.Probe(b.output)
sim = nengo.Simulator(model, dt=1, optimize=False)
sim.run(10)
plt.figure(figsize=(6,3), dpi=200)
plt.subplot(2, 1, 1)
plt.plot(sim.trange(), np.dot(sim.data[p_in], vocab1.vectors.T))
plt.legend(vocab1.keys(), loc='upper right')
plt.ylabel('a')
plt.subplot(2, 1, 2)
plt.plot(sim.trange(), np.dot(sim.data[p_out], vocab2.vectors.T))
plt.legend(vocab2.keys(), loc='upper right')
plt.xlabel('time')
plt.ylabel('b')
plt.show()
# Switch statements and/or ifmax
match state:
case 'DOG':
'PET' >> action
case 'CAT':
'FEED' >> action
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basics
Step2: You can assign several variables at once
Step3: There is no "begin-end"! You use indentation to specify blocks. Here is simple IF statement
Step4: Types
Step5: Loops
Step6: while
Step7: Enumerate
Step8: Python code style
Step10: String Quotes
Step11: Some tricks
Step12: However, there is no built-in function for multiplication
Step13: , so we have to write our own solution. Let's start with a straightforward one
Step14: There is another way to implement it. It is to write it in a functional-programming style
Step16: Python is really good for fast prototyping
Step18: Faster solution for the problem
Step19: Even faster solution
Step20: Very important references
| <ASSISTANT_TASK:>
Python Code:
# you can mix text and code in one place and
# run code from a Web browser
a = 10
a
a, b = 1, 2
a, b
b, a = a, b
a, b
if a > b:
print("A is greater than B")
else:
print("B is greater than A")
# Integer
a = 1
print(a)
# Float
b = 1.0
print(b)
# String
c = "Hello world"
print(c)
# Unicode
d = u"Привет, мир!"
print(d)
# List (array)
e = [1, 2, 3]
print(e[2]) # 3
# Tuple (constant array)
f = (1, 2, 3)
print(f[0]) # 1
# Set
g = {1, 1, 1, 2}
print(g)
# Dictionary (hash table, hash map)
g = {1: 'One', 2: 'Two', 3: 'Three'}
print(g[1]) # 'One'
for i in range(10):
print(i)
i = 0
while i < 10:
print(i)
i += 1
items = ['apple', 'banana', 'stawberry', 'watermelon']
for item in items:
print(item)
for i, item in enumerate(items):
print(i, item)
# Variable name
my_variable = 1
# Class method and function names
def my_function():
pass
# Constants
MY_CONSTANT = 1
# Class name
class MyClass(object):
# 'private' variable - use underscore before a name
_my_variable = 1
# 'protected' variable - use two underscores before a name
__my_variable = 1
# magic methods
def __init__(self):
self._another_my_variable = 1
'string'
"another string"
"""
Multiline
string
"""
'''
Another
multiline
string
'''
sum([1,2,3,4,5])
mult([1,2,3,4,5])
def mult(array):
result = 1
for item in array:
result *= item
return result
mult([1,2,3,4,5])
from functools import reduce
def mult_functional(array):
return reduce(lambda prev_result, current_item: prev_result * current_item, array, 1)
mult_functional([1,2,3,4,5])
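# Note (hedged addition): on Python 3.8+ the standard library already provides this directly:
from math import prod
prod([1, 2, 3, 4, 5])  # 120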
%timeit mult(range(1, 1000))
%timeit mult_functional(range(1, 1000))
def get_smallest_divisor_in_range(N):
    """Brute force all numbers from 2 to 1*2*3*4*5*6*...*N
    until we get a number, which divides by all numbers
    in a range of [1, N].
    Example:
    >>> get_smallest_divisor_in_range(10)
    2520
    """
range_multiplied = mult(range(2, N+1))
for x in range(2, range_multiplied, 2):
for divisor in range(3, N+1):
if x % divisor:
break
else:
break
return x
%time print(get_smallest_divisor_in_range(10))
%time print(get_smallest_divisor_in_range(20))
def get_smallest_divisor_in_range_fast(N):
    """Optimal solution for the problem is to compute the
    LCM (least common multiple) of all numbers in
    a range of [1, N].
    """
prime_divisors = {}
# Loop from 2 to N.
for x in range(2, N+1):
# Find and save all prime divisors of `x`.
for divisor in range(2, int(x**0.5) + 1):
power = 0
# Find the power of the `divisor` in `x`.
while x % divisor == 0:
x /= divisor
power += 1
if power > 0:
# Save the `divisor` with the greatest power into our `prime_divisors` dict (hash-map).
if divisor in prime_divisors:
if prime_divisors[divisor] < power:
prime_divisors[divisor] = power
else:
prime_divisors[divisor] = power
# Stop searching more divisors if `x` is already equals to `1` (all divisors are already found).
if x == 1:
break
else:
# If `x` is prime, we won't find any divisors and
# the above `for` loop will be over without hitting `break`,
# thus we just need to save `x` as prime_divisor in power of 1.
prime_divisors[x] = 1
# Having all prime divisors in their lowest powers we multiply all of them to get the answer.
least_common_multiple = 1
for divisor, power in prime_divisors.items():
least_common_multiple *= divisor ** power
return least_common_multiple
%time print(get_smallest_divisor_in_range_fast(10))
%time print(get_smallest_divisor_in_range_fast(20))
%time print(get_smallest_divisor_in_range_fast(10000))
try:
from math import gcd
except ImportError: # Python 2.x has `gcd` in `fractions` module instead of `math`
from fractions import gcd
def get_smallest_divisor_in_range_fastest(N):
least_common_multiple = 1
for x in range(2, N + 1):
least_common_multiple = (least_common_multiple * x) // gcd(least_common_multiple, x)
return least_common_multiple
%time print(get_smallest_divisor_in_range_fastest(10))
%time print(get_smallest_divisor_in_range_fastest(20))
%time print(get_smallest_divisor_in_range_fastest(10000))
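# On Python 3.9+ (assumed) the same result is available directly from the standard library:
from math import lcm
print(lcm(*range(1, 21)))  # 232792560, same as get_smallest_divisor_in_range_fastest(20)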
import math
math?
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Exercise
Step2: Recursively computing values of a polynomial using difference equations
Step3: Second order polynomial
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import scipy.signal as signal
import matplotlib.pyplot as plt
%matplotlib inline
# Define the continuous-time linear time invariant system F
a = 2
b = 1
num = [1, b]
den = [1, a]
F = signal.lti(num, den)
# Plot a step response
(t, y) = signal.step(F)
plt.figure(figsize=(14,6))
plt.plot(t, y, linewidth=2)
# Solve the difference equation y_k = c y_{k-1} + d_0 u_k + d_1 u_{k-1}
h = 0.1 # The sampling time
c = 1.0/(1 + a*h)
d0 = (1 + b*h) / (1 + a*h)
d1 = -c
td = np.arange(35)* h #The sampling time instants
ud = np.ones(35) # The input signal is a step, limited in time to 3.5 seconds
yd = np.zeros(35) # A vector to hold the discrete output signal
yd[0] = c*0 + d0*ud[0] - d1*0 # The first sample of the output signal
for k in range(1,35): # And then the rest
yd[k] = c*yd[k-1] + d0*ud[k] + d1*ud[k-1]
plt.plot(td, yd, 'o', markersize=8)
plt.xlim([0, 3.5])
plt.ylim([0, 1])
plt.legend(('Continuous-time system', 'Discrete approximation'))
## Your python code goes here
def p1(x): return 4*x + 2 # Our first-order polynomial
# Compute values for x=[0,0.2, 0.4, ... 2] recursively using the difference equation
h = 0.2
x = h*np.arange(11) # Gives the array [0,0.2, 0.4, ... 2]
pd = np.zeros(11)
d1 = 4*h
# Need to compute the first value as the initial value for the difference equation,
pd[0] = p1(x[0])
for k in range(1,11): # Solve difference equation for all remaining samples
pd[k] = pd[k-1] + d1
plt.figure(figsize=(14,6))
plt.plot(x, p1(x), linewidth=2)
plt.plot(x, pd, 'ro')
a0 = 2
a1 = -3
a2 = 2
def p2(x): return a0*x**2 + a1*x + a2 # Our second-order polynomial
# Compute values for x=[0,0.2, 0.4, ... 8] recursively using the difference equation
h = 0.2
x = h*np.arange(41) # Gives the array [0,0.2, 0.4, ... 2]
d1 = np.zeros(41) # The first differences
pd = np.zeros(41)
d2 = h**2*2*a0 # The constant, second difference
# Need to compute the first two values to get the initial values for the difference equation,
pd[0] = p2(x[0])
pd[1] = p2(x[1])
for k in range(2,41): # Solve difference equation
pd[k] = 2*pd[k-1] - pd[k-2] + d2
plt.figure(figsize=(14,6))
plt.plot(x, p2(x), linewidth=2) # Evaluating the polynomial
plt.plot(x, pd, 'ro') # The solution using the difference equation
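# Quick numerical check (hedged addition): for a quadratic, the second difference
# p(x+h) - 2*p(x) + p(x-h) equals the constant 2*a0*h**2 used as d2 above.
p_vals = p2(x)
second_diff = p_vals[2:] - 2*p_vals[1:-1] + p_vals[:-2]
print(np.allclose(second_diff, 2*a0*h**2))  # True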
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The Theory (section 6.3.3 - 6.3.5 of the syllabus)
Step2: Convenience function for setting up graphs
Step3: Draw the Theis type curve
Step4: If you don't specify the names of the parameters of the function, then you must specify the parameters in the order that they are defined in the function and can't leave any out. This is true until one parameter is
Step5: Finally, let's use the xlim and ylim. I'll specify the limits of the x-axis to be 1e-1 and 1e2 and of the y-axis to be 0 and 4. And to invert the y-axis I'll specify the limits of the y-axis in reversed order ylim=(4, 0) instead of ylim=(0, 4)
Step6: This is already quite a lot about generating a plot. With the simple convenience function newfig(), we can generate may forms of graphics, without much typing. newfig() is, therefore, really convenient. The cluttering that is prevents, also makes the code much easier to read.
Step7: Simulate a pumping test (Theis situation) using the double log and the half-log methods discussed in class
Step8: Explanation, what did we do?
Step9: In the loop we use this line which stores the piezometer information
Step10: Show the "measurements" versus time on double log scales
Step11: Show the measurements on double log scales versus $t/r^2$ instead of versus $t$
Step12: Clearly, all measurements now fall on the same overall curve.
Step13: Result after some trial and error
Step14: With the factor V and H now obtained, find $kD$ and $S$
Step15: In conclusion, the results are quite good, even though the interpreted kD (725 m2/d) deviates from the original kD (600 m2/d). It can only mean that the results are not too sensitive to the precise value of kD.
Step16: Interpret using the straight line on half-log paper
Step17: This looks good. Therefore we have
| <ASSISTANT_TASK:>
Python Code:
from scipy.special import exp1 as W
import numpy as np
import matplotlib.pyplot as plt
from scipy.special import exp1 as W
def newfig(title='forgot title?', xlabel='forgot the x-label?', ylabel='forgot the y-label?',
xlim=None, ylim=None, xscale='linear', yscale='linear', size_inches=(14, 8)):
'''Return an axes set up to your liking, to plot lines on
parameters
----------
title: string
the title above your figure
xlabel: string
the title of your horizontal axis
ylabel: string
the title of your vertical axis
xlim: a tuple with two values
the desired limits of your horizontal axis
ylim: a tuple with two values
the desired limits of your vertical axis
xscale: string
'log' sets the horizontal scale to logarithmic
yscale: string
'log' sets the vertical scale to logarithmic
size_inches : tuple (w, h)
telling the desired width w and height h of the figure
'''
plt.grid()
plt.xscale(xscale)
plt.yscale(yscale)
plt.title(title)
plt.xlabel(xlabel)
plt.ylabel(ylabel)
if xlim is not None: plt.xlim(xlim)
if ylim is not None: plt.ylim(ylim)
plt.gcf().set_size_inches(size_inches)
    return plt.gca() # you don't have to use this, but if you want to return something you can return the current axis
# This line shows what the axis will lokk like when called without any parameters (it will use the defaults)
newfig() # try pressing shift-tab when the cursor is within the ()
# Generate u-values. Logspace is used, because u will span multiple log-cycles. Hence we require here
# 51 points between and including the first and the last value, which are 1e-4 (=0.0001) and 1e1 (=10)
u = np.logspace(-4, 1, 51)
# Then create the setup for the figure, using the convenience function we created above
# Because all parameters of the function have names and default values, we just specify
# the parameters that we need and leave out the ones for which we want to use the default values.
newfig(title='Theis type curve', xlabel='1/u', ylabel='W(u)', xscale='log', yscale='log')
# Then plot
plt.plot(1/u, W(u), label='Theis type curve')
plt.legend()
u = np.logspace(-4, 1, 51)
newfig('Theis type curve', '1/u', 'W(u)', xscale='log')
plt.plot(1/u, W(u), label='Theis type curve')
plt.legend()
u = np.logspace(-4, 1, 51)
newfig('Theis type curve', '1/u', 'W(u)', xscale='log', xlim=(1e-1, 1e2), ylim=(4, 0))
plt.plot(1/u, W(u), label='Theis type curve')
plt.legend()
help(newfig)
kD = 600 # m2/d
S = 0.001 # storage coefficient
Q = 1200 # m3/d, well extraction rate
# Piezometer specific data
names = ['pz1', 'pz2', 'pz3', 'pz4']
radii = [10, 30, 100, 250] # their distance from the pumping well
times = np.logspace(-3, 1, 41) # d, measurement times
markers = ['o', 'x', '+', 's'] # markers to use when plotting the measuring points
# Put these data into a dictionary, which gives us a handy database.
piez = dict() # generate an empty dictionary with the name 'piez'
# Fill the dict with data
for name, r, m in zip(names, radii, markers):
piez[name] = {'m':m, # each piezometer gets its own marker
'r': r, # its own distance to the pumping well
't': times, # its measurement times (may differ for each well)
's': Q/(4 * np.pi * kD) * W(r ** 2 * S / (4 * kD * times)) # and the (generated drawdown)
}
piez['pz3'] # show all the data pertaining to piezometer with name 'pz3'
piez['pz3']['r'] # show the radius of piezometer with name 'pz3'
newfig('"Measured" drawdowns', 'time [d]', 's [m]', xscale='log', yscale='log')
for name in piez:
pz = piez[name]
plt.plot(pz['t'], pz['s'], pz['m'], label=name) # '.' plots only the meaurements, not the line that connects them
plt.legend()
newfig('"Measured" drawdowns versus $t/r^2$', '$t/r^2$ [d/m2]', 's [m]', xscale='log', yscale='log')
for name in piez:
pz = piez[name]
plt.plot(pz['t']/pz['r'] ** 2, pz['s'], pz['m'], label=name) # '.' plots only the meaurements, not the line that connects them
plt.legend()
V = 100. # vertical factor
H = 100. # horizonta factor
u = np.logspace(-4, 1, 51)
newfig(f'Theis type curve, fitting with data using V={V}, H={H} ', '1/u and H times $t/r^2$', 'W(u) and V times $s$', xscale='log', yscale='log')
plt.plot(1/u, W(u), label='Theis type curve')
for nm in piez:
pz = piez[nm]
plt.plot(H * pz['t']/pz['r'] ** 2, V * pz['s'], pz['m'], label=nm)
plt.legend()
plt.show()
V = 0.76e1 # vertical factor
H = 2.4e6 # horizonta factor
u = np.logspace(-4, 1, 51)
newfig(f'Theis type curve, fitting with data using V={V}, H={H} ', '1/u and H times $t/r^2$', 'W(u) and V times $s$', xscale='log', yscale='log')
plt.plot(1/u, W(u), label='Theis type curve')
for nm in piez:
pz = piez[nm]
plt.plot(H * pz['t']/pz['r'] ** 2, V * pz['s'], pz['m'], label=nm)
plt.legend()
plt.show()
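# Hedged conversion of the fitted factors V and H into aquifer properties, using the
# matching relations V*s = W(u) and H*(t/r^2) = 1/u = 4*kD*t/(r^2*S):
kD_fit = Q * V / (4 * np.pi)   # ~725 m2/d for V = 7.6
S_fit = 4 * kD_fit / H         # ~0.0012 for H = 2.4e6
print(f'kD = {kD_fit:.0f} m2/d, S = {S_fit:.1e}')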
kD = 725 # m2/d interpreted
S = 0.0012 # interpreted
newfig(f'Compare "measured" with computed data using kD={kD} m2/d, and S={S} ', 't [d]', 's [m]', xscale='log', yscale='log')
for nm in piez:
pz = piez[nm]
L = plt.plot(pz['t'], pz['s'], pz['m'], label=f'{nm}, "measured"')
clr = L[0].get_color() # get the color from the previous line, to use it in the next line
plt.plot(pz['t'], Q /(4 * np.pi * kD) * W(pz['r'] ** 2 * S / (4 * kD * pz['t'])), '-', color=clr, label=f'{nm}, computed')
plt.legend()
plt.show()
kD = 725 # m2/d interpreted
S = 0.0012 # interpreted
newfig(f'Compare "measured" with computed data using kD={kD} m2/d, and S={S} ', 't [d]', 's [m]', xscale='log')
for nm in piez:
pz = piez[nm]
L = plt.plot(pz['t'], pz['s'], pz['m'], label=f'{nm}, "measured"')
clr = L[0].get_color() # get the color from the previous line, to use it in the next line
plt.plot(pz['t'], Q /(4 * np.pi * kD) * W(pz['r'] ** 2 * S / (4 * kD * pz['t'])), '-', color=clr, label=f'{nm}, computed')
plt.legend()
plt.show()
kD = 725 # m2/d interpreted
S = 0.0012 # interpreted
# Parameters of the fitted straight line, determined by trial and error
# (they are needed before the call to newfig so they can appear in the figure title)
A = 0.1625
B = 1250
newfig(f'"measured" data with fitted straight line using A={A}, and B={B} ', 't [d]', 's [m]', xscale='log')
for nm in piez:
    pz = piez[nm]
    L = plt.plot(pz['t'], pz['s'], pz['m'], label=f'{nm}, "measured"')
# Straight line
plt.plot(piez['pz2']['t'], A * np.log(B * piez['pz2']['t'] ), label='straight line')
plt.legend()
plt.show()
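# Hedged check of the straight-line (half-log) interpretation: writing the late-time
# drawdown as s = A*ln(B*t) = Q/(4*pi*kD) * ln(2.25*kD*t/(r^2*S)) gives, for pz2 (r = 30 m):
kD_line = Q / (4 * np.pi * A)                            # ~590 m2/d for A = 0.1625
S_line = 2.25 * kD_line / (B * piez['pz2']['r'] ** 2)    # ~0.0012 for B = 1250
print(f'kD = {kD_line:.0f} m2/d, S = {S_line:.1e}')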
kD = 590 # m2/d interpreted
S = 0.0012 # interpreted
newfig(f'Compare "measured" with computed data using kD={kD} m2/d, and S={S} ', 't [d]', 's [m]', xscale='log')
for nm in piez:
pz = piez[nm]
L = plt.plot(pz['t'], pz['s'], pz['m'], label=f'{nm}, "measured"')
clr = L[0].get_color() # get the color from the previous line, to use it in the next line
plt.plot(pz['t'], Q /(4 * np.pi * kD) * W(pz['r'] ** 2 * S / (4 * kD * pz['t'])), '-', color=clr, label=f'{nm}, computed')
plt.legend()
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install the required libraries
Step2: Wait for the message Configure docker credentials before moving on to the next cell.
Step3: Configure a Docker registry for Kubeflow Fairing
Step4: Use Kubeflow Fairing to build the Docker image
Step5: Run the next cell and wait until you see a message like Built image gcr.io/<your-project>/fairing-job/mnist
Step6: Create a Cloud Storage bucket
Step8: Distributed training
Step9: Create the training job
Step10: Check the job using kubectl
Step12: Get the training logs
Step16: Deploy TensorBoard
Step17: Set a variable defining your endpoint
Step18: Access the TensorBoard UI
Step19: Wait for the training job to finish
Step23: Serve the model
Step27: Deploy the UI for the MNIST web app
Step28: Access the MNIST web UI
| <ASSISTANT_TASK:>
Python Code:
import logging
import os
import uuid
from importlib import reload
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
import notebook_setup
reload(notebook_setup)
notebook_setup.notebook_setup()
import k8s_util
# Force a reload of Kubeflow. Since Kubeflow is a multi namespace module,
# doing the reload in notebook_setup may not be sufficient.
import kubeflow
reload(kubeflow)
from kubernetes import client as k8s_client
from kubernetes import config as k8s_config
from kubeflow.tfjob.api import tf_job_client as tf_job_client_module
from IPython.core.display import display, HTML
import yaml
from kubernetes import client as k8s_client
from kubernetes.client import rest as k8s_rest
from kubeflow import fairing
from kubeflow.fairing import utils as fairing_utils
from kubeflow.fairing.builders import append
from kubeflow.fairing.deployers import job
from kubeflow.fairing.preprocessors import base as base_preprocessor
# Setting up Google Container Registry (GCR) for storing output containers.
# You can use any Docker container registry instead of GCR.
GCP_PROJECT = fairing.cloud.gcp.guess_project_name()
DOCKER_REGISTRY = 'gcr.io/{}/fairing-job'.format(GCP_PROJECT)
namespace = fairing_utils.get_current_k8s_namespace()
logging.info(f"Running in project {GCP_PROJECT}")
logging.info(f"Running in namespace {namespace}")
logging.info(f"Using Docker registry {DOCKER_REGISTRY}")
# TODO(https://github.com/kubeflow/fairing/issues/426): We should get rid of this once the default
# Kaniko image is updated to a newer image than 0.7.0.
from kubeflow.fairing import constants
constants.constants.KANIKO_IMAGE = "gcr.io/kaniko-project/executor:v0.14.0"
from kubeflow.fairing.builders import cluster
# output_map is a map of extra files to add to the notebook.
# It is a map from source location to the location inside the context.
output_map = {
"Dockerfile.model": "Dockerfile",
"model.py": "model.py"
}
preprocessor = base_preprocessor.BasePreProcessor(
command=["python"], # The base class will set this.
input_files=[],
path_prefix="/app", # irrelevant since we aren't preprocessing any files
output_map=output_map)
preprocessor.preprocess()
# Use a Tensorflow image as the base image
# We use a custom Dockerfile
cluster_builder = cluster.cluster.ClusterBuilder(registry=DOCKER_REGISTRY,
base_image="", # base_image is set in the Dockerfile
preprocessor=preprocessor,
image_name="mnist",
dockerfile_path="Dockerfile",
pod_spec_mutators=[fairing.cloud.gcp.add_gcp_credentials_if_exists],
context_source=cluster.gcs_context.GCSContextSource())
cluster_builder.build()
logging.info(f"Built image {cluster_builder.image_tag}")
from google.cloud import storage
bucket = f"{GCP_PROJECT}-mnist"
client = storage.Client()
b = storage.Bucket(client=client, name=bucket)
if not b.exists():
logging.info(f"Creating bucket {bucket}")
b.create()
else:
logging.info(f"Bucket {bucket} already exists")
train_name = f"mnist-train-{uuid.uuid4().hex[:4]}"
num_ps = 1
num_workers = 2
model_dir = f"gs://{bucket}/mnist"
export_path = f"gs://{bucket}/mnist/export"
train_steps = 200
batch_size = 100
learning_rate = .01
image = cluster_builder.image_tag
train_spec = f"""apiVersion: kubeflow.org/v1
kind: TFJob
metadata:
name: {train_name}
spec:
tfReplicaSpecs:
Ps:
replicas: {num_ps}
template:
metadata:
annotations:
sidecar.istio.io/inject: "false"
spec:
serviceAccount: default-editor
containers:
- name: tensorflow
command:
- python
- /opt/model.py
- --tf-model-dir={model_dir}
- --tf-export-dir={export_path}
- --tf-train-steps={train_steps}
- --tf-batch-size={batch_size}
- --tf-learning-rate={learning_rate}
image: {image}
workingDir: /opt
restartPolicy: OnFailure
Chief:
replicas: 1
template:
metadata:
annotations:
sidecar.istio.io/inject: "false"
spec:
serviceAccount: default-editor
containers:
- name: tensorflow
command:
- python
- /opt/model.py
- --tf-model-dir={model_dir}
- --tf-export-dir={export_path}
- --tf-train-steps={train_steps}
- --tf-batch-size={batch_size}
- --tf-learning-rate={learning_rate}
image: {image}
workingDir: /opt
restartPolicy: OnFailure
Worker:
replicas: 1
template:
metadata:
annotations:
sidecar.istio.io/inject: "false"
spec:
serviceAccount: default-editor
containers:
- name: tensorflow
command:
- python
- /opt/model.py
- --tf-model-dir={model_dir}
- --tf-export-dir={export_path}
- --tf-train-steps={train_steps}
- --tf-batch-size={batch_size}
- --tf-learning-rate={learning_rate}
image: {image}
workingDir: /opt
restartPolicy: OnFailure
"""
tf_job_client = tf_job_client_module.TFJobClient()
tf_job_body = yaml.safe_load(train_spec)
tf_job = tf_job_client.create(tf_job_body, namespace=namespace)
logging.info(f"Created job {namespace}.{train_name}")
!kubectl get tfjobs -o yaml {train_name}
from urllib.parse import urlencode
for replica in ["chief", "worker", "ps"]:
    logs_filter = f"""resource.type="k8s_container"
labels."k8s-pod/tf-job-name" = "{train_name}"
labels."k8s-pod/tf-replica-type" = "{replica}"
resource.labels.container_name="tensorflow"
"""
    new_params = {'project': GCP_PROJECT,
# Logs for last 7 days
'interval': 'P7D',
'advancedFilter': logs_filter}
query = urlencode(new_params)
url = "https://console.cloud.google.com/logs/viewer?" + query
display(HTML(f"Link to: <a href='{url}'>{replica} logs</a>"))
tb_name = "mnist-tensorboard"
tb_deploy = f"""apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: mnist-tensorboard
name: {tb_name}
namespace: {namespace}
spec:
selector:
matchLabels:
app: mnist-tensorboard
template:
metadata:
labels:
app: mnist-tensorboard
version: v1
spec:
serviceAccount: default-editor
containers:
- command:
- /usr/local/bin/tensorboard
- --logdir={model_dir}
- --port=80
image: tensorflow/tensorflow:1.15.2-py3
name: tensorboard
ports:
- containerPort: 80
"""
tb_service = f"""apiVersion: v1
kind: Service
metadata:
labels:
app: mnist-tensorboard
name: {tb_name}
namespace: {namespace}
spec:
ports:
- name: http-tb
port: 80
targetPort: 80
selector:
app: mnist-tensorboard
type: ClusterIP
"""
tb_virtual_service = f"""apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: {tb_name}
namespace: {namespace}
spec:
gateways:
- kubeflow/kubeflow-gateway
hosts:
- '*'
http:
- match:
- uri:
prefix: /mnist/{namespace}/tensorboard/
rewrite:
uri: /
route:
- destination:
host: {tb_name}.{namespace}.svc.cluster.local
port:
number: 80
timeout: 300s
"""
tb_specs = [tb_deploy, tb_service, tb_virtual_service]
k8s_util.apply_k8s_specs(tb_specs, k8s_util.K8S_CREATE_OR_REPLACE)
endpoint = None
if endpoint:
logging.info(f"endpoint set to {endpoint}")
else:
logging.info("Warning: You must set {endpoint} in order to print out the URLs where you can access your web apps.")
if endpoint:
vs = yaml.safe_load(tb_virtual_service)
path= vs["spec"]["http"][0]["match"][0]["uri"]["prefix"]
tb_endpoint = endpoint + path
display(HTML(f"TensorBoard UI is at <a href='{tb_endpoint}'>{tb_endpoint}</a>"))
tf_job = tf_job_client.wait_for_condition(train_name, expected_condition=["Succeeded", "Failed"], namespace=namespace)
if tf_job_client.is_job_succeeded(train_name, namespace):
logging.info(f"TFJob {namespace}.{train_name} succeeded")
else:
raise ValueError(f"TFJob {namespace}.{train_name} failed")
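# Optional sanity check (hedged addition, assuming gsutil is available in this notebook
# environment): list what the training job exported to Cloud Storage.
!gsutil ls -r {export_path}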
deploy_name = "mnist-model"
model_base_path = export_path
# The web UI defaults to mnist-service so if you change the name, you must
# change it in the UI as well.
model_service = "mnist-service"
deploy_spec = f"""apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: mnist
name: {deploy_name}
namespace: {namespace}
spec:
selector:
matchLabels:
app: mnist-model
template:
metadata:
# TODO(jlewi): Right now we disable the istio side car because otherwise ISTIO rbac will prevent the
# UI from sending RPCs to the server. We should create an appropriate ISTIO rbac authorization
# policy to allow traffic from the UI to the model servier.
# https://istio.io/docs/concepts/security/#target-selectors
annotations:
sidecar.istio.io/inject: "false"
labels:
app: mnist-model
version: v1
spec:
serviceAccount: default-editor
containers:
- args:
- --port=9000
- --rest_api_port=8500
- --model_name=mnist
- --model_base_path={model_base_path}
- --monitoring_config_file=/var/config/monitoring_config.txt
command:
- /usr/bin/tensorflow_model_server
env:
- name: modelBasePath
value: {model_base_path}
image: tensorflow/serving:1.15.0
imagePullPolicy: IfNotPresent
livenessProbe:
initialDelaySeconds: 30
periodSeconds: 30
tcpSocket:
port: 9000
name: mnist
ports:
- containerPort: 9000
- containerPort: 8500
resources:
limits:
cpu: "4"
memory: 4Gi
requests:
cpu: "1"
memory: 1Gi
volumeMounts:
- mountPath: /var/config/
name: model-config
volumes:
- configMap:
name: {deploy_name}
name: model-config
"""
service_spec = f"""apiVersion: v1
kind: Service
metadata:
annotations:
prometheus.io/path: /monitoring/prometheus/metrics
prometheus.io/port: "8500"
prometheus.io/scrape: "true"
labels:
app: mnist-model
name: {model_service}
namespace: {namespace}
spec:
ports:
- name: grpc-tf-serving
port: 9000
targetPort: 9000
- name: http-tf-serving
port: 8500
targetPort: 8500
selector:
app: mnist-model
type: ClusterIP
"""
monitoring_config = f"""kind: ConfigMap
apiVersion: v1
metadata:
name: {deploy_name}
namespace: {namespace}
data:
monitoring_config.txt: |-
prometheus_config: {{
enable: true,
path: "/monitoring/prometheus/metrics"
}}
"""
model_specs = [deploy_spec, service_spec, monitoring_config]
k8s_util.apply_k8s_specs(model_specs, k8s_util.K8S_CREATE_OR_REPLACE)
ui_name = "mnist-ui"
ui_deploy = f"""apiVersion: apps/v1
kind: Deployment
metadata:
name: {ui_name}
namespace: {namespace}
spec:
replicas: 1
selector:
matchLabels:
app: mnist-web-ui
template:
metadata:
labels:
app: mnist-web-ui
spec:
containers:
- image: gcr.io/kubeflow-examples/mnist/web-ui:v20190112-v0.2-142-g3b38225
name: web-ui
ports:
- containerPort: 5000
serviceAccount: default-editor
"""
ui_service = f"""apiVersion: v1
kind: Service
metadata:
annotations:
name: {ui_name}
namespace: {namespace}
spec:
ports:
- name: http-mnist-ui
port: 80
targetPort: 5000
selector:
app: mnist-web-ui
type: ClusterIP
"""
ui_virtual_service = f"""apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: {ui_name}
namespace: {namespace}
spec:
gateways:
- kubeflow/kubeflow-gateway
hosts:
- '*'
http:
- match:
- uri:
prefix: /mnist/{namespace}/ui/
rewrite:
uri: /
route:
- destination:
host: {ui_name}.{namespace}.svc.cluster.local
port:
number: 80
timeout: 300s
"""
ui_specs = [ui_deploy, ui_service, ui_virtual_service]
k8s_util.apply_k8s_specs(ui_specs, k8s_util.K8S_CREATE_OR_REPLACE)
if endpoint:
vs = yaml.safe_load(ui_virtual_service)
path= vs["spec"]["http"][0]["match"][0]["uri"]["prefix"]
ui_endpoint = endpoint + path
display(HTML(f"mnist UI is at <a href='{ui_endpoint}'>{ui_endpoint}</a>"))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: When feature_weight = None, the output should match Random Forest.
Step2: When feature_weight is uniform, it should give the same feature importance.
Step3: When feature_weight is weighted, it should give the roughly same feature ranking.
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import matplotlib.pyplot as plt
from sklearn.datasets import load_breast_cancer
import numpy as np
from functools import reduce
# Import our custom utilities
from imp import reload
from utils import irf_jupyter_utils
from utils import irf_utils
reload(irf_jupyter_utils)
reload(irf_utils)
# Import RF related functions
from sklearn.ensemble import RandomForestClassifier
feature_weight0 = None
X_train, X_test, y_train, y_test, rf = irf_jupyter_utils.generate_rf_example(n_estimators=1000,
feature_weight=feature_weight0)
all_rf_tree_data = irf_utils.get_rf_tree_data(rf=rf, X_train=X_train, X_test=X_test, y_test=y_test)
#all_rf_tree_data
# Print the feature importance
feature_importances_rank_idx0 = all_rf_tree_data['feature_importances_rank_idx']
feature_importances0 = all_rf_tree_data['feature_importances']
print(feature_importances0)
correct_feature_importance =[ 0.04153319, 0.0136872, 0.05287382, 0.05537257, 0.00571718, 0.01101297,
0.04525511, 0.08925701, 0.00407582, 0.00337926, 0.01301454, 0.00396505,
0.01022279, 0.03255195, 0.00498767, 0.00438016, 0.00771317, 0.00459407,
0.0037973, 0.00448982, 0.10938616, 0.01690837, 0.14415417, 0.1204331,
0.01276175, 0.01472586, 0.03019196, 0.12449026, 0.00858072, 0.00648698]
feature_weight1 = [1]*30
X_train, X_test, y_train, y_test, rf = irf_jupyter_utils.generate_rf_example(n_estimators=1000,
feature_weight=feature_weight1)
all_rf_tree_data = irf_utils.get_rf_tree_data(rf=rf, X_train=X_train, X_test=X_test, y_test=y_test)
#all_rf_tree_data
#feature importance
feature_importances_rank_idx1 = all_rf_tree_data['feature_importances_rank_idx']
feature_importances1 = all_rf_tree_data['feature_importances']
print(feature_importances1)
feature_weight2 = correct_feature_importance
X_train, X_test, y_train, y_test, rf = irf_jupyter_utils.generate_rf_example(n_estimators=1000,
feature_weight=feature_weight2)
all_rf_tree_data = irf_utils.get_rf_tree_data(rf=rf, X_train=X_train, X_test=X_test, y_test=y_test)
#all_rf_tree_data
#feature importance
feature_importances_rank_idx2 = all_rf_tree_data['feature_importances_rank_idx']
feature_importances2 = all_rf_tree_data['feature_importances']
for f in range(X_train.shape[1]):
print("%2d. feature %2d (%10.9f) and feature %2d (%10.9f)" % (f + 1
, feature_importances_rank_idx1[f]
, feature_importances1[feature_importances_rank_idx1[f]]
, feature_importances_rank_idx2[f]
, feature_importances2[feature_importances_rank_idx2[f]]))
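# A possible quantitative comparison of the two rankings (hedged addition; scipy is
# assumed to be available, as it is already a scikit-learn dependency):
from scipy.stats import spearmanr
rho, pval = spearmanr(feature_importances1, feature_importances2)
print('Spearman rank correlation between the two importance vectors: %.3f (p=%.3g)' % (rho, pval))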
def test_iRF_weight1():
#Check when label is random, whether the feature importance of every feature is the same.
n_samples = 1000
n_features = 10
random_state_classifier = 2018
np.random.seed(random_state_classifier)
X_train = np.random.uniform(low=0, high=1, size=(n_samples, n_features))
y_train = np.random.choice([0, 1], size=(n_samples,), p=[.5, .5])
X_test = np.random.uniform(low=0, high=1, size=(n_samples, n_features))
y_test = np.random.choice([0, 1], size=(n_samples,), p=[.5, .5])
all_rf_weights, all_K_iter_rf_data, \
all_rf_bootstrap_output, all_rit_bootstrap_output, \
stability_score = irf_utils.run_iRF(X_train=X_train,
X_test=X_test,
y_train=y_train,
y_test=y_test,
K=5,
n_estimators=20,
B=30,
random_state_classifier=2018,
propn_n_samples=.2,
bin_class_type=1,
M=20,
max_depth=5,
noisy_split=False,
num_splits=2,
n_estimators_bootstrap=5)
assert np.max(all_rf_weights['rf_weight5'])<.135
test_iRF_weight1()
def test_iRF_weight2():
#Check when feature 1 fully predict the label, its importance should be 1.
n_samples = 1000
n_features = 10
random_state_classifier = 2018
np.random.seed(random_state_classifier)
X_train = np.random.uniform(low=0, high=1, size=(n_samples, n_features))
y_train = np.random.choice([0, 1], size=(n_samples,), p=[.5, .5])
X_test = np.random.uniform(low=0, high=1, size=(n_samples, n_features))
y_test = np.random.choice([0, 1], size=(n_samples,), p=[.5, .5])
# first feature is very important
X_train[:,1] = X_train[:,1] + y_train
X_test[:,1] = X_test[:,1] + y_test
all_rf_weights, all_K_iter_rf_data, \
all_rf_bootstrap_output, all_rit_bootstrap_output, \
stability_score = irf_utils.run_iRF(X_train=X_train,
X_test=X_test,
y_train=y_train,
y_test=y_test,
K=5,
n_estimators=20,
B=30,
random_state_classifier=2018,
propn_n_samples=.2,
bin_class_type=1,
M=20,
max_depth=5,
noisy_split=False,
num_splits=2,
n_estimators_bootstrap=5)
print(all_rf_weights['rf_weight5'])
assert all_rf_weights['rf_weight5'][1] == 1
test_iRF_weight2()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The first two lines deal with the ability to show your graphs (generated via matplotlib) within this notebook; the remaining two lines import matplotlib's sublibrary pyplot as plt.
Step2: <h3 style="color
Step3: A slower but easier to understand version of this function is shown below. The tag slow is explained shortly after.
Step4: Step 3
Step5: The choice for the interpolation is obvious
Step6: Step 4
Step7: Although the plot suggests that the interpolation works, a visual proof can be deceptive. It is best to calculate the error between the exact and interpolated solution. Here we use an $l^2$-norm
Step8: For reasons that will become clearer later, we want to consider other interpolation schemes
Step9: <h3 style="color
Step10: Step 5
Step11: The discretization of the time derivative is crude. A better discretization is the 2nd-order Runge-Kutta
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
# plots graphs within the notebook
%config InlineBackend.figure_format='svg' # not sure what this does, may be default images to svg format
import matplotlib.pyplot as plt #calls the plotting library hereafter referred as to plt
import numpy as np
L = 8*np.pi
N = 200
dx = L/N
u_0 = 1.
phi = np.zeros(N)
F = np.zeros(N+1)
u = u_0*np.ones(N+1)
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
def init_simulation(x_phi,N):
phi = np.zeros(N)
phi = 1.+np.cos(x_phi-L/2.)
xmask = np.where(np.abs(x_phi-L/2.) > np.pi)
phi[xmask] = 0.
return phi
phi = init_simulation(x_phi,N)
plt.plot(x_phi,phi,lw=2)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.show()
def init_simulation_slow(u,phi,x_phi,N):
for i in range(N):
if (np.abs(x_phi[i]-L/2.) > np.pi):
phi[i] = 0.
else:
phi[i] = 1.+np.cos(x_phi[i]-L/2.)
return phi
phi = init_simulation_slow(u,phi,x_phi,N)
plt.plot(x_phi,phi,lw=2)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.show()
%%timeit
flux0 = np.zeros(N+1)
for i in range(1,N):
flux0[i] = 0.5*(phi[i-1]+phi[i])*u[i]
%%timeit
flux1 = np.zeros(N+1)
flux1[1:N] = 0.5*(phi[0:N-1]+phi[1:N])*u[1:N]
def compute_flux(a,v,N):
f=np.zeros(N+1)
f[1:N] = 0.5*(a[0:N-1]+a[1:N])*v[1:N]
f[0] = f[1]
f[N] = f[N-1]
return f
F_exact = np.zeros(N+1)
F_exact = init_simulation(x_u,N+1)
F = compute_flux(phi,u,N)
plt.plot(x_u,F_exact,lw=2,label="exact")
plt.plot(x_u,F,'r--',lw=2,label="interpolated")
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.legend(loc="upper left", bbox_to_anchor=[0, 1],
ncol=1, shadow=True, fancybox=True)
plt.show()
N = 200
phi = np.zeros(N)
F_exact = np.zeros(N+1)
F = np.zeros(N+1)
u = u_0*np.ones(N+1)
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
phi = init_simulation(x_phi,N)
F_exact = init_simulation(x_u,N+1)
F = compute_flux(phi,u,N)
error = np.sqrt(np.sum(np.power(F-F_exact,2)))
errorx = np.power(F-F_exact,2)
plt.plot(x_u,errorx)
plt.show()
print('error norm L 2= %1.4e' %error)
Nerror = 3
Narray = np.array([10, 100, 200])
delta = L/Narray
error = np.zeros(Nerror)
order = np.zeros(Nerror)
for ierror in range(Nerror):
N = Narray[ierror]
phi = np.zeros(N)
F_exact = np.zeros(N+1)
F = np.zeros(N+1)
u = u_0*np.ones(N+1)
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
phi = init_simulation(x_phi,N)
F_exact = init_simulation(x_u,N+1)
F = compute_flux(phi,u,N)
error[ierror] = np.linalg.norm(F-F_exact)
#error[ierror] = np.sqrt(np.sum(np.power(F-F_exact,2)))
print('error norm L 2= %1.4e' %error[ierror])
order = 0.1*delta**(2)
plt.loglog(delta,error,lw=2,label='interpolate')
plt.loglog(delta,order,lw=2,label='$\propto\Delta x^2$')
plt.legend(loc="upper left", bbox_to_anchor=[0, 1],
ncol=1, shadow=True, fancybox=True)
plt.xlabel('$\Delta x$', fontdict = font)
plt.ylabel('$\Vert F\Vert_2$', fontdict = font)
plt.show
Nscheme = 4
Scheme = np.array(['CS','US1','US2','US3'])
g_1 = np.array([1./2.,0.,0.,3./8.])
g_2 = np.array([0.,0.,1./2.,1./8.])
def compute_flux_advanced(a,v,N,num_scheme):
imask = np.where(Scheme == num_scheme)
g1 = g_1[imask]
g2 = g_2[imask]
f=np.zeros(N+1)
f[2:N] = ((1.-g1+g2)*a[1:N-1]+g1*a[2:N]-g2*a[0:N-2])*v[2:N]
if (num_scheme == 'US2') or (num_scheme == 'US3'):
f[1] = ((1.-g1)*a[0]+g1*a[1])*v[1]
f[0] = f[1]
f[N] = f[N-1]
return f
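# Assumed helper (hedged addition): `ListTable` is not defined in this excerpt; a minimal
# stand-in that renders a list of rows as an HTML table in the notebook.
class ListTable(list):
    def _repr_html_(self):
        rows = ["<tr>" + "".join("<td>{}</td>".format(c) for c in row) + "</tr>" for row in self]
        return "<table>" + "".join(rows) + "</table>"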
table = ListTable()
table.append(['Scheme', '$g_1$', '$g_2$'])
for i in range(4):
table.append([Scheme[i],g_1[i], g_2[i]])
table
Nerror = 3
Narray = np.array([10, 100, 200])
delta = L/Narray
error = np.zeros((Nerror,Nscheme))
order = np.zeros((Nerror,Nscheme))
for ischeme in range(Nscheme):
num_scheme = Scheme[ischeme]
for ierror in range(Nerror):
N = Narray[ierror]
dx = L/N
phi = np.zeros(N)
F_exact = np.zeros(N+1)
F = np.zeros(N+1)
u = u_0*np.ones(N+1)
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
phi = init_simulation(x_phi,N)
F_exact = init_simulation(x_u,N+1)
F = compute_flux_advanced(phi,u,N,num_scheme)
error[ierror,ischeme] = np.linalg.norm(F-F_exact)
#print('error norm L 2= %1.4e' %error[ierror,ischeme])
for ischeme in range(Nscheme):
plt.loglog(delta,error[:,ischeme],lw=2,label=Scheme[ischeme])
order = 2.0*(delta/delta[0])
plt.loglog(delta,order,'k:',lw=2,label='$\propto\Delta x$')
order = 0.1*(delta/delta[0])**(2)
plt.loglog(delta,order,'k-',lw=2,label='$\propto\Delta x^2$')
order = 0.1*(delta/delta[0])**(3)
plt.loglog(delta,order,'k--',lw=2,label='$\propto\Delta x^3$')
plt.legend(loc=2, bbox_to_anchor=[0, 1],
ncol=3, shadow=True, fancybox=True)
plt.xlabel('$\Delta x$', fontdict = font)
plt.ylabel('$\Vert F\Vert_2$', fontdict = font)
plt.xlim(L/300,L/9.)
plt.ylim(1e-5,1e2)
plt.show
def flux_divergence(f,N,dx):
df = np.zeros(N)
df[0:N] = (f[1:N+1]-f[0:N])/dx
return df
N=200
Simulation_time = 5.
dx = L/N
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
u_0 = 1.
num_scheme = 'US2'
u = u_0*np.ones(N+1)
phi = np.zeros(N)
flux = np.zeros(N+1)
divflux = np.zeros(N)
phi = init_simulation(x_phi,N)
phi_init = phi.copy()
number_of_iterations = 100
dt = Simulation_time/number_of_iterations
t = 0.
for it in range (number_of_iterations):
flux = compute_flux_advanced(phi,u,N,num_scheme)
divflux = flux_divergence(flux,N,dx)
phi -= dt*divflux
t += dt
plt.plot(x_phi,phi,lw=2,label='simulated')
plt.plot(x_phi,phi_init,lw=2,label='initial')
plt.legend(loc=2, bbox_to_anchor=[0, 1],
ncol=2, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.show()
N=200
Simulation_time = 5.
dx = L/N
x_phi = np.linspace(dx/2.,L-dx/2.,N)
x_u = np.linspace(0.,L,N+1)
u_0 = 1.
num_scheme = 'CS'
u = u_0*np.ones(N+1)
phi = np.zeros(N)
flux = np.zeros(N+1)
divflux = np.zeros(N)
phiold = np.zeros(N)
phi = init_simulation(x_phi,N)
phi_init = phi.copy()
rk_coef = np.array([0.5,1.])
number_of_iterations = 100
dt = Simulation_time/number_of_iterations
t = 0.
for it in range (number_of_iterations):
phiold = phi
for irk in range(2):
flux = compute_flux_advanced(phi,u,N,num_scheme)
divflux = flux_divergence(flux,N,dx)
phi = phiold-rk_coef[irk]*dt*divflux
t += dt
plt.plot(x_phi,phi,lw=2,label='simulated')
plt.plot(x_phi,phi_init,lw=2,label='initial')
plt.legend(loc=2, bbox_to_anchor=[0, 1],
ncol=2, shadow=True, fancybox=True)
plt.xlabel('$x$', fontdict = font)
plt.ylabel('$\phi$', fontdict = font)
plt.xlim(0,L)
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Plots
Step2: Looking at Node size conditional probabilities
| <ASSISTANT_TASK:>
Python Code:
# Necessary imports
import os
import time
from nbminer.notebook_miner import NotebookMiner
from nbminer.cells.cells import Cell
from nbminer.features.ast_features import ASTFeatures
from nbminer.stats.summary import Summary
from nbminer.stats.multiple_summary import MultipleSummary
#Loading in the notebooks
people = os.listdir('../testbed/Final')
notebooks = []
for person in people:
person = os.path.join('../testbed/Final', person)
if os.path.isdir(person):
direc = os.listdir(person)
notebooks.extend([os.path.join(person, filename) for filename in direc if filename.endswith('.ipynb')])
notebook_objs = [NotebookMiner(file) for file in notebooks]
a = ASTFeatures(notebook_objs)
for i, nb in enumerate(a.nb_features):
a.nb_features[i] = nb.get_new_notebook()
from helper_classes.cond_computer import CondComputer
node_list = []
for i, nb in enumerate(a.nb_features):
node_list.append('start')
for cell in (nb.get_all_cells()):
t = type(cell.get_feature('ast').body[0])
node_list.append(t)
node_list.append('end')
cc = CondComputer(node_list)
arr, arr_names = cc.compute_probabilities(cc.count_totals,.01)
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
plt.rcParams['figure.figsize'] = (20, 10)
cc.plot_bar(arr, arr_names, 'Probability per Node type')
cc.plot_conditional_bar(arr, arr_names, 0, 'Probability per Node type')
cc.plot_conditional_bar(arr, arr_names, 1, 'Probability per Node type')
cc.plot_conditional_bar(arr, arr_names, 2, 'Probability per Node type')
cc.plot_conditional_bar(arr, arr_names, 3, 'Probability per Node type')
cc.plot_conditional_bar(arr, arr_names, 4, 'Probability per Node type')
cc.plot_conditional_bar(arr, arr_names, 5, 'Probability per Node type')
ast_sizes = []
for i, nb in enumerate(a.nb_features):
nb.set_ast_size()
for el in nb.get_all_cells():
ast_sizes.append(el.get_feature('ast_size'))
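# Quick summary of the cell-size distribution (hedged addition):
print(len(ast_sizes), min(ast_sizes), max(ast_sizes), np.mean(ast_sizes))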
bin_end = [4, 7, 9, 12, 15, 22, 36]
bin_count = {}
for el in bin_end:
bin_count[el] = 0
for num in ast_sizes:
for i in range(len(bin_end)):
if num < bin_end[i]:
bin_count[bin_end[i]] += 1
break
names = ['Less than ' + str(bin_end[0])]
for i in range(1,len(bin_end)-1):
names.append(str(bin_end[i-1]) + ' <= Num Nodes < ' + str(bin_end[i]))
names.append('Greater than' + str(bin_end[-1]))
for key in bin_count.keys():
print (key, bin_count[key])
size_features = []
for i, nb in enumerate(a.nb_features):
nb.set_ast_size()
size_features.append('start')
for el in nb.get_all_cells():
num = el.get_feature('ast_size')
for ind in range(len(bin_end)):
if num < bin_end[ind]:
size_features.append(ind)
break
size_features.append('end')
cc = CondComputer(size_features)
arr, arr_names = cc.compute_probabilities(cc.count_totals,0,np.arange(7))
cc.plot_bar(arr, names, 'Probability per Node size')
cc.plot_conditional_bar(arr, arr_names, 0, 'Probability per Node size', x_labels = names)
cc.plot_conditional_bar(arr, arr_names, 1, 'Probability per Node size', x_labels = names)
cc.plot_conditional_bar(arr, arr_names, 2, 'Probability per Node size', x_labels = names)
cc.plot_conditional_bar(arr, arr_names, 3, 'Probability per Node size', x_labels = names)
cc.plot_conditional_bar(arr, arr_names, 4, 'Probability per Node size', x_labels = names)
cc.plot_conditional_bar(arr, arr_names, 5, 'Probability per Node size', x_labels = names)
cc.plot_conditional_bar(arr, arr_names, 6, 'Probability per Node size', x_labels = names)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: First, I scraped the data about the skiers and their id numbers on the FIS website. I needed the ids to build the URL addresses of the individual athletes. I then stored the collected data in the file smucarji.csv.
Step2: The table looks like this
Step3: Then, for each competitor, I collected the data about every race from the page with his or her results (e.g. Eva-Maria Brem)
Step4: The table for Eva-Maria Brem
Step5: Unfortunately the dates will not be of much help, because on the results page they are written in different formats. As an example, look at the first and second row of the table above. From the first row we could conclude that the format is YYYY-MM-DD, but that would mean the race in the second row took place on 3 July 2016, which for obvious reasons is not true. Unfortunately I cannot fix this.
Step6: The table looks like this
Step7: A problem appears in the later analysis: the placement data can be numbers or text (e.g. DNQ1, DNF1, DSQ2 and DNS1) denoting withdrawals, disqualifications and similar anomalies.
Step8: If we want to analyse the overall standings, we must also convert the place into points. We define a list 'tocke' whose i-th entry (i running from 0 to 30) records how many points a competitor gets for finishing in i-th place. Then we write a function that converts a place into points won.
Step9: We now add columns to the tables
Step10: The table above shows the transformed table for Eva-Maria Brem. We did the same with the table containing all the data
Step11: Analysis
Step12: Eva-Maria Brem therefore competes most often in slalom and giant slalom. Let us illustrate this with a chart as well
Step13: Although she competes most often in slalom and giant slalom, these are not necessarily the disciplines in which she achieves her best results. Let us first look at her results in slalom and then in giant slalom
Step14: The tables show that her slalom results are mostly at the tail end of the top thirty, while in giant slalom she places among the top 5. This is even clearer from the charts
Step15: Let us also look at how many points she won on average in each discipline, to determine her "signature discipline".
Step16: Giant slalom is therefore quite clearly the discipline that brings her the most points.
Step17: Since the dates are of no use to us, let us look only at the places achieved
Step18: Let us look at a similar chart separately for giant slalom
Step19: We notice that one result stands out
Step20: In Are she won as much as 20% of all her points. Does she also achieve her highest placings there?
Step21: She evidently achieves her highest placings in Meribel. To explain the results further, let us look at how many races Eva-Maria Brem took part in at each venue.
Step22: E. M. Brem took part in by far the most races in Are, so it is no surprise that she won the most points there. Likewise we cannot claim that Meribel is her "best" venue, since she competed in only one race there.
Step23: We want to know how many starts the competitors accumulated in this season. We show the number of starts for 10 competitors.
Step24: The competitor with the most starts competed 21 times. But who is it?
Step25: There are in fact two such competitors, with id numbers 125871 and 127048 - Lara Gut and Alexis Pinturault.
Step26: And who won the most points? Let us look at the top 10
Step27: And their names
Step28: Analysis of nationalities
Step29: Analysis of skis
Step30: Let us see which countries' representatives use Head skis (and how many of them there are)
Step31: Similarly, we can look at which ski manufacturers the skiers from Austria trust most
Step32: Analysing a single manufacturer or a single country does not give the wider picture, so let us look at a chart that shows, for each country, which manufacturer's skis its team members use.
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import numpy as py
#import scipy
# Make the graphs a bit prettier, and bigger
#pd.set_option('display.mpl_style', 'default')
#plt.rcParams['figure.figsize'] = (15, 5)
# This is necessary to show lots of columns in pandas 0.12.
# Not necessary in pandas 0.13.
pd.set_option('display.width', 5000)
pd.set_option('display.max_columns', 60)
pot="csv-datoteke/smucarji.csv"
smucarji = pd.read_csv(pot, parse_dates=['rojstvo'], index_col='id')
smucarji[:10]
pot_brem = "csv-datoteke/BREM Eva-Maria.csv"
brem = pd.read_csv(pot_brem, parse_dates=['datum'])
brem[:10]
pot1 ="csv-datoteke/vse.csv"
vse = pd.read_csv(pot1, parse_dates=['datum'])
vse[197:203]
def pretvori(bes):
if bes in ['DNQ1', 'DNF1', 'DSQ2', 'DNS1','DNF2','DSQ1','DNS2','DNQ2','DNF','DNC1','DSQ','DNS']:
return 0
else:
return int(bes)
tocke=[0,100,80,60,50,45,40,36,32,29,26,24,22,20,18,16,15,14,13,12,11,10,9,8,7,6,5,4,3,2,1]
def pretvori_2(bes):
if bes in ["DNQ1", "DNF1", "DSQ2", "DNS1", "DNF2"]:
return 0
else:
if int(bes) > 30:
return 0
else:
return tocke[int(bes)];
vse['mesto1'] = vse['mesto'].map(pretvori)
brem['mesto1'] = brem['mesto'].map(pretvori)
vse['tocke'] = vse['mesto1'].map(pretvori_2)
brem['tocke'] = brem['mesto1'].map(pretvori_2)
brem[:5]
vse[2024:2028]
brem['disciplina'].value_counts()
brem['disciplina'].value_counts().plot(kind='pie', figsize=(6,6))
slalom = brem['disciplina'] == 'Slalom'
brem[slalom][:10]
veleslalom = brem['disciplina'] == 'Giant Slalom'
brem[veleslalom][:10]
brem[slalom]['mesto1'].value_counts().plot(kind='bar', title="Rezultati E. M. Brem v slalomu")
brem[veleslalom]['mesto1'].value_counts().plot(kind='bar', title='Rezultati E. M. Brem v veleslalomu')
brem.groupby(['disciplina'])['tocke'].sum() / brem['tocke'].sum()
prvi_del = brem[brem['datum'].dt.year == 2016]
drugi_del = brem[(brem['datum'].dt.month > 9) & (brem['datum'].dt.year == 2015)]
tabela = prvi_del.append(drugi_del)
tabela
tabela['mesto1'].value_counts().plot(kind='pie')
tabela[tabela['disciplina'] == 'Giant Slalom']['mesto1'].value_counts().plot(kind='pie')
po_krajih = brem.groupby(['kraj'])['tocke'].sum() / brem['tocke'].sum()
po_krajih.nlargest(7)
po_krajih1 = brem.groupby(['kraj'])['tocke'].mean()
po_krajih1.nlargest(7)
brem.groupby(['kraj']).size().sort_values(ascending = False)
sezona = vse[vse['datum'].dt.year == 2016]
drugi_del_vse = vse[(vse['datum'].dt.month > 9) & (vse['datum'].dt.year == 2015)]
sezona.append(drugi_del_vse)[40:46]
sezona.groupby(['id']).size()[:10]
sezona.groupby(['id']).size().max()
nova = sezona.groupby(['id']).size()
nova.nlargest(6)
print(smucarji.get_value(index = 125871, col = 'ime'), ", ", smucarji.get_value(index = 127048, col = 'ime'))
sezona.groupby(['id']).agg({'tocke':sum}).nlargest(columns = 'tocke', n = 10)
naj = [106332,127048,154950,125871,104502,27657,30368,107164,109079,137380]
for i in naj:
print(i,': ', smucarji.get_value(index = i, col = 'ime'))
smucarji['drzava'].value_counts().head(10)
smucarji['drzava'].value_counts().plot(kind='bar', figsize = (12,6))
smucarji['smuci'].value_counts()
smucarji['smuci'].value_counts().plot(kind='pie', figsize=(6,6))
smucarji[smucarji['smuci'] == "Head"]['drzava'].value_counts().plot(kind='bar')
smucarji[smucarji['drzava'] == "AUT"]['smuci'].value_counts().plot(kind='bar')
oboje = smucarji.groupby(['smuci','drzava']).size()
priprava = oboje.unstack().plot(kind='barh', stacked=True, figsize=(20, 14), fontsize=12)
priprava.legend(loc=(0.02,1), ncol=6)
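# Optional extra (a sketch, not part of the original analysis): total season
# points per nation, assuming the `sezona` and `smucarji` frames defined above.
tocke_po_id = sezona.groupby('id')['tocke'].sum()
tocke_po_drzavah = tocke_po_id.groupby(smucarji['drzava']).sum()
tocke_po_drzavah.sort_values(ascending=False).head(10)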
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Generate the data
Step2: Calculate the covariance matrix and get the eigenvalues/eigenvectors
Step3: Plot the eigenvectors on the data
Step4: Now let's check that the eigenvectors indeed form a basis
Step6: Projection of the data on the different planes
| <ASSISTANT_TASK:>
Python Code:
# to display interactive plots within the notebook
%matplotlib notebook
# to define the size of the plotted images
from pylab import rcParams
rcParams['figure.figsize'] = (10, 8)
import matplotlib.pyplot as plt
import numpy as np
from fct import generate_multivariate, normalize, plot_3d, plot_2d, Arrow3D
data = generate_multivariate(size=500, dimension=3)
plot_3d(data)
# build the covariance matrix from the randomly generated data
cov = np.cov(data.T)
# get its eigen values and vectors
eigenvalues, eigenvectors = np.linalg.eig(cov)
# sorting the eigenvalues
idx = eigenvalues.argsort()[::-1]
eigenvalues = eigenvalues[idx]
eigenvectors = eigenvectors[:,idx]
fig = plt.figure()
# RQ: gca = get current axis
ax = fig.gca(projection='3d')
data_t = data.T
maxi = max([max(abs(el)) for el in data])
ax.set_xlim([-maxi, maxi])
ax.set_ylim([-maxi, maxi])
ax.set_zlim([-maxi, maxi])
ax.scatter(data_t[0], data_t[1], data_t[2], alpha=0.2)
plt.title('Data with vectors of the new base.')
for vector in eigenvectors:
# vectors are made bigger to better visualize them
vector_plt = 2 * vector
a = Arrow3D([0, vector_plt[0]],[0, vector_plt[1]],[0, vector_plt[2]],
mutation_scale=20, lw=1, arrowstyle="-|>", color="r")
ax.add_artist(a)
# The new vectors might not seem orthogonal because of scaling issues so
# here is a proof.
# Note that sometimes, as computers tend to have issues with
# floating numbers, you might not get 0.0 but a very very
# small number (10^-16 for instance).
v1, v2, v3 = eigenvectors
print(sum(v1 * v2))
print(sum(v1 * v3))
print(sum(v2 * v3))
def projection(data, vectors):
    """Return the dataset projected on the two vectors given."""
v1, v2 = vectors
data_projected = []
for datapoint in data:
# we use a scalar product to project on the new base (v1, v2)
# RQ: the multiplication datapoint * v is only possible
# because datapoint is a ndarray.
new_data = []
new_data.append(sum(datapoint * v1))
new_data.append(sum(datapoint * v2))
data_projected.append(np.array(new_data))
return data_projected
def plot_projection(data, vectors, title=''):
data_projected = projection(data, vectors)
fig = plt.figure()
maxi = max([max(abs(el)) for el in data])
plot_2d(data_projected, color='b', alpha=1, maxi=maxi, fig=fig, title=title)
plot_projection(data, [v1, v2], title='two best vectors')
plot_projection(data, [v1, v3], title='best and worst vectors')
plot_projection(data, [v2, v3], title='two worst vectors')
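# A small extra check (sketch): the fraction of total variance captured by each
# eigenvector, i.e. the "explained variance ratio" familiar from PCA.
explained = eigenvalues / eigenvalues.sum()
for i, frac in enumerate(explained):
    print("component %d explains %.1f%% of the variance" % (i + 1, 100 * frac))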
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: This code sets up IPython Notebook environments (lines beginning with %), and loads several libraries and functions. The core scientific stack in Python consists of a number of free libraries. The ones I have loaded above include
Step2: Vectors and Lists
Step3: We could have done this by defining a python list and converting it to an array
Step4: Matrix Addition and Subtraction
Step5: Adding or subtracting two matrices
Step6: Matrix Multiplication
Step7: Multiplying two matrices
Step8: We will use the numpy dot operator to perform these multiplications. You can use it two ways to yield the same result
Step9: Matrix Division
Step10: Check that $C\times C^{-1} = I$
Step11: Transposing a Matrix
Step12: One important property of transposing a matrix is the transpose of a product of two matrices. Let matrix A be of dimension $N \times M$ and let B be of dimension $M \times P$. Then
Step13: Mechanics
Step14: Logic, Comparison
Step15: Concatenate, Reshape
Step16: Example
Step17: Let's add the bias, i.e. a column of $1$s to the explanatory variables
Step18: Closed-form Linear Regression
Step19: Multiple Linear Regression
Step20: Evaluation
Step21: Regularization, Ridge-Regression
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import math
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sbn
##from scipy import *
x = .5
print x
x_vector = np.array([1,2,3])
print x_vector
c_list = [1,2]
print "The list:",c_list
print "Has length:", len(c_list)
c_vector = np.array(c_list)
print "The vector:", c_vector
print "Has shape:",c_vector.shape
z = [5,6]
print "This is a list, not an array:",z
print type(z)
A = np.arange(6).reshape((3,2))  # define A (a 3x2 matrix) so the examples below run
result = A + 3
#or
result = 3 + A
print result
B = np.random.randn(2,2)
print B
A * 3
# Let's redefine A and C to demonstrate matrix multiplication:
A = np.arange(6).reshape((3,2))
C = np.random.randn(2,2)
print A.shape
print C.shape
print A.dot(C)
print np.dot(A,C)
# What would happen to
C.dot(A)
# note, we need a square matrix (# rows = # cols), use C:
C_inverse = np.linalg.inv(C)
print C_inverse
print C.dot(C_inverse)
print "Is identical to:"
print C_inverse.dot(C)
A = np.arange(6).reshape((3,2))
B = np.arange(8).reshape((2,4))
print "A is"
print A
print "The Transpose of A is"
print A.T
print B.T.dot(A.T)
print "Is identical to:"
print (A.dot(B)).T
a = np.arange(10)
s = slice(2,7,2)
print a[s]
a = np.arange(10)
b = a[2:7:2]
print b
a = np.arange(10)
b = a[5]
print b
a = np.arange(10)
print a[2:]
import numpy as np
a = np.arange(10)
print a[2:5]
a = np.array([[1,2,3],[3,4,5],[4,5,6]])
print a
# slice items starting from index
print 'Now we will slice the array from the index a[1:]'
print a[1:]
# array to begin with
a = np.array([[1,2,3],[3,4,5],[4,5,6]])
print 'Our array is:'
print a
print '\n'
# this returns array of items in the second column
print 'The items in the second column are:'
print a[...,1]
print '\n'
# Now we will slice all items from the second row
print 'The items in the second row are:'
print a[1,...]
print '\n'
# Now we will slice all items from column 1 onwards
print 'The items column 1 onwards are:'
print a[...,1:]
A = np.random.rand(5,5)*10
print A[:,1]>4
A[A[:,1]>4]
### Pure iterative Python ###
points = [[9,2,8],[4,7,2],[3,4,4],[5,6,9],[5,0,7],[8,2,7],[0,3,2],[7,3,0],[6,1,1],[2,9,6]]
qPoint = [4,5,3]
minIdx = -1
minDist = -1
for idx, point in enumerate(points): # iterate over all points
dist = sum([(dp-dq)**2 for dp,dq in zip(point,qPoint)])**0.5 # compute the euclidean distance for each point to q
if dist < minDist or minDist < 0: # if necessary, update minimum distance and index of the corresponding point
minDist = dist
minIdx = idx
print 'Nearest point to q: ', points[minIdx]
# # # Equivalent NumPy vectorization # # #
import numpy as np
points = np.array([[9,2,8],[4,7,2],[3,4,4],[5,6,9],[5,0,7],[8,2,7],[0,3,2],[7,3,0],[6,1,1],[2,9,6]])
qPoint = np.array([4,5,3])
minIdx = np.argmin(np.linalg.norm(points-qPoint,axis=1)) # compute all euclidean distances at once and return the index of the smallest one
print 'Nearest point to q: ', points[minIdx]
n = 100 # numeber of samples
Xr = np.random.rand(n)*99.0
y = -7.3 + 2.5*Xr + np.random.randn(n)*27.0
plt.plot(Xr, y, "o", alpha=0.5)
X = np.vstack((np.ones(n), Xr)).T
print X.shape
X[0:10,:]
beta = np.linalg.inv(X.T.dot(X)).dot(X.T).dot(y)
yhat = X.dot(beta)
yhat.shape
plt.plot(X[:,1], y, "o", alpha=0.5)
plt.plot(X[:,1], yhat, "-", alpha=1, color="red")
n = 100 # numeber of samples
X1 = np.random.rand(n)*99.0
X2 = np.random.rand(n)*51.0 - 26.8
X3 = np.random.rand(n)*5.0 + 6.1
X4 = np.random.rand(n)*1.0 - 0.5
X5 = np.random.rand(n)*300.0
y_m = -7.3 + 2.5*X1 + -7.9*X2 + 1.5*X3 + 10.0*X4 + 0.13*X5 + np.random.randn(n)*7.0
plt.hist(y_m, bins=20)
;
X_m = np.vstack((np.ones(n), X1, X2, X3, X4, X5)).T
X_m.shape
beta_m = np.linalg.inv(X_m.T.dot(X_m)).dot(X_m.T).dot(y_m)
beta_m
yhat_m = X.dot(beta_m)
yhat_m.shape
import math
RSMD = math.sqrt(np.square(yhat_m-y_m).sum()/n)
print RSMD
p = X.shape[1] ## get number of parameters
lam = 10.0
p, lam
beta2 = np.linalg.inv(X.T.dot(X) + lam*np.eye(p)).dot(X.T).dot(y)
yhat2 = X.dot(beta2)
RSMD2 = math.sqrt(np.square(yhat2-y).sum()/n)
print RSMD2
##n = float(X.shape[0])
print " RMSE = ", math.sqrt(np.square(yhat-y).sum()/n)
print "Ridge RMSE = ", math.sqrt(np.square(yhat2-y).sum()/n)
plt.plot(X[:,1], y, "o", alpha=0.5)
plt.plot(X[:,1], yhat, "-", alpha=0.7, color="red")
plt.plot(X[:,1], yhat2, "-", alpha=0.7, color="green")
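# Optional sketch: sweep the ridge penalty to see how the training RMSE responds
# (illustrative only; larger penalties shrink the coefficients and raise RMSE).
for lam_try in [0.1, 1.0, 10.0, 100.0]:
    beta_try = np.linalg.inv(X.T.dot(X) + lam_try*np.eye(p)).dot(X.T).dot(y)
    print lam_try, math.sqrt(np.square(X.dot(beta_try)-y).sum()/n)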
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. Calculate the translational partition function of a CO molecule in the bottle at 298 K. What is the unit of the partition function?
Step2: 3. Plot the rotational and vibrational partition functions of a CO molecule in the bottle from $T$ = 200 to 2000 K (assume the CO remains a gas over the whole range). Hint
Step3: 4. Plot the total translational, rotational, and vibrational energies of CO in the bottle from $T =$ 200 to 2000 K (assume the CO remains a gas over the whole range). Which (if any) of the three types of motions dominate the total energy?
Step4: 5. Plot the total translational, rotational, and vibrational constant volume molar heat capacities of CO in the bottle from $T =$ 200 to 2000 K. Which (if any) of the three types of motions dominate the heat capacity?
Step5: 6. Plot the total translational, rotational, and vibrational Helmholtz energies of CO in the bottle from $T =$ 200 to 2000 K. Which (if any) of the three types of motions dominate the Helmholtz energy?
Step6: 7. Use your formulas to calculate $\Delta P$, $\Delta U$, $\Delta A$, and $\Delta S$ associated with isothermally expanding the gas from 20 dm$^3$ to 40 dm$^3$.
Step7: Reactions from scratch
Step8: 9. Using the data provided, determine $\Delta A^{\circ}$(298 K) in kJ mol$^{-1}$, assuming ideal behavior and 1 M standard state. Recall that $A^\circ=E^\text{elec} + \text{ZPE}-RT\ln(q^\circ)-RT$ and that $q^\circ =(q^\text{trans}/V)q^\text{rot}q^\text{vib}/c^\circ$ in units corresponding with the standard state.
Step9: 10. Determine $\Delta G^\circ$(298 K). Recall that $G = A + PV = A + RT$ for an ideal ga.
Step10: 11. Determine $\Delta S^{\circ}$(298 K), in J mol$^{-1}$ K$^{-1}$ , assuming a 1 M standard state. Recall that $S = (U - A)/T$.
Step11: 12. Using the data provided, determine $K_c$ (298 K), assuming a 1 M standard state. You may either determine from partition functions of from the relationship between $K_c$ and $\Delta G^\circ$.
Step12: 13. 1 mole of CF$_3$OH is generated in a 20 L vessel at 298 K and left long enough to come to equilibrium with respect to its decomposition reaction. What is the composition of the gas (concentrations of all the components) at equilibrium (in mol/L)?
Step13: 14. How, directionally, would your answer to Question 13 change if the vessel was at a higher temperature? Use the van'T Hoff relationship to determine the equilibrium constant and equilibrium concentrations at 273 and 323 K. How good was your guess?
Step14: Since $\Delta H$ is positive, we expect $K \uparrow$ as $T \uparrow$
Step15: Therefore, the at higher temperatures, the reaction shifts towards the products.
Step16: 16. Consult a thermodynamics source (e.g. https
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
hbar = 1.05457e-34 # J*s
h = 6.62607e-34 # J*s
kB = 1.38065e-23 # J/K
m = 28.01*1.6605e-27 # kg/molecule
V = 0.02 # m^3
c = 2.99792e10 # cm/s
B = 1.931 # cm^-1
v = 2156.6 # cm^-1
T_trans = np.pi**2*hbar**2/2/m/V**(2/3)/kB
T_rot = h*c*B/kB
T_vib = h*c*v/kB
print('Characteristic temperatures of CO:')
print('T_trans = {0:.3e} K, T_rot = {1:.3f} K, T_vib = {2:0.2f} K.'.format(T_trans,T_rot,T_vib))
Lamda = h*(1/(kB*298*2*np.pi*m))**0.5
print(Lamda)
q_trans = V/Lamda**3
print('The translational partition function of a CO molecule in the bottle at 298 K is {:.4E}.'.format(q_trans))
print('It is dimensionless.')
T = np.linspace(200,2000,1000) # r = R/a_0
q_rot = T/T_rot
q_vib = 1/(1-np.exp(-T_vib/T))
plt.plot(T,q_rot)
plt.xlabel('T (K)')
plt.ylabel('$q_{rot}$')
plt.title('The rotational partition function of a CO molecule')
plt.show()
plt.plot(T,q_vib)
plt.xlabel('T (K)')
plt.ylabel('$q_{vib}$')
plt.title('The vibrational partition function of a CO molecule')
plt.show()
R = 8.31447 # J/(mol*K)
U_trans = 1.5*R*T
U_rot = R*T
U_vib = R*T_vib/(np.exp(T_vib/T)-1)
plt.plot(T,U_trans,label='U_trans')
plt.plot(T,U_rot,label='U_rot')
plt.plot(T,U_vib,label='U_vib')
plt.legend()
plt.xlabel('T (K)')
plt.ylabel('Internal Energy (J/mol)')
plt.title('Internal energies of CO in the bottle')
plt.show()
Cv_trans = np.linspace(1.5*R,1.5*R,1000)
Cv_rot = np.linspace(R,R,1000)
Cv_vib = R*(T_vib/T*np.exp(T_vib/2./T)/(np.exp(T_vib/T)-1))**2
plt.plot(T,Cv_trans,label='Cv_trans')
plt.plot(T,Cv_rot,label='Cv_rot')
plt.plot(T,Cv_vib,label='Cv_vib')
plt.legend()
plt.xlabel('T (K)')
plt.ylabel('Heat Capacity (J/mol/K)')
plt.title('Constant volume molar heat capacities of CO in the bottle ')
plt.show()
print('While translational motion contributes the most to the molar heat capacity of CO,')
print('it does not dominate over rotational and vibrational motion.')
NA = 6.022e23
S_trans = R*np.log(np.exp(2.5)*V/NA/Lamda**3)
S_rot = R*(1-np.log(T_rot/T))
S_vib = R*(T_vib/T/(np.exp(T_vib/T)-1)-np.log(1-np.exp(-T_vib/T)))
A_trans = U_trans-T*S_trans
A_rot = U_rot-T*S_rot
A_vib = U_vib-T*S_vib
plt.plot(T,A_trans,label='A_trans')
plt.plot(T,A_rot,label='A_rot')
plt.plot(T,A_vib,label='A_vib')
plt.legend()
plt.xlabel('T (K)')
plt.ylabel('Helmholtz Energy (J/mol)')
plt.title('Helmholtz energies of CO in the bottle')
plt.show()
V2 = 0.04 # m^3
deltaP = R*298*(1/V2-1/V)
deltaS = R*np.log(np.exp(2.5)*V2/NA/Lamda**3) - R*np.log(np.exp(2.5)*V/NA/Lamda**3)
deltaA = -deltaS*298
print('Delta P = {0:.3f} Pa, Delta U = 0, Delta A = {1:.3f} J/mol, and Delta S = {2:.3f} J/mol/K.'.format(deltaP,deltaA,deltaS))
import numpy as np
T = 298 # K
k = 1.38065e-23 # J/K
R = 8.31447 # J/(mol*K)
Na = 6.0221e23 # 1/mol
c = 6.0221e26 # 1/m^3, conversion factor of 1mol/L = 6.02e26 particles/m^3
autokJ = 2625.50
Eelec = [-412.90047 ,-312.57028 ,-100.31885 ] # kJ/mol
ZPE = [0.02889 ,0.01422 ,0.00925 ] # kJ/mol
dE0 = ((Eelec[1] + ZPE[1] + Eelec[2] + ZPE[2]) - (Eelec[0] + ZPE[0]))* autokJ # kJ/mol
u_trans = [3.7,3.7,3.7] #kJ/mol
u_rot = [3.7,3.7,2.5] #kJ/mol
u_vib = [4.3,1.2,0] #kJ/mol
dU = dE0 + (u_trans[1]+u_rot[1]+u_vib[1])+(u_trans[2]+u_rot[2]+u_vib[2])-(u_trans[0]+u_rot[0]+u_vib[0])#kJ/mol
print("delta_U = %.2f kJ/mol"%dU)
q_trans = [7.72e32/c,1.59e32/c,8.65e31/c] # change translational partition functions from 1/m3 to mol/l std state
q_rot = [61830,679,9.59] # unitless
q_vib = [2.33,1.16,1] # unitless
Q = (q_trans[1]*q_rot[1]*q_vib[1])*(q_trans[2]*q_rot[2]*q_vib[2])/(q_trans[0]*q_rot[0]*q_vib[0]) # total partition
dA = dE0 + (-R*T*np.log(Q)- R*T)/1000 #kJ/mol
print("Q = %.2f"%Q)
print("delta_E0 = %.2f"%dE0)
print("delta_A = %.2f kJ/mol"%dA)
dG = dA + R*T/1000 #kJ/mol
print("delta_G = %.2f kJ/mol"%dG)
dS = 1000*(dU - dA)/T
print("delta_S = %.2f J/mol K"%dS)
Kc = Q *np.exp(-dE0*1000/(R*T)) # Kc equation from lecture notes
print('Kc = %.3f (unitless). '%(Kc))
Kc = np.exp(-dG*1000/(R*T))
print('Kc = %.3f (unitless). '%(Kc))
from sympy import *
x = symbols('x',positive=True)
c = solve(x**2-(0.05-x)*Kc,x)
print('At equilibrium, CF3OH = %.2E mol/L, COF2 = %.5f mol/L, HF = %.5f mol/L.'%(0.05-c[0],c[0],c[0]))
print('At equilibrium, CF3OH = %.2E mol, COF2 = %.5f mol, HF = %.5f mol.'%((0.05-c[0])*20,c[0]*20,c[0]*20))
dn = 2-1
R = 8.314/1000 #kJ/mol K
T = 298 #K
dH = dU+dn*R*T #kJ/mol
print("dH =",round(dH,3),"kJ/mol")
K1 = 2.926
T1 = 298 #K
T2 = 273 #K
K2 = K1*np.exp(-dH/R*(1/T2-1/T1))
print('K=', round(K2,4), 'at 273 K.')
x = symbols('x',positive=True)
c = solve(x**2-(0.05-x)*K2,x)
print('At equilibrium, CF3OH = %.2E mol/L, COF2 = %.5f mol/L, HF = %.5f mol/L.'%(0.05-c[0],c[0],c[0]))
K1 = 2.926
T1 = 298 #K
T2 = 323 #K
K2 = K1*np.exp(-dH/R*(1/T2-1/T1))
print('K=', round(K2,4), 'at 323 K.')
x = symbols('x',positive=True)
c = solve(x**2-(0.05-x)*K2,x)
print('At equilibrium, CF3OH = %.2E mol/L, COF2 = %.5f mol/L, HF = %.5f mol/L.'%(0.05-c[0],c[0],c[0]))
T = 298 #K
R = 8.314
Kc = np.exp(-dG*1000/(R*T))
print('At 298 K, Kc = %.3f (unitless). '%(Kc))
x = symbols('x',positive=True)
c = solve(x**2-(0.2-x)*Kc,x)
print('At equilibrium, CF3OH = %.2E mol/L, COF2 = %.5f mol/L, HF = %.5f mol/L.'%(0.2-c[0],c[0],c[0]))
print('At equilibrium, CF3OH = %.2E mol, COF2 = %.5f mol, HF = %.5f mol.'%((0.2-c[0])*5,c[0]*5,c[0]*5))
print('At a smaller volume, the concentration of products increases, but the number of moles decreases.')
T = 298 #K
#All values were taken from NIST
#Methanol
Hm = -205 #kJ/mol
Sm = .2399 #kJ/mol K
Gm = Hm - T*Sm #kJ/mol
#Hydrogen
Hh = 0
Sh = .13068 #J/mol K
Gh = Hh - T*Sh #kJ/mol
#Formaldehyde
Hf = -108.6 #kJ/mol
Sf = .21895 #kJ/mol K
Gf = Hf - T*Sf #kJ/mol
delta_H = Hf+Hh-Hm #kJ/mol
delta_S = Sf+Sh-Sm #kJ/mol K
delta_G = Gf+Gh-Gm #kJ/mol
print('Delta H =',delta_H,'kJ/mol.')
print('Delta S =',delta_S,'kJ/mol K.')
print('Delta G =',delta_G,'kJ/mol.')
print('Therefore, substituting F with H makes the reaction less favorable.')
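# Extra sketch: a small helper wrapping the van 't Hoff relation used above, so
# K can be re-checked at any temperature (the function name and defaults are
# illustrative, not part of the original solution).
def vant_hoff_K(K_ref, T_ref, T_new, dH_kJ, R_kJ=8.314e-3):
    return K_ref * np.exp(-dH_kJ / R_kJ * (1.0 / T_new - 1.0 / T_ref))
for T_check in (273.0, 298.0, 323.0):
    print('K(%.0f K) = %.3f' % (T_check, vant_hoff_K(2.926, 298.0, T_check, dH)))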
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Grab the min and max submission dates for filtering main_summary.
Step2: Load in main_summary, filtered to the min date of the experiment, and (7 * N_WEEKS) days beyond its completion (max_date) to allow for the specified n-week Retention Analysis. We then join main_summary with the experiment data.
Step3: Assign a branch label to each (client_id, submission_date) tuple
Step4: Calculate Retention Data
Step5: Write to s3 since this job is quite expensive and should only be run once.
Step6: Load processed Retention Data
| <ASSISTANT_TASK:>
Python Code:
S3_PATH = "s3://net-mozaws-prod-us-west-2-pipeline-analysis/taarv2/cleaned_data/"
# Select essential columns.
clean_data = sqlContext.read.parquet(S3_PATH).select('client_id', 'locale', 'branch', 'submission_date_s3')
# Display number of rows per branch.
clean_data.groupBy('branch').count().collect()
min_date = clean_data.select(F.min('submission_date_s3').alias('min_d')).collect()[0].min_d
max_date = clean_data.select(F.max('submission_date_s3').alias('max_d')).collect()[0].max_d
print("min date: " + str(min_date))
print("max date: " + str(max_date))
# Get distinct client_ids that were observed in the TAAR experiment.
ensemble_ids = clean_data.rdd.filter(lambda p: p['branch'] == 'ensemble-taar').map(lambda x: x['client_id']).distinct()
linear_ids = clean_data.rdd.filter(lambda p: p['branch'] == 'linear-taar').map(lambda x: x['client_id']).distinct()
control_ids = clean_data.rdd.filter(lambda p: p['branch'] == 'control').map(lambda x: x['client_id']).distinct()
# Reduce redundant Rows to a set of client_ids per branch observed in TAAR.
local_ensemble_ids = set(ensemble_ids.collect())
local_linear_ids = set(linear_ids.collect())
local_control_ids = set(control_ids.collect())
# Sanity check that there are no elements in the set intersection between branches.
print(set.intersection(*[local_ensemble_ids, local_linear_ids, local_control_ids]))
# Broadcast the sets of ids for fast filtering on Main Summary.
bc_ensemble_ids = sc.broadcast(local_ensemble_ids)
bc_linear_ids = sc.broadcast(local_linear_ids)
bc_control_ids = sc.broadcast(local_control_ids)
# print(len(local_ensemble_ids))
# print(len(local_linear_ids))
# print(len(local_control_ids))
ms = (
sqlContext.read.option("mergeSchema", True)
.parquet("s3://telemetry-parquet/main_summary/v4")
.filter("submission_date_s3 >= '{}'".format(min_date))
.filter("normalized_channel = 'release'")
.filter("app_name = 'Firefox'")
.select('client_id', 'active_addons', 'locale', 'subsession_start_date', 'submission_date', 'submission_date_s3')
)
# branches_col = ms.rdd.map(lambda p: (p['client_id'], count_addons(p['active_addons']), assign_branch(p['client_id']), p['submission_date_s3']))
branches_col = ms.withColumn("branch", assign_branch("client_id"))
branches_col = branches_col.filter(branches_col.branch != "None")
branches_col.take(1)
# Double group by and count distinct should leave us with a manageable Pandas DF containing:
# date string in %Y%m%d format (sortable), branch: {ensemble, linear, control} and distinct_client_count
# Everything we need for a day-over-day retention analysis for only clients observed in the TAAR study
# spanning from the earliest study date to latest available ping.
df_daily_grouped = branches_col.groupby("submission_date_s3", "branch")
retention_pd = df_daily_grouped.agg(F.countDistinct('client_id')).toPandas()
ret_df = retention_pd.sort('submission_date_s3', ascending=True)
ret_df.to_csv("taar_v2_retention-alternate.csv", index=False)
%%bash
aws s3 cp taar_v2_retention-alternate.csv s3://net-mozaws-prod-us-west-2-pipeline-analysis/taarv2/
%%bash
aws s3 cp s3://net-mozaws-prod-us-west-2-pipeline-analysis/taarv2/taar_v2_retention-alternate.csv .
ret = pd.read_csv("taar_v2_retention-alternate.csv")
plt.rcParams['figure.figsize'] = (12, 6)
fig, ax = plt.subplots()
for group, data in ret.groupby("branch"):
(data.sort_values("submission_date_s3")
.plot(x='submission_date_s3',
y='count(DISTINCT client_id)',
ax=ax,
label=group))
plt.ylabel("Retention")
plt.xlabel("submission date ")
plt.title("Day-over-day Retention by Branch")
plt.show()
ret
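# Sketch: normalise each branch's daily count by its first-day count to get a
# retention fraction, which can be easier to compare across branches.
first_day = ret.sort_values("submission_date_s3").groupby("branch")["count(DISTINCT client_id)"].first()
ret["retention_fraction"] = ret.apply(
    lambda row: row["count(DISTINCT client_id)"] / float(first_day[row["branch"]]), axis=1)
ret.head()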
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: obtained from the site <a href="http
Step2: The ElementTree (ET) module
Step3: To see the root element of the tree, we use
Step4: The root object, which is an Element, has the tag and attrib properties, the latter being a dictionary of its attributes.
Step5: To access each of the root element's child nodes, we iterate over them (they are also Elements)
Step6: Selecting the data
Step7: Note that we have one more layer of data
Step8: So we can, for example, explore the tree's grandchild nodes
Step9: Let us now transform the data into a DataFrame.
Step10: Since the library <a href="http
Step11: We first create a DataFrame, that is, a table, with the data we already have.
Step12: Note that this table contains data from 2007 and 2009. For simplicity, we will not use the 2007 data.
Step13: Obtaining IBGE codes for the municipalities
Step14: We can inspect the kind of table we have using the pandas head method.
Step15: Since not all of the data interests us, we will select only the columns "Nome_UF" (it may be useful to reference the federation state later), "Cod Municipio Completo" and "Nome_Município".
Step16: Next, we need to select from the full dadosMunicipioIBGE table the data for the municipalities present in tabelaInicial, which contains the computed IDEB values. To do this, we extract from both DataFrames the columns corresponding to the municipality codes (remembering that in dadosMunicipioIBGE the codes contain a check digit that will not be used)
Step17: Note that above we used the map method to convert the numeric data to strings and then drop the last digit.
Step18: And now we extract the corresponding rows from the dadosMunicipioIBGE table.
Step19: Finally, we create a new table (DataFrame) joining the name and the computed IDEB value from tabelaInicial.
Step20: The final table is
Step21: To finish
Step22: Next, we replace the indices of the dadosFinais table with the names of the listed municipalities, since we would like to plot the IDEB value per municipality.
Step23: Finally, since we want a chart of the IDEB per municipality, we will only use the data from the "Valor" column of dadosFinais (note that the result of this operation is a Series)
Step24: We are ready to make our chart.
Step25: Comments on generating the documents and the script
| <ASSISTANT_TASK:>
Python Code:
arquivo = "IDEB por Município Rede Federal Séries Finais (5ª a 8ª).xml"
import xml.etree.ElementTree as ET
tree = ET.parse(arquivo)
root = tree.getroot()
root.tag
root.attrib
for child in root:
print(child.tag, child.attrib)
valoresIDEB = root.find('valores')
valoresIDEB
valoresIDEB[0]
for child in valoresIDEB:
for grandchild in child:
print(grandchild.tag, grandchild.attrib)
data = []
for child in valoresIDEB:
data.append([float(child[0].text), child[1].text, child[2].text])
data
import pandas as pd
tabelaInicial = pd.DataFrame(data, columns = ["Valor", "Municipio", "Ano"])
tabelaInicial
tabelaInicial = tabelaInicial.loc[0:19]
dadosMunicipioIBGE = pd.read_excel("DTB_2014_Municipio.xls")
dadosMunicipioIBGE.head()
dadosMunicipioIBGE = dadosMunicipioIBGE[["Nome_UF", "Cod Municipio Completo", "Nome_Município"]]
listaMunicipiosInicial = tabelaInicial["Municipio"]
listaMunicipios = dadosMunicipioIBGE["Cod Municipio Completo"].map(lambda x: str(x)[0:6])
indicesMunicipios = listaMunicipios[~listaMunicipios.isin(listaMunicipiosInicial)]
new = dadosMunicipioIBGE.drop(indicesMunicipios.index).reset_index(drop=True)
dadosFinais = pd.concat([new, tabelaInicial], axis=1)
dadosFinais
import matplotlib.pyplot as plt
dadosFinais.set_index(["Nome_Município"], inplace=True)
dadosFinais["Valor"]
dadosFinais["Valor"].plot(kind='barh')
plt.title("IDEB por Município (Dados de 2009)")
%matplotlib inline
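# Optional sketch: persist the final table so the plot can be reproduced
# without re-parsing the XML (the filename here is illustrative).
dadosFinais.to_csv("ideb_municipios_2009.csv")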
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: KL and non-overlapping distributions
Step2: Approximation of the ratio using the f-GAN approach
Step3: Gradients
Step4: Wasserstein distance for the same two distributions
Step5: MMD computation
| <ASSISTANT_TASK:>
Python Code:
import jax
import random
import numpy as np
import jax.numpy as jnp
import seaborn as sns
import matplotlib.pyplot as plt
import scipy
!pip install -qq dm-haiku
!pip install -qq optax
try:
import haiku as hk
except ModuleNotFoundError:
%pip install -qq haiku
import haiku as hk
try:
import optax
except ModuleNotFoundError:
%pip install -qq optax
import optax
sns.set(rc={"lines.linewidth": 2.8}, font_scale=2)
sns.set_style("whitegrid")
import scipy.stats
from scipy.stats import truncnorm
from scipy.stats import beta
# We allow a displacement from 0 of the beta distribution.
class TranslatedBeta:
def __init__(self, a, b, expand_dims=False, displacement=0):
self._a = a
self._b = b
self.expand_dims = expand_dims
self.displacement = displacement
def rvs(self, size):
val = beta.rvs(self._a, self._b, size=size) + self.displacement
return np.expand_dims(val, axis=1) if self.expand_dims else val
def pdf(self, x):
return beta.pdf(x - self.displacement, self._a, self._b)
p_param1 = 3
p_param2 = 5
q_param1 = 2
q_param2 = 3
start_p = 0
start_r = 1
start_q = 2
p_dist = TranslatedBeta(p_param1, p_param2, displacement=start_p)
q_dist = TranslatedBeta(q_param1, q_param2, displacement=start_q)
r_dist = TranslatedBeta(q_param1, q_param2, displacement=start_r)
plt.figure(figsize=(14, 10))
p_x_samples = p_dist.rvs(size=15)
q_x_samples = q_dist.rvs(size=15)
p_linspace_x = np.linspace(start_p, start_p + 1, 100)
p_x_pdfs = p_dist.pdf(p_linspace_x)
q_linspace_x = np.linspace(start_q, start_q + 1, 100)
q_x_pdfs = q_dist.pdf(q_linspace_x)
plt.plot(p_linspace_x, p_x_pdfs, "b", label=r"$p_1(x)$")
plt.plot(p_x_samples, [0] * len(p_x_samples), "bo", ms=10)
plt.plot(q_linspace_x, q_x_pdfs, "r", label=r"$p_2(x)$")
plt.plot(q_x_samples, [0] * len(q_x_samples), "rd", ms=10)
plt.ylim(-0.5, 2.7)
plt.xlim(-0.2, 3.5)
plt.axis("off")
plt.legend()
plt.xticks([])
plt.yticks([])
plt.figure(figsize=(14, 8))
local_start_p = 0
local_start_r = 1.2
local_start_q = 2.4
local_p_dist = TranslatedBeta(p_param1, p_param2, displacement=local_start_p)
local_q_dist = TranslatedBeta(q_param1, q_param2, displacement=local_start_q)
local_r_dist = TranslatedBeta(q_param1, q_param2, displacement=local_start_r)
p_linspace_x = np.linspace(local_start_p, local_start_p + 1, 100)
q_linspace_x = np.linspace(local_start_q, local_start_q + 1, 100)
r_linspace_x = np.linspace(local_start_r, local_start_r + 1, 100)
p_x_pdfs = local_p_dist.pdf(p_linspace_x)
q_x_pdfs = local_q_dist.pdf(q_linspace_x)
r_x_pdfs = local_r_dist.pdf(r_linspace_x)
plt.plot(p_linspace_x, p_x_pdfs, "b")
plt.plot(q_linspace_x, q_x_pdfs, "r")
plt.plot(r_linspace_x, r_x_pdfs, "g")
num_samples = 15
plt.plot(local_p_dist.rvs(size=num_samples), [0] * num_samples, "bo", ms=10, label=r"$p^*$")
plt.plot(local_q_dist.rvs(size=num_samples), [0] * num_samples, "rd", ms=10, label=r"$q(\theta_1)$")
plt.plot(local_r_dist.rvs(size=num_samples), [0] * num_samples, "gd", ms=10, label=r"$q(\theta_2)$")
plt.ylim(-0.5, 2.7)
plt.xlim(-0.2, 3.5)
plt.axis("off")
plt.legend(framealpha=0)
plt.xticks([])
plt.yticks([])
model_transform = hk.without_apply_rng(
hk.transform(
lambda *args, **kwargs: hk.Sequential(
[hk.Linear(10), jax.nn.relu, hk.Linear(10), jax.nn.tanh, hk.Linear(40), hk.Linear(1)]
)(*args, **kwargs)
)
)
BATCH_SIZE = 100
NUM_UPDATES = 1000
dist1 = TranslatedBeta(p_param1, p_param2, expand_dims=True, displacement=start_p)
dist2 = TranslatedBeta(q_param1, q_param2, expand_dims=True, displacement=start_q)
@jax.jit
def estimate_kl(params, dist1_batch, dist2_batch):
dist1_logits = model_transform.apply(params, dist1_batch)
dist2_logits = model_transform.apply(params, dist2_batch)
return jnp.mean(dist1_logits - jnp.exp(dist2_logits - 1))
def update(params, opt_state, dist1_batch, dist2_batch):
model_loss = lambda *args: -estimate_kl(*args)
loss, grads = jax.value_and_grad(model_loss, has_aux=False)(params, dist1_batch, dist2_batch)
params_update, new_opt_state = optim.update(grads, opt_state, params)
new_params = optax.apply_updates(params, params_update)
return loss, new_params, new_opt_state
NUM_UPDATES = 200
rng = jax.random.PRNGKey(1)
init_model_params = model_transform.init(rng, dist1.rvs(BATCH_SIZE))
params = init_model_params
optim = optax.adam(learning_rate=0.0005, b1=0.9, b2=0.999)
opt_state = optim.init(init_model_params)
for i in range(NUM_UPDATES):
# Get a new batch of data
x = dist1.rvs(BATCH_SIZE)
y = dist2.rvs(BATCH_SIZE)
loss, params, opt_state = update(params, opt_state, x, y)
if i % 50 == 0:
print("Loss at {}".format(i))
print(loss)
plotting_x = np.expand_dims(np.linspace(-1.0, 3.5, 100), axis=1)
# TODO: how do you get the ratio values from the estimate - need to check the f-GAN paper
ratio_values = model_transform.apply(params, plotting_x)
# ratio_values = 1 + np.log(model_transform.apply(params, plotting_x))
plt.figure(figsize=(14, 8))
p_linspace_x = np.linspace(start_p, start_p + 1, 100)
q_linspace_x = np.linspace(start_q, start_q + 1, 100)
plt.plot(p_linspace_x, p_x_pdfs, "b", label=r"$p^*$")
plt.plot(p_x_samples, [0] * len(p_x_samples), color="b", marker=10, linestyle="None", ms=18)
plt.plot(q_linspace_x, q_x_pdfs, "g", label=r"$q(\theta)$")
plt.plot(q_x_samples, [0] * len(q_x_samples), color="g", marker=11, linestyle="None", ms=18)
x = np.linspace(-1, 3.5, 200)
ratio = p_dist.pdf(x) / q_dist.pdf(x)
plt.hlines(6.1, -0.6, start_q, linestyles="--", color="r")
plt.hlines(6.1, start_q + 1, 3.5, linestyles="--", color="r")
plt.text(3.4, 5.6, r"$\infty$")
plt.plot(x, ratio, "r", label=r"$\frac{p^*}{q(\theta)}$", linewidth=4)
plt.plot(
plotting_x, ratio_values[:, 0].T, color="darkgray", label=r"MLP approx to $\frac{p^*}{q(\theta)}$", linewidth=4
)
plt.ylim(-2.5, 8)
plt.xlim(-0.2, 3.5)
plt.axis("off")
plt.legend(loc="upper center", bbox_to_anchor=(0.35, 0.0, 0.25, 1.0), ncol=4, framealpha=0)
plt.xticks([])
plt.yticks([])
plt.figure(figsize=(14, 8))
grad_fn = jax.grad(lambda x: model_transform.apply(params, x)[0])
grad_values = jax.vmap(grad_fn)(plotting_x)
plt.figure(figsize=(14, 8))
p_linspace_x = np.linspace(start_p, start_p + 1, 100)
q_linspace_x = np.linspace(start_q, start_q + 1, 100)
plt.plot(p_linspace_x, p_x_pdfs, "b", label=r"$p^*$")
plt.plot(p_x_samples, [0] * len(p_x_samples), color="b", marker=10, linestyle="None", ms=18)
plt.plot(q_linspace_x, q_x_pdfs, "g", label=r"$q(\theta)$")
plt.plot(q_x_samples, [0] * len(q_x_samples), color="g", marker=11, linestyle="None", ms=18)
x = np.linspace(-1, 3.5, 200)
ratio = p_dist.pdf(x) / q_dist.pdf(x)
plt.hlines(5.8, -0.6, start_q, linestyles="--", color="r")
plt.hlines(5.8, start_q + 1, 3.5, linestyles="--", color="r")
plt.text(3.4, 5.4, r"$\infty$")
plt.plot(x, ratio, "r", label=r"$\frac{p^*}{q(\theta)}$", linewidth=4)
plt.plot(
plotting_x,
ratio_values[:, 0].T,
color="darkgray",
label=r"$f_{\phi}$ approximating $\frac{p^*}{q(\theta)}$",
linewidth=4,
)
plt.plot(plotting_x, grad_values[:, 0].T, color="orange", label=r"$\nabla_{x} f_{\phi}(x)$", linewidth=4, ls="-.")
plt.ylim(-2.5, 8)
plt.xlim(-0.2, 3.5)
plt.axis("off")
plt.legend(loc="upper center", bbox_to_anchor=(0.35, 0.0, 0.25, 1.0), ncol=4, framealpha=0)
plt.xticks([])
plt.yticks([])
from scipy.optimize import linprog
def get_W_witness_spectrum(p_samples, q_samples):
n = len(p_samples)
m = len(q_samples)
X = np.concatenate([p_samples, q_samples], axis=0)
## AG: repeat [-1/n] n times
c = np.array(n * [-1 / n] + m * [1 / m])
A_ub, b_ub = [], []
for i in range(n + m):
for j in range(n + m):
if i == j:
continue
z = np.zeros(n + m)
z[i] = 1
z[j] = -1
A_ub.append(z)
b_ub.append(np.abs(X[i] - X[j]))
## AG: Minimize: c^T * x
## Subject to: A_ub * x <= b_ub
res = linprog(c=c, A_ub=A_ub, b_ub=b_ub, method="simplex", options={"tol": 1e-5})
a = res["x"]
## AG: second argument xs to be passed into the internal
## function.
def witness_spectrum(x):
diff = np.abs(x - X[:, np.newaxis])
one = np.min(a[:, np.newaxis] + diff, axis=0)
two = np.max(a[:, np.newaxis] - diff, axis=0)
return one, two
return witness_spectrum
x = np.linspace(-1, 3.5, 100)
wass_estimate = get_W_witness_spectrum(p_x_samples + start_p, q_x_samples + start_q)(x)
wa, wb = wass_estimate
w = (wa + wb) / 2
w -= w.mean()
plt.figure(figsize=(14, 6))
display_offset = 0.8
plt.plot(p_linspace_x, display_offset + p_x_pdfs, "b", label=r"$p^*$")
plt.plot(p_x_samples, [display_offset] * len(p_x_samples), color="b", marker=10, linestyle="None", ms=18)
plt.plot(q_linspace_x, display_offset + q_x_pdfs, "g", label=r"$q(\theta)$")
plt.plot(q_x_samples, [display_offset] * len(q_x_samples), color="g", marker=11, linestyle="None", ms=18)
x = np.linspace(-1, 3.5, 100)
plt.plot(x, w + display_offset, "r", label=r"$f^{\star}$", linewidth=4)
plt.ylim(-2.5, 8)
plt.xlim(-0.2, 3.5)
plt.axis("off")
plt.legend(loc="upper center", bbox_to_anchor=(0.35, 0.0, 0.5, 1.34), ncol=3, framealpha=0)
plt.xticks([])
plt.yticks([])
def covariance(kernel_fn, X, Y):
num_rows = len(X)
num_cols = len(Y)
K = np.zeros((num_rows, num_cols))
for i in range(num_rows):
for j in range(num_cols):
K[i, j] = kernel_fn(X[i], Y[j])
return K
def gaussian_kernel(x1, x2, gauss_var=0.1, height=2.2):
return height * np.exp(-np.linalg.norm(x1 - x2) ** 2 / gauss_var)
def evaluate_mmd_critic(p_samples, q_samples):
n = p_samples.shape[0]
m = q_samples.shape[0]
p_cov = covariance(gaussian_kernel, p_samples, p_samples)
print("indices")
print(np.diag_indices(n))
p_samples_norm = np.sum(p_cov) - np.sum(p_cov[np.diag_indices(n)])
p_samples_norm /= n * (n - 1)
q_cov = covariance(gaussian_kernel, q_samples, q_samples)
q_samples_norm = np.sum(q_cov) - np.sum(q_cov[np.diag_indices(m)])
q_samples_norm /= m * (m - 1)
p_q_cov = covariance(gaussian_kernel, p_samples, q_samples)
p_q_norm = np.sum(p_q_cov)
p_q_norm /= n * m
norm = p_samples_norm + q_samples_norm - 2 * p_q_norm
def critic(x):
p_val = np.mean([gaussian_kernel(x, y) for y in p_samples])
q_val = np.mean([gaussian_kernel(x, y) for y in q_samples])
return (p_val - q_val) / norm
return critic
critic_fn = evaluate_mmd_critic(p_x_samples, q_x_samples)
plt.figure(figsize=(14, 6))
display_offset = 0
plt.plot(p_linspace_x, display_offset + p_x_pdfs, "b", label=r"$p^*$")
plt.plot(p_x_samples, [display_offset] * len(p_x_samples), color="b", marker=10, linestyle="None", ms=18)
plt.plot(q_linspace_x, display_offset + q_x_pdfs, "g", label=r"$q(\theta)$")
plt.plot(q_x_samples, [display_offset] * len(q_x_samples), color="g", marker=11, linestyle="None", ms=18)
x = np.linspace(-1, 3.5, 100)
plt.plot(
start_p + x, np.array([critic_fn(x_val) for x_val in x]) + display_offset, "r", label=r"$f^{\star}$", linewidth=4
)
plt.ylim(-2.5, 8)
plt.xlim(-0.2, 3.5)
plt.axis("off")
plt.legend(loc="upper center", bbox_to_anchor=(0.35, 0.0, 0.5, 1.34), ncol=3, framealpha=0)
plt.xticks([])
plt.yticks([])
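# Extra check (sketch): with equal sample sizes, the empirical 1-Wasserstein
# distance in 1-D is just the mean absolute difference between the sorted
# samples, which gives a number to compare against the LP-based witness above.
w1_empirical = np.mean(np.abs(np.sort(p_x_samples) - np.sort(q_x_samples)))
print("empirical W1 between the two sample sets:", w1_empirical)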
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
a = [np.array([1,2,3]),np.array([1,2,3]),np.array([1,2,3])]
def all_equal(iterator):
try:
iterator = iter(iterator)
first = next(iterator)
return all(np.array_equal(first, rest) for rest in iterator)
except StopIteration:
return True
result = all_equal(a)
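# Quick check (sketch): result is True for the list above, and the helper
# returns False as soon as one element differs.
print(result)
print(all_equal([np.array([1, 2, 3]), np.array([1, 2, 4])]))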
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If you already have an H2O cluster running that you'd like to connect to (for example, in a multi-node Hadoop environment), then you can specify the IP and port of that cluster as follows
Step2: Download EEG Data
Step3: Explore Data
Step4: Now let's take a look at the top of the frame
Step5: The first 14 columns are numeric values that represent EEG measurements from the headset. The "eyeDetection" column is the response. There is an additional column called "split" that was added (by me) in order to specify partitions of the data (so we can easily benchmark against other tools outside of H2O using the same splits). I randomly divided the dataset into three partitions
Step6: To select a subset of the columns to look at, typical Pandas indexing applies
Step7: Now let's select a single column, for example -- the response column, and look at the data more closely
Step8: It looks like a binary response, but let's validate that assumption
Step9: If you don't specify the column types when you import the file, H2O makes a guess at what your column types are. If there are 0's and 1's in a column, H2O will automatically parse that as numeric by default.
Step10: Now we can check that there are two levels in our response column
Step11: We can query the categorical "levels" as well ('0' and '1' stand for "eye open" and "eye closed") to see what they are
Step12: We may want to check if there are any missing values, so let's look for NAs in our dataset. For tree-based methods like GBM and RF, H2O handles missing feature values automatically, so it's not a problem if we are missing certain feature values. However, it is always a good idea to check to make sure that you are not missing any of the training labels.
Step13: The isna method doesn't directly answer the question, "Does the response column contain any NAs?", rather it returns a 0 if that cell is not missing (Is NA? FALSE == 0) and a 1 if it is missing (Is NA? TRUE == 1). So if there are no missing values, then summing over the whole column should produce a summand equal to 0.0. Let's take a look
Step14: Great, no missing labels.
Step15: The sum is still zero, so there are no missing values in any of the cells.
Step16: Ok, the data is not exactly evenly distributed between the two classes -- there are more 0's than 1's in the dataset. However, this level of imbalance shouldn't be much of an issue for the machine learning algos. (We will revisit this later in the modeling section below).
Step17: Split H2O Frame into a train and test set
Step18: Machine Learning in H2O
Step19: We first create a model object of class "H2OGradientBoostingEstimator". This does not actually do any training; it just sets the model up for training by specifying model parameters.
Step20: Specify the predictor set and response
Step21: Now that we have specified x and y, we can train the model
Step22: Inspect Model
Step23: Model Performance on a Test Set
Step24: Individual model performance metrics can be extracted using methods like auc and mse. In the case of binary classification, we may be most interested in evaluating test set Area Under the ROC Curve (AUC).
Step25: Cross-validated Performance
Step26: This time around, we will simply pull the training and cross-validation metrics out of the model. To do so, you use the auc method again, and you can specify train or xval as True to get the correct metric.
Step27: Grid Search
Step28: Define an "H2OGridSearch" object by specifying the algorithm (GBM) and the hyper parameters
Step29: An "H2OGridSearch" object also has a train method, which is used to train all the models in the grid.
Step30: Compare Models
Step31: The "best" model in terms of validation set AUC is listed first in auc_table.
Step32: The last thing we may want to do is generate predictions on the test set using the "best" model, and evaluate the test set AUC.
| <ASSISTANT_TASK:>
Python Code:
import h2o
# Start an H2O Cluster on your local machine
h2o.init()
# This will not actually do anything since it's a fake IP address
# h2o.init(ip="123.45.67.89", port=54321)
#csv_url = "http://www.stat.berkeley.edu/~ledell/data/eeg_eyestate_splits.csv"
csv_url = "https://h2o-public-test-data.s3.amazonaws.com/smalldata/eeg/eeg_eyestate_splits.csv"
data = h2o.import_file(csv_url)
data.shape
data.head()
data.columns
columns = ['AF3', 'eyeDetection', 'split']
data[columns].head()
y = 'eyeDetection'
data[y]
data[y].unique()
data[y] = data[y].asfactor()
data[y].nlevels()
data[y].levels()
data.isna()
data[y].isna()
data[y].isna().sum()
data.isna().sum()
data[y].table()
n = data.shape[0] # Total number of training samples
data[y].table()['Count']/n
train = data[data['split']=="train"]
train.shape
valid = data[data['split']=="valid"]
valid.shape
test = data[data['split']=="test"]
test.shape
# Import H2O GBM:
from h2o.estimators.gbm import H2OGradientBoostingEstimator
model = H2OGradientBoostingEstimator(distribution='bernoulli',
ntrees=100,
max_depth=4,
learn_rate=0.1)
x = list(train.columns)
x
del x[14:16] #Remove the 14th and 15th columns, 'eyeDetection' and 'split'
x
model.train(x=x, y=y, training_frame=train, validation_frame=valid)
print(model)
perf = model.model_performance(test)
print(perf.__class__)
perf.auc()
perf.mse()
cvmodel = H2OGradientBoostingEstimator(distribution='bernoulli',
ntrees=100,
max_depth=4,
learn_rate=0.1,
nfolds=5)
cvmodel.train(x=x, y=y, training_frame=data)
print(cvmodel.auc(train=True))
print(cvmodel.auc(xval=True))
ntrees_opt = [5,50,100]
max_depth_opt = [2,3,5]
learn_rate_opt = [0.1,0.2]
hyper_params = {'ntrees': ntrees_opt,
'max_depth': max_depth_opt,
'learn_rate': learn_rate_opt}
from h2o.grid.grid_search import H2OGridSearch
gs = H2OGridSearch(H2OGradientBoostingEstimator, hyper_params = hyper_params)
gs.train(x=x, y=y, training_frame=train, validation_frame=valid)
print(gs)
# print out the auc for all of the models
auc_table = gs.sort_by('auc(valid=True)',increasing=False)
print(auc_table)
best_model = h2o.get_model(auc_table['Model Id'][0])
best_model.auc()
best_perf = best_model.model_performance(test)
best_perf.auc()
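# Optional sketch: inspect which EEG channels the fitted model leans on most;
# varimp() is the H2O model API's variable-importance accessor.
best_model.varimp()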
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Bubble sort
Step2: We can see the essential features of Python used
Step4: Note
Step6: This gets rid of the need for a temporary variable.
Step7: Simplex Method
Step8: To access an entry we use square brackets as with lists
Step9: To access a complete row or column, we use slicing notation
Step10: To apply the simplex method, we have to remove the negative entries in row 0. These appear in columns 1 and 2. For column 1 the pivot in row 1 has magnitude $|-1/2| = 1/2$ and the pivot in row 2 has magnitude $|-1/1|=1$. So we choose row 1.
Step11: Now we repeat this on column 2, noting that we can only pivot on row 2
Step12: We read off the solution (noting that floating point representations mean we need care interpreting the results)
Step14: Let's turn that into a function.
Step15: Building the tableau
Step16: See how this compares to VBA.
Step17: This isn't very informative. However, we can define the string representation of our class using the __repr__ method
Step18: We can also define what it means to add two instances of our class
Step19: Going back to the simplex method, we want to define a class that contains the objective function and the constraints, a method to solve the problem, and a representation of the problem and solution.
Step20: Using libraries - pulp
Step21: This gives a "meaningful" title to the problem and says if we're going to maximize or minimize.
Step22: Defining the variables again gives them "meaningful" names, and specifies their lower and upper bounds, and whether the variable type is continuous or integer. We could ignore the latter two definitions as they take their default values.
Step23: Again we have given a "meaningful" name to the objective function we're maximizing.
Step24: If you want to save the problem at this stage, you can use problem.writeLP(<filename>), where the .lp extension is normally used.
Step25: The 1 just means it did it
Step26: As it's found a solution, we can print the objective function and the variables
Step27: Using pulp is far easier and more robust than coding our own, and will cover a much wider range of problems.
Step28: We will assume that the bus capacity is $85$ people, that $250$ people want to travel, that they are distributed at the $10$ stops following a discrete random distribution, and each wants to travel a number of stops that also follows a discrete random distribution (distributed between $1$ and the maximum number of stops they could travel).
Step30: And now that we know how to do it once, we can do it many times
Step31: We see that, as expected, it's the stops in the middle that fare worst. We can easily plot this
Step32: Exercise
Step33: A quick reminder of what the dataset contains
Step34: There are different types of iris, classified by the Name. Each individual flower observed has four measurements, given by the data. We want to use some of the data (the Sepal Length and Width, and the Petal Length and Width) to construct a model. The model will take an observation - these four numbers - and predict which flower type we have. We'll use the rest of the data to check how accurate our model is.
Step35: So we're trying to choose one of three types.
Step36: There's 150 observations, with a reasonable range of values.
Step37: We then want to split our data, and associated labels, into a training set (where we tell the classifier what the answer is) and a testing set (to check the accuracy of the model)
Step38: Here we have split the data set in two
Step39: We now have a model
Step40: We see from the first couple of entries that it's done ok, but that there are errors. As estimating the accuracy of a classification by comparing to test data is so standard, there's a function for that
Step41: So the result is very accurate on this simple dataset.
Step42: The accuracy of this classifier is not great. As the train_test_split function randomly selects its training and test data, the accuracy will change every time you run it, but it tends to be 60-70%. Let's try excluding the pop data.
| <ASSISTANT_TASK:>
Python Code:
def bubblesort(unsorted):
    """Sorts an array using the bubble sort algorithm.

    Parameters
    ----------
    unsorted : list
        The unsorted list

    Returns
    -------
    sorted : list
        The sorted list (in place)
    """
last = len(unsorted)
# All Python lists start from 0
for i in range(last):
for j in range(i+1, last):
if unsorted[i] > unsorted[j]:
temp = unsorted[j]
unsorted[j] = unsorted[i]
unsorted[i] = temp
return unsorted
unsorted = [2, 4, 6, 0, 1, 3, 5]
print(bubblesort(unsorted))
unsorted = [2, 4, 6, 0, 1, 3, 5]
print(sorted(unsorted))
def bubblesort(unsorted):
    """Sorts an array using the bubble sort algorithm.

    Parameters
    ----------
    unsorted : list
        The unsorted list

    Returns
    -------
    sorted : list
        The sorted list (in place)
    """
last = len(unsorted)
# All Python lists start from 0
for i in range(last):
for j in range(i+1, last):
if unsorted[i] > unsorted[j]:
unsorted[j], unsorted[i] = unsorted[i], unsorted[j]
return unsorted
unsorted = [2, 4, 6, 0, 1, 3, 5]
print(bubblesort(unsorted))
def countingsort(unsorted):
    """Sorts an array using the counting sort algorithm.

    Parameters
    ----------
    unsorted : list
        The unsorted list

    Returns
    -------
    sorted : list
        The sorted list (in place)
    """
# Allocate the counts array
min_value = min(unsorted)
max_value = max(unsorted)
# This creates a list of the right length, but the entries are not zero, so reset
counts = list(range(min_value, max_value+1))
for i in range(len(counts)):
counts[i] = 0
# Count the values
last = len(unsorted)
for i in range(last):
counts[unsorted[i]] += 1
# Write the items back into the list array
next_index = 0
for i in range(min_value, max_value+1):
for j in range(counts[i]):
unsorted[next_index] = i
next_index += 1
return unsorted
unsorted = [2, 4, 6, 0, 1, 3, 5]
print(countingsort(unsorted))
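# Quick sanity check (a sketch): both sorting functions should agree with the
# built-in sorted(). Note countingsort as written assumes the minimum value is 0,
# so we make sure a 0 is present in the random test list.
import random
random_list = [0] + [random.randint(0, 9) for _ in range(19)]
assert bubblesort(list(random_list)) == sorted(random_list)
assert countingsort(list(random_list)) == sorted(random_list)
print("both sorts agree with sorted()")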
import numpy
tableau = numpy.array([ [1, -1, -1, 0, 0, 0],
[0, 2, 1, 1, 0, 4],
[0, 1, 2, 0, 1, 3] ], dtype=numpy.float64)
print(tableau)
print(tableau[0, 0])
print(tableau[1, 2])
row = 2
column = 5
print(tableau[row, column])
print(tableau[row, :])
print(tableau[:, column])
column = 1
pivot_row = 1
# Rescale pivot row
tableau[pivot_row, :] /= tableau[pivot_row, column]
# Remove all entries in columns except the pivot
pivot0 = tableau[0, column] / tableau[pivot_row, column]
tableau[0, :] -= pivot0 * tableau[pivot_row, :]
pivot2 = tableau[2, column] / tableau[pivot_row, column]
tableau[2, :] -= pivot2 * tableau[pivot_row, :]
print(tableau)
column = 2
pivot_row = 2
# Rescale pivot row
tableau[pivot_row, :] /= tableau[pivot_row, column]
# Remove all entries in columns except the pivot
pivot0 = tableau[0, column] / tableau[pivot_row, column]
tableau[0, :] -= pivot0 * tableau[pivot_row, :]
pivot1 = tableau[1, column] / tableau[pivot_row, column]
tableau[1, :] -= pivot1 * tableau[pivot_row, :]
print(tableau)
print("z =", tableau[0, -1])
print("x_1 =", tableau[1, -1])
print("x_2 =", tableau[2, -1])
def simplex(tableau):
    """Assuming a standard form tableau, find the solution."""
nvars = tableau.shape[1] - tableau.shape[0] - 1
for column in range(1, nvars+2):
if tableau[0, column] < 0:
pivot_row = numpy.argmin(numpy.abs(tableau[0, column] / tableau[1:, column])) + 1
# Rescale pivot row
tableau[pivot_row, :] /= tableau[pivot_row, column]
# Remove all entries in columns except the pivot
for row in range(0, pivot_row):
pivot = tableau[row, column] / tableau[pivot_row, column]
tableau[row, :] -= pivot * tableau[pivot_row, :]
for row in range(pivot_row+1, tableau.shape[0]):
pivot = tableau[row, column] / tableau[pivot_row, column]
tableau[row, :] -= pivot * tableau[pivot_row, :]
z = tableau[0, -1]
x = tableau[1:nvars+1, -1]
return z, x
tableau = numpy.array([ [1, -1, -1, 0, 0, 0],
[0, 2, 1, 1, 0, 4],
[0, 1, 2, 0, 1, 3] ], dtype=numpy.float64)
z, x = simplex(tableau)
print("z =", z)
print("x =", x)
class Student(object):
def __init__(self, name):
self.name = name
def print_name(self):
print("Hello", self.name)
s1 = Student("Christine Carpenter")
print(s1.name)
s2 = Student("Jörg Fliege")
s2.print_name()
print(s1)
print(s2)
class Student(object):
def __init__(self, name):
self.name = name
def __repr__(self):
return self.name
s1 = Student("Christine Carpenter")
s2 = Student("Jörg Fliege")
print(s1)
print(s2)
class Student(object):
def __init__(self, name):
self.name = name
def __repr__(self):
return self.name
def __add__(self, other):
return Student(self.name + " and " + other.name)
s1 = Student("Christine Carpenter")
s2 = Student("Jörg Fliege")
print(s1 + s2)
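# A further illustration (sketch): other special methods can be added in the
# same way, e.g. equality based on the stored name.
class ComparableStudent(Student):
    def __eq__(self, other):
        return self.name == other.name
print(ComparableStudent("Ada Lovelace") == ComparableStudent("Ada Lovelace"))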
class Constraint(object):
def __init__(self, coefficients, value):
self.coefficients = numpy.array(coefficients)
self.value = value
def __repr__(self):
string = ""
for i in range(len(self.coefficients)-1):
string += str(self.coefficients[i]) + " x_{}".format(i+1) + " + "
string += str(self.coefficients[-1]) + " x_{}".format(len(self.coefficients))
string += " \le "
string += str(self.value)
return string
c1 = Constraint([2, 1], 4)
c2 = Constraint([1, 2], 3)
print(c1)
print(c2)
class Linearprog(object):
def __init__(self, objective, constraints):
self.objective = numpy.array(objective)
self.nvars = len(self.objective)
self.constraints = constraints
self.nconstraints = len(self.constraints)
self.tableau = numpy.zeros((1+self.nconstraints, 2+self.nvars+self.nconstraints))
self.tableau[0, 0] = 1.0
self.tableau[0, 1:1+self.nvars] = -self.objective
for nc, c in enumerate(self.constraints):
self.tableau[1+nc, 1:1+self.nvars] = c.coefficients
self.tableau[1+nc, 1+self.nvars+nc] = 1.0
self.tableau[1+nc, -1] = c.value
self.z, self.x = self.simplex()
def simplex(self):
for column in range(1, self.nvars+2):
if self.tableau[0, column] < 0:
pivot_row = numpy.argmin(numpy.abs(self.tableau[0, column] / self.tableau[1:, column])) + 1
# Rescale pivot row
self.tableau[pivot_row, :] /= self.tableau[pivot_row, column]
# Remove all entries in columns except the pivot
for row in range(0, pivot_row):
pivot = self.tableau[row, column] / self.tableau[pivot_row, column]
self.tableau[row, :] -= pivot * self.tableau[pivot_row, :]
for row in range(pivot_row+1, self.tableau.shape[0]):
pivot = self.tableau[row, column] / self.tableau[pivot_row, column]
self.tableau[row, :] -= pivot * self.tableau[pivot_row, :]
z = self.tableau[0, -1]
x = self.tableau[1:self.nvars+1, -1]
return z, x
def __repr__(self):
string = "max "
for i in range(len(self.objective)-1):
string += str(self.objective[i]) + " x_{}".format(i+1) + " + "
string += str(self.objective[-1]) + " x_{}".format(len(self.objective))
string += "\n\nwith constraints\n"
for c in self.constraints:
string += "\n"
string += c.__repr__()
string += "\n\n"
string += "Solution has objective function maximum of " + str(self.z)
string += "\n\n"
string += "at location x = " + str(self.x)
return string
problem = Linearprog([1, 1], [c1, c2])
print(problem)
import pulp
problem = pulp.LpProblem("Simple problem", pulp.LpMaximize)
x1 = pulp.LpVariable("x_1", lowBound=0, upBound=None, cat='continuous')
x2 = pulp.LpVariable("x_2", lowBound=0, upBound=None, cat='continuous')
objective = x1 + x2, "Objective function to maximize"
problem += objective
c1 = 2 * x1 + x2 <= 4, "First constraint"
c2 = x1 + 2 * x2 <= 3, "Second constraint"
problem += c1
problem += c2
problem.solve()
print("Status:", pulp.LpStatus[problem.status])
print("Maximized objective function = ", pulp.value(problem.objective))
for v in problem.variables():
print(v.name, "=", v.varValue)
bus_stops = ["Airport Parkway Station",
"Wessex Lane",
"Highfield Interchange",
"Portswood Broadway",
"The Avenue Archers Road",
"Civic Centre",
"Central Station",
"West Quay",
"Town Quay",
"NOCS"]
import numpy
capacity = 85
n_people = 250
total_stops = len(bus_stops)
initial_stops = numpy.random.randint(0, total_stops-1, n_people)
n_stops = numpy.zeros_like(initial_stops)
n_onboard = numpy.zeros((total_stops,), dtype=numpy.int)
n_left_behind = numpy.zeros_like(n_onboard)
for i in range(total_stops):
if i == total_stops - 1: # Can only take one stop
n_stops[initial_stops == i] = 1
else:
n_people_at_stop = len(initial_stops[initial_stops == i])
n_stops[initial_stops == i] = numpy.random.randint(1, total_stops-i, n_people_at_stop)
for i in range(total_stops):
n_people_at_stop = len(initial_stops[initial_stops == i])
n_people_getting_on = max([0, min([n_people_at_stop, capacity - n_onboard[i]])])
n_left_behind[i] = max([n_people_at_stop - n_people_getting_on, 0])
for fill_stops in n_stops[initial_stops == i][:n_people_getting_on]:
n_onboard[i:i+fill_stops] += 1
print(n_left_behind)
print(n_onboard)
def mc_unilink(n_people, n_runs = 10000):
Given n_people wanting to ride the U1, use Monte Carlo to see how many are left behind on average at each stop.
Parameters
----------
n_people : int
Total number of people wanting to use the bus
n_runs : int
Number of realizations
Returns
-------
n_left_behind_average : array of float
Average number of people left behind at each stop
bus_stops = ["Airport Parkway Station",
"Wessex Lane",
"Highfield Interchange",
"Portswood Broadway",
"The Avenue Archers Road",
"Civic Centre",
"Central Station",
"West Quay",
"Town Quay",
"NOCS"]
total_stops = len(bus_stops)
capacity = 85
n_left_behind = numpy.zeros((total_stops, n_runs), dtype = numpy.int)
for run in range(n_runs):
initial_stops = numpy.random.randint(0, total_stops-1, n_people)
n_stops = numpy.zeros_like(initial_stops)
n_onboard = numpy.zeros((total_stops,), dtype=numpy.int)
for i in range(total_stops):
if i == total_stops - 1: # Can only take one stop
n_stops[initial_stops == i] = 1
else:
n_people_at_stop = len(initial_stops[initial_stops == i])
n_stops[initial_stops == i] = numpy.random.randint(1, total_stops-i, n_people_at_stop)
for i in range(total_stops):
n_people_at_stop = len(initial_stops[initial_stops == i])
n_people_getting_on = max([0, min([n_people_at_stop, capacity - n_onboard[i]])])
n_left_behind[i, run] = max([n_people_at_stop - n_people_getting_on, 0])
for fill_stops in n_stops[initial_stops == i][:n_people_getting_on]:
n_onboard[i:i+fill_stops] += 1
return numpy.mean(n_left_behind, axis=1)
n_left_behind_average = mc_unilink(250, 10000)
n_left_behind_average
%matplotlib inline
from matplotlib import pyplot
x = list(range(len(n_left_behind_average)))
pyplot.bar(x, n_left_behind_average)
pyplot.xticks(x, bus_stops, rotation='vertical')
pyplot.ylabel("Average # passengers unable to board")
pyplot.show()
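# Added rough convergence check: the Monte Carlo estimate for a single stop
# should stabilise as the number of runs grows.
for runs in (100, 1000):
    estimate = mc_unilink(250, runs)
    print(runs, "runs -> mean left behind at", bus_stops[2], ":", estimate[2])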
import numpy
import pandas
import sklearn
iris = pandas.read_csv('https://raw.githubusercontent.com/pandas-dev/pandas/master/pandas/tests/data/iris.csv')
iris.head()
iris['Name'].unique()
iris.describe()
labels = iris['Name']
data = iris.drop('Name', axis=1)
from sklearn.model_selection import train_test_split
data_train, data_test, labels_train, labels_test = train_test_split(data, labels, test_size = 0.5)
from sklearn import tree
classifier = tree.DecisionTreeClassifier()
classifier.fit(data_train, labels_train)
print(labels_test)
print(classifier.predict(data_test))
from sklearn.metrics import accuracy_score
accuracy = accuracy_score(labels_test, classifier.predict(data_test))
print("Decision Tree Accuracy with 50/50: {}".format(accuracy))
from sklearn.model_selection import train_test_split
from sklearn import tree
from sklearn.metrics import accuracy_score
dfs = {'indie': pandas.read_csv('spotify_data/indie.csv'), 'pop': pandas.read_csv('spotify_data/pop.csv'),
'country': pandas.read_csv('spotify_data/country.csv'), 'metal': pandas.read_csv('spotify_data/metal.csv'),
'house': pandas.read_csv('spotify_data/house.csv'), 'rap': pandas.read_csv('spotify_data/rap.csv')}
for genre, df in dfs.items():
df['genre'] = genre
dat = pandas.concat(dfs.values())
# define a list of the fields we want to use to train our classifier
columns = ['duration_ms', 'explicit', 'popularity', 'acousticness', 'danceability',
'energy', 'instrumentalness', 'key', 'liveness', 'loudness',
'mode', 'speechiness', 'tempo', 'time_signature', 'valence', 'genre']
# define data as all columns but the genre column
data = dat[columns].drop('genre', axis=1)
# define labels as the genre column
labels = dat[columns].genre
# split the data into a training set and a testing set
data_train, data_test, labels_train, labels_test = train_test_split(data, labels, test_size = 0.3)
# create the classifier
classifier = tree.DecisionTreeClassifier()
# train the classifier using the training data
classifier.fit(data_train, labels_train)
# calculate the accuracy of the classifier using the testing data
accuracy = accuracy_score(labels_test, classifier.predict(data_test))
print("Decision Tree Accuracy with 50/50: {}".format(accuracy))
nopop_dat = dat[dat.genre != 'pop']
# define data as all columns but the genre column
data = nopop_dat[columns].drop('genre', axis=1)
# define labels as the genre column
labels = nopop_dat[columns].genre
data_train, data_test, labels_train, labels_test = train_test_split(data, labels, test_size = 0.1)
classifier = tree.DecisionTreeClassifier()
classifier.fit(data_train, labels_train)
accuracy = accuracy_score(labels_test, classifier.predict(data_test))
print("Decision Tree Accuracy with 50/50: {}".format(accuracy))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Survival analysis
Step3: The survival function is just the complementary CDF.
Step4: Here's the CDF and SF.
Step5: And here's the hazard function.
Step6: Age at first marriage
Step7: We have to clean up a few variables.
Step8: And the extract the age at first marriage for people who are married, and the age at time of interview for people who are not.
Step10: The following function uses Kaplan-Meier to estimate the hazard function.
Step11: Here is the hazard function and corresponding survival function.
Step14: Quantifying uncertainty
Step15: The following plot shows the survival function based on the raw data and a 90% CI based on resampling.
Step16: The SF based on the raw data falls outside the 90% CI because the CI is based on weighted resampling, and the raw data is not. You can confirm that by replacing ResampleRowsWeighted with ResampleRows in ResampleSurvival.
Step20: The following is the code from survival.py that generates SFs broken down by decade of birth.
Step21: Here are the results for the combined data.
Step23: We can generate predictions by assuming that the hazard function of each generation will be the same as for the previous generation.
Step24: And here's what that looks like.
Step25: Remaining lifetime
Step26: Here's the expected remaining duration of a pregnancy as a function of the number of weeks elapsed. After week 36, the process becomes "memoryless".
Step27: And here's the median remaining time until first marriage as a function of age.
Step29: Exercises
| <ASSISTANT_TASK:>
Python Code:
from os.path import basename, exists
def download(url):
filename = basename(url)
if not exists(filename):
from urllib.request import urlretrieve
local, _ = urlretrieve(url, filename)
print("Downloaded " + local)
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkstats2.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/thinkplot.py")
import thinkstats2
import thinkplot
import numpy as np
import pandas as pd
try:
import empiricaldist
except ImportError:
!pip install empiricaldist
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/nsfg.py")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dct")
download(
"https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemPreg.dat.gz"
)
import nsfg
preg = nsfg.ReadFemPreg()
complete = preg.query("outcome in [1, 3, 4]").prglngth
cdf = thinkstats2.Cdf(complete, label="cdf")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/survival.py")
import survival
def MakeSurvivalFromCdf(cdf, label=""):
Makes a survival function based on a CDF.
cdf: Cdf
returns: SurvivalFunction
ts = cdf.xs
ss = 1 - cdf.ps
return survival.SurvivalFunction(ts, ss, label)
sf = MakeSurvivalFromCdf(cdf, label="survival")
print(cdf[13])
print(sf[13])
thinkplot.Plot(sf)
thinkplot.Cdf(cdf, alpha=0.2)
thinkplot.Config(loc="center left")
hf = sf.MakeHazardFunction(label="hazard")
print(hf[39])
thinkplot.Plot(hf)
thinkplot.Config(ylim=[0, 0.75], loc="upper left")
download("https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemResp.dct")
download(
"https://github.com/AllenDowney/ThinkStats2/raw/master/code/2002FemResp.dat.gz"
)
resp6 = nsfg.ReadFemResp()
resp6.cmmarrhx.replace([9997, 9998, 9999], np.nan, inplace=True)
resp6["agemarry"] = (resp6.cmmarrhx - resp6.cmbirth) / 12.0
resp6["age"] = (resp6.cmintvw - resp6.cmbirth) / 12.0
complete = resp6[resp6.evrmarry == 1].agemarry.dropna()
ongoing = resp6[resp6.evrmarry == 0].age
from collections import Counter
def EstimateHazardFunction(complete, ongoing, label="", verbose=False):
Estimates the hazard function by Kaplan-Meier.
http://en.wikipedia.org/wiki/Kaplan%E2%80%93Meier_estimator
complete: list of complete lifetimes
ongoing: list of ongoing lifetimes
label: string
verbose: whether to display intermediate results
if np.sum(np.isnan(complete)):
raise ValueError("complete contains NaNs")
if np.sum(np.isnan(ongoing)):
raise ValueError("ongoing contains NaNs")
hist_complete = Counter(complete)
hist_ongoing = Counter(ongoing)
ts = list(hist_complete | hist_ongoing)
ts.sort()
at_risk = len(complete) + len(ongoing)
lams = pd.Series(index=ts, dtype=float)
for t in ts:
ended = hist_complete[t]
censored = hist_ongoing[t]
lams[t] = ended / at_risk
if verbose:
print(t, at_risk, ended, censored, lams[t])
at_risk -= ended + censored
return survival.HazardFunction(lams, label=label)
hf = EstimateHazardFunction(complete, ongoing)
thinkplot.Plot(hf)
thinkplot.Config(xlabel="Age (years)", ylabel="Hazard")
sf = hf.MakeSurvival()
thinkplot.Plot(sf)
thinkplot.Config(xlabel="Age (years)", ylabel="Prob unmarried", ylim=[0, 1])
def EstimateMarriageSurvival(resp):
Estimates the survival curve.
resp: DataFrame of respondents
returns: pair of HazardFunction, SurvivalFunction
# NOTE: Filling missing values would be better than dropping them.
complete = resp[resp.evrmarry == 1].agemarry.dropna()
ongoing = resp[resp.evrmarry == 0].age
hf = EstimateHazardFunction(complete, ongoing)
sf = hf.MakeSurvival()
return hf, sf
def ResampleSurvival(resp, iters=101):
Resamples respondents and estimates the survival function.
resp: DataFrame of respondents
iters: number of resamples
_, sf = EstimateMarriageSurvival(resp)
thinkplot.Plot(sf)
low, high = resp.agemarry.min(), resp.agemarry.max()
ts = np.arange(low, high, 1 / 12.0)
ss_seq = []
for _ in range(iters):
sample = thinkstats2.ResampleRowsWeighted(resp)
_, sf = EstimateMarriageSurvival(sample)
ss_seq.append(sf.Probs(ts))
low, high = thinkstats2.PercentileRows(ss_seq, [5, 95])
thinkplot.FillBetween(ts, low, high, color="gray", label="90% CI")
ResampleSurvival(resp6)
thinkplot.Config(
xlabel="Age (years)",
ylabel="Prob unmarried",
xlim=[12, 46],
ylim=[0, 1],
loc="upper right",
)
download(
"https://github.com/AllenDowney/ThinkStats2/raw/master/code/1995FemRespData.dat.gz"
)
download(
"https://github.com/AllenDowney/ThinkStats2/raw/master/code/2006_2010_FemRespSetup.dct"
)
download(
"https://github.com/AllenDowney/ThinkStats2/raw/master/code/2006_2010_FemResp.dat.gz"
)
resp5 = survival.ReadFemResp1995()
resp6 = survival.ReadFemResp2002()
resp7 = survival.ReadFemResp2010()
resps = [resp5, resp6, resp7]
def AddLabelsByDecade(groups, **options):
Draws fake points in order to add labels to the legend.
groups: GroupBy object
thinkplot.PrePlot(len(groups))
for name, _ in groups:
label = "%d0s" % name
thinkplot.Plot([15], [1], label=label, **options)
def EstimateMarriageSurvivalByDecade(groups, **options):
Groups respondents by decade and plots survival curves.
groups: GroupBy object
thinkplot.PrePlot(len(groups))
for _, group in groups:
_, sf = EstimateMarriageSurvival(group)
thinkplot.Plot(sf, **options)
def PlotResampledByDecade(resps, iters=11, predict_flag=False, omit=None):
Plots survival curves for resampled data.
resps: list of DataFrames
iters: number of resamples to plot
predict_flag: whether to also plot predictions
for i in range(iters):
samples = [thinkstats2.ResampleRowsWeighted(resp) for resp in resps]
sample = pd.concat(samples, ignore_index=True)
groups = sample.groupby("decade")
if omit:
groups = [(name, group) for name, group in groups if name not in omit]
# TODO: refactor this to collect resampled estimates and
# plot shaded areas
if i == 0:
AddLabelsByDecade(groups, alpha=0.7)
if predict_flag:
PlotPredictionsByDecade(groups, alpha=0.1)
EstimateMarriageSurvivalByDecade(groups, alpha=0.1)
else:
EstimateMarriageSurvivalByDecade(groups, alpha=0.2)
PlotResampledByDecade(resps)
thinkplot.Config(
xlabel="Age (years)", ylabel="Prob unmarried", xlim=[13, 45], ylim=[0, 1]
)
def PlotPredictionsByDecade(groups, **options):
Groups respondents by decade and plots survival curves.
groups: GroupBy object
hfs = []
for _, group in groups:
hf, sf = EstimateMarriageSurvival(group)
hfs.append(hf)
thinkplot.PrePlot(len(hfs))
for i, hf in enumerate(hfs):
if i > 0:
hf.Extend(hfs[i - 1])
sf = hf.MakeSurvival()
thinkplot.Plot(sf, **options)
PlotResampledByDecade(resps, predict_flag=True)
thinkplot.Config(
xlabel="Age (years)", ylabel="Prob unmarried", xlim=[13, 45], ylim=[0, 1]
)
preg = nsfg.ReadFemPreg()
complete = preg.query("outcome in [1, 3, 4]").prglngth
print("Number of complete pregnancies", len(complete))
ongoing = preg[preg.outcome == 6].prglngth
print("Number of ongoing pregnancies", len(ongoing))
hf = EstimateHazardFunction(complete, ongoing)
sf1 = hf.MakeSurvival()
rem_life1 = sf1.RemainingLifetime()
thinkplot.Plot(rem_life1)
thinkplot.Config(
title="Remaining pregnancy length", xlabel="Weeks", ylabel="Mean remaining weeks"
)
hf, sf2 = EstimateMarriageSurvival(resp6)
func = lambda pmf: pmf.Percentile(50)
rem_life2 = sf2.RemainingLifetime(filler=np.inf, func=func)
thinkplot.Plot(rem_life2)
thinkplot.Config(
title="Years until first marriage",
ylim=[0, 15],
xlim=[11, 31],
xlabel="Age (years)",
ylabel="Median remaining years",
)
def CleanData(resp):
Cleans respondent data.
resp: DataFrame
resp.cmdivorcx.replace([9998, 9999], np.nan, inplace=True)
resp["notdivorced"] = resp.cmdivorcx.isnull().astype(int)
resp["duration"] = (resp.cmdivorcx - resp.cmmarrhx) / 12.0
resp["durationsofar"] = (resp.cmintvw - resp.cmmarrhx) / 12.0
month0 = pd.to_datetime("1899-12-15")
dates = [month0 + pd.DateOffset(months=cm) for cm in resp.cmbirth]
resp["decade"] = (pd.DatetimeIndex(dates).year - 1900) // 10
CleanData(resp6)
married6 = resp6[resp6.evrmarry == 1]
CleanData(resp7)
married7 = resp7[resp7.evrmarry == 1]
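# Added sketch for the exercise: estimate the "survival" of marriages (time to
# divorce) with the Kaplan-Meier estimator defined earlier in this notebook.
complete6 = married6[married6.notdivorced == 0].duration.dropna()
ongoing6 = married6[married6.notdivorced == 1].durationsofar.dropna()
hf_divorce = EstimateHazardFunction(complete6, ongoing6)
sf_divorce = hf_divorce.MakeSurvival()
thinkplot.Plot(sf_divorce)
thinkplot.Config(xlabel="Years married", ylabel="Prob still married", ylim=[0, 1])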
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Why NumPy?
Step2: Introduction
Step3: In the numpy package the terminology used for vectors, matrices and higher-dimensional data sets is array.
Step4: The v and M objects are both of the type ndarray that the numpy module provides.
Step5: The difference between the v and M arrays is only their shapes. We can get information about the shape of an array by using the ndarray.shape property.
Step6: The number of elements in the array is available through the ndarray.size property
Step7: Equivalently, we could use the function numpy.shape and numpy.size
Step8: So far the numpy.ndarray looks awfully much like a Python list (or nested list). Why not simply use Python lists for computations instead of creating a new array type?
Step9: We get an error if we try to assign a value of the wrong type to an element in a numpy array
Step10: If we want, we can explicitly define the type of the array data when we create it, using the dtype keyword argument
Step11: Common data types that can be used with dtype are
Step12: linspace and logspace
Step13: mgrid
Step14: random data
Step15: diag
Step16: zeros and ones
Step17: File I/O
Step18: Using numpy.savetxt we can store a Numpy array to a file in CSV format
Step19: Numpy's native file format
Step20: More properties of the numpy arrays
Step21: Manipulating arrays
Step22: If we omit an index of a multidimensional array it returns the whole row (or, in general, a N-1 dimensional array)
Step23: The same thing can be achieved with using
Step24: We can assign new values to elements in an array using indexing
Step25: Index slicing
Step26: Array slices are mutable
Step27: We can omit any of the three parameters in M[lower
Step28: Negative indices counts from the end of the array (positive index from the begining)
Step29: Index slicing works exactly the same way for multidimensional arrays
Step30: Fancy indexing
Step31: We can also use index masks
Step32: This feature is very useful to conditionally select elements from an array, using for example comparison operators
Step33: Functions for extracting data from arrays and creating arrays
Step34: diag
Step35: take
Step36: But take also works on lists and other objects
Step37: choose
Step38: Linear algebra
Step39: Element-wise array-array operations
Step40: If we multiply arrays with compatible shapes, we get an element-wise multiplication of each row
Step41: Matrix algebra
Step42: Python 3 has a new operator for using infix notation with matrix multiplication.
Step43: Alternatively, we can cast the array objects to the type matrix. This changes the behavior of the standard arithmetic operators +, -, * to use matrix algebra.
Step44: If we try to add, subtract or multiply objects with incompatible shapes we get an error
Step45: See also the related functions
Step46: Hermitian conjugate
Step47: We can extract the real and imaginary parts of complex-valued arrays using real and imag
Step48: Or the complex argument and absolute value
Step49: Matrix computations
Step50: Determinant
Step51: Data processing
Step52: mean
Step53: The daily mean temperature in Stockholm over the last 200 years has been about 6.2 C.
Step54: min and max
Step55: sum, prod, and trace
Step56: Computations on subsets of arrays
Step57: The dataformat is
Step58: With these tools we have very powerful data processing capabilities at our disposal. For example, to extract the average monthly average temperatures for each month of the year only takes a few lines of code
Step59: Calculations with higher-dimensional data
Step60: Many other functions and methods in the array and matrix classes accept the same (optional) axis keyword argument.
Step61: We can also use the function flatten to make a higher-dimensional array into a vector. But this function create a copy of the data.
Step62: Adding a new dimension
Step63: Stacking and repeating arrays
Step64: concatenate
Step65: hstack and vstack
Step66: Copy and "deep copy"
Step67: If we want to avoid this behavior, so that we get a new completely independent object B copied from A, we need to do a so-called "deep copy" using the function copy
Step68: Iterating over array elements
Step69: When we need to iterate over each element of an array and modify its elements, it is convenient to use the enumerate function to obtain both the element and its index in the for loop
Step71: Vectorizing functions
Step72: OK, that didn't work because we didn't write the Theta function so that it can handle a vector input...
Step74: We can also implement the function to accept a vector input from the beginning (requires more effort but might give better performance)
Step75: Using arrays in conditions
Step76: Type casting
Step77: Further reading
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import traceback
import matplotlib.pyplot as plt
import numpy as np
%%time
total = 0
for i in range(100000):
total += i
%%time
total = np.arange(100000).sum()
%%time
l = list(range(0, 1000000))
ltimes5 = [x * 5 for x in l]
%%time
l = np.arange(1000000)
ltimes5 = l * 5
import numpy as np
# a vector: the argument to the array function is a Python list
v = np.array([1,2,3,4])
v
# a matrix: the argument to the array function is a nested Python list
M = np.array([[1, 2], [3, 4]])
M
type(v), type(M)
v.shape
M.shape
M.size
np.shape(M)
np.size(M)
M.dtype
try:
M[0,0] = "hello"
except ValueError as e:
print(traceback.format_exc())
M = np.array([[1, 2], [3, 4]], dtype=complex)
M
# create a range
x = np.arange(0, 10, 1) # arguments: start, stop, step
x
x = np.arange(-1, 1, 0.1)
x
# using linspace, both end points ARE included
np.linspace(0, 10, 25)
np.logspace(0, 10, 10, base=np.e)
x, y = np.mgrid[0:5, 0:5] # similar to meshgrid in MATLAB
x
y
# uniform random numbers in [0,1]
np.random.rand(5,5)
# standard normal distributed random numbers
np.random.randn(5,5)
# a diagonal matrix
np.diag([1,2,3])
# diagonal with offset from the main diagonal
np.diag([1,2,3], k=1)
np.zeros((3,3))
np.ones((3,3))
!head ../data/stockholm_td_adj.dat
data = np.genfromtxt('../data/stockholm_td_adj.dat')
data.shape
fig, ax = plt.subplots(figsize=(14,4))
ax.plot(data[:,0]+data[:,1]/12.0+data[:,2]/365, data[:,5])
ax.axis('tight')
ax.set_title('temperatures in Stockholm')
ax.set_xlabel('year')
ax.set_ylabel('temperature (C)');
M = np.random.rand(3,3)
M
np.savetxt("../data/random-matrix.csv", M)
!cat ../data/random-matrix.csv
np.savetxt("../data/random-matrix.csv", M, fmt='%.5f') # fmt specifies the format
!cat ../data/random-matrix.csv
np.save("../data/random-matrix.npy", M)
!file ../data/random-matrix.npy
np.load("../data/random-matrix.npy")
M.itemsize # bytes per element
M.nbytes # number of bytes
M.ndim # number of dimensions
# v is a vector, and has only one dimension, taking one index
v[0]
# M is a matrix, or a 2 dimensional array, taking two indices
M[1,1]
M
M[1]
M[1,:] # row 1
M[:,1] # column 1
M[0,0] = 1
M
# also works for rows and columns
M[1,:] = 0
M[:,2] = -1
M
A = np.array([1,2,3,4,5])
A
A[1:3]
A[1:3] = [-2,-3]
A
A[::] # lower, upper, step all take the default values
A[::2] # step is 2, lower and upper defaults to the beginning and end of the array
A[:3] # first three elements
A[3:] # elements from index 3
A = np.array([1,2,3,4,5])
A[-1] # the last element in the array
A[-3:] # the last three elements
A = np.array([[n+m*10 for n in range(5)] for m in range(5)])
A
# a block from the original array
A[1:4, 1:4]
# strides
A[::2, ::2]
row_indices = [1, 2, 3]
A[row_indices]
col_indices = [1, 2, -1] # remember, index -1 means the last element
A[row_indices, col_indices]
B = np.array([n for n in range(5)])
B
row_mask = np.array([True, False, True, False, False])
B[row_mask]
# same thing
row_mask = np.array([1,0,1,0,0], dtype=bool)
B[row_mask]
x = np.arange(0, 10, 0.5)
x
mask = (5 < x) * (x < 7.5)
mask
x[mask]
indices = np.where(mask)
indices
x[indices] # this indexing is equivalent to the fancy indexing x[mask]
np.diag(A)
np.diag(A, -1)
v2 = np.arange(-3,3)
v2
row_indices = [1, 3, 5]
v2[row_indices] # fancy indexing
v2.take(row_indices)
np.take([-3, -2, -1, 0, 1, 2], row_indices)
which = [1, 0, 1, 0]
choices = [[-2,-2,-2,-2], [5,5,5,5]]
np.choose(which, choices)
v1 = np.arange(0, 5)
v1 * 2
v1 + 2
A * 2, A + 2
A * A # element-wise multiplication
v1 * v1
A.shape, v1.shape
A * v1
np.dot(A, A)
A @ A
np.dot(A, v1)
np.dot(v1, v1)
M = np.matrix(A)
v = np.matrix(v1).T # make it a column vector
v
M * M
M * v
# inner product
v.T * v
# with matrix objects, standard matrix algebra applies
v + M*v
v = np.matrix([1,2,3,4,5,6]).T
M.shape, v.shape
import traceback
try:
M * v
except ValueError as e:
print(traceback.format_exc())
C = np.matrix([[1j, 2j], [3j, 4j]])
C
np.conjugate(C)
C.H
np.real(C) # same as: C.real
np.imag(C) # same as: C.imag
np.angle(C+1) # heads up MATLAB Users, angle is used instead of arg
abs(C)
np.linalg.inv(C) # equivalent to C.I
C.I * C
np.linalg.det(C)
np.linalg.det(C.I)
# reminder, the temperature dataset is stored in the data variable:
np.shape(data)
# the temperature data is in column 3
np.mean(data[:,3])
np.std(data[:,3]), np.var(data[:,3])
# lowest daily average temperature
data[:,3].min()
# highest daily average temperature
data[:,3].max()
d = np.arange(0, 10)
d
# sum up all elements
np.sum(d)
# product of all elements
np.prod(d+1)
# cumulative sum
np.cumsum(d)
# cumulative product
np.cumprod(d+1)
# same as: diag(A).sum()
np.trace(A)
!head -n 3 ../data/stockholm_td_adj.dat
np.unique(data[:,1]) # the month column takes values from 1 to 12
mask_feb = data[:,1] == 2
# the temperature data is in column 3
np.mean(data[mask_feb,3])
months = np.arange(1,13)
monthly_mean = [np.mean(data[data[:,1] == month, 3]) for month in months]
fig, ax = plt.subplots()
ax.bar(months, monthly_mean)
ax.set_xlabel("Month")
ax.set_ylabel("Monthly avg. temp.");
m = np.random.rand(3,3)
m
# global max
m.max()
# max in each column
m.max(axis=0)
# max in each row
m.max(axis=1)
A
n, m = A.shape
B = A.reshape((1,n*m))
B
B[0,0:5] = 5 # modify the array
B
A # and the original variable is also changed. B is only a different view of the same data
B = A.flatten()
B
B[0:5] = 10
B
A # now A has not changed, because B's data is a copy of A's, not referring to the same data
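# Added side note: ravel is similar, but returns a view when it can, so changes
# propagate back to A (flatten always copies).
A_view = A.ravel()
A_view[0:5] = 20
A  # modified, because A_view usually shares A's data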
v = np.array([1,2,3])
v.shape
# make a column matrix of the vector v
v[:, np.newaxis]
# column matrix
v[:, np.newaxis].shape
# row matrix
v[np.newaxis, :].shape
a = np.array([[1, 2], [3, 4]])
# repeat each element 3 times
np.repeat(a, 3)
# tile the matrix 3 times
np.tile(a, 3)
b = np.array([[5, 6]])
np.concatenate((a, b), axis=0)
np.concatenate((a, b.T), axis=1)
np.vstack((a,b))
np.hstack((a,b.T))
A = np.array([[1, 2], [3, 4]])
A
# now B is referring to the same array data as A
B = A
# changing B affects A
B[0,0] = 10
B
A
B = np.copy(A)
# now, if we modify B, A is not affected
B[0,0] = -5
B
A
v = np.array([1,2,3,4])
for element in v:
print(element)
M = np.array([[1,2], [3,4]])
for row in M:
print("row", row)
for element in row:
print(element)
for row_idx, row in enumerate(M):
print("row_idx", row_idx, "row", row)
for col_idx, element in enumerate(row):
print("col_idx", col_idx, "element", element)
# update the matrix M: square each element
M[row_idx, col_idx] = element ** 2
# each element in M is now squared
M
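# Added note: the explicit double loop above can be replaced by one vectorised
# expression, which NumPy evaluates element-wise in compiled code.
M2 = np.array([[1, 2], [3, 4]])
M2 ** 2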
def theta(x):
Scalar implementation of the Heaviside step function.
if x >= 0:
return 1
else:
return 0
try:
theta(np.array([-3,-2,-1,0,1,2,3]))
except Exception as e:
print(traceback.format_exc())
theta_vec = np.vectorize(theta)
%%time
theta_vec(np.array([-3,-2,-1,0,1,2,3]))
def theta(x):
Vector-aware implementation of the Heaviside step function.
return 1 * (x >= 0)
%%time
theta(np.array([-3,-2,-1,0,1,2,3]))
# still works for scalars as well
theta(-1.2), theta(2.6)
M
if (M > 5).any():
print("at least one element in M is larger than 5")
else:
print("no element in M is larger than 5")
if (M > 5).all():
print("all elements in M are larger than 5")
else:
print("all elements in M are not larger than 5")
M.dtype
M2 = M.astype(float)
M2
M2.dtype
M3 = M.astype(bool)
M3
%reload_ext version_information
%version_information numpy
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Low frequency drifts and line noise
Step2: we see high amplitude undulations in low frequencies, spanning across tens of
Step3: On MEG sensors we see narrow frequency peaks at 60, 120, 180, 240 Hz,
Step4: we can see typical time courses and non dipolar topographies
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import mne
from mne.datasets import sample
from mne.preprocessing import create_ecg_epochs, create_eog_epochs
# getting some data ready
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
raw = mne.io.read_raw_fif(raw_fname, preload=True)
(raw.copy().pick_types(meg='mag')
.del_proj(0)
.plot(duration=60, n_channels=100, remove_dc=False))
raw.plot_psd(tmax=np.inf, fmax=250)
average_ecg = create_ecg_epochs(raw).average()
print('We found %i ECG events' % average_ecg.nave)
joint_kwargs = dict(ts_args=dict(time_unit='s'),
topomap_args=dict(time_unit='s'))
average_ecg.plot_joint(**joint_kwargs)
average_eog = create_eog_epochs(raw).average()
print('We found %i EOG events' % average_eog.nave)
average_eog.plot_joint(**joint_kwargs)
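# Added sketch (a common next step, not shown in this tutorial): build SSP
# projectors from the detected ECG events so the heartbeat artifact can be
# projected out later.
from mne.preprocessing import compute_proj_ecg
ecg_projs, ecg_events = compute_proj_ecg(raw, n_grad=1, n_mag=1, n_eeg=0)
print(ecg_projs)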
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step 1 - Creating a Checkpoint
Step1: Pre-Questions
Step2: This table shows the top 10 water consuming counties, the population, the amount of the population that is connected to public water (PublicSupply), and the total water used by each county.
Step3: Most Water Consuming Counties in Florida
Step4: Try to change the histogram so that it displays the County and the Population Total. (Right now it is displaying County and Water Use Total)
Step5: Rainfall in February 2005 (Inches)
Step6: Try to change the histogram to display the data in descending order.
Step7: PART 3
Step8: Water Level in February
Step9: Water Level in July
| <ASSISTANT_TASK:>
Python Code:
# Import modules that contain functions we need
import pandas as pd
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
# Our data is a table and is defined as the word 'data'.
# 'data' is set equal to the .csv file that is read by the pandas function.
# The .csv file must be in the same directory as the program.
#data = pd.read_csv("Public Water Supply FL 2010.csv")
# You can also use external links to .xls, .csv, or .txt files and would import using the same function but replacing the
# file name with the webpage. For example:
data = pd.read_csv("https://gist.githubusercontent.com/GoodmanSciences/9d53d0874281a61354cc8a9a962cb926/raw/e379c22e667aa309cc02048bd2b7bb31ce540d60/Public%2520Water%2520Supply%2520FL%25202010.csv")
# displays the first few rows of the table
data.head(4)
# Set variables for scatter plot
x = data.Population
y = data.WaterUsed
fig = plt.figure(figsize=(15, 6))
plt.scatter(x,y)
plt.xlim(0,3000000)
plt.ylim(0,350)
plt.title('The Relationship Between Population and How Much Water a County Consumes Each Year')
plt.xlabel('Population (individuals)')
plt.ylabel('Water Used (million gallons)')
# This actually shows the plot
plt.show()
# Creates a new dataset for County
place = data.groupby("County", as_index = False).sum()
# Organizes by County with the highest water usage in descending order
# Only displays the top 10 highest water-consuming counties by using .head(10)
mostwater = place.sort_values(by="WaterUsed", ascending = False).head(10)
mostwater
# Displays a histogram of the top 10 water consuming counties in ascending order
mostwater.sort_values(by="WaterUsed", ascending=True).plot(x="County", y="WaterUsed", kind="barh", title="Top 10 Water Consuming Counties", legend=False);
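# Added sketch for the exercise above: the same counties ranked by population
# instead of by water use (only the column names change).
mostwater.sort_values(by="Population", ascending=True).plot(x="County", y="Population", kind="barh", title="Top 10 Water Consuming Counties by Population", legend=False);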
# Imports more csv files locally
#feb = pd.read_csv("Feb2005_FL_rainfall.csv")
#july = pd.read_csv("July2005_FL_rainfall.csv")
# Imports more csv files from the web
july = pd.read_csv("https://gist.githubusercontent.com/GoodmanSciences/354fa30fb1e506c055621b893b26ebe8/raw/523e483ae4534c9432f91e5d5b7f9fb0356e95e1/Rainfall%2520FL%2520Jul2005.csv")
feb = pd.read_csv("https://gist.githubusercontent.com/GoodmanSciences/7088ff6b7b8e915a87ee987f3b767641/raw/a76a0dd975f95e6c0c5e6ee810e6f6e66faeca9b/Rainfall%2520FL%2520Feb2005.csv")
feb.head()
# Plots rainfall in ascending order
feb.sort_values(by="Monthly Total", ascending=True).plot(x="County", y="Monthly Total", kind="barh", title="Rainfall in February (Inches)", legend=False);
july.head()
july.sort_values(by="Monthly Total", ascending=True).plot(x="County", y="Monthly Total", kind="barh", title="Rainfall in July (Inches)", legend=False);
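# Added sketch for the exercise above: the same histogram in descending order;
# only the ascending flag changes.
july.sort_values(by="Monthly Total", ascending=False).plot(x="County", y="Monthly Total", kind="barh", title="Rainfall in July (Inches)", legend=False);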
from IPython.display import Image
from IPython.core.display import HTML
Image(url= 'https://preview.ibb.co/g7Z6sa/Average_Monthly_Water_Consumption.png')
Image(url= 'https://floridamaps.files.wordpress.com/2015/03/florida-counties.jpg')
#Double-click to make this image GINORMOUS
# Imports another csv file locally
#level = pd.read_csv("Lake Apopka Waterlevel 2005.csv")
# Imports another csv file from the web
level = pd.read_csv("https://gist.githubusercontent.com/GoodmanSciences/e63b6cb68cd6ef5235dc8c113ea9995a/raw/39139535f7ef05057ecce1126ea336ca7bcfb879/Lake%2520Apopka%2520Waterlevel%25202005.csv")
# Sets Date as index
lev2 = level.set_index("Date")
# Displays only Feb 1st through the 28th
lev2.loc["2/1/2005":"2/28/2005", :]
# Displays only July 1st through the 7th
lev2.loc["7/1/2005":"7/7/2005", :]
# Plot of all values in level dataset
level.plot('Date', 'Water Level')
Image(url= 'http://www.floridacountiesmap.com/graphics/orange.gif')
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1. Initialization of setup
Step2: 2. Elemental Mass and Stiffness matrices
Step3: 3. Flux Matrices
Step4: 4. Discontinuous Galerkin Solution
| <ASSISTANT_TASK:>
Python Code:
# Import all necessary libraries, this is a configuration step for the exercise.
# Please run it before the simulation code!
import numpy as np
import matplotlib.pyplot as plt
from gll import gll
from lagrange1st import lagrange1st
from flux_homo import flux
# Show the plots in the Notebook.
plt.switch_backend("nbagg")
# Initialization of setup
# --------------------------------------------------------------------------
c = 2500 # acoustic velocity [m/s]
tmax = 2.0 # Length of seismogram [s]
xmax = 10000 # Length of domain [m]
vs = 2500 # Shear wave velocity [m/s]
rho = 2500 # Density [kg/m^3]
mu = rho*vs**2 # shear modulus
N = 4 # Order of Lagrange polynomials
ne = 200 # Number of elements
sig = 200 # Gaussian width
x0 = 5000 # x location of Gaussian
eps = 0.4 # Courant criterion
iplot = 20 # Plotting frequency
imethod = 'RK' # 'Euler', 'RK'
#--------------------------------------------------------------------
# GLL points and integration weights
[xi,w] = gll(N) # xi, N+1 coordinates [-1 1] of GLL points
# w Integration weights at GLL locations
# Space domain
le = xmax/ne # Length of elements
ng = ne*N + 1
# Vector with GLL points
k=0
xg = np.zeros((N+1)*ne)
for i in range(0, ne):
for j in range(0, N+1):
k += 1
xg[k-1] = i*le + .5*(xi[j]+1)*le
x = np.reshape(xg, (N+1, ne), order='F').T
# Calculation of time step according to Courant criterion
dxmin = np.min(np.diff(xg[1:N+1]))
dt = eps*dxmin/vs # Global time step
nt = int(np.floor(tmax/dt))
# Mapping - Jacobian
J = le/2 # Jacobian
Ji = 1/J # Inverse Jacobian
# 1st derivative of Lagrange polynomials
l1d = lagrange1st(N)
#################################################################
# IMPLEMENT THE ELEMENTAL MASS MATRIX HERE!
#################################################################
#################################################################
# COMPUTE THE INVERSE MASS MATRIX HERE!
#################################################################
#################################################################
# IMPLEMENT THE ELEMENTAL STIFFNESS MATRIX HERE!
#################################################################
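# Possible solution sketch (added; the notebook leaves these blocks as an
# exercise). It assumes l1d[i, j] holds the derivative of the i-th Lagrange
# polynomial evaluated at the j-th GLL point.
M = np.zeros((N+1, N+1))
for i in range(N+1):
    M[i, i] = w[i] * J              # diagonal elemental mass matrix
Minv = np.diag(1.0 / np.diag(M))    # easy to invert because M is diagonal
K = np.zeros((N+1, N+1))
for i in range(N+1):
    for j in range(N+1):
        K[i, j] = w[j] * l1d[i, j]  # elemental stiffness matrix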
#################################################################
# INITIALIZE ALL MATRICES HERE!
#################################################################
# Z =
# R =
# Rinv =
# Lm =
# Lp =
# Ap =
# Am =
# A =
# DG Solution, Time extrapolation
# ---------------------------------------------------------------
# Initialize solution vectors
Q = np.zeros([ne, N+1, 2])
Qnew = np.zeros([ne, N+1, 2])
k1 = np.zeros([ne, N+1, 2])
k2 = np.zeros([ne, N+1, 2])
Q[:,:,0] = np.exp(-1/sig**2*((x-x0))**2)
Qs = np.zeros(xg.size) # for plotting
Qv = np.zeros(xg.size) # for plotting
Qa = np.zeros((2, xg.size)) # for analytical solution
# Initialize animated plot
# ---------------------------------------------------------------
fig = plt.figure(figsize=(10,6))
ax1 = fig.add_subplot(2,1,1)
ax2 = fig.add_subplot(2,1,2)
line1 = ax1.plot(xg, Qs, 'k', xg, Qa[0,:], 'r--', lw=1.5)
line2 = ax2.plot(xg, Qv, 'k', xg, Qa[1,:], 'r--', lw=1.5)
ax1.set_ylabel('Stress')
ax2.set_ylabel('Velocity')
ax2.set_xlabel(' x ')
plt.suptitle('Homogeneous Disc. Galerkin - %s method'%imethod, size=16)
plt.ion() # set interactive mode
plt.show()
# ---------------------------------------------------------------
# Time extrapolation
# ---------------------------------------------------------------
for it in range(nt):
if imethod == 'Euler': # Euler
# Calculate Fluxes
Flux = flux(Q, N, ne, Ap, Am)
# Extrapolate each element using flux F
for i in range(1,ne-1):
Qnew[i,:,0] = dt * Minv @ (-mu * K @ Q[i,:,1].T - Flux[i,:,0].T) + Q[i,:,0].T
Qnew[i,:,1] = dt * Minv @ (-1/rho * K @ Q[i,:,0].T - Flux[i,:,1].T) + Q[i,:,1].T
elif imethod == 'RK':
# Calculate Fluxes
Flux = flux(Q, N, ne, Ap, Am)
# Extrapolate each element using flux F
for i in range(1,ne-1):
k1[i,:,0] = Minv @ (-mu * K @ Q[i,:,1].T - Flux[i,:,0].T)
k1[i,:,1] = Minv @ (-1/rho * K @ Q[i,:,0].T - Flux[i,:,1].T)
for i in range(1,ne-1):
Qnew[i,:,0] = dt * Minv @ (-mu * K @ Q[i,:,1].T - Flux[i,:,0].T) + Q[i,:,0].T
Qnew[i,:,1] = dt * Minv @ (-1/rho * K @ Q[i,:,0].T - Flux[i,:,1].T) + Q[i,:,1].T
Flux = flux(Qnew,N,ne,Ap,Am)
for i in range(1,ne-1):
k2[i,:,0] = Minv @ (-mu * K @ Qnew[i,:,1].T - Flux[i,:,0].T)
k2[i,:,1] = Minv @ (-1/rho * K @ Qnew[i,:,0].T - Flux[i,:,1].T)
# Extrapolate
Qnew = Q + (dt/2) * (k1 + k2)
else:
raise NotImplementedError
Q, Qnew = Qnew, Q
# --------------------------------------
# Animation plot. Display solution
if not it % iplot:
for l in line1:
l.remove()
del l
for l in line2:
l.remove()
del l
# stretch for plotting
k = 0
for i in range(ne):
for j in range(N+1):
Qs[k] = Q[i,j,0]
Qv[k] = Q[i,j,1]
k = k + 1
#################################################################
# IMPLEMENT THE ANALYTICAL SOLUTION HERE!
#################################################################
# --------------------------------------
# Display lines
line1 = ax1.plot(xg, Qs, 'k', xg, Qa[0,:], 'r--', lw=1.5)
line2 = ax2.plot(xg, Qv, 'k', xg, Qa[1,:], 'r--', lw=1.5)
plt.legend(iter(line2), ('D. Galerkin', 'Analytic'))
plt.gcf().canvas.draw()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Define helper functions
Step2: Project specific parameters
Step3: Iterate through subjects and runs
Step4: Compute 2-way correlations
| <ASSISTANT_TASK:>
Python Code:
import pandas as pd
import json
from scipy import stats, signal, linalg
from sklearn.decomposition import PCA
import nibabel as nib
import nipype
from nipype import Node, SelectFiles, DataSink, IdentityInterface
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.use("Agg")
from nipype.interfaces import fsl
from nipy.modalities.fmri.glm import FMRILinearModel
from nibabel import Nifti1Image, save
import numpy as np
import os
import os.path as op
import shutil
import sys
import copy
import lyman
import moss
from lyman import tools
from lyman import default_experiment_parameters
import lyman.workflows as wf
from moss import glm
import seaborn as sns
%matplotlib inline
sns.set(context="notebook", style="ticks", font="Arial")
pd.set_option('display.precision', 3)
def plotSimilarityStruct(run_data, run_evs):
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist, squareform
data_dist = pdist(run_data.T, 'correlation')
data_link = linkage(data_dist)
# Compute and plot first dendrogram.
fig = plt.figure(figsize=(8,8))
# x ywidth height
ax1 = fig.add_axes([0.05,0.1,0.2,0.6])
Y = linkage(data_dist, method='single')
Z1 = dendrogram(Y, orientation='right',labels=run_evs, distance_sort=True) # adding/removing the axes
ax1.set_xticks([])
# Compute and plot second dendrogram.
ax2 = fig.add_axes([0.3,0.71,0.6,0.2])
Z2 = dendrogram(Y)
ax2.set_xticks([])
ax2.set_yticks([])
#Compute and plot the heatmap
axmatrix = fig.add_axes([0.37,0.1,0.6,0.6])
idx1 = Z1['leaves']
idx2 = Z2['leaves']
D = squareform(data_dist)
D = D[idx1,:]
D = D[:,idx2]
im = axmatrix.matshow(D, aspect='auto', origin='lower', cmap=plt.cm.YlGnBu)
axmatrix.set_xticks([])
axmatrix.set_yticks([])
# Plot colorbar.
axcolor = fig.add_axes([1,0.1,0.02,0.6])
plt.colorbar(im, cax=axcolor)
def removeSegment(run_evs, sep, remove_seg):
run_evs = ["-".join(x.split('-')[0:remove_seg]) for x in list(run_evs)]
return run_evs
def transform_fisherZ(r):
z = 0.5*np.log((1+r)/(1-r))
return z
experiment = 'objfam'
altmodel = 'trial-prototype'
nruns = 12
subject_list = '/Volumes/group/awagner/sgagnon/ObjFam/data/subids_subset_no23or19.txt'
unsmoothed = True
condition_labels = True # If condition_file to specify which condition the trials belong to
condition_filename = 'trial-prototype-condition.csv' # only necessary if condition_labels = True
project = lyman.gather_project_info()
exp = lyman.gather_experiment_info(experiment, altmodel)
group = np.loadtxt(subject_list, str).tolist()
exp_base = experiment
exp_name = "-".join([exp_base, altmodel])
data_dir = project["data_dir"]
analysis_dir = project["analysis_dir"]
smoothing = "unsmoothed" if unsmoothed else "smoothed"
data_dir
mask_name = 'lateraloccipital'
out_val = 't' # t or beta
sub_mat = []
group_evs = []
for subid in group:
print subid
design_file = op.join(data_dir, subid, "design", exp["design_name"] + ".csv")
sub_dmat = pd.read_csv(design_file)
if condition_labels:
condition_file = op.join(data_dir, subid, "design", condition_filename)
cond_map = pd.read_csv(condition_file)
# get 3D mask as bool
# mask_file = op.join(timeseries_dir, "functional_mask_xfm.nii.gz")
mask_file = op.join(data_dir, subid, 'masks', mask_name + '.nii.gz')
mask = nib.load(mask_file).get_data() == 1
run_mat = []
ev_list = []
for run in range(1, nruns+1):
print 'Run: ' + str(run)
# Setup run specific directories
# preproc timeseries registered to first run
timeseries_dir = op.join(analysis_dir, experiment, subid, "reg/epi/unsmoothed/run_" + str(run))
preproc_dir = op.join(analysis_dir, experiment, subid, "preproc/run_" + str(run))
realign_file = op.join(preproc_dir, "realignment_params.csv")
artifact_file = op.join(preproc_dir, "artifacts.csv")
timeseries_file = op.join(timeseries_dir, "timeseries_xfm.nii.gz")
# Build the model design
run_dmat = sub_dmat[sub_dmat.run == run]
realign = pd.read_csv(realign_file)
realign = realign.filter(regex="rot|trans").apply(stats.zscore)
artifacts = pd.read_csv(artifact_file).max(axis=1)
ntp = len(artifacts)
tr = exp["TR"]
hrf = getattr(glm, exp["hrf_model"])
hrf = hrf(exp["temporal_deriv"], tr, **exp["hrf_params"])
ev_mat = []
for ev in run_dmat.condition.unique():
ev_list.append(ev)
design_LSS = copy.deepcopy(run_dmat)
design_LSS.condition[design_LSS.condition != ev] = 'other'
design_kwargs = dict(confounds=realign,
artifacts=artifacts,
tr=tr,
condition_names=sorted(design_LSS.condition.unique()), # sort to keep condition of interest first
confound_pca=exp["confound_pca"],
hpf_cutoff=exp["hpf_cutoff"])
X = glm.DesignMatrix(design_LSS, hrf, ntp, **design_kwargs)
# print ev
# print X.design_matrix.columns
# Fit model
fmri_glm = FMRILinearModel(timeseries_file, np.array(X.design_matrix), mask=mask_file)
fmri_glm.fit(do_scaling=True, model='ar1')
# Get beta
beta_hat = fmri_glm.glms[0].get_beta()
# Output appropriate statistic
if out_val == 'beta':
ev_mat.append(beta_hat[0])
elif out_val == 't':
# Calc t-statistic
num_reg = beta_hat.shape[0]
con = [[1] + [0]*(num_reg-1)]
t_map, = fmri_glm.contrast(con, con_id=ev, contrast_type='t')
t_map = t_map.get_data()[mask].ravel()
ev_mat.append(t_map)
run_mat.append(ev_mat)
sub_mat.append(run_mat)
group_evs.append(ev_list)
data = np.array(sub_mat)
evs = np.array(group_evs)
print 'Data shape (subid x run x trial x voxel):' + str(data.shape)
print 'EVs shape (subid x trial):' + str(evs.shape)
data[5,0]
group[5]
sub_num = 8
run_data = data[sub_num,0]
run_data = np.vstack(run_data).T # voxel x ev
run_evs = evs[sub_num].reshape(12,30)[0]
sns.corrplot(run_data[np.argsort(run_evs)],
names = run_evs[np.argsort(run_evs)],
diag_names=False,
annot=False, cmap_range=(-1,1))
df = pd.DataFrame(run_data, columns=run_evs)
corr_mat = df.corr()
sns.clustermap(corr_mat)
data.shape[0]
sub_data.shape[0]
df_corr = pd.DataFrame(columns=['subid', 'run', 'condition', 'corr'])
df_condmap = pd.DataFrame()
for sub_num in range(data.shape[0]):
print group[sub_num]
subid = group[sub_num]
sub_data = data[sub_num]
sub_evs = evs[sub_num].reshape(12,30)
condition_file = op.join(data_dir, subid, "design", condition_filename)
cond_map = pd.read_csv(condition_file)
cond_map['subid'] = subid
df_condmap = pd.concat([df_condmap, cond_map])
for run in range(sub_data.shape[0]):
run_data = sub_data[run]
run_data = np.vstack(run_data).T # voxel x ev
run_evs = sub_evs[run]
run_conds = removeSegment(run_evs, '-', 2)
df = pd.DataFrame(run_data, columns=run_conds)
for cond in set(run_conds):
corr_value = np.array(df[cond].corr())[0][1]
df_corr = df_corr.append([dict(subid=subid,
run=run+1,
condition=cond,
corr=corr_value)],
ignore_index=True)
df_corr.head()
df_corr.subid.unique().shape
df_condmap.subid.unique().shape
df_corr2 = df_corr.merge(df_condmap, on=['subid', 'run', 'condition'])
df_corr2.head()
df_corr2.subid.unique().shape
sns.distplot(df_corr2.loc[1500:3000, 'corr'])
sns.set(context='poster', style='whitegrid')
data_corr = df_corr2.join(pd.DataFrame(df_corr2.morphmem.str.split('_').tolist(),
columns=['morph', 'resp']))
sns.factorplot(x='morph', y='corr', hue='resp',dodge=0.1,
hue_order=['new', 'old'],
ci=68, units='subid', data=data_corr)
%load_ext rpy2.ipython
%R require(lme4)
%R require(lmerTest)
%R require(ggplot2)
%R -i data_corr
%%R
print(str(data_corr))
data_corr$morph_q = as.numeric(data_corr$morph)
data_corr_noguess = data_corr[data_corr$resp != 'guess',]
data_corr_noguess$resp = factor(data_corr_noguess$resp)
print(str(data_corr_noguess))
contrasts(data_corr_noguess$resp) = c(-1,1)
print(contrasts(data_corr_noguess$resp))
%%R
res1 = lmer(corr ~ morph_q * resp + (1|subid), data=data_corr_noguess)
print(summary(res1))
data_corr['morph_q'] = data_corr.morph.astype('int')
data_group = data_corr.groupby(['subid', 'morph']).mean().reset_index()
data_group.head()
data_group['z'] = transform_fisherZ(data_group['corr'])
data_group.head()
sns.violinplot(x='morph', y='z', data=data_group,
inner="points")
sns.coefplot("z~scale(morph_q)",
data=data_group,
ci=68)
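# Added sketch: a quick group-level test of whether the Fisher-z correlations
# differ from zero, collapsing to one mean value per subject first.
subject_means = data_group.groupby('subid')['z'].mean()
print(stats.ttest_1samp(subject_means, 0))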
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Interaction Between Neurons - Feature Visualization
Step2: Combining Objectives
Step3: Random Directions
Step4: Aligned Interpolation
| <ASSISTANT_TASK:>
Python Code:
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Install Lucid
!pip install --quiet lucid==0.0.5
#!pip install --quiet --upgrade-strategy=only-if-needed git+https://github.com/tensorflow/lucid.git
# Imports
import numpy as np
import scipy.ndimage as nd
import tensorflow as tf
import lucid.modelzoo.vision_models as models
from lucid.misc.io import show
import lucid.optvis.objectives as objectives
import lucid.optvis.param as param
import lucid.optvis.render as render
import lucid.optvis.transform as transform
# Let's import a model from the Lucid modelzoo!
model = models.InceptionV1()
model.load_graphdef()
neuron1 = ('mixed4b_pre_relu', 111) # large fluffy
# neuron1 = ('mixed3a_pre_relu', 139) # pointillist
# neuron1 = ('mixed3b_pre_relu', 81) # brush strokes
# neuron1 = ('mixed4a_pre_relu', 97) # wavy
# neuron1 = ('mixed4a_pre_relu', 41) # frames
# neuron1 = ('mixed4a_pre_relu', 479) # B/W
neuron2 = ('mixed4a_pre_relu', 476) # art
# neuron2 = ('mixed4b_pre_relu', 360) # lattices
# neuron2 = ('mixed4b_pre_relu', 482) # arcs
# neuron2 = ('mixed4c_pre_relu', 440) # small fluffy
# neuron2 = ('mixed4d_pre_relu', 479) # bird beaks
# neuron2 = ('mixed4e_pre_relu', 718) # shoulders
C = lambda neuron: objectives.channel(*neuron)
_ = render.render_vis(model, C(neuron1))
_ = render.render_vis(model, C(neuron2))
_ = render.render_vis(model, C(neuron1) + C(neuron2))
obj = objectives.direction("mixed4d_pre_relu", np.random.randn(528))
_ = render.render_vis(model, obj)
def interpolate_param_f():
unique = param.fft_image((6, 128, 128, 3))
shared = [
param.lowres_tensor((6, 128, 128, 3), (1, 128//2, 128//2, 3)),
param.lowres_tensor((6, 128, 128, 3), (1, 128//4, 128//4, 3)),
param.lowres_tensor((6, 128, 128, 3), (1, 128//8, 128//8, 3)),
param.lowres_tensor((6, 128, 128, 3), (2, 128//8, 128//8, 3)),
param.lowres_tensor((6, 128, 128, 3), (1, 128//16, 128//16, 3)),
param.lowres_tensor((6, 128, 128, 3), (2, 128//16, 128//16, 3)),
]
return param.to_valid_rgb(unique + sum(shared), decorrelate=True)
obj = objectives.channel_interpolate("mixed4a_pre_relu", 476, "mixed4a_pre_relu", 460)
_ = render.render_vis(model, obj, interpolate_param_f)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Continuous data is stored in objects of type
Step2: Note: Accessing the `._data` attribute is done here for educational purposes
Step3: Selecting subsets of channels and samples
Step4: Notice the different scalings of these types
Step5: You can restrict the data to a specific time range
Step6: And drop channels by name
Step7: Concatenating
| <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function
import mne
import os.path as op
from matplotlib import pyplot as plt
# Load an example dataset, the preload flag loads the data into memory now
data_path = op.join(mne.datasets.sample.data_path(), 'MEG',
'sample', 'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(data_path, preload=True)
raw.set_eeg_reference() # set EEG average reference
# Give the sample rate
print('sample rate:', raw.info['sfreq'], 'Hz')
# Give the size of the data matrix
print('channels x samples:', raw._data.shape)
# Extract data from the first 5 channels, from 1 s to 3 s.
sfreq = raw.info['sfreq']
data, times = raw[:5, int(sfreq * 1):int(sfreq * 3)]
_ = plt.plot(times, data.T)
_ = plt.title('Sample channels')
# Pull all MEG gradiometer channels:
# Make sure to use .copy() or it will overwrite the data
meg_only = raw.copy().pick_types(meg=True)
eeg_only = raw.copy().pick_types(meg=False, eeg=True)
# The MEG flag in particular lets you specify a string for more specificity
grad_only = raw.copy().pick_types(meg='grad')
# Or you can use custom channel names
pick_chans = ['MEG 0112', 'MEG 0111', 'MEG 0122', 'MEG 0123']
specific_chans = raw.copy().pick_channels(pick_chans)
print(meg_only)
print(eeg_only)
print(grad_only)
print(specific_chans)
f, (a1, a2) = plt.subplots(2, 1)
eeg, times = eeg_only[0, :int(sfreq * 2)]
meg, times = meg_only[0, :int(sfreq * 2)]
a1.plot(times, meg[0])
a2.plot(times, eeg[0])
del eeg, meg, meg_only, grad_only, eeg_only, data, specific_chans
raw = raw.crop(0, 50) # in seconds
print('New time range from', raw.times.min(), 's to', raw.times.max(), 's')
nchan = raw.info['nchan']
raw = raw.drop_channels(['MEG 0241', 'EEG 001'])
print('Number of channels reduced from', nchan, 'to', raw.info['nchan'])
# Create multiple :class:`Raw <mne.io.RawFIF>` objects
raw1 = raw.copy().crop(0, 10)
raw2 = raw.copy().crop(10, 20)
raw3 = raw.copy().crop(20, 40)
# Concatenate in time (also works without preloading)
raw1.append([raw2, raw3])
print('Time extends from', raw1.times.min(), 's to', raw1.times.max(), 's')
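# Added sketch: the concatenated Raw object can be written back to disk with
# the standard FIF writer (raw filenames conventionally end in _raw.fif).
raw1.save('sample_audvis_concatenated_raw.fif', overwrite=True)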
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Describes the tests needed to validate the PutFile functionality.
Step2: Check this by running
Step3: The response should be
Step4: Verify that it is now present at all pillars, and that it has the same checksum at all pillars
Step5: Put a file with a different file-id to all pillars
Step6: Verify that it is now present at all pillars, and that it has the same checksum at all pillars
Step7: Put a file by using an URL
Step8: Verify that it is now present at all pillars, with the requested checksum
Step9: Idempotent test
Step10: Put with returned checksums
Step11: Use the request-checksum-algorithm and request-checksum-salt arguments (-R and -S)
Step12: Note: this will return the stored MD5 hashes if the file has already been put
Step13: Verify that the checksumpillar does not reply with a checksum.
Step14: Verify that the checksumpillar and all the other pillars have the file
Step15: Put a file including a request for checksum, but using both a salt and a non-salt algorithm.
Step16: Verify that all the data pillars return the same checksum, and that it is identical to the previous test (where '-R HMAC_SHA1')
Step17: Verify that the data pillars all deliver the same checksum, which must be different from the checksum in the previous two tests.
Step18: Attempt to put a file which does not exist
Step19: Attempt to put a file to a non-existing collection
Step20: Attempt to put a file to a non-existing pillar
| <ASSISTANT_TASK:>
Python Code:
%env CLIENT bitrepository-client-1.9-RC1
!wget -Nq "https://sbforge.org/download/attachments/25395346/${CLIENT}.zip"
!unzip -quo ${CLIENT}.zip
%alias bitmag ${CLIENT}/bin/bitmag.sh %l
#Some imports we will need later
import random
import string
TESTFILE1='README.md'
%bitmag get-file-ids -c integrationtest2 -i {TESTFILE1}
hash=!cat {TESTFILE1} | md5sum - | cut -d' ' -f1
print("md5: {}".format(hash))
%bitmag delete -c integrationtest2 -i {TESTFILE1} -C {hash.s} -p "sbtape2"
%bitmag delete -c integrationtest2 -i {TESTFILE1} -C {hash.s} -p "reference2"
%bitmag delete -c integrationtest2 -i {TESTFILE1} -C {hash.s} -p "checksum2"
%bitmag delete -c integrationtest2 -i {TESTFILE1} -C {hash.s} -p "sbdisk1"
%bitmag delete -c integrationtest2 -i {TESTFILE1} -C {hash.s} -p "kbpillar2"
%bitmag put-file -c integrationtest2 -f {TESTFILE1}
%bitmag get-file-ids -c integrationtest2 -i {TESTFILE1}
%bitmag get-checksums -c integrationtest2 -i {TESTFILE1}
TESTFILE2=''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(10))
%bitmag put-file -c integrationtest2 -f {TESTFILE1} -i {TESTFILE2}
%bitmag get-file-ids -c integrationtest2 -i {TESTFILE2}
%bitmag get-checksums -c integrationtest2 -i {TESTFILE2}
TESTFILE3=''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(10))
URL='http://sandkasse-01.kb.dk/dav/test.txt'
hash=!curl -s {URL} | md5sum - | cut -d' ' -f1
print("md5: {}".format(hash))
%bitmag put-file -c integrationtest2 -u {URL} -i {TESTFILE3} -C {hash.s}
%bitmag get-file-ids -c integrationtest2 -i {TESTFILE3}
%bitmag get-checksums -c integrationtest2 -i {TESTFILE3}
%bitmag put-file -c integrationtest2 -f {TESTFILE1}
import hmac
import hashlib
import urllib.request
def getSaltedChecksum(url,salt,algorithm):
saltBytes = bytes.fromhex(salt)
digester = hmac.new(saltBytes,None,algorithm)
with urllib.request.urlopen(url) as from_fh:
while True:
chunk = from_fh.read()
if not chunk:
break
digester.update(chunk)
return digester.hexdigest().lower()
saltedChecksum=getSaltedChecksum(url=URL,salt='abcd',algorithm=hashlib.sha1)
print(saltedChecksum)
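# Added sketch (not part of the original test set-up): the corresponding
# unsalted checksum of the same URL content, for comparison with the salted
# HMAC value printed above.
def getPlainChecksum(url, algorithm):
    digester = algorithm()
    with urllib.request.urlopen(url) as from_fh:
        while True:
            chunk = from_fh.read()
            if not chunk:
                break
            digester.update(chunk)
    return digester.hexdigest().lower()

print(getPlainChecksum(URL, hashlib.sha1))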
TESTFILE4=''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(10))
hash=!curl -s {URL} | md5sum - | cut -d' ' -f1
print("md5: {}".format(hash))
%bitmag put-file -c integrationtest2 -u {URL} -i {TESTFILE4} -C {hash.s} -S 'abcd' -R HMAC_SHA1
%bitmag get-checksums -c integrationtest2 -i {TESTFILE4} -R HMAC_SHA1 -S 'abcd'
%bitmag get-file-ids -c integrationtest2 -i {TESTFILE4}
TESTFILE5=''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(10))
hash=!curl -s {URL} | md5sum - | cut -d' ' -f1
print("md5: {}".format(hash))
%bitmag put-file -c integrationtest2 -u {URL} -i {TESTFILE5} -C {hash.s} -R SHA1 -S 'abcd'
hash=!curl -s {URL} | md5sum - | cut -d' ' -f1
print("md5: {}".format(hash))
hash=!curl -s {URL} | sha1sum - | cut -d' ' -f1
print("sha1: {}".format(hash))
%bitmag get-checksums -c integrationtest2 -i {TESTFILE5}
%bitmag put-file -c integrationtest2 -f .gitignore -i {TESTFILE1}
%bitmag put-file -c integrationtest2 -f ThisFileDoesNotExist
%bitmag put-file -c integrationtest3 -i {TESTFILE1}
%bitmag put-file -c integrationtest1 -i {TESTFILE1} -p non-existing-pillar
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <a id='sec1.3'></a>
Step2: Extract POI category and visiting frequency.
Step3: <a id='sec1.4'></a>
Step5: <a id='sec1.5'></a>
Step7: <a id='sec1.6'></a>
Step9: Distance based rules
Step11: POI Popularity based rules
Step14: <a id='sec1.7'></a>
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import os
import math
import random
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from datetime import datetime
random.seed(123456789)
data_dir = 'data/data-ijcai15'
#fvisit = os.path.join(data_dir, 'userVisits-Osak.csv')
#fcoord = os.path.join(data_dir, 'photoCoords-Osak.csv')
#fvisit = os.path.join(data_dir, 'userVisits-Glas.csv')
#fcoord = os.path.join(data_dir, 'photoCoords-Glas.csv')
#fvisit = os.path.join(data_dir, 'userVisits-Edin.csv')
#fcoord = os.path.join(data_dir, 'photoCoords-Edin.csv')
fvisit = os.path.join(data_dir, 'userVisits-Toro.csv')
fcoord = os.path.join(data_dir, 'photoCoords-Toro.csv')
suffix = fvisit.split('-')[-1].split('.')[0]
visits = pd.read_csv(fvisit, sep=';')
visits.head()
coords = pd.read_csv(fcoord, sep=';')
coords.head()
# merge data frames according to column 'photoID'
assert(visits.shape[0] == coords.shape[0])
traj = pd.merge(visits, coords, on='photoID')
traj.head()
num_photo = traj['photoID'].unique().shape[0]
num_user = traj['userID'].unique().shape[0]
num_seq = traj['seqID'].unique().shape[0]
num_poi = traj['poiID'].unique().shape[0]
pd.DataFrame([num_photo, num_user, num_seq, num_poi, num_photo/num_user, num_seq/num_user], \
index = ['#photo', '#user', '#seq', '#poi', '#photo/user', '#seq/user'], columns=[str(suffix)])
poi_coords = traj[['poiID', 'photoLon', 'photoLat']].groupby('poiID').agg(np.mean)
poi_coords.reset_index(inplace=True)
poi_coords.rename(columns={'photoLon':'poiLon', 'photoLat':'poiLat'}, inplace=True)
poi_coords.head()
poi_catfreq = traj[['poiID', 'poiTheme', 'poiFreq']].groupby('poiID').first()
poi_catfreq.reset_index(inplace=True)
poi_catfreq.head()
poi_all = pd.merge(poi_catfreq, poi_coords, on='poiID')
poi_all.set_index('poiID', inplace=True)
poi_all.head()
seq_all = traj[['userID', 'seqID', 'poiID', 'dateTaken']].copy()\
.groupby(['userID', 'seqID', 'poiID']).agg([np.min, np.max])
seq_all.columns = seq_all.columns.droplevel()
seq_all.reset_index(inplace=True)
seq_all.rename(columns={'amin':'arrivalTime', 'amax':'departureTime'}, inplace=True)
seq_all['poiDuration(sec)'] = seq_all['departureTime'] - seq_all['arrivalTime']
seq_all.head()
seq_start = seq_all[['userID', 'seqID', 'arrivalTime']].copy().groupby(['userID', 'seqID']).agg(np.min)
seq_start.rename(columns={'arrivalTime':'startTime'}, inplace=True)
seq_start.reset_index(inplace=True)
seq_start.head()
seq_end = seq_all[['userID', 'seqID', 'departureTime']].copy().groupby(['userID', 'seqID']).agg(np.max)
seq_end.rename(columns={'departureTime':'endTime'}, inplace=True)
seq_end.reset_index(inplace=True)
seq_end.head()
assert(seq_start.shape[0] == seq_end.shape[0])
user_seqs = pd.merge(seq_start, seq_end, on=['userID', 'seqID'])
user_seqs.head()
#user_seqs.loc[0, 'seqID']
#user_seqs['userID'].iloc[-1]
def generate_ext_transmat(poi_all, seq_all, user_seqs, timeGap):
Calculate the extended transition matrix of POI category for actual trajectories with a special category REST.
For a specific user, if the time gap between the earlier sequence and the latter sequence is less than 'timeGap',
then add a REST state between the two sequences, otherwise,
add a REST to REST transition after the earlier sequence.
assert(timeGap > 0)
states = poi_all['poiTheme'].unique().tolist()
states.sort()
states.append('REST')
ext_transmat = pd.DataFrame(data=np.zeros((len(states), len(states)), dtype=np.float64), \
index=states, columns=states)
for user in user_seqs['userID'].unique():
sequ = user_seqs[user_seqs['userID'] == user].copy()
sequ.sort_values(by='startTime', ascending=True, inplace=True)
prev_seqEndTime = None
prev_endPOICat = None
# sequence with length 1 should be considered
for i in range(len(sequ.index)):
idx = sequ.index[i]
seqid = sequ.loc[idx, 'seqID']
seq = seq_all[seq_all['seqID'] == seqid].copy()
seq.sort_values(by='arrivalTime', ascending=True, inplace=True)
for j in range(len(seq.index)-1):
poi1 = seq.loc[seq.index[j], 'poiID']
poi2 = seq.loc[seq.index[j+1], 'poiID']
cat1 = poi_all.loc[poi1, 'poiTheme']
cat2 = poi_all.loc[poi2, 'poiTheme']
ext_transmat.loc[cat1, cat2] += 1
# REST state
if i > 0:
startTime = sequ.loc[idx, 'startTime']
assert(prev_seqEndTime is not None)
assert(startTime >= prev_seqEndTime)
ext_transmat.loc[prev_endPOICat, 'REST'] += 1 # POI-->REST
if startTime - prev_seqEndTime < timeGap: # REST-->POI
poi0 = seq.loc[seq.index[0], 'poiID']
startPOICat = poi_all.loc[poi0, 'poiTheme']
ext_transmat.loc['REST', startPOICat] += 1
else: # REST-->REST
ext_transmat.loc['REST', 'REST'] += 1
# memorise info of previous sequence
prev_seqEndTime = sequ.loc[idx, 'endTime']
poiN = seq.loc[seq.index[-1], 'poiID']
prev_endPOICat = poi_all.loc[poiN, 'poiTheme']
# normalize each row to get the transition probability from cati to catj
for r in ext_transmat.index:
rowsum = ext_transmat.loc[r].sum()
if rowsum == 0: continue # deal with lack of data
ext_transmat.loc[r] /= rowsum
return ext_transmat
timeGap = 24 * 60 * 60 # 24 hours
trans_mat = generate_ext_transmat(poi_all, seq_all, user_seqs, timeGap)
trans_mat
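# Sanity check (added): each row of the transition matrix should sum to 1,
# except rows for which no transitions were observed (they stay all-zero).
trans_mat.sum(axis=1)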
#trans_mat.columns[-1]
#trans_mat.loc['Sport']
#np.array(trans_mat.loc['Sport'])
#np.array(trans_mat.loc['Sport']).sum()
def calc_dist(longitude1, latitude1, longitude2, latitude2):
Calculate the distance (unit: km) between two places on earth
# convert degrees to radians
lon1 = math.radians(longitude1)
lat1 = math.radians(latitude1)
lon2 = math.radians(longitude2)
lat2 = math.radians(latitude2)
radius = 6371.009 # mean earth radius is 6371.009km, en.wikipedia.org/wiki/Earth_radius#Mean_radius
# The haversine formula, en.wikipedia.org/wiki/Great-circle_distance
dlon = math.fabs(lon1 - lon2)
dlat = math.fabs(lat1 - lat2)
return 2 * radius * math.asin( math.sqrt( \
(math.sin(0.5*dlat))**2 + math.cos(lat1) * math.cos(lat2) * (math.sin(0.5*dlon))**2 ))
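# Quick illustration (added): haversine distance in km between the first two POIs.
calc_dist(poi_all.iloc[0]['poiLon'], poi_all.iloc[0]['poiLat'],
          poi_all.iloc[1]['poiLon'], poi_all.iloc[1]['poiLat'])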
def rule_NN(current_poi, next_poi_cat, poi_all, randomized):
choosing a specific POI within a category.
if randomized == True,
return a random POI chosen with probability proportional to the reciprocal of its distance to the current POI
otherwise, return the Nearest Neighbor of the current POI
assert(current_poi in poi_all.index)
assert(next_poi_cat in poi_all['poiTheme'].unique())
poi_index = None
if poi_all.loc[current_poi, 'poiTheme'] == next_poi_cat:
poi_index = [x for x in poi_all[poi_all['poiTheme'] == next_poi_cat].index if x != current_poi]
else:
poi_index = poi_all[poi_all['poiTheme'] == next_poi_cat].index
probs = np.zeros(len(poi_index), dtype=np.float64)
for i in range(len(poi_index)):
dist = calc_dist(poi_all.loc[current_poi, 'poiLon'], poi_all.loc[current_poi, 'poiLat'], \
poi_all.loc[poi_index[i],'poiLon'], poi_all.loc[poi_index[i],'poiLat'])
assert(dist > 0.)
probs[i] = 1. / dist
idx = None
if randomized == True:
probs /= np.sum(probs) # normalise
sample = np.random.multinomial(1, probs) # categorical/multinoulli distribution, multinomial distribution (n=1)
for j in range(len(sample)):
if sample[j] == 1:
idx = j
break
else:
idx = probs.argmax()
assert(idx is not None)
return poi_index[idx]
def rule_Pop(current_poi, next_poi_cat, poi_all, randomized):
choosing a specific POI within a category.
if randomized == True,
return a random POI chosen with probability proportional to its popularity
otherwise, return the most popular POI
assert(current_poi in poi_all.index)
assert(next_poi_cat in poi_all['poiTheme'].unique())
poi_index = None
if poi_all.loc[current_poi, 'poiTheme'] == next_poi_cat:
poi_index = [x for x in poi_all[poi_all['poiTheme'] == next_poi_cat].index if x != current_poi]
else:
poi_index = poi_all[poi_all['poiTheme'] == next_poi_cat].index
probs = np.zeros(len(poi_index), dtype=np.float64)
for i in range(len(poi_index)):
probs[i] = poi_all.loc[poi_index[i],'poiFreq']
idx = None
if randomized == True:
probs /= np.sum(probs) # normalise
sample = np.random.multinomial(1, probs) # categorical/multinoulli distribution, multinomial distribution (n=1)
for j in range(len(sample)):
if sample[j] == 1:
idx = j
break
else:
idx = probs.argmax()
assert(idx is not None)
return poi_index[idx]
def extract_seq(seqid_set, seq_all):
Extract the actual sequences (i.e. a list of POI) from a set of sequence ID
seq_dict = dict()
for seqid in seqid_set:
seqi = seq_all[seq_all['seqID'] == seqid].copy()
seqi.sort_values(by='arrivalTime', ascending=True, inplace=True)
seq_dict[seqid] = seqi['poiID'].tolist()
return seq_dict
all_seqid = seq_all['seqID'].unique()
all_seq_dict = extract_seq(all_seqid, seq_all)
def choose_start_poi(all_seq_dict, seqLen):
choose the first POI in a random actual sequence
assert(seqLen > 0)
while True:
seqid = random.choice(sorted(all_seq_dict.keys()))
if len(all_seq_dict[seqid]) > seqLen:
return all_seq_dict[seqid][0]
obs_mat = trans_mat.copy() * 0
obs_mat
prefer_NN_over_Pop = True
randomized = True
N = 1000 # number of observations
prevpoi = choose_start_poi(all_seq_dict, 1)
prevcat = poi_all.loc[prevpoi, 'poiTheme']
nextpoi = None
nextcat = None
print('(%s, POI %d)->' % (prevcat, prevpoi))
n = 0
while n < N:
# choose the next POI category
# categorical/multinoulli distribution, special case of multinomial distribution (n=1)
sample = np.random.multinomial(1, np.array(trans_mat.loc[prevcat]))
nextcat = None
for j in range(len(sample)):
if sample[j] == 1: nextcat = trans_mat.columns[j]
assert(nextcat is not None)
obs_mat.loc[prevcat, nextcat] += 1
# choose the next POI
if nextcat == 'REST':
nextpoi = choose_start_poi(all_seq_dict, 1) # restart
print('(REST)->')
else:
if prefer_NN_over_Pop == True:
nextpoi = rule_NN(prevpoi, nextcat, poi_all, randomized)
else:
nextpoi = rule_Pop(prevpoi, nextcat, poi_all, randomized)
print('(%s, POI %d)->' % (nextcat, nextpoi))
prevcat = nextcat
prevpoi = nextpoi
n += 1
obs_mat
# MLE estimation of the transition matrix from the observed counts
est_mat = obs_mat.copy()
for r in est_mat.index:
rowsum = est_mat.loc[r].sum()
if rowsum == 0: continue # deal with lack of data
est_mat.loc[r] /= rowsum
est_mat
trans_mat
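# Added sketch: a crude summary of how close the estimate is to the original
# transition matrix (mean absolute difference over all entries).
(est_mat - trans_mat).abs().values.mean()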
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Set parameters
Step2: Compute statistic
Step3: View time-frequency plots
| <ASSISTANT_TASK:>
Python Code:
# Authors: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
import numpy as np
import matplotlib.pyplot as plt
import mne
from mne.time_frequency import tfr_morlet
from mne.stats import permutation_cluster_1samp_test
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
tmin, tmax, event_id = -0.3, 0.6, 1
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname)
events = mne.find_events(raw, stim_channel='STI 014')
include = []
raw.info['bads'] += ['MEG 2443', 'EEG 053'] # bads + 2 more
# picks MEG gradiometers
picks = mne.pick_types(raw.info, meg='grad', eeg=False, eog=True,
stim=False, include=include, exclude='bads')
# Load condition 1
event_id = 1
epochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), preload=True,
reject=dict(grad=4000e-13, eog=150e-6))
# Take only one channel
ch_name = 'MEG 1332'
epochs.pick_channels([ch_name])
evoked = epochs.average()
# Factor to down-sample the temporal dimension of the TFR computed by
# tfr_morlet. Decimation occurs after frequency decomposition and can
# be used to reduce memory usage (and possibly computational time of downstream
# operations such as nonparametric statistics) if you don't need high
# spectrotemporal resolution.
decim = 5
freqs = np.arange(8, 40, 2) # define frequencies of interest
sfreq = raw.info['sfreq'] # sampling in Hz
tfr_epochs = tfr_morlet(epochs, freqs, n_cycles=4., decim=decim,
average=False, return_itc=False, n_jobs=1)
# Baseline power
tfr_epochs.apply_baseline(mode='logratio', baseline=(-.100, 0))
# Crop in time to keep only what is between 0 and 400 ms
evoked.crop(0., 0.4)
tfr_epochs.crop(0., 0.4)
epochs_power = tfr_epochs.data[:, 0, :, :] # take the 1 channel
threshold = 2.5
T_obs, clusters, cluster_p_values, H0 = \
permutation_cluster_1samp_test(epochs_power, n_permutations=100,
threshold=threshold, tail=0)
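# Added summary (not in the original example): report how many clusters
# reach the conventional 0.05 significance level.
print('Found %d clusters, %d with p <= 0.05'
      % (len(clusters), sum(p <= 0.05 for p in cluster_p_values)))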
evoked_data = evoked.data
times = 1e3 * evoked.times
plt.figure()
plt.subplots_adjust(0.12, 0.08, 0.96, 0.94, 0.2, 0.43)
# Create new stats image with only significant clusters
T_obs_plot = np.nan * np.ones_like(T_obs)
for c, p_val in zip(clusters, cluster_p_values):
if p_val <= 0.05:
T_obs_plot[c] = T_obs[c]
vmax = np.max(np.abs(T_obs))
vmin = -vmax
plt.subplot(2, 1, 1)
plt.imshow(T_obs, cmap=plt.cm.gray,
extent=[times[0], times[-1], freqs[0], freqs[-1]],
aspect='auto', origin='lower', vmin=vmin, vmax=vmax)
plt.imshow(T_obs_plot, cmap=plt.cm.RdBu_r,
extent=[times[0], times[-1], freqs[0], freqs[-1]],
aspect='auto', origin='lower', vmin=vmin, vmax=vmax)
plt.colorbar()
plt.xlabel('Time (ms)')
plt.ylabel('Frequency (Hz)')
plt.title('Induced power (%s)' % ch_name)
ax2 = plt.subplot(2, 1, 2)
evoked.plot(axes=[ax2], time_unit='s')
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step4: Objects
Step5: The above construct is a class, which is to say a model for creating objects.
Step6: Now we have an object called "jyry", which has the value
Step7: One can create multiple objects that all have their own identity, even though they share some variables.
Step8: Typically object comparison is done using the same syntax as for basic types (which, by the way are objects too in Python).
Step9: Python permits the programmer to edit objects without any access control mechanics. See for example.
Step10: Figuring out an object
Step11: How can we find things out about this object? Below are a few examples
Step12: Exceptions
Step13: The container-class can exhibit at least two different exceptions.
Step14: Who should worry about the various issues is a good philosophical question. We could either make the Container-class secure in that it doesn't raise any errors to whoever calls it or we could let the caller worry about such errors.
Step15: A try-except may contain a finally block, which is always guaranteed to execute.
Step16: There is also syntax for catching multiple error types in the same catch clause.
| <ASSISTANT_TASK:>
Python Code:
class Student(object):
The above states that the code-block (indented area) below will define a
class Student, which derives from a class called 'object'. Inheriting from 'object' is standard practice.
def __init__(self, name, birthyear, interest=None):
__init__ is a special method that is called when instantiating the object.
Typically the methods can then be used to
self.name = name
self.birthyear = birthyear
self.interest = interest
def say_hi(self):
This is a classical example of a function that prints something.
The more complex your system, the less likely it is a good idea to print anything other than
warnings from within your classes.
if not self.interest:
print("Hi, my name is " + self.name + "!")
else:
print("Hi, my name is " + self.name + " and I'm interested in " + self.interest + ".")
def get_age(self):
This is a much more style-pure example of classes.
Recording a birthyear instead of age is a good idea because next year we'll all be a year older.
However, requiring everyone who uses your class to compute the age themselves is impolite and would lead to duplicate code.
Doing it once and asking everyone to use that implementation reduces code complexity and improves
maintainability.
import datetime
return datetime.datetime.now().year-self.birthyear
jyry = Student("Jyry", 1984, interest="Python")
jyry.say_hi()
print(jyry.birthyear)
tuomas = Student("Tuomas", 1984, interest="Java")
tuomas.say_hi()
tuomas == jyry
jyry.interest = "teaching"
jyry.say_hi()
fobj = open("../data/grep.txt")
print(fobj)
dir(fobj)
help(jyry.say_hi)
jyry.say_hi.__doc__
class Container(object):
def __init__(self):
self.bag = {}
def put(self, key, item):
self.bag[key] = item
def get(self, key):
return self.bag[key]
container = Container()
container.put([1, 2, 3], "example")
container.get("not_in_it")
try:
container = Container()
container.put([1,2,3], "value")
except TypeError as err:
print("Stupid programmer caused an error: " + str(err))
try:
container = Container()
container.put(3, "value")
container.get(3)
except TypeError as err:
print("Stupid programmer caused an error: " + str(err))
except KeyError as err:
print("Stupid programmer caused another error: " + str(err))
finally:
print("all is well in the end")
# go ahead, make changes that cause one of the exceptions to be raised
try:
container = Container()
container.put(3, "value")
container.get(5)
except (TypeError, KeyError) as err:
print("please shoot me")
if type(err) == TypeError:
raise Exception("That's it I quit!")
else:
raise
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Once generate data
Step2: Step 1 - collect data
Step3: Step 2 - Build model
Step4: Step 3 training the network
Step5: One epoch takes approximately 268 secs
| <ASSISTANT_TASK:>
Python Code:
num_units = 400 #state size
input_len = 60
target_len = 30
batch_size = 64
with_EOS = False
total_size = 57994
train_size = 46400
test_size = 11584
data_folder = '../../../../Dropbox/data'
ph_data_path = '../data/price_history'
npz_full = ph_data_path + '/price_history_dp_60to30_57994.npz'
npz_train = ph_data_path + '/price_history_dp_60to30_57994_46400_train.npz'
npz_test = ph_data_path + '/price_history_dp_60to30_57994_11584_test.npz'
# dp = PriceHistorySeq2SeqDataProvider(npz_path=npz_train, batch_size=batch_size, with_EOS=with_EOS)
# dp.inputs.shape, dp.targets.shape
# aa, bb = dp.next()
# aa.shape, bb.shape
model = PriceHistorySeq2SeqDynDecIns(rng=random_state, dtype=dtype, config=config, with_EOS=with_EOS)
# graph = model.getGraph(batch_size=batch_size,
# num_units=num_units,
# input_len=input_len,
# target_len=target_len)
#show_graph(graph)
best_params = [500,
tf.nn.tanh,
0.0001,
0.62488034788862112,
0.001]
num_units, activation, lamda2, keep_prob_input, learning_rate = best_params
batch_size
def experiment():
return model.run(npz_path=npz_train,
npz_test = npz_test,
epochs=100,
batch_size = batch_size,
num_units = num_units,
input_len=input_len,
target_len=target_len,
learning_rate = learning_rate,
preds_gather_enabled=True,
batch_norm_enabled = True,
activation = activation,
decoder_first_input = PriceHistorySeq2SeqDynDecIns.DECODER_FIRST_INPUT.ZEROS,
keep_prob_input = keep_prob_input,
lamda2 = lamda2,
)
#%%time
dyn_stats, preds_dict, targets = get_or_run_nn(experiment, filename='024_seq2seq_60to30_002',
nn_runs_folder= data_folder + '/nn_runs')
dyn_stats.plotStats()
plt.show()
data_len = len(targets)
mses = np.empty(data_len)
for ii, (pred, target) in enumerate(zip(preds_dict.values(), targets.values())):
mses[ii] = mean_squared_error(pred, target)
np.mean(mses)
huber_losses = np.empty(data_len)
for ii, (pred, target) in enumerate(zip(preds_dict.values(), targets.values())):
huber_losses[ii] = np.mean(huber_loss(pred, target))
np.mean(huber_losses)
targets_arr = np.array(targets.values())
targets_arr.shape
preds_arr = np.array(preds_dict.values())
preds_arr.shape
np.mean(huber_loss(y_true=targets_arr, y_pred=preds_arr))
r2_scores = [r2_score(y_true=targets[ind], y_pred=preds_dict[ind])
for ind in range(len(targets))]
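# Added summary: average R^2 over all test sequences (the worst case is inspected below).
np.mean(r2_scores)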
ind = np.argmin(r2_scores)
ind
reals = targets[ind]
preds = preds_dict[ind]
r2_score(y_true=reals, y_pred=preds)
#sns.tsplot(data=dp.inputs[ind].flatten())
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
%%time
dtw_scores = [fastdtw(targets[ind], preds_dict[ind])[0]
for ind in range(len(targets))]
np.mean(dtw_scores)
coint(preds, reals)
cur_ind = np.random.randint(len(targets))
reals = targets[cur_ind]
preds = preds_dict[cur_ind]
fig = plt.figure(figsize=(15,6))
plt.plot(reals, 'b')
plt.plot(preds, 'g')
plt.legend(['reals','preds'])
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In this example, each element of the already-sorted _sorted list is compared with i one by one; if i is larger than every sorted element, it can only go at the end of the sorted list. That forces us to keep an extra state variable, inserted, to record whether the inner loop ran to completion or was cut short by break. In this situation we can use else to replace that state variable:
Step2: I think this is a really cool trick! Note, though, that the else clause is not only reached when the loop finishes without break — it also runs when the loop body never executes at all:
Step3: else in exception handling
Step4: Of course, a state variable could also be used instead of else:
| <ASSISTANT_TASK:>
Python Code:
from random import randrange
def insertion_sort(seq):
if len(seq) <= 1:
return seq
_sorted = seq[:1]
for i in seq[1:]:
inserted = False
for j in range(len(_sorted)):
if i < _sorted[j]:
_sorted = [*_sorted[:j], i, *_sorted[j:]]
inserted = True
break
if not inserted:
_sorted.append(i)
return _sorted
print(insertion_sort([randrange(1, 100) for i in range(10)]))
def insertion_sort(seq):
if len(seq) <= 1:
return seq
_sorted = seq[:1]
for i in seq[1:]:
for j in range(len(_sorted)):
if i < _sorted[j]:
_sorted = [*_sorted[:j], i, *_sorted[j:]]
break
else:
_sorted.append(i)
return _sorted
print(insertion_sort([randrange(1, 100) for i in range(10)]))
while False:
print("Will never print!")
else:
print("Loop failed!")
def divide(x, y):
try:
result = x / y
except ZeroDivisionError:
print("division by 0!")
else:
print("result = {}".format(result))
finally:
print("divide finished!")
divide(5,2)
print("*"*20)
divide(5,0)
def divide(x, y):
result = None
try:
result = x / y
except ZeroDivisionError:
print("division by 0!")
if result is not None:
print("result = {}".format(result))
print("divide finished!")
divide(5,2)
print("*"*20)
divide(5,0)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 2. The data
Step2: The resulting DataFrame contains a row for each user and each column represents an artist. The values indicate whether the user listened to a song by that artist (1) or not (0). Note that the number of times a person listened to a specific artist is not listed.
Step3: The cosine_similarity function returned a 2-dimensional numpy array. This array contains all the similarity values we need, but it is not labelled. Since the entire array will not fit the screen, we will use slicing to print a subset of the result.
Step4: The artist names are both the row and column labels for the similarity_matrix. We can add these labels by creating a new DataFrame based on the numpy array. By using the pandas.DataFrame.iloc integer-location based indexer, we get the same slice as above, but with added labels.
Step5: Pandas also provides a label based indexer, pandas.DataFrame.loc, which we can use to get a slice based on label values.
Step6: As you can see above, bands are 100% similar to themselves and The White Stripes are nothing like Abba.
Step7: To view the first n rows, we can use the pandas.DataFrame.head method, the default value for n is 5.
Step8: Note that we created a MultiIndex by specifying two columns in the set_index call.
Step9: The use of the MultiIndex enables flexible access to the data. If we index with a single artist name, we get all compared artists. To view the last n rows for this result, we can use the pandas.DataFrame.tail method.
Step10: We can index on multiple levels by providing a tuple of indexes
Step11: 4. Picking the best matches
Step13: We can transform the task of getting the most similar bands for a given band to a function.
Step14: Note that we also defined a docstring for this function, which we can view by using help() or shift + tab in a jupyter notebook.
Step15: 5. Get the listening history
Step16: We now have the complete listening history, but we only need the history for the similar artists. For this we can use the index labels from the DataFrame returned by the most_similar_artists function. Index labels for a DataFrame can be retrieved by using the pandas.DataFrame.index attribute.
Step17: We can combine the user id and similar labels in the .loc indexer to get the listening history for the most similar artists.
Step19: Let's make a function to get the most similar artists and their listening history for a given artist and user. The function creates two DataFrames with the same index, and then uses pandas.concat to create a single DataFrame from them.
Step20: 6. Calculate the recommendation score.
Step21: Remember what the DataFrame returned by the most_similar_artists_history function looks like
Step22: Pandas provides methods to do column or row aggregation, like e.g. pandas.DataFrame.product. This method will calculate all values in a column or row. The direction can be chosen with the axis parameter. As we need the product of the values in the rows (similarity * history), we will need to specify axis=1.
Step23: Then there's pandas.DataFrame.sum which does the same thing for summing the values. As we want the sum for all values in the column we would have to specify axis=0. Since 0 is the default value for the axis parameter we don't have to add it to the method call.
Step25: Knowing these methods, it is only a small step to define the scoring function based on the output of most_similar_artists_history.
Step27: Determine artists to recommend
Step29: The last requirement for our recommender engine is a function that can score all unknown artists for a given user. We will make this function return a list of dictionaries, which can be easily converted to a DataFrame later on. The list will be generated using a list comprehension.
Step31: From the scored artists we can easily derive the best recommendations for a given user.
Step32: With this final function, it is a small step to get recommendations for multiple users. As our code hasn't been optimized for performance, it is advised to limit the number of users somewhat.
Step33: We can now use the concat function again to get a nice overview of the recommended artists.
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import pandas as pd
import sklearn.metrics.pairwise
data = pd.read_csv('data/lastfm-matrix-germany.csv').set_index('user')
data.head()
data.shape
### BEGIN SOLUTION
similarity_matrix = sklearn.metrics.pairwise.cosine_similarity(np.transpose(data))
### END SOLUTION
# similarity_matrix = sklearn.metrics.pairwise.cosine_similarity( ? )
assert similarity_matrix.shape == (285, 285)
print(similarity_matrix.ndim)
similarity_matrix[:5, :5]
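# Added check (not part of the exercise): cosine similarity for one artist pair
# computed by hand; it should match the corresponding entry of similarity_matrix.
a = data.loc[:, 'abba'].values
b = data.loc[:, 'madonna'].values
print(a.dot(b) / (np.linalg.norm(a) * np.linalg.norm(b)))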
### BEGIN SOLUTION
artist_similarities = pd.DataFrame(similarity_matrix, index=data.columns, columns=data.columns)
### END SOLUTION
# artist_similarities = pd.DataFrame( ? , index=data.columns, columns= ? )
assert np.array_equal(artist_similarities.columns, data.columns)
assert artist_similarities.shape == similarity_matrix.shape
artist_similarities.iloc[:5, :5]
slice_artists = ['ac/dc', 'madonna', 'metallica', 'rihanna', 'the white stripes']
artist_similarities.loc[slice_artists, slice_artists]
similarities = (
# start from untidy DataFrame
artist_similarities
# add a name to the index
.rename_axis(index='artist')
# artist needs to be a column for melt
.reset_index()
# create the tidy dataset
.melt(id_vars='artist', var_name='compared_with', value_name='cosine_similarity')
# artist compared with itself not needed, keep rows where artist and compared_with are not equal.
.query('artist != compared_with')
# set identifying observations to index
.set_index(['artist', 'compared_with'])
# sort the index
.sort_index()
)
similarities.head()
similarities.index
similarities.loc['the beatles', :].tail()
similarities.loc[('abba', 'madonna'), :]
print(slice_artists)
similarities.loc[('abba', slice_artists), :]
artist = 'a perfect circle'
n_artists = 10
### BEGIN SOLUTION
top_n = similarities.loc[artist, :].sort_values('cosine_similarity').tail(n_artists)
### END SOLUTION
# top_n = similarities.loc[?, :].sort_values('cosine_similarity') ?
print(top_n)
assert len(top_n) == 10
assert type(top_n) == pd.DataFrame
def most_similar_artists(artist, n_artists=10):
Get the most similar artists for a given artist.
Parameters
----------
artist: str
The artist for which to get similar artists
n_artists: int, optional
The number of similar artists to return
Returns
-------
pandas.DataFrame
A DataFrame with the similar artists and their cosine_similarity to
the given artist
### BEGIN SOLUTION
return similarities.loc[artist, :].sort_values('cosine_similarity').tail(n_artists)
### END SOLUTION
# return similarities.loc[ ? ].sort_values( ? ) ?
print(most_similar_artists('a perfect circle'))
assert top_n.equals(most_similar_artists('a perfect circle'))
assert most_similar_artists('abba', n_artists=15).shape == (15, 1)
help(most_similar_artists)
user_id = 42
### BEGIN SOLUTION
user_history = data.loc[user_id, :]
### END SOLUTION
# user_history = data.loc[ ? , ?]
print(user_history)
assert user_history.name == user_id
assert len(user_history) == 285
artist = 'the beatles'
### BEGIN SOLUTION
similar_labels = most_similar_artists(artist).index
### END SOLUTION
# similar_labels = most_similar_artists( ? ). ?
print(similar_labels)
assert len(similar_labels) == 10
assert type(similar_labels) == pd.Index
user_id = 42
### BEGIN SOLUTION
similar_history = data.loc[user_id, similar_labels]
### END SOLUTION
# similar_history = data.loc[?, ?]
assert similar_history.name == user_id
print(similar_history)
def most_similar_artists_history(artist, user_id):
Get most similar artists and their listening history.
Parameters
----------
artist: str
The artist for which to get the most similar bands
user_id: int
The user for which to get the listening history
Returns
-------
pandas.DataFrame
A DataFrame containing the most similar artists for the given artist,
with their cosine similarities and their listening history status for
the given user.
### BEGIN SOLUTION
artists = most_similar_artists(artist)
history = data.loc[user_id, artists.index].rename('listening_history')
### END SOLUTION
# artists = most_similar_artists( ? )
# history = data.loc[ ? , ? ].rename('listening_history')
return pd.concat([artists, history], axis=1)
example = most_similar_artists_history('abba', 42)
assert example.columns.to_list() == ['cosine_similarity', 'listening_history']
example
listening_history = np.array([0, 1, 0])
similarity_scores = np.array([0.3, 0.2, 0.1])
recommendation_score = sum(listening_history * similarity_scores) / sum(similarity_scores)
print(f'{recommendation_score:.3f}')
user_id = 42
artist = 'abba'
most_similar_artists_history(artist, user_id)
most_similar_artists_history(artist, user_id).product(axis=1)
most_similar_artists_history(artist, user_id).product(axis=1).sum()
def recommendation_score(artist, user_id):
Calculate recommendation score.
Parameters
----------
artist: str
The artist for which to calculate the recommendation score.
user_id: int
The user for which to calculate the recommendation score.
Returns:
float
Recommendation score
df = most_similar_artists_history(artist, user_id)
### BEGIN SOLUTION
return df.product(axis=1).sum() / df.loc[:, 'cosine_similarity'].sum()
### END SOLUTION
# return df.?(axis=1).?() / df.loc[:, ? ].sum()
assert np.allclose(recommendation_score('abba', 42), 0.08976655361839528)
assert np.allclose(recommendation_score('the white stripes', 1), 0.09492796371597861)
recommendation_score('abba', 42)
def unknown_artists(user_id):
Get artists the user hasn't listened to.
Parameters
----------
user_id: int
User for which to get unknown artists
Returns
-------
pandas.Index
Collection of artists the user hasn't listened to.
### BEGIN SOLUTION
history = data.loc[user_id, :]
return history.loc[history == 0].index
### END SOLUTION
# history = data.loc[ ? , :]
# return history.loc[ ? == 0].index
print(unknown_artists(42))
assert len(unknown_artists(42)) == 278
assert type(unknown_artists(42)) == pd.Index
def score_unknown_artists(user_id):
Score all unknown artists for a given user.
Parameters
----------
user_id: int
User for which to get unknown artists
Returns
-------
list of dict
A list of dictionaries.
### BEGIN SOLUTION
artists = unknown_artists(user_id)
return [{'recommendation': artist, 'score': recommendation_score(artist, user_id)} for artist in artists]
### END SOLUTION
# artists = unknown_artists( ? )
# return [{'recommendation': artist, 'score': recommendation_score( ? , user_id)} for artist in ?]
assert np.allclose(score_unknown_artists(42)[1]['score'], 0.08976655361839528)
assert np.allclose(score_unknown_artists(313)[137]['score'], 0.20616395469219984)
score_unknown_artists(42)[:5]
def user_recommendations(user_id, n_rec=5):
Recommend new artists for a user.
Parameters
----------
user_id: int
User for which to get recommended artists
n_rec: int, optional
Number of recommendations to make
Returns
-------
pandas.DataFrame
A DataFrame containing artist recommendations for the given user,
with their recommendation score.
scores = score_unknown_artists(user_id)
### BEGIN SOLUTION
return (
pd.DataFrame(scores)
.sort_values('score', ascending=False)
.head(n_rec)
.reset_index(drop=True)
)
### END SOLUTION
# return (
# pd.DataFrame( ? )
# .sort_values( ? , ascending=False)
# . ? (n_rec)
# .reset_index(drop=True)
# )
assert user_recommendations(313).loc[4, 'recommendation'] == 'jose gonzalez'
assert len(user_recommendations(1, n_rec=10)) == 10
user_recommendations(642)
recommendations = [user_recommendations(user).loc[:, 'recommendation'].rename(user) for user in data.index[:10]]
np.transpose(pd.concat(recommendations, axis=1))
g_s = most_similar_artists_history('gorillaz', 642).assign(sim2 = lambda x: x.product(axis=1))
r_1 = g_s.sim2.sum()
total = g_s.cosine_similarity.sum()
print(total)
r_1/total
g_s
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Comparing distributions
Step2: Based on the data, the distribution for Rhode is slightly farther right than the distribution for Wei, but there is a lot of overlap.
Step3: To compute the probability that Rhode actually has a higher value of p, there are two options
Step4: Beta also provides Sample, which returns a NumPy array, so we can perform the comparisons using array operations
Step5: The other option is to make Pmf objects that approximate the Beta distributions, and enumerate pairs of values
Step6: Exercise
Step7: flip returns True with probability p and False with probability 1-p
Step8: Or, realizing that the distribution of k is binomial, we can simplify the code using NumPy
Step9: Alternatively, we can make a mixture that represents the distribution of k, taking into account our uncertainty about x
Step10: Alternatively, we could use MakeMixture
Step11: Here's how we use it.
Step12: Exercise
Step13: Or we could use Sample and NumPy
Step14: Alternatively, we could compute the distribution of the sum by enumeration
Step15: Here's how it's used
Step16: The Pmf class provides a + operator that does the same thing.
Step17: Exercise
Step18: Distribution of maximum
Step19: And here's a version using NumPy. I'll generate an array with 6 rows and 1000 columns (one column per iteration)
Step20: Compute the maximum in each column
Step21: And then plot the distribution of maximums
Step22: Or we can figure it out analytically. If the maximum is less-than-or-equal-to some value k, all 6 random selections must be less-than-or-equal-to k, so the CDF of the maximum is the original CDF raised to the 6th power.
Step23: Exercise
| <ASSISTANT_TASK:>
Python Code:
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import classes from thinkbayes2
from thinkbayes2 import Hist, Pmf, Suite, Beta
import thinkplot
import numpy as np
rhode = Beta(1, 1, label='Rhode')
rhode.Update((22, 11))
wei = Beta(1, 1, label='Wei')
wei.Update((21, 12))
thinkplot.Pdf(rhode.MakePmf())
thinkplot.Pdf(wei.MakePmf())
thinkplot.Config(xlabel='x', ylabel='Probability')
iters = 1000
count = 0
for _ in range(iters):
x1 = rhode.Random()
x2 = wei.Random()
if x1 > x2:
count += 1
count / iters
rhode_sample = rhode.Sample(iters)
wei_sample = wei.Sample(iters)
np.mean(rhode_sample > wei_sample)
def ProbGreater(pmf1, pmf2):
total = 0
for x1, prob1 in pmf1.Items():
for x2, prob2 in pmf2.Items():
if x1 > x2:
total += prob1 * prob2
return total
pmf1 = rhode.MakePmf(1001)
pmf2 = wei.MakePmf(1001)
ProbGreater(pmf1, pmf2)
pmf1.ProbGreater(pmf2)
pmf1.ProbLess(pmf2)
import random
def flip(p):
return random.random() < p
iters = 1000
wins = 0
losses = 0
for _ in range(iters):
x1 = rhode.Random()
x2 = wei.Random()
count1 = count2 = 0
for _ in range(25):
if flip(x1):
count1 += 1
if flip(x2):
count2 += 1
if count1 > count2:
wins += 1
if count1 < count2:
losses += 1
wins/iters, losses/iters
rhode_rematch = np.random.binomial(25, rhode_sample)
thinkplot.Hist(Pmf(rhode_rematch))
wei_rematch = np.random.binomial(25, wei_sample)
np.mean(rhode_rematch > wei_rematch)
np.mean(rhode_rematch < wei_rematch)
from thinkbayes2 import MakeBinomialPmf
def MakeBinomialMix(pmf, label=''):
mix = Pmf(label=label)
for x, prob in pmf.Items():
binom = MakeBinomialPmf(n=25, p=x)
for k, p in binom.Items():
mix[k] += prob * p
return mix
rhode_rematch = MakeBinomialMix(rhode.MakePmf(), label='Rhode')
wei_rematch = MakeBinomialMix(wei.MakePmf(), label='Wei')
thinkplot.Pdf(rhode_rematch)
thinkplot.Pdf(wei_rematch)
thinkplot.Config(xlabel='hits')
rhode_rematch.ProbGreater(wei_rematch), rhode_rematch.ProbLess(wei_rematch)
from thinkbayes2 import MakeMixture
def MakeBinomialMix2(pmf):
binomials = Pmf()
for x, prob in pmf.Items():
binom = MakeBinomialPmf(n=25, p=x)
binomials[binom] = prob
return MakeMixture(binomials)
rhode_rematch = MakeBinomialMix2(rhode.MakePmf())
wei_rematch = MakeBinomialMix2(wei.MakePmf())
rhode_rematch.ProbGreater(wei_rematch), rhode_rematch.ProbLess(wei_rematch)
iters = 1000
pmf = Pmf()
for _ in range(iters):
k = rhode_rematch.Random() + wei_rematch.Random()
pmf[k] += 1
pmf.Normalize()
thinkplot.Hist(pmf)
ks = rhode_rematch.Sample(iters) + wei_rematch.Sample(iters)
pmf = Pmf(ks)
thinkplot.Hist(pmf)
def AddPmfs(pmf1, pmf2):
pmf = Pmf()
for v1, p1 in pmf1.Items():
for v2, p2 in pmf2.Items():
pmf[v1 + v2] += p1 * p2
return pmf
pmf = AddPmfs(rhode_rematch, wei_rematch)
thinkplot.Pdf(pmf)
pmf = rhode_rematch + wei_rematch
thinkplot.Pdf(pmf)
# Solution
pmf = rhode_rematch - wei_rematch
thinkplot.Pdf(pmf)
# Solution
# On average, we expect Rhode to win by about 1 clay.
pmf.Mean(), pmf.Median(), pmf.Mode()
# Solution
# But there is, according to this model, a 2% chance that she could win by 10.
sum([p for (x, p) in pmf.Items() if x >= 10])
iters = 1000
pmf = Pmf()
for _ in range(iters):
ks = rhode_rematch.Sample(6)
pmf[max(ks)] += 1
pmf.Normalize()
thinkplot.Hist(pmf)
iters = 1000
ks = rhode_rematch.Sample((6, iters))
ks
maxes = np.max(ks, axis=0)
maxes[:10]
pmf = Pmf(maxes)
thinkplot.Hist(pmf)
pmf = rhode_rematch.Max(6).MakePmf()
thinkplot.Hist(pmf)
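# Analytic version (added sketch): if the maximum of k draws is <= x, then all k
# draws are <= x, so CDF_max(x) = CDF(x)**k. This should match the built-in Max used above.
def MaxCdf(pmf, k):
    cdf = pmf.MakeCdf()
    cdf.ps = cdf.ps ** k
    return cdf

pmf_max = MaxCdf(rhode_rematch, 6).MakePmf()
thinkplot.Hist(pmf_max)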
def Min(pmf, k):
cdf = pmf.MakeCdf()
cdf.ps = 1 - (1-cdf.ps)**k
return cdf
pmf = Min(rhode_rematch, 6).MakePmf()
thinkplot.Hist(pmf)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Let's show the symbols data, to see how good the recommender has to be.
Step2: Let's run the trained agent, with the test set
Step3: And now a "realistic" test, in which the learner continues to learn from past samples in the test set (it even makes some random moves, though very few).
Step4: What are the metrics for "holding the position"?
| <ASSISTANT_TASK:>
Python Code:
# Basic imports
import os
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import datetime as dt
import scipy.optimize as spo
import sys
from time import time
from sklearn.metrics import r2_score, median_absolute_error
from multiprocessing import Pool
import pickle
%matplotlib inline
%pylab inline
pylab.rcParams['figure.figsize'] = (20.0, 10.0)
%load_ext autoreload
%autoreload 2
sys.path.append('../../')
import recommender.simulator as sim
from utils.analysis import value_eval
from recommender.agent_predictor import AgentPredictor
from functools import partial
from sklearn.externals import joblib
def show_results(results_list, data_in_df, graph=False):
for values in results_list:
total_value = values.sum(axis=1)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(total_value))))
print('-'*100)
initial_date = total_value.index[0]
compare_results = data_in_df.loc[initial_date:, 'Close'].copy()
compare_results.name = SYMBOL
compare_results_df = pd.DataFrame(compare_results)
compare_results_df['portfolio'] = total_value
std_comp_df = compare_results_df / compare_results_df.iloc[0]
if graph:
plt.figure()
std_comp_df.plot()
NUM_THREADS = 1
LOOKBACK = 252*3
STARTING_DAYS_AHEAD = 252
POSSIBLE_FRACTIONS = [0.0, 1.0]
DYNA = 20
BASE_DAYS = 112
# Get the data
SYMBOL = 'SPY'
total_data_train_df = pd.read_pickle('../../data/data_train_val_df.pkl').stack(level='feature')
data_train_df = total_data_train_df[SYMBOL].unstack()
total_data_test_df = pd.read_pickle('../../data/data_test_df.pkl').stack(level='feature')
data_test_df = total_data_test_df[SYMBOL].unstack()
if LOOKBACK == -1:
total_data_in_df = total_data_train_df
data_in_df = data_train_df
else:
data_in_df = data_train_df.iloc[-LOOKBACK:]
total_data_in_df = total_data_train_df.loc[data_in_df.index[0]:]
# Crop the final days of the test set as a workaround to make dyna work
# (the env, only has the market calendar up to a certain time)
data_test_df = data_test_df.iloc[:-DYNA]
total_data_test_df = total_data_test_df.loc[:data_test_df.index[-1]]
# Create many agents
index = np.arange(NUM_THREADS).tolist()
env, num_states, num_actions = sim.initialize_env(total_data_in_df,
SYMBOL,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS)
estimator_close = joblib.load('../../data/best_predictor.pkl')
estimator_volume = joblib.load('../../data/best_volume_predictor.pkl')
agents = [AgentPredictor(num_states=num_states,
num_actions=num_actions,
random_actions_rate=0.98,
random_actions_decrease=0.999,
dyna_iterations=DYNA,
name='Agent_{}'.format(i),
estimator_close=estimator_close,
estimator_volume=estimator_volume,
env=env,
prediction_window=BASE_DAYS) for i in index]
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_in_df['Close'].iloc[STARTING_DAYS_AHEAD:]))))
# Simulate (with new envs, each time)
n_epochs = 4
for i in range(n_epochs):
tic = time()
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL,
agents[0],
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_in_df)
env.reset(STARTING_DAYS_AHEAD)
results_list = sim.simulate_period(total_data_in_df,
SYMBOL, agents[0],
learn=False,
starting_days_ahead=STARTING_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
other_env=env)
show_results([results_list], data_in_df, graph=True)
import pickle
with open('../../data/dyna_q_with_predictor.pkl', 'wb') as best_agent:
pickle.dump(agents[0], best_agent)
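# Added note: the saved agent can later be restored with pickle.load, e.g.
# with open('../../data/dyna_q_with_predictor.pkl', 'rb') as f:
#     best_agent = pickle.load(f)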
TEST_DAYS_AHEAD = 112
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=False,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
env.set_test_data(total_data_test_df, TEST_DAYS_AHEAD)
agents[0].env = env
tic = time()
results_list = sim.simulate_period(total_data_test_df,
SYMBOL,
agents[0],
learn=True,
starting_days_ahead=TEST_DAYS_AHEAD,
possible_fractions=POSSIBLE_FRACTIONS,
verbose=False,
other_env=env)
toc = time()
print('Epoch: {}'.format(i))
print('Elapsed time: {} seconds.'.format((toc-tic)))
print('Random Actions Rate: {}'.format(agents[0].random_actions_rate))
show_results([results_list], data_test_df, graph=True)
print('Sharpe ratio: {}\nCum. Ret.: {}\nAVG_DRET: {}\nSTD_DRET: {}\nFinal value: {}'.format(*value_eval(pd.DataFrame(data_test_df['Close'].iloc[TEST_DAYS_AHEAD:]))))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: What do we mean when we say local scope?
Step2: What would change if you added the line global name into the function update_name?
| <ASSISTANT_TASK:>
Python Code:
initial_var = 358645317684531432678
name = "Oi, you there"
def update_name():
name = "what?"
print(name)
update_name()
print(name)
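# Added sketch of the variant the question above asks about: with `global name`,
# the assignment inside the function rebinds the module-level variable instead
# of creating a new local one.
def update_name_global():
    global name
    name = "what?"
    print(name)

update_name_global()
print(name)  # the module-level name is now "what?" too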
num_one = None
num_two = None
def power(one, two):
pass
from nose.tools import assert_equal, assert_not_equal
assert_equal(power(12,2), 144)
assert_equal(power(50, 4), 6250000)
assert_equal(power(2,2), 4)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here's the Rosenbrock function in code. Since we're pretending it's the log-posterior, I've introduced a minus sign that doesn't normally appear.
Step2: Let's plot "standard delta-log-pdf" contours, just to get a feel for the shape of the function.
Step3: That's one ugly banana! It's worth noting that, in less extreme cases, cleverly re-parametrizing your model can be a significant help, for example sampling the logarithm of a parameter with a large dynamic range. As you'll see in the homework, however, it's important to be aware of the effect re-parametrizations have on the prior if not done carefully. As you can see below, considering $\ln(y)$ instead of $y$ makes part of the posterior distribution look easier to deal with... but at the cost of introducing another little problem.
Step4: Eggbox function
Step5: Yikes! Lots of well-separated peaks in the posterior distribution. To converge properly, chains need to be able to move between them, which is clearly a challenge for the approach we've used so far.
| <ASSISTANT_TASK:>
Python Code:
from IPython.display import Image
Image(filename="DifficultDensities_banana_eg.png", width=350)
def Rosenbrock_lnP(x, y, a=1.0, b=100.0):
return -( (a-x)**2 + b*(y-x**2)**2 )
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (8.0, 8.0)
xs = np.arange(-2.0, 3.0, 0.0025)
ys = np.arange(0.0, 5.0, 0.0025)
zs = np.array([Rosenbrock_lnP(xs,y) for y in ys])
plt.contour(xs, ys, -2.0*zs, levels=[2.3, 6.18, 11.8]);
xs = np.arange(-2.0, 3.0, 0.0025)
lnys = np.arange(-3.0, 1.6, 0.0025)
zs = np.array([Rosenbrock_lnP(xs,y) for y in np.exp(lnys)])
plt.contour(xs, lnys, -2.0*zs, levels=[2.3, 6.18, 11.8]);
def eggbox_lnP(x, y):
return (2.0 + np.cos(0.5*x)*np.cos(0.5*y))**3
xs = np.arange(0.0, 30.0, 0.1)
ys = np.arange(0.0, 30.0, 0.1)
zs = np.array([eggbox_lnP(xs,y) for y in ys])
plt.contour(xs, ys, -2.0*(zs-np.max(zs)), levels=[2.3, 6.18, 11.8]);
Image(filename="DifficultDensities_multimodes_eg.png", width=350)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: As always, let's do imports and initialize a logger and a new Bundle. See Building a System for more details.
Step2: Dataset Parameters
Step3: times
Step4: Compute Options
Step5: dynamics_method
Step6: The 'dynamics_method' parameter controls how stars and components are placed in the coordinate system as a function of time and has several choices
Step7: The 'ltte' parameter sets whether light travel time effects (Roemer delay) are included. If set to False, the positions and velocities are returned as they actually are for that given object at that given time. If set to True, they are instead returned as they were or will be when their light reaches the origin of the coordinate system.
Step8: Plotting
Step9: As always, you have access to any of the arrays for either axes, so if you want to plot 'vxs' vs 'times'
Step10: 3d axes are not yet supported for orbits, but hopefully will be soon.
| <ASSISTANT_TASK:>
Python Code:
!pip install -I "phoebe>=2.0,<2.1"
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger()
b = phoebe.default_binary()
ps, constraints = phoebe.dataset.orb()
print ps
print ps['times']
ps_compute = phoebe.compute.phoebe()
print ps_compute
print ps_compute['dynamics_method']
print ps_compute['ltte']
b.add_dataset('orb', times=np.linspace(0,3,201))
b.run_compute()
b['orb@model'].twigs
print b['times@primary@orb01@orb@model']
print b['xs@primary@orb01@orb@model']
print b['vxs@primary@orb01@orb@model']
axs, artists = b['orb@model'].plot()
axs, artists = b['orb@model'].plot(x='times', y='vxs')
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
axs, artists = b['orb@model'].plot(xlim=(-4,4), ylim=(-4,4), zlim=(-4,4))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
| <ASSISTANT_TASK:>
Python Code:
mod = 1000000007

def waysToColor(arr, n, k):
    global mod
    # binomial coefficients C[i][j] modulo mod
    powOf2 = [0 for i in range(500)]
    c = [[0 for i in range(500)] for j in range(500)]
    for i in range(n + 1):
        c[i][0] = 1
        for j in range(1, i + 1):
            c[i][j] = (c[i - 1][j] + c[i - 1][j - 1]) % mod
    # powers of two modulo mod (powOf2[0] and powOf2[1] are both 1 by construction)
    powOf2[0] = 1
    powOf2[1] = 1
    for i in range(2, n + 1):
        powOf2[i] = (powOf2[i - 1] * 2) % mod
    rem = n - k
    arr[k] = n + 1  # sentinel position after the last fixed cell
    k += 1
    arr.sort()
    answer = 1
    for i in range(k):
        # x = number of free cells in the current gap
        if i - 1 >= 0:
            x = arr[i] - arr[i - 1] - 1
        else:
            x = arr[i] - 1
        answer = answer * (c[rem][x] % mod) * \
            ((powOf2[x] if (i != 0 and i != k - 1) else 1)) % mod
        rem -= x
    return answer

if __name__ == '__main__':
    N = 6
    K = 3
    arr = [1, 2, 6, 0]
    print(waysToColor(arr, N, K))
<END_TASK>
|
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Basic matrix arithmetics like
Step2: In mathematics, the dot product is an algebraic operation that takes two coordinate vectors of equal size and returns a single number. The result is calculated by multiplying corresponding entries and adding up those products. The name "dot product" stems from the fact that the centered dot "·" is often used to designate this operation.
Step3: Matrices
Step4: If we want to perform matrix multiplication with two numpy arrays (ndarray), we have to use the dot product
Step5: Alternatively, we can cast them into matrix objects and use the "*" operator
Step6: In the following practical example, we come to talk about the sweet things of life.
Step7: This means that Lucas paid 13.98 CAD, Mia 11.91 CAD, Leon 9.90 and Hannah 9.75.
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
x = np.array([1,5,2])
y = np.array([7,4,1])
x + y
x * y
x - y
x / y
x % y
x = np.array([1,2,3])
y = np.array([-7,8,9])
dot = np.dot(x,y)
np.dot(x,y)
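# The same dot product spelled out element by element (added illustration):
sum(a * b for a, b in zip(x, y))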
x = np.array( ((2,3), (3, 5)) )
y = np.array( ((1,2), (5, -1)) )
x * y
x = np.matrix( ((2,3), (3, 5)) )
y = np.matrix( ((1,2), (5, -1)) )
x * y
x = np.array( ((2,3), (3, 5)) )
y = np.array( ((1,2), (5, -1)) )
np.dot(x,y)
np.mat(x) * np.mat(y)
NumPersons = np.array([[100,175,210],[90,160,150],[200,50,100],[120,0,310]])
Price_per_100_g = np.array([2.98,3.90,1.99])
Price_in_Cent = np.dot(NumPersons,Price_per_100_g)
Price_in_CAD = Price_in_Cent / np.array([100,100,100,100])
Price_in_CAD
H = np.matrix( ((3,-1,0), (-1,3,0), (0,0,5)) )
eigenvalues, eigenvectors = np.linalg.eig(H)
eigenvalues
determinant = np.linalg.det(H)
determinant
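# Added check: the determinant should equal the product of the eigenvalues.
np.prod(eigenvalues)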
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Setup the axon code
Step3: Single example axon, how does a0 (initial voltage) drift with the number of nodes?
Step4: Now let's run axon for a few n choices over many iterations j, reproducing the major figure in Tee's SFN abstract.
Step5: In a biologically constrained parameterization Tee's effect holds up perfectly.
Step6: The loss function
Step7: FORCE does a pretty nice job learning how to be a sin wave. If you rerun this a few times, you'll see the quality of the fits varies. Such is working with randomness and chaos.
Step8: Binary FORCE
Step9: Here's what the binary version looks like. I use randomly (uniformly) selected thresholds to convert from rates to binary codes. I don't have any idea how this really works, so random seems as good a guess as any.
Step10: The binary loss function
Step11: Besides some small fluctuations, the binary version does nearly as well as the real-valued version above!
| <ASSISTANT_TASK:>
Python Code:
import pylab as plt
import numpy as np
%matplotlib inline
from __future__ import division
from scipy.integrate import odeint,ode
from numpy import zeros,ones,eye,tanh,dot,outer,sqrt,linspace,cos,pi,hstack,zeros_like,abs,repeat
from numpy.random import uniform,normal,choice
%config InlineBackend.figure_format = 'retina'
from numpy.random import normal
from numpy import asarray
def axon(a0, n, sigma_l, sigma_b, min_v=-0.07, max_v=-0.04):
    """A Tee-type axon.

    An initial voltage a0 is sequentially corrupted
    by additive noise even when signal boosting
    is perfect.

    Params
    ------
    a0 : scalar
        The initial voltage
    n : scalar
        Number of nodes
    sigma_l : scalar
        Std dev of the leak
    sigma_b : scalar
        Std dev of the boost (should be very small)
    min_v : scalar
        Min voltage (-0.07 V)
    max_v : scalar
        Max voltage (-0.04 V)
    """
if a0 > max_v:
raise ValueError("a0 must be smaller than {0}".format(max_v))
if a0 < min_v:
raise ValueError("a0 must be larger than {0}".format(min_v))
ax = [a0, ]
for i in range(1, n+1):
ax.append(
ax[-1] +
normal(loc=0.0, scale=sigma_l, size=1)[0] +
normal(loc=0.0, scale=sigma_b, size=1)[0]
)
ax = asarray(ax)
# Clip
ax[ax < min_v] = min_v
ax[ax > max_v] = max_v
return ax
a0 = -0.06 # 60 mV
n = 100 # 100 nodes of rav
plt.plot(axon(a0, n, 0.001, 0.00001), color='k', label="red = a0")
plt.ylabel("Membrane voltage (V)")
plt.xlabel("Number of nodes")
plt.ylim(-0.07, -0.04)
plt.hlines(a0, 0, n, color='r')
plt.legend()
# Params
a0 = -0.06
ns = [2, 10, 100]
js = range(500)
sigma_l = 0.01 / 10 # small
sigma_b = 0.01 / 1000 # tiny (APs are stereotyped!)
# Run, saving in res indexed by `n`.
res = {}
for n in ns:
res[n] = [axon(a0, n, sigma_l, sigma_b)[-1] for j in js]
# Plot
plt.figure(figsize=(4,10))
hieght = 100
plt.subplot(311)
plt.hist(res[2], color="k", label="n = 1\nred = a0", bins=100, range=(-0.07, -0.04))
plt.vlines(a0, 0, 30)
plt.xlim(-0.07, -0.04)
plt.vlines(a0, 0, hieght, color='r')
plt.ylim(0, hieght)
plt.legend()
plt.subplot(312)
plt.hist(res[10], color="k", label="n = 10", bins=100, range=(-0.07, -0.04))
plt.vlines(a0, 0, 30)
plt.xlim(-0.07, -0.04)
plt.vlines(a0, 0, hieght, color='r')
plt.ylim(0, hieght)
plt.ylabel("Count")
plt.legend()
plt.subplot(313)
plt.hist(res[100], color="k", label="n = 100", bins=100, range=(-0.07, -0.04))
plt.vlines(a0, 0, hieght, color='r')
plt.xlim(-0.07, -0.04)
plt.ylim(0, hieght)
plt.xlabel("Membrane voltage")
plt.legend()
def f1(x,t0):
return -x + g*dot(J,tanh(x))
N = 100
J = normal(0,sqrt(1/N),(N,N))
x0 = uniform(-0.8,0.3,N)
t = linspace(0,50,1000)
plt.figure(figsize=(10,5))
for s,g in enumerate(linspace(0.5,1.5,3)):
plt.subplot(1,3,s+1)
x = odeint(f1,x0,t)
plt.plot(t,x[:,choice(N,10)])
plt.title('g = '+str(g),fontweight='bold')
plt.show()
x.shape
target = lambda t0: cos(2 * pi * t0 / 50) # target pattern
def f3(t0, x):
return -x + g * dot(J, tanh_x) + dot(w, tanh_x) * u
dt = 1 # time step
tmax = 800 # simulation length
tstop = 300
N = 300
J = normal(0, sqrt(1 / N), (N, N))
x0 = uniform(-0.5, 0.5, N)
t = linspace(0, 50, 500)
g = 1.5
u = uniform(-1, 1, N)
w = uniform(-1 / sqrt(N), 1 / sqrt(N), N) # initial weights
P = eye(N) # Running estimate of the inverse correlation matrix
lr = 1.0 # learning rate
# simulation data: state, output, time, weight updates
x, z, t, wu = [x0], [], [0], [0]
# response stats
x_stats = []
# Set up ode solver
solver = ode(f3)
solver.set_initial_value(x0)
# Integrate ode, update weights, repeat
while t[-1] < tmax:
tanh_x = tanh(x[-1]) # cache
z.append(dot(w, tanh_x))
error = target(t[-1]) - z[-1]
q = dot(P, tanh_x)
c = lr / (1 + dot(q, tanh_x))
P = P - c * outer(q, q)
w = w + c * error * q
# Stop leaning here
if t[-1] > tstop:
lr = 0
wu.append(np.sum(np.abs(c * error * q)))
solver.integrate(solver.t + dt)
res = solver.y
x.append(res)
x_stats.append([np.min(res), np.max(res), np.mean(res), np.var(res)])
t.append(solver.t)
# last update for readout neuron
z.append(dot(w, tanh_x))
x = np.array(x)
t = np.array(t)
plt.figure(figsize=(10, 5))
plt.subplot(2, 1, 1)
plt.plot(t, target(t), '-r', lw=2)
plt.plot(t, z, '-b')
plt.legend(('target', 'output'))
plt.ylim([-1.1, 3])
plt.xticks([])
plt.subplot(2, 1, 2)
plt.plot(t, wu, '-k')
plt.yscale('log')
plt.ylabel('$|\Delta w|$', fontsize=20)
plt.xlabel('time', fontweight='bold', fontsize=16)
plt.show()
x_stats = np.array(x_stats).T
# X(t) Reponse Stats
plt.figure(figsize=(16, 4))
plt.subplot(1,4,1)
plt.hist(x_stats[0], bins=20, alpha=.5)
plt.title(r'$x(t)$ Min')
plt.subplot(1,4,2)
plt.hist(x_stats[1], bins=20, alpha=.5)
plt.title(r'$x(t)$ Max')
plt.subplot(1,4,3)
plt.hist(x_stats[2], bins=20, alpha=.5)
plt.title(r'$x(t)$ Mean');
plt.subplot(1,4,4)
plt.hist(x_stats[3], bins=20, alpha=.5)
plt.title(r'$x(t)$ Variance');
print ("MEAN of Statistics:")
print ("Min [ x(t) ] = %.4f " % np.mean(x_stats[0]))
print ("Max [ x(t) ] = %.4f " % np.mean(x_stats[1]))
print ("Mean [ x(t) ] = %.4f " % np.mean(x_stats[2]))
print ("Var [ x(t) ] = %.4f " % np.mean(x_stats[3]))
def decode(x, rho):
xd = zeros_like(x)
xd[x > rho] = 1
xd[x < -rho] = -1
return xd
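# Hedged sanity check (example values chosen arbitrarily): decode() maps activations to
# -1, 0 or 1 depending on whether they fall below -rho, inside [-rho, rho], or above rho.
print(decode(np.array([-0.9, 0.05, 0.7]), 0.1))  # expect: [-1.  0.  1.]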
def f1(x,t0):
return -x + g*dot(J,tanh(x))
N = 500
J = normal(0,sqrt(1/N),(N,N))
x0 = uniform(-0.5,0.5,N)
t = linspace(0,50,500)
rho = uniform(0,0.1,N) # Rand thresholds!
plt.figure(figsize=(10,5))
for s,g in enumerate(linspace(0.5,1.5,3)):
plt.subplot(1,3,s+1)
x = odeint(f1,x0,t)
xd = decode(x, rho)
plt.plot(t,xd[:,choice(N,10)])
plt.title('g = '+str(g),fontweight='bold')
plt.ylim(-2,2)
plt.show()
target = lambda t0: cos(2 * pi * t0 / 50) # target pattern
def f3(t0, x):
return -x + g * dot(J, tanh_x) + dot(w, tanh_x) * u
dt = 1 # time step
tmax = 800 # simulation length
tstop = 500
N = 300
J = normal(0, sqrt(1 / N), (N, N))
x0 = uniform(-0.5, 0.5, N)
t = linspace(0, 50, 500)
rho = 0.1 # Set and rand vec
rho = uniform(0, 0.5, N)
g = 1.5
u = uniform(-1, 1, N)
w = uniform(-1 / sqrt(N), 1 / sqrt(N), N) # initial weights
P = eye(N) # Running estimate of the inverse correlation matrix
lr = .4 # learning rate
rho = repeat(.5, N)
# simulation data: state,
# output, time, weight updates
x, z, t, wu = [x0], [], [0], [0]
# Set up ode solver
solver = ode(f3)
solver.set_initial_value(x0)
# Integrate ode, update weights, repeat
while t[-1] < tmax:
tanh_x = tanh(x[-1])
tanh_xd = decode(tanh_x, rho) # BINARY CODE INTRODUCED HERE!
z.append(dot(w, tanh_xd))
error = target(t[-1]) - z[-1]
q = dot(P, tanh_xd)
c = lr / (1 + dot(q, tanh_xd))
P = P - c * outer(q, q)
w = w + c * error * q
# Stop training time
if t[-1] > tstop:
lr = 0
wu.append(np.sum(np.abs(c * error * q)))
solver.integrate(solver.t + dt)
x.append(solver.y)
t.append(solver.t)
# last update for readout neuron
z.append(dot(w, tanh_x))
# plot
x = np.array(x)
t = np.array(t)
plt.figure(figsize=(10, 5))
plt.subplot(2, 1, 1)
plt.plot(t, target(t), '-r', lw=2)
plt.plot(t, z, '-b')
plt.legend(('target', 'output'))
plt.ylim([-1.1, 3])
plt.xticks([])
plt.subplot(2, 1, 2)
plt.plot(t, wu, '-k')
plt.yscale('log')
plt.ylabel('$|\Delta w|$', fontsize=20)
plt.xlabel('time', fontweight='bold', fontsize=16)
plt.show()
repeat?
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Install and import TFX
Step2: Please ignore the incompatibility error and warnings. Make sure to re-run the cell.
Step3: Import the MLMD library.
Step4: Download the dataset
Step5: Create an InteractiveContext
Step6: Construct the TFX Pipeline
Step7: Instantiate and run the StatisticsGen Component
Step8: Instantiate and run the SchemaGen Component
Step9: Instantiate and run the Trainer Component
Step10: Run the Trainer component.
Step11: Evaluate and push the model
Step12: Running the TFX pipeline populates the MLMD Database. In the next section, you use the MLMD API to query this database for metadata information.
Step13: Create some helper functions to view the data from the MD store.
Step14: First, query the MD store for a list of all its stored ArtifactTypes.
Step15: Next, query all PushedModel artifacts.
Step16: Query the MD store for the latest pushed model. This notebook has only one pushed model.
Step17: One of the first steps in debugging a pushed model is to look at which trained model is pushed and to see which training data is used to train that model.
Step18: Query the parent artifacts for the pushed model.
Step19: Query the properties for the model.
Step20: Query the upstream artifacts for the model.
Step21: Get the training data the model trained with.
Step22: Now that you have the training data that the model trained with, query the database again to find the training step (execution). Query the MD store for a list of the registered execution types.
Step23: The training step is the ExecutionType named tfx.components.trainer.component.Trainer. Traverse the MD store to get the trainer run that corresponds to the pushed model.
| <ASSISTANT_TASK:>
Python Code:
!pip install --upgrade pip
!pip install -q -U tfx
import os
import tempfile
import urllib
import pandas as pd
import tensorflow_model_analysis as tfma
from tfx.orchestration.experimental.interactive.interactive_context import InteractiveContext
from tfx import v1 as tfx
print('TFX version: {}'.format(tfx.__version__))
import ml_metadata as mlmd
print('MLMD version: {}'.format(mlmd.__version__))
from ml_metadata.proto import metadata_store_pb2
DATA_PATH = 'https://raw.githubusercontent.com/tensorflow/tfx/master/tfx/examples/penguin/data/labelled/penguins_processed.csv'
_data_root = tempfile.mkdtemp(prefix='tfx-data')
# Join various path components
_data_filepath = # TODO: Your code goes here
urllib.request.urlretrieve(DATA_PATH, _data_filepath)
interactive_context = # TODO: Your code goes here
example_gen = # TODO: Your code goes here
interactive_context.run(example_gen)
statistics_gen = # TODO: Your code goes here
interactive_context.run(statistics_gen)
infer_schema = # TODO: Your code goes here
interactive_context.run(infer_schema)
# Define the module file for the Trainer component
trainer_module_file = 'penguin_trainer.py'
%%writefile {trainer_module_file}
# Define the training algorithm for the Trainer module file
import os
from typing import List, Text
import tensorflow as tf
from tensorflow import keras
from tfx import v1 as tfx
from tfx_bsl.public import tfxio
from tensorflow_metadata.proto.v0 import schema_pb2
# Features used for classification - culmen length and depth, flipper length,
# body mass, and species.
_LABEL_KEY = 'species'
_FEATURE_KEYS = [
'culmen_length_mm', 'culmen_depth_mm', 'flipper_length_mm', 'body_mass_g'
]
def _input_fn(file_pattern: List[Text],
data_accessor: tfx.components.DataAccessor,
schema: schema_pb2.Schema, batch_size: int) -> tf.data.Dataset:
return data_accessor.tf_dataset_factory(
file_pattern,
tfxio.TensorFlowDatasetOptions(
batch_size=batch_size, label_key=_LABEL_KEY), schema).repeat()
def _build_keras_model():
inputs = [keras.layers.Input(shape=(1,), name=f) for f in _FEATURE_KEYS]
d = keras.layers.concatenate(inputs)
d = keras.layers.Dense(8, activation='relu')(d)
d = keras.layers.Dense(8, activation='relu')(d)
outputs = keras.layers.Dense(3)(d)
model = keras.Model(inputs=inputs, outputs=outputs)
model.compile(
optimizer=keras.optimizers.Adam(1e-2),
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[keras.metrics.SparseCategoricalAccuracy()])
return model
def run_fn(fn_args: tfx.components.FnArgs):
schema = schema_pb2.Schema()
tfx.utils.parse_pbtxt_file(fn_args.schema_path, schema)
train_dataset = _input_fn(
fn_args.train_files, fn_args.data_accessor, schema, batch_size=10)
eval_dataset = _input_fn(
fn_args.eval_files, fn_args.data_accessor, schema, batch_size=10)
model = _build_keras_model()
model.fit(
train_dataset,
epochs=int(fn_args.train_steps / 20),
steps_per_epoch=20,
validation_data=eval_dataset,
validation_steps=fn_args.eval_steps)
model.save(fn_args.serving_model_dir, save_format='tf')
trainer = tfx.components.Trainer(
module_file=os.path.abspath(trainer_module_file),
examples=example_gen.outputs['examples'],
schema=infer_schema.outputs['schema'],
train_args=tfx.proto.TrainArgs(num_steps=100),
eval_args=tfx.proto.EvalArgs(num_steps=50))
interactive_context.run(trainer)
_serving_model_dir = os.path.join(tempfile.mkdtemp(),
'serving_model/penguins_classification')
eval_config = tfma.EvalConfig(
model_specs=[
tfma.ModelSpec(label_key='species', signature_name='serving_default')
],
metrics_specs=[
tfma.MetricsSpec(metrics=[
tfma.MetricConfig(
class_name='SparseCategoricalAccuracy',
threshold=tfma.MetricThreshold(
value_threshold=tfma.GenericValueThreshold(
lower_bound={'value': 0.6})))
])
],
slicing_specs=[tfma.SlicingSpec()])
evaluator = tfx.components.Evaluator(
examples=example_gen.outputs['examples'],
model=trainer.outputs['model'],
schema=infer_schema.outputs['schema'],
eval_config=eval_config)
interactive_context.run(evaluator)
pusher = tfx.components.Pusher(
model=trainer.outputs['model'],
model_blessing=evaluator.outputs['blessing'],
push_destination=tfx.proto.PushDestination(
filesystem=tfx.proto.PushDestination.Filesystem(
base_directory=_serving_model_dir)))
interactive_context.run(pusher)
connection_config = interactive_context.metadata_connection_config
store = mlmd.MetadataStore(connection_config)
# All TFX artifacts are stored in the base directory
base_dir = connection_config.sqlite.filename_uri.split('metadata.sqlite')[0]
def display_types(types):
# Helper function to render dataframes for the artifact and execution types
table = {'id': [], 'name': []}
for a_type in types:
table['id'].append(a_type.id)
table['name'].append(a_type.name)
return pd.DataFrame(data=table)
def display_artifacts(store, artifacts):
# Helper function to render dataframes for the input artifacts
table = {'artifact id': [], 'type': [], 'uri': []}
for a in artifacts:
table['artifact id'].append(a.id)
artifact_type = store.get_artifact_types_by_id([a.type_id])[0]
table['type'].append(artifact_type.name)
table['uri'].append(a.uri.replace(base_dir, './'))
return pd.DataFrame(data=table)
def display_properties(store, node):
# Helper function to render dataframes for artifact and execution properties
table = {'property': [], 'value': []}
for k, v in node.properties.items():
table['property'].append(k)
table['value'].append(
v.string_value if v.HasField('string_value') else v.int_value)
for k, v in node.custom_properties.items():
table['property'].append(k)
table['value'].append(
v.string_value if v.HasField('string_value') else v.int_value)
return pd.DataFrame(data=table)
display_types(store.get_artifact_types())
pushed_models = store.get_artifacts_by_type("PushedModel")
display_artifacts(store, pushed_models)
pushed_model = pushed_models[-1]
display_properties(store, pushed_model)
def get_one_hop_parent_artifacts(store, artifacts):
# Get a list of artifacts within a 1-hop of the artifacts of interest
artifact_ids = [artifact.id for artifact in artifacts]
executions_ids = set(
event.execution_id
for event in store.get_events_by_artifact_ids(artifact_ids)
if event.type == mlmd.proto.Event.OUTPUT)
artifacts_ids = set(
event.artifact_id
for event in store.get_events_by_execution_ids(executions_ids)
if event.type == mlmd.proto.Event.INPUT)
return [artifact for artifact in store.get_artifacts_by_id(artifacts_ids)]
# TODO
parent_artifacts = get_one_hop_parent_artifacts(store, [pushed_model])
display_artifacts(store, parent_artifacts)
exported_model = parent_artifacts[0]
display_properties(store, exported_model)
model_parents = get_one_hop_parent_artifacts(store, [exported_model])
display_artifacts(store, model_parents)
used_data = model_parents[0]
display_properties(store, used_data)
display_types(store.get_execution_types())
def find_producer_execution(store, artifact):
executions_ids = set(
event.execution_id
for event in store.get_events_by_artifact_ids([artifact.id])
if event.type == mlmd.proto.Event.OUTPUT)
return store.get_executions_by_id(executions_ids)[0]
trainer = # TODO: Your code goes here
display_properties(store, trainer)
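# --- Hedged sketch (not from the original notebook): one plausible way to fill in the
# --- "TODO" placeholders above, assuming the standard TFX 1.x interactive APIs.
# _data_filepath = os.path.join(_data_root, 'data.csv')   # the filename here is an assumption
# interactive_context = InteractiveContext()
# example_gen = tfx.components.CsvExampleGen(input_base=_data_root)
# statistics_gen = tfx.components.StatisticsGen(examples=example_gen.outputs['examples'])
# infer_schema = tfx.components.SchemaGen(statistics=statistics_gen.outputs['statistics'])
# trainer = find_producer_execution(store, exported_model)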
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Upper air data can be obtained using the siphon package, but for this example we will use
Step2: We will pull the data out of the example dataset into individual variables and
Step3: Create a new figure. The dimensions here give a good aspect ratio.
| <ASSISTANT_TASK:>
Python Code:
import matplotlib.pyplot as plt
import pandas as pd
import metpy.calc as mpcalc
from metpy.cbook import get_test_data
from metpy.plots import add_metpy_logo, SkewT
from metpy.units import units
col_names = ['pressure', 'height', 'temperature', 'dewpoint', 'direction', 'speed']
df = pd.read_fwf(get_test_data('may4_sounding.txt', as_file_obj=False),
skiprows=5, usecols=[0, 1, 2, 3, 6, 7], names=col_names)
# Drop any rows with all NaN values for T, Td, winds
df = df.dropna(subset=('temperature', 'dewpoint', 'direction', 'speed'), how='all'
).reset_index(drop=True)
p = df['pressure'].values * units.hPa
T = df['temperature'].values * units.degC
Td = df['dewpoint'].values * units.degC
wind_speed = df['speed'].values * units.knots
wind_dir = df['direction'].values * units.degrees
u, v = mpcalc.wind_components(wind_speed, wind_dir)
fig = plt.figure(figsize=(9, 9))
add_metpy_logo(fig, 115, 100)
skew = SkewT(fig, rotation=45)
# Plot the data using normal plotting functions, in this case using
# log scaling in Y, as dictated by the typical meteorological plot.
skew.plot(p, T, 'r')
skew.plot(p, Td, 'g')
skew.plot_barbs(p, u, v)
skew.ax.set_ylim(1000, 100)
skew.ax.set_xlim(-40, 60)
# Calculate LCL height and plot as black dot. Because `p`'s first value is
# ~1000 mb and its last value is ~250 mb, the `0` index is selected for
# `p`, `T`, and `Td` to lift the parcel from the surface. If `p` was inverted,
# i.e. start from low value, 250 mb, to a high value, 1000 mb, the `-1` index
# should be selected.
lcl_pressure, lcl_temperature = mpcalc.lcl(p[0], T[0], Td[0])
skew.plot(lcl_pressure, lcl_temperature, 'ko', markerfacecolor='black')
# Calculate full parcel profile and add to plot as black line
prof = mpcalc.parcel_profile(p, T[0], Td[0]).to('degC')
skew.plot(p, prof, 'k', linewidth=2)
# Shade areas of CAPE and CIN
skew.shade_cin(p, T, prof, Td)
skew.shade_cape(p, T, prof)
# An example of a slanted line at constant T -- in this case the 0
# isotherm
skew.ax.axvline(0, color='c', linestyle='--', linewidth=2)
# Add the relevant special lines
skew.plot_dry_adiabats()
skew.plot_moist_adiabats()
skew.plot_mixing_lines()
# Show the plot
plt.show()
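# Hedged follow-up (not part of the original example): the shaded CAPE/CIN areas can
# also be quantified numerically with MetPy's surface-based calculation.
cape, cin = mpcalc.surface_based_cape_cin(p, T, Td)
print('Surface-based CAPE:', cape)
print('Surface-based CIN:', cin)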
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Make performance scorers
Step2: Sequential Feature Selection with mlxtend
Step3: The next cell will take many hours to run, skip it
Step4: Restart from here
Step5: It looks like the score stabilizes after about 6 features, reaches a max at 16, then begins to taper off after about 70 features. We will save the top 45 and the top 75.
| <ASSISTANT_TASK:>
Python Code:
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
from sklearn import preprocessing
from sklearn.metrics import f1_score, accuracy_score, make_scorer
filename = 'engineered_features.csv'
training_data = pd.read_csv(filename)
training_data.describe()
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Formation'] = training_data['Formation'].astype('category')
training_data['Well Name'].unique()
y = training_data['Facies'].values
print y[25:40]
print np.shape(y)
X = training_data.drop(['Formation', 'Well Name','Facies'], axis=1)
print np.shape(X)
X.describe(percentiles=[.05, .25, .50, .75, .95])
scaler = preprocessing.StandardScaler().fit(X)
X = scaler.transform(X)
Fscorer = make_scorer(f1_score, average = 'micro')
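# Hedged addition: accuracy_score is imported above but never wrapped; an analogous
# scorer can be built the same way if accuracy is also of interest.
Ascorer = make_scorer(accuracy_score)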
from sklearn.ensemble import RandomForestClassifier
from mlxtend.feature_selection import SequentialFeatureSelector as SFS
clf = RandomForestClassifier(random_state=49)
sfs = SFS(clf,
k_features=100,
forward=True,
floating=False,
scoring=Fscorer,
cv = 8,
n_jobs = -1)
sfs = sfs.fit(X, y)
np.save('sfs_RF_metric_dict.npy', sfs.get_metric_dict())
# load previously saved dictionary
read_dictionary = np.load('sfs_RF_metric_dict.npy').item()
# plot results
from mlxtend.plotting import plot_sequential_feature_selection as plot_sfs
# run this twice
fig = plt.figure()
ax = plot_sfs(read_dictionary, kind='std_err')
fig_size = plt.rcParams["figure.figsize"]
fig_size[0] = 22
fig_size[1] = 18
plt.title('Sequential Forward Selection (w. StdDev)')
plt.grid()
plt.xticks( rotation='vertical')
locs, labels = plt.xticks()
plt.xticks( locs, labels)
plt.show()
# save results to dataframe
selected_summary = pd.DataFrame.from_dict(read_dictionary).T
selected_summary['index'] = selected_summary.index
selected_summary.sort_values(by='avg_score', ascending=0)
# save dataframe
selected_summary.to_csv('SFS_RF_selected_features_summary.csv', sep=',', header=True, index = False)
# re load saved dataframe and sort by score
filename = 'SFS_RF_selected_features_summary.csv'
selected_summary = pd.read_csv(filename)
selected_summary = selected_summary.set_index(['index'])
selected_summary.sort_values(by='avg_score', ascending=0).head()
# feature selection with highest score
selected_summary.iloc[44]['feature_idx']
slct = np.array([257, 3, 4, 6, 7, 8, 10, 12, 16, 273, 146, 19, 26, 27, 284, 285, 30, 34, 163, 1, 42, 179, 155, 181, 184, 58, 315, 190, 320, 193, 194, 203, 290, 80, 210, 35, 84, 90, 97, 18, 241, 372, 119, 120, 126])
slct
# isolate and save selected features
filename = 'engineered_features_validation_set2.csv'
training_data = pd.read_csv(filename)
X = training_data.drop(['Formation', 'Well Name'], axis=1)
Xs = X.iloc[:, slct]
Xs = pd.concat([training_data[['Depth', 'Well Name', 'Formation']], Xs], axis = 1)
print np.shape(Xs), list(Xs)
Xs.to_csv('SFS_top45_selected_engineered_features_validation_set.csv', sep=',', index=False)
# feature selection with highest score
selected_summary.iloc[74]['feature_idx']
slct = np.array([257, 3, 4, 5, 6, 7, 8, 265, 10, 12, 13, 16, 273, 18, 19, 26, 27, 284, 285, 30, 34, 35, 1, 42, 304, 309, 313, 58, 315, 319, 320, 75, 80, 338, 84, 341, 89, 90, 92, 97, 101, 102, 110, 372, 119, 120, 122, 124, 126, 127, 138, 139, 146, 155, 163, 165, 167, 171, 177, 179, 180, 181, 184, 190, 193, 194, 198, 203, 290, 210, 211, 225, 241, 249, 253])
slct
# isolate and save selected features
filename = 'engineered_features_validation_set2.csv'
training_data = pd.read_csv(filename)
X = training_data.drop(['Formation', 'Well Name'], axis=1)
Xs = X.iloc[:, slct]
Xs = pd.concat([training_data[['Depth', 'Well Name', 'Formation']], Xs], axis = 1)
print np.shape(Xs), list(Xs)
Xs.to_csv('SFS_top75_selected_engineered_features_validation_set.csv', sep=',', index=False)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Load Boston Housing Dataset
Step2: Create Decision Tree
Step3: Train Model
Step4: Create Observation To Predict
| <ASSISTANT_TASK:>
Python Code:
# Load libraries
from sklearn.tree import DecisionTreeRegressor
from sklearn import datasets
# Load data with only two features
boston = datasets.load_boston()
X = boston.data[:,0:2]
y = boston.target
# Create decision tree classifer object
regr = DecisionTreeRegressor(random_state=0)
# Train model
model = regr.fit(X, y)
# Make new observation
observation = [[0.02, 16]]
# Predict observation's value
model.predict(observation)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Restart the kernel
Step2: Before you begin
Step3: Region
Step4: Timestamp
Step5: Authenticate your Google Cloud account
Step6: Create a Cloud Storage bucket
Step7: Only if your bucket doesn't already exist
Step8: Finally, validate access to your Cloud Storage bucket by examining its contents
Step9: Set up variables
Step10: Initialize Vertex AI SDK for Python
Step11: Set hardware accelerators
Step12: Set pre-built containers
Step13: Set machine type
Step14: Get pretrained encoder embedding model from TensorFlow Hub
Step15: Download the dataset
Step16: Preprocess the dataset
Step17: Compile the model
Step18: Train the model
Step19: Epoch 1/3
Step20: Save the model artifacts
Step21: Upload the TensorFlow Hub model to a Vertex AI Model resource with standard TensorFlow container
Step22: Upload the TensorFlow Hub model to a Vertex AI Model resource with optimized TensorFlow container
Step23: Creating two public Endpoint resources
Step24: Deploying the Model resources to the Endpoint resources.
Step25: Prepare test data for prediction
Step26: Make the prediction
Step27: Compare difference in precision
Step28: Cleaning up
| <ASSISTANT_TASK:>
Python Code:
import os
# The Vertex AI Workbench Notebook product has specific requirements
IS_WORKBENCH_NOTEBOOK = os.getenv("DL_ANACONDA_HOME")
IS_USER_MANAGED_WORKBENCH_NOTEBOOK = os.path.exists(
"/opt/deeplearning/metadata/env_version"
)
# Vertex AI Notebook requires dependencies to be installed with '--user'
USER_FLAG = ""
if IS_WORKBENCH_NOTEBOOK:
USER_FLAG = "--user"
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG -q
! pip3 install --upgrade google-cloud-pipeline-components $USER_FLAG -q
! pip3 install tensorflow tensorflow-text $USER_FLAG -q
! pip3 install tensorflow-hub $USER_FLAG -q
! pip3 install tf-models-official $USER_FLAG -q
# Automatically restart kernel after installs
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
REGION = "[your-region]" # @param {type: "string"}
if REGION == "[your-region]":
REGION = "us-central1"
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Vertex AI Workbench, then don't execute this code
IS_COLAB = False
if not os.path.exists("/opt/deeplearning/metadata/env_version") and not os.getenv(
"DL_ANACONDA_HOME"
):
if "google.colab" in sys.modules:
IS_COLAB = True
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
BUCKET_NAME = "[your-bucket-name]" # @param {type:"string"}
BUCKET_URI = f"gs://{BUCKET_NAME}"
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
BUCKET_URI = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
! gsutil mb -l $REGION $BUCKET_URI
! gsutil ls -al $BUCKET_URI
import google.cloud.aiplatform as aip
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text
from official.nlp import optimization # to create AdamW optimizer
aip.init(project=PROJECT_ID, staging_bucket=BUCKET_URI)
if os.getenv("IS_TESTING_DEPLOY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.gapic.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPLOY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (None, None)
if os.getenv("IS_TESTING_TF"):
TF = os.getenv("IS_TESTING_TF")
else:
TF = "2.5".replace(".", "-")
if TF[0] == "2":
if DEPLOY_GPU:
DEPLOY_VERSION = "tf2-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf2-cpu.{}".format(TF)
else:
if DEPLOY_GPU:
DEPLOY_VERSION = "tf-gpu.{}".format(TF)
else:
DEPLOY_VERSION = "tf-cpu.{}".format(TF)
DEPLOY_IMAGE = "{}-docker.pkg.dev/vertex-ai/prediction/{}:latest".format(
REGION.split("-")[0], DEPLOY_VERSION
)
print("Deployment:", DEPLOY_IMAGE, DEPLOY_GPU, DEPLOY_NGPU)
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Train machine type", DEPLOY_COMPUTE)
inputs = tf.keras.layers.Input(shape=(), dtype=tf.string)
preprocess = hub.KerasLayer("https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3")
encoder = hub.KerasLayer(
"https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3", trainable=True
)
classifier = tf.keras.layers.Dense(
1, activation="sigmoid", kernel_regularizer=tf.keras.regularizers.l2(0.001)
)
x = preprocess(inputs)
x = encoder(x)
outputs = classifier(x["pooled_output"])
tfhub_model = tf.keras.Model(inputs, outputs)
tfhub_model.summary()
import shutil
url = "https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz"
dataset = tf.keras.utils.get_file(
"aclImdb_v1.tar.gz", url, untar=True, cache_dir=".", cache_subdir=""
)
dataset_dir = os.path.join(os.path.dirname(dataset), "aclImdb")
train_dir = os.path.join(dataset_dir, "train")
# remove unused folders to make it easier to load the data
remove_dir = os.path.join(train_dir, "unsup")
shutil.rmtree(remove_dir)
AUTOTUNE = tf.data.AUTOTUNE
batch_size = 32
seed = 42
raw_train_ds = tf.keras.preprocessing.text_dataset_from_directory(
"aclImdb/train",
batch_size=batch_size,
validation_split=0.2,
subset="training",
seed=seed,
)
class_names = raw_train_ds.class_names
train_ds = raw_train_ds.cache().prefetch(buffer_size=AUTOTUNE)
val_ds = tf.keras.preprocessing.text_dataset_from_directory(
"aclImdb/train",
batch_size=batch_size,
validation_split=0.2,
subset="validation",
seed=seed,
)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
test_ds = tf.keras.preprocessing.text_dataset_from_directory(
"aclImdb/test", batch_size=batch_size
)
test_ds = test_ds.cache().prefetch(buffer_size=AUTOTUNE)
epochs = 3
steps_per_epoch = tf.data.experimental.cardinality(train_ds).numpy()
num_train_steps = steps_per_epoch * epochs
num_warmup_steps = int(0.1 * num_train_steps)
init_lr = 3e-5
optimizer = optimization.create_optimizer(
init_lr=init_lr,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps,
optimizer_type="adamw",
)
loss = tf.keras.losses.BinaryCrossentropy()
metrics = tf.metrics.BinaryAccuracy()
tfhub_model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
history = tfhub_model.fit(x=train_ds, validation_data=val_ds, epochs=epochs)
loss, accuracy = tfhub_model.evaluate(test_ds)
print(f"Loss: {loss}")
print(f"Accuracy: {accuracy}")
MODEL_DIR = BUCKET_URI + "/model"
tfhub_model.save(MODEL_DIR)
DEPLOY_IMAGE = "us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-7:latest"
model_standard = aip.Model.upload(
display_name="standard_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
)
print(model_standard)
DEPLOY_IMAGE = "us-docker.pkg.dev/vertex-ai-restricted/prediction/tensorflow-enterprise-inference-engine-alpha:latest"
model_tfe_opt = aip.Model.upload(
display_name="tfe_opt_" + TIMESTAMP,
artifact_uri=MODEL_DIR,
serving_container_image_uri=DEPLOY_IMAGE,
serving_container_args=[
"--allow_precompilation=true",
"--allow_precision_affecting_optimizations=true",
],
)
print(model_tfe_opt)
endpoint_standard = aip.Endpoint.create(
display_name="standard_" + TIMESTAMP,
project=PROJECT_ID,
location=REGION,
labels={"your_key": "your_value"},
)
print(endpoint_standard)
endpoint_tfe_opt = aip.Endpoint.create(
display_name="tfe_opt_" + TIMESTAMP,
project=PROJECT_ID,
location=REGION,
labels={"your_key": "your_value"},
)
print(endpoint_tfe_opt)
response = endpoint_standard.deploy(
model=model_standard,
deployed_model_display_name="standard_" + TIMESTAMP,
machine_type=DEPLOY_COMPUTE,
)
print(endpoint_standard)
response = endpoint_tfe_opt.deploy(
model=model_tfe_opt,
deployed_model_display_name="tfe_opt_" + TIMESTAMP,
machine_type=DEPLOY_COMPUTE,
)
print(endpoint_tfe_opt)
INSTANCES = ["This was the best movie ever", "Movie was boring"]
serving_input = tfhub_model.input.name
# The format of each instance should conform to the deployed model's prediction input schema.
instances = [{serving_input: INSTANCES[0]}, {serving_input: INSTANCES[1]}]
prediction_standard = endpoint_standard.predict(instances=instances)
print(prediction_standard)
# The format of each instance should conform to the deployed model's prediction input schema.
instances = [{serving_input: INSTANCES[0]}, {serving_input: INSTANCES[1]}]
prediction_tfe_opt = endpoint_tfe_opt.predict(instances=instances)
print(prediction_tfe_opt)
import numpy as np
abs(
np.asarray(prediction_standard.predictions)
- np.asarray(prediction_tfe_opt.predictions)
)
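# Hedged addition: reduce the element-wise differences to a single summary number
# (the variable name below is illustrative).
diff = abs(
    np.asarray(prediction_standard.predictions)
    - np.asarray(prediction_tfe_opt.predictions)
)
print('Max absolute difference between containers:', diff.max())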
delete_bucket = False
delete_model = True
delete_endpoint = True
if delete_endpoint:
try:
endpoint_standard.undeploy_all()
endpoint_standard.delete()
endpoint_tfe_opt.undeploy_all()
endpoint_tfe_opt.delete()
except Exception as e:
print(e)
if delete_model:
try:
tfhub_model.delete()
except Exception as e:
print(e)
if delete_bucket or os.getenv("IS_TESTING"):
! gsutil rm -rf {BUCKET_URI}
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: IPython Console
Step2: IPython Qt Console
Step3: IPython.parallel
| <ASSISTANT_TASK:>
Python Code:
from IPython.display import display, Image, HTML
from talktools import website, nbviewer
Image('images/ipython_console.png')
Image('images/ipython_qtconsole.png')
Image("images/ParallelKernels.png", width="80%")
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 10 Most Frequent Words
Step2: n-Grams
Step3: TF-IDF with CountVectorizer
| <ASSISTANT_TASK:>
Python Code:
# Bibliotecas
from pyspark.ml import Pipeline
from pyspark.ml.feature import Tokenizer, StopWordsRemover, CountVectorizer, NGram
livro = sc.textFile("Machado-de-Assis-Memorias-Postumas.txt")
text = ""
for line in livro.collect():
text += " " + line
data = spark.createDataFrame([(0, text)], ["id", "text"])
tokenizer = Tokenizer(inputCol="text", outputCol="words")
remover = StopWordsRemover(inputCol="words", outputCol="filtered")
count = CountVectorizer(inputCol="filtered", outputCol="features", vocabSize=10)
pipeline = Pipeline(stages=[tokenizer, remover, count])
model = pipeline.fit(data).transform(data)
model.select("features").show(truncate=False)
tokenizer = Tokenizer(inputCol="text", outputCol="words")
remover = StopWordsRemover(inputCol="words", outputCol="filtered")
ngram = NGram(inputCol="filtered", outputCol="ngrams", n=2)
count = CountVectorizer(inputCol="ngrams", outputCol="features", vocabSize=10)
pipeline = Pipeline(stages=[tokenizer, remover, ngram, count])
model = pipeline.fit(data).transform(data)
model.select("features").show(truncate=False)
ngram = NGram(inputCol="filtered", outputCol="ngrams", n=3)
pipeline = Pipeline(stages=[tokenizer, remover, ngram, count])
model = pipeline.fit(data).transform(data)
model.select("features").show(truncate=False)
countVect = CountVectorizer(inputCol="rawFeatures", outputCol="features")
countVectModel = countVect.fit(featurizedData)
rescaledData = countVectModel.transform(featurizedData)
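# Hedged addition: to complete the TF-IDF described in Step3, rescale the raw term
# counts with IDF (the IDF stage is an assumption; it was not in the original cell).
from pyspark.ml.feature import IDF
idf = IDF(inputCol="rawFeatures", outputCol="features")
tfidfData = idf.fit(rescaledData).transform(rescaledData)
tfidfData.select("features").show(truncate=False)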
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Run as a Python module
Step2: Training should finish in just a few seconds because it ran outside of Jupyter's runtime, as an independent process of the node's operating system. You can explore the training metrics using TensorBoard.
Step3: Next, let's use Cloud ML Engine so we can train on GPU
Step4: Monitoring training with TensorBoard
Step5: Deploying and predicting with model
Step6: The previous step of deploying the model can take a few minutes. If it is successful, you should see an output similar to this one
Step7: To predict with the model, save one of the test images as a JavaScript Object Notation (JSON) file. Also, take a look at it as a graphic and notice the expected class value in the title.
Step8: Here's how the same image looks when it is saved in the test.json file for use with the prediction API.
Step9: Send the file to the prediction service and check whether the model you trained returns the correct prediction.
| <ASSISTANT_TASK:>
Python Code:
import os
PROJECT = 'my-project-id' # REPLACE WITH YOUR PROJECT ID
BUCKET = 'my-bucket-name' # REPLACE WITH YOUR BUCKET NAME
REGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1
MODEL_TYPE='dnn' # 'dnn' or 'cnn'
# do not change these
os.environ['PROJECT'] = PROJECT
os.environ['BUCKET'] = BUCKET
os.environ['REGION'] = REGION
os.environ['MODEL_TYPE'] = MODEL_TYPE
os.environ['TFVERSION'] = '1.8' # Tensorflow version
%bash
gcloud config set project $PROJECT
gcloud config set compute/region $REGION
%bash
rm -rf fashionmodel.tar.gz fashion_trained
gcloud ml-engine local train \
--module-name=trainer.task \
--package-path=${PWD}/fashionmodel/trainer \
-- \
--output_dir=${PWD}/fashion_trained \
--train_steps=1000 \
--learning_rate=0.01 \
--train_batch_size=512 \
--model=$MODEL_TYPE
from google.datalab.ml import TensorBoard
TensorBoard().start('fashion_trained'.format(BUCKET, MODEL_TYPE))
for pid in TensorBoard.list()['pid']:
TensorBoard().stop(pid)
print 'Stopped TensorBoard with pid {}'.format(pid)
%bash
OUTDIR=gs://${BUCKET}/fashion/trained_${MODEL_TYPE}
JOBNAME=fashion_${MODEL_TYPE}_$(date -u +%y%m%d_%H%M%S)
echo $OUTDIR $REGION $JOBNAME
gsutil -m rm -rf $OUTDIR
gcloud ml-engine jobs submit training $JOBNAME \
--region=$REGION \
--module-name=trainer.task \
--package-path=${PWD}/fashionmodel/trainer \
--job-dir=$OUTDIR \
--staging-bucket=gs://$BUCKET \
--scale-tier=BASIC_GPU \
--runtime-version=$TFVERSION \
-- \
--output_dir=$OUTDIR \
--train_steps=1000 --learning_rate=0.01 --train_batch_size=512 \
--model=$MODEL_TYPE
from google.datalab.ml import TensorBoard
TensorBoard().start('gs://{}/fashion/trained_{}'.format(BUCKET, MODEL_TYPE))
for pid in TensorBoard.list()['pid']:
TensorBoard().stop(pid)
print 'Stopped TensorBoard with pid {}'.format(pid)
%bash
MODEL_NAME="fashion"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/fashion/trained_${MODEL_TYPE}/export/exporter | tail -1)
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#gcloud ml-engine versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#gcloud ml-engine models delete ${MODEL_NAME}
gcloud ml-engine models create ${MODEL_NAME} --regions $REGION
gcloud ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} --runtime-version=$TFVERSION
import tensorflow as tf
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.fashion_mnist.load_data()
LABELS = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat', 'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
HEIGHT=28
WIDTH=28
IMGNO=12 #CHANGE THIS to get different images
#Convert raw image data to a test.json file and persist it to disk
import json, codecs
jsondata = {'image': test_images[IMGNO].reshape(HEIGHT, WIDTH).tolist()}
json.dump(jsondata, codecs.open('test.json', 'w', encoding='utf-8'))
#Take a look at a sample image and the correct label from the test dataset
import matplotlib.pyplot as plt
plt.imshow(test_images[IMGNO].reshape(HEIGHT, WIDTH))
title = plt.title('{} / Class #{}'.format(LABELS[test_labels[IMGNO]], test_labels[IMGNO]))
%bash
cat test.json
%bash
gcloud ml-engine predict \
--model=fashion \
--version=${MODEL_TYPE} \
--json-instances=./test.json
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: In SciPy, the definition of the negative binomial distribution differs a little from the one in our introduction. They define $Y$ = Number of failures until k successes and then $y$ starts at 0. In the following plot, we have the probability of observing $y$ failures before we see $k=3$ successes.
Step2: For example, when $p=0.5$, the probability of seeing $y=0$ failures before 3 successes (or in other words, the probability of having 3 successes out of 3 trials) is 0.125, and the probability of seeing $y=3$ failures before 3 successes is 0.156.
Step3: Finally, if one wants to show this probability mass function as if we are following the first definition of negative binomial distribution we introduced, we just need to shift the whole thing to the right by adding $k$ to the $y$ values.
Step4: Negative binomial in GLM
Step5: We assign categories to the values 1, 2, and 3 of our "prog" variable.
Step6: The Academic program is the most popular program (167/314) and General is the least popular one (40/314)
Step7: Let's explore the distributions of math score and days of absence for each of the three programs listed above. The vertical lines indicate the mean values.
Step8: The first impression we have is that the distribution of math scores is not the same across the three programs. It looks right-skewed for students under the Academic program, left-skewed for students under the Vocational program, and roughly uniform for students in the General program (although there's a drop in the highest values). Clearly, those in the Vocational program have the highest mean math score.
Step9: Model 2
Step10: Explore models
Step11: The information in the two tables above can be visualized in a more concise manner using a forest plot. ArviZ provides us with plot_forest(). There we simply pass a list containing the InferenceData objects of the models we want to compare.
Step12: One of the first things one can note when seeing this plot is the similarity between the marginal posteriors. Maybe one can conclude that the variability of the marginal posterior of scale(math) is slightly lower in the model that considers the interaction, but the difference is not significant.
Step13: Plot predicted mean response
Step14: As we can see in this plot, the interval for the mean response for the Vocational program does not overlap with the interval for the other two groups, representing the group of students who miss fewer classes. On the right panel we can also see that including interaction terms does not change the slopes significantly because the posterior distributions of these coefficients have a substantial overlap with 0.
| <ASSISTANT_TASK:>
Python Code:
import arviz as az
import bambi as bmb
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from scipy.stats import nbinom
az.style.use("arviz-darkgrid")
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
y = np.arange(0, 30)
k = 3
p1 = 0.5
p2 = 0.3
fig, ax = plt.subplots(1, 2, figsize=(12, 4), sharey=True)
ax[0].bar(y, nbinom.pmf(y, k, p1))
ax[0].set_xticks(np.linspace(0, 30, num=11))
ax[0].set_title(f"k = {k}, p = {p1}")
ax[1].bar(y, nbinom.pmf(y, k, p2))
ax[1].set_xticks(np.linspace(0, 30, num=11))
ax[1].set_title(f"k = {k}, p = {p2}")
fig.suptitle("Y = Number of failures until k successes", fontsize=16);
print(nbinom.pmf(y, k, p1)[0])
print(nbinom.pmf(y, k, p1)[3])
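# Quick check: P(Y = 0) is just the probability of three straight successes, 0.5 ** 3,
# which matches the first printed value above.
print(0.5 ** 3)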
fig, ax = plt.subplots(1, 2, figsize=(12, 4), sharey=True)
ax[0].bar(y + k, nbinom.pmf(y, k, p1))
ax[0].set_xticks(np.linspace(3, 30, num=10))
ax[0].set_title(f"k = {k}, p = {p1}")
ax[1].bar(y + k, nbinom.pmf(y, k, p2))
ax[1].set_xticks(np.linspace(3, 30, num=10))
ax[1].set_title(f"k = {k}, p = {p2}")
fig.suptitle("Y = Number of trials until k successes", fontsize=16);
data = pd.read_stata("https://stats.idre.ucla.edu/stat/stata/dae/nb_data.dta")
data.head()
data["prog"] = data["prog"].map({1: "General", 2: "Academic", 3: "Vocational"})
data.head()
data["prog"].value_counts()
fig, ax = plt.subplots(3, 2, figsize=(8, 6), sharex="col")
programs = list(data["prog"].unique())
programs.sort()
for idx, program in enumerate(programs):
# Histogram
ax[idx, 0].hist(data[data["prog"] == program]["math"], edgecolor='black', alpha=0.9)
ax[idx, 0].axvline(data[data["prog"] == program]["math"].mean(), color="C1")
# Barplot
days = data[data["prog"] == program]["daysabs"]
days_mean = days.mean()
days_counts = days.value_counts()
values = list(days_counts.index)
count = days_counts.values
ax[idx, 1].bar(values, count, edgecolor='black', alpha=0.9)
ax[idx, 1].axvline(days_mean, color="C1")
# Titles
ax[idx, 0].set_title(program)
ax[idx, 1].set_title(program)
plt.setp(ax[-1, 0], xlabel="Math score")
plt.setp(ax[-1, 1], xlabel="Days of absence");
model_additive = bmb.Model("daysabs ~ 0 + prog + scale(math)", data, family="negativebinomial")
idata_additive = model_additive.fit()
model_interaction = bmb.Model("daysabs ~ 0 + prog + scale(math) + prog:scale(math)", data, family="negativebinomial")
idata_interaction = model_interaction.fit()
az.summary(idata_additive)
az.summary(idata_interaction)
az.plot_forest(
[idata_additive, idata_interaction],
model_names=["Additive", "Interaction"],
var_names=["prog", "scale(math)"],
combined=True,
figsize=(8, 4)
);
az.plot_forest(idata_interaction, var_names=["prog:scale(math)"], combined=True, figsize=(8, 4))
plt.axvline(0);
math_score = np.arange(1, 100)
# This function takes a model and an InferenceData object.
# It returns a list of length 3 with predictions for each type of program.
def predict(model, idata):
predictions = []
for program in programs:
new_data = pd.DataFrame({"math": math_score, "prog": [program] * len(math_score)})
new_idata = model.predict(
idata,
data=new_data,
inplace=False
)
prediction = new_idata.posterior.stack(sample=["chain", "draw"])["daysabs_mean"].values
predictions.append(prediction)
return predictions
prediction_additive = predict(model_additive, idata_additive)
prediction_interaction = predict(model_interaction, idata_interaction)
mu_additive = [prediction.mean(1) for prediction in prediction_additive]
mu_interaction = [prediction.mean(1) for prediction in prediction_interaction]
fig, ax = plt.subplots(1, 2, sharex=True, sharey=True, figsize = (10, 4))
for idx, program in enumerate(programs):
ax[0].plot(math_score, mu_additive[idx], label=f"{program}", color=f"C{idx}", lw=2)
az.plot_hdi(math_score, prediction_additive[idx].T, color=f"C{idx}", ax=ax[0])
ax[1].plot(math_score, mu_interaction[idx], label=f"{program}", color=f"C{idx}", lw=2)
az.plot_hdi(math_score, prediction_interaction[idx].T, color=f"C{idx}", ax=ax[1])
ax[0].set_title("Additive");
ax[1].set_title("Interaction");
ax[0].set_xlabel("Math score")
ax[1].set_xlabel("Math score")
ax[0].set_ylim(0, 25)
ax[0].legend(loc="upper right");
%load_ext watermark
%watermark -n -u -v -iv -w
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 1) Data Source and Data Usage
Step2: From this data we calculated two key metrics which will be useful in determining which of these three artists' markets was most robust in this period. The percent deviation between Price Realized and the corresponding Low and High estimates is an efficient way to test for market strength. If an artist is consistently performing above the appraiser's high estimate (i.e., a positive deviation value), that indicates the artist is outperforming market expectations. On the other hand, a weak artist's market is consistently underperforming if it remains close to its low estimate value. The real utility of this metric is as a means of measuring the 'strength' of market performance. Whichever of these artists has the highest positive deviation values will likely be the one performing most consistently. The artist determined to have the highest positive deviation value (if my hypothesis holds true) would be the "market leader" of this group. If we can see any correlation between upward-trending deviations and upward-moving sale prices, we can more confidently believe that individual artists' markets do affect one another.
Step3: It is clear from this set of data that Lucian Freud is consistently outperforming his market, Frank Auerbach takes a close second, and Francis Bacon lags behind. Now we will compare these findings to sale prices to see if there are any connections that can be drawn.
| <ASSISTANT_TASK:>
Python Code:
import sys                        # system module
import datetime as dt             # date and time module
import numpy as np
import pandas as pd               # data package
import matplotlib as mpl          # graphics package
import matplotlib.pyplot as plt   # pyplot module
import pylab                      # pylab interface used for the regression plots below
import seaborn as sns
from scipy import stats           # stats.linregress is called below
from scipy.stats import linregress
%matplotlib inline
#Data for Francis Bacon Art Prices
path_fb = '/Users/Sohil/Desktop/Data_Bootcamp_Final/FrancisBacon.csv'
fb = pd.read_csv(path_fb,
thousands=',')
fb = fb.rename(columns={'a' : 'Year', 'b' : 'Low_Estimate', 'c' : 'High_Estimate', 'd' : 'Price'})
#Calculating key metric of Sale Price Deviation %
fb['Percent_Deviation_from_Low_Estimate'] = ((fb['Price'] - fb['Low_Estimate']) / (fb['Low_Estimate'])) * 100
fb['Percent_Deviation_from_High_Estimate'] = ((fb['Price'] - fb['High_Estimate']) / (fb['High_Estimate']))* 100
#Data for Frank Auerbach Sale Prices
path_fa = '/Users/Sohil/Desktop/Data_Bootcamp_Final/FrankAuerbach.csv'
fa = pd.read_csv(path_fa,
thousands=',')
fa = fa.rename(columns={'a' : 'Year', 'b' : 'Low_Estimate', 'c' : 'High_Estimate', 'd' : 'Price'})
#Calculating key metric of Sale Price Deviation %
fa['Percent_Deviation_from_Low_Estimate'] = ((fa['Price'] - fa['Low_Estimate']) / (fa['Low_Estimate'])) * 100
fa['Percent_Deviation_from_High_Estimate'] = ((fa['Price'] - fa['High_Estimate']) / fa['High_Estimate']) * 100
#Data for Lucian Freud Sale Prices
path_lf = '/Users/Sohil/Desktop/Data_Bootcamp_Final/LucianFreud.csv'
lf = pd.read_csv(path_lf,
thousands=',')
lf = lf.rename(columns={'a' : 'Year', 'b' : 'Low_Estimate', 'c' : 'High_Estimate', 'd' : 'Price'})
#Calculating key metric of Sale Price Deviation %
lf['Percent_Deviation_from_Low_Estimate'] = ((lf['Price'] - lf['Low_Estimate']) / (lf['Low_Estimate'])) * 100
lf['Percent_Deviation_from_High_Estimate'] = ((lf['Price'] - lf['High_Estimate']) / (lf['High_Estimate'])) * 100
#Metric test of market strength of each artist
fb_low_dev_mean = fb['Percent_Deviation_from_Low_Estimate'].mean()
fb_high_dev_mean = fb['Percent_Deviation_from_High_Estimate'].mean()
#Mean of deviation percentage for Francis Bacon
fa_low_dev_mean = fa['Percent_Deviation_from_Low_Estimate'].mean()
fa_high_dev_mean = fa['Percent_Deviation_from_High_Estimate'].mean()
#Mean of deviation percentages for Frank Auerbach
lf_low_dev_mean = lf['Percent_Deviation_from_Low_Estimate'].mean()
lf_high_dev_mean = lf['Percent_Deviation_from_High_Estimate'].mean()
#Mean of deviation percentages for Lucian Freud
#Bar graph depiciting mean deviation percentages per artist
D = {'Franics Bacon':fb_low_dev_mean, 'Frank Auerbach': fa_low_dev_mean, 'Lucian Freud':lf_low_dev_mean}
plt.barh(range(len(D)),
D.values(),
align='center',
color=['#A9D0F5', '#58ACFA', '#045FB4'],)
plt.yticks(range(len(D)), D.keys())
plt.axvline(x=0.3, color='#B43104')
plt.ylabel('Artists', fontsize=10)
plt.xlabel ('Mean Percent Deviation between Low Estimate and Price', fontsize= 10)
plt.title("Comparison of Mean % Deviation between Price Realized and Low Estimate", fontsize=13)
plt.show()
#Bar graph depiciting mean deviation percentages per artist
D = {'Franics Bacon':fb_high_dev_mean, 'Frank Auerbach': fa_high_dev_mean, 'Lucian Freud':lf_high_dev_mean}
plt.barh(range(len(D)),
D.values(),
align='center',
color=['#A9D0F5', '#58ACFA', '#045FB4'],)
plt.axvline(x=0, color='#B43104')
plt.yticks(range(len(D)),
D.keys())
plt.ylabel('Artists', fontsize=10)
plt.xlabel ('Mean Percent Deviation between Price and High Estimate', fontsize= 10)
plt.title("Comparison of Mean % Deviation between Price Realized and High Estimate", fontsize=13)
x= fb['Year']
y= fb['Price']
slope, intercept, r_value, p_value, slope_std_error = stats.linregress(x, y)
# Used a linear regession model to estimate trend of markets.
predict_y = intercept + slope * x
pred_error = y - predict_y
degrees_of_freedom = len(x) - 2
residual_std_error = np.sqrt(np.sum(pred_error**2) / degrees_of_freedom)
pylab.plot(x, y, 'o')
pylab.plot(x, predict_y, 'k-')
pylab.ylim(0,1000000)
pylab.xlim(1990,1995)
pylab.title('Francis Bacon Sale Price, 1990-95, with Linear Regression', fontsize=13)
pylab.ylabel('USD($)', fontsize=10)
pylab.xlabel('Years (Scientific Notation)(1990-95)', fontsize=10)
pylab.show()
x= fa['Year']
y= fa['Price']
slope, intercept, r_value, p_value, slope_std_error = stats.linregress(x, y)
# Used a linear regession model to estimate trend of markets.
predict_y = intercept + slope * x
pred_error = y - predict_y
degrees_of_freedom = len(x) - 2
residual_std_error = np.sqrt(np.sum(pred_error**2) / degrees_of_freedom)
pylab.plot(x, y, 'o')
pylab.plot(x, predict_y, 'k-')
pylab.ylim(0,100000)
pylab.xlim(1990,1995)
pylab.title('Frank Auerbach Sale Price, 1990-95, with Linear Regression', fontsize=13)
pylab.ylabel('USD($)', fontsize=10)
pylab.xlabel('Years (Scientific Notation)(1990-95)', fontsize=10)
pylab.show()
x= lf['Year']
y= lf['Price']
slope, intercept, r_value, p_value, slope_std_error = stats.linregress(x, y)
# Used a linear regession model to estimate trend of markets.
predict_y = intercept + slope * x
pred_error = y - predict_y
degrees_of_freedom = len(x) - 2
residual_std_error = np.sqrt(np.sum(pred_error**2) / degrees_of_freedom)
pylab.plot(x, y, 'o')
pylab.plot(x, predict_y, 'k-')
pylab.ylim(0,100000)
pylab.xlim(1990,1995)
pylab.title('Lucian Freud Sale Price, 1990-95, with Linear Regression', fontsize=13)
pylab.ylabel('USD($)', fontsize=10)
pylab.xlabel('Years (Scientific Notation)(1990-95)', fontsize=10)
pylab.show()
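# Hedged refactor sketch (not in the original notebook): the three regression plots above
# repeat the same code, so a small helper keeps them consistent.
def plot_with_regression(df, artist, ymax):
    x, y = df['Year'], df['Price']
    slope, intercept, r_value, p_value, slope_std_error = stats.linregress(x, y)
    pylab.plot(x, y, 'o')
    pylab.plot(x, intercept + slope * x, 'k-')
    pylab.ylim(0, ymax)
    pylab.xlim(1990, 1995)
    pylab.title('{} Sale Price, 1990-95, with Linear Regression'.format(artist), fontsize=13)
    pylab.ylabel('USD($)', fontsize=10)
    pylab.xlabel('Years (1990-95)', fontsize=10)
    pylab.show()

# Example usage: plot_with_regression(lf, 'Lucian Freud', 100000)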
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Even before you call your first TensorFlow function, a lot is going on behind the scenes. For example, an empty default graph object is created. (In order to make it easier to hit the ground running with TensorFlow, and to make using the library less verbose, Google has laced the system with global state. In order to fully understand what's happening in your program, you need to know that these state variables, like "current graph" and "current scope" and "current session", exist. As we'll soon see, most of the functions you'll call in TensorFlow operate by quietly accessing this hidden state.)
Step2: It is currently empty. We can check this fact by listing the operations (nodes) in the graph
Step3: Let's start adding some operations to g. An operation is a node of the computation graph. It contains only some light metadata, like "I am an addition operation, and my inputs come from these two other operations." Although Python Operation objects don't actually do anything, we usually think of an operation in terms of what it will cause the execution engine to do after the graph has been completely built and is handed over to TensorFlow to run.
Step4: g now has a Const operation! Note that tf.constant affected the graph g, even though we didn't explicitly say we wanted the constant operation to be added to g. It is possible to add operations to a specific, non-default graph, but most of the time, we add directly to the default graph, using functions like tf.constant. In fact, we generally don't even call get_default_graph to give g a name; we just use it implicitly.
Step5: Those inputs and outputs are of type Tensor.
Step6: A Tensor is a lightweight Python object that represents a piece of data flowing along the edges of our graph. That data can be a scalar, a vector, a matrix, or a higher-dimensional array. (The dimensionality of a tensor t is accessible as an attribute
Step7: Now, there are two operations in the graph. TensorFlow has named them 'Const' and 'Const_1', but you can also give them names yourself by passing a name keyword argument to the tf.constant function. Tensors are named following the formula op_name
Step8: It should make sense that add_op has two inputs and one output.
Step9: The run method of the Session class is used to send your graph to the computation engine and execute it. As an argument, we pass a tensor we'd like to "fetch" the value of. Based on which tensor we'd like to compute, TensorFlow will calculate a path through the graph, and then execute only the parts of the graph necessary to compute our desired tensor.
Step10: Above, we call sess.run three times, which invokes the execution engine three times. There is no memoization; each time you call sess.run, everything is computed anew. Because of this, if you want to fetch more than one tensor, it's more efficient to fetch them all in one go, by passing a list to sess.run. You can also pass a dictionary, a tuple, a named tuple, or nested combinations of these data structures.
Step11: In the last example, TensorFlow created a copy of the fetches data structure, but with each tensor replaced by its actual value, computed during execution.
Step12: Sessions are more powerful than this, but to understand why, we need to talk about placeholders and variables.
Step13: When we call sess.run, we now pass in a second argument, feed_dict. This is a Python dictionary in which the keys are placeholder tensors (i.e., the outputs of placeholder operations) and the values are numbers, numpy arrays, or Python lists. (Numbers, lists, and numpy arrays can all be converted automatically into a data format compatible with TensorFlow's execution engine.) Note that the keys of feed_dict are the actual tensor objects, not strings.
Step14: A few things to note
Step15: Another trick
Step16: The session is closed automatically at the end of the with block.
Step17: The operations are all still there! That's because reset_default_graph doesn't delete operations, it just creates a new graph and makes it the default. g still refers to the old graph. We can fix this (and let the old graph be garbage-collected) by reassigning g
Step18: Variables
Step19: Wow -- four operations were just added to the graph! Let's go through them one by one
Step20: In fact, let's start a second session as well. One of the roles sessions play is to keep track of variable values across executions of the graph, and this will help us visualize that.
Step21: If we attempt to get the value of the variable, we will get an error
Step22: "Attempting to use uninitialized value Variable." In order to initialize the variable, we actually have to run the assign op that was created for us. Recall that this was the third operation in the graph.
Step23: But in sess2, $x$ is still not initialized, and running sess2.run(x) still gives an error
Step24: Let's fix that.
Step25: Notice
Step26: Computing gradients
Step27: Notice that Tensorflow has added a constant node, mul/x, to represent the constant 2. Other than that, this should look as expected
Step28: As you can see, a lot of new nodes were added for gradient calculation. The output of the last op listed above will be our derivative, $\frac{d\text{eqn}}{db}$. tf.gradients returns that tensor, so we don't have to grab it explicitly
Step29: We can now execute the graph
Step30: As expected, we get 2, the partial derivative of eqn with respect to b at the point (a=5, b=7) (at any point, actually, but TensorFlow is computing it at this specific point).
Step31: Let's create a second equation.
Step32: Remember that a = 5 and b = 7, which is why we get the values we did above. Although you can think of what Tensorflow does as a kind of symbolic differentiation (it constructs a formula, i.e. computation graph, for computing a derivative), it always calculates gradients at a specific point.
Step33: Here, [8, 8] == [7, 6] + [1, 2].
Step34: Optimization
Step35: Two start with, we'll calculate our derivative manually, without even using tf.gradients. The derivative of $x^2$ at a point is just $2x$
Step36: We will now create an operation that changes $x$ based on the current derivative of the loss. We can do this using the tf.assign function, which creates an operation that assigns a variable to a new value. (The Assign operation has two inputs, the reference to the variable's mutable storage, and the new value we wish to assign. As mentioned in the Variables section, we can pass a Variable object as the first input and TensorFlow will substitute the correct tensor. The Assign operation has one output, the tensor for which is returned by tf.assign, which is the new value of the variable after assignment.)
Step37: Our graph is in place. In order to run our algorithm, we need to
Step38: As you can see, the value of loss gets closer and closer to 0 as time goes on.
Step39: The algorithm we have used here is vanilla gradient descent. TensorFlow actually comes with a whole family of optimizers that we can just plug into our model. We do not need to call tf.gradients or manually assign a variable at all; TensorFlow can create that whole portion of our computational graph.
Step40: minimize_loss now refers to an operation that runs a single step of gradient descent, updating x. (By default optimizer.minimize assumes you want to run gradient descent on every Variable in the computation graph. If you want to change this, you can pass in a var_list argument, specifying exactly which variables should be updated.) Note that before, assign_new_x referred to a tensor; minimize_loss refers to an actual operation, and sess.running it will return None. Still, it is working, and produces the exact same values as our previous two attempts
Step41: There are other optimizers that implement clever variations of gradient descent. For example, here's the popular Adam optimizer
Step42: Notice that some of the operations Adam has added are Variables! This is because Adam keeps track of certain statistics across iterations, to implement a sort of "momentum." Because of this, before we can run our minimize_loss operation, we need to make sure Adam's variables are initialized. Rather than painstakingly initialize each one, we can use tf.global_variables_initializer() to add an op to the graph that initializes all variables created so far.
Step43: As you can see, for our extremely simple problem, Adam hurts more than it helps. Its momentum feature means that the big derivatives from early on in training still have an effect even as we get close to the valley, causing us to overshoot the minimum. Running this for more than 100 iterations would eventually bring us back to 0. (An image that is sometimes used to explain momentum-based optimization approaches vs. typical gradient descent is that if normal gradient descent can be thought of as a man slowly and cautiously walking downhill, momentum-based approaches are better understood as a heavy ball rolling its way downhill. The ball will likely overshoot the bottom then have to roll back down, taking a while to settle. The benefit is that the ball can roll straight past small local minima, and stay immune to certain types of pathological terrains.)
Step44: It's common to perform some data cleaning operations on our input data before attempting to use a machine learning algorithm. We'll do that here, subtracting the (empirical) mean and dividing by the standard deviation of the temperature
Step45: Now, let's divide the data into training, validation, and test sets.
Step46: Finally, we'll create a "batch generator" for the training set. The following function is a Python generator function; instead of returning a value, it continuously yields new batches of data. When we call batch_generator, Python creates a generator iterator, which we here call training_generator, that we can use with Python's next function.
Step47: Building the model
Step48: We then set up the major quantities in our model as follows
Step49: Computing the loss
Step50: Creating an optimizer
Step51: Initializing variables and training
| <ASSISTANT_TASK:>
Python Code:
import tensorflow as tf
g = tf.get_default_graph()
g
g.get_operations()
tf.constant(3.14)
g.get_operations()
const_operation = g.get_operations()[0]
len(const_operation.inputs), len(const_operation.outputs)
const_tensor = const_operation.outputs[0]
const_tensor
another_const_tensor = tf.constant(1.414)
another_const_tensor
g.get_operations()
sum_tensor = tf.add(const_tensor, another_const_tensor)
sum_tensor
g.get_operations()
add_op = g.get_operations()[2]
len(add_op.inputs), len(add_op.outputs)
# This piece is only necessary so as not to use up an ungodly amount of GPU memory:
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
# This is the actual code creating the session. You can omit the config arg
# if you have no configuration to do.
sess = tf.Session(config=config)
sess.graph == g
sess.run(sum_tensor), sess.run(const_tensor), sess.run(another_const_tensor)
sess.run([sum_tensor, const_tensor, another_const_tensor])
sess.run({'a': const_tensor, 'b': another_const_tensor, '[a,b,a+b]': [const_tensor, another_const_tensor, sum_tensor]})
sess.close()
a = tf.placeholder(tf.float32)
a
b = tf.placeholder(tf.float32)
flexible_sum_tensor = tf.add(a, b)
g.get_operations()
sess = tf.Session(config=config)
sess.run(flexible_sum_tensor, feed_dict={a: 1., b: 2.})
sess.run(flexible_sum_tensor, feed_dict={a: [1.], b: [2.]})
sess.run(flexible_sum_tensor, feed_dict={a: [[1., 2.], [3., 4.]], b: [[5., 6.], [7., 8.]]})
sess.close()
distance_from_origin = tf.sqrt((a * a) + (b * b))
g.get_operations()
distance_from_origin
sess = tf.Session(config=config)
sess.run(distance_from_origin, feed_dict={a: 3., b: 4.})
sess.close()
with tf.Session(config=config) as sess:
print(sess.run(distance_from_origin, feed_dict={a: 9., b: 12.}))
tf.reset_default_graph()
g.get_operations()
g = tf.get_default_graph()
g.get_operations()
x = tf.Variable(42)
# summarize the operations now in the graph:
[(op, "{} inputs and {} output".format(len(op.inputs), len(op.outputs))) for op in g.get_operations()]
sess = tf.Session(config=config)
sess2 = tf.Session(config=config)
sess.run(x)
sess.run(g.get_operations()[2])
sess.run(x)
sess2.run(x)
sess2.run(x.initializer), sess2.run(x)
sess.close()
sess2.close()
tf.reset_default_graph()
g = tf.get_default_graph()
a = tf.constant(5, name='a')
b = tf.constant(7, name='b')
eqn = a + 2 * b
g.get_operations()
grad = tf.gradients(eqn, b)
g.get_operations()
grad
sess = tf.Session(config=config)
sess.run(grad)
gradient_wrt_both = tf.gradients(eqn, [a, b])
sess.run(gradient_wrt_both)
eqn2 = a * b + b
gradient_of_eqn2 = tf.gradients(eqn2, [a,b])
sess.run(gradient_of_eqn2)
gradient_with_two_ys = tf.gradients([eqn, eqn2], [a,b])
sess.run(gradient_with_two_ys)
sess.close()
tf.reset_default_graph()
g = tf.get_default_graph()
x = tf.Variable(5.0)
loss = x * x
g.get_operations()
dloss_dx = 2. * x
step_size = 0.1
new_x_value = x - dloss_dx * step_size
assign_new_x = tf.assign(x, new_x_value)
with tf.Session(config=config) as sess:
# Initialize x:
sess.run(x.initializer)
# We will fetch the following tensors each iteration
to_fetch = {'x': x, 'loss': loss, 'derivative': assign_new_x}
for i in range(100):
fetched = sess.run(to_fetch)
# every tenth step, print our progress
if i % 10 == 0 or i+1==100:
print("Iter {}: {}".format(i, fetched))
tf.reset_default_graph()
g = tf.get_default_graph()
x = tf.Variable(5.0)
loss = x * x
# tf.gradients is called "gradients" and not "derivative" for a reason: it
# returns a _list_ of partial derivatives, even if you only pass in one x.
# Pull out first element (in our case, the list only has one element).
dloss_dx = tf.gradients(loss, x)[0]
new_x_value = x - dloss_dx * step_size
assign_new_x = tf.assign(x, new_x_value)
with tf.Session(config=config) as sess:
# Initialize x:
sess.run(x.initializer)
# We will fetch the following tensors each iteration
to_fetch = {'x': x, 'loss': loss, 'derivative': assign_new_x}
for i in range(100):
fetched = sess.run(to_fetch)
# every tenth step, print our progress
if i % 10 == 0 or i+1==100:
print("Iter {}: {}".format(i, fetched))
tf.reset_default_graph()
g = tf.get_default_graph()
x = tf.Variable(5.0)
loss = x * x
# Create the optimizer
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
# Add optimize nodes to the graph
minimize_loss = optimizer.minimize(loss)
with tf.Session(config=config) as sess:
# Initialize x:
sess.run(x.initializer)
# We will fetch the following tensors each iteration
to_fetch = {'x': x, 'loss': loss, 'train_op': minimize_loss}
for i in range(100):
fetched = sess.run(to_fetch)
# every tenth step, print our progress
if i % 10 == 0 or i+1==100:
print("Iter {}: {}".format(i, fetched))
adam_optimizer = tf.train.AdamOptimizer(learning_rate=0.1)
minimize_loss = adam_optimizer.minimize(loss)
g.get_operations()
initialize_all = tf.global_variables_initializer()
with tf.Session(config=config) as sess:
# Initialize ALL variables
sess.run(initialize_all)
# We will fetch the following tensors each iteration
to_fetch = {'x': x, 'loss': loss, 'train_op': minimize_loss}
for i in range(100):
fetched = sess.run(to_fetch)
# every tenth step, print our progress
if i % 10 == 0 or i+1==100:
print("Iter {}: {}".format(i, fetched))
import numpy as np
import matplotlib.pyplot as plt  # needed for the scatter plot below
temps = np.random.normal(55, 20, 1000)
random_noise = np.random.normal(0, 100, 1000)
hosp_visits = 1000 - 5 * temps + random_noise
plt.plot(temps[:200], hosp_visits[:200], "o")
normalized_temps = (temps - np.mean(temps)) / np.std(temps)
train_X, train_y = normalized_temps[:800], hosp_visits[:800]
valid_X, valid_y = normalized_temps[800:900], hosp_visits[800:900]
test_X, test_y = normalized_temps[900:], hosp_visits[900:]
def batch_generator(X, y, batch_size):
total_batches = len(X) // batch_size
current_batch = 0
while True:
start = batch_size * current_batch
end = start + batch_size
yield (X[start:end], y[start:end])
current_batch = (current_batch + 1) % total_batches
training_generator = batch_generator(train_X, train_y, batch_size=100)
# Later, call next(training_generator) to get a new batch of the form (X, y)
tf.reset_default_graph()
g = tf.get_default_graph()
X = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)
m = tf.Variable(0.)
b = tf.Variable(0.)
predicted_y = X * m + b
avg_loss = tf.reduce_mean(tf.squared_difference(predicted_y, y))
train_one_step = tf.train.GradientDescentOptimizer(learning_rate=0.0005).minimize(avg_loss)
init_all_vars = tf.global_variables_initializer()
with tf.Session(config=config) as sess:
sess.run(init_all_vars)
for i in range(5000):
X_batch, y_batch = next(training_generator)
feed_dict = {X: X_batch, y: y_batch}
_, loss, m_pred, b_pred = sess.run([train_one_step, avg_loss, m, b], feed_dict=feed_dict)
if i % 500 == 0:
validation_feed_dict = {X: valid_X, y: valid_y}
valid_loss = sess.run(avg_loss, feed_dict=validation_feed_dict)
print("Iter {}: training loss = {}, validation loss = {}, m={}, b={}".format(i, loss, valid_loss, m_pred, b_pred))
test_feed_dict = {X: test_X, y: test_y}
m_pred, b_pred, loss = sess.run([m, b, avg_loss], test_feed_dict)
print("m: {}, b: {}, test loss: {}".format(m_pred, b_pred, loss))
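# Optional sanity check (our addition, not part of the original walkthrough):
# compare the parameters found by gradient descent with an ordinary
# least-squares fit computed directly on the training data with NumPy.
slope_np, intercept_np = np.polyfit(train_X, train_y, 1)
print("NumPy least-squares fit: m={:.2f}, b={:.2f}".format(slope_np, intercept_np))
print("TensorFlow fit:          m={:.2f}, b={:.2f}".format(m_pred, b_pred))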
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The previous block runs in Python and keeps a connection to an interactive Python interpreter instance, so we can mix markdown cells with other code cells and then make use of the definitions from those cells, as shown below
Step2: Magics
Step3: To run bash, you can prefix a single-line command with an exclamation mark (!), or use the corresponding cell magic for multiple lines. This is only possible on UNIX systems.
Step4: When running bash commands with !, their output can be assigned to a Python variable, giving a natural integration between Python and bash.
Step5: Other available magics are
| <ASSISTANT_TASK:>
Python Code:
from __future__ import print_function, division # Compatibility between Python 2 and 3
def imprime_division(a, b):
print(a/b)
imprime_division(4, 5)
%%latex
$\left(\frac{1}{\Gamma}\right)^{2}$
!ls -al
%%bash
touch archivoprueba.txt
ls *.txt
echo "Creation verified, the file will now be deleted."
rm archivoprueba.txt
ls archivoprueba.txt
listaarchivos = !ls
print(listaarchivos)
%lsmagic
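# Two more commonly used magics (illustrative only; the exact output depends
# on your machine): %time measures a single statement and %whos lists the
# variables currently defined in the interactive namespace.
%time sum(range(100000))
%whos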
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: If we need only the keys 'a' and 'd', we can do this
Step2: We can also specify 'subkeys' by using a dotted-syntax
Step3: The dotted-syntax can have any needed level of depth
Step4: Specifying invalid keys behavior
Step5: By default, invalid keys passed to the extract_subdict function are ignored
Step6: However, by passing True to the strict parameter of the function, invalid keys will raise a KeyError exception
Step7: Successive extractions
Step8: Now, extracting only the 'name' of each album
Step9: The result is the following
| <ASSISTANT_TASK:>
Python Code:
d = {
'a': 'A',
'b': 'B',
'c': 'C',
'd': {
'x': 'D_X',
'y': 'D_Y',
'z': {
'I': 'D_Z_I',
'II': {
'1': 'D_Z_II_1',
'2': 'D_Z_II_2'
},
'III': 'D_Z_III'
}
}
}
from subdict import extract_subdict # The main function of the library
from pprint import pprint # Just for a nice presentation here
pprint( extract_subdict(d, ['a', 'd']) )
pprint( extract_subdict(d, ['a', 'd.x', 'd.z']) )
pprint( extract_subdict(d, ['a', 'd.z.II.1']) )
person = {
'name': 'John Frusciante',
'birth': '1970-03-05',
'city': {
'name': 'New York City',
'state': {'name': 'New York', 'country': 'USA'}
},
'albums': [
{
'year': 2001,
'name': 'To Record Only Water For Ten Days',
'label': {
'name': 'Warner Bros Records',
'link': 'https://en.wikipedia.org/wiki/Warner_Bros._Records'
}
},
{
'year': 2004,
'name': 'Shadows Collide With People',
'label': {
'name': 'Warner Bros Records',
'link': 'https://en.wikipedia.org/wiki/Warner_Bros._Records'
}
},
{
'year': 2009,
'name': 'The Empyrean',
'label': {
'name': 'Record Collection',
'link': 'https://en.wikipedia.org/wiki/Record_Collection'
}
}
]
}
extract_subdict(person, ['name', 'birth', 'hair_color']) # 'hair_color' is invalid
extract_subdict(person, ['name', 'birth', 'hair_color'], strict=True)
subdict = extract_subdict(person, ['name', 'albums'])
pprint(subdict)
for index in range(len(subdict['albums'])):
subdict['albums'][index] = extract_subdict(subdict['albums'][index], ['name'])
pprint(subdict)
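# A small extra illustration (not from the original documentation): in strict
# mode invalid keys raise KeyError, which can be handled explicitly.
# 'favorite_color' is just a made-up key that does not exist in `person`.
try:
    extract_subdict(person, ['name', 'favorite_color'], strict=True)
except KeyError as error:
    print('Invalid key requested:', error)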
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Step 4
Step2: Step 5
Step3: There should be 9 containers running
Step4: Step 7
Step5: There should be an additional 5 containers running
Step6: Step 9
Step7: There should be an additional 9 containers running
| <ASSISTANT_TASK:>
Python Code:
cd ~/nexus/esip-workshop/docker/infrastructure
docker-compose up -d cassandra1
docker logs -f cassandra1
docker-compose up -d
docker ps
cd ~/nexus/esip-workshop/docker/analysis
docker-compose up -d
docker ps
cd ~/nexus/esip-workshop/docker/ingest
docker-compose up -d
docker ps
docker exec -it xd-admin /usr/local/nx-deploy-stream.sh --datasetName AVHRR_OI_L4_GHRSST_NCEI --dataDirectory /usr/local/data/nexus/avhrr/2017 --variableName analysed_sst --tilesDesired 1296
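# Optional check (our addition, not one of the workshop steps): list only the
# container names and their status to confirm that everything reports "Up".
docker ps --format "table {{.Names}}\t{{.Status}}"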
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Here we define the parameters that we use to extract all the information from the vtk files
Step2: Now we initialize the file_handler that deals with all the vtk files. Then we need to start it in order to extract the output on the corners of the parametric domain.
Step3: Then the algorithm waits for a new sequential vtk output. To emphasise the iterative procedure we make the while True cycle explicit; we think this is the best way to interact with the package. If the user answers True, they have to run a new simulation with the new parameter suggested before ([-0.29334384 -0.2312056]) and then place the new vtk output, with the sequential name, in the same folder as the others (namefile_prefix). If the user answers False, the code writes out the structures needed for the online phase. Here we respond True and then False just as an example.
| <ASSISTANT_TASK:>
Python Code:
import ezyrb as ez
output_name = 'Pressure'
weights_name = 'Weights'
namefile_prefix = '../tests/test_datasets/matlab_0'
file_format = '.vtk'
file_handler = ez.pod.Pod(output_name, weights_name, namefile_prefix, file_format)
file_handler.start()
while True:
add = input('Add a new snapshot to the database? (True or False) ')
    if add == 'True':  # input() returns a string in Python 3, so compare against the text 'True'
file_handler.add_snapshot()
else:
file_handler.write_structures()
break
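# Convenience check (our addition, standard library only): list the snapshot
# files that match the naming convention, to verify that a newly produced
# output was placed where the handler expects it.
import glob
print(sorted(glob.glob(namefile_prefix + '*' + file_format)))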
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: You will notice from the number on the left (the number 1) that this cell is different. That number means it is the first output of the program, and it is used to tell code cells apart from one another and from the text cells.
Step2: The names are fairly obvious: the first quantity, T_init, sets the initial time, while Tmax controls how far the time vector goes. Finally, dt adjusts the size of the step between one number and the next. The vector is declared with the np.arange instruction (note that we use the alias to indicate that this tool belongs to the numpy toolbox).
Step3: To modify a code cell like the one above, simply click on it and you can change things directly with your keyboard. When you want to execute (or run) its content, press shift and enter at the same time (shift + enter). If you need anything else, the notebook menu includes a help menu with all the commands.
Step4: You will notice that the wave looks quite deformed; if you search Google for a picture of the sine function, you should find that it is actually supposed to look something like this
Step5: Based on that image, it should be clear that the main problem is that we do not have enough points to cover the signal properly. This kind of observation leads to the question of exactly how many points are needed to recover the information in a signal (information theory) and, more specifically, to the aliasing phenomenon in signal processing. But that is for a future discussion and simply a side comment.
Step6: As an exercise, you should modify the values of dt and the initial and final times to see how that affects the plot. Another thing you can do is read on Google about the xlim and ylim commands and add them to the small program above to see how they affect the visualization.
Step7: You will notice that the frequency is too high and therefore cannot be appreciated properly; let's modify the final times to see this from another angle
| <ASSISTANT_TASK:>
Python Code:
import numpy as np # Alias is np
import matplotlib.pyplot as plt # Alias is plt
T_init = 0.0 # Initial time
Tmax = 10.0 # Total time
dt = 2.0 # Step
tiempos = np.arange(T_init, Tmax, dt)
print(tiempos)
T_init = 2.0 # Initial time
Tmax = 8.0 # Total time
dt = 1.0 # Step
tiempos = np.arange(T_init, Tmax, dt)
print(tiempos)
%matplotlib inline
# This is so the plots are shown inline instead of in a separate window.
T_init = 0.0 # Initial time
Tmax = 20.0 # Total time
dt = 1.0 # Step
tiempos = np.arange(T_init, Tmax, dt)
# The following command is equivalent to evaluating the sine at ALL the times above.
onda = np.sin(tiempos) # The wave evaluated at all the times. sin=sine=sinus
plt.plot(tiempos, onda) # Show on screen
onda = np.sin(tiempos) # The wave evaluated at all the times. sin=sine=sinus
plt.plot(tiempos, onda, '*-') # The asterisk means we want the points, the dash that we want them connected
T_init = 0.0 # Initial time
Tmax = 20.0 # Total time
dt = 0.1 # Step THESE ARE SMALLER NOW
tiempos = np.arange(T_init, Tmax, dt)
# The following command is equivalent to evaluating the sine at ALL the times above.
onda = np.sin(tiempos) # The wave evaluated at all the times. sin=sine=sinus
plt.plot(tiempos, onda, '*-') # Show on screen
T_init = 0.0
Tmax = 20.0
dt = 0.1
tiempos = np.arange(T_init, Tmax, dt)
f = 1.0 # Frequency
unidades = 2 * np.pi # This is the circumference of the unit circle
onda = np.sin(f * unidades * tiempos) # The wave evaluated at all the times. sin=sine=sinus
plt.plot(tiempos, onda, '*-') # Show on screen
T_init = 0.0
Tmax = 5.0
dt = 0.1
tiempos = np.arange(T_init, Tmax, dt)
f = 1.0 # Frequency in Hz
unidades = 2 * np.pi # This is the circumference of the unit circle
onda = np.sin(f * unidades * tiempos) # The wave evaluated at all the times. sin=sine=sinus
plt.plot(tiempos, onda, '*-') # Show on screen
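# Extra example (our addition): plot two frequencies on the same axes to see
# how the same sampling step dt = 0.1 represents each of them.
for f in (1.0, 2.0):
    onda = np.sin(f * unidades * tiempos)
    plt.plot(tiempos, onda, '*-', label='f = %s Hz' % f)
plt.legend()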
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Document Authors
Step2: Document Contributors
Step3: Document Publication
Step4: Document Table of Contents
Step5: 1.2. Model Name
Step6: 2. Key Properties --> Variables
Step7: 3. Key Properties --> Seawater Properties
Step8: 3.2. Ocean Freezing Point Value
Step9: 4. Key Properties --> Resolution
Step10: 4.2. Canonical Horizontal Resolution
Step11: 4.3. Number Of Horizontal Gridpoints
Step12: 5. Key Properties --> Tuning Applied
Step13: 5.2. Target
Step14: 5.3. Simulations
Step15: 5.4. Metrics Used
Step16: 5.5. Variables
Step17: 6. Key Properties --> Key Parameter Values
Step18: 6.2. Additional Parameters
Step19: 7. Key Properties --> Assumptions
Step20: 7.2. On Diagnostic Variables
Step21: 7.3. Missing Processes
Step22: 8. Key Properties --> Conservation
Step23: 8.2. Properties
Step24: 8.3. Budget
Step25: 8.4. Was Flux Correction Used
Step26: 8.5. Corrected Conserved Prognostic Variables
Step27: 9. Grid --> Discretisation --> Horizontal
Step28: 9.2. Grid Type
Step29: 9.3. Scheme
Step30: 9.4. Thermodynamics Time Step
Step31: 9.5. Dynamics Time Step
Step32: 9.6. Additional Details
Step33: 10. Grid --> Discretisation --> Vertical
Step34: 10.2. Number Of Layers
Step35: 10.3. Additional Details
Step36: 11. Grid --> Seaice Categories
Step37: 11.2. Number Of Categories
Step38: 11.3. Category Limits
Step39: 11.4. Ice Thickness Distribution Scheme
Step40: 11.5. Other
Step41: 12. Grid --> Snow On Seaice
Step42: 12.2. Number Of Snow Levels
Step43: 12.3. Snow Fraction
Step44: 12.4. Additional Details
Step45: 13. Dynamics
Step46: 13.2. Transport In Thickness Space
Step47: 13.3. Ice Strength Formulation
Step48: 13.4. Redistribution
Step49: 13.5. Rheology
Step50: 14. Thermodynamics --> Energy
Step51: 14.2. Thermal Conductivity
Step52: 14.3. Heat Diffusion
Step53: 14.4. Basal Heat Flux
Step54: 14.5. Fixed Salinity Value
Step55: 14.6. Heat Content Of Precipitation
Step56: 14.7. Precipitation Effects On Salinity
Step57: 15. Thermodynamics --> Mass
Step58: 15.2. Ice Vertical Growth And Melt
Step59: 15.3. Ice Lateral Melting
Step60: 15.4. Ice Surface Sublimation
Step61: 15.5. Frazil Ice
Step62: 16. Thermodynamics --> Salt
Step63: 16.2. Sea Ice Salinity Thermal Impacts
Step64: 17. Thermodynamics --> Salt --> Mass Transport
Step65: 17.2. Constant Salinity Value
Step66: 17.3. Additional Details
Step67: 18. Thermodynamics --> Salt --> Thermodynamics
Step68: 18.2. Constant Salinity Value
Step69: 18.3. Additional Details
Step70: 19. Thermodynamics --> Ice Thickness Distribution
Step71: 20. Thermodynamics --> Ice Floe Size Distribution
Step72: 20.2. Additional Details
Step73: 21. Thermodynamics --> Melt Ponds
Step74: 21.2. Formulation
Step75: 21.3. Impacts
Step76: 22. Thermodynamics --> Snow Processes
Step77: 22.2. Snow Aging Scheme
Step78: 22.3. Has Snow Ice Formation
Step79: 22.4. Snow Ice Formation Scheme
Step80: 22.5. Redistribution
Step81: 22.6. Heat Diffusion
Step82: 23. Radiative Processes
Step83: 23.2. Ice Radiation Transmission
| <ASSISTANT_TASK:>
Python Code:
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'uhh', 'sandbox-1', 'seaice')
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.model.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.variables.prognostic')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sea ice temperature"
# "Sea ice concentration"
# "Sea ice thickness"
# "Sea ice volume per grid cell area"
# "Sea ice u-velocity"
# "Sea ice v-velocity"
# "Sea ice enthalpy"
# "Internal ice stress"
# "Salinity"
# "Snow temperature"
# "Snow depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS-10"
# "Constant"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.target')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ice strength (P*) in units of N m{-2}"
# "Snow conductivity (ks) in units of W m{-1} K{-1} "
# "Minimum thickness of ice created in leads (h0) in units of m"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.description')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.properties')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Mass"
# "Salt"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ocean grid"
# "Atmosphere Grid"
# "Own Grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Structured grid"
# "Unstructured grid"
# "Adaptive grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite differences"
# "Finite elements"
# "Finite volumes"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Zero-layer"
# "Two-layers"
# "Multi-layers"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.seaice_categories.other')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.horizontal_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Incremental Re-mapping"
# "Prather"
# "Eulerian"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Hibler 1979"
# "Rothrock 1975"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.redistribution')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Rafting"
# "Ridging"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.dynamics.rheology')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Free-drift"
# "Mohr-Coloumb"
# "Visco-plastic"
# "Elastic-visco-plastic"
# "Elastic-anisotropic-plastic"
# "Granular"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice latent heat (Semtner 0-layer)"
# "Pure ice latent and sensible heat"
# "Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)"
# "Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pure ice"
# "Saline ice"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Conduction fluxes"
# "Conduction and radiation heat fluxes"
# "Conduction, radiation and latent heat transport"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Heat Reservoir"
# "Thermal Fixed Salinity"
# "Thermal Varying Salinity"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Floe-size dependent (Bitz et al 2001)"
# "Virtual thin ice melting (for single-category)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Prescribed salinity profile"
# "Prognostic salinity profile"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Virtual (enhancement of thermal conductivity, thin ice melting)"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Parameterised"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flocco and Feltham (2010)"
# "Level-ice melt ponds"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Albedo"
# "Freshwater"
# "Heat"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Single-layered heat diffusion"
# "Multi-layered heat diffusion"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.surface_albedo')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Parameterized"
# "Multi-band albedo"
# "Other: [Please specify]"
# TODO - please enter value(s)
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Delta-Eddington"
# "Exponential attenuation"
# "Ice radiation transmission per category"
# "Other: [Please specify]"
# TODO - please enter value(s)
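# Illustration only (the value below is a hypothetical placeholder, not a
# statement about any real model): each cell above is completed by selecting
# a property with DOC.set_id and then recording an answer with DOC.set_value.
DOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories')
DOC.set_value(5)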
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Reset the angle to zero. Display and change the id of a motor.
| <ASSISTANT_TASK:>
Python Code:
import time
import pypot.dynamixel

ports = pypot.dynamixel.get_available_ports()
if not ports:
    raise IOError('no port found!')
print("Ports found: %s" % ports)
for port in ports:
    print('Connecting on port:', port)
    dxl_io = pypot.dynamixel.DxlIO(port)
    motors = dxl_io.scan()
    print(" %s motors found : %s\n" % (len(motors), motors))
    dxl_io.close()
#57142 => 1000000
#return_delay_time => 0
def motor_config():
    ports = pypot.dynamixel.get_available_ports()
    if len(ports) == 1:
        print("Connecting to port %s" % ports[0])
        dxl_io = pypot.dynamixel.DxlIO(ports[0])
        print('Scanning motors (this can take a few seconds)')
        motors = dxl_io.scan()
        if len(motors) == 1:
            print("OK, a single motor found: %s" % motors[0])
            for k, v in dxl_io.__dict__.items():
                print(" - %s : %s" % (k, v))
            dxl_io.enable_torque(motors)
            dxl_io.set_moving_speed({motors[0]: 200})
            print("Moving motor %s to 90°" % motors[0])
            dxl_io.set_goal_position({motors[0]: 90})
            while dxl_io.is_moving((motors[0],))[0]:
                time.sleep(0.02)
            print("Moving motor %s to -90°" % motors[0])
            dxl_io.set_goal_position({motors[0]: -90})
            while dxl_io.is_moving((motors[0],))[0]:
                time.sleep(0.02)
            print("Moving motor %s to 0°" % motors[0])
            dxl_io.set_goal_position({motors[0]: 0})
            while dxl_io.is_moving((motors[0],))[0]:
                time.sleep(0.02)
            dxl_io.disable_torque(motors)
            target_id = input("Change the id of motor %s: " % motors[0])
            try:
                target_id = int(target_id)
                dxl_io.change_id({motors[0]: target_id})
                print("ID changed")
            except ValueError:
                print("ID not changed")
            dxl_io.close()
        else:
            print("Error, %s motors connected: %s" % (len(motors), motors))
    else:
        print("Error, %s ports found: %s" % (len(ports), ports))
motor_config()
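# Optional speed-up (a sketch; it assumes, as in pypot's quickstart examples,
# that scan() accepts an explicit range of ids): probing only a limited id
# range is much faster than a full bus scan when you know which ids to expect.
import pypot.dynamixel
ports = pypot.dynamixel.get_available_ports()
if ports:
    dxl_io = pypot.dynamixel.DxlIO(ports[0])
    print(dxl_io.scan(range(25)))  # only probe ids 0-24
    dxl_io.close()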
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Part 1
Step2: To cluster the images, we'll need to convert the images into a format we can pass into our KMeans model, which expects 1D feature vectors. For this assignment, we'll just flatten our image. This is akin to cutting each image up into rows, and concatenating the rows end-to-end to form one long, skinny image.
Step3: Now run the code box below using different numbers of clusters, and note how the clustering results change.
Step4: 1.1) Effect of number of clusters
Step5: 2.1) The Importance of Normalization
Step6: 2.2) Interpreting Clusters
Step7: Analyzing Centroids
Step8: Visualizing Centroids
Step9: For Q2-Q4, please answer using k=3.
Step10: Now, let's see where these countries would actually have clustered.
| <ASSISTANT_TASK:>
Python Code:
!pip install datacommons --upgrade --quiet
!pip install datacommons_pandas --upgrade --quiet
import datacommons
import datacommons_pandas
import numpy as np
import pandas as pd
# for visualization
import matplotlib.pyplot as plt
import seaborn as sns
# for clustering
from sklearn.cluster import KMeans
# load dataset
from keras.datasets import fashion_mnist
(fashion_mnist, labels), _ = fashion_mnist.load_data()
# limit to first 2000 images so runtimes are reasonable
fashion_mnist = fashion_mnist[:2000, :, :]
# Show the first 25 images as a sample
print(f"Loaded {fashion_mnist.shape[0]} images. The first 25 are:")
sns.set(rc={'figure.figsize':(11,8)})
for i in range(25):
# define subplot
plt.subplot(5, 5, i+1)
# plot raw pixel data
plt.imshow(fashion_mnist[i], cmap=plt.get_cmap('gray'))
plt.axis("off")
plt.show()
# squash images into 1D
fashion_data = fashion_mnist.reshape((-1, 28*28))
# set number of clusters
k = 2 #Change me!
# cluster the images
kmeans = KMeans(n_clusters=k).fit(fashion_data)
# create a dictionary of clusters to images
clusters = {n:[] for n in range(k)}
for i in range(fashion_data.shape[0]):
key = kmeans.labels_[i]
value = fashion_data[i,:].reshape(1, 28, 28).squeeze()
clusters[key].append(value)
# display images from each cluster
sns.set(rc={'figure.figsize':(6,4)})
for cluster_num, images in clusters.items():
print(f'Cluster {cluster_num} contains {len(images)} images. The first 25 are:')
for i in range(min(25, len(images))):
# define subplot
plt.subplot(5, 5, i+1)
# plot raw pixel data
plt.imshow(images[i], cmap=plt.get_cmap('gray'))
plt.axis("off")
# show the figure
plt.show()
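# Optional diagnostic (our addition): an "elbow" plot of the KMeans inertia for
# several values of k can help choose a reasonable number of clusters. This
# re-fits the model a few times, so it takes a moment on 2000 images.
inertias = []
k_values = list(range(2, 9))
for k_try in k_values:
    inertias.append(KMeans(n_clusters=k_try).fit(fashion_data).inertia_)
plt.plot(k_values, inertias, 'o-')
plt.xlabel('number of clusters k')
plt.ylabel('inertia (within-cluster sum of squares)')
plt.show()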
dcids = [
'country/AGO', 'country/ALB', 'country/ARG', 'country/ARM', 'country/AUS',
'country/AZE', 'country/BDI', 'country/BGD', 'country/BGR', 'country/BIH',
'country/BLZ', 'country/BOL', 'country/BRA', 'country/BTN', 'country/BWA',
'country/CAN', 'country/CHL', 'country/CHN', 'country/CMR', 'country/COD',
'country/COG', 'country/COL', 'country/CRI', 'country/CZE', 'country/DOM',
'country/DZA', 'country/ECU', 'country/EGY', 'country/ETH', 'country/FJI',
'country/GAB', 'country/GEO', 'country/GHA', 'country/GTM', 'country/GUY',
'country/HND', 'country/IDN', 'country/IND', 'country/IRN', 'country/JAM',
'country/JOR', 'country/JPN', 'country/KAZ', 'country/KEN', 'country/KGZ',
'country/KIR', 'country/KOR', 'country/LAO', 'country/LBN', 'country/LCA',
'country/LSO', 'country/MAR', 'country/MDA', 'country/MDG', 'country/MEX',
'country/MKD', 'country/MLI', 'country/MMR', 'country/MNE', 'country/MNG',
'country/MOZ', 'country/MRT', 'country/MWI', 'country/MYS', 'country/NAM',
'country/NER', 'country/NGA', 'country/NIC', 'country/NPL', 'country/PAK',
'country/PAN', 'country/PER', 'country/PHL', 'country/PNG', 'country/PRY',
'country/ROU', 'country/RWA', 'country/SDN', 'country/SLV', 'country/SRB',
'country/SWZ', 'country/SYR', 'country/THA', 'country/TJK', 'country/TKM',
'country/TLS', 'country/TON', 'country/TTO', 'country/TUN', 'country/TUR',
'country/TZA', 'country/UGA', 'country/UKR', 'country/USA', 'country/UZB',
'country/VNM', 'country/VUT', 'country/WSM', 'country/YEM', 'country/ZAF',
'country/ZMB', 'country/ZWE'
]
stat_vars_to_query = [
"Amount_Emissions_CarbonDioxide_PerCapita",
"LifeExpectancy_Person",
"Count_Person_IsInternetUser_PerCapita",
"GrowthRate_Count_Person",
"Count_Person_Upto4Years_Overweight_AsFractionOf_Count_Person_Upto4Years",
"GiniIndex_EconomicActivity",
"Count_Product_MobileCellularSubscription_AsFractionOf_Count_Person",
"Amount_EconomicActivity_GrossDomesticProduction_Nominal_PerCapita",
"FertilityRate_Person_Female",
"Count_Death_AsAFractionOfCount_Person"
]
df = datacommons_pandas.build_multivariate_dataframe(dcids,stat_vars_to_query)
# swap index for names
df.insert(0, 'name', df.index.map(datacommons.get_property_values(df.index, 'name')).str[0])
df.set_index('name', drop=True, inplace=True)
display(df)
# normalize the values
mean_df = df.mean()
std_df = df.std()
normalized_df = ((df-mean_df)/std_df)
display(normalized_df)
# Clustering using K-means
n_clusters = 3
kmeans_model = KMeans(n_clusters).fit(normalized_df)
labels_df = pd.DataFrame(data=np.transpose(kmeans_model.labels_), index=normalized_df.index, columns=['cluster'])
# list countries in each cluster:
for i in range(n_clusters):
print(f'Countries in Cluster {i}:')
print(labels_df.index[labels_df['cluster']==i].tolist())
print(len(labels_df.index[labels_df['cluster']==i].tolist()))
# Get centroids of each cluster.
for i in range(n_clusters):
print(f'\nCluster {i}:')
# display non-normalized mean values
mean_to_display = df[labels_df['cluster']==i].mean()
display(mean_to_display)
# get normalized values
normalized_means = []
for i in range(n_clusters):
# calculate normalized values for the next part
mean_normalized = normalized_df[labels_df['cluster']==i].mean()
normalized_means.append(mean_normalized.to_frame().transpose())
normalized_means_df = pd.concat(normalized_means)
# For better visualization, we'll multiply the following features by -1
# so that a higher value is associated with more development. We flip a copy,
# so that normalized_means_df keeps the original signs, since it is reused
# below to assign the holdout countries to their nearest cluster centroid.
features_to_flip = [
    "GiniIndex_EconomicActivity",
    "GrowthRate_Count_Person",
    "FertilityRate_Person_Female"
]
heatmap_df = normalized_means_df.copy()
for column in features_to_flip:
    heatmap_df[column] *= -1
# Plot heatmap
sns.set(rc={'figure.figsize':(11.7,8.27)})
sns.set(font_scale = 1.5)
ax = sns.heatmap(heatmap_df.to_numpy(), linewidth=0.5, xticklabels=heatmap_df.columns, center=0)
ax.set_ylabel("Cluster")
plt.show()
dcids_holdout = ['country/HTI','country/DEU', 'country/IRQ']
# get values for each holdout
df_holdout = datacommons_pandas.build_multivariate_dataframe(dcids_holdout,stat_vars_to_query)
df_holdout = df_holdout.dropna()
# swap index for names
df_holdout.insert(0, 'name', df_holdout.index.map(datacommons.get_property_values(df_holdout.index, 'name')).str[0])
df_holdout.set_index('name', drop=True, inplace=True)
# show values
display(df_holdout)
# normalized version for clustering later
normalized_df_holdout = ((df_holdout-mean_df)/std_df)
# find which cluster centroid is closest
for country in df_holdout.index:
country_data = normalized_df_holdout.loc[country].to_numpy()
country_data = country_data[np.newaxis, :]
difference = normalized_means_df.to_numpy() - country_data
distance = np.linalg.norm(difference,axis=1)
cluster_membership = np.argmin(distance, axis=0)
print(f"{country} belongs to cluster {cluster_membership}")
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: The open() function opens a given filename as an object in 'read mode'.
Step2: To open a file in write mode, pass the w parameter to the open() function.
Step3: Unless otherwise specified, all new files are created in the current working directory.
Step4: To append a file instead of overwriting it, use append mode by passing the a parameter to the open() function.
Step5: To handle more data types than strings, the shelve module is often used
Step6: shelve files work like dictionaries, but are stored in binary files, which cannot be easily understood in text editors
| <ASSISTANT_TASK:>
Python Code:
import os
os.listdir(os.path.abspath('files'))
dictionaryFile = open(os.path.abspath('files/dictionary.txt')) # Open the file
print(dictionaryFile.read()) # Read the file
print(dictionaryFile.readline()) # Read a line in the file until newline or EOF
content = dictionaryFile.read() # Store the content in a variable
helloFile = open(os.path.abspath('files/helloworld.txt'), 'w') # Create the file in writeable mode.
helloFile.write('Hello World!') # Writes to file, and returns bytes used exactly as is (no newline)
helloFile.close() # Closes/saves files
helloFile = open(os.path.abspath('files/helloworld.txt'), 'r') # Open the file in read mode
print(helloFile.read()) # Print file contents
helloFile = open(os.path.abspath('files/helloworld.txt'), 'w') # Open file for rewriting
helloFile.write('Hello Another World!') # Writes to file, and returns bytes used exactly as is (no newline)
helloFile.close() # Closes/saves files
helloFile = open(os.path.abspath('files/helloworld.txt'), 'r') # Open the file in read mode
print(helloFile.read()) # Final file contents
os.getcwd()
baconFile = open(os.path.abspath('files/bacon.txt'), 'w') # Create file in write mode
baconFile.write('Bacon is not a vegetable.') # Write string to file
baconFile.close() # Save/close file
baconFile = open(os.path.abspath('files/bacon.txt'), 'r') # Reopen file in read mode
print(baconFile.read()) # Read the current file
baconFile = open(os.path.abspath('files/bacon.txt'), 'a') # Reopen file in append mode
baconFile.write('\nBacon is delicious.') # Write string to file
baconFile.close() # Save/close file
baconFile = open(os.path.abspath('files/bacon.txt'), 'r') # Reopen file in read mode
print(baconFile.read()) # Read the appended file
# Define base directory
defaultpath = os.path.expanduser('~/Dropbox/learn/books/Python/AutomateTheBoringStuffWithPython')
# Change directory to the 'files' subdirectory relative to the default path
if (os.getcwd() == defaultpath):
    os.chdir('files')
else:
os.chdir(defaultpath + '/files')
print(os.getcwd())
import shelve # import the shelve module
shelfFile = shelve.open('mycatdata')
# Create a shelf file that works like a dictionary
shelfFile['cats'] = ['Zophie', 'Pooka', 'Simon', 'Fat-tail', 'Cleo']
shelfFile.close()
shelfFile = shelve.open('mycatdata') # Open shelve file
list(shelfFile.keys())
list(shelfFile.values())
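# Added for completeness: shelf values support normal dictionary-style access; close the
# shelf when finished so the underlying file is released.
print(shelfFile['cats'])
shelfFile.close()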
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step2: Introduction
Step14: Training
Step15: Start training. Training takes about 20-30 minutes on an n1-standard-1 GCP VM.
Step16: In epoch 1, the model predicted almost everything to be 'n'. This makes sense because the vast majority of targets is "no punctuation" for each word, so betting on that already gives good overall accuracy, even though such a model is useless.
Step17: Tensorboard is good, but the curves are not saved with the notebook. We can use Datalab's library to list and plot events.
Step20: From the curves above, we got the best validation results around step 4000, followed in some runs by a little over-fitting.
Step21: View accuracy and confusion matrix.
Step22: Confusion matrix after removing "no punctuation".
Step27: Many of the "," are mistakenly predicted as "no punctuation", probably because in many cases the sentence is syntactically correct either with or without the comma. There is some confusion between "," and ".", meaning that the model "knows" it is a break in the sentence but mistakenly chooses a comma or a period. 65% of question marks are predicted correctly. For that we can give credit to the LSTM model, because it can "remember" the beginning of a sentence (which, what, where, etc.) even if it is long.
Step28: Let's play with three paragraphs. The first and second are single sentences; the third one contains multiple sentences.
Step29: The last prediction is actually somewhat incorrect. It should be
Step30: As a convenience, the predictor can pick random sentences from a test file.
Step31: Clean up
| <ASSISTANT_TASK:>
Python Code:
# Download and unzip data.
!mkdir -p /content/datalab/punctuation/tmp
!mkdir -p /content/datalab/punctuation/data
!mkdir -p /content/datalab/punctuation/datapreped
!wget -q -P /content/datalab/punctuation/tmp/ https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/packages/corpora/europarl_raw.zip
!wget -q -P /content/datalab/punctuation/tmp/ https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/packages/corpora/comtrans.zip
!unzip -q -o /content/datalab/punctuation/tmp/europarl_raw.zip -d /content/datalab/punctuation/tmp
!unzip -q -o /content/datalab/punctuation/tmp/comtrans.zip -d /content/datalab/punctuation/tmp
!cp /content/datalab/punctuation/tmp/europarl_raw/english/* /content/datalab/punctuation/data
# We only need English from `comtran` data. Extract English text only.
with open('/content/datalab/punctuation/tmp/comtrans/alignment-en-fr.txt', 'r') as f_in, \
open('/content/datalab/punctuation/data/comtrans.txt', 'w') as f_out:
num_lines = 0
for l in f_in.readlines():
if num_lines == 0:
f_out.write(l)
num_lines = (0 if num_lines == 2 else num_lines + 1)
Prepare data by cleaning up text.
import glob
import os
from random import randint
import re
import string
def prep_data(corpora_path, out_dir):
Clean up raw data and split them into train, validation, and test source.
printable = set(string.printable)
all_corpora_files = glob.glob(corpora_path)
lines = []
for corpora_file in all_corpora_files:
with open(corpora_file, 'r') as f:
lines += f.readlines()
dest_train = os.path.join(out_dir, 'train.txt')
dest_valid = os.path.join(out_dir, 'valid.txt')
dest_test = os.path.join(out_dir, 'test.txt')
valid_lines = 0
test_lines = 0
train_lines = 0
with open(dest_train, 'w') as f_train, open(dest_valid, 'w') as f_valid, open(dest_test, 'w') as f_test:
for l in lines:
s = l.strip()
# Remove "bad" sentences.
if s.endswith(')') and s.startswith('('):
continue
if not s.endswith('.') and not s.endswith('!') and not s.endswith('?'):
continue
if s.find('...') != -1:
continue
# Remove quotes, apostrophes, leading dashes.
s = re.sub('"', '', s)
s = re.sub(' \' s ', 's ', s)
s = re.sub('\'', '', s)
s = re.sub('^- ', '', s)
# Clean double punctuations.
s = re.sub('\? \.', '\?', s)
s = re.sub('\! \.', '\!', s)
# Extract human names to reduce vocab size. There are many names like 'Mrs Plooij-van Gorsel'
# 'Mr Cox'.
s = re.sub('Mr [\w]+ [A-Z][\w]+ ', '[humanname] ', s)
s = re.sub('Mrs [\w]+ [A-Z][\w]+ ', '[humanname] ', s)
s = re.sub('Mr [\w]+ ', '[humanname] ', s)
s = re.sub('Mrs [\w]+ ', '[humanname] ', s)
# Remove brackets and contents inside.
s = re.sub('\(.*\) ', '', s)
s = re.sub('\(', '', s)
s = re.sub('\)', '', s)
# Extract numbers to reduce the vocab size.
s = re.sub('[0-9\.]+ ', '[number] ', s)
# Replace i.e., p.m., a.m. to reduce confusion on period.
s = re.sub(' i\.e\.', ' for example', s)
s = re.sub(' p\.m\.', ' pm', s)
s = re.sub(' a\.m\.', ' am', s)
# Remove unprintable characters.
s = filter(lambda x: x in printable, s)
s = s.lower()
# For every 3 sentences we cut a new line to simulate a paragraph.
# Produce train/validation/test sets by 20:2:78
r = randint(0,50)
if r < 10:
valid_lines += 1
sep = '\n' if (valid_lines % 3) == 0 else ' '
f_valid.write(s + sep)
elif r == 11:
test_lines += 1
sep = '\n' if (test_lines % 3) == 0 else ' '
f_test.write(s + sep)
else:
train_lines += 1
sep = '\n' if (train_lines % 3) == 0 else ' '
f_train.write(s + sep)
prep_data('/content/datalab/punctuation/data/*', '/content/datalab/punctuation/datapreped')
# We deal with limited punctuations only because of limited training data.
PUNCTUATIONS = (u'.', u',', u'?', u'!', u':')
# `n` means no punctuation.
TARGETS = list(PUNCTUATIONS) + ['n']
# Set vocab size to remove low frequency words. Roughly with 10000 vocab, words with less than three counts are excluded.
VOCAB_SIZE = 10000
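# For reference (added): the label ids the model will predict, in order.
print({p: i for i, p in enumerate(TARGETS)})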
Helper functions for reading input data.
import collections
import os
import tensorflow as tf
def read_words(filename):
Read words from file.
Args:
filename: path to the file to read words from.
Returns:
Words split by white space.
with tf.gfile.GFile(filename, "r") as f:
x = f.read().decode("utf-8").replace("\n", " <eos> ").split()
if x[-1] != '<eos>':
x.append('<eos>')
indices = [i for i, w in enumerate(x) if w in PUNCTUATIONS]
# The next word after a punctuation is an important signal. We switch the punctuation
# with next word so it can be used as part of the context.
for i in indices:
x[i], x[i+1] = x[i+1], x[i]
return x
def build_vocab(filename):
Build vocabulary from training data file.
Args:
filename: path to the file to read words from.
Returns:
A dict with key being words and value being indices.
x = read_words(filename)
counter = collections.Counter(x)
count_pairs = sorted(counter.items(), key=lambda a: (-a[1], a[0]))
count_pairs = count_pairs[:VOCAB_SIZE-1]
words, _ = list(zip(*count_pairs))
word_to_id = dict(zip(words, range(len(words))))
word_to_id['<unk>'] = VOCAB_SIZE - 1
return word_to_id
def file_to_word_and_punc_ids(filename, word_to_id):
Produce indices from words in file. x are indices for words, and y are indices for punctuations.
Args:
filename: path to the file to read words from.
word_to_id: the vocab to indices dict.
Returns:
A pair. First element is the words indices. Second element is the target punctuation indices.
x_words = read_words(filename)
x_id = [word_to_id[w] if w in word_to_id else word_to_id['<unk>'] for w in x_words]
target_to_id = {p:i for i, p in enumerate(TARGETS)}
y_words = x_words[1:] + ['padding']
y_puncts = ['n' if elem not in PUNCTUATIONS else elem for elem in y_words]
y_id = [target_to_id[p] for p in y_puncts]
return x_id, y_id
def content_to_word_ids(content, word_to_id):
Produce indices from words from a given string.
Args:
filename: path to the file to read words from.
word_to_id: the vocab to indices dict.
Returns:
Words indices.
x = content.decode("utf-8").replace("\n", " <eos> ").split()
indices = [i for i, w in enumerate(x) if w in PUNCTUATIONS]
for i in indices:
x[i], x[i+1] = x[i+1], x[i]
x_id = [word_to_id[w] if w in word_to_id else word_to_id['<unk>'] for w in x]
return x_id
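# Quick sanity check of the helpers above (added; the file paths come from the prep step
# earlier in this notebook, so this assumes that cell has already run).
_vocab = build_vocab('/content/datalab/punctuation/datapreped/train.txt')
_xs, _ys = file_to_word_and_punc_ids('/content/datalab/punctuation/datapreped/valid.txt', _vocab)
print('vocab size: {}, tokens: {}, targets: {}'.format(len(_vocab), len(_xs), len(_ys)))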
The training model.
import tensorflow as tf
import json
class TrainingConfig(object):
init_scale = 0.1
learning_rate = 1.0
max_grad_norm = 5
num_layers = 2
num_steps = 50
hidden_size = 150
max_epoch =20
max_max_epoch = 25
keep_prob = 0.5
lr_decay = 0.7
batch_size = 100
class TrainingInput(object):
The input data producer.
def _make_input_producer(self, raw_data, batch_size, num_steps, name=None):
with tf.name_scope(name, "InputProducer"):
raw_data = tf.convert_to_tensor(raw_data, name="raw_data", dtype=tf.int32)
data_len = tf.size(raw_data)
batch_len = data_len // batch_size
data = tf.reshape(raw_data[0 : batch_size * batch_len], [batch_size, batch_len])
epoch_size = (batch_len - 1) // num_steps
epoch_size = tf.identity(epoch_size, name="epoch_size")
i = tf.train.range_input_producer(epoch_size, shuffle=False).dequeue()
x = tf.strided_slice(data, [0, i * num_steps], [batch_size, (i + 1) * num_steps])
x.set_shape([batch_size, num_steps])
return x
def __init__(self, config, data_x, data_y, name=None):
self.epoch_size = ((len(data_x) // config.batch_size) - 1) // config.num_steps
self.input_data = self._make_input_producer(data_x, config.batch_size, config.num_steps, name=name)
self.targets = self._make_input_producer(data_y, config.batch_size, config.num_steps, name=name)
class PuctuationModel(object):
The Punctuation training/evaluation model.
def __init__(self, is_training, config, input_):
self._input = input_
batch_size = config.batch_size
num_steps = config.num_steps
size = config.hidden_size
def lstm_cell():
return tf.contrib.rnn.BasicLSTMCell(size, forget_bias=0.0, state_is_tuple=True)
attn_cell = lstm_cell
if is_training and config.keep_prob < 1:
def attn_cell():
return tf.contrib.rnn.DropoutWrapper(lstm_cell(), output_keep_prob=config.keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([attn_cell() for _ in range(config.num_layers)], state_is_tuple=True)
self._initial_state = cell.zero_state(batch_size, tf.float32)
embedding = tf.get_variable("embedding", [VOCAB_SIZE, size], dtype=tf.float32)
inputs = tf.nn.embedding_lookup(embedding, input_.input_data)
if is_training and config.keep_prob < 1:
inputs = tf.nn.dropout(inputs, config.keep_prob)
inputs = tf.unstack(inputs, num=num_steps, axis=1)
outputs, state = tf.contrib.rnn.static_rnn(cell, inputs, initial_state=self._initial_state)
output = tf.reshape(tf.concat(axis=1, values=outputs), [-1, size])
softmax_w = tf.get_variable("softmax_w", [size, len(TARGETS)], dtype=tf.float32)
softmax_b = tf.get_variable("softmax_b", [len(TARGETS)], dtype=tf.float32)
logits = tf.matmul(output, softmax_w) + softmax_b
self._predictions = tf.argmax(logits, 1)
self._targets = tf.reshape(input_.targets, [-1])
loss = tf.contrib.legacy_seq2seq.sequence_loss_by_example(
[logits],
[tf.reshape(input_.targets, [-1])],
[tf.ones([batch_size * num_steps], dtype=tf.float32)])
self._cost = cost = tf.reduce_sum(loss) / batch_size
self._final_state = state
if not is_training:
return
self._lr = tf.Variable(0.0, trainable=False)
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), config.max_grad_norm)
optimizer = tf.train.GradientDescentOptimizer(self._lr)
self._train_op = optimizer.apply_gradients(
zip(grads, tvars),
global_step=tf.contrib.framework.get_or_create_global_step())
self._new_lr = tf.placeholder(tf.float32, shape=[], name="new_learning_rate")
self._lr_update = tf.assign(self._lr, self._new_lr)
def assign_lr(self, session, lr_value):
session.run(self._lr_update, feed_dict={self._new_lr: lr_value})
@property
def input(self):
return self._input
@property
def initial_state(self):
return self._initial_state
@property
def final_state(self):
return self._final_state
@property
def cost(self):
return self._cost
@property
def predictions(self):
return self._predictions
@property
def targets(self):
return self._targets
@property
def lr(self):
return self._lr
@property
def train_op(self):
return self._train_op
The trainer.
import numpy as np
def run_epoch(session, model, num_steps, word_to_id, is_eval=False):
Runs the model on the given data for one epoch.
costs = 0.0
iters = 0
state = session.run(model.initial_state)
fetches = {
"cost": model.cost,
"final_state": model.final_state,
"predictions": model.predictions,
"targets": model.targets,
}
if is_eval is False:
fetches["train_op"] = model.train_op
confusion_matrix = np.zeros(shape=(len(TARGETS),len(TARGETS)), dtype=np.int64)
for step in range(model.input.epoch_size):
feed_dict = {}
# Set the state back to model after each run.
for i, (c, h) in enumerate(model.initial_state):
feed_dict[c] = state[i].c
feed_dict[h] = state[i].h
vals = session.run(fetches, feed_dict)
cost = vals["cost"]
state = vals["final_state"]
targets = vals["targets"]
predictions = vals['predictions']
for t, p in zip(targets, predictions):
confusion_matrix[t][p] += 1
costs += cost
iters += num_steps
if is_eval is True:
for i, t in enumerate(confusion_matrix):
print('%s --- total: %d, correct: %d, accuracy: %.3f, ' % (TARGETS[i], sum(t), t[i], float(t[i]) / sum(t)))
# Costs are calculated as cross-entropy loss.
# Returns perplexity value (https://en.wikipedia.org/wiki/Perplexity), which is a common measurements on language models.
return np.exp(costs / iters), confusion_matrix
def train(train_data_path, validation_data_path, save_path):
Train the model and save a checkpoint at the end.
word_to_id = build_vocab(train_data_path)
train_data_x, train_data_y = file_to_word_and_punc_ids(train_data_path, word_to_id)
valid_data_x, valid_data_y = file_to_word_and_punc_ids(validation_data_path, word_to_id)
config = TrainingConfig()
with tf.Graph().as_default():
initializer = tf.random_uniform_initializer(-config.init_scale, config.init_scale)
with tf.name_scope("Train"):
train_input = TrainingInput(config=config, data_x=train_data_x, data_y=train_data_y, name="TrainInput")
with tf.variable_scope("Model", reuse=None, initializer=initializer):
train_model = PuctuationModel(is_training=True, config=config, input_=train_input)
tf.summary.scalar("Training_Loss", train_model.cost)
tf.summary.scalar("Learning_Rate", train_model.lr)
with tf.name_scope("Valid"):
valid_input = TrainingInput(config=config, data_x=valid_data_x, data_y=valid_data_y, name="ValidInput")
with tf.variable_scope("Model", reuse=True, initializer=initializer):
valid_model = PuctuationModel(is_training=False, config=config, input_=valid_input)
tf.summary.scalar("Validation_Loss", valid_model.cost)
sv = tf.train.Supervisor(logdir=save_path)
with sv.managed_session() as session:
for i in range(config.max_max_epoch):
lr_decay = config.lr_decay ** max(i + 1 - config.max_epoch, 0.0)
train_model.assign_lr(session, config.learning_rate * lr_decay)
print("Epoch: %d Learning rate: %.3f" % (i + 1, session.run(train_model.lr)))
train_perplexity, _ = run_epoch(session, train_model, config.num_steps, word_to_id)
print("Epoch: %d Train Perplexity: %.3f" % (i + 1, train_perplexity))
valid_perplexity, _ = run_epoch(session, valid_model, config.num_steps, word_to_id, is_eval=True)
print("Epoch: %d Valid Perplexity: %.3f" % (i + 1, valid_perplexity))
model_file_prefix = sv.saver.save(session, save_path, global_step=sv.global_step)
word_to_id_file = os.path.join(os.path.dirname(save_path), 'word_to_id.json')
with open(word_to_id_file, 'w') as outfile:
json.dump(word_to_id, outfile)
return model_file_prefix
# Delete the model directory if it exists so it always trains from beginning.
!rm -r -f /content/datalab/punctuation/model
model_dir = '/content/datalab/punctuation/model'
saved_model_path = model_dir + '/punctuation'
model_file_prefix = train(
'/content/datalab/punctuation/datapreped/train.txt',
'/content/datalab/punctuation/datapreped/valid.txt',
saved_model_path)
# Start a tensorboard to see the curves in Datalab.
from google.datalab.ml import TensorBoard
tb = TensorBoard.start(model_dir)
from google.datalab.ml import Summary
summary = Summary(model_dir)
summary.list_events()
summary.plot(event_names=['Train/Training_Loss', 'Valid/Validation_Loss'])
Run the model with some test data.
import os
def run_eval(model_file_prefix, test_data_path):
Run evaluation on test data.
word_to_id_file = os.path.join(os.path.dirname(model_file_prefix), 'word_to_id.json')
with open(word_to_id_file, 'r') as f:
word_to_id = json.load(f)
test_data_x, test_data_y = file_to_word_and_punc_ids(test_data_path, word_to_id)
eval_config = TrainingConfig()
eval_config.batch_size = 1
eval_config.num_steps = 1
with tf.Graph().as_default():
with tf.name_scope("Test"):
test_input = TrainingInput(config=eval_config, data_x=test_data_x, data_y=test_data_y, name="TestInput")
with tf.variable_scope("Model", reuse=None):
mtest = PuctuationModel(is_training=False, config=eval_config, input_=test_input)
logdir=os.path.join(os.path.dirname(model_file_prefix), 'eval')
sv = tf.train.Supervisor(logdir=logdir)
with sv.managed_session() as session:
sv.saver.restore(session, model_file_prefix)
test_perplexity, cm_data = run_epoch(session, mtest, 1, word_to_id, is_eval=True)
return cm_data
from google.datalab.ml import ConfusionMatrix
from pprint import pprint
cm_data = run_eval(model_file_prefix, '/content/datalab/punctuation/datapreped/test.txt')
pprint(cm_data.tolist())
cm = ConfusionMatrix(cm_data, TARGETS)
cm.plot()
cm_data_puncuations = cm_data.tolist()
for i, r in enumerate(cm_data_puncuations):
cm_data_puncuations[i] = r[:-1]
cm_data_puncuations = cm_data_puncuations[:-1]
ConfusionMatrix(cm_data_puncuations, TARGETS[:-1]).plot()
import tensorflow as tf
class PredictModel(object):
The Prediction model.
def __init__(self, config):
self._input = tf.placeholder(shape=[1, 1], dtype=tf.int64)
size = config.hidden_size
def lstm_cell():
return tf.contrib.rnn.BasicLSTMCell(size, forget_bias=0.0, state_is_tuple=True)
cell = tf.contrib.rnn.MultiRNNCell([lstm_cell() for _ in range(config.num_layers)], state_is_tuple=True)
self._initial_state = cell.zero_state(1, tf.float32)
embedding = tf.get_variable("embedding", [VOCAB_SIZE, size], dtype=tf.float32)
inputs = tf.nn.embedding_lookup(embedding, self._input)
inputs = tf.unstack(inputs, num=1, axis=1)
outputs, self._final_state = tf.contrib.rnn.static_rnn(cell, inputs, initial_state=self._initial_state)
output = tf.reshape(tf.concat(axis=1, values=outputs), [-1, size])
softmax_w = tf.get_variable("softmax_w", [size, len(TARGETS)], dtype=tf.float32)
softmax_b = tf.get_variable("softmax_b", [len(TARGETS)], dtype=tf.float32)
logits = tf.matmul(output, softmax_w) + softmax_b
self._prediction = tf.argmax(logits, 1)
@property
def input(self):
return self._input
@property
def initial_state(self):
return self._initial_state
@property
def final_state(self):
return self._final_state
@property
def prediction(self):
return self._prediction
The Predictor that runs the prediction model.
import json
import os
import random
class Predictor(object):
def __init__(self, model_file_prefix):
word_to_id_file = os.path.join(os.path.dirname(model_file_prefix), 'word_to_id.json')
with open(word_to_id_file, 'r') as f:
self._word_to_id = json.load(f)
config = TrainingConfig()
with tf.Graph().as_default():
with tf.variable_scope("Model", reuse=None):
self._model = PredictModel(config=config)
saver = tf.train.Saver()
self._session = tf.Session()
saver.restore(self._session, model_file_prefix)
def _get_predicted_until_punc(self, min_steps, data_x):
state = self._session.run(self._model.initial_state)
fetches = {
"final_state": self._model.final_state,
"prediction": self._model.prediction,
}
predicted_puncs = []
step = 0
for x in data_x:
feed_dict = {}
for i, (c, h) in enumerate(self._model.initial_state):
feed_dict[c] = state[i].c
feed_dict[h] = state[i].h
feed_dict[self._model.input] = [[x]]
vals = self._session.run(fetches, feed_dict)
state = vals["final_state"]
prediction = vals["prediction"]
predicted = TARGETS[prediction[0]]
predicted_puncs.append(predicted)
step += 1
if predicted != 'n' and step > min_steps:
break
return predicted_puncs
def _apply_puncts_to_original(self, original, inserted):
current_index = 0
punc_positions = {}
for w in inserted.split():
if w in PUNCTUATIONS:
punc_positions[current_index] = w
else:
current_index += 1
words = []
for i, w in enumerate(original.split() + ['']):
if i in punc_positions:
words.append(punc_positions[i])
words.append(w)
return ' '.join(words)
def predict(self, content):
Insert punctuations with given string.
content = content.strip().lower()
for p in PUNCTUATIONS:
content = content.replace(' ' + p, '')
prediction_source = content
prediction_result = ''
content = '<eos> ' + content + ' <eos>'
min_step = 0
while True:
data_x = content_to_word_ids(content, self._word_to_id)
puncts = self._get_predicted_until_punc(min_step, data_x)
if len(data_x) == len(puncts):
content = content.replace('. <eos> ', '').replace(' <eos>', ' ' + puncts[-1]) + '\n'
prediction_result = self._apply_puncts_to_original(prediction_source, content)
break
else:
words1 = [self._word_to_id.keys()[self._word_to_id.values().index(data_x[index])] for index in range(len(puncts) - 1)]
indices = [i for i, w in enumerate(words1) if w in PUNCTUATIONS]
for i in indices:
words1[i], words1[i-1] = words1[i-1], words1[i]
words2 = [self._word_to_id.keys()[self._word_to_id.values().index(data_x[index])] for index in range(len(puncts) - 1, len(data_x))]
all_words = words1 + [puncts[-1]] + words2
content = ' '.join(all_words)
min_step = len(puncts)
return prediction_source, prediction_result
def predict_from_test_file(self, filename, num_random_lines):
given a file from test file, pick some random lines and do prediction.
num_lines = sum(1 for line in open(filename))
with open(filename) as f:
lines = random.sample(f.readlines(), num_random_lines)
for line in lines:
line = line.strip().lower()
source, predicted = self.predict(line)
yield line, source, predicted
def close(self):
self._session.close()
predictor = Predictor(model_file_prefix)
sources = [
'i think it is a report which will for the most part be supported by my group',
'so what is the european union doing about it',
'we must work more rapidly towards achieving the targets stipulated ' +
'in the white paper for renewable energy sources as this would bring ' +
'about a massive reduction in greenhouse gases but in common with others ' +
' we too are having to endure the greenhouse effect furthermore we should ' +
'utilise an extraordinary budget line since this is an extraordinarily catastrophic situation',
]
for s in sources:
source, predicted = predictor.predict(s)
print('\n---SOURCE----\n' + source)
print('---PREDICTED----\n' + predicted)
predictor.close()
predictor = Predictor(model_file_prefix)
sources = [
'how are you',
'where do you see yourself in five years',
'last december the european commission proposed updating the existing customs union with ' +
'turkey and extending bilateral trade relations once negotiations have been completed ' +
'the agreement would still have to be approved by the Parliament before it could enter into force',
]
for s in sources:
source, predicted = predictor.predict(s)
print('\n---SOURCE----\n' + source)
print('---PREDICTED----\n' + predicted)
predictor.close()
predictor = Predictor(model_file_prefix)
for t, s, p in predictor.predict_from_test_file('/content/datalab/punctuation/datapreped/test.txt', 3):
print('\n---SOURCE----\n' + s)
print('---PREDICTED----\n' + p)
print('---TRUTH----\n' + t)
predictor.close()
TensorBoard.stop(tb)
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Validate lab package version installation
Step2: Note
Step3: Note
Step4: The config.py module configures the default values for the environment specific settings and the default values for the pipeline runtime parameters.
Step5: CUSTOM_SERVICE_ACCOUNT - In the GCP Console, click on the Navigation Menu. Navigate to IAM & Admin, then to Service Accounts, and use the service account starting with the prefix 'tfx-tuner-caip-service-account'. This enables CloudTuner and the Google Cloud AI Platform extensions Tuner component to work together and allows for distributed and parallel tuning backed by AI Platform Vizier's hyperparameter search algorithm. Please see the lab setup README for setup instructions.
Step6: Set the compile time settings to first create a pipeline version without hyperparameter tuning
Step7: Compile your pipeline code
Step8: Note
Step9: Hint
Step10: To view the status of existing pipeline runs
Step11: To retrieve the status of a given run
Step12: Important
Step13: Compile your pipeline code
Step14: Deploy your pipeline container to AI Platform Pipelines with the TFX CLI
| <ASSISTANT_TASK:>
Python Code:
import yaml
# Set `PATH` to include the directory containing TFX CLI and skaffold.
PATH=%env PATH
%env PATH=/home/jupyter/.local/bin:{PATH}
!python -c "import tfx; print('TFX version: {}'.format(tfx.__version__))"
!python -c "import kfp; print('KFP version: {}'.format(kfp.__version__))"
%pip install --upgrade --user tfx==0.25.0
%pip install --upgrade --user kfp==1.0.4
%cd pipeline
!ls -la
# Use the following command to identify the GCS bucket for metadata and pipeline storage.
!gsutil ls
#TODO: Set your environment resource settings here for GCP_REGION, ARTIFACT_STORE_URI, ENDPOINT, and CUSTOM_SERVICE_ACCOUNT.
GCP_REGION = ''
ARTIFACT_STORE_URI = ''
ENDPOINT = ''
CUSTOM_SERVICE_ACCOUNT = ''
PROJECT_ID = !(gcloud config get-value core/project)
PROJECT_ID = PROJECT_ID[0]
# Set your resource settings as environment variables. These override the default values in pipeline/config.py.
%env GCP_REGION={GCP_REGION}
%env ARTIFACT_STORE_URI={ARTIFACT_STORE_URI}
%env CUSTOM_SERVICE_ACCOUNT={CUSTOM_SERVICE_ACCOUNT}
%env PROJECT_ID={PROJECT_ID}
PIPELINE_NAME = 'tfx_covertype_continuous_training'
MODEL_NAME = 'tfx_covertype_classifier'
DATA_ROOT_URI = 'gs://workshop-datasets/covertype/small'
CUSTOM_TFX_IMAGE = 'gcr.io/{}/{}'.format(PROJECT_ID, PIPELINE_NAME)
RUNTIME_VERSION = '2.3'
PYTHON_VERSION = '3.7'
USE_KFP_SA=False
ENABLE_TUNING=False
%env PIPELINE_NAME={PIPELINE_NAME}
%env MODEL_NAME={MODEL_NAME}
%env DATA_ROOT_URI={DATA_ROOT_URI}
%env KUBEFLOW_TFX_IMAGE={CUSTOM_TFX_IMAGE}
%env RUNTIME_VERSION={RUNTIME_VERSION}
%env PYTHON_VERSION={PYTHON_VERSION}
%env USE_KFP_SA={USE_KFP_SA}
%env ENABLE_TUNING={ENABLE_TUNING}
!tfx pipeline compile --engine kubeflow --pipeline_path runner.py
# TODO: Your code here to use the TFX CLI to deploy your pipeline image to AI Platform Pipelines.
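# One possible solution (a hedged sketch; flag names follow the TFX 0.25 CLI conventions used elsewhere in this lab):
!tfx pipeline create --engine kubeflow --pipeline_path runner.py --endpoint {ENDPOINT} --build_target_image {CUSTOM_TFX_IMAGE}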
# TODO: your code here to trigger a pipeline run with the TFX CLI
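# One possible solution (a hedged sketch):
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint {ENDPOINT}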
!tfx run list --pipeline_name {PIPELINE_NAME} --endpoint {ENDPOINT}
RUN_ID='[YOUR RUN ID]'
!tfx run status --pipeline_name {PIPELINE_NAME} --run_id {RUN_ID} --endpoint {ENDPOINT}
ENABLE_TUNING=True
%env ENABLE_TUNING={ENABLE_TUNING}
!tfx pipeline compile --engine kubeflow --pipeline_path runner.py
#TODO: your code to update your pipeline
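# One possible solution (a hedged sketch): update the deployed pipeline with the recompiled,
# tuning-enabled version, then trigger a fresh run.
!tfx pipeline update --engine kubeflow --pipeline_path runner.py --endpoint {ENDPOINT}
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint {ENDPOINT}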
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Are there certain populations we're not getting reports from?
Step2: From the data, it seems that there's not much underrepresentation by gender. There are only around a thousand fewer males than females in a dataset of 28,000. Age seems to be a bigger issue: there appears to be a lack of representation of older people using the API. Older folks may be less likely to self-report, or if they wanted to self-report, they may not be tech-savvy enough to use a web interface. My assumption is that people over 70 probably experience product harm at a higher rate and are not reporting it.
Step3: This is actually perplexing, so I decided to investigate further by analyzing the complaints filed for the "Footwear" category. To do this, I created a Word2Vec model that uses a neural network for text analysis. This process maps a word and the linguistic context it appears in, which makes it possible to calculate similarity between words. The purpose of this is to find words that relate to each other. Rather than doing a simple cross tab of product categories, I can ingest the complaints and map out their relationships. For instance, using the complaints that resulted in bodily harm, I found that footwear was associated with pain and walking. It seems that there are injuries related to Sketchers sneakers specifically, since it was the only brand that showed up often enough to be included in the model's dictionary. In fact, there was a lawsuit regarding Sketchers and their toning shoes.
Step4: Although these complaints are labeled as having no injury, that does not necessarily mean we can't take precautions. I took the same approach as with the previous model: I subsetted the data to only complaints labeled "no injury" and ran a model to examine the words used. From the analysis, we see that the words "to", "was", and "it" were the top three words. At first glance these words may seem meaningless, but if we examine the words that are similar to them, we can start seeing a connection.
Step5: Who are the people who are actually reporting to us?
Step6: However, after doing this we still have around 13,000 people with an age of zero. Whether they did not fill in the age or the incident involves an infant is still unknown, but comparing the distribution of products affecting people with an age of 0 against the overall dataset, it appears that null values in the age range represent people who did not fill out an age when reporting.
Step7: Question 2.2
Step8: This shows that a surprising share of incidents where injuries actually occurred and medical attention was given involved footwear. To explore this, I created a Word2Vec model that maps out how certain words relate to each other. To train the model, I used the comments submitted through the API, which helps us identify words that are similar. For instance, if you type in foot, you will get left and right, as these words are most closely related to the word foot. However, after some digging around, I found that the word "walking" was associated with "painful". I have some reason to believe that there are orthopedic injuries associated with shoes: people have been experiencing pain while walking in Sketchers that were supposed to tone up their bodies, along with some instability or balance issues.
Step9: Question 2.3
| <ASSISTANT_TASK:>
Python Code:
import pickle
import operator
import numpy as np
import pandas as pd
import gensim.models
data = pickle.load(open('/home/datauser/cpsc/data/processed/cleaned_api_data', 'rb'))
data.head()
pd.crosstab(data['GenderDescription'], data['age_range'])
#removing minor harm incidents
no_injuries = ['Incident, No Injury', 'Unspecified', 'Level of care not known',
'No Incident, No Injury', 'No First Aid or Medical Attention Received']
damage = data.ix[~data['SeverityTypePublicName'].isin(no_injuries), :]
damage.ProductCategoryPublicName.value_counts()[0:9]
data.SeverityTypeDescription.value_counts()
model.most_similar('was')
data.GenderDescription.value_counts()
data['age'] = map(lambda x: x/12, data['VictimAgeInMonths'])
labels = ['under 10', '10-20', '20-30', '30-40', '40-50', '50-60',
'60-70','70-80', '80-90', '90-100', 'over 100']
data['age_range'] = pd.cut(data['age'], bins=np.arange(0,120,10), labels=labels)
data['age_range'][data['age'] > 100] = 'over 100'
counts = data['age_range'].value_counts()
counts.sort()
counts
#Top products affect by people with 0 age
data.ix[data['age_range'].isnull(), 'ProductCategoryPublicName'].value_counts()[0:9]
#top products that affect people overall
data.ProductCategoryPublicName.value_counts()[0:9]
#overall products listed
data.ProductCategoryPublicName.value_counts()[0:9]
#removing minor harm incidents
no_injuries = ['Incident, No Injury', 'Unspecified', 'Level of care not known',
'No Incident, No Injury', 'No First Aid or Medical Attention Received']
damage = data.ix[~data['SeverityTypePublicName'].isin(no_injuries), :]
damage.ProductCategoryPublicName.value_counts()[0:9]
model = gensim.models.Word2Vec.load('/home/datauser/cpsc/models/footwear')
model.most_similar('walking')
model.most_similar('injury')
model.most_similar('instability')
model = gensim.models.Word2Vec.load('/home/datauser/cpsc/models/severity')
items_dict = {}
for word, vocab_obj in model.vocab.items():
items_dict[word] = vocab_obj.count
sorted_dict = sorted(items_dict.items(), key=operator.itemgetter(1))
sorted_dict.reverse()
sorted_dict[0:5]
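# Follow-up on the narrative above (added; 'was' is inspected earlier, and this assumes 'it'
# also made it into the severity model's vocabulary, as the word counts suggest).
model.most_similar('it')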
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: <center>Find Simulations and Load data
Step2: <center>Recompose the Waveforms
Step3: <center>Plot the amplitudes to verify correct scaling between GT and SXS waveforms
| <ASSISTANT_TASK:>
Python Code:
# Setup ipython environment
%load_ext autoreload
%autoreload 2
%matplotlib inline
# Setup plotting backend
import matplotlib as mpl
mpl.rcParams['lines.linewidth'] = 0.8
mpl.rcParams['font.family'] = 'serif'
mpl.rcParams['font.size'] = 12
mpl.rcParams['axes.labelsize'] = 20
from matplotlib.pyplot import *
# Import useful things
from nrutils import scsearch,gwylm
from numpy import array
# Search for a simulation. Hopefully the results will be from different codes. NOTE that this could be done more manually so that we don't "hope" but know.
A = scsearch( q=[1,4], nonspinning=True, verbose=True )
# Select which of the search results we wish to keep
U,V = A[77],A[131]
# Load the modes
u = gwylm(U,lmax=2,verbose=True)
v = gwylm(V,lmax=2,verbose=True)
# Plot the waveforms
u.plot(); v.plot()
#
theta,phi = 0,0
a,b = u.recompose(theta,phi,kind='strain'),v.recompose(theta,phi,kind='strain')
figure( figsize=2*array([5,3]) )
plot( a.t - a.intrp_t_amp_max, a.amp )
plot( b.t - b.intrp_t_amp_max, b.amp )
gca().set_yscale("log", nonposy='clip')
ylim([1e-5,1e-1])
xlim([-400,100])
title('the amplitudes should be approx. the same')
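# Added for readability; the generic labels are an assumption, since the two search results
# are only hopefully from different codes.
legend(['simulation U', 'simulation V'])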
a.plot();b.plot();
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Language Translation
Step3: Explore the Data
Step6: Implement Preprocessing Function
Step8: Preprocess all the data and save it
Step10: Check Point
Step12: Check the Version of TensorFlow and Access to GPU
Step15: Build the Neural Network
Step18: Process Decoding Input
Step21: Encoding
Step24: Decoding - Training
Step27: Decoding - Inference
Step30: Build the Decoding Layer
Step33: Build the Neural Network
Step34: Neural Network Training
Step36: Build the Graph
Step39: Train
Step41: Save Parameters
Step43: Checkpoint
Step46: Sentence to Sequence
Step48: Translate
| <ASSISTANT_TASK:>
Python Code:
DON'T MODIFY ANYTHING IN THIS CELL
import helper
import problem_unittests as tests
source_path = 'data/small_vocab_en'
target_path = 'data/small_vocab_fr'
source_text = helper.load_data(source_path)
target_text = helper.load_data(target_path)
view_sentence_range = (0, 10)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))
sentences = source_text.split('\n')
word_counts = [len(sentence.split()) for sentence in sentences]
print('Number of sentences: {}'.format(len(sentences)))
print('Average number of words in a sentence: {}'.format(np.average(word_counts)))
print()
print('English sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(source_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
print()
print('French sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(target_text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):
Convert source and target text to proper word ids
:param source_text: String that contains all the source text.
:param target_text: String that contains all the target text.
:param source_vocab_to_int: Dictionary to go from the source words to an id
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: A tuple of lists (source_id_text, target_id_text)
# TODO: Implement Function
source_id_text = [[source_vocab_to_int[word] for word in sentence.split()] for sentence in source_text.split('\n')]
target_id_text = [[target_vocab_to_int[word] for word in sentence.split() + ['<EOS>']] for sentence in target_text.split('\n')]
return source_id_text, target_id_text
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_text_to_ids(text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
helper.preprocess_and_save_data(source_path, target_path, text_to_ids)
DON'T MODIFY ANYTHING IN THIS CELL
import numpy as np
import helper
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
DON'T MODIFY ANYTHING IN THIS CELL
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
def model_inputs():
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate, keep probability)
# TODO: Implement Function
input_ = tf.placeholder (dtype=tf.int32, shape=[None, None], name='input')
targets = tf.placeholder (dtype=tf.int32, shape=[None, None], name='targets')
lr = tf.placeholder (dtype=tf.float32, name='learning_rate')
keep_prob = tf.placeholder (dtype=tf.float32, name='keep_prob')
return (input_, targets, lr, keep_prob)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_model_inputs(model_inputs)
def process_decoding_input(target_data, target_vocab_to_int, batch_size):
Preprocess target data for decoding
:param target_data: Target Placeholder
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param batch_size: Batch Size
:return: Preprocessed target data
# TODO: Implement Function
ending = tf.strided_slice (target_data, [0, 0], [batch_size, -1], [1, 1])
decoding_input = tf.concat ([tf.fill ([batch_size, 1], target_vocab_to_int['<GO>']), ending], 1)
return decoding_input
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_process_decoding_input(process_decoding_input)
def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):
Create encoding layer
:param rnn_inputs: Inputs for the RNN
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param keep_prob: Dropout keep probability
:return: RNN state
# TODO: Implement Function
cell = tf.contrib.rnn.BasicLSTMCell (rnn_size)
cell = tf.contrib.rnn.DropoutWrapper (cell, output_keep_prob = keep_prob)
encoding_cell = tf.contrib.rnn.MultiRNNCell ([cell] * num_layers)
_, state = tf.nn.dynamic_rnn (encoding_cell, rnn_inputs, dtype=tf.float32)
return state
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_encoding_layer(encoding_layer)
def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,
output_fn, keep_prob):
Create a decoding layer for training
:param encoder_state: Encoder State
:param dec_cell: Decoder RNN Cell
:param dec_embed_input: Decoder embedded input
:param sequence_length: Sequence Length
:param decoding_scope: TenorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Train Logits
# TODO: Implement Function
decoder_fn_train = tf.contrib.seq2seq.simple_decoder_fn_train (encoder_state)
dec_cell = tf.contrib.rnn.DropoutWrapper (dec_cell, keep_prob)
prediction, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder (dec_cell, decoder_fn_train, dec_embed_input, sequence_length, scope=decoding_scope)
logits = output_fn (prediction)
# Alternative implementation:
# decoder_fn_train = tf.contrib.seq2seq.simple_decoder_fn_train (encoder_state)
# prediction, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder (dec_cell, decoder_fn_train, dec_embed_input, sequence_length, scope=decoding_scope)
# logits = output_fn (prediction)
# logits = tf.nn.dropout (logits, keep_prob=keep_prob)
return logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_train(decoding_layer_train)
def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,
maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):
Create a decoding layer for inference
:param encoder_state: Encoder state
:param dec_cell: Decoder RNN Cell
:param dec_embeddings: Decoder embeddings
:param start_of_sequence_id: GO ID
:param end_of_sequence_id: EOS Id
:param maximum_length: The maximum allowed time steps to decode
:param vocab_size: Size of vocabulary
:param decoding_scope: TensorFlow Variable Scope for decoding
:param output_fn: Function to apply the output layer
:param keep_prob: Dropout keep probability
:return: Inference Logits
# TODO: Implement Function
decoder_fn_infer = tf.contrib.seq2seq.simple_decoder_fn_inference (output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size)
inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder (dec_cell, decoder_fn_infer, scope=decoding_scope)
return inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer_infer(decoding_layer_infer)
def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,
num_layers, target_vocab_to_int, keep_prob):
Create decoding layer
:param dec_embed_input: Decoder embedded input
:param dec_embeddings: Decoder embeddings
:param encoder_state: The encoded state
:param vocab_size: Size of vocabulary
:param sequence_length: Sequence Length
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:param keep_prob: Dropout keep probability
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
cell = tf.contrib.rnn.BasicLSTMCell (rnn_size)
cell = tf.contrib.rnn.DropoutWrapper (cell, output_keep_prob=keep_prob) # no need for dropout here?
decoding_cell = tf.contrib.rnn.MultiRNNCell ([cell] * num_layers)
with tf.variable_scope ('decoding') as decoding_scope:
output_fn = lambda x: tf.contrib.layers.fully_connected (x, vocab_size, scope=decoding_scope)
training_logits = decoding_layer_train (encoder_state, decoding_cell, dec_embed_input,
sequence_length, decoding_scope, output_fn, keep_prob)
with tf.variable_scope ('decoding', reuse=True) as decoding_scope:
inference_logits = decoding_layer_infer (encoder_state, decoding_cell, dec_embeddings,
target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], sequence_length, vocab_size,
decoding_scope, output_fn, keep_prob)
return (training_logits, inference_logits)
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_decoding_layer(decoding_layer)
def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, target_vocab_size,
enc_embedding_size, dec_embedding_size, rnn_size, num_layers, target_vocab_to_int):
Build the Sequence-to-Sequence part of the neural network
:param input_data: Input placeholder
:param target_data: Target placeholder
:param keep_prob: Dropout keep probability placeholder
:param batch_size: Batch Size
:param sequence_length: Sequence Length
:param source_vocab_size: Source vocabulary size
:param target_vocab_size: Target vocabulary size
    :param enc_embedding_size: Encoder embedding size
    :param dec_embedding_size: Decoder embedding size
:param rnn_size: RNN Size
:param num_layers: Number of layers
:param target_vocab_to_int: Dictionary to go from the target words to an id
:return: Tuple of (Training Logits, Inference Logits)
# TODO: Implement Function
enc_embed_input = tf.contrib.layers.embed_sequence (input_data, embed_dim=enc_embedding_size, vocab_size=source_vocab_size)
encoder_state = encoding_layer (enc_embed_input, rnn_size, num_layers, keep_prob)
decoding_input = process_decoding_input (target_data, target_vocab_to_int, batch_size)
# Decoder Embedding
dec_embeddings = tf.Variable (tf.random_uniform ([target_vocab_size, dec_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup (dec_embeddings, decoding_input)
training_logits, inference_logits = decoding_layer (dec_embed_input, dec_embeddings, encoder_state, target_vocab_size,
sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob)
return training_logits, inference_logits
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_seq2seq_model(seq2seq_model)
# Number of Epochs
epochs = 10
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 300
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 128
decoding_embedding_size = 128
# Learning Rate
learning_rate = 0.001
# Dropout Keep Probability
keep_probability = 0.6
DON'T MODIFY ANYTHING IN THIS CELL
save_path = 'checkpoints/dev'
(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()
max_source_sentence_length = max([len(sentence) for sentence in source_int_text])
train_graph = tf.Graph()
with train_graph.as_default():
input_data, targets, lr, keep_prob = model_inputs()
sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')
input_shape = tf.shape(input_data)
train_logits, inference_logits = seq2seq_model(
tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),
encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)
tf.identity(inference_logits, 'logits')
with tf.name_scope("optimization"):
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
train_logits,
targets,
tf.ones([input_shape[0], sequence_length]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
DON'T MODIFY ANYTHING IN THIS CELL
import time
def get_accuracy(target, logits):
Calculate accuracy
max_seq = max(target.shape[1], logits.shape[1])
if max_seq - target.shape[1]:
target = np.pad(
target,
[(0,0),(0,max_seq - target.shape[1])],
'constant')
if max_seq - logits.shape[1]:
logits = np.pad(
logits,
[(0,0),(0,max_seq - logits.shape[1]), (0,0)],
'constant')
return np.mean(np.equal(target, np.argmax(logits, 2)))
train_source = source_int_text[batch_size:]
train_target = target_int_text[batch_size:]
valid_source = helper.pad_sentence_batch(source_int_text[:batch_size])
valid_target = helper.pad_sentence_batch(target_int_text[:batch_size])
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
for batch_i, (source_batch, target_batch) in enumerate(
helper.batch_data(train_source, train_target, batch_size)):
start_time = time.time()
_, loss = sess.run(
[train_op, cost],
{input_data: source_batch,
targets: target_batch,
lr: learning_rate,
sequence_length: target_batch.shape[1],
keep_prob: keep_probability})
batch_train_logits = sess.run(
inference_logits,
{input_data: source_batch, keep_prob: 1.0})
batch_valid_logits = sess.run(
inference_logits,
{input_data: valid_source, keep_prob: 1.0})
train_acc = get_accuracy(target_batch, batch_train_logits)
valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)
end_time = time.time()
print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
.format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_path)
print('Model Trained and Saved')
DON'T MODIFY ANYTHING IN THIS CELL
# Save parameters for checkpoint
helper.save_params(save_path)
DON'T MODIFY ANYTHING IN THIS CELL
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()
load_path = helper.load_params()
def sentence_to_seq(sentence, vocab_to_int):
Convert a sentence to a sequence of ids
:param sentence: String
:param vocab_to_int: Dictionary to go from the words to an id
:return: List of word ids
# TODO: Implement Function
print ('{}'.format (sentence))
sequence = [vocab_to_int[word] if word in vocab_to_int else vocab_to_int['<UNK>'] for word in sentence.lower().split()]
print ('{}'.format (sequence))
return sequence
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
tests.test_sentence_to_seq(sentence_to_seq)
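# Quick illustration (added; not part of the project checks): words missing from the small
# vocabulary are mapped to the <UNK> id by sentence_to_seq.
sentence_to_seq('he saw a quixotic zebra .', source_vocab_to_int)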
#translate_sentence = 'he saw a old yellow truck .' # il a vu un vieux camion jaune
#translate_sentence = 'she likes peaches , limes , and mangoes .'
translate_sentence = 'night .'
#translate_sentence = 'that cat was my most loved animal .'
#translate_sentence = 'he never saw such a wonderful car .'
#translate_sentence = 'he never saw that wonderful car .'
DON'T MODIFY ANYTHING IN THIS CELL
translate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_path + '.meta')
loader.restore(sess, load_path)
input_data = loaded_graph.get_tensor_by_name('input:0')
logits = loaded_graph.get_tensor_by_name('logits:0')
keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')
translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]
print('Input')
print(' Word Ids: {}'.format([i for i in translate_sentence]))
print(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))
print('\nPrediction')
print(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))
print(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
print (' '.join ([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: 3. When to use Python?
Step2: 5. Python as a calculator
Step3: 6. Variables & Types
Step4: 7. Variable Assignment
Step5: 8. Calculations with variables
Step6: 9. Other variable types
Step7: 10. Guess the type
Step8: 12. Type conversion
| <ASSISTANT_TASK:>
Python Code:
# Example, do not modify!
print(5 / 8)
# Put code below here
print(7 + 10)
# Just testing division
print(5 / 8)
# Addition works too
print(7 + 10)
# Addition and subtraction
print(5 + 5)
print(5 - 5)
# Multiplication and division
print(3 * 5)
print(10 / 2)
# Exponentiation
print(4 ** 2)
# Modulo
print(18 % 7)
# How much is your $100 worth after 7 years?
print(100 * (1.1 ** 7))
height = 1.79
weight = 68.7
height
68.7 / 1.79 ** 2
weight / height ** 2
bmi = weight / height ** 2
bmi
height = 1.79
weight = 74.2
bmi = weight / height ** 2
print(bmi)
type(bmi)
day_of_week = 5
type(day_of_week)
x = "body mass index"
y = 'this works too'
type(x)
z = True
type(z)
2 + 3
'ab' +'cd'
# Create a variable savings
savings = 100
# Print out savings
print(savings)
# Create a variable savings
savings = 100
# Create a variable factor
factor = 1.10
# Calculate result
result = savings * factor ** 7
# Print out result
print(result)
# Create a variable desc
desc = 'compound interest'
# Create a variable profitable
profitable = True
# Several variables to experiment with
savings = 100
factor = 1.1
desc = "compound interest"
# Assign product of factor and savings to year1
year1 = savings * factor
# Print the type of year1
print(type(year1))
# Assign sum of desc and desc to doubledesc
doubledesc = desc + desc
# Print out doubledesc
print(doubledesc)
# Definition of savings and result
savings = 100
result = 100 * 1.10 ** 7
# Fix the printout
print("I started with $" + str(savings) + " and now have $" + str(result) + ". Awesome!")
# Definition of pi_string
pi_string = "3.1415926"
# Convert pi_string into float: pi_float
pi_float = float(pi_string)
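# A small check (added for illustration): confirm the conversion produced a float
print(pi_float)
print(type(pi_float))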
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Numpy has a really handy built-in function for padding images called pad. The required inputs are the array to be padded, the size of the pad region (in each direction), and the nature of the padding (lots of interpolation options are available, but we just need a constant value). The default for a constant value is to propagate the values at the edges outward, but in this case we want to draw a clear distinction between where we have data and where we don't, so we can set the padded region values to some specific constant value. Often astronomers pad arrays with either zeros or NaNs, but NaNs can be problematic when executing the next step (shifting the images) and zero is too close to the actual data values in our sample image below to stand out, so I've chosen an arbitrary value below based solely on its ability to stand out relative to the rest of the colorbar, so that I can tell where the padded region is.
Step2: The next thing that we'd want to do with padded images is to shift the image to some common center so that it can be combined with other images.
Step3: One last thing we may wish to do before combining is to replace our arbitrarily set constant pad values with NaNs so that when we combine images, these won't be treated as equal to the real values in the other parts of the array. This replacement is pretty easy in python using the following handy syntax. Because the shifting of the array is done via interpolation, the padded region values will no longer all be exactly the same. For that reason, you will often need to tweak the value below which you choose to replace pixels with NaNs to coincide with the padded region. To wrap your brain around this a bit, try changing the value after the <= to 0.001 and/or 0.0005 below. Before doing so, you'll need to reexecute the cell above that contains newarr2 = interp.shift(newarr,(2.5,4.5), cval=-0.001) in order to refresh the array with a clean version.
Step4: Exercise 4
| <ASSISTANT_TASK:>
Python Code:
# The standard fare:
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
%matplotlib inline
# Recall our use of this module to work with FITS files in Lab 4:
from astropy.io import fits
#A dummy image - just a Gaussian PSF with a standard deviation of 5 pixels
import astropy.convolution as conv
gauss = conv.Gaussian2DKernel(stddev=5)
gauss.shape
newarr = np.pad(gauss,5,'constant', constant_values=-0.001)
newarr.shape
plt.imshow(newarr)
plt.colorbar()
import scipy.ndimage.interpolation as interp
newarr2 = interp.shift(newarr,(2.5,4.5), cval=-0.001)
plt.imshow(newarr2)
plt.colorbar()
newarr2[newarr2 <= -0.0001] = np.nan
plt.imshow(newarr2)
plt.colorbar()
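# A minimal sketch of why the NaNs matter when combining images; this cell is my own
# addition (the second shifted image "newarr3" is hypothetical, not from the original
# notebook). np.nanmean ignores the padded regions, whereas a plain np.mean would drag
# the combined values toward the pad value wherever only one image has real data.
newarr3 = interp.shift(newarr, (-2.5, -4.5), cval=-0.001)
newarr3[newarr3 <= -0.0001] = np.nan
combined = np.nanmean([newarr2, newarr3], axis=0)
plt.imshow(combined)
plt.colorbar()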
#your code goes here
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Target distribution
Step2: Heat bath
Step3: SA algorithm
Step4: Run experiments
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib
matplotlib.use("nbagg")
import matplotlib.pyplot as plt
from IPython import display
from mpl_toolkits.mplot3d import Axes3D
!mkdir figures
!mkdir scripts
%cd /content/scripts
!wget -q https://raw.githubusercontent.com/probml/pyprobml/master/scripts/pyprobml_utils.py
import pyprobml_utils as pml
# Generate a pdf
# the following steps generate a pdf; this is equivalent to the function "peaks(n)" in matlab
n = 100 # number of dimension
pdf = np.zeros([n, n])
sigma = np.zeros([n, n])
s = np.zeros([n, n])
x = -3.0
for i in range(0, n):
y = -3.0
for j in range(0, n):
pdf[j, i] = (
3.0 * (1 - x) ** 2 * np.exp(-(x**2) - (y + 1) ** 2)
- 10.0 * (x / 5 - x**3 - y**5) * np.exp(-(x**2) - y**2)
- 1.0 / 3 * np.exp(-((x + 1) ** 2) - y**2)
)
if pdf[j, i] < 0:
pdf[j, i] = pdf[j, i] * (
-1
) # in contrast to the peaks function: all negative values are multiplied by (-1)
y = y + 6.0 / (n - 1)
x = x + 6.0 / (n - 1)
pdf = pdf / pdf.max()
energy = -np.log(pdf)
# Plot the 3D plot of pdf
# --------------------------
X = np.arange(0, 100 + 100.0 / (n - 1), 100.0 / (n - 1))
Y = np.arange(0, 100 + 100.0 / (n - 1), 100.0 / (n - 1))
fig0 = plt.figure()
ax = fig0.gca(projection="3d")
X, Y = np.meshgrid(X, Y)
surf = ax.plot_surface(Y, X, pdf, rstride=2, cstride=2, cmap=plt.cm.coolwarm, linewidth=0.1)
# plt.gca().invert_xaxis()
plt.tight_layout()
pml.savefig("sim_anneal_2d_peaks.pdf")
plt.show()
# Plot the 3D plot of Energy function
# --------------------------
X = np.arange(0, 100 + 100.0 / (n - 1), 100.0 / (n - 1))
Y = np.arange(0, 100 + 100.0 / (n - 1), 100.0 / (n - 1))
fig0 = plt.figure()
ax = fig0.gca(projection="3d")
X, Y = np.meshgrid(X, Y)
surf = ax.plot_surface(Y, X, energy / energy.max(), rstride=2, cstride=2, cmap=plt.cm.coolwarm, linewidth=0.1)
# plt.gca().invert_xaxis()
plt.tight_layout()
pml.savefig("sim_anneal_2d_energy.pdf")
plt.show()
Tplots = 10 # initial temperature for the plots
stepT = 4 # how many steps should the Temperature be *0.2 for
for i in range(0, stepT):
sigma = np.exp(-(energy) / Tplots)
sigma = sigma / sigma.max()
ttl = "T={:0.2f}".format(Tplots)
Tplots = Tplots * 0.2
X = np.arange(0, 100 + 100.0 / (n - 1), 100.0 / (n - 1))
Y = np.arange(0, 100 + 100.0 / (n - 1), 100.0 / (n - 1))
fig = plt.figure()
ax = fig.gca(projection="3d")
X, Y = np.meshgrid(X, Y)
ax.set_title(ttl)
ax.plot_surface(Y, X, sigma, rstride=2, cstride=2, cmap=plt.cm.coolwarm, linewidth=0, antialiased=False)
# plt.gca().invert_xaxis()
plt.tight_layout()
pml.savefig(f"sim_anneal_2d_cooled{i}.pdf")
plt.show()
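# A quick numerical check (my own addition, not in the original script): as T drops,
# the heat-bath distribution exp(-E/T) concentrates its probability mass ever more
# tightly around the global mode of the target.
for T_check in (10.0, 2.0, 0.4, 0.08):
    sigma_T = np.exp(-energy / T_check)
    sigma_T = sigma_T / sigma_T.sum()
    print("T=%5.2f  largest single-cell probability: %.4f" % (T_check, sigma_T.max()))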
def sim_anneal(proposal="gaussian", sigma=10):
np.random.seed(42)
xcur = np.array([np.floor(np.random.uniform(0, 100)), np.floor(np.random.uniform(0, 100))])
xcur = xcur.astype(int)
ns = 300 # number of samples to keep
T = 1 # start temperature
alpha = 0.99999 # cooling schedule
alpha = 0.99 # cooling schedule
# list of visited points, temperatures, probabilities
x_hist = xcur # will be (N,2) array
prob_hist = []
temp_hist = []
nreject = 0
iis = 0 # number of accepted points
npp = 0 # num proposed points
while npp < ns:
npp = npp + 1
if proposal == "uniform":
xnew = np.array([np.floor(np.random.uniform(0, 100)), np.floor(np.random.uniform(0, 100))])
elif proposal == "gaussian":
xnew = xcur + np.random.normal(size=2) * sigma
xnew = np.maximum(xnew, 0)
xnew = np.minimum(xnew, 99)
else:
raise ValueError("Unknown proposal")
xnew = xnew.astype(int)
# compare energies
Ecur = energy[xcur[0], xcur[1]]
Enew = energy[xnew[0], xnew[1]]
deltaE = Enew - Ecur
# print([npp, xcur, xnew, Ecur, Enew, deltaE])
temp_hist.append(T)
T = alpha * T
P = np.exp(-1.0 * deltaE / T)
P = min(1, P)
test = np.random.uniform(0, 1)
if test <= P:
xcur = xnew
iis = iis + 1
else:
nreject += 1
x_hist = np.vstack((x_hist, xcur))
prob_hist.append(pdf[xcur[0], xcur[1]])
npp = npp + 1
print(f"nproposed {npp}, naccepted {iis}, nreject {nreject}")
return x_hist, prob_hist, temp_hist
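# A small illustration of the Metropolis acceptance rule used inside sim_anneal above;
# this cell is my own addition. An uphill move of size deltaE is accepted with
# probability min(1, exp(-deltaE/T)), so it becomes increasingly unlikely as the
# geometric cooling schedule T <- 0.99*T lowers the temperature.
deltaE_demo = 1.0  # hypothetical energy increase for an uphill proposal
for step in (0, 50, 100, 200, 300):
    T_demo = 1.0 * 0.99**step
    p_accept = min(1.0, np.exp(-deltaE_demo / T_demo))
    print(f"step {step:3d}: T={T_demo:.3f}, P(accept uphill move)={p_accept:.4f}")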
proposals = {"gaussian", "uniform"}
x_hist = {}
prob_hist = {}
temp_hist = {}
for proposal in proposals:
print(proposal)
x_hist[proposal], prob_hist[proposal], temp_hist[proposal] = sim_anneal(proposal=proposal)
for proposal in proposals:
plt.figure()
plt.plot(temp_hist[proposal])
plt.title("temperature vs time")
plt.tight_layout()
pml.savefig(f"sim_anneal_2d_temp_vs_time_{proposal}.pdf")
plt.show()
for proposal in proposals:
plt.figure()
plt.plot(prob_hist[proposal])
plt.xlabel("iteration")
plt.ylabel("probability")
plt.tight_layout()
pml.savefig(f"sim_anneal_2d_prob_vs_time_{proposal}.pdf")
plt.show()
# Plot points visited
for proposal in proposals:
probs = prob_hist[proposal]
xa = x_hist[proposal]
f1, ax = plt.subplots()
ax.imshow(pdf.transpose(), aspect="auto", extent=[0, 100, 100, 0], interpolation="none")
# Maximum value achieved ploted with white cirlce
# maxi = np.argmax(probs) # index of best model
# ax.plot(xa[maxi,0],xa[maxi,1],'wo', markersize=10)
# Starting point with white cirlce
ax.plot(xa[0, 0], xa[0, 1], "wo", markersize=10)
# Global maximm with red cirlce
ind = np.unravel_index(np.argmax(pdf, axis=None), pdf.shape)
ax.plot(ind[0], ind[1], "ro", markersize=10)
ax.plot(xa[:, 0], xa[:, 1], "w+") # Plot the steps with white +
ax.set_ylabel("y")
ax.set_xlabel("x")
plt.tight_layout()
pml.savefig(f"sim_anneal_2d_samples_{proposal}.pdf")
plt.show()
<END_TASK> |
<SYSTEM_TASK:>
Given the following text description, write Python code to implement the functionality described below step by step
<END_TASK>
<USER_TASK:>
Description:
Step1: Import section specific modules
Step3: 1.9 A brief introduction to interferometry and its history
Step4: This function draws a double-slit setup, with a light source at position $p$ (in fact the function can render multiple sources, but we'll only use it for one source for the moment). The dotted blue line shows the optical axis ($p=0$). The sine wave (schematically) shows the wavelength. (Note that the units here are arbitrary, since it is only geometry relative to wavelength that determines the results). The black lines show the path of the light waves through the slits and onto the screen at the right. The strip on the right schematically renders the resulting interference pattern, and the red curve shows a cross-section through the pattern.
Step5: 1.9.4 From the double-slit box to an interferometer
Step6: Note that long baselines are very sensitive to changes in source position, while short baselines are less sensitive. As we'll learn in Chapter 4, the spatial resolution (i.e. the smallest separation at which we can distinguish two sources) of an interferometer is given by $\lambda/B$, while the spatial resolution of a conventional telescope is given by $\lambda/D$, where $D$ is the dish (or mirror) aperture. This is a fortunate fact, as in practice it is much cheaper to build long baselines than large apertures!
Step7: On the other hand, using a shorter baseline resolves the ambiguity
Step8: Modern interferometers exploit this by using an array of elements, which provides a whole range of possible baselines.
Step9: So we can already use our double-slit box to infer something about the structure of the light source. Note that with two sources of equal intensity, it is possible to have the interference pattern almost cancel out on any one baseline -- but never on all baselines at once
Step10: Now, let us simulate an extended source, by giving the simulator an array of closely spaced point-like sources. Try playing with the extent slider. What's happening here is that the many interference patterns generated by each little part of the extended source tend to "wash out" each other, resulting in a net loss of amplitude in the pattern. Note also how each particular baseline length is sensitive to a particular range of source sizes.
Step11: We can therefore measure source size by measuring the reduction in the amplitude of the interference pattern
Step12: In fact historically, this was the first application of interferometry in astronomy. In a famous experiment in 1920, a Michelson interferometer installed at Mount Wilson Observatory was used to measure the diameter of the red giant star Betelgeuse.
Step13: This simple fact has led to many other applications for interferometers, from geodetic VLBI (where continental drift is tracked by determining extremely accurate antenna positions via radio interferometry of known radio sources), to the recent gravitational wave detection by LIGO (where the light source is a laser, and the interference pattern is used to measure minuscule distortions in space-time -- and thus in the geometry of the interferometer -- caused by gravitational waves).
Step14: However, as soon as we take a measurement on another baseline, the difference becomes apparent
Step16: With a larger number of baselines, we can gather enough information to reconstruct an image of the sky. This is because each baseline essentially measures one Fourier component of the sky brightness distribution (Chapter 4 will explain this in more detail); and once we know the Fourier components, we can compute a Fourier transform in order to recover the sky image. The advent of sufficiently powerful computers in the late 1960s made this technique practical, and turned radio interferometers from exotic contraptions into generic imaging instruments. With a few notable exceptions, modern radio interferometry is aperture synthesis.
Step17: We have modified the setup as follows. First, the source is now infinitely distant, so we define the source position in terms of the angle of arrival of the incoming wavefront (with 0 meaning on-axis, i.e. along the vertical axis). We now define the baseline in terms of wavelengths. The phase difference of the wavefront arriving at the two arms of the interferometer is completely defined in terms of the angle of arrival. The two "rays" entering the outer arms of the interferometer indicate the angle of arrival.
Step18: And here's the same experiment for two sources
Step19: A.1 The Betelgeuse size measurement
| <ASSISTANT_TASK:>
Python Code:
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.display import HTML
HTML('../style/course.css') #apply general CSS
from IPython.display import display
from ipywidgets import interact
HTML('../style/code_toggle.html')
def double_slit (p0=[0],a0=[1],baseline=1,d1=5,d2=5,wavelength=.1,maxint=None):
    """Renders a toy dual-slit experiment.

    'p0' is a list or array of source positions (drawn along the vertical axis)
    'a0' is an array of source intensities
    'baseline' is the distance between the slits
    'd1' and 'd2' are distances between source and plate and plate and screen
    'wavelength' is wavelength
    'maxint' is the maximum intensity scale used to render the fringe pattern. If None, the pattern
    is auto-scaled. Maxint is useful if you want to render fringes from multiple invocations
    of double_slit() into the same intensity scale, i.e. for comparison.
    """
## setup figure and axes
plt.figure(figsize=(20, 5))
plt.axes(frameon=False)
plt.xlim(-d1-.1, d2+2) and plt.ylim(-1, 1)
plt.xticks([]) and plt.yticks([])
plt.axhline(0, ls=':')
baseline /= 2.
## draw representation of slits
plt.arrow(0, 1,0, baseline-1, lw=0, width=.1, head_width=.1, length_includes_head=True)
plt.arrow(0,-1,0, 1-baseline, lw=0, width=.1, head_width=.1, length_includes_head=True)
plt.arrow(0, 0,0, baseline, lw=0, width=.1, head_width=.1, length_includes_head=True)
plt.arrow(0, 0,0, -baseline, lw=0, width=.1, head_width=.1, length_includes_head=True)
## draw representation of lightpath from slits to centre of screen
plt.arrow(0, baseline,d2,-baseline, length_includes_head=True)
plt.arrow(0,-baseline,d2, baseline, length_includes_head=True)
## draw representation of sinewave from the central position
xw = np.arange(-d1, -d1+(d1+d2)/4, .01)
yw = np.sin(2*np.pi*xw/wavelength)*.1 + (p0[0]+p0[-1])/2
plt.plot(xw,yw,'b')
    ## 'xs' is a vector of x coordinates on the screen
## and we accumulate the interference pattern for each source into 'pattern'
xs = np.arange(-1, 1, .01)
pattern = 0
total_intensity = 0
## compute contribution to pattern from each source position p
for p,a in np.broadcast(p0,a0):
plt.plot(-d1, p, marker='o', ms=10, mfc='red', mew=0)
total_intensity += a
if p == p0[0] or p == p0[-1]:
plt.arrow(-d1, p, d1, baseline-p, length_includes_head=True)
plt.arrow(-d1, p, d1,-baseline-p, length_includes_head=True)
# compute the two pathlenghts
path1 = np.sqrt(d1**2 + (p-baseline)**2) + np.sqrt(d2**2 + (xs-baseline)**2)
path2 = np.sqrt(d1**2 + (p+baseline)**2) + np.sqrt(d2**2 + (xs+baseline)**2)
diff = path1 - path2
        # accumulate interference pattern from this source
pattern = pattern + a*np.cos(2*np.pi*diff/wavelength)
maxint = maxint or total_intensity
# add fake axis to interference pattern just to make it a "wide" image
pattern_image = pattern[:,np.newaxis] + np.zeros(10)[np.newaxis,:]
plt.imshow(pattern_image, extent=(d2,d2+1,-1,1), cmap=plt.gray(), vmin=-maxint, vmax=maxint)
# make a plot of the interference pattern
plt.plot(d2+1.5+pattern/(maxint*2), xs, 'r')
plt.show()
# show pattern for one source at 0
double_slit(p0=[0])
interact(lambda baseline,wavelength:double_slit(p0=[0],baseline=baseline,wavelength=wavelength),
baseline=(0.1,2,.01),wavelength=(.05,.2,.01)) and None
interact(lambda position,baseline,wavelength:double_slit(p0=[position],baseline=baseline,wavelength=wavelength),
position=(-1,1,.01),baseline=(0.1,2,.01),wavelength=(.05,.2,.01)) and None
double_slit([0],baseline=1.5,wavelength=0.1)
double_slit([0.69],baseline=1.5,wavelength=0.1)
double_slit([0],baseline=0.5,wavelength=0.1)
double_slit([0.69],baseline=0.5,wavelength=0.1)
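# A rough back-of-the-envelope check, my own addition (small-angle approximation only):
# moving the source from 0 to p shifts the fringe pattern by about p*baseline/(wavelength*d1)
# fringes. A shift of almost exactly 2 fringes (long baseline) looks nearly identical to no
# shift at all, which is the ambiguity seen above; the short baseline shifts by a non-integer
# number of fringes and so resolves it.
for b in (1.5, 0.5):
    n_fringes = 0.69 * b / (0.1 * 5)
    print("baseline %.1f: source at p=0.69 shifts the pattern by about %.2f fringes" % (b, n_fringes))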
interact(lambda position,intensity,baseline,wavelength:
double_slit(p0=[0,position],a0=[1,intensity],baseline=baseline,wavelength=wavelength),
position=(-1,1,.01),intensity=(.2,1,.01),baseline=(0.1,2,.01),wavelength=(.01,.2,.01)) and None
double_slit(p0=[0,0.25],baseline=1,wavelength=0.1)
double_slit(p0=[0,0.25],baseline=1.5,wavelength=0.1)
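# Another rough estimate, my own addition (small-angle approximation): for two point
# sources separated by dp, the two fringe patterns are offset in phase by
# phi = 2*pi*baseline*dp/(wavelength*d1), and the combined fringe amplitude scales as
# sqrt(a1^2 + a2^2 + 2*a1*a2*cos(phi))/(a1 + a2). For dp=0.25 this predicts almost
# complete cancellation at baseline 1, but not at baseline 1.5, as in the two figures above.
def two_source_visibility(dp, baseline, wavelength=0.1, d1=5.0, a1=1.0, a2=1.0):
    phi = 2 * np.pi * baseline * dp / (wavelength * d1)
    return np.sqrt(a1**2 + a2**2 + 2 * a1 * a2 * np.cos(phi)) / (a1 + a2)

for b in (1.0, 1.5):
    print("baseline %.1f: approximate fringe visibility %.2f" % (b, two_source_visibility(0.25, b)))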
interact(lambda extent,baseline,wavelength:
double_slit(p0=np.arange(-extent,extent+.01,.01),baseline=baseline,wavelength=wavelength),
extent=(0,1,.01),baseline=(0.1,2,.01),wavelength=(.01,.2,.01)) and None
double_slit(p0=[0],baseline=1,wavelength=0.1)
double_slit(p0=np.arange(-0.2,.21,.01),baseline=1,wavelength=0.1)
interact(lambda d1,d2,position,extent: double_slit(p0=np.arange(position-extent,position+extent+.01,.01),d1=d1,d2=d2),
d1=(1,5,.1),d2=(1,5,.1),
position=(-1,1,.01),extent=(0,1,.01)) and None
double_slit(p0=[0], a0=[0.4], maxint=2)
double_slit(p0=[0,0.25], a0=[1, 0.6], maxint=2)
double_slit(p0=np.arange(-0.2,.21,.01), a0=.05, maxint=2)
double_slit(p0=[0], a0=[0.4], baseline=0.5, maxint=2)
double_slit(p0=[0,0.25], a0=[1, 0.6], baseline=0.5, maxint=2)
double_slit(p0=np.arange(-0.2,.21,.01), a0=.05, baseline=0.5, maxint=2)
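# A similar sketch for the extended source above, my own addition (small-angle
# approximation): a uniform strip of half-width w suppresses the fringe amplitude by
# |sin(x)/x| with x = 2*pi*baseline*w/(wavelength*d1), which is why the w=0.2 source
# loses far more fringe contrast on the longer baseline.
def extended_source_visibility(w, baseline, wavelength=0.1, d1=5.0):
    x = 2 * np.pi * baseline * w / (wavelength * d1)
    return 1.0 if x == 0 else abs(np.sin(x) / x)

for b in (1.0, 0.5):
    print("baseline %.1f: predicted visibility for a w=0.2 source: %.2f" % (b, extended_source_visibility(0.2, b)))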
def michelson (p0=[0],a0=[1],baseline=50,maxbaseline=100,extent=0,d1=9,d2=1,d3=.2,wavelength=.1,fov=5,maxint=None):
    """Renders a toy Michelson interferometer with an infinitely distant (astronomical) source.

    'p0' is a list or array of source positions (as angles, in degrees).
    'a0' is an array of source intensities
    'extent' are source extents, in degrees
    'baseline' is the baseline, in lambdas
    'maxbaseline' is the max baseline to which the plot is scaled
    'd1' is the plotted distance between the "sky" and the interferometer arms
    'd2' is the plotted distance between arms and screen, in plot units
    'd3' is the plotted distance between inner mirrors, in plot units
    'fov' is the notionally rendered field of view radius (in degrees)
    'wavelength' is wavelength, used for scale
    'maxint' is the maximum intensity scale used to render the fringe pattern. If None, the pattern
    is auto-scaled. Maxint is useful if you want to render fringes from multiple invocations
    of michelson() into the same intensity scale, i.e. for comparison.
    """
## setup figure and axes
plt.figure(figsize=(20, 5))
plt.axes(frameon=False)
plt.xlim(-d1-.1, d2+2) and plt.ylim(-1, 1)
plt.xticks([])
# label Y axis with degrees
yt,ytlab = plt.yticks()
plt.yticks(yt,["-%g"%(float(y)*fov) for y in yt])
plt.ylabel("Angle of Arrival (degrees)")
plt.axhline(0, ls=':')
## draw representation of arms and light path
maxbaseline = max(maxbaseline,baseline)
bl2 = baseline/float(maxbaseline) # coordinate of half a baseline, in plot units
plt.plot([0,0],[-bl2,bl2], 'o', ms=10)
plt.plot([0,d2/2.,d2/2.,d2],[-bl2,-bl2,-d3/2.,0],'-k')
plt.plot([0,d2/2.,d2/2.,d2],[ bl2, bl2, d3/2.,0],'-k')
plt.text(0,0,'$b=%d\lambda$'%baseline, ha='right', va='bottom', size='xx-large')
## draw representation of sinewave from the central position
if isinstance(p0,(int,float)):
p0 = [p0]
xw = np.arange(-d1, -d1+(d1+d2)/4, .01)
yw = np.sin(2*np.pi*xw/wavelength)*.1 + (p0[0]+p0[-1])/(2.*fov)
plt.plot(xw,yw,'b')
    ## 'xs' is a vector of x coordinates on the screen
xs = np.arange(-1, 1, .01)
## xsdiff is corresponding pathlength difference
xsdiff = (np.sqrt(d2**2 + (xs-d3)**2) - np.sqrt(d2**2 + (xs+d3)**2))
## and we accumulate the interference pattern for each source into 'pattern'
pattern = 0
total_intensity = 0
## compute contribution to pattern from each source position p
for pos,ampl in np.broadcast(p0,a0):
total_intensity += ampl
pos1 = pos/float(fov)
if extent: # simulate extent by plotting 100 sources of 1/100th intensity
positions = np.arange(-1,1.01,.01)*extent/fov + pos1
else:
positions = [pos1]
# draw arrows indicating lightpath
plt.arrow(-d1, bl2+pos1, d1, -pos1, head_width=.1, fc='k', length_includes_head=True)
plt.arrow(-d1,-bl2+pos1, d1, -pos1, head_width=.1, fc='k', length_includes_head=True)
for p in positions:
# compute the pathlength difference between slits and position on screen
plt.plot(-d1, p, marker='o', ms=10*ampl, mfc='red', mew=0)
# add pathlength difference at slits
diff = xsdiff + (baseline*wavelength)*np.sin(p*fov*np.pi/180)
# accumulate interference pattern from this source
pattern = pattern + (float(ampl)/len(positions))*np.cos(2*np.pi*diff/wavelength)
maxint = maxint or total_intensity
# add fake axis to interference pattern just to make it a "wide" image
pattern_image = pattern[:,np.newaxis] + np.zeros(10)[np.newaxis,:]
plt.imshow(pattern_image, extent=(d2,d2+1,-1,1), cmap=plt.gray(), vmin=-maxint, vmax=maxint)
# make a plot of the interference pattern
plt.plot(d2+1.5+pattern/(maxint*2), xs, 'r')
plt.show()
print "visibility (Imax-Imin)/(Imax+Imin): ",(pattern.max()-pattern.min())/(total_intensity*2)
# show pattern for one source at 0
michelson(p0=[0])
# single source
interact(lambda position, intensity, baseline:
michelson(p0=[position], a0=[intensity], baseline=baseline, maxint=2),
position=(-5,5,.01),intensity=(.2,1,.01),baseline=(10,100,.01)) and None
interact(lambda position1,position2,intensity1,intensity2,baseline:
michelson(p0=[position1,position2], a0=[intensity1,intensity2], baseline=baseline, maxint=2),
position1=(-5,5,.01), position2=(-5,5,.01), intensity1=(.2,1,.01), intensity2=(.2,1,.01),
baseline=(10,100,.01)) and None
arcsec = 1/3600.
interact(lambda extent_arcsec, baseline:
michelson(p0=[0], a0=[1], extent=extent_arcsec*arcsec, maxint=1,
baseline=baseline,fov=1*arcsec),
extent_arcsec=(0,0.1,0.001),
baseline=(1e+4,1e+7,1e+4)
) and None
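# A back-of-the-envelope version of the historical result, my own addition using
# commonly quoted numbers rather than anything computed in this notebook: for a uniform
# disk the fringe visibility first falls to zero at a baseline of about 1.22*lambda/theta,
# so the baseline at which Betelgeuse's fringes vanished implies its angular diameter.
lam = 575e-9     # assumed observing wavelength in metres
b_null = 3.07    # commonly quoted baseline (metres) at which the fringes first vanished
theta_rad = 1.22 * lam / b_null
print("implied angular diameter: %.3f arcsec" % (theta_rad * 180 / np.pi * 3600))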
<END_TASK> |